Apple is rebuilding Maps from the ground up

I’m not sure if you’re aware, but the launch of Apple Maps went poorly. After a rough first impression, an apology from the CEO, several years of patching holes with data partnerships and some glimmers of light with long-awaited transit directions and improvements in business, parking and place data, Apple Maps is still not where it needs to be to be considered a world-class service.

Maps needs fixing.

Apple, it turns out, is aware of this, so it’s rebuilding the maps part of Maps.

It’s doing this by using first-party data gathered by iPhones with a privacy-first methodology and its own fleet of cars packed with sensors and cameras. The new product will launch in San Francisco and the Bay Area with the next iOS 12 Beta and will cover Northern California by fall.

Every version of iOS will eventually get the updated maps. They will be more responsive to changes in roadways and construction, more visually rich depending on the context they’re viewed in, and will feature more detailed ground cover, foliage, pools, pedestrian pathways and more.

This is nothing less than a full reset of Maps, and it’s been four years in the making; that’s when Apple began to develop its new data-gathering systems. Eventually, Apple will no longer rely on third-party data to provide the basis for its maps, which has been one of its major pitfalls from the beginning.

“Since we introduced this six years ago — we won’t rehash all the issues we’ve had when we introduced it — we’ve done a huge investment in getting the map up to par,” says Apple SVP Eddy Cue, who now owns Maps, in an interview last week. “When we launched, a lot of it was all about directions and getting to a certain place. Finding the place and getting directions to that place. We’ve done a huge investment of making millions of changes, adding millions of locations, updating the map and changing the map more frequently. All of those things over the past six years.”

But, Cue says, Apple has room to improve on the quality of Maps, something that most users would agree on, even with recent advancements.

“We wanted to take this to the next level,” says Cue. “We have been working on trying to create what we hope is going to be the best map app in the world, taking it to the next step. That is building all of our own map data from the ground up.”

In addition to Cue, I spoke to Apple VP Patrice Gautier and over a dozen Apple Maps team members at its mapping headquarters in California this week about the effort to rebuild Maps, and to do it in a way that aligns with Apple’s very public stance on user privacy.

If, like me, you’re wondering whether Apple thought of building its own maps from scratch before it launched Maps, the answer is yes. At the time, there was a choice to be made about whether or not it wanted to be in the business of Maps at all. Given that the future of mobile devices was becoming very clear, it knew that mapping would be at the core of nearly every aspect of its devices from photos to directions to location services provided to apps. Decision made, Apple plowed ahead, building a product that relied on a patchwork of data from partners like TomTom, OpenStreetMap and other geo data brokers. The result was underwhelming.

Almost immediately after Apple launched Maps, it realized that it was going to need help and it signed on a bunch of additional data providers to fill the gaps in location, base map, point-of-interest and business data.

It wasn’t enough.

“We decided to do this just over four years ago. We said, ‘Where do we want to take Maps? What are the things that we want to do in Maps?’ We realized that, given what we wanted to do and where we wanted to take it, we needed to do this ourselves,” says Cue.

Because Maps is so core to so many functions, success wasn’t tied to just one of them. Maps needed to be great at transit, driving and walking — but also as a utility used by apps for location services and other functions.

Cue says that Apple needed to own all of the data that goes into making a map, and to control it from a quality as well as a privacy perspective.

There’s also the matter of corrections, updates and changes, which enter a long loop from submission to validation to publication when you’re dealing with external partners. The Maps team needed to be able to correct roads, pathways and other changing features in days or less, not months. Not to mention the potential competitive advantage of building and updating traffic data from hundreds of millions of iPhones, rather than relying on partner data.

Cue points to the proliferation of devices running iOS, now numbering in the hundreds of millions, as a deciding factor in shifting its process.

“We felt like because the shift to devices had happened — building a map today in the way that we were traditionally doing it, the way that it was being done — we could improve things significantly, and improve them in different ways,” he says. “One is more accuracy. Two is being able to update the map faster based on the data and the things that we’re seeing, as opposed to driving again or getting the information where the customer’s proactively telling us. What if we could actually see it before all of those things?”

I query him on the rapidity of Maps updates, and whether this new map philosophy means faster changes for users.

“The truth is that Maps needs to be [updated more], and even are today,” says Cue. “We’ll be doing this even more with our new maps, [with] the ability to change the map real-time and often. We do that every day today. This is expanding us to allow us to do it across everything in the map. Today, there’s certain things that take longer to change.

“For example, a road network is something that takes a much longer time to change currently. In the new map infrastructure, we can change that relatively quickly. If a new road opens up, immediately we can see that and make that change very, very quickly around it. It’s much, much more rapid to do changes in the new map environment.”

So Apple created a new effort to begin generating its own base maps, the very lowest building block of any really good mapping system. After that, it would begin layering on living location data, high-resolution satellite imagery and brand-new, intensely high-resolution image data gathered from its ground vehicles, until it had what it felt was a ‘best in class’ mapping product.

There is really only one big company on Earth that owns an entire map stack from the ground up: Google.

Apple knew it needed to be the other one. Enter the vans.

Apple vans spotted

Though the overall project started earlier, the first glimpse most folks had of Apple’s renewed efforts to build the best Maps product was the vans that started appearing on the roads in 2015 with ‘Apple Maps’ signs on the side. Capped with sensors and cameras, these vans popped up in various cities and sparked rampant discussion and speculation.

The new Apple Maps will be the first time the data collected by these vans is actually used to construct and inform its maps. This is their coming out party.

Some people have commented that Apple’s rigs look more robust than the simple GPS + Camera arrangements on other mapping vehicles — going so far as to say they look more along the lines of something that could be used in autonomous vehicle training.

Apple isn’t commenting on autonomous vehicles, but there’s a reason the arrays look more advanced: they are.

Earlier this week I took a ride in one of the vans as it ran a sample route to gather the kind of data that would go into building the new maps. Here’s what’s inside.

In addition to a beefed-up GPS rig on the roof, four LiDAR arrays mounted at the corners and eight cameras shooting overlapping high-resolution images, there’s also the standard physical measuring device attached to a rear wheel that allows for precise tracking of distance and image capture. In the rear there is a surprising lack of bulky equipment. Instead, it’s a straightforward Mac Pro bolted to the floor, attached to an array of solid-state drives for storage. A single USB cable routes up to the dashboard, where the actual mapping-capture software runs on an iPad.

While mapping, a driver…drives, and an operator takes care of the route, ensuring that the assigned coverage area is fully driven and monitoring image capture. Each drive captures thousands of images as well as a full point cloud (a 3D map of space defined by dots that represent surfaces) and GPS data. I later got to view the raw data presented in 3D, and it absolutely looks like the quality of data you would need to begin training autonomous vehicles.

More on why Apple needs this level of data detail later.

As the images and data are captured, they are encrypted on the fly and recorded onto the SSDs. Once full, the SSDs are pulled out, replaced and packed into a case, which is delivered to Apple’s data center, where a suite of software eliminates private information like faces and license plates from the images. From the moment of capture to the moment they’re sanitized, the images are encrypted, with one key held in the van and the other key held in the data center. Technicians and software further down the mapping pipeline never see unsanitized data.
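
Apple hasn’t published the scheme’s details, but the description maps onto standard envelope encryption, with the public key riding in the van and the private key staying in the data center. Here’s a minimal sketch of that idea in Python; the algorithms, key sizes and function names are my assumptions, not Apple’s:

```python
# Minimal sketch of a split-key (envelope encryption) capture pipeline.
# Assumption: public key in the van, private key in the data center;
# Apple has not confirmed the actual algorithms or parameters.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Generated once; only the public half is ever loaded onto a van.
datacenter_private = rsa.generate_private_key(public_exponent=65537, key_size=3072)
van_public = datacenter_private.public_key()

OAEP = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

def encrypt_in_van(image_bytes: bytes):
    """Encrypt one captured image on the fly, before it is written to the SSD."""
    session_key = AESGCM.generate_key(bit_length=256)  # fresh key per image
    nonce = os.urandom(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, image_bytes, None)
    # Seal the session key with the data center's public key: nothing in the
    # van can decrypt what the van has already recorded.
    wrapped_key = van_public.encrypt(session_key, OAEP)
    return wrapped_key, nonce, ciphertext

def decrypt_in_datacenter(wrapped_key, nonce, ciphertext) -> bytes:
    """Runs only inside the sanitization pipeline, next to the private key."""
    session_key = datacenter_private.decrypt(wrapped_key, OAEP)
    return AESGCM(session_key).decrypt(nonce, ciphertext, None)
```

The point of a construction like this is that a stolen SSD, or even a compromised van, yields nothing readable; decryption is only possible where the sanitization software runs.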

This is just one element of Apple’s focus on the privacy of the data it is using in the new Maps.

Probe data and privacy

In every conversation I have with members of the team throughout the day, privacy is brought up and emphasized. This is obviously by design, as Apple wants to impress upon me as a journalist that it’s taking privacy very seriously indeed, but it doesn’t change the fact that privacy is evidently built in from the ground up, and I could not find a false note in any of the technical claims or the conversations I had.

Indeed, from the data security folks to the people whose job it is to actually make the maps work well, the constant refrain is that Apple does not feel it is being held back in any way by declining to hoover up, store and parse every piece of customer-rich data it can.

The consistent message is that the team feels it can deliver a high quality navigation, location and mapping product without the directly personal data used by other platforms.

“We specifically don’t collect data, even from point A to point B,” notes Cue. “We collect data — when we do it — in an anonymous fashion, in subsections of the whole, so we couldn’t even say that there is a person that went from point A to point B. We’re collecting the segments of it. As you can imagine, that’s always been a key part of doing this. Honestly, we don’t think it buys us anything [to collect more]. We’re not losing any features or capabilities by doing this.”

The segments he is referring to are sliced out of any given person’s navigation session. Neither the beginning nor the end of any trip is ever transmitted to Apple. Rotating identifiers, not personal information, are assigned to any data or requests sent to Apple, and it augments the ‘ground truth’ data provided by its own mapping vehicles with this ‘probe data’ sent back from iPhones.

Because only random segments of any person’s drive are ever sent, and that data is completely anonymized, there is no way to tell whether any set of segments came from a single individual’s trip. The local system signs the IDs, and only the device knows which user an ID refers to. Apple is working very hard here to know nothing about its users. This kind of privacy can’t be added on at the end; it has to be woven in at the ground level.
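
Apple didn’t share code or parameters, but the mechanics it describes suggest on-device slicing along these lines; the trim size, segment length and sampling rate below are invented for illustration:

```python
# Toy sketch of trip segmentation with rotating identifiers (parameters assumed).
import secrets

def anonymize_trip(samples, trim=20, segment_len=15, keep_one_in=3):
    """samples: ordered (lat, lon, timestamp) points from one navigation session.

    Returns short slices, each with an independent rotating ID. The trip's
    endpoints, the user's identity and any linkage between slices stay on device.
    """
    core = samples[trim:len(samples) - trim]      # never transmit the endpoints
    segments = []
    for i in range(0, len(core) - segment_len + 1, segment_len):
        if secrets.randbelow(keep_one_in) != 0:   # send only a random subset
            continue
        segments.append({
            "id": secrets.token_hex(8),           # fresh ID, no user or session ID
            "points": core[i:i + segment_len],
        })
    return segments
```

Because each slice carries a fresh random identifier and only a random subset is transmitted, the server couldn’t reassemble a trip even if it wanted to.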

Because Apple’s business model does not rely on serving you, say, an ad for a Chevron on your route, it doesn’t even need to tie advertising identifiers to users.

Any personalization or Siri requests are all handled on-board by the iOS device’s processor. So if you get a drive notification that tells you it’s time to leave for your commute, that’s learned, remembered and delivered locally, not from Apple’s servers.

That’s not new, but it’s important to note given the new thing to take away here: Apple is flipping on the power of having millions of iPhones passively and actively improving its mapping data in real time.

In short: traffic, real-time road conditions, road systems, new construction and changes in pedestrian walkways are about to get a lot better in Apple Maps.

The secret sauce here is what Apple calls probe data: essentially, little slices of vector data that represent direction and speed, transmitted back to Apple completely anonymized, with no way to tie them to a specific user or even to any given trip. Instead of tracking individuals, it’s sipping a tiny amount of data from millions of users, giving it a holistic, real-time picture without compromising user privacy.

If you’re driving, walking or cycling, your iPhone can already tell. Now, if it knows you’re driving, it can also send relevant traffic and routing data in these anonymous slivers to improve the entire service. This only happens if your Maps app has been active: say, you check the map or look for directions. If you’re actively using your GPS for walking or driving, the updates are more precise and can help with improvements like charting new pedestrian paths through parks, building out the map’s overall quality.
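
On the server side, slivers like these are only useful in aggregate. Here’s a rough sketch of how anonymous speed readings could be folded into a live traffic estimate; the map-matching helper and thresholds are assumptions, not Apple’s published design:

```python
# Illustrative aggregation of anonymous probe slivers into per-road speeds.
from collections import defaultdict
from statistics import median

def live_speeds(probe_segments, match_to_edge, min_samples=5):
    """probe_segments: dicts carrying only position, heading and speed."""
    by_edge = defaultdict(list)
    for seg in probe_segments:
        # Map-match the sliver to a road-graph edge (helper assumed).
        edge = match_to_edge(seg["lat"], seg["lon"], seg["heading"])
        if edge is not None:
            by_edge[edge].append(seg["speed_mps"])
    # The median resists outliers like a briefly stopped or parked vehicle,
    # and requiring several samples keeps any lone driver unidentifiable.
    return {edge: median(v) for edge, v in by_edge.items() if len(v) >= min_samples}
```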

All of this, of course, is governed by whether you’ve opted into Location Services, and it can be toggled off using the Maps location toggle in the Privacy section of Settings.

Apple says that this will have a near-zero effect on battery life or data usage, because you’re already using the maps features when any probe data is shared, and the sharing draws only a small fraction of the power those activities consume.

From the point cloud on up

But maps cannot live on ground truth and mobile data alone. Apple is also gathering new high-resolution satellite data to combine with its ground-truth data for a solid base map. It’s then layering that imagery on top to better determine foliage, pathways, sports facilities, building shapes and more.

After the downstream data has been cleaned of license plates and faces, it gets run through a bunch of computer vision programming to pull out addresses, street signs and other points of interest. These are cross-referenced with publicly available data, like addresses held by the city, and with new construction of neighborhoods or roadways that comes from city planning departments.

But one of the special-sauce bits Apple is adding to the mix of mapping tools is a full-on point cloud that maps the world around the van in 3D. This opens up all kinds of opportunities to better understand which objects are street signs (a retro-reflective rectangular object about 15 feet off the ground? Probably a street sign), stop signs or speed limit signs.
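
That parenthetical is, in effect, a classification rule over point-cloud features. A toy version of such a rule, with the thresholds invented for illustration:

```python
# Toy street-sign heuristic over clustered LiDAR returns (thresholds assumed).
from dataclasses import dataclass

@dataclass
class Cluster:
    """A blob of LiDAR returns segmented out of the point cloud."""
    height_m: float       # centroid height above the road surface
    width_m: float
    depth_m: float
    reflectivity: float   # mean return intensity, normalized to 0..1

def looks_like_street_sign(c: Cluster) -> bool:
    """Retro-reflective, thin, roughly rectangular plate a few meters up."""
    return (
        c.reflectivity > 0.8        # sign coatings bounce the laser straight back
        and c.depth_m < 0.05        # a flat plate, not a pole or a bush
        and 0.3 < c.width_m < 2.0   # plausible sign dimensions
        and 3.0 < c.height_m < 6.0  # roughly the "15 feet" in the example above
    )
```

A production system would more likely learn such boundaries from labeled data than hand-code them, but the feature set is the same.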

It seems like it could also enable positioning of navigation arrows in 3D space for AR navigation, but Apple declined to comment on ‘any future plans’ for such things.

Apple also uses semantic segmentation and Deep Lambertian Networks to analyze the point cloud, coupled with the image data captured by the car and by high-resolution satellites. This allows 3D identification of objects, signs, lanes of traffic and buildings, and their separation into categories that can be highlighted for easy discovery.
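
Apple didn’t detail the pipeline beyond those names, but the general pattern is that a segmentation network labels each camera pixel, and those labels are then projected onto the 3D points. A loose sketch, with the class list and camera-projection helper assumed:

```python
# Loose sketch of fusing per-pixel semantic labels with the point cloud.
import numpy as np

# Assumed class IDs; Apple's actual taxonomy is not public.
CLASSES = {0: "road", 1: "building", 2: "sign", 3: "lane_marking", 4: "vegetation"}

def label_points(points_xyz, label_image, project):
    """Tag each LiDAR point with the class of the camera pixel it lands on.

    points_xyz:  (N, 3) array of points in the vehicle frame
    label_image: (H, W) array of class IDs from a segmentation network
    project:     camera model mapping a 3D point to (row, col), or None if off-image
    """
    labels = []
    for point in points_xyz:
        pixel = project(point)
        labels.append(
            CLASSES.get(int(label_image[pixel]), "unknown") if pixel is not None else "unknown"
        )
    return np.array(labels)
```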

The coupling of high-resolution image data from car and satellite, plus the 3D point cloud, means Apple can now produce full orthographic reconstructions of city streets with textures in place. These are massively higher resolution and far easier to read, and they’re synchronized with the ‘panoramic’ images from the car, the satellite view and the raw data. These techniques are used in self-driving applications because they provide a truly holistic view of what’s going on around the car. But the ortho view can do even more for human viewers of the data, allowing them to ‘see’ through brush or tree cover that would normally obscure roads, buildings and addresses.

This is hugely important when it comes to the next step in Apple’s battle for supremely accurate and useful Maps: human editors.

Apple has had a team of tool builders working specifically on a toolkit that human editors can use to vet and parse data, street by street. The editor’s suite includes tools that let editors assign specific geometries to Flyover buildings (think Salesforce Tower’s unique ridged dome) so that they are instantly recognizable. It lets editors look at real images of street signs shot by the car, right next to 3D reconstructions of the scene and the computer vision detections of those same signs, and instantly confirm whether the detections are accurate.

Another tool corrects addresses, letting an editor quickly determine whether an address is misplaced, shift it around and snap it to the center of the right building. It also allows access points to be set, making Apple Maps smarter about the ‘last 50 feet’ of your journey. You’ve made it to the building, but which street is the entrance actually on? And how do you get into the driveway? With a couple of clicks, an editor can make that permanently visible.
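
The data-model idea behind access points is simple: a place stops being a single coordinate. The field names here are invented, but the record an editor touches plausibly looks something like this:

```python
# Hypothetical place record; field names invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LatLon:
    lat: float
    lon: float

@dataclass
class Place:
    name: str
    address: str
    rooftop: LatLon                    # centroid of the building footprint
    entrance: LatLon                   # the door, possibly on a different street
    driveway: Optional[LatLon] = None  # where a car should actually pull in

# Routing to `entrance` or `driveway` instead of the address centroid is what
# fixes the "right building, wrong street" cases Cue describes below.
```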

“When we take you to a business and that business exists, we think the precision of where we’re taking you to, from being in the right building,” says Cue. “When you look at places like San Francisco or big cities from that standpoint, you have addresses where the address name is a certain street, but really, the entrance in the building is on another street. They’ve done that because they want the better street name. Those are the kinds of things that our new Maps really is going to shine on. We’re going to make sure that we’re taking you to exactly the right place, not a place that might be really close by.”

Water, swimming pools (new to Maps entirely), sporting areas and vegetation are now more prominent and fleshed out thanks to new computer vision and satellite imagery applications. So Apple had to build editing tools for those as well.

Many hundreds of editors will be using these tools, in addition to the thousands of employees Apple already has working on maps. The tools had to be built first, because Apple is no longer relying on third parties to vet and correct issues.

And the team also had to build computer vision and machine learning tools that allow it to determine whether there are issues to be found at all.

Anonymous probe data from iPhones, visualized, looks like thousands of dots, ebbing and flowing across a web of streets and walkways, like a luminescent web of color. At first, chaos. Then, patterns emerge. A street opens for business, and nearby vessels pump orange blood into the new artery. A flag is triggered and an editor looks to see if a new road needs a name assigned.

A new intersection is added to the web and an editor is flagged to make sure that the left turn lanes connect correctly across the overlapping layers of directional traffic. This has the added benefit of massively improved lane guidance in the new Apple Maps.
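
The flag-raising itself is a change-detection problem: probe traffic showing up where the road graph says there is no road. A minimal sketch, with the helper functions and threshold assumed:

```python
# Sketch of flagging candidate new roads from off-network probe activity.
from collections import Counter

def flag_new_road_candidates(probe_points, snap_to_road, to_tile, min_hits=200):
    """Queue map tiles with sustained off-network traffic for human review.

    snap_to_road: nearest known road edge within tolerance, or None (assumed helper)
    to_tile:      quantizes a (lat, lon) onto a small grid cell (assumed helper)
    """
    off_network = Counter()
    for lat, lon in probe_points:
        if snap_to_road(lat, lon) is None:   # movement where the map has no road
            off_network[to_tile(lat, lon)] += 1
    # Sustained activity, not a single stray trace, is what raises the flag.
    return [tile for tile, hits in off_network.items() if hits >= min_hits]
```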

Apple is counting on this combination of human and AI flagging to allow editors to first craft base maps and then maintain them as the ever-changing biomass wreaks havoc on roadways, addresses and the occasional park.

Here there be Helvetica

Apple’s new Maps, like many other digital maps, display vastly differently depending on scale. If you’re zoomed out, you get less detail. If you zoom in, you get more. But Apple has a team of cartographers on staff who work on more cultural, regional and artistic levels to ensure that its Maps are readable, recognizable and useful.

These teams have goals that are at once concrete and a bit out there — in the best traditions of Apple pursuits that intersect the technical with the artistic.

The maps need to be usable, but they also need to fulfill cognitive goals on cultural levels that go beyond what any given user might know they need. For instance, in the US it is very common to have maps with a relatively low level of detail even at a medium zoom. In Japan, however, maps are absolutely packed with detail at the same zoom, because that increased information density is what users there expect.

This is the department of details. They’ve reconstructed replicas of hundreds of actual road signs to make sure that the shield on your navigation screen matches the one you’re seeing on the highway road sign. When it comes to public transport, Apple licensed all of the typefaces that you see on your favorite subway systems, like Helvetica for NYC. And the line numbers are in the exact same order that you’re going to see them on the platform signs.

It’s all about reducing the cognitive load that it takes to translate the physical world you have to navigate through into the digital world represented by Maps.

Bottom line

The new version of Apple Maps will be in preview next week with just the Bay Area of California going live. It will be stitched seamlessly into the ‘current’ version of Maps, but the difference in quality level should be immediately visible based on what I’ve seen so far.

Better road networks, more pedestrian information, sports areas like baseball diamonds and basketball courts, more land cover, including grass and trees, represented on the map, as well as more accurate building shapes and sizes. A map that feels more like the real world you’re actually traveling through.

Search is also being revamped to make sure that you get more relevant results (on the correct continents) than ever before. Navigation, especially pedestrian guidance, also gets a big boost. Parking areas and building details to get you the last few feet to your destination are included as well.

What you won’t see, for now, is a full visual redesign.

“You’re not going to see huge design changes on the maps,” says Cue. “We don’t want to combine those two things at the same time because it would cause a lot of confusion.”

Apple Maps is getting the long-awaited attention it really deserves. By taking full ownership of the project, Apple is committing itself to actually creating the map that users expected of it from the beginning. Maps has been a lingering shadow on the iPhone especially, where alternatives like Google Maps offer more robust feature sets that are easy to compare against the native app but impossible to access at the deep system level.

The argument has been made ad nauseam, but it’s worth saying again that if Apple thinks that mapping is important enough to own, it should own it. And that’s what it’s trying to do now.

“We don’t think there’s anybody doing this level of work that we’re doing,” adds Cue. “We haven’t announced this. We haven’t told anybody about this. It’s one of those things that we’ve been able to keep pretty much a secret. Nobody really knows about it. We’re excited to get it out there. Over the next year, we’ll be rolling it out, section by section in the US.”

Source: Mobile – TechCrunch

AT&T’s low-cost TV streaming service Watch TV goes live

AT&T’s newly announced Watch TV, a low-cost live TV streaming service announced in the wake of the AT&T / Time Warner merger, is now up and running. The company already has one over-the-top streaming service with DirecTV Now, but this one is cheaper, has some restrictions, and doesn’t include local channels or sports to keep costs down.

At $15 per month, the service undercuts the existing low-cost leader Philo by a dollar, but offers a different lineup (Fomopop has a nice channel-by-channel comparison between the two, if you’re in the market.)

Both have 25 of the same channels in their packages, including A&E, AMC, Comedy Central, Food Network, Discovery, HGTV, History and others, but AT&T Watch is missing MTV, Nickelodeon, and Travel Channel.

In total, Watch TV has over 30 live TV channels, plus 15,000+ TV shows and movies on demand, and allows you to subscribe by way of updated AT&T Wireless plans. Non-AT&T customers can subscribe for $15 per month directly.

AT&T has been monkeying around with its wireless plans to best take advantage of its Time Warner acquisition. With the new unlimited plans, it removed the previously free HBO perk and raised the entry-level plan by $5 per month, Ars Technica reported, detailing the changes that coincided with the launch of Watch TV. (Existing customers were grandfathered into free HBO.)

Instead, wireless customers on the top-tier AT&T Unlimited & More Premium plan can choose to add on another option – like HBO – for free. Other services they can opt for instead include Showtime, Starz, Amazon Music Unlimited, Pandora Premium and VRV.

The company also quietly raised its “administrative fee” for postpaid wireless customers from $0.76 to $1.99 per month, Ars noted as well, citing BTIG Research. This will bring in $800 million of incremental service revenue per year, the analyst firm said; the $1.23-per-month increase works out to roughly $15 per line per year, implying the fee is collected on somewhere around 54 million lines.

Despite the price hikes and valid concerns over AT&T’s behavior, there’s likely going to be a market for this low-cost live TV service. The company’s DirecTV Now streaming service, launched in December 2016, reached 1.46 million subscribers in April. It’s catching up to the longtime leader, Dish’s Sling TV, which debuted at CES back in January 2015 and now has 2.3 million subscribers. Other newer arrivals, like Hulu with Live TV and YouTube TV, have subscribers in the hundreds of thousands.

AT&T’s Watch TV service will be available across platforms, including iOS, Android, Apple TV, Chromecast and Amazon Fire TV/Fire TV Stick, according to the service’s website. However, it only streams in high-def on the Premium wireless plan. It also doesn’t offer perks common to other live TV services, like a cloud DVR or support for multiple simultaneous streams.

The Watch TV apps are rolling out now. Early reviews note some similarity in layout to DirecTV Now. There are no reports yet of the crashes that are common to new launches like this.

Source: Mobile – TechCrunch

India’s Cashify raises $12M for its second-hand smartphone business

Cashify, a company that buys and sells used smartphones, is the latest Indian startup to raise capital from Chinese investors after it announced a $12 million Series C round.

Chinese funds CDH Investments and Morningside led the round, which included participation from Aihuishou, a China-based startup that sells used electronics in a similar way to Cashify and has raised over $120 million. Existing investors, including Bessemer Ventures and Shunwei, also took part in the round.

This new capital takes Cashify to $19 million raised to date.

The business was started in 2013 by co-founders Mandeep Manocha (CEO), Nakul Kumar (COO) and Amit Sethi (CTO), initially as ‘ReGlobe.’ It gives consumers a fast way to sell their existing electronics; it deals mainly in smartphones but also takes laptops, consoles, TVs and tablets.

“When we began we saw a lot of transaction for phone sales moving from offline to online,” Manocha told TechCrunch in an interview. “But consumer-to-consumer [for used devices] is highly opaque on price discovery and you never know if you’re making the right decision on price and whether the transaction will take place in the timeframe.”

These days, the company estimates that the average upgrade cycle has shifted from 20 months to 12 months, and now it is doubling down.

With Cashify, sellers simply fill out some details online about their device, then Cashify dispatches a representative who comes to their house to perform diagnostic checks and gives them cash for the device that day. The startup also offers an app that automatically carries out the checks — for example, ensuring the camera, Bluetooth module and so on all work — and offers a higher cash payment, since Cashify uses fewer resources.


A sample of the Cashify Q&A for selling a device.

Beyond its website and app, Cashify gets devices from trade-in programs for Samsung, Xiaomi and Apple in India, as well as e-commerce companies like Flipkart, Amazon and Paytm Mall.

Once a used device is acquired, what happens next is interesting.

The startup has built out a network of offline merchants who specialize in selling used phones. Each phone it acquires is then sold (perhaps after minor refurbishments) to that network, so it might pop up for sale anywhere in India.

With this new money, Cashify CEO Manocha said the company will develop an online resale site that will allow anyone to buy a used phone from the company’s network. Devices sold by Cashify online will be refurbished with new parts where needed, and they’ll include a box and six-month warranty to give a better consumer experience, Manocha added.

Today, Cashify claims to handle 100,000 smartphones a month, and it is planning to grow that to 200,000 by the end of this year. Cashify said its devices are typically low-end models, ones that retail for under $300 when new. A large part of that push comes from the online site, but the startup is also enlarging its offline merchant network and working to reach more consumers who are actually selling their devices. That’s where Manocha said he sees particular value in working with Aihuishou.

Cashify is also developing other services. It recently started offering at-home repairs for customers and Manocha said that adding Chinese investors — and Aihuishou in particular — will help it with its sourcing of components for the repairs service and general refurbishments.

Cashify estimates that the used smartphone market in India will see 90 million phones sold this year, with as many as 120 million trading by 2020. That’s close to the 124 million new-phone shipments analysts estimate India saw in 2017, and, surprisingly, at higher margins.

A reseller can make 10 percent profit on a device, Manocha explained, and Cashify’s own spread — the difference between what it pays consumers and what it sells to resellers for — is typically 30-35 percent, he added. That’s more than most OEMs make, though it doesn’t take into account costs on the Cashify side, which bring that number down.

“When I sell to a reseller, the margins aren’t that exciting which is why we want to sell direct to consumers,” the Cashify CEO said.

The startup has plenty going on at home in India, but already it is considering overseas possibilities.

“We will focus on India for at least next 12 months but we have had discussions on markets that would make sense to enter,” said Manocha, explaining that the Middle East and Southeast Asia are early frontrunners.

“We are working very closely with one of the Chinese players and figuring out if we can do some business in Hong Kong because that’s the hub for second-hand phones in this part of the world,” he added.

Note: The original version of this article was updated to correct that Amit Sethi is CTO not CFO.

Source: Mobile – TechCrunch

Hands on with the Echo Dot Kids Edition

Earlier this year, Amazon introduced an Echo Dot for kids: the $80 Echo Dot Kids Edition, which comes in your choice of a red, blue, or green protective case. The idea is to market a version of Amazon’s existing Dot hardware to families by bundling it with an existing subscription service and by throwing in a few extra features, like having Alexa encourage kids to say “please” when making their demands.
The device makes sense in a couple of scenarios – for helicopter parents who want to fully lock down an Echo device before putting it in a kid’s room, and for those who were in the market for a FreeTime Unlimited subscription anyway.
I’ve been testing out an Echo Dot Kids Edition and ran into some challenges that I thought I’d share. This is not a hardware review – I’m sure you can find those elsewhere.
Music Filtering
As a parent of an 8-year-old myself, I’ve realized it’s too difficult to keep her from ever hearing bad words – especially in music, TV and movies – so I’ve just explained to her that while she will sometimes hear those words, that doesn’t mean it’s okay to say them. (We have a similar rule about art – sometimes people will be nude in paintings, but that doesn’t mean it’s okay to walk around naked all the time.)
Surprisingly, I’ve been able to establish a level of shame around adult and inappropriate content to the point that she will confess to me when she hears it on places like YouTube. She will even turn it off without my instruction! I have a good kid, I guess.

But I understand some parents will only want kids to access the sanitized version of songs – especially if their children are still in the preschool years, or have a tendency to seek out explicit content because they’re little monsters.
Amazon FreeTime would be a good option in that case, but there are some caveats.
For starters, if you plan on using the explicit language filter on songs the Echo Dot plays, then you’re stuck with Amazon Music. While the Echo Dot itself can play music from a variety of services, including on-demand offerings from Pandora and Spotify, you can’t use these services when the explicit filter is enabled as “music services that do not support this filter will be blocked,” Amazon explains.
We’re a Spotify household, so that means my child’s favorite bedtime music playlist became unavailable when we swapped out her existing Echo Dot for the Kids Edition which had the explicit filter enabled.

Above: Parent Dashboard? Where? Maybe a link would help?
You can disable the explicit filter from the Parent Dashboard, but this option is inconveniently available only via the web. When you dig around in the Alexa app – which is where you’d think these controls would be found – there’s only a FreeTime On/Off toggle switch and instructions to “Go to the Parent Dashboard to see activity, manage time limits, and add content.”
It’s not even hyperlinked!
You have to just know the dashboard’s URL is parents.amazon.com. (And not www.parents.amazon.com, by the way. That doesn’t work.)
Then to actually disable the filter, it’s several more steps.

You’ll click the gear icon next to the child’s name, click on “Echo Dot Kids Edition” under “Alexa Settings,” then click “Manage Music.” Here, you can turn the switch on or off.
If you don’t have a subscription music service, the Echo Dot Kids Edition also ships with access to ad-free kid-safe stations on iHeartRadio Family.
Whitelisting Alexa skills…well, some skills!
Another issue with the way FreeTime works with Alexa is that it’s not clear that nearly everything your child accesses on the device has to be whitelisted.
This leads to a confusing first-time user workflow.
Likely, you’ll start by browsing in the Alexa app’s Skills section or the Skills Store on the web to find some appropriate kid-friendly skills for your child to try. For example, I found and enabled a skill called “Math Facts – Math Practice for Kids.”
But when I instructed “Alexa, open Math Facts,” she responded, “I can’t do that.”
She didn’t say why.
As I hadn’t used FreeTime in quite a while, it didn’t occur to me that each Alexa skill would have to be toggled on – just like the third-party apps, videos, books and audiobooks the child has access to that didn’t ship with FreeTime Unlimited itself.
Instead, I mistakenly assumed that skills from the “Kids” section of the Skills store would just work.
Again, you’ll have to know to go to parents.amazon.com to toggle things on.
And again, the process for doing so is too many clicks deep in the user interface to be immediately obvious to newcomers. (You click the gear by the kid’s name, then “Add Content” – not “Echo Dot Kids Edition” as you might think! Then, on the “Add Content” screen, click over to the “Alexa Skills” tab and toggle on the skills you want the child to use.)

The issue with this system is that it prevents Echo Dot Kids Edition users – kids and adults alike – from discovering and enabling skills by voice. And it adds an unnecessary step by forcing parents to toggle skills on.
After all, if the parents are the ones signing in when visiting the Skills store in-app or on the web, that means they’re the ones choosing to enable the Skills, too.
And if they’re enabling a skill from Kids section, one would assume it’s for their kids to use on their device!
The problem, largely, is that FreeTime isn’t really integrated with the Alexa app. All of this – from explicit content filters to whitelisting skills to turning on or off calling, messaging and drop-ins – should be managed from within the Alexa app, not from a separate website.
Amazon obviously did minimal integration work in order to sell parents a pricier Echo Dot.

Making matters more confusing, Amazon has partnered with some kids’ skill publishers, similar to how it partnered with other content providers for apps and movies. That means there’s a list of skills that don’t appear in your Parent Dashboard and also don’t require whitelisting.
This includes Disney Stories, Loud House Challenge, No Way That’s True, Funny Fill In, SpongeBob Challenge, Weird But True, Name That Animal, This or That, WordWorld, Ben 10, Classroom Thirteen, Batman Adventures, and Climb the Beanstalk.
But it’s confusing that you can immediately use these skills, and not others clearly meant for kids. You end up feeling like you did something wrong when some skills don’t work, before you figure out this whole whitelisting system.
In addition, it’s not clear that these “Premium” skills come with the FreeTime subscription – most are not available in the Skills store. If your FreeTime subscription expires, it seems you’ll lose access to these, as well.
Overall, the FreeTime experience for Echo feels disjointed, and there’s a steep learning curve for new users.

Your FreeTime Unlimited 1-year Subscription
It’s also frustrating that there’s no information on the FreeTime Parent Dashboard about the nature of your subscription.
You can’t confirm that you’re currently subscribed to the paid product known as FreeTime Unlimited. You can’t see when that subscription expires, or when your first free year is up. It’s unclear if you’ll just be charged, or when that will take place. And there’s no toggle to turn the subscription off if you decide you no longer need it.
Instead, you can only “modify” which credit card you use with Amazon’s 1-click. Seriously. That’s it.

Above: want to manage your subscription?
Below: hahaha, good luck with that!

I still don’t know where to turn this subscription off – I guess the option to disable it doesn’t even appear until your free year is up? (Even clicking on “FreeTime Unlimited” from Amazon.com’s subscription management page routes you back to this useless Parent dashboard page for managing your 1-Click settings.)
So, ask me in a year, maybe?
That said, if you are in the market for both a FreeTime Unlimited subscription and an Echo Dot, you may as well buy the Kids Edition.
FreeTime Unlimited works on Fire tablets, Android devices, Kindle, and as of this month, iOS devices, providing access to over 15,000 kid-safe apps, games, videos, books and educational content. On Amazon devices, parents can also set screen time limits and educational goals.
The service by itself is $2.99 per month for Prime members (for one profile) or $4.99 per month for non-members. It’s more if you buy the Family subscription. Meanwhile, the regular 2nd-gen Echo Dot is currently $49.99. So if you bought these things separately as a Prime member, you’re basically looking at $50 for the Dot plus about $36 a year ($2.99 x 12 = $35.88) for FreeTime Unlimited.
The Echo Dot Kids Edition comes with one year of FreeTime Unlimited and is $79.99, so you’re saving a tiny bit there: about $6 in the first year. Plus, you can always turn FreeTime off on the device if you’d rather use the kids Echo Dot as a regular Echo Dot, while still getting a free year of FreeTime for another device, like the kid’s iPad.
Still, watch out because Echo Dot often goes on sale – and probably will be on sale again for Prime Day this summer. Depending on the price cut it gets, it may not be worth it to buy the bundle.

Other Perks
There are other perks that Amazon tries to use to sell the Echo Dot Kids Edition to families, but the most notable is “Magic Word.”
This feature turns on when FreeTime is enabled, and thanks kids for saying “please” when they speak to Alexa. Yes, that seems like a small thing, but it was something a lot of parents were upset about; they thought kids were learning bad manners by barking commands at Alexa.
I don’t know about that. My kid seems to understand that we say “please” and “thank you” to people, but Alexa doesn’t get her feelings hurt by being told to “play Taylor Swift.” But to each their own!
This feature will thrill some parents, I’m sure.
Parents can also use FreeTime to pause the device or configure a bedtime so kids don’t stay up talking to Alexa, but honestly, LET ‘EM.

It’s far better than when they stall bedtime by badgering you for that extra glass of water, one more blanket, turn on that light, now crack the door…a little more…a little less…Honestly, escaping the kid’s room at bedtime is an art form.
If Alexa can keep them busy and less afraid of the dark, I’m calling it a win.
FreeTime with the Echo Dot Kids Edition also lets you set up “Character Alarms” – meaning kids can configure Alexa to wake them up with an alarm featuring characters from brands like Disney and Nickelodeon.

This is hilarious to me.
Because if you have a kid in the preschool to tween age range who actually requires an alarm clock to wake up in the morning instead of getting up at the crack of dawn (or maybe one who has gone through years of training so they DON’T ALSO WAKE YOU UP AT THE CRACK OF DAWN OH MY GOD) – then, I guess, um, enjoy character alarms?
I’m sorry, let me stop laughing….Hold on.
I’m sure somebody needs this.
Sorry for laughing. But please explain how you’ve taught your children to sleep in? Do they go to bed at a decent hour too? No seriously, email me. I have no idea.
The Echo Dot Kids Edition can also work as a household intercom, but so do regular Echo devices.
You can turn off voice purchasing on the Kids Edition, but you can do that on regular devices, too (despite what Amazon’s comparison chart says).
Plus, kids can now control smart home devices with the Echo Dot Kids Edition – a feature that shamefully wasn’t available at launch, but is now.
And that cute protective case? Well, a regular Echo Dot is actually pretty sturdy. We’ve dropped ours probably a dozen times from dresser to floor (uncarpeted!) with no issues.
I like how Amazon tries to sell the case, though:

I guess if your kid plans to do CHEMISTRY EXPERIMENTS by the Echo Dot, you may need this.
In reality, the case is just cute – and can help the Echo better match the kid’s room.
The Echo Dot Kids Edition, overall, is not a must-have device. You’ll have more flexibility with a regular Echo and a little old-school parenting.

Source: Gadgets – TechCrunch

Researchers train bipedal robots to step lightly over rough terrain

Researchers at the Hybrid Robotics Group at UC Berkeley and CMU are hard at work making sure their robots don’t fall over when tiptoeing through rough terrain. Using machine learning and ATRIAS robots, the teams are able to “teach” robots to traverse stepping stones they’ve never seen before.
Their robots, described here, are unique in that they are bipedal and use a mixture of balance and jumping to ensure they don’t tip off the blocks.
“What’s different about our methods is that they allow for dynamic walking as opposed to the slower quasi-static motions that robots tend to use,” write the researchers. “By reasoning about the nonlinearities in the dynamics of the system and by taking advantage of recent advances in optimal and nonlinear control technology, we can specify control objectives and desired robot behaviors in a simple and compact form while providing formal stability and safety guarantees. This means our robots can walk over discrete terrain without slipping or falling over, backed by some neat math and some cool experimental videos.”
The robots are currently “blind” and can’t use visual input to plan their next move. However, with a robot called CASSIE, they will be able to see and feel the stones as they hop along, ensuring that they don’t tip over in the heat of fun… or battle.

Source: Gadgets – TechCrunch

Instagram Stories now lets its 400M users add soundtracks

The right music can make a boring photo or video epic, so Instagram is equipping users with a way to add popular songs to their Stories. TechCrunch had the scoop on the music feature’s prototype in early May, and now it’s launching to iOS and Android users in six countries, including the U.S. Thanks to Facebook’s recent deals with record labels, users will be able to choose from thousands of songs from artists including Bruno Mars, Dua Lipa, Calvin Harris and Guns N’ Roses. The launch could make Instagram Stories more fun to post and watch in a way that copyrights won’t allow on Snapchat, while giving the app a way to compete with tween favorite Musical.ly.

And just a week after revealing its app has 1 billion monthly users, the company also announced today that Instagram Stories has 400 million daily users, up from 300 million in November and 250 million a year ago. That means Instagram Stories is growing about six times faster than Snapchat’s whole app, which only added 13 million daily users over the six months of Q4 2017 and Q1 2018 to reach 191 million.

Snapchat’s growth rate fell to its slowest pace ever last quarter amidst a despised redesign, while Instagram Stories has steadily added unique and popular features like Highlights, Superzoom and resharing of public feed posts. Instagram said last September that it had 500 million total daily users, so it’s likely that a majority of its community is now hooked on the Stories format Snapchat invented.

Instagram Stories music

“Now you can add a soundtrack to your story that fits any moment and helps you express how you’re feeling,” Instagram writes. To access the new music feature, users will be able to choose a special song sticker after they shoot a photo or video. They can search for a specific song or artist, or browse by mood, genre or what’s popular. Once they select a song, they can pick the specific snippet they want to have accompany their content. Alternatively, iOS users can switch to the Music shutter mode in the Stories camera to pick a song before they capture a photo or video, so they can sync up their actions to the music. That will come to Android eventually, and the whole feature will roll out to more countries soon following today’s launch in Australia, New Zealand, France, Germany, Sweden, the UK and the U.S. [Correction: The feature is launching in version 51 of Instagram, not in 51 countries.]

When friends watch a music-equipped Story, the song will play automatically. They’ll also be able to tap on the sticker to see artist and song title info, but for now these stickers won’t link out to a musician’s Instagram page or their presence on streaming services — though that would certainly be helpful. I’d also suggest that Instagram create deeplinks that artists can share with their fans, automatically opening the Stories camera with that song’s sticker added.

It’s easy to imagine users lip-syncing to their favorite jams, adding clashing background music for comedic effect or earnestly trying to compose something emotionally powerful. Suddenly, ’Gramming from home will be a new way for people to entertain themselves and their pals.

Instagram tells me that musicians and rights holders will be compensated for the use of their songs, but wouldn’t specify how those payments will work. Facebook secured deals with all the major record labels and many independents to pave the way for this feature. Facebook has since announced that users can also add copyrighted music soundtracks to their own videos before uploading, without them being taken down as they were before. It’s also started testing a Lip Sync Live feature with a collection of chart-topping hits.

The big question will be whether the “thousands” of songs available through today’s launch will cover what most users want to hear; otherwise, they might just be disappointed. With a few enhancements and a widened catalog, Instagram Music could become a powerful way for artists to go viral. All those shaky phone-camera clips are going to start looking more like indie music videos you’ll watch till the end.

Source: Mobile – TechCrunch