‘Jackrabbot 2’ takes to the sidewalks to learn how humans navigate politely

Autonomous vehicles and robots have to know how to get from A to B without hitting obstacles or pedestrians — but how can they do so politely and without disturbing nearby humans? That’s what Stanford’s Jackrabbot project aims to learn, and now a redesigned robot will be cruising campus learning the subtleties of humans negotiating one another’s personal space.
“There are many behaviors that we humans subconsciously follow – when I’m walking through crowds, I maintain personal distance or, if I’m talking with you, someone wouldn’t go between us and interrupt,” said grad student Ashwini Pokle in a Stanford News release. “We’re working on these deep learning algorithms so that the robot can adapt these behaviors and be more polite to people.”
Of course there are practical applications pertaining to last mile problems and robotic delivery as well. What do you do if someone stops in front of you? What if there’s a group running up behind? Experience is the best teacher, as usual.
The first robot was put to work in 2016 and has been building a model of how humans (well, mostly undergrads) walk around safely, avoiding one another while taking efficient paths, and signaling what they’re doing the whole time. But technology has advanced so quickly that a new iteration was called for.
The JackRabbot project team with JackRabbot 2 (from left to right): Patrick Goebel, Noriaki Hirose, Tin Tin Wisniewski, Amir Sadeghian, Alan Federman, Silvio Savarese, Roberto Martín-Martín, Pin Pin Tea-mangkornpan and Ashwini Pokle
The new robot has a vastly improved sensor suite compared to its predecessor: two Velodyne lidar units giving 360-degree coverage, plus a set of stereo cameras making up its neck that give it another depth-sensing 360-degree view. The cameras and sensors on its head can also be pointed wherever needed, of course, just like ours. All this imagery is collated by a pair of new GPUs in its base/body.
Amir Sadeghian, one of the researchers, said this makes Jackrabbot 2 “one of the most powerful robots of its size that has ever been built.”

This will allow the robot to sense human motion with a much greater degree of precision than before, and also operate more safely. It will also give the researchers a chance to see how the movement models created by the previous robot integrate with this new imagery.
The other major addition is a totally normal-looking arm that Jackrabbot 2 can use to gesture to others. After all, we do it, right? When it’s unclear who should enter a door first or what side of a path they should take, a wave of the hand is all it takes to clear things up. Usually. Hopefully this kinked little gripper accomplishes the same thing.
Jackrabbot 2 can zoom around for several hours at a time, Sadeghian said. “At this stage of the project for safety we have a human with a safety switch accompanying the robot, but the robot is able to navigate in a fully autonomous way.”
Having working knowledge of how people use the space around them and how to predict their movements will be useful to startups like Kiwi, Starship, and Marble. The first time a delivery robot smacks into someone’s legs is the last time they consider ordering something via one.

Source: Gadgets – techcrunch

The Google Assistant is now bilingual 

The Google Assistant just got more useful for multilingual families. Starting today, you’ll be able to set up two languages in the Google Home app and the Assistant on your phone and Google Home will then happily react to your commands in both English and Spanish, for example.
Today’s announcement doesn’t exactly come as a surprise, given that Google announced at its I/O developer conference earlier this year that it was working on this feature. It’s nice to see that this year, Google is rolling out its I/O announcements well before next year’s event. That hasn’t always been the case in the past.
Currently, the Assistant is only bilingual and it still has a few languages to learn. But for the time being, you’ll be able to set up any language pair that includes English, German, French, Spanish, Italian and Japanese. More pairs are coming in the future and Google also says it is working on trilingual support, too.
Google tells me this feature will work with all Assistant surfaces that support the languages you have selected. That’s basically all phones and smart speakers with the Assistant, but not the new smart displays, as they only support English right now.

While this may sound like an easy feature to implement, Google notes this was a multi-year effort. To build a system like this, you have to be able to identify multiple languages, understand them and then make sure you present the right experience to the user. And you have to do all of this within a few seconds.
Google says its language identification model (LangID) can now distinguish between 2,000 language pairs. With that in place, the company’s researchers then had to build a system that could turn spoken queries into actionable results in all supported languages. “When the user stops speaking, the model has not only determined what language was being spoken, but also what was said,” Google’s VP Johan Schalkwyk and Google Speech engineer Lopez Moreno write in today’s announcement. “Of course, this process requires a sophisticated architecture that comes with an increased processing cost and the possibility of introducing unnecessary latency.”
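Google hasn’t published the internals beyond that description, but the flow it outlines — work out which language was spoken, then hand the utterance to the matching experience — can be illustrated with a deliberately tiny sketch. The hint-word lists and handler functions below are invented stand-ins for LangID and the real recognition pipelines, not anything Google has described.

```python
# Toy illustration of "identify the language, then answer in it."
# This is NOT Google's LangID model; the word lists and handlers are invented.

EN_HINTS = {"the", "what", "is", "turn", "on", "off", "weather", "play"}
ES_HINTS = {"el", "la", "que", "qué", "es", "enciende", "apaga", "tiempo", "pon"}

def identify_language(utterance: str) -> str:
    """Score an utterance against tiny hint-word lists and pick a language."""
    words = set(utterance.lower().split())
    return "es" if len(words & ES_HINTS) > len(words & EN_HINTS) else "en"

def handle_english(utterance: str) -> str:
    return f"(English pipeline) handling: {utterance}"

def handle_spanish(utterance: str) -> str:
    return f"(Spanish pipeline) handling: {utterance}"

HANDLERS = {"en": handle_english, "es": handle_spanish}

def assistant(utterance: str) -> str:
    # The real system runs language ID and speech recognition together under a
    # tight latency budget; here we simply branch on the toy language guess.
    return HANDLERS[identify_language(utterance)](utterance)

if __name__ == "__main__":
    print(assistant("what is the weather today"))
    print(assistant("enciende la luz de la cocina"))
```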

If you are in Germany, France or the U.K., you’ll now also be able to use the bilingual assistant on a Google Home Max. That high-end version of the Google Home family is going on sale in those countries today.
In addition, Google today announced that a number of new devices will soon support the Assistant, including the tado° thermostats, a number of new security and smart home hubs (though not, of course, Amazon’s own Ring Alarm), and smart bulbs and appliances such as the iRobot Roomba 980, 896 and 676 vacuums. Who wants to have to push a button on a vacuum, after all.

Source: Gadgets – techcrunch

Autonomous retail startup Inokyo’s first store feels like stealing


Inokyo wants to be the indie Amazon Go. It’s just launched its prototype cashierless autonomous retail store. Cameras track what you grab from shelves, and with a single QR scan of its app on your way in and out of the store, you’re charged for what you got.

Inokyo’s first store is now open on Mountain View’s Castro Street selling an array of bougie kombuchas, snacks, protein powders and bath products. It’s sparse and a bit confusing, but it offers a glimpse of what might be a commonplace shopping experience five years from now. You can see for yourself in our demo video below:

“Cashierless stores will have the same level of impact on retail as self-driving cars will have on transportation,” Inokyo co-founder Tony Francis tells me. “This is the future of retail. It’s inevitable that stores will become increasingly autonomous.”

Inokyo (rhymes with Tokyo) is now accepting signups for beta customers who want early access to its Mountain View store. The goal is to collect enough data to dictate the future product array and business model. Inokyo is deciding whether it wants to sell its technology as a service to other retail stores, run its own stores or work with brands to improve their product’s positioning based on in-store sensor data on customer behavior.

“We knew that building this technology in a lab somewhere wouldn’t yield a successful product,” says Francis. “Our hypothesis here is that whoever ships first, learns in the real world and iterates the fastest on this technology will be the ones to make these stores ubiquitous.” Inokyo might never rise into a retail giant ready to compete with Amazon and Whole Foods. But its tech could even the playing field, equipping smaller businesses with the tools to keep tech giants from having a monopoly on autonomous shopping experiences.

It’s about what cashiers do instead

“Amazon isn’t as ahead as we assumed,” Francis remarks. He and his co-founder Rameez Remsudeen took a trip to Seattle to see the Amazon Go store that first traded cashiers for cameras in the U.S. Still, they realized, “This experience can be magical.” The two met at Carnegie Mellon through machine learning classes before they went on to apply that knowledge at Instagram and Uber. They decided that if they jumped into autonomous retail soon enough, they could still have a say in shaping its direction.

Next week, Inokyo will graduate from Y Combinator’s accelerator that provided its initial seed funding. In six weeks during the program, they found a retail space on Mountain View’s main drag, studied customer behaviors in traditional stores, built an initial product line and developed the technology to track what users are taking off the shelves.

Here’s how the Inokyo store works. You download its app and connect a payment method, and you get a QR code that you wave in front of a little sensor as you stroll into the shop. Overhead cameras will scan your body shape and clothing without facial recognition in order to track you as you move around the store. Meanwhile, on-shelf cameras track when products are picked up or put back. Combined, knowing who’s where and what’s grabbed lets it assign the items to your cart. You scan again on your way out, and later you get a receipt detailing the charges.
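Inokyo hasn’t described how it fuses those signals, but the basic bookkeeping the paragraph above implies — attribute each shelf pick event to whichever tracked shopper was closest at that moment — can be sketched roughly as follows. The data model, coordinates and distance threshold are assumptions made for illustration, not Inokyo’s actual system.

```python
# Rough sketch of assigning shelf pick events to tracked shoppers.
# The data model and distance threshold are assumptions, not Inokyo's system.

from dataclasses import dataclass, field
from math import dist

@dataclass
class Shopper:
    shopper_id: str                      # anonymous session ID from the QR scan-in
    position: tuple[float, float]        # last known (x, y) from overhead cameras
    cart: list[str] = field(default_factory=list)

@dataclass
class PickEvent:
    product: str                         # what the shelf camera saw leave the shelf
    shelf_position: tuple[float, float]  # where that shelf sits in the store

def assign_pick(event: PickEvent, shoppers: list[Shopper], max_distance: float = 1.5) -> None:
    """Attribute a pick to the nearest tracked shopper, if one is close enough."""
    nearest = min(shoppers, key=lambda s: dist(s.position, event.shelf_position))
    if dist(nearest.position, event.shelf_position) <= max_distance:
        nearest.cart.append(event.product)

if __name__ == "__main__":
    shoppers = [Shopper("A", (2.0, 3.0)), Shopper("B", (8.0, 1.0))]
    assign_pick(PickEvent("kombucha", (2.2, 3.4)), shoppers)
    assign_pick(PickEvent("protein powder", (7.8, 1.1)), shoppers)
    print([(s.shopper_id, s.cart) for s in shoppers])
    # [('A', ['kombucha']), ('B', ['protein powder'])]
```

The hard part in practice is not this bookkeeping but keeping the person tracks themselves stable through crowds and occlusion, which is exactly the rush-hour scenario discussed below.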

Originally, Inokyo didn’t make you scan on the way out, but it got feedback that customers were scared they were actually stealing. The scan-out is more about peace of mind than engineering necessity. There is a subversive pleasure to feeling like, “well, if Inokyo didn’t catch all the stuff I chose, that’s not my problem.” And if you’re overcharged, there’s an in-app support button for getting a refund.

Inokyo co-founders (from left): Tony Francis and Rameez Remsudeen

Inokyo was accurate in what it charged me despite a few switcheroos I pulled with the products I nabbed. But there were only about three people in the room at the time. The real test for these kinds of systems is when a rush of customers floods in and the cameras have to differentiate between multiple similar-looking people. Inokyo will likely need to be more than 99 percent accurate to be more of a help than a headache. An autonomous store that constantly over- or undercharges would be more trouble than it’s worth, and patrons would just go to the nearest classic shop.

Just because autonomous retail stores will be cashier-less doesn’t mean they’ll have no staff. To maximize cost-cutting, they could just trust that people won’t loot them. However, Inokyo plans to have someone minding the shop to make sure people scan in the first place and to answer questions about the process. But there’s also an opportunity in reassigning labor from cashier duty to concierges who can recommend the best products or find the right fit for the customer. These stores will be judged by the convenience of the holistic experience, not just the tech. At the very least, a single employee might be able to handle restocking, customer support and store maintenance once freed from cashier duties.

The Amazon Go autonomous retail store in Seattle is equipped with tons of overhead cameras

While Amazon Go uses cameras in a similar way to Inokyo, it also relies on weight sensors to track items. There are plenty of other companies chasing the cashierless dream. China’s BingoBox has nearly $100 million in funding and more than 300 stores, though it uses less sophisticated RFID tags. Fellow Y Combinator startup Standard Cognition has raised $5 million to equip old-school stores with autonomous camera tech. AiFi does the same, but touts that its cameras can detect abnormal behavior that might signal someone is a shoplifter.

The store of the future seems like more and more of a sure thing. The race’s winner will be determined by who builds the most accurate tracking software, easy-to-install hardware and pleasant overall shopping flow. If this modular technology can cut costs and lines without alienating customers, we could see our local brick-and-mortars adapt quickly. The bigger question than if or even when this future arrives is what it will mean for the millions of workers who make their living running the checkout lane.

Source: Mobile – TechCrunch

NASA’s Parker Solar Probe launches to ‘touch the sun’

Update: Launch successful!
NASA’s ambitious mission to go closer to the Sun than ever before is set to launch in the small hours between Friday and Saturday — at 3:53 AM Eastern from Kennedy Space Center in Florida, to be precise. The Parker Solar Probe, after a handful of gravity assists and preliminary orbits, will enter a stable orbit around the enormous nuclear fireball that gives us all life and sample its radiation from less than 4 million miles away. Believe me, you don’t want to get much closer than that.
If you’re up late tonight (technically tomorrow morning), you can watch the launch live on NASA’s stream.
This is the first mission named after a living researcher, in this case Eugene Parker, who in the ’50s made a number of proposals and theories about the way that stars give off energy. He’s the guy who gave us solar wind, and his research was hugely influential in the study of the sun and other stars — but it’s only now that some of his hypotheses can be tested directly.
(Parker himself visited the craft during its construction, and will be at the launch. No doubt he is immensely proud and excited about this whole situation.)

“Directly” means going as close to the sun as technology allows — which leads us to the PSP’s first major innovation: its heat shield, or thermal protection system.

There’s one good thing to be said for the heat near the sun: it’s a dry heat. Because there’s no water vapor or gases in space to heat up, find some shade and you’ll be quite comfortable. So the probe is essentially carrying the most heavy-duty parasol ever created.
It’s a sort of carbon sandwich, with superheated carbon composite on the outside and a carbon foam core. Altogether it’s less than a foot thick, but it reduces the temperature the probe’s instruments are subjected to from 2,500 degrees Fahrenheit to 85 — actually cooler than it is in much of the U.S. right now.
Go on – it’s quite cool.
The car-sized Parker will orbit the sun and constantly rotate itself so the heat shield is facing inward and blocking the brunt of the solar radiation. The instruments mostly sit behind it in a big insulated bundle.
And such instruments! There are three major experiments or instrument sets on the probe.
WISPR (Wide-Field Imager for Parker Solar Probe) is a pair of wide-field telescopes that will watch and image the structure of the corona and solar wind. This is the kind of observation we’ve made before — but never from up close. We generally are seeing these phenomena from the neighborhood of the Earth, nearly 100 million miles away. You can imagine that cutting out 90 million miles of cosmic dust, interfering radiation and other nuisances will produce an amazingly clear picture.
SWEAP (Solar Wind Electrons Alphas and Protons investigation) looks out to the side of the craft to watch the flows of electrons as they are affected by solar wind and other factors. And on the front is the Solar Probe Cup (I suspect this is a reference to the Ray Bradbury story, “Golden Apples of the Sun”), which is exposed to the full strength of the sun’s radiation; a tiny opening allows charged particles in, and by tracking how they pass through a series of charged windows, they can sort them by type and energy.
FIELDS is another that gets the full heat of the sun. Its antennas are the ones sticking out from the sides — they need to in order to directly sample the electric field surrounding the craft. A set of “fluxgate magnetometers,” clearly a made-up name, measures the magnetic field at an incredibly high rate: two million samples per second.
They’re all powered by solar panels, which seems obvious, but actually it’s a difficult proposition to keep the panels from overloading that close to the sun. They hide behind the shield and just peek out at an oblique angle, so only a fraction of the radiation hits them.
Even then, they’ll get so hot that the team needed to implement the first-ever active water cooling system on a spacecraft. Water is pumped through the cells and back behind the shield, where it is cooled by, well, space.
The probe’s mission profile is a complicated one. After escaping the clutches of the Earth, it will swing by Venus, not to get a gravity boost, but “almost like doing a little handbrake turn,” as one official described it. It slows it down and sends it closer to the sun — and it’ll do that seven more times, each time bringing it closer and closer to the sun’s surface, ultimately arriving in a stable orbit 3.83 million miles above the surface — that’s 95 percent of the way from the Earth to the sun.
On the way it will hit a top speed of 430,000 miles per hour, which will make it the fastest spacecraft ever launched.

Parker will make 24 total passes through the corona, and during these times communication with Earth may be interrupted or impractical. If a solar cell is overheating, do you want to wait 20 minutes for a decision from NASA on whether to pull it back? No. This close to the sun even a slight miscalculation results in the reduction of the probe to a cinder, so the team has imbued it with more than the usual autonomy.
It’s covered in sensors in addition to its instruments, and an onboard AI will be empowered to make decisions to rectify anomalies. That sounds worryingly like a HAL 9000 situation, but there are no humans on board to kill, so it’s probably okay.
The mission is scheduled to last seven years, after which time the fuel used to correct the craft’s orbit and orientation is expected to run out. At that point it will continue as long as it can before drift causes it to break apart and, one rather hopes, become part of the sun’s corona itself.
The Parker Solar Probe is scheduled for launch early Saturday morning, and we’ll update this post when it takes off successfully or, as is possible, is delayed until a later date in the launch window.

Source: Gadgets – techcrunch

Autonomous drones could herd birds away from airports

Bird strikes on aircraft may be rare, but not so rare that airports shouldn’t take precautions against them. But keeping birds away is a difficult proposition: How do you control the behavior of flocks of dozens or hundreds of birds? Perhaps with a drone that autonomously picks the best path to do so, like this one developed by Caltech researchers.
Right now airports may use manually piloted drones, which are expensive and of course limited by the number of qualified pilots, or trained falcons — which as you might guess is a similarly difficult method to scale.
Soon-Jo Chung at Caltech became interested in the field after seeing the near-disaster in 2009, when US Airways flight 1549 nearly crashed due to a bird strike but was guided to a comparatively safe landing in the Hudson.
“It made me think that next time might not have such a happy ending,” he said in a Caltech news release. “So I started looking into ways to protect airspace from birds by leveraging my research areas in autonomy and robotics.”
A drone seems like an obvious solution — put it in the air and send those geese packing. But predicting and reliably influencing the behavior of a flock is no simple matter.
“You have to be very careful in how you position your drone. If it’s too far away, it won’t move the flock. And if it gets too close, you risk scattering the flock and making it completely uncontrollable,” Chung said.
The team studied models of how groups of animals move and affect one another, and arrived at their own model describing how birds move in response to threats. From this can be derived the flight path a drone should follow to make the birds swing aside in the desired direction without panicking and scattering.
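The researchers’ published model is considerably more sophisticated, but the core intuition — birds cohere with their flock and flee the drone, while the drone keeps itself on the far side of the flock from where you want the birds to go — can be shown in a toy 2D simulation. Every gain and distance below is made up for illustration; this is not the Caltech model.

```python
# Toy 2D herding sketch: boids-like birds plus a drone that nudges the flock
# toward a goal by hovering on the opposite side. Gains and distances are
# invented for illustration; this is not the published Caltech model.

import random

def herd_step(birds, goal, dt=0.1):
    """Advance bird positions one step; return (new_birds, drone_position)."""
    cx = sum(b[0] for b in birds) / len(birds)
    cy = sum(b[1] for b in birds) / len(birds)

    # Drone hovers a fixed offset behind the flock centroid, opposite the goal.
    gx, gy = goal[0] - cx, goal[1] - cy
    norm = max((gx * gx + gy * gy) ** 0.5, 1e-6)
    drone = (cx - 3.0 * gx / norm, cy - 3.0 * gy / norm)

    new_birds = []
    for x, y in birds:
        vx, vy = 0.2 * (cx - x), 0.2 * (cy - y)        # weak cohesion with the flock
        dx, dy = x - drone[0], y - drone[1]
        d = max((dx * dx + dy * dy) ** 0.5, 1e-6)
        vx += 4.0 * dx / (d * d)                        # strong repulsion from the drone
        vy += 4.0 * dy / (d * d)
        new_birds.append((x + vx * dt, y + vy * dt))
    return new_birds, drone

if __name__ == "__main__":
    random.seed(0)
    birds = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(20)]
    goal = (10.0, 0.0)
    for _ in range(80):
        birds, drone = herd_step(birds, goal)
    cx = sum(b[0] for b in birds) / len(birds)
    print(f"flock centroid x after herding: {cx:.1f}  (goal was x = 10)")
```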
Armed with this new software, drones were deployed in several spaces with instructions to deter birds from entering a given protected area. As you can see below (an excerpt from this video), it seems to have worked:
More experimentation is necessary, of course, to tune the model and get the system to a state that is reliable and works with various sizes of flocks, bird airspeeds, and so on. But it’s not hard to imagine this as a standard system for locking down airspace: a dozen or so drones informed by precision radar could protect quite a large area.
The team’s results are published in IEEE Transactions on Robotics.

Source: Gadgets – techcrunch

JBL’s $250 Google Assistant smart display is now available for pre-order

It’s been a week since Lenovo’s Google Assistant-powered smart display went on sale. Slowly but surely, its competitors are launching their versions, too. Today, JBL announced that its $249.95 JBL Link View is now available for pre-order, with an expected ship date of September 3, 2018.
JBL went for a slightly different design than Lenovo (and the upcoming LG WK9), but in terms of functionality, these devices are pretty much the same. The Link View features an 8-inch HD screen; unlike Lenovo’s Smart Display, JBL is not making a larger 10-inch version. It’s got two 10W speakers and the usual support for Bluetooth, as well as Google’s Chromecast protocol.
JBL says the unit is splash proof (IPX4), so you can safely use it to watch YouTube recipe videos in your kitchen. It also offers a 5MP front-facing camera for your video chats and a privacy switch that lets you shut off the camera and microphone.
JBL, Lenovo and LG all announced their Google Assistant smart displays at CES earlier this year. Lenovo was the first to actually ship a product, and both the hardware and Google’s software received a positive reception. There’s no word on when LG’s WK9 will hit the market.


Source: Gadgets – techcrunch

OpenAI’s robotic hand doesn’t need humans to teach it human behaviors

Gripping something with your hand is one of the first things you learn to do as an infant, but it’s far from a simple task, and it only gets more complex and variable as you grow up. This complexity makes gripping difficult for machines to teach themselves, but researchers at the Elon Musk and Sam Altman-backed OpenAI have created a system that not only holds and manipulates objects much like a human does, but developed these behaviors all on its own.
Many robots and robotic hands are already proficient at certain grips or movements — a robot in a factory can wield a bolt gun even more dexterously than a person. But the software that lets that robot do that task so well is likely to be hand-written and extremely specific to the application. You couldn’t, for example, give it a pencil and ask it to write. Even something on the same production line, like welding, would require a whole new system.


Yet for a human, picking up an apple isn’t so different from picking up a cup. There are differences, but our brains automatically fill in the gaps and we can improvise a new grip, hold an unfamiliar object securely and so on. This is one area where robots lag severely behind their human models. And furthermore, you can’t just train a bot to do what a human does — you’d have to provide millions of examples to adequately show what a human would do with thousands of given objects.
The solution, OpenAI’s researchers felt, was not to use human data at all. Instead, they let the computer try and fail over and over in a simulation, slowly learning how to move its fingers so that the object in its grasp moves as desired.
The system, which they call Dactyl, was provided only with the positions of its fingers and three camera views of the object in-hand — but remember, when it was being trained, all this data was simulated, taking place in a virtual environment. There, the computer doesn’t have to work in real time — it can try a thousand different ways of gripping an object in a few seconds, analyzing the results and feeding that data forward into the next try. (The hand itself is a Shadow Dexterous Hand, which is also more complex than most robotic hands.)
In addition to different objects and poses the system needed to learn, there were other randomized parameters, like the amount of friction the fingertips had, the colors and lighting of the scene and more. You can’t simulate every aspect of reality (yet), but you can make sure that your system doesn’t only work in a blue room, on cubes with special markings on them.
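That randomization is the part that is easiest to show in miniature. The sketch below illustrates only the idea of domain randomization — re-sampling physical and visual parameters for every simulated episode so that whatever policy scores best has to work across the whole range — on an invented toy task. It bears no resemblance to OpenAI’s actual training code; the parameter names, ranges and “policy” are all made up.

```python
# Minimal domain-randomization illustration: every episode runs in a simulation
# whose parameters are re-sampled, so a policy that scores well must work across
# the whole range. The toy task and parameters are invented; not OpenAI's code.

import random

def sample_randomized_params():
    return {
        "fingertip_friction": random.uniform(0.5, 1.5),
        "object_mass_kg":     random.uniform(0.03, 0.30),
        "camera_hue_shift":   random.uniform(-0.2, 0.2),
        "sensor_noise_std":   random.uniform(0.0, 0.02),
    }

def run_episode(policy_gain, params):
    """Toy stand-in for one simulated manipulation episode; returns a reward."""
    # Pretend heavier objects and noisier sensors make the task harder, and that
    # some gain setting works best on average across the whole distribution.
    difficulty = params["object_mass_kg"] * 10 + params["sensor_noise_std"] * 50
    return -abs(policy_gain - 1.0) - difficulty + random.gauss(0, 0.05)

def average_reward(policy_gain, episodes=500):
    return sum(run_episode(policy_gain, sample_randomized_params())
               for _ in range(episodes)) / episodes

if __name__ == "__main__":
    random.seed(0)
    # Crude "training": keep the policy setting that does best under randomization.
    best = max((g / 10 for g in range(21)), key=average_reward)
    print(f"policy gain selected under randomization: {best:.1f}")
```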
They threw a lot of power at the problem: 6144 CPUs and 8 GPUs, “collecting about one hundred years of experience in 50 hours.” And then they put the system to work in the real world for the first time — and it demonstrated some surprisingly human-like behaviors.
The things we do with our hands without even noticing, like turning an apple around to check for bruises or passing a mug of coffee to a friend, use lots of tiny tricks to stabilize or move the object. Dactyl recreated several of them, for example holding the object with a thumb and single finger while using the rest to spin to the desired orientation.
What’s great about this system is not just the naturalness of its movements and that they were arrived at independently by trial and error, but that it isn’t tied to any particular shape or type of object. Just like a human, Dactyl can grip and manipulate just about anything you put in its hand, within reason of course.
This flexibility is called generalization, and it’s important for robots that must interact with the real world. It’s impossible to hand-code separate behaviors for every object and situation in the world, but a robot that can adapt and fill in the gaps while relying on a set of core understandings can get by.
As with OpenAI’s other work, the paper describing the results is freely available, as are some of the tools they used to create and test Dactyl.

Source: Gadgets – techcrunch

Digging deeper into smart speakers reveals two clear paths

In a truly fascinating exploration into two smart speakers – the Sonos One and the Amazon Echo – BoltVC’s Ben Einstein has found some interesting differences in the way a traditional speaker company and an infrastructure juggernaut look at their flagship devices.
The post is well worth a full read but the gist is this: Sonos, a very traditional speaker company, has produced a good speaker and modified its current hardware to support smart home features like Alexa and Google Assistant. The Sonos One, notes Einstein, is a speaker first and smart hardware second.
“Digging a bit deeper, we see traditional design and manufacturing processes for pretty much everything. As an example, the speaker grill is a flat sheet of steel that’s stamped, rolled into a rounded square, welded, seams ground smooth, and then powder coated black. While the part does look nice, there’s no innovation going on here,” he writes.
The Amazon Echo, on the other hand, looks like what would happen if an engineer was given an unlimited budget and told to build something that people could talk to. The design decisions are odd and intriguing and it is ultimately less a speaker than a home conversation machine. Plus it is very expensive to make.
Pulling off the sleek speaker grille, there’s a shocking secret here: this is an extruded plastic tube with a secondary rotational drilling operation. In my many years of tearing apart consumer electronics products, I’ve never seen a high-volume plastic part with this kind of process. After some quick math on the production timelines, my guess is there’s a multi-headed drill and a rotational axis to create all those holes. CNC drilling each hole individually would take an extremely long time. If anyone has more insight into how a part like this is made, I’d love to see it! Bottom line: this is another surprisingly expensive part.

Sonos, which has been making a form of smart speaker for 15 years, is a CE company with cachet. Amazon, on the other hand, sees its devices as a way into living rooms and a delivery system for sales, and is fine with licensing its tech before making its own. Therefore to compare the two is a bit disingenuous. Einstein’s thesis that Sonos’ trajectory is troubled by its dependence on linear and closed manufacturing techniques, while Amazon spares no expense to make its products, is true. But Sonos makes speakers that work together amazingly well. They’ve done this for a decade and a half. If you compare their products – and I have – with competing smart speakers and non-audiophile “dumb” speakers, you will find their UI, UX, and sound quality surpass most comers.
Amazon makes things to communicate with Amazon. This is a big difference.
Where Einstein is correct, however, is in his belief that Sonos is at a definite disadvantage. Sonos chases smart technology while Amazon and Google (and Apple, if their HomePod is any indication) lead. That said, there is some value to having a fully-connected set of speakers with add-on smart features vs. having to build an entire ecosystem of speaker products that can take on every aspect of the home theatre.
On the flip side Amazon, Apple, and Google are chasing audio quality while Sonos leads. While we can say that in the future we’ll all be fine with tinny round speakers bleating out Spotify in various corners of our room, there is something to be said for a good set of woofers. Whether this nostalgic love of good sound survives this generation’s tendency to watch and listen to low resolution media is anyone’s bet, but that’s Amazon’s bet to lose.
Ultimately Sonos is a strong and fascinating company. An upstart that survived the great CE destruction wrought by Kickstarter and Amazon, it produces some of the best mid-range speakers I’ve used. Amazon makes a nice – almost alien – product, but given that it can be easily copied and stuffed into a hockey puck that probably costs less than the entire bill of materials for the Amazon Echo, it’s clear that Amazon’s goal isn’t to make speakers.
Whether the coming Sonos IPO will be successful depends partially on Amazon and Google playing ball with the speaker maker. The rest depends on the quality of product and the dedication of Sonos users. This good will isn’t as valuable as a signed contract with major infrastructure players but Sonos’ good will is far more than Amazon and Google have with their popular but potentially intrusive product lines. Sonos lives in the home while Google and Amazon want to invade it. That is where Sonos wins.

Source: Gadgets – techcrunch

Your next summer DIY project is an AI-powered doodle camera

With long summer evenings comes the perfect opportunity to dust off your old boxes of circuits and wires and start to build something. If you’re short on inspiration, you might be interested in artist and engineer Dan Macnish’s how-to guide on building an AI-powered doodle camera using a thermal printer, a Raspberry Pi, a dash of Python and Google’s Quick, Draw! data set.
“Playing with neural networks for object recognition one day, I wondered if I could take the concept of a Polaroid one step further, and ask the camera to re-interpret the image, printing out a cartoon instead of a faithful photograph,” Macnish wrote on his blog about the project, called Draw This.
To make this work, Macnish drew on Google’s object recognition neural network and the data set created for the game Google Quick, Draw! Tying the two systems together with some Python code, Macnish was able to have his creation recognize real images and print out the best corresponding doodle in the Quick, Draw! data set.
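As a rough, self-contained sketch of that pipeline — take a classifier label, look up a stroke drawing for that category and rasterize it for a thermal printer — something like the following would do. This is not Macnish’s implementation: the inline stroke data is a tiny stand-in for the real Quick, Draw! ndjson files, and the hard-coded label stands in for the camera and object-recognition step.

```python
# Rough sketch of the Draw This idea: label -> Quick, Draw!-style strokes ->
# bitmap for a thermal printer. Not Macnish's code: the stroke data is a tiny
# stand-in for the real dataset, and the "camera" is a hard-coded label.
# Requires Pillow (pip install Pillow).

from PIL import Image, ImageDraw

# Each drawing is a list of strokes; each stroke is ([x0, x1, ...], [y0, y1, ...]),
# the same shape as the simplified Quick, Draw! ndjson format (0-255 coordinates).
DOODLES = {
    "hot dog": [([20, 120, 220, 120, 20], [120, 90, 120, 150, 120]),
                ([40, 200], [120, 120])],
}

def doodle_for_label(label: str, size: int = 384) -> Image.Image:
    """Render the stored strokes for a label as a 1-bit image, ready to print."""
    img = Image.new("1", (size, size), 1)            # white background
    draw = ImageDraw.Draw(img)
    scale = size / 255.0
    for xs, ys in DOODLES[label]:
        points = [(x * scale, y * scale) for x, y in zip(xs, ys)]
        draw.line(points, fill=0, width=3)           # black ink
    return img

if __name__ == "__main__":
    label = "hot dog"             # in the real project this comes from the recognizer
    doodle_for_label(label).save("doodle.png")
    print(f"rendered a '{label}' doodle to doodle.png")  # hand this to the printer driver
```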
But since output doodles are limited to the data set, there can be some discrepancy between what the camera “sees” and what it generates for the photo.
“You point and shoot – and out pops a cartoon; the camera’s best interpretation of what it saw,” Macnish writes. “The result is always a surprise. A food selfie of a healthy salad might turn into an enormous hot dog.”
If you want to give this a go for yourself, Macnish has uploaded the instructions and code needed to build this project on GitHub.

Source: Gadgets – techcrunch

Apple is rebuilding Maps from the ground up


I’m not sure if you’re aware, but the launch of Apple Maps went poorly. After a rough first impression, an apology from the CEO, several years of patching holes with data partnerships and some glimmers of light with long-awaited transit directions and improvements in business, parking and place data, Apple Maps is still not where it needs to be to be considered a world class service.

Maps needs fixing.

Apple, it turns out, is aware of this, so it’s rebuilding the maps part of Maps.

It’s doing this by using first-party data gathered by iPhones with a privacy-first methodology and its own fleet of cars packed with sensors and cameras. The new product will launch in San Francisco and the Bay Area with the next iOS 12 Beta and will cover Northern California by fall.

Every version of iOS will get the updated maps eventually and they will be more responsive to changes in roadways and construction, more visually rich depending on the specific context they’re viewed in and feature more detailed ground cover, foliage, pools, pedestrian pathways and more.

This is nothing less than a full reset of Maps, and it’s been four years in the making, which is when Apple began to develop its new data-gathering systems. Eventually, Apple will no longer rely on third-party data to provide the basis for its maps, which has been one of its major pitfalls from the beginning.

“Since we introduced this six years ago — we won’t rehash all the issues we’ve had when we introduced it — we’ve done a huge investment in getting the map up to par,” says Apple SVP Eddy Cue, who now owns Maps, in an interview last week. “When we launched, a lot of it was all about directions and getting to a certain place. Finding the place and getting directions to that place. We’ve done a huge investment of making millions of changes, adding millions of locations, updating the map and changing the map more frequently. All of those things over the past six years.”

But, Cue says, Apple has room to improve on the quality of Maps, something that most users would agree on, even with recent advancements.

“We wanted to take this to the next level,” says Cue. “We have been working on trying to create what we hope is going to be the best map app in the world, taking it to the next step. That is building all of our own map data from the ground up.”

In addition to Cue, I spoke to Apple VP Patrice Gautier and over a dozen Apple Maps team members at its mapping headquarters in California this week about its efforts to re-build Maps, and to do it in a way that aligned with Apple’s very public stance on user privacy.

If, like me, you’re wondering whether Apple thought of building its own maps from scratch before it launched Maps, the answer is yes. At the time, there was a choice to be made about whether or not it wanted to be in the business of Maps at all. Given that the future of mobile devices was becoming very clear, it knew that mapping would be at the core of nearly every aspect of its devices from photos to directions to location services provided to apps. Decision made, Apple plowed ahead, building a product that relied on a patchwork of data from partners like TomTom, OpenStreetMap and other geo data brokers. The result was underwhelming.

Almost immediately after Apple launched Maps, it realized that it was going to need help and it signed on a bunch of additional data providers to fill the gaps in location, base map, point-of-interest and business data.

It wasn’t enough.

“We decided to do this just over four years ago. We said, ‘Where do we want to take Maps? What are the things that we want to do in Maps?’ We realized that, given what we wanted to do and where we wanted to take it, we needed to do this ourselves,” says Cue.

Because Maps is so core to so many functions, success wasn’t tied to just one function. Maps needed to be great at transit, driving and walking — but also as a utility used by apps for location services and other functions.

Cue says that Apple needed to own all of the data that goes into making a map, and to control it from a quality as well as a privacy perspective.

There’s also the matter of corrections, updates and changes entering a long loop of submission to validation to update when you’re dealing with external partners. The Maps team would have to be able to correct roads, pathways and other updating features in days or less, not months. Not to mention the potential competitive advantages it could gain from building and updating traffic data from hundreds of millions of iPhones, rather than relying on partner data.

Cue points to the proliferation of devices running iOS, now numbering in the millions, as a deciding factor to shift its process.

“We felt like because the shift to devices had happened — building a map today in the way that we were traditionally doing it, the way that it was being done — we could improve things significantly, and improve them in different ways,” he says. “One is more accuracy. Two is being able to update the map faster based on the data and the things that we’re seeing, as opposed to driving again or getting the information where the customer’s proactively telling us. What if we could actually see it before all of those things?”

I query him on the rapidity of Maps updates, and whether this new map philosophy means faster changes for users.

“The truth is that Maps needs to be [updated more], and even are today,” says Cue. “We’ll be doing this even more with our new maps, [with] the ability to change the map real-time and often. We do that every day today. This is expanding us to allow us to do it across everything in the map. Today, there’s certain things that take longer to change.

“For example, a road network is something that takes a much longer time to change currently. In the new map infrastructure, we can change that relatively quickly. If a new road opens up, immediately we can see that and make that change very, very quickly around it. It’s much, much more rapid to do changes in the new map environment.”

So a new effort was created to begin generating its own base maps, the very lowest building block of any really good mapping system. After that, Apple would begin layering on living location data, high resolution satellite imagery and brand new intensely high resolution image data gathered from its ground cars until it had what it felt was a ‘best in class’ mapping product.

There is only really one big company on earth that owns an entire map stack from the ground up: Google.

Apple knew it needed to be the other one. Enter the vans.

Apple vans spotted

Though the overall project started earlier, the first glimpse most folks had of Apple’s renewed efforts to build the best Maps product was the vans that started appearing on the roads in 2015 with ‘Apple Maps’ signs on the side. Capped with sensors and cameras, these vans popped up in various cities and sparked rampant discussion and speculation.

The new Apple Maps will be the first time the data collected by these vans is actually used to construct and inform its maps. This is their coming out party.

Some people have commented that Apple’s rigs look more robust than the simple GPS + Camera arrangements on other mapping vehicles — going so far as to say they look more along the lines of something that could be used in autonomous vehicle training.

Apple isn’t commenting on autonomous vehicles, but there’s a reason the arrays look more advanced: they are.

Earlier this week I took a ride in one of the vans as it ran a sample route to gather the kind of data that would go into building the new maps. Here’s what’s inside.

In addition to a beefed-up GPS rig on the roof, four LiDAR arrays mounted at the corners and eight cameras shooting overlapping high-resolution images, there’s also the standard physical measuring tool attached to a rear wheel that allows for precise tracking of distance and image capture. In the rear there is a surprising lack of bulky equipment. Instead, it’s a straightforward Mac Pro bolted to the floor, attached to an array of solid state drives for storage. A single USB cable routes up to the dashboard, where the actual mapping capture software runs on an iPad.

While mapping, a driver…drives, while an operator takes care of the route, ensuring that a coverage area that has been assigned is fully driven and monitoring image capture. Each drive captures thousands of images as well as a full point cloud (a 3D map of space defined by dots that represent surfaces) and GPS data. I later got to view the raw data presented in 3D and it absolutely looks like the quality of data you would need to begin training autonomous vehicles.

More on why Apple needs this level of data detail later.

When the images and data are captured, they are encrypted on the fly and recorded onto the SSDs. Once full, the SSDs are pulled out, replaced and packed into a case, which is delivered to Apple’s data center, where a suite of software eliminates private information like faces, license plates and other info from the images. From the moment of capture to the moment they’re sanitized, they are encrypted with one key in the van and the other key in the data center. Technicians and software that are part of its mapping efforts down the pipeline from there never see unsanitized data.
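Apple didn’t detail the cryptography, but “one key in the van and the other key in the data center” reads like standard hybrid public-key encryption: each capture gets a fresh symmetric key, and that key is wrapped with the data center’s public key so only the data center’s private key can ever unlock it. A minimal sketch under that assumption, using Python’s cryptography package (this is an interpretation, not Apple’s implementation):

```python
# Hedged sketch of hybrid encryption matching the "one key in the van, the other
# in the data center" description: a fresh symmetric key per capture, wrapped
# with the data center's public key. An assumption about the scheme, not Apple's
# implementation. Requires: pip install cryptography

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def encrypt_capture(raw: bytes, datacenter_public_key) -> tuple[bytes, bytes]:
    """Run in the van: encrypt a capture and wrap its one-time key."""
    capture_key = Fernet.generate_key()
    ciphertext = Fernet(capture_key).encrypt(raw)
    wrapped_key = datacenter_public_key.encrypt(capture_key, OAEP)
    return ciphertext, wrapped_key

def decrypt_capture(ciphertext: bytes, wrapped_key: bytes, datacenter_private_key) -> bytes:
    """Run in the data center: unwrap the key, then decrypt the capture."""
    capture_key = datacenter_private_key.decrypt(wrapped_key, OAEP)
    return Fernet(capture_key).decrypt(ciphertext)

if __name__ == "__main__":
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    ct, wk = encrypt_capture(b"lidar frame + imagery", private_key.public_key())
    assert decrypt_capture(ct, wk, private_key) == b"lidar frame + imagery"
    print("round trip ok")
```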

This is just one element of Apple’s focus on the privacy of the data it is utilizing in New Maps.

Probe data and privacy

Throughout every conversation I have with any member of the team throughout the day, privacy is brought up, emphasized. This is obviously by design as it wants to impress upon me as a journalist that it’s taking this very seriously indeed, but it doesn’t change the fact that it’s evidently built in from the ground up and I could not find a false note in any of the technical claims or the conversations I had.

Indeed, from the data security folks to the people whose job it is to actually make the maps work well, the constant refrain is that Apple does not feel that it is being held back in any way by not hoovering every piece of customer-rich data it can, storing and parsing it.

The consistent message is that the team feels it can deliver a high quality navigation, location and mapping product without the directly personal data used by other platforms.

“We specifically don’t collect data, even from point A to point B,” notes Cue. “We collect data — when we do it — in an anonymous fashion, in subsections of the whole, so we couldn’t even say that there is a person that went from point A to point B. We’re collecting the segments of it. As you can imagine, that’s always been a key part of doing this. Honestly, we don’t think it buys us anything [to collect more]. We’re not losing any features or capabilities by doing this.”

The segments that he is referring to are sliced out of any given person’s navigation session. Neither the beginning nor the end of any trip is ever transmitted to Apple. Rotating identifiers, not personal information, are assigned to any data or requests sent to Apple, and it augments the ‘ground truth’ data provided by its own mapping vehicles with this ‘probe data’ sent back from iPhones.

Because only random segments of any person’s drive are ever sent, and that data is completely anonymized, there is no way to tie any trip to a single individual. The local system signs the IDs and only it knows who that ID refers to. Apple is working very hard here to not know anything about its users. This kind of privacy can’t be added on at the end; it has to be woven in at the ground level.
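As a simplified, concrete illustration of the scheme described above — drop the ends of a trip, cut what remains into short segments, and give every segment its own unrelated identifier so no server-side join can rebuild the route — consider the sketch below. The trim length, segment size and sampling rate are invented for illustration.

```python
# Simplified illustration of probe-data anonymization as described above:
# trim the trip's endpoints, slice the remainder into short segments, and tag
# each segment with its own random identifier. All sizes here are invented.

import random
import uuid

def anonymize_trip(points, trim=10, segment_len=20, keep_fraction=0.5):
    """points: ordered (lat, lon, speed_mps) samples from one navigation session."""
    core = points[trim:-trim]                      # never transmit the start or the end
    segments = [core[i:i + segment_len] for i in range(0, len(core), segment_len)]
    sampled = [s for s in segments if random.random() < keep_fraction]
    # Each segment gets an unrelated rotating ID; nothing links segments to each
    # other or to a user, so no single trip can be reassembled server-side.
    return [{"segment_id": str(uuid.uuid4()), "samples": s} for s in sampled]

if __name__ == "__main__":
    random.seed(1)
    trip = [(37.0 + i * 1e-4, -122.0, 12.0) for i in range(200)]
    for seg in anonymize_trip(trip):
        print(seg["segment_id"], len(seg["samples"]), "samples")
```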

Because Apple’s business model does not rely on it serving, say, an ad for a Chevron on your route to you, it doesn’t need to even tie advertising identifiers to users.

Any personalization or Siri requests are all handled on-board by the iOS device’s processor. So if you get a drive notification that tells you it’s time to leave for your commute, that’s learned, remembered and delivered locally, not from Apple’s servers.

That’s not new, but it’s important to note given the new thing to take away here: Apple is flipping on the power of having millions of iPhones passively and actively improving their mapping data in real time.

In short: traffic, real-time road conditions, road systems, new construction and changes in pedestrian walkways are about to get a lot better in Apple Maps.

The secret sauce here is what Apple calls probe data: essentially little slices of vector data that represent direction and speed, transmitted back to Apple completely anonymized, with no way to tie them to a specific user or even to any given trip. Apple is reaching in and sipping a tiny amount of data from millions of users instead, giving it a holistic, real-time picture without compromising user privacy.

If you’re driving, walking or cycling, your iPhone can already tell this. Now if it knows you’re driving, it can also send relevant traffic and routing data in these anonymous slivers to improve the entire service. This only happens if your Maps app has been active, say when you check the map, look for directions and so on. If you’re actively using your GPS for walking or driving, then the updates are more precise and can help with walking improvements like charting new pedestrian paths through parks — building out the map’s overall quality.

All of this, of course, is governed by whether you opted into location services and can be toggled off using the maps location toggle in the Privacy section of settings.

Apple says that this will have a near zero effect on battery life or data usage, because you’re already using the ‘maps’ features when any probe data is shared and it’s a fraction of what power is being drawn by those activities.

From the point cloud on up

But maps cannot live on ground truth and mobile data alone. Apple is also gathering new high resolution satellite data to combine with its ground truth data for a solid base map. It’s then layering satellite imagery on top of that to better determine foliage, pathways, sports facilities, building shapes and pathways.

After the downstream data has been cleaned up of license plates and faces, it gets run through a bunch of computer vision programming to pull out addresses, street signs and other points of interest. These are cross referenced to publicly available data like addresses held by the city and new construction of neighborhoods or roadways that comes from city planning departments.

But one of the special sauce bits that Apple is adding to the mix of mapping tools is a full on point cloud that maps the world around the mapping van in 3D. This allows them all kinds of opportunities to better understand what items are street signs (retro-reflective rectangular object about 15 feet off the ground? Probably a street sign) or stop signs or speed limit signs.

It seems like it could also enable positioning of navigation arrows in 3D space for AR navigation, but Apple declined to comment on ‘any future plans’ for such things.

Apple also uses semantic segmentation and Deep Lambertian Networks to analyze the point cloud coupled with the image data captured by the car and from high-resolution satellites in sync. This allows 3D identification of objects, signs, lanes of traffic and buildings and separation into categories that can be highlighted for easy discovery.

The coupling of high resolution image data from car and satellite, plus a 3D point cloud results in Apple now being able to produce full orthogonal reconstructions of city streets with textures in place. This is massively higher resolution and easier to see, visually. And it’s synchronized with the ‘panoramic’ images from the car, the satellite view and the raw data. These techniques are used in self driving applications because they provide a really holistic view of what’s going on around the car. But the ortho view can do even more for human viewers of the data by allowing them to ‘see’ through brush or tree cover that would normally obscure roads, buildings and addresses.

This is hugely important when it comes to the next step in Apple’s battle for supremely accurate and useful Maps: human editors.

Apple has had a team of tool builders working specifically on a toolkit that can be used by human editors to vet and parse data, street by street. The editor’s suite includes tools that allow human editors to assign specific geometries to flyover buildings (think Salesforce tower’s unique ridged dome) that allow them to be instantly recognizable. It lets editors look at real images of street signs shot by the car right next to 3D reconstructions of the scene and computer vision detection of the same signs, instantly recognizing them as accurate or not.

Another tool corrects addresses, letting an editor quickly move an address to the center of a building, determine whether they’re misplaced and shift them around. It also allows for access points to be set, making Apple Maps smarter about the ‘last 50 feet’ of your journey. You’ve made it to the building, but what street is the entrance actually on? And how do you get into the driveway? With a couple of clicks, an editor can make that permanently visible.

“When we take you to a business and that business exists, we think the precision of where we’re taking you to, from being in the right building,” says Cue. “When you look at places like San Francisco or big cities from that standpoint, you have addresses where the address name is a certain street, but really, the entrance in the building is on another street. They’ve done that because they want the better street name. Those are the kinds of things that our new Maps really is going to shine on. We’re going to make sure that we’re taking you to exactly the right place, not a place that might be really close by.”

Water, swimming pools (new to Maps entirely), sporting areas and vegetation are now more prominent and fleshed out thanks to new computer vision and satellite imagery applications. So Apple had to build editing tools for those as well.

Many hundreds of editors will be using these tools, in addition to the thousands of employees Apple already has working on maps, but the tools had to be built first, now that Apple is no longer relying on third parties to vet and correct issues.

And the team also had to build computer vision and machine learning tools that allow it to determine whether there are issues to be found at all.

Anonymous probe data from iPhones, visualized, looks like thousands of dots, ebbing and flowing across a web of streets and walkways, like a luminescent web of color. At first, chaos. Then, patterns emerge. A street opens for business, and nearby vessels pump orange blood into the new artery. A flag is triggered and an editor looks to see if a new road needs a name assigned.

A new intersection is added to the web and an editor is flagged to make sure that the left turn lanes connect correctly across the overlapping layers of directional traffic. This has the added benefit of massively improved lane guidance in the new Apple Maps.
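Apple hasn’t said how that flagging works under the hood, but a crude version of the idea — bucket probe traces into a grid, compare them against the cells where the base map already has a road, and flag busy cells with no mapped road for a human editor — might look something like this. The grid size and threshold are invented.

```python
# Crude sketch of "probe data flags a possible new road": bucket probe points
# into a grid, compare against cells the base map already marks as road, and
# flag busy cells with no known road for a human editor. Thresholds are invented.

from collections import Counter

def flag_candidate_roads(probe_points, known_road_cells, cell_size=0.001, min_hits=50):
    """probe_points: (lat, lon) samples; known_road_cells: set of grid cells with roads."""
    counts = Counter((round(lat / cell_size), round(lon / cell_size))
                     for lat, lon in probe_points)
    return [cell for cell, hits in counts.items()
            if hits >= min_hits and cell not in known_road_cells]

if __name__ == "__main__":
    # Simulate heavy traffic along a street the base map doesn't know about yet.
    new_street = [(37.3300, -122.0100 + i * 0.00001) for i in range(2000)]
    flags = flag_candidate_roads(new_street, known_road_cells=set())
    print(f"{len(flags)} grid cells flagged for editor review")
```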

Apple is counting on this combination of human and AI flagging to allow editors to first craft base maps and then also maintain them as the ever changing biomass wreaks havoc on roadways, addresses and the occasional park.

Here there be Helvetica

Apple’s new Maps, like many other digital maps, display vastly differently depending on scale. If you’re zoomed out, you get less detail. If you zoom in, you get more. But Apple has a team of cartographers on staff that work on more cultural, regional and artistic levels to ensure that its Maps are readable, recognizable and useful.

These teams have goals that are at once concrete and a bit out there — in the best traditions of Apple pursuits that intersect the technical with the artistic.

The maps need to be usable, but they also need to fulfill cognitive goals on cultural levels that go beyond what any given user might know they need. For instance, in the US, it is very common to have maps that have a relatively low level of detail even at a medium zoom. In Japan, however, the maps are absolutely packed with details at the same zoom, because that increased information density is what is expected by users.

This is the department of details. They’ve reconstructed replicas of hundreds of actual road signs to make sure that the shield on your navigation screen matches the one you’re seeing on the highway road sign. When it comes to public transport, Apple licensed all of the typefaces that you see on your favorite subway systems, like Helvetica for NYC. And the line numbers are in the exact same order that you’re going to see them on the platform signs.

It’s all about reducing the cognitive load that it takes to translate the physical world you have to navigate through into the digital world represented by Maps.

Bottom line

The new version of Apple Maps will be in preview next week with just the Bay Area of California going live. It will be stitched seamlessly into the ‘current’ version of Maps, but the difference in quality level should be immediately visible based on what I’ve seen so far.

Better road networks, more pedestrian information, sports areas like baseball diamonds and basketball courts, more land cover including grass and trees represented on the map as well as buildings, building shapes and sizes that are more accurate. A map that feels more like the real world you’re actually traveling through.

Search is also being revamped to make sure that you get more relevant results (on the correct continents) than ever before. Navigation, especially pedestrian guidance, also gets a big boost. Parking areas and building details to get you the last few feet to your destination are included as well.

What you won’t see, for now, is a full visual redesign.

“You’re not going to see huge design changes on the maps,” says Cue. “We don’t want to combine those two things at the same time because it would cause a lot of confusion.”

Apple Maps is getting the long awaited attention it really deserves. By taking ownership of the project fully, Apple is committing itself to actually creating the map that users expected of it from the beginning. It’s been a lingering shadow on iPhones, especially, where alternatives like Google Maps have offered more robust feature sets that are so easy to compare against the native app but impossible to access at the deep system level.

The argument has been made ad nauseam, but it’s worth saying again that if Apple thinks that mapping is important enough to own, it should own it. And that’s what it’s trying to do now.

“We don’t think there’s anybody doing this level of work that we’re doing,” adds Cue. “We haven’t announced this. We haven’t told anybody about this. It’s one of those things that we’ve been able to keep pretty much a secret. Nobody really knows about it. We’re excited to get it out there. Over the next year, we’ll be rolling it out, section by section in the US.”

Source: Mobile – TechCrunch