Pixel 2 vs Pixel 3: Should you upgrade?

If you’re considering making the jump to Google’s newly announced Pixel 3 and Pixel 3 XL, you’re in the right place. Whether you’re a Pixel 2 owner eyeing greener pastures or a bargain type hunting for a last-gen smartphone that’s still top of the line, comparing new and old is often useful.

On specs alone, the Pixel 3 shares most of its DNA with the Pixel 2, but there are a handful of meaningful differences, and they’re not all obvious. What is obvious: The Pixel 3’s AMOLED screen is now 5.5 inches compared to the Pixel 2’s 5-inch display. The Pixel 3 XL now offers a 6.3-inch display, up 0.3 inches from the Pixel 2 XL.

The Pixel 3 and Pixel 3 XL upgrade the Pixel 2’s processor slightly and add a second front-facing camera for some of the device’s newest tricks. The primary camera also gets an under-the-hood upgrade to its visual co-processing chip, called Visual Core. That Visual Core update is what powers some of the new camera features that we’ll get into in just a bit.

Pixel 3 XL

Beyond that, the hardware looks very similar for the most part, though the Pixel 3 and Pixel 3 XL do offer some changes in screen size, like we mentioned. Most noticeably, the Pixel 3 XL has an iPhone-like notch this time around, while the notchless Pixel 3 offers a reduced bezel but no edge-to-edge screen.

Pixel 2 XL

The Pixel 3 starts at $799 (64GB of storage) while the base model Pixel 2 is currently priced at $649, though more price drops could be in store. The Pixel 3 XL starts at $899 for 64GB of storage and offers 128GB for $999. The Pixel 2 XL is more deeply discounted than its smaller sibling at the moment, with a 64GB base option on sale for $699. If it sounds complicated, it’s not really. Each Pixel comes in two storage configurations, 64GB or 128GB, and the extra storage costs $100 more.

The black and white Pixel 2 XL

With the Pixel 3, Google has unified the color scheme across both sizes of device, offering “Just Black,” “Clearly White” with an eye-catching seafoam-colored button, and a very Apple-like “Not Pink” that comes with a coral-colored button.

Google’s Pixel 2 came in black and white as well, plus a muted greyish-blue color, which was cool. The Pixel 2 XL came in all black or black and white with a brightly colored power button, so we’re a little sad to see that color go. Google also noted in its launch event that the new phones feel more comfortable to hold, though we’d have to try that out with the Pixel 3 XL to see if that really holds true.

Like we said, if you’re not vehemently anti-notch, the hardware isn’t that different. The dual front-facing camera is the most substantial change. But since we’re talking about Google phones, what we’re really talking about is software — and when it comes to software, Google has held some substantial perks exclusive to the Pixel 3.

We spoke to Google to clarify which features won’t be coming to the Pixel 2, at least not yet:

  • Photobooth: The hands-free selfie mode that snaps photos when you smile.
  • Top Shot: Burst photo mode that picks your best shots.
  • Super Res Zoom: A new machine learning-powered camera mode that merges many burst images to fill in additional details.
  • Wide-angle selfies: That extra front-facing camera wasn’t for nothing. Mark my words, this is the Pixel 3’s real killer feature, even if it takes a while to catch on.
  • Motion Auto focus: A camera mode that allows you to tap a subject once and track it while it moves.
  • Lens Suggestions: A new mode for Google Lens.
  • Titan M: A new security chip with a cool name that Google touts for providing enterprise-level security.
  • Wireless charging: Either a big deal to you or it’s not.

Thrift-minded shoppers and fairly content Pixel 2 owners fear not. There are plenty of new features that don’t rely on hardware improvements and will be coming to vintage Pixels. Those include Call Screen, Night Sight, Playground (the AR sticker thing) and Digital Wellbeing, already available in beta.

So, do you need to upgrade? Well, as always, that’s a very personal and often nitpicky, detail-oriented question. Are you dying for a slight but not unsubstantial bump in screen real estate? Does Google’s very solid lineup of cool new camera modes entice you? Is wireless charging an absolute dealmaker?

As for me, I’m perfectly happy with the Pixel 2 for now, but as someone who regularly takes front-facing photos with more than one human in them, that extra-wide group selfie mode does beckon. If I were still using a first-generation Pixel I’d be all over the Pixel 3, but my device has a ton of life left in it.

A Google spokesperson emphasized that as always with its flagship smartphone line, the company will “try to bring as many features as possible to existing phones so they keep getting better over time.”

The Pixel 2 is still one of the best smartphones ever made and it’s more affordable now than before. Even with last-gen hardware — often the best deal for smartphone shoppers — you can rest easy knowing that Google won’t abandon the Pixel 2.

Source: Mobile – TechCrunch

Google ups the Pixel 3’s camera game with Top Shot, group selfies and more

With the Pixel 2, Google introduced one of the best smartphone cameras ever made. It’s fitting, then, that the Pixel 3 builds on an already pretty perfect camera, adding some bells and whistles sure to please mobile photographers rather than messing with a good thing. On paper, the Pixel 3’s camera doesn’t look much different than its recent forebear. But, because we’re talking about Google, software is where the device will really shine. We’ll go over everything that’s new.

Starting with specs, both the Pixel 3 and the Pixel 3 XL will sport a 12.2MP rear camera with an f/1.8 aperture and an 8MP dual front camera capable of both normal field of view and ultra-wide angle shots. The rear video camera captures 1080p video at 30, 60 or 120 fps, while the front-facing video camera is capable of capturing 1080p video at 30fps. Google did not add a second rear-facing camera, deeming it “unnecessary,” given what the company can do with machine learning alone. Knowing how good the Pixel 2’s camera is, we can’t really argue here.

While it’s not immediately evident from the specs sheet, Google also updated the Pixel visual co-processing chip known as Visual Core for the Pixel 3 and Pixel 3 XL. The updated Visual Core chip is what powers some of the processing-heavy new photo features.

Top Shot

With the Pixel 3, Google introduces Top Shot, which compares a burst of images taken in rapid succession and automatically detects the best shot using machine learning. The idea is that the camera can screen out any photos in which a subject might have their eyes closed or be making a weird face unintentionally, choosing “smiles instead of sneezes” and offering the user the best of the batch. Stuff like this is usually gimmicky, but given Google’s image processing prowess it’s honestly probably going to be pretty good. Or as TechCrunch’s Matt Burns puts it, “Top Shots is Live Photo but useful,” which seems like a fair assessment.
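
In rough outline, that burst-ranking idea looks like the sketch below. This is a minimal illustration rather than Google’s implementation: sharpness (variance of the Laplacian) stands in for overall image quality, and face_score is a hypothetical hook where a face and expression model would penalize closed eyes or mid-sneeze frames.

```python
# Minimal sketch of burst ranking in the spirit of Top Shot (not Google's
# actual implementation): score every frame in a burst and keep the best one.
import cv2
import numpy as np

def sharpness(frame_bgr: np.ndarray) -> float:
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()   # blurrier frames score lower

def face_score(frame_bgr: np.ndarray) -> float:
    # Placeholder: a real system would run a face/expression model here
    # and penalize closed eyes or mid-sneeze expressions.
    return 0.0

def top_shot(burst: list[np.ndarray]) -> np.ndarray:
    scores = [sharpness(f) + face_score(f) for f in burst]
    return burst[int(np.argmax(scores))]           # "smiles instead of sneezes"
```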

Super Res Zoom

Google’s next Pixel 3 camera trick is called Super Res Zoom, which is what it sounds like. Super Res Zoom has the camera take a burst of photos and leverages the fact that each frame is very slightly different due to minute hand movements, merging those slightly different images into one higher-resolution photo to recreate detail “without grain” — or so Google claims. Because smartphone cameras lack optical zoom, this burst-and-merge approach compensates for detail at a distance. And because digital zoom is notoriously bad, we’re looking forward to putting this new method to the test. After all, if it worked for imaging the surface of Mars, it’s bound to work for concert photos.
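
The align-and-merge idea at the heart of multi-frame super-resolution can be sketched in a few lines of OpenCV. To be clear, this is a toy version built on my own assumptions, not Google’s algorithm, which does far more sophisticated motion estimation and robustness handling.

```python
# Toy multi-frame merge: upsample each burst frame, align it to a reference
# using the sub-pixel shift estimated by phase correlation, then average.
import cv2
import numpy as np

def merge_burst(burst_gray: list[np.ndarray], scale: int = 2) -> np.ndarray:
    # Upsample every frame so sub-pixel hand shake becomes a usable offset.
    up = [cv2.resize(f.astype(np.float32), None, fx=scale, fy=scale,
                     interpolation=cv2.INTER_LINEAR) for f in burst_gray]
    ref = up[0]
    acc = np.zeros_like(ref)
    for frame in up:
        (dx, dy), _ = cv2.phaseCorrelate(ref, frame)   # estimated translation
        m = np.float32([[1, 0, -dx], [0, 1, -dy]])     # warp back onto the reference grid
        acc += cv2.warpAffine(frame, m, (ref.shape[1], ref.shape[0]))
    # Averaging the aligned frames suppresses noise and recovers detail
    # that no single frame contains on its own.
    return acc / len(up)
```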

Night Sight

A machine learning camera hack designed to inspire people to retire flash once and for all (please), Night Sight can produce a usable photo in “extreme low light.” The idea is that machine learning can make educated guesses about the content in the frame, filling in detail and color correcting so it isn’t just one big noisy mess. Whether it works remains to be seen, but given the Pixel 2’s already stunning low-light performance we’d bet this is probably pretty cool.

Group Selfie Cam

Google knows what the people really want. One of the biggest hardware changes to the Pixel 3 line is the introduction of dual front-facing cameras that enable super-wide front-facing shots capable of capturing group photos. The wide-angle front-facing shots feature a 97-degree field of view, compared to the already fairly wide 75-degree field of view of the standard lens. Yes, Google is trying to make “Groupies” a thing — yes, that’s a selfie where you all cram in and hand the phone to the friend with the longest arms. Honestly, it might succeed.
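
To put those numbers in perspective, here’s a quick back-of-envelope framing calculation under a simple pinhole-camera assumption; the arm’s-length distance is my own choice, not a Google figure.

```python
# Framing width for the two selfie lenses under a pinhole model:
# width = 2 * distance * tan(fov / 2). Illustrative numbers only.
import math

def frame_width(fov_deg: float, distance_m: float) -> float:
    return 2 * distance_m * math.tan(math.radians(fov_deg) / 2)

wide, normal = frame_width(97, 0.6), frame_width(75, 0.6)
print(f"{wide:.2f} m vs {normal:.2f} m -> {wide / normal:.0%} of normal width")
# ~1.36 m vs ~0.92 m at 60 cm: roughly half again as much room for faces.
```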

Google has a few more handy tricks up its sleeve. In Photobooth mode, the Pixel 3 can snap the selfie shutter when you smile, no hands needed. And with a new motion-tracking auto-focus option, you can tap once to track the subject of a photo without needing to tap to refocus, a feature sure to be handy for the kind of people who fill up their storage with hundreds of out-of-focus pet shots.

Google Lens is also back, of course, though honestly it tends to sit forgotten in the camera settings. And Google’s AR stickers are now called Playground and respond to actions and facial expressions. Google is also launching a Childish Gambino AR experience on Playground (probably as good as this whole AR sticker thing gets, tbh), which will launch with the Pixel 3 and come to the Pixel 1 and Pixel 2 a bit later on.

With the Pixel 3, Google will also improve upon the Pixel 2’s already excellent Portrait Mode, offering the ability to change the depth of field and the subject. And, of course, the company will still offer free unlimited full resolution photo storage in the wonderfully useful Google Photos, which remains superior to every aspect of photo processing and storage on the iPhone.

Many of the features that Google announced today for the Pixel 3 rely on its new Visual Core chip and dual front cameras, but older Pixels will also be able to use Night Sight. Google clarified to TechCrunch that Photobooth, Top Shot, Super Res Zoom, Group Selfie Cam and Motion Auto focus are exclusive to the Pixel 3 and Pixel 3 XL due to a dependence on hardware updates.

With its Pixel line, now three generations deep, Google has leaned heavily on software-powered tricks and machine learning to make a smartphone camera far better than it should be. Given Google’s image processing chops, that’s a great thing, and most of its experimental software workarounds generally work very well. We’re looking forward to taking its latest set of photography tricks for a spin, so keep an eye out for our upcoming Pixel 3 hands-on posts and reviews.

Source: Mobile – TechCrunch

Here are all the details on the new Pixel 3, Pixel Slate, Pixel Stand, and Home Hub

At a special event in New York City, Google announced some of its latest, flagship hardware devices. During the hour-long press conference Google executives and product managers took the wraps off the company’s latest products and explained their features. Chief among the lot is the Pixel 3, Google’s latest flagship Android device. Like the Pixel 2 before it, the Pixel 3’s main feature is its stellar camera but there’s a lot more magic packed inside the svelte frame.
Pixel 3

Contrary to some earlier renders, the third version of Google’s Android flagship (spotted by 9to5Google) does boast a sizable notch up top, in keeping with earlier images of the larger XL. Makes sense: after all, Google went out of its way to boast about notch functionality when it introduced Pie, the latest version of its mobile OS.
The device is available for preorder today and will start shipping October 18, starting at $799. The larger XL starts at $899, still putting the product at less than the latest flagships from Apple and Samsung.
Pixel Slate

The device looks pretty much exactly like the leaks led us to believe — it’s a premium slate with a keyboard cover that doubles as a stand. It also features a touchpad, which gives it the edge over products like Samsung’s most recent Galaxy Tab. There’s also a matching Google Pen, which appears to be more or less the same product announced alongside the Pixelbook, albeit with a darker paint job to match the new product.
The product starts at $599, plus $199 for the keyboard and $99 for the new dark Pen. All three are shipping at some point later this year.
Home Hub

The device looks like an Android tablet mounted on top of a speaker — which ought to address the backward-firing sound that is one of the biggest design flaws of the recently introduced Echo Show 2. The speaker fabric comes in a number of different colors, in keeping with the rest of the Pixel/Home products, including the new Aqua.
When not in use, the product doubles as a smart picture frame, using albums from Google Photos. A new Live Albums feature auto-updates based on the people you choose, so you can, say, select your significant other and it will create a gallery based on that person. Sweet and also potentially creepy. Machine learning, meanwhile, will automatically filter out all of the lousy shots.
The Home Hub is up for pre-order today for a very reasonable $149. In fact, the device seems like a bit of a loss leader for the company, an attempt to hook people into the Google Assistant ecosystem. It will start shipping October 22.
Pixel Stand

The Pixel Stand is basically a sleek little round dock for your phone. While it can obviously charge your phone, what’s maybe more interesting is that when you put your phone into the cradle, it looks like it’ll start a new notifications view that’s not unlike what you’d see on a smart display. It costs $79.

Source: Gadgets – TechCrunch

This bipedal robot has a flying head

Making a bipedal robot is hard. You have to maintain exquisite balance at all times and, even with the amazing things Atlas can do, there is still a chance that your crazy robot will fall over and bop its electronic head. But what if that head is a quadcopter?
Researchers at the University of Tokyo have done just that with their wild Aerial-Biped. The robot isn’t truly bipedal; instead, it’s designed to act like a bipedal robot without the tricky business of actually balancing on two legs. Think of these legs as a fun bit of puppetry that mimics walking but doesn’t really walk.
“The goal is to develop a robot that has the ability to display the appearance of bipedal walking with dynamic mobility, and to provide a new visual experience. The robot enables walking motion with very slender legs like those of a flamingo without impairing dynamic mobility. This approach enables casual users to choreograph biped robot walking without expertise. In addition, it is much cheaper compared to a conventional bipedal walking robot,” the team told IEEE.
The robot is similar to the bizarre-looking Ballu, a blimp robot with a floating head and spindly legs. The new robot learned how to walk convincingly through machine learning, a feat that gives it a realistic gait even though it is really an aerial system. It’s definitely a clever little project and could be interesting at a theme park or in an environment where a massive bipedal robot falling over on someone might be discouraged.

Source: Gadgets – TechCrunch

Autonomous drones could herd birds away from airports

Bird strikes on aircraft may be rare, but not so rare that airports shouldn’t take precautions against them. But keeping birds away is a difficult proposition: How do you control the behavior of flocks of dozens or hundreds of birds? Perhaps with a drone that autonomously picks the best path to do so, like this one developed by CalTech researchers.
Right now airports may use manually piloted drones, which are expensive and of course limited by the number of qualified pilots, or trained falcons — which as you might guess is a similarly difficult method to scale.
Soon-Jo Chung at CalTech became interested in the field after seeing the near-disaster in 2009 when US Airways 1549 nearly crashed due to a bird strike but was guided to a comparatively safe landing in the Hudson.
“It made me think that next time might not have such a happy ending,” he said in a CalTech news release. “So I started looking into ways to protect airspace from birds by leveraging my research areas in autonomy and robotics.”
A drone seems like an obvious solution — put it in the air and send those geese packing. But predicting and reliably influencing the behavior of a flock is no simple matter.
“You have to be very careful in how you position your drone. If it’s too far away, it won’t move the flock. And if it gets too close, you risk scattering the flock and making it completely uncontrollable,” Chung said.
The team studied models of how groups of animals move and affect one another and arrived at their own that described how birds move in response to threats. From this can be derived the flight path a drone should follow that will cause the birds to swing aside in the desired direction but not panic and scatter.
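
That herding logic can be caricatured with a simple agent model: birds that loosely cohere and are repelled by a nearby threat, plus a drone that holds a standoff position behind the flock relative to the direction you want it to move. The sketch below is a toy under those assumptions, not the Caltech team’s published model.

```python
# Toy flock-herding loop: birds are point agents that drift toward their
# centroid and are gently repelled by the drone, which stations itself
# behind the flock, opposite the direction we want it to move.
import numpy as np

rng = np.random.default_rng(0)
birds = rng.normal(0.0, 5.0, size=(50, 2))        # positions, metres
goal_dir = np.array([1.0, 0.0])                   # push flock east
drone = birds.mean(axis=0) - 30 * goal_dir        # start well behind

for step in range(500):
    centroid = birds.mean(axis=0)
    cohesion = 0.01 * (centroid - birds)          # mild pull toward the flock
    away = birds - drone
    dist = np.linalg.norm(away, axis=1, keepdims=True)
    # Repulsion falls off with distance; get too close and the flock would
    # scatter, so the drone keeps a standoff distance instead of diving in.
    repulsion = 0.5 * away / (dist ** 2 + 1.0)
    birds += cohesion + repulsion
    drone = birds.mean(axis=0) - 15 * goal_dir    # re-position each step

print("flock centroid after herding:", birds.mean(axis=0))
```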
Armed with this new software, drones were deployed in several spaces with instructions to deter birds from entering a given protected area. As an excerpt from the team’s video shows, it seems to have worked.
More experimentation is necessary, of course, to tune the model and get the system to a state that is reliable and works with various sizes of flocks, bird airspeeds, and so on. But it’s not hard to imagine this as a standard system for locking down airspace: a dozen or so drones informed by precision radar could protect quite a large area.
The team’s results are published in IEEE Transactions on Robotics.

Source: Gadgets – TechCrunch

Your next summer DIY project is an AI-powered doodle camera

With long summer evenings comes the perfect opportunity to dust off your old boxes of circuits and wires and start to build something. If you’re short on inspiration, you might be interested in artist and engineer Dan Macnish’s how-to guide on building an AI-powered doodle camera using a thermal printer, a Raspberry Pi, a dash of Python and Google’s Quick, Draw! data set.
“Playing with neural networks for object recognition one day, I wondered if I could take the concept of a Polaroid one step further, and ask the camera to re-interpret the image, printing out a cartoon instead of a faithful photograph,” Macnish wrote on his blog about the project, called Draw This.
To make this work, Macnish drew on Google’s object recognition neural network and the data set created for Google’s game Quick, Draw! Tying the two systems together with some Python code, Macnish was able to have his creation recognize real images and print out the best corresponding doodle in the Quick, Draw! data set.
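
In outline, the pipeline boils down to three steps. The sketch below is a hypothetical skeleton of that flow (Macnish’s real code is on GitHub); classify_image, load_doodle and print_bitmap are stand-ins for the recognition model, the Quick, Draw! lookup and the thermal-printer driver.

```python
# Hypothetical skeleton of a Draw This-style pipeline, not Macnish's code.

def classify_image(jpeg_path: str) -> str:
    """Run the object-recognition network and return its top label."""
    raise NotImplementedError  # e.g. a small CNN running on the Raspberry Pi

def load_doodle(label: str):
    """Pick a matching stroke drawing from the Quick, Draw! data set."""
    raise NotImplementedError

def print_bitmap(doodle) -> None:
    """Rasterize the strokes and send them to the thermal printer."""
    raise NotImplementedError

def draw_this(jpeg_path: str) -> None:
    label = classify_image(jpeg_path)   # what the camera "sees"
    doodle = load_doodle(label)         # best cartoon stand-in
    print_bitmap(doodle)                # out pops the cartoon
```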
But since output doodles are limited to the data set, there can be some discrepancy between what the camera “sees” and what it generates for the photo.
“You point and shoot – and out pops a cartoon; the camera’s best interpretation of what it saw,” Macnish writes. “The result is always a surprise. A food selfie of a healthy salad might turn into an enormous hot dog.”
If you want to give this a go for yourself, Macnish has uploaded the instructions and code needed to build this project on GitHub.

Source: Gadgets – TechCrunch

Original Stitch’s new Bodygram will measure your body

After years of teasing, Original Stitch has officially launched their Bodygram service and will be rolling it out this summer. The system can scan your body based on front and side photos and will create custom shirts with your precise measurements.

“Bodygram gives you full body measurements as accurate as taken by professional tailors from just two photos on your phone. Simply take a front photo and a side photo and upload to our cloud and you will receive a push notification within minutes when your Bodygram sizing report is ready,” said CEO Jin Koh. “In the sizing report you will find your full body measurements including neck, sleeve, shoulder, chest, waist, hip, etc. Bodygram is capable of producing sizing result within 99 percent accuracy compared to professional human tailors.”

The technology is a clever solution to the biggest problem in custom clothing: fit. While it’s great to find a service that will tailor your clothing based on your measurements, often these measurements are slightly off and can affect the cut of the shirt or pants. Right now, Koh said, his team offers free returns if the custom shirts don’t fit.

Further, the technology is brand new and avoids many of the pitfalls of the original body-scanning tech. For example, Bodygram doesn’t require you to get into a Spandex onesie like most systems do and it can capture 40 measurements with only two full-body photos.

“Bodygram is the first sizing technology that works on your phone capable of giving you highly accurate sizing result from just two photos with you wearing normal clothing on any background,” said Koh. “Legacy technologies on the market today require you to wear a very tight-fitting spandex suit, take 360 photos of you and require a plain background to work. Other technologies give you accuracy with five inches deviation in accuracy while Bodygram is the first technology to give you sub-one-inch accuracy. We are the first to use both computer vision and machine learning techniques to solve the problem of predicting your body shape underneath the clothes. Once we predicted your body shape we wrote our proprietary algorithm to calculate the circumferences and the length for each part of the body.”
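
As a purely geometric illustration of how two orthogonal views can yield a circumference (my own toy example, not Bodygram’s proprietary algorithm), you can treat a cross-section such as the waist as an ellipse whose axes come from the width seen in the front photo and the depth seen in the side photo, then apply Ramanujan’s perimeter approximation:

```python
# Toy circumference estimate from a front width and a side depth, using
# Ramanujan's approximation for the perimeter of an ellipse.
import math

def ellipse_circumference(width_cm: float, depth_cm: float) -> float:
    a, b = width_cm / 2, depth_cm / 2
    h = ((a - b) / (a + b)) ** 2
    return math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))

# e.g. a 32 cm-wide, 22 cm-deep waist cross-section:
print(f"{ellipse_circumference(32, 22):.1f} cm")   # prints "85.6 cm"
```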

Koh hopes the technology will reduce returns.

“It’s not uncommon to see clothing return rates reaching in the 40-50 percent range,” he said. “Apparel clothing sales is among the lowest penetration in online shopping.”

The system also can be used to measure your body over time in order to collect health and weight data as well as help other manufacturers produce products that fit you perfectly. The app will launch this summer on Android and iOS. The company will be licensing the technology to other providers that will be able to create custom fits based on just a few side and front photos. Sales at the company grew 175 percent this year and they now have 350,000 buyers that are already creating custom shirts.

A number of competitors are in this interesting space, most notably ShapeScale, a company that appeared at TechCrunch Disrupt and promised a full body scan using a robotic scale. This, however, is the first commercial use of standard photos to measure your appendages and thorax and it’s an impressive step forward in the world of custom clothing.

Source: Mobile – TechCrunch

Apple is rebuilding Maps from the ground up

I’m not sure if you’re aware, but the launch of Apple Maps went poorly. After a rough first impression, an apology from the CEO, several years of patching holes with data partnerships and some glimmers of light with long-awaited transit directions and improvements in business, parking and place data, Apple Maps is still not where it needs to be to be considered a world-class service.

Maps needs fixing.

Apple, it turns out, is aware of this, so it’s rebuilding the maps part of Maps.

It’s doing this by using first-party data gathered by iPhones with a privacy-first methodology and its own fleet of cars packed with sensors and cameras. The new product will launch in San Francisco and the Bay Area with the next iOS 12 Beta and will cover Northern California by fall.

Every version of iOS will get the updated maps eventually and they will be more responsive to changes in roadways and construction, more visually rich depending on the specific context they’re viewed in and feature more detailed ground cover, foliage, pools, pedestrian pathways and more.

This is nothing less than a full re-set of Maps, and it’s been four years in the making; that’s when Apple began to develop its new data gathering systems. Eventually, Apple will no longer rely on third-party data to provide the basis for its maps, which has been one of its major pitfalls from the beginning.

“Since we introduced this six years ago — we won’t rehash all the issues we’ve had when we introduced it — we’ve done a huge investment in getting the map up to par,” says Apple SVP Eddy Cue, who now owns Maps, in an interview last week. “When we launched, a lot of it was all about directions and getting to a certain place. Finding the place and getting directions to that place. We’ve done a huge investment of making millions of changes, adding millions of locations, updating the map and changing the map more frequently. All of those things over the past six years.”

But, Cue says, Apple has room to improve on the quality of Maps, something that most users would agree on, even with recent advancements.

“We wanted to take this to the next level,” says Cue. “We have been working on trying to create what we hope is going to be the best map app in the world, taking it to the next step. That is building all of our own map data from the ground up.”

In addition to Cue, I spoke to Apple VP Patrice Gautier and over a dozen Apple Maps team members at its mapping headquarters in California this week about its efforts to re-build Maps, and to do it in a way that aligned with Apple’s very public stance on user privacy.

If, like me, you’re wondering whether Apple thought of building its own maps from scratch before it launched Maps, the answer is yes. At the time, there was a choice to be made about whether or not it wanted to be in the business of Maps at all. Given that the future of mobile devices was becoming very clear, it knew that mapping would be at the core of nearly every aspect of its devices from photos to directions to location services provided to apps. Decision made, Apple plowed ahead, building a product that relied on a patchwork of data from partners like TomTom, OpenStreetMap and other geo data brokers. The result was underwhelming.

Almost immediately after Apple launched Maps, it realized that it was going to need help and it signed on a bunch of additional data providers to fill the gaps in location, base map, point-of-interest and business data.

It wasn’t enough.

“We decided to do this just over four years ago. We said, ‘Where do we want to take Maps? What are the things that we want to do in Maps?’ We realized that, given what we wanted to do and where we wanted to take it, we needed to do this ourselves,” says Cue.

Because Maps are so core to so many functions, success wasn’t tied to just one function. Maps needed to be great at transit, driving and walking — but also as a utility used by apps for location services and other functions.

Cue says that Apple needed to own all of the data that goes into making a map, and to control it from a quality as well as a privacy perspective.

There’s also the matter of corrections, updates and changes, which enter a long loop of submission, validation and update when you’re dealing with external partners. The Maps team would have to be able to correct roads, pathways and other updating features in days or less, not months. Not to mention the potential competitive advantages it could gain from building and updating traffic data from hundreds of millions of iPhones, rather than relying on partner data.

Cue points to the proliferation of devices running iOS, now numbering in the millions, as a deciding factor to shift its process.

“We felt like because the shift to devices had happened — building a map today in the way that we were traditionally doing it, the way that it was being done — we could improve things significantly, and improve them in different ways,” he says. “One is more accuracy. Two is being able to update the map faster based on the data and the things that we’re seeing, as opposed to driving again or getting the information where the customer’s proactively telling us. What if we could actually see it before all of those things?”

I query him on the rapidity of Maps updates, and whether this new map philosophy means faster changes for users.

“The truth is that Maps needs to be [updated more], and even are today,” says Cue. “We’ll be doing this even more with our new maps, [with] the ability to change the map real-time and often. We do that every day today. This is expanding us to allow us to do it across everything in the map. Today, there’s certain things that take longer to change.

“For example, a road network is something that takes a much longer time to change currently. In the new map infrastructure, we can change that relatively quickly. If a new road opens up, immediately we can see that and make that change very, very quickly around it. It’s much, much more rapid to do changes in the new map environment.”

So a new effort was created to begin generating its own base maps, the very lowest building block of any really good mapping system. After that, Apple would begin layering on living location data, high resolution satellite imagery and brand new intensely high resolution image data gathered from its ground cars until it had what it felt was a ‘best in class’ mapping product.

There is really only one big company on earth that owns an entire map stack from the ground up: Google.

Apple knew it needed to be the other one. Enter the vans.

Apple vans spotted

Though the overall project started earlier, the first glimpse most folks had of Apple’s renewed efforts to build the best Maps product was the vans that started appearing on the roads in 2015 with ‘Apple Maps’ signs on the side. Capped with sensors and cameras, these vans popped up in various cities and sparked rampant discussion and speculation.

The new Apple Maps will be the first time the data collected by these vans is actually used to construct and inform its maps. This is their coming out party.

Some people have commented that Apple’s rigs look more robust than the simple GPS + Camera arrangements on other mapping vehicles — going so far as to say they look more along the lines of something that could be used in autonomous vehicle training.

Apple isn’t commenting on autonomous vehicles, but there’s a reason the arrays look more advanced: they are.

Earlier this week I took a ride in one of the vans as it ran a sample route to gather the kind of data that would go into building the new maps. Here’s what’s inside.

In addition to a beefed-up GPS rig on the roof, four LiDAR arrays mounted at the corners and eight cameras shooting overlapping high-resolution images, there’s also the standard physical measuring tool attached to a rear wheel that allows for precise tracking of distance and image capture. In the rear there is a surprising lack of bulky equipment. Instead, it’s a straightforward Mac Pro bolted to the floor, attached to an array of solid state drives for storage. A single USB cable routes up to the dashboard where the actual mapping capture software runs on an iPad.

While mapping, a driver…drives, and an operator takes care of the route, ensuring that the assigned coverage area is fully driven and monitoring image capture. Each drive captures thousands of images as well as a full point cloud (a 3D map of space defined by dots that represent surfaces) and GPS data. I later got to view the raw data presented in 3D and it absolutely looks like the quality of data you would need to begin training autonomous vehicles.

More on why Apple needs this level of data detail later.

When the images and data are captured, they are encrypted on the fly and recorded onto the SSDs. Once full, the SSDs are pulled out, replaced and packed into a case that is delivered to Apple’s data center, where a suite of software eliminates private information like faces, license plates and other info from the images. From the moment of capture to the moment they’re sanitized, they are encrypted with one key in the van and the other key in the data center. Technicians and software that are part of its mapping efforts down the pipeline from there never see unsanitized data.
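
That two-key arrangement is a classic hybrid-encryption pattern. Here’s a minimal sketch using Python’s cryptography package, under the assumption (mine, since Apple hasn’t published the details) that the van holds only a public key while the data center keeps the matching private key:

```python
# Hybrid-encryption sketch of the "one key in the van, the other in the data
# center" idea; an illustration of the general pattern, not Apple's scheme.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Data center: generates the key pair and ships only the public key to the van.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Van: encrypt a captured blob on the fly with a one-off session key, then
# wrap that session key with the data center's public key. The van cannot
# decrypt anything it has written to disk.
capture = b"lidar points + camera frames ..."
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(capture)
wrapped_key = public_key.encrypt(session_key, oaep)

# Data center: unwrap the session key with the private key and decrypt.
recovered_key = private_key.decrypt(wrapped_key, oaep)
assert Fernet(recovered_key).decrypt(ciphertext) == capture
```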

This is just one element of Apple’s focus on the privacy of the data it is utilizing in New Maps.

Probe data and Privacy

In every conversation I have with any member of the team throughout the day, privacy is brought up and emphasized. This is obviously by design, as Apple wants to impress upon me as a journalist that it’s taking this very seriously indeed, but it doesn’t change the fact that privacy is evidently built in from the ground up; I could not find a false note in any of the technical claims or the conversations I had.

Indeed, from the data security folks to the people whose job it is to actually make the maps work well, the constant refrain is that Apple does not feel that it is being held back in any way by not hoovering every piece of customer-rich data it can, storing and parsing it.

The consistent message is that the team feels it can deliver a high quality navigation, location and mapping product without the directly personal data used by other platforms.

“We specifically don’t collect data, even from point A to point B,” notes Cue. “We collect data — when we do it — in an anonymous fashion, in subsections of the whole, so we couldn’t even say that there is a person that went from point A to point B. We’re collecting the segments of it. As you can imagine, that’s always been a key part of doing this. Honestly, we don’t think it buys us anything [to collect more]. We’re not losing any features or capabilities by doing this.”

The segments that he is referring to are sliced out of any given person’s navigation session. Neither the beginning nor the end of any trip is ever transmitted to Apple. Rotating identifiers, not personal information, are assigned to any data or requests sent to Apple, and it augments the ‘ground truth’ data provided by its own mapping vehicles with this ‘probe data’ sent back from iPhones.

Because only random segments of any person’s drive are ever sent, and that data is completely anonymized, there is no way to tell whether any trip belongs to a single individual. The local system signs the IDs and only it knows who that ID refers to. Apple is working very hard here to not know anything about its users. This kind of privacy can’t be added on at the end; it has to be woven in at the ground level.
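
A toy version of that segmenting idea, purely to illustrate the shape of the approach rather than Apple’s actual pipeline, might drop the ends of a trace, slice the middle into short pieces and tag each piece with a throwaway identifier:

```python
# Illustrative probe-data segmenting: trim the origin and destination, chop
# the middle of the trace into short slivers, and give each sliver a fresh
# random identifier instead of anything tied to the user or the trip.
import secrets

def probe_segments(trace: list[tuple[float, float]],
                   trim: int = 20, seg_len: int = 10) -> list[dict]:
    middle = trace[trim:-trim]               # never transmit origin/destination
    segments = []
    for i in range(0, len(middle) - seg_len, seg_len):
        segments.append({
            "id": secrets.token_hex(8),      # rotating identifier, not a user ID
            "points": middle[i:i + seg_len], # a short, disconnected sliver
        })
    return segments                          # uploaded independently of each other
```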

Because Apple’s business model does not rely on it serving, say, an ad for a Chevron on your route to you, it doesn’t need to even tie advertising identifiers to users.

Any personalization or Siri requests are all handled on-board by the iOS device’s processor. So if you get a drive notification that tells you it’s time to leave for your commute, that’s learned, remembered and delivered locally, not from Apple’s servers.

That’s not new, but it’s important to note given the new thing to take away here: Apple is flipping on the power of having millions of iPhones passively and actively improving their mapping data in real time.

In short: traffic, real-time road conditions, road systems, new construction and changes in pedestrian walkways are about to get a lot better in Apple Maps.

The secret sauce here is what Apple calls probe data: essentially little slices of vector data that represent direction and speed, transmitted back to Apple completely anonymized, with no way to tie them to a specific user or even to any given trip. Apple is reaching in and sipping a tiny amount of data from millions of users instead, giving it a holistic, real-time picture without compromising user privacy.

If you’re driving, walking or cycling, your iPhone can already tell this. Now if it knows you’re driving, it can also send relevant traffic and routing data in these anonymous slivers to improve the entire service. This only happens if your Maps app has been active (say, you check the map or look for directions). If you’re actively using your GPS for walking or driving, then the updates are more precise and can help with walking improvements like charting new pedestrian paths through parks — building out the map’s overall quality.

All of this, of course, is governed by whether you opted into location services and can be toggled off using the maps location toggle in the Privacy section of settings.

Apple says that this will have a near zero effect on battery life or data usage, because you’re already using the Maps features when any probe data is shared, and it’s a fraction of the power being drawn by those activities.

From the point cloud on up

But maps cannot live on ground truth and mobile data alone. Apple is also gathering new high-resolution satellite data to combine with its ground truth data for a solid base map. It’s then layering satellite imagery on top of that to better determine foliage, pathways, sports facilities and building shapes.

After the downstream data has been cleaned of license plates and faces, it gets run through a bunch of computer vision programming to pull out addresses, street signs and other points of interest. These are cross-referenced with publicly available data like addresses held by the city and new construction of neighborhoods or roadways that comes from city planning departments.

But one of the special-sauce bits that Apple is adding to the mix of mapping tools is a full-on point cloud that maps the world around the mapping van in 3D. This gives Apple all kinds of opportunities to better understand which items are street signs (retro-reflective rectangular object about 15 feet off the ground? Probably a street sign) or stop signs or speed limit signs.
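
That parenthetical heuristic translates naturally into a simple rule over clustered LiDAR returns. The thresholds below are illustrative guesses of mine, not Apple’s, but they show how point-cloud geometry plus reflectivity can narrow down sign candidates before any heavier computer vision runs:

```python
# Toy street-sign rule over a LiDAR cluster: strongly retro-reflective,
# roughly flat and panel-sized, and mounted a few metres off the ground.
import numpy as np

def looks_like_street_sign(points_xyz: np.ndarray,
                           intensity: np.ndarray) -> bool:
    # points_xyz: (N, 3) metres above ground; intensity: (N,) normalized 0-1.
    height = points_xyz[:, 2].mean()                  # mounting height
    extent = points_xyz.max(axis=0) - points_xyz.min(axis=0)
    thin_axis = np.sort(extent)[0]                    # signs are nearly flat
    return (intensity.mean() > 0.8                    # retro-reflective
            and thin_axis < 0.1                       # ~flat panel
            and 0.3 < np.median(extent) < 1.5         # panel-sized faces
            and 2.0 < height < 6.0)                   # mounted overhead
```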

It seems like it could also enable positioning of navigation arrows in 3D space for AR navigation, but Apple declined to comment on ‘any future plans’ for such things.

Apple also uses semantic segmentation and Deep Lambertian Networks to analyze the point cloud coupled with the image data captured by the car and from high-resolution satellites in sync. This allows 3D identification of objects, signs, lanes of traffic and buildings and separation into categories that can be highlighted for easy discovery.

The coupling of high-resolution image data from car and satellite, plus a 3D point cloud, means Apple can now produce full orthogonal reconstructions of city streets with textures in place. This is massively higher resolution and easier to see, visually. And it’s synchronized with the ‘panoramic’ images from the car, the satellite view and the raw data. These techniques are used in self-driving applications because they provide a really holistic view of what’s going on around the car. But the ortho view can do even more for human viewers of the data by allowing them to ‘see’ through brush or tree cover that would normally obscure roads, buildings and addresses.

This is hugely important when it comes to the next step in Apple’s battle for supremely accurate and useful Maps: human editors.

Apple has had a team of tool builders working specifically on a toolkit that can be used by human editors to vet and parse data, street by street. The editor’s suite includes tools that allow human editors to assign specific geometries to flyover buildings (think Salesforce tower’s unique ridged dome) that allow them to be instantly recognizable. It lets editors look at real images of street signs shot by the car right next to 3D reconstructions of the scene and computer vision detection of the same signs, instantly recognizing them as accurate or not.

Another tool corrects addresses, letting an editor quickly move an address to the center of a building, determine whether it’s misplaced and shift it around. It also allows for access points to be set, making Apple Maps smarter about the ‘last 50 feet’ of your journey. You’ve made it to the building, but what street is the entrance actually on? And how do you get into the driveway? With a couple of clicks, an editor can make that permanently visible.

“When we take you to a business and that business exists, we think the precision of where we’re taking you to, from being in the right building,” says Cue. “When you look at places like San Francisco or big cities from that standpoint, you have addresses where the address name is a certain street, but really, the entrance in the building is on another street. They’ve done that because they want the better street name. Those are the kinds of things that our new Maps really is going to shine on. We’re going to make sure that we’re taking you to exactly the right place, not a place that might be really close by.”

Water, swimming pools (new to Maps entirely), sporting areas and vegetation are now more prominent and fleshed out thanks to new computer vision and satellite imagery applications. So Apple had to build editing tools for those as well.

Many hundreds of editors will be using these tools, in addition to the thousands of employees Apple already has working on maps, but the tools had to be built first, now that Apple is no longer relying on third parties to vet and correct issues.

And the team also had to build computer vision and machine learning tools that allow it to determine whether there are issues to be found at all.

Anonymous probe data from iPhones, visualized, looks like thousands of dots, ebbing and flowing across a web of streets and walkways, like a luminescent web of color. At first, chaos. Then, patterns emerge. A street opens for business, and nearby vessels pump orange blood into the new artery. A flag is triggered and an editor looks to see if a new road needs a name assigned.

A new intersection is added to the web and an editor is flagged to make sure that the left turn lanes connect correctly across the overlapping layers of directional traffic. This has the added benefit of massively improved lane guidance in the new Apple Maps.

Apple is counting on this combination of human and AI flagging to allow editors to first craft base maps and then also maintain them as the ever changing biomass wreaks havoc on roadways, addresses and the occasional park.

Here there be Helvetica

Apple’s new Maps, like many other digital maps, display vastly differently depending on scale. If you’re zoomed out, you get less detail. If you zoom in, you get more. But Apple has a team of cartographers on staff that work on more cultural, regional and artistic levels to ensure that its Maps are readable, recognizable and useful.

These teams have goals that are at once concrete and a bit out there — in the best traditions of Apple pursuits that intersect the technical with the artistic.

The maps need to be usable, but they also need to fulfill cognitive goals on cultural levels that go beyond what any given user might know they need. For instance, in the US, it is very common to have maps that have a relatively low level of detail even at a medium zoom. In Japan, however, the maps are absolutely packed with details at the same zoom, because that increased information density is what is expected by users.

This is the department of details. They’ve reconstructed replicas of hundreds of actual road signs to make sure that the shield on your navigation screen matches the one you’re seeing on the highway road sign. When it comes to public transport, Apple licensed all of the typefaces that you see on your favorite subway systems, like Helvetica for NYC. And the line numbers are in the exact same order that you’re going to see them on the platform signs.

It’s all about reducing the cognitive load that it takes to translate the physical world you have to navigate through into the digital world represented by Maps.

Bottom line

The new version of Apple Maps will be in preview next week with just the Bay Area of California going live. It will be stitched seamlessly into the ‘current’ version of Maps, but the difference in quality level should be immediately visible based on what I’ve seen so far.

Better road networks, more pedestrian information, sports areas like baseball diamonds and basketball courts, more land cover including grass and trees represented on the map as well as buildings, building shapes and sizes that are more accurate. A map that feels more like the real world you’re actually traveling through.

Search is also being revamped to make sure that you get more relevant results (on the correct continents) than ever before. Navigation, especially pedestrian guidance, also gets a big boost. Parking areas and building details to get you the last few feet to your destination are included as well.

What you won’t see, for now, is a full visual redesign.

“You’re not going to see huge design changes on the maps,” says Cue. “We don’t want to combine those two things at the same time because it would cause a lot of confusion.”

Apple Maps is getting the long awaited attention it really deserves. By taking ownership of the project fully, Apple is committing itself to actually creating the map that users expected of it from the beginning. It’s been a lingering shadow on iPhones, especially, where alternatives like Google Maps have offered more robust feature sets that are so easy to compare against the native app but impossible to access at the deep system level.

The argument has been made ad nauseam, but it’s worth saying again that if Apple thinks that mapping is important enough to own, it should own it. And that’s what it’s trying to do now.

“We don’t think there’s anybody doing this level of work that we’re doing,” adds Cue. “We haven’t announced this. We haven’t told anybody about this. It’s one of those things that we’ve been able to keep pretty much a secret. Nobody really knows about it. We’re excited to get it out there. Over the next year, we’ll be rolling it out, section by section in the US.”

Source: Mobile – TechCrunch

‘SmartLens’ app created by a high schooler is a step towards all-purpose visual search

A couple of years ago I was eagerly awaiting an app that would identify anything you pointed it at. Turns out the problem was much harder than anyone expected — but that didn’t stop high school senior Michael Royzen from trying. His app, SmartLens, attempts to solve the problem of seeing something and wanting to identify and learn more about it — with mixed success, to be sure, but it’s something I don’t mind having in my pocket.

Royzen reached out to me a while back and I was curious — as well as skeptical — about the idea that where the likes of Google and Apple have so far failed (or at least failed to release anything good), a high schooler working in his spare time would succeed. I met him at a coffee shop to see the app in action and was pleasantly surprised, but a little baffled.

The idea is simple, of course: You point your phone’s camera at something and the app attempts to identify it using an enormous but highly optimized classification agent trained on tens of millions of images. It connects to Wikipedia and Amazon to let you immediately learn more about what you’ve ID’ed, or buy it.

It recognizes more than 17,000 objects — things like different species of fruit and flower, landmarks, tools and so on. The app had little trouble telling an apple from a (weird-looking) mango, a banana from a plantain and even identified the pistachios I’d ordered as a snack. Later, in my own testing, I found it quite useful for identifying the plants springing up in my neighborhood: periwinkles, anemones, wood sorrel, it got them all, though not without the occasional hesitation.

The kicker is that this all happens offline — it’s not sending an image over the cell network or Wi-Fi to a server somewhere to be analyzed. It all happens on-device and within a second or two. Royzen scraped his own image database from various sources and trained up multiple convolutional neural networks using days of AWS EC2 compute time.
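
For a sense of what this kind of on-device inference looks like, here’s a generic stand-in using an off-the-shelf ImageNet model in PyTorch; it is emphatically not Royzen’s custom 17,000-class network, just the same classify-and-rank shape:

```python
# Generic on-device classification: a pretrained MobileNet running locally,
# returning top-k guesses with confidences, no network round-trip required.
import torch
from torchvision import models
from PIL import Image

weights = models.MobileNet_V2_Weights.DEFAULT
model = models.mobilenet_v2(weights=weights).eval()
preprocess = weights.transforms()

def identify(path: str, k: int = 5):
    batch = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = model(batch).softmax(dim=1)[0]
    top = torch.topk(probs, k)
    labels = weights.meta["categories"]
    return [(labels[int(i)], float(p)) for p, i in zip(top.values, top.indices)]
```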

Beyond that number, it recognizes far more products by reading the text on the item and querying the Amazon database. It ID’ed books, a bottle of pills and other packaged goods almost instantly, providing links to buy them. Wikipedia links pop up if you’re online as well, though a considerable number of basic descriptions are kept on the device.

On that note, it must be said that SmartLens is a more than 500-megabyte download. Royzen’s model is huge, since it must keep all the recognition data and offline content right there on the phone. This is a much different approach to the problem than Amazon’s own product recognition engine on the Fire Phone (RIP) or Google Goggles (RIP) or the scan feature in Google Photos (which was pretty useless for things SmartLens reliably did in half a second).

“With the several past generations of smartphones containing desktop-class processors and the advent of native machine learning APIs that can harness them (and GPUs), the hardware exists for a blazing-fast visual search engine,” Royzen wrote in an email. But none of the large companies you would expect to create one has done so. Why?

The app’s size and its toll on the processor are one thing, for sure, but edge and on-device processing is where all this stuff will go eventually — Royzen is just getting an early start. The likely truth is twofold: it’s hard to make money and the quality of the search isn’t high enough.

It must be said at this point that SmartLens, while smart, is far from infallible. Its suggestions for what an item might be are almost always hilariously wrong for a moment before arriving at, as it often does, the correct answer.

It identified one book I had as “White Whale,” and no, it wasn’t Moby Dick. An actual whale paperweight it decided was a trowel. Many items briefly flashed guesses of “Human being” or “Product design” before getting to a guess with higher confidence. One flowering bush it identified as four or five different plants — including, of course, Human Being. My monitor was a “computer display,” “liquid crystal display,” “computer monitor,” “computer,” “computer screen,” “display device” and more. Game controllers were all “control.” A spatula was a wooden spoon (close enough), with the inexplicable subheading “booby prize.” What?!

This level of performance (and weirdness in general, however entertaining) wouldn’t be tolerated in a standalone product released by Google or Apple. Google Lens was slow and bad, but it’s just an optional feature in a working, useful app. If it put out a visual search app that identified flowers as people, the company would never hear the end of it.

And the other side of it is the monetization aspect. Although it’s theoretically convenient to be able to snap a picture of a book your friend has and instantly order it, it isn’t so much more convenient than taking a picture and searching for it later, or just typing the first few words into Google or Amazon, which will do the rest for you.

Meanwhile for the user there is still confusion. What can it identify? What can’t it identify? What do I need it to identify? It’s meant to ID many things, from dog breeds to storefronts, but it likely won’t identify, for example, a cool Bluetooth speaker or mechanical watch your friend has, or the creator of a painting at a local gallery (some paintings are recognized, though). As I used it I felt like I was only ever going to use it for a handful of tasks in which it had proven itself, like identifying flowers, but would be hesitant to try it on many other things when I might just be frustrated by some unknown incapability or unreliability.

And yet the idea that in the very near future there will not be something just like SmartLens is ridiculous to me. It seems so clearly something we will all take for granted in a few years. And it’ll be on-device, no need to upload your image to a server somewhere to be analyzed on your behalf.

Royzen’s app has its issues, but it works very well in many circumstances and has obvious utility. The idea that you could point your phone at the restaurant you’re across the street from and see Yelp reviews two seconds later — no need to open up a map or type in an address or name — is an extremely natural expansion of existing search paradigms.

“Visual search is still a niche, but my goal is to give people the taste of a future where one app can deliver useful information about anything around them — today,” wrote Royzen. “Still, it’s inevitable that big companies will launch their competing offerings eventually. My strategy is to beat them to market as the first universal visual search app and amass as many users as possible so I can stay ahead (or be acquired).”

My biggest gripe of all, however, is not the functionality of the app, but in how Royzen has decided to monetize it. Users can download it for free but upon opening it are immediately prompted to sign up for a $2/month subscription (though the first month is free) — before they can even see whether the app works or not. If I didn’t already know what the app did and didn’t do, I would delete it without a second thought upon seeing that dialog, and even knowing what I do, I’m not likely to pay in perpetuity for it.

A one-time fee to activate the app would be more than reasonable, and there’s always the option of referral codes for those Amazon purchases. But demanding rent from users who haven’t even tested the product is a non-starter. I’ve told Royzen my concerns and I hope he reconsiders.

It would also be nice to scan images you’ve already taken, or save images associated with searches. UI improvements like a confidence indicator or some kind of feedback to let you know it’s still working on identification would be nice as well — features that are at least theoretically on the way.

In the end I’m impressed with Royzen’s efforts — when I take a step back it’s amazing to me that it’s possible for a single person, let alone one in high school, to put together an app capable of completing such sophisticated computer vision tasks. It’s the kind of (over-) ambitious app-building one expects to come out of a big, playful company like the Google of a decade ago. This may be more of a curiosity than a tool right now, but so were the first text-based search engines.

SmartLens is in the App Store now — give it a shot.

Source: Mobile – TechCrunch

Spectral Edge’s image enhancing tech pulls in $5.3M

Cambridge, U.K.-based startup Spectral Edge has closed a $5.3M Series A funding round from existing investors Parkwalk Advisors and IQ Capital.

The team, which in 2014 spun the business out of academic research at the University of East Anglia, has developed a mathematical technique for improving photographic imagery in real-time, also using machine learning technology. 

As we’ve reported previously, their technology — which can be embedded in software or in silicon — is designed to enhance pictures and videos on mass-market devices. Mooted use cases include enhancing low-light smartphone images, improving security camera footage or even processing footage from drone cameras.

This month Spectral Edge announced its first customer, IT services provider NTT Data, which said it would be incorporating the technology into its broadcast infrastructure offering — to offer its customers an “HDR-like experience”, via improved image quality, without the need for them to upgrade their hardware.

“We are in advanced trials with a number of global tech companies — household names — and hope to be able to announce more deals later this year,” CEO Rhodri Thomas tells us, adding that he expects 2-3 more deals in the broadcast space to follow “soon”, and enhance viewing experiences “in a variety of ways”.

On the smartphone front, Thomas says the company is waiting for consumer hardware to catch up — noting that RGB-IR sensors “haven’t yet begun to deploy on smartphones on a great scale”.

Once the smartphone hardware is there, he reckons its technology will be able to help with various issues such as white balancing and bokeh processing.

“Right now there is no real solution for white balancing across the whole image [on smartphones] — so you’ll get areas of the image with excessive blues or yellows, perhaps, because the balance is out — but our tech allows this to be solved elegantly and with great results,” he suggests. “We also can support bokeh processing by eliminating artifacts that are common in these images.”
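
For context on what white balancing involves at its simplest, here’s the classic gray-world baseline (a textbook method, not Spectral Edge’s technique); the per-region casts Thomas describes are exactly what a single global correction like this can’t fix:

```python
# Gray-world white balance: scale each channel so the image's average colour
# comes out neutral. One global gain per channel, applied everywhere.
import numpy as np

def gray_world(rgb: np.ndarray) -> np.ndarray:
    img = rgb.astype(np.float32)
    gains = img.mean() / img.reshape(-1, 3).mean(axis=0)   # one gain per channel
    return np.clip(img * gains, 0, 255).astype(np.uint8)
```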

The new funding is going towards ramping up Spectral Edge’s efforts to commercialize its tech, including by growing the R&D team to 12 — with hires planned for specialists in image processing, machine learning and embedded software development.

The startup will also focus on developing real-world apps for smartphones, webcams and security applications alongside its existing products for the TV & display industries.

“The company is already very IP strong, with 10 patent families in the world (some granted, some filed and a couple about to be filed),” says Thomas. “The focus now is productizing and commercializing.”

“In a year, I expect our technology to be launched or launching on major flagship [smartphone] devices,” he adds. “We also believe that by then our CVD (color vision deficiency) product, Eyeteq, is helping millions of people suffering from color blindness to enjoy significantly better video experiences.”

Source: Mobile – TechCrunch