Google ups the Pixel 3’s camera game with Top Shot, group selfies and more

With the Pixel 2, Google introduced one of the best smartphone cameras ever made. It’s fitting, then, that the Pixel 3 builds on an already excellent camera, adding bells and whistles sure to please mobile photographers rather than messing with a good thing. On paper, the Pixel 3’s camera doesn’t look much different from its predecessor’s. But, because we’re talking about Google, software is where the device will really shine. We’ll go over everything that’s new.

Starting with specs, both the Pixel 3 and the Pixel 3 XL sport a 12.2MP rear camera with an f/1.8 aperture, plus dual 8MP front cameras: one with a normal field of view and one for ultra-wide-angle shots. The rear camera captures 1080p video at 30, 60 or 120 fps, while the front-facing camera captures 1080p video at 30 fps. Google did not add a second rear-facing camera, deeming it “unnecessary” given what the company can do with machine learning alone. Knowing how good the Pixel 2’s camera is, we can’t really argue here.

While it’s not immediately evident from the spec sheet, Google also updated the Pixel’s visual co-processing chip, known as Visual Core, for the Pixel 3 and Pixel 3 XL. The updated chip is what powers some of the more processing-heavy new photo features.

Top Shot

With the Pixel 3, Google introduces Top Shot. The feature compares a burst of images taken in rapid succession and automatically picks the best one using machine learning. The idea is that the camera can screen out any photos in which a subject might have their eyes closed or be unintentionally making a weird face, choosing “smiles instead of sneezes” and offering the user the best of the batch. Stuff like this is usually gimmicky, but given Google’s image processing prowess, it’s honestly probably going to be pretty good. Or as TechCrunch’s Matt Burns puts it, “Top Shots is Live Photo but useful,” which seems like a fair assessment.
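
Google hasn’t published the model behind Top Shot, but the skeleton of best-of-burst selection is easy to sketch. Here is a minimal, hypothetical Python version: each frame gets a quality score (just a Laplacian-variance sharpness proxy here, with a stub where a learned smiles-and-open-eyes scorer would plug in), and the highest-scoring frame wins.

```python
import numpy as np

def sharpness(frame: np.ndarray) -> float:
    """Variance of a Laplacian approximation, a common focus proxy."""
    lap = (np.roll(frame, 1, axis=0) + np.roll(frame, -1, axis=0)
           + np.roll(frame, 1, axis=1) + np.roll(frame, -1, axis=1)
           - 4 * frame)
    return float(lap.var())

def pick_top_shot(burst, face_score=lambda f: 0.0):
    """Return the index of the "best" frame in a burst.

    `burst` is a list of grayscale float arrays. `face_score` is a
    placeholder for the learned model (open eyes, smiles, etc.); it
    defaults to zero, so only sharpness counts in this sketch.
    """
    scores = [sharpness(f) + face_score(f) for f in burst]
    return int(np.argmax(scores))
```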

Super Res Zoom

Google’s next Pixel 3 camera trick is called Super Res Zoom, and it’s exactly what it sounds like. Because smartphone cameras lack optical zoom, Super Res Zoom compensates with burst shooting: the camera takes a rapid series of photos, leverages the fact that each frame is offset very slightly by minute hand movements, and merges them into a single higher-resolution photo, recovering detail at a distance “without grain,” or so Google claims. Digital zoom is notoriously bad, so we’re looking forward to putting this new method to the test. After all, if the technique worked for imaging the surface of Mars, it’s bound to work for concert photos.
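
Google’s real pipeline is far more sophisticated (robust alignment, merging on the raw sensor mosaic, and so on), but the core shift-and-add idea behind multi-frame super-resolution fits in a few lines. The Python below is purely illustrative: it assumes the per-frame sub-pixel offsets have already been estimated by an alignment step that isn’t shown.

```python
import numpy as np

def super_res_zoom(frames, offsets, scale=2):
    """Naive shift-and-add multi-frame super-resolution.

    frames  : list of HxW grayscale float arrays from a handheld burst
    offsets : per-frame (dy, dx) sub-pixel shifts in input pixels,
              assumed pre-estimated (alignment not shown)
    scale   : upsampling factor of the output grid
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    hits = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, offsets):
        # Each low-res pixel lands at a slightly different spot on the
        # fine grid, which is what fills in the missing detail.
        ys = np.clip(np.round((np.arange(h) + dy) * scale).astype(int),
                     0, h * scale - 1)
        xs = np.clip(np.round((np.arange(w) + dx) * scale).astype(int),
                     0, w * scale - 1)
        np.add.at(acc, np.ix_(ys, xs), frame)
        np.add.at(hits, np.ix_(ys, xs), 1.0)
    return acc / np.maximum(hits, 1.0)  # average where samples overlap
```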

Night Sight

A machine learning camera hack designed to inspire people to retire flash once and for all (please), Night Sight can salvage a photo taken in “extreme low light.” The idea is that machine learning can make educated guesses about the content of the frame, filling in detail and correcting color so the result isn’t just one big noisy mess. Whether it works remains to be seen, but given the Pixel 2’s already stunning low-light performance, we’d bet it’s probably pretty cool.
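
The non-ML half of the trick is plain frame stacking: averaging N aligned noisy frames cuts random sensor noise by roughly the square root of N, which is what makes a brightened result usable. A toy Python version, minus Google’s learned color and detail reconstruction:

```python
import numpy as np

def night_stack(aligned_burst, gain=4.0):
    """Average an aligned low-light burst, then boost exposure.

    aligned_burst : list of pre-aligned float frames in [0, 1]
    gain          : simple exposure multiplier after averaging

    Averaging N frames reduces random noise by ~sqrt(N); the learned
    color-correction and detail steps are omitted from this sketch.
    """
    mean = np.mean(np.stack(aligned_burst), axis=0)
    return np.clip(mean * gain, 0.0, 1.0)
```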

Group Selfie Cam

Google knows what the people really want. One of the biggest hardware changes in the Pixel 3 line is the introduction of dual front-facing cameras that enable super-wide front-facing shots capable of capturing group photos. The wide-angle front camera offers a 97-degree field of view, compared to the already fairly wide 75 degrees of the standard one. Yes, Google is trying to make “Groupies” a thing (that’s a selfie where you all cram in and hand the phone to the friend with the longest arms). Honestly, it might succeed.
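
For a rough sense of how much extra scene that buys, the width of the plane you capture scales with the tangent of half the field of view, so the jump from 75 to 97 degrees is bigger than the raw numbers suggest:

```python
import math

wide_fov, normal_fov = 97.0, 75.0  # degrees, per Google's specs

# Captured width at a fixed distance d is 2 * d * tan(fov / 2).
ratio = (math.tan(math.radians(wide_fov / 2))
         / math.tan(math.radians(normal_fov / 2)))
print(f"{ratio:.2f}x wider")  # ~1.47x more scene at the same distance
```

In other words, at the same arm’s length the wide camera fits roughly 47 percent more frame width, which is the difference between cropping out a friend and keeping them.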

Google has a few more handy tricks up its sleeve. In Photobooth mode, the Pixel 3 can snap the selfie shutter when you smile, no hands needed. And with a new motion-tracking autofocus option, you can tap once to track the subject of a photo without needing to tap again to refocus, a feature sure to be handy for the kind of people who fill up their storage with hundreds of out-of-focus pet shots.
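
Google presumably uses a learned tracker for this; a crude stand-in built on classic template matching (OpenCV assumed, and the function below is hypothetical) at least shows the shape of tap-once-to-track:

```python
import cv2

def track_tapped_subject(frames, tap_xy, box=64):
    """Follow a tapped subject across grayscale frames via template
    matching; each new position would feed the autofocus region.
    """
    x, y = tap_xy
    half = box // 2
    template = frames[0][y - half:y + half, x - half:x + half]
    centers = []
    for frame in frames[1:]:
        result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, max_loc = cv2.minMaxLoc(result)  # max_loc is (x, y)
        cx, cy = max_loc[0] + half, max_loc[1] + half
        centers.append((cx, cy))
        # Re-crop around the new position; simple, but prone to drift,
        # which is one reason a learned tracker does better.
        template = frame[cy - half:cy + half, cx - half:cx + half]
    return centers
```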

Google Lens is also back, of course, though honestly it tends to sit forgotten in the camera settings. And Google’s AR stickers are now called Playground and respond to actions and facial expressions. Google is also launching a Childish Gambino AR experience on Playground (probably as good as this whole AR sticker thing gets, tbh), which will debut with the Pixel 3 and come to the Pixel 1 and Pixel 2 a bit later on.

With the Pixel 3, Google will also improve on the Pixel 2’s already excellent Portrait Mode, letting you adjust the depth of field and the subject in focus after the fact. And, of course, the company will still offer free, unlimited, full-resolution photo storage in the wonderfully useful Google Photos, which remains well ahead of the iPhone’s photo processing and storage options.
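
Editable depth of field boils down to having a per-pixel depth map and blurring each pixel in proportion to its distance from a chosen focal plane. A hypothetical toy version (one blur level blended per pixel, rather than the true variable-radius blur a real implementation would use):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def refocus(image, depth, focal_depth, max_sigma=8.0):
    """Synthetic refocus for a grayscale image with a matching depth
    map: sharp at the chosen focal plane, blurrier away from it.
    """
    blurred = gaussian_filter(image, sigma=max_sigma)
    # Blend weight per pixel: 0 at the focal plane, 1 far from it.
    spread = max(np.ptp(depth), 1e-6)
    w = np.clip(np.abs(depth - focal_depth) / spread, 0.0, 1.0)
    return (1.0 - w) * image + w * blurred
```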

Many of the features Google announced today for the Pixel 3 rely on its new Visual Core chip and dual front cameras, but older Pixels will also be able to use Night Sight. Google clarified to TechCrunch that Photobooth, Top Shot, Super Res Zoom, Group Selfie Cam and motion-tracking autofocus are exclusive to the Pixel 3 and Pixel 3 XL due to their dependence on hardware updates.

With its Pixel line, now three generations deep, Google has leaned heavily on software tricks and machine learning to make a smartphone camera far better than it has any right to be. Given Google’s image processing chops, that’s a great thing, and its experimental software workarounds generally work very well. We’re looking forward to taking its latest set of photography tricks for a spin, so keep an eye out for our upcoming Pixel 3 hands-on posts and reviews.

Source: Mobile – TechCrunch

Spectral Edge’s image enhancing tech pulls in $5.3M

Cambridge, U.K.-based startup Spectral Edge has closed a $5.3M Series A funding round from existing investors Parkwalk Advisors and IQ Capital.

The team, which spun the business out of academic research at the University of East Anglia in 2014, has developed a mathematical technique, combined with machine learning, for improving photographic imagery in real time.

As we’ve reported previously, the technology, which can be embedded in software or in silicon, is designed to enhance pictures and videos on mass-market devices. Mooted use cases include enhancing low-light smartphone images, improving security camera footage and even sharpening drone camera output.

This month Spectral Edge announced its first customer, IT services provider NTT Data, which said it would incorporate the technology into its broadcast infrastructure offering, giving customers an “HDR-like experience” via improved image quality without the need to upgrade their hardware.

“We are in advanced trials with a number of global tech companies — household names — and hope to be able to announce more deals later this year,” CEO Rhodri Thomas tells us, adding that he expects 2-3 more deals in the broadcast space to follow “soon”, and enhance viewing experiences “in a variety of ways”.

On the smartphone front, Thomas says the company is waiting for consumer hardware to catch up — noting that RGB-IR sensors “haven’t yet begun to deploy on smartphones on a great scale”.

Once the smartphone hardware is there, he reckons the company’s technology will be able to help with issues such as white balancing and bokeh processing.

“Right now there is no real solution for white balancing across the whole image [on smartphones] — so you’ll get areas of the image with excessive blues or yellows, perhaps, because the balance is out — but our tech allows this to be solved elegantly and with great results,” he suggests. “We also can support bokeh processing by eliminating artifacts that are common in these images.”
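
To make the white-balance problem concrete, the textbook gray-world algorithm below applies a single gain per channel to the whole image; by construction it can’t fix mixed lighting, where a daylight-lit window and a tungsten-lit face need different corrections. This is only an illustrative sketch; Spectral Edge’s actual approach relies on extra spectral information, such as from RGB-IR sensors.

```python
import numpy as np

def gray_world(image):
    """Global white balance: scale each channel so the image mean is
    neutral. `image` is an HxWx3 float array in [0, 1]. One gain per
    channel for the whole frame is exactly the limitation described
    above: it cannot correct different regions differently.
    """
    means = image.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / means
    return np.clip(image * gains, 0.0, 1.0)
```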

The new funding is going towards ramping up Spectral Edge’s efforts to commercialize its tech, including by growing the R&D team to 12 — with hires planned for specialists in image processing, machine learning and embedded software development.

The startup will also focus on developing real-world applications for smartphones, webcams and security cameras, alongside its existing products for the TV and display industries.

“The company is already very IP strong, with 10 patent families in the world (some granted, some filed and a couple about to be filed),” says Thomas. “The focus now is productizing and commercializing.”

“In a year, I expect our technology to be launched or launching on major flagship [smartphone] devices,” he adds. “We also believe that by then our CVD (color vision deficiency) product, Eyeteq, is helping millions of people suffering from color blindness to enjoy significantly better video experiences.”

Source: Mobile – TechCrunch