Google’s latest hardware innovation: Price

With its latest consumer hardware products, Google is undercutting Apple, Samsung and Amazon on price. The search giant just unveiled its latest flagship smartphone, tablet and smart home device, all available at prices well below their direct competitors. Where Apple and Samsung are pushing the prices of their latest products even higher, Google is seemingly happy to keep prices low, and this is creating a distinct advantage for the company’s products.
Google, like Amazon (and, increasingly, Apple), is a services company that happens to sell hardware. It needs to acquire users through multiple verticals, including hardware. Somewhere, deep in the Googleplex, a team of number-crunchers decided it made more sense to price its hardware dramatically lower than competitors’. If Google is taking a loss on the hardware, it is likely making it back through services.
Amazon does this with Kindle devices. Microsoft and Sony do it with game consoles. It’s a proven strategy for increasing market share, in which the revenue generated on the back end recovers the revenue lost by selling hardware at slim or negative margins.
Look at the Pixel 3. The base 64GB model is available for $799, while the base 64GB iPhone XS is $999. Want a bigger screen? The 64GB Pixel 3 XL is $899, and the 64GB iPhone XS Max is $1,099. On specs, both phones offer OLED displays and amazing cameras. There are likely pros and cons regarding the speed of the SoC, the amount of RAM and wireless capabilities. Will consumers care about those differences when the screens and cameras are so similar? Probably not.
Google also announced the Home Hub today. Like the Echo Show, it’s designed to be the central part of a smart home. It puts Google Assistant on a fixed screen where users can ask it questions and control a smart home. It’s $149. That’s $80 less than the Echo Show, though the Google version lacks video conferencing and a dedicated smart home hub — the Google Home Hub requires extra hardware for some smart home objects. Still, even with fewer features, the Home Hub is compelling because of its drastically lower price. For just a few dollars more than an Echo Show, a buyer could get a Home Hub and two Home Minis.
The Pixel Slate is Google’s answer to the iPad Pro. From everything we’ve seen, it appears to lack a lot of the processing power found in Apple’s top tablet, and it doesn’t seem as refined or as capable at specific tasks. But for viewing media, creating content and playing games, it feels just fine. It even has a Pixelbook Pen and a great keyboard, which shows Google is positioning it against the iPad Pro. And the 12.3-inch Pixel Slate is available for $599, where the 12.9-inch iPad Pro is $799.
The upfront price is just part of the equation. Factor in resale value and a different conclusion can be reached. Apple products consistently resell for more money than Google products. On Gazelle.com, a company that buys used smartphones, a used iPhone X is worth $425, whereas a used Pixel 2 is $195. A used iPhone 8, a phone that sold for a price closer to the Pixel 2’s, is worth $240.
In the end, Google likely doesn’t expect to make money off the hardware it sells. It needs users to buy into its services. The best way to do that is to make the ecosystem competitive, though perhaps without investing the capital to make it the best. It needs to be just good enough, and that’s how I would describe these devices: good enough to be competitive on a spec-for-spec basis while available for much less.

Google Pixel 3 and Pixel 3 XL up close and hands-on

The Pixel 3’s best new features

Source: Gadgets – techcrunch

Happy 10th anniversary, Android

It’s been 10 years since Google took the wraps off the G1, the first Android phone. Since that time the OS has grown from a buggy, nerdy iPhone alternative to arguably the most popular (or at least populous) computing platform in the world. But it sure as heck didn’t get there without hitting a few bumps along the road.

Join us for a brief retrospective on the last decade of Android devices: the good, the bad, and the Nexus Q.

HTC G1 (2008)

This is the one that started it all, and I have a soft spot in my heart for the old thing. Also known as the HTC Dream — this was back when we had an HTC, you see — the G1 was about as inauspicious a debut as you can imagine. Its full keyboard, trackball, slightly janky slide-up screen (crooked even in official photos), and considerable girth marked it from the outset as a phone only a real geek could love. Compared to the iPhone, it was like a poorly dressed whale.

But in time its half-baked software matured, and its idiosyncrasies came to be appreciated as the smart touches they were. To this day I occasionally long for a trackball or full keyboard, and while the G1 wasn’t pretty, it was tough as hell.

Moto Droid (2009)

Of course, most people didn’t give Android a second look until Moto came out with the Droid, a slicker, thinner device from the maker of the famed RAZR. In retrospect, the Droid wasn’t that much better or different than the G1, but it was thinner, had a better screen, and had the benefit of an enormous marketing push from Motorola and Verizon. (Disclosure: Verizon owns Oath, which owns TechCrunch, but this doesn’t affect our coverage in any way.)

For many, the Droid and its immediate descendants were the first Android phones they had — something new and interesting that blew the likes of Palm out of the water, but also happened to be a lot cheaper than an iPhone.

HTC/Google Nexus One (2010)

This was the fruit of the continued collaboration between Google and HTC, and the first phone Google branded and sold itself. The Nexus One was meant to be the slick, high-quality device that would finally compete toe-to-toe with the iPhone. It ditched the keyboard, got a cool new OLED screen, and had a lovely smooth design. Unfortunately it ran into two problems.

First, the Android ecosystem was beginning to get crowded. People had lots of choices and could pick up phones for cheap that would do the basics. Why lay the cash out for a fancy new one? And second, Apple would shortly release the iPhone 4, which — and I was an Android fanboy at the time — objectively blew the Nexus One and everything else out of the water. Apple had brought a gun to a knife fight.

HTC Evo 4G (2010)

Another HTC? Well, this was prime time for the now-defunct company. They were taking risks no one else would, and the Evo 4G was no exception. It was, for the time, huge: the iPhone had a 3.5-inch screen, and most Android devices weren’t much bigger, if they weren’t smaller.

The Evo 4G somehow survived our criticism (our alarm now seems extremely quaint, given the size of the average phone today) and was a reasonably popular phone, but ultimately it is notable not for breaking sales records but for breaking the seal on the idea that a phone could be big and still make sense. (Honorable mention goes to the Droid X.)

Samsung Galaxy S (2010)

Samsung’s big debut made a hell of a splash, with custom versions of the phone appearing in the stores of practically every carrier, each with their own name and design: the AT&T Captivate, T-Mobile Vibrant, Verizon Fascinate, and Sprint Epic 4G. As if the Android lineup wasn’t confusing enough already at the time!

Though the S was a solid phone, it wasn’t without its flaws, and the iPhone 4 made for very tough competition. But strong sales reinforced Samsung’s commitment to the platform, and the Galaxy series is still going strong today.

Motorola Xoom (2011)

This was an era in which Android devices were responding to Apple, and not vice versa as we find today. So it’s no surprise that hot on the heels of the original iPad we found Google pushing a tablet-focused version of Android with its partner Motorola, which volunteered to be the guinea pig with its short-lived Xoom tablet.

Although there are still Android tablets on sale today, the Xoom represented a dead end in development — an attempt to carve a piece out of a market Apple had essentially invented and soon dominated. Android tablets from Motorola, HTC, Samsung and others were rarely anything more than adequate, though they sold well enough for a while. This illustrated the impossibility of “leading from behind” and prompted device makers to specialize rather than participate in a commodity hardware melee.

Amazon Kindle Fire (2011)

And who better to illustrate that than Amazon? Its contribution to the Android world was the Fire series of tablets, which differentiated themselves from the rest by being extremely cheap and directly focused on consuming digital media. Just $200 at launch and far less later, the Fire devices catered to the regular Amazon customer whose kids were pestering them about getting a tablet on which to play Fruit Ninja or Angry Birds, but who didn’t want to shell out for an iPad.

Turns out this was a wise strategy, and of course one Amazon was uniquely positioned to execute, with its huge presence in online retail and the ability to subsidize the price out of competitors’ reach. Fire tablets were never particularly good, but they were good enough, and for the price you paid, that was kind of a miracle.

Xperia Play (2011)

Sony has always had a hard time with Android. Its Xperia line of phones was for years considered competent — I owned a few myself — and arguably industry-leading in the camera department. But no one bought them. And the one people bought the least of, at least relative to the hype it got, has to be the Xperia Play. This thing was supposed to be a mobile gaming platform, and the idea of a slide-out gamepad is great — but the whole thing basically cratered.

What Sony had illustrated was that you couldn’t just piggyback on the popularity and diversity of Android and launch whatever the hell you wanted. Phones didn’t sell themselves, and although the idea of playing PlayStation games on your phone might have sounded cool to a few nerds, it was never going to be enough to make it a million-seller. And increasingly that’s what phones needed to be.

Samsung Galaxy Note (2012)

As a sort of natural climax to the swelling phone trend, Samsung went all out with the first true “phablet,” and despite groans of protest the phone not only sold well but became a staple of the Galaxy series. In fact, it wouldn’t be long before Apple followed suit and produced a Plus-sized phone of its own.

The Note also represented a step towards using a phone for serious productivity, not just everyday smartphone stuff. It wasn’t entirely successful — Android just wasn’t ready to be highly productive — but in retrospect it was forward thinking of Samsung to make a go at it and begin to establish productivity as a core competence of the Galaxy series.

Google Nexus Q (2012)

This abortive effort by Google to spread Android out into a platform was part of a number of ill-considered choices at the time. No one really knew, apparently not at Google or anywhere else in the world, what this thing was supposed to do. I still don’t. As we wrote at the time:

Here’s the problem with the Nexus Q:  it’s a stunningly beautiful piece of hardware that’s being let down by the software that’s supposed to control it.

It was made, or rather nearly made in the USA, though, so it had that going for it.

HTC First — “The Facebook Phone” (2013)

The First got dealt a bad hand. The phone itself was a lovely piece of hardware with an understated design and bold colors that stuck out. But its default launcher, the doomed Facebook Home, was hopelessly bad.

How bad? Announced in April, discontinued in May. I remember visiting an AT&T store during that brief period, and even then the staff had been instructed on how to disable Facebook’s launcher and reveal the perfectly good phone beneath. The good news was that so few of these phones sold new that the entire stock started selling for peanuts on eBay and the like. I bought two and used them for my early experiments in ROMs. No regrets.

HTC One/M8 (2014)

This was the beginning of the end for HTC, but their last few years saw them update their design language to something that actually rivaled Apple. The One and its successors were good phones, though HTC oversold the “Ultrapixel” camera, which turned out to not be that good, let alone iPhone-beating.

As Samsung increasingly dominated, Sony plugged away, and LG and Chinese companies entered the fray, HTC was under assault, and even a solid phone series like the One couldn’t compete. 2014 was a transition period, with old manufacturers dying out and the dominant ones taking over, eventually leading to the market we have today.

Google/LG Nexus 5X and Huawei Nexus 6P (2015)

This was the line that brought Google into the hardware race in earnest. After the bungled Nexus Q launch, Google needed to come out swinging, and they did that by marrying their more pedestrian hardware with some software that truly zinged. Android 5 was a dream to use, Marshmallow had features that we loved … and the phones became objects that we adored.

We called the 6P “the crown jewel of Android devices”. This was when Google took its phones to the next level and never looked back.

Google Pixel (2016)

If the Nexus was, in earnest, the starting gun for Google’s entry into the hardware race, the Pixel line could be its victory lap. It’s an honest-to-god competitor to the Apple phone.

Gone are the days when Google was playing catch-up to Apple on features; instead, Google’s a contender in its own right. The phone’s camera is amazing. The software works relatively seamlessly (bring back guest mode!), and the phone’s size and power are everything anyone could ask for. The sticker price, like that of Apple’s newest iPhones, is still a bit of a shock, but this phone is the teleological endpoint of the Android quest to rival its famous, fruitful contender.

The rise and fall of the Essential phone

In 2017 Andy Rubin, the creator of Android, debuted the first fruits of his hardware startup studio, Playground Global, with the launch of Essential (and its first phone). The company had raised $300 million to bring the phone to market, and — as the first hardware device to come to market from Android’s creator — it was being heralded as the next new thing in hardware.

Here at TechCrunch, the phone received mixed reviews. Some on staff hailed it as the achievement of Essential’s stated vision — to create a “lovemark” for Android smartphones — while others found the device… inessential.

Ultimately, the market seemed to agree. Four months ago plans for a second Essential phone were put on hold, while the company explored a sale and pursued other projects. There’s been little update since.

A Cambrian explosion in hardware

In the ten years since its launch, Android has become the most widely used operating system for hardware. Some version of its software can be found in roughly 2.3 billion devices around the world, and it’s powering a technology revolution in countries like India and China, where mobile devices are the default means of computing and access. As it enters its second decade, there’s no sign that anything is going to slow its growth (or dominance) as the operating system for much of the world.

Let’s see what the next ten years bring.

Source: Mobile – TechCrunch

iOS 12.1 beta hints at new iPad Pro

iOS 12 is still brand new, but Apple is already testing iOS 12.1 with a developer beta version. Steve Troughton-Smith and Guilherme Rambo found references to a brand new iPad that would support Face ID.
First, there are changes to Face ID. You can find references to landscape orientation in the iOS 12.1 beta. Face ID on the iPhone is limited to portrait orientation. Chances are you didn’t even notice this limitation because there’s only one orientation for the lock screen and home screen.
But the iPad is a different story as people tend to use it in landscape. And even when you hold it in landscape, some people will have the home button on the left while others will have the home button on the right.
In other words, in order to bring Face ID to the iPad, it needs to support multiple orientations. This beta indicates that iOS 12.1 could be the version of iOS that ships with the next iPad.
If that wasn’t enough, there’s a new device codename in the setup reference files. This device is called iPad2018Fall, which clearly means that a new iPad is right around the corner.
Analyst Ming-Chi Kuo previously indicated that the iPad Pro could switch from Lightning to USB-C. This would open up a ton of possibilities when it comes to accessories. For instance, you could plug in an external monitor without any dongle and send a video feed to it.
As for iPhone users, in addition to bug fixes, iOS 12.1 brings back Group FaceTime, a feature that was removed at the last minute before the release of iOS 12. If it’s still too buggy, Apple could choose to remove the feature once again. Memojis could support iCloud syncing across your devices, which would be useful for an iPad Pro with Face ID.

Source: Gadgets – techcrunch

Scientists make a touch tablet that rolls and scrolls

Research scientists at Queen’s University’s Human Media Lab have built a prototype touchscreen device that’s neither smartphone nor tablet but kind of both — and more besides. The device, which they’ve christened the MagicScroll, is inspired by ancient (papyrus/paper/parchment) scrolls, so it takes a rolled-up, cylindrical form factor — enabled by a flexible 7.5-inch touchscreen housed in the casing.

This novel form factor, which they made using 3D printing, means the device can be used like an old-school Rolodex (remember those?!) for flipping through on-screen contacts quickly by turning a physical rotary wheel built into the edge of the device. (They’ve actually added one on each end.)

Then, when more information or a deeper dive is required, the user is able to pop the screen out of the casing to expand the visible display real estate. The flexible screen on the prototype has a resolution of 2K. So more mid-tier mobile phone of yore than crisp iPhone Retina display at this nascent stage.

The scientists also reckon the scroll form factor offers a pleasingly ergonomic option for making actual phone calls, too, given that a rolled-up scroll can sit snugly against the face.

Though they admit their prototype is still rather large at this stage — albeit that just adds to the delightfully retro feel of the thing, making it come across like a massive mobile phone of the 1980s. Like the classic Motorola DynaTAC 8000X of 1984.

While still bulky at this R&D stage, the team argues the cylindrical, flexible screen form factor of their prototype offers advantages by being lightweight and easier to hold with one hand than a traditional tablet device, such as an iPad. And when rolled up they point out it can also fit in a pocket. (Albeit, a large one.)

They also imagine it being used as a dictation device or pointing device, as well as a voice phone. And the prototype includes a camera — which allows the device to be controlled using gestures, similar to Nintendo’s ‘Wiimote’ gesture system.

In another fun twist they’ve added robotic actuators to the rotary wheels so the scroll can physically move or spin in place in various scenarios, such as when it receives a notification. Clocky eat your heart out.

“We were inspired by the design of ancient scrolls because their form allows for a more natural, uninterrupted experience of long visual timelines,” said Roel Vertegaal, professor of human-computer interaction and director of the lab, in a statement.

“Another source of inspiration was the old Rolodex filing systems that were used to store and browse contact cards. The MagicScroll’s scroll wheel allows for infinite scroll action for quick browsing through long lists. Unfolding the scroll is a tangible experience that gives a full screen view of the selected item. Picture browsing through your Instagram timeline, messages or LinkedIn contacts this way!”

“Eventually, our hope is to design the device so that it can even roll into something as small as a pen that you could carry in your shirt pocket,” he added. “More broadly, the MagicScroll project is also allowing us to further examine notions that ‘screens don’t have to be flat’ and ‘anything can become a screen’. Whether it’s a reusable cup made of an interactive screen on which you can select your order before arriving at a coffee-filling kiosk, or a display on your clothes, we’re exploring how objects can become the apps.”

The team has made a video showing the prototype in action, and will be presenting the project at the MobileHCI conference on Human-Computer Interaction in Barcelona next month.

While any kind of mobile device resembling the MagicScroll is clearly very, very far off even a sniff of commercialization (especially as these sorts of concept devices have long been teased by mobile device firms’ R&D labs — while the companies keep pumping out identikit rectangles of touch-sensitive glass…), it’s worth noting that Samsung has reportedly been working on a smartphone with a foldable screen for some years now. And, according to the most recent chatter about this rumor, it might be released next year. Or, well, it still might not.

But whether Samsung’s definition of ‘foldable’ will translate into something as flexibly bendy as the MagicScroll prototype is highly, highly doubtful. A fused clamshell design — where two flat screens could be opened to seamlessly expand them and closed up again to shrink the device footprint for pocketability — seems a much more likely choice for Samsung designers to make, given the obvious commercial challenges of selling a device with a transforming form factor that’s also robust enough to withstand everyday consumer use and abuse.

Add to that, for all the visual fun of these things, it’s not clear that consumers would be inspired to adopt anything so different en masse. Sophisticated (and inevitably fiddly) devices are more likely to appeal to specific niche use cases and user scenarios.

For the mainstream, six inches of touch-sensitive (and flat) glass seems to do the trick.

Source: Mobile – TechCrunch

This happy robot helps kids with autism

A little bot named QTrobot from LuxAI could be the link between therapists, parents, and autistic children. The robot, which features an LCD face and robotic arms, allows kids who are overwhelmed by human contact to become more comfortable in a therapeutic setting.
The project comes from LuxAI, a spin-off of the University of Luxembourg. They will present their findings at the RO-MAN 2018 conference at the end of this month.
“The robot has the ability to create a triangular interaction between the human therapist, the robot, and the child,” co-founder Aida Nazarikhorram told IEEE. “Immediately the child starts interacting with the educator or therapist to ask questions about the robot or give feedback about its behavior.”
The robot reduces anxiety in autistic children and the researchers saw many behaviors – hand flapping, for example – slow down with the robot in the mix.
Interestingly, the robot is a better choice for children than an app or tablet. Because the robot is “embodied,” the researchers found that it draws attention and improves learning, especially when compared to a standard iPad/educational app pairing. In other words, children play with tablets and work with robots.
The robot is entirely self-contained and easily programmable. It can run for hours at a time and includes a 3D camera and full processor.
The researchers found that the robot doesn’t become the focus of the therapy but instead helps the therapist connect with the patient. This, obviously, is an excellent outcome for an excellent (and cute) little piece of technology.

Source: Gadgets – techcrunch

Apple releases new iPad, Face ID ads

Apple has released a handful of new ads promoting the iPad’s portability and convenience over both laptops and traditional paper solutions. The 15-second ads focus on how the iPad can make even the most tedious things — travel, notes, paperwork, and ‘stuff’ — just a bit easier.
Three out of the four spots show the sixth-generation iPad, which was revealed at Apple’s education event in March, and which offers a lower-cost ($329 in the U.S.) option with Pencil support.
The ads were released on Apple’s international YouTube channels (UAE, Singapore, and United Kingdom).

This follows another 90-second ad released yesterday, focusing on Face ID. The commercial shows a man in a gameshow-type setting asked to remember the banking password he created earlier that morning. He struggles for an excruciating amount of time before realizing he can access the banking app via Face ID.

There has been some speculation that Face ID may be incorporated into some upcoming models of the iPad, though we’ll have to wait until Apple’s next event (likely in September) to find out for sure.

Source: Gadgets – techcrunch

Apple is rebuilding Maps from the ground up

I’m not sure if you’re aware, but the launch of Apple Maps went poorly. After a rough first impression, an apology from the CEO, several years of patching holes with data partnerships and some glimmers of light with long-awaited transit directions and improvements in business, parking and place data, Apple Maps is still not where it needs to be to be considered a world class service.

Maps needs fixing.

Apple, it turns out, is aware of this, so it’s rebuilding the maps part of Maps.

It’s doing this by using first-party data gathered by iPhones with a privacy-first methodology and its own fleet of cars packed with sensors and cameras. The new product will launch in San Francisco and the Bay Area with the next iOS 12 Beta and will cover Northern California by fall.

Every version of iOS will get the updated maps eventually and they will be more responsive to changes in roadways and construction, more visually rich depending on the specific context they’re viewed in and feature more detailed ground cover, foliage, pools, pedestrian pathways and more.

This is nothing less than a full re-set of Maps and it’s been 4 years in the making, which is when Apple began to develop its new data gathering systems. Eventually, Apple will no longer rely on third-party data to provide the basis for its maps, which has been one of its major pitfalls from the beginning.

“Since we introduced this six years ago — we won’t rehash all the issues we’ve had when we introduced it — we’ve done a huge investment in getting the map up to par,” says Apple SVP Eddy Cue, who now owns Maps, in an interview last week. “When we launched, a lot of it was all about directions and getting to a certain place. Finding the place and getting directions to that place. We’ve done a huge investment of making millions of changes, adding millions of locations, updating the map and changing the map more frequently. All of those things over the past six years.”

But, Cue says, Apple has room to improve on the quality of Maps, something that most users would agree on, even with recent advancements.

“We wanted to take this to the next level,” says Cue. “We have been working on trying to create what we hope is going to be the best map app in the world, taking it to the next step. That is building all of our own map data from the ground up.”

In addition to Cue, I spoke to Apple VP Patrice Gautier and over a dozen Apple Maps team members at its mapping headquarters in California this week about its efforts to re-build Maps, and to do it in a way that aligned with Apple’s very public stance on user privacy.

If, like me, you’re wondering whether Apple thought of building its own maps from scratch before it launched Maps, the answer is yes. At the time, there was a choice to be made about whether or not it wanted to be in the business of Maps at all. Given that the future of mobile devices was becoming very clear, it knew that mapping would be at the core of nearly every aspect of its devices from photos to directions to location services provided to apps. Decision made, Apple plowed ahead, building a product that relied on a patchwork of data from partners like TomTom, OpenStreetMap and other geo data brokers. The result was underwhelming.

Almost immediately after Apple launched Maps, it realized that it was going to need help and it signed on a bunch of additional data providers to fill the gaps in location, base map, point-of-interest and business data.

It wasn’t enough.

“We decided to do this just over four years ago. We said, ‘Where do we want to take Maps? What are the things that we want to do in Maps?’ We realized that, given what we wanted to do and where we wanted to take it, we needed to do this ourselves,” says Cue.

Because Maps are so core to so many functions, success wasn’t tied to just one function. Maps needed to be great at transit, driving and walking — but also as a utility used by apps for location services and other functions.

Cue says that Apple needed to own all of the data that goes into making a map, and to control it from a quality as well as a privacy perspective.

There’s also the matter of corrections, updates and changes entering a long loop of submission to validation to update when you’re dealing with external partners. The Maps team would have to be able to correct roads, pathways and other updating features in days or less, not months. Not to mention the potential competitive advantages it could gain from building and updating traffic data from hundreds of millions of iPhones, rather than relying on partner data.

Cue points to the proliferation of devices running iOS, now numbering in the millions, as a deciding factor to shift its process.

“We felt like because the shift to devices had happened — building a map today in the way that we were traditionally doing it, the way that it was being done — we could improve things significantly, and improve them in different ways,” he says. “One is more accuracy. Two is being able to update the map faster based on the data and the things that we’re seeing, as opposed to driving again or getting the information where the customer’s proactively telling us. What if we could actually see it before all of those things?”

I query him on the rapidity of Maps updates, and whether this new map philosophy means faster changes for users.

“The truth is that Maps needs to be [updated more], and even are today,” says Cue. “We’ll be doing this even more with our new maps, [with] the ability to change the map real-time and often. We do that every day today. This is expanding us to allow us to do it across everything in the map. Today, there’s certain things that take longer to change.

“For example, a road network is something that takes a much longer time to change currently. In the new map infrastructure, we can change that relatively quickly. If a new road opens up, immediately we can see that and make that change very, very quickly around it. It’s much, much more rapid to do changes in the new map environment.”

So a new effort was created to begin generating its own base maps, the very lowest building block of any really good mapping system. After that, Apple would begin layering on living location data, high resolution satellite imagery and brand new intensely high resolution image data gathered from its ground cars until it had what it felt was a ‘best in class’ mapping product.

There is really only one big company on earth that owns an entire map stack from the ground up: Google.

Apple knew it needed to be the other one. Enter the vans.

Apple vans spotted

Though the overall project started earlier, the first glimpse most folks had of Apple’s renewed efforts to build the best Maps product was the vans that started appearing on the roads in 2015 with ‘Apple Maps’ signs on the side. Capped with sensors and cameras, these vans popped up in various cities and sparked rampant discussion and speculation.

The new Apple Maps will be the first time the data collected by these vans is actually used to construct and inform its maps. This is their coming out party.

Some people have commented that Apple’s rigs look more robust than the simple GPS + Camera arrangements on other mapping vehicles — going so far as to say they look more along the lines of something that could be used in autonomous vehicle training.

Apple isn’t commenting on autonomous vehicles, but there’s a reason the arrays look more advanced: they are.

Earlier this week I took a ride in one of the vans as it ran a sample route to gather the kind of data that would go into building the new maps. Here’s what’s inside.

In addition to a beefed-up GPS rig on the roof, four LiDAR arrays mounted at the corners and eight cameras shooting overlapping high-resolution images, there’s also the standard physical measuring tool attached to a rear wheel that allows for precise tracking of distance and image capture. In the rear there is a surprising lack of bulky equipment. Instead, it’s a straightforward Mac Pro bolted to the floor, attached to an array of solid-state drives for storage. A single USB cable routes up to the dashboard, where the actual mapping capture software runs on an iPad.

While mapping, a driver…drives, while an operator takes care of the route, ensuring that a coverage area that has been assigned is fully driven and monitoring image capture. Each drive captures thousands of images as well as a full point cloud (a 3D map of space defined by dots that represent surfaces) and GPS data. I later got to view the raw data presented in 3D and it absolutely looks like the quality of data you would need to begin training autonomous vehicles.

More on why Apple needs this level of data detail later.

When the images and data are captured, they are encrypted on the fly and recorded onto the SSDs. Once full, the SSDs are pulled out, replaced and packed into a case, which is delivered to Apple’s data center, where a suite of software eliminates private information like faces, license plates and other identifying details from the images. From the moment of capture to the moment they’re sanitized, they are encrypted with one key in the van and the other key in the data center. Technicians and software that are part of its mapping efforts down the pipeline from there never see unsanitized data.
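
The actual cryptography here isn’t public, but the description (one key in the van, the other in the data center) fits an asymmetric scheme. The sketch below is purely illustrative and assumes PyNaCl’s sealed boxes with hypothetical names: the van carries only the data center’s public key, so frames can be encrypted in the vehicle but read only once they reach the other end.

    # Illustrative assumption only: the van holds just the data center's
    # public key; the matching private key never leaves the data center.
    # Requires PyNaCl (pip install pynacl). Names and flow are hypothetical.
    from nacl.public import PrivateKey, SealedBox

    datacenter_key = PrivateKey.generate()             # generated at the data center
    van_sealer = SealedBox(datacenter_key.public_key)  # shipped to the capture rig

    def encrypt_capture(frame: bytes) -> bytes:
        """Encrypt a captured frame before it ever touches the SSD."""
        return van_sealer.encrypt(frame)

    def decrypt_capture(ciphertext: bytes) -> bytes:
        """Runs only in the data center, ahead of face/plate sanitization."""
        return SealedBox(datacenter_key).decrypt(ciphertext)

    frame = b"raw sensor frame ..."
    assert decrypt_capture(encrypt_capture(frame)) == frame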

This is just one element of Apple’s focus on the privacy of the data it is utilizing in New Maps.

Probe data and privacy

In every conversation I have with any member of the team throughout the day, privacy is brought up and emphasized. This is obviously by design, as Apple wants to impress upon me as a journalist that it’s taking this very seriously indeed, but it doesn’t change the fact that privacy is evidently built in from the ground up, and I could not find a false note in any of the technical claims or the conversations I had.

Indeed, from the data security folks to the people whose job it is to actually make the maps work well, the constant refrain is that Apple does not feel that it is being held back in any way by not hoovering up every piece of customer data it can, then storing and parsing it.

The consistent message is that the team feels it can deliver a high quality navigation, location and mapping product without the directly personal data used by other platforms.

“We specifically don’t collect data, even from point A to point B,” notes Cue. “We collect data — when we do it — in an anonymous fashion, in subsections of the whole, so we couldn’t even say that there is a person that went from point A to point B. We’re collecting the segments of it. As you can imagine, that’s always been a key part of doing this. Honestly, we don’t think it buys us anything [to collect more]. We’re not losing any features or capabilities by doing this.”

The segments that he is referring to are sliced out of any given person’s navigation session. Neither the beginning nor the end of any trip is ever transmitted to Apple. Rotating identifiers, not personal information, are assigned to any data or requests sent to Apple, and it augments the ‘ground truth’ data provided by its own mapping vehicles with this ‘probe data’ sent back from iPhones.

Because only random segments of any person’s drive are ever sent, and that data is completely anonymized, there is no way to reconstruct any individual’s trip. The local system signs the IDs and only it knows who an ID refers to. Apple is working very hard here to not know anything about its users. This kind of privacy can’t be added on at the end; it has to be woven in at the ground level.
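
As a rough illustration of what’s described here (not Apple’s actual code), a toy version of the slicing might trim the ends of a trace, chop the middle into short runs and stamp each run with a fresh random identifier rather than any user ID. The trim amount and segment size below are invented for the example.

    # Toy sketch of the probe-data idea: drop the start and end of a drive,
    # split the remainder into short segments, and give each segment its own
    # rotating random identifier so nothing links the pieces to a user or trip.
    import secrets

    def anonymize_trace(points, trim=20, segment_len=15):
        """points: ordered (lat, lon, speed, heading) samples from one drive."""
        middle = points[trim:-trim]          # the trip's start and end never leave the phone
        segments = []
        for i in range(0, len(middle), segment_len):
            chunk = middle[i:i + segment_len]
            if len(chunk) < segment_len:
                break                        # discard the partial tail
            segments.append({
                "id": secrets.token_hex(8),  # fresh identifier per segment
                "samples": chunk,            # direction and speed slivers only
            })
        return segments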

Because Apple’s business model does not rely on it serving, say, an ad for a Chevron on your route to you, it doesn’t need to even tie advertising identifiers to users.

Any personalization or Siri requests are all handled on-board by the iOS device’s processor. So if you get a drive notification that tells you it’s time to leave for your commute, that’s learned, remembered and delivered locally, not from Apple’s servers.

That’s not new, but it’s important to note given the new thing to take away here: Apple is flipping on the power of having millions of iPhones passively and actively improving their mapping data in real time.

In short: traffic, real-time road conditions, road systems, new construction and changes in pedestrian walkways are about to get a lot better in Apple Maps.

The secret sauce here is what Apple calls probe data: essentially, little slices of vector data that represent direction and speed, transmitted back to Apple completely anonymized, with no way to tie them to a specific user or even to any given trip. Instead, it’s reaching in and sipping a tiny amount of data from millions of users, giving it a holistic, real-time picture without compromising user privacy.

If you’re driving, walking or cycling, your iPhone can already tell this. Now, if it knows you’re driving, it can also send relevant traffic and routing data in these anonymous slivers to improve the entire service. This only happens if your Maps app has been active (say, you check the map or look for directions). If you’re actively using your GPS for walking or driving, then the updates are more precise and can help with walking improvements like charting new pedestrian paths through parks — building out the map’s overall quality.

All of this, of course, is governed by whether you opted into location services, and it can be toggled off using the Maps location toggle in the Privacy section of Settings.

Apple says that this will have a near-zero effect on battery life or data usage, because you’re already using the ‘maps’ features when any probe data is shared, and it’s a fraction of the power being drawn by those activities.

From the point cloud on up

But maps cannot live on ground truth and mobile data alone. Apple is also gathering new high-resolution satellite data to combine with its ground truth data for a solid base map. It’s then layering that satellite imagery on top to better determine foliage, pathways, sports facilities and building shapes.

After the downstream data has been cleaned of license plates and faces, it gets run through a bunch of computer vision programming to pull out addresses, street signs and other points of interest. These are cross-referenced with publicly available data like addresses held by the city and new construction of neighborhoods or roadways that comes from city planning departments.

But one of the special-sauce bits Apple is adding to the mix of mapping tools is a full-on point cloud that maps the world around the mapping van in 3D. This gives them all kinds of opportunities to better understand what items are street signs (retro-reflective rectangular object about 15 feet off the ground? Probably a street sign) or stop signs or speed limit signs.
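
Apple hasn’t published how that classification works, but the rule of thumb the article paraphrases can be sketched as a simple heuristic over point-cloud clusters. Every feature and threshold below is a made-up stand-in, not a description of Apple’s pipeline.

    # Hypothetical heuristic: does this point-cloud cluster look like a street
    # sign? (Elevated, thin, roughly panel-sized, strongly retro-reflective.)
    from dataclasses import dataclass

    @dataclass
    class Cluster:
        height_ft: float     # height of the cluster's center above the road
        width_ft: float      # bounding-box width
        depth_ft: float      # bounding-box depth (thin for a flat panel)
        reflectivity: float  # mean LiDAR return intensity, 0..1

    def looks_like_street_sign(c: Cluster) -> bool:
        elevated = 10.0 <= c.height_ft <= 20.0  # roughly 15 feet up
        panel_shaped = c.depth_ft < 0.5 and 1.0 <= c.width_ft <= 4.0
        retro_reflective = c.reflectivity > 0.7
        return elevated and panel_shaped and retro_reflective

    print(looks_like_street_sign(Cluster(15.0, 2.5, 0.1, 0.9)))  # True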

It seems like it could also enable positioning of navigation arrows in 3D space for AR navigation, but Apple declined to comment on ‘any future plans’ for such things.

Apple also uses semantic segmentation and Deep Lambertian Networks to analyze the point cloud coupled with the image data captured by the car and from high-resolution satellites in sync. This allows 3D identification of objects, signs, lanes of traffic and buildings and separation into categories that can be highlighted for easy discovery.

The coupling of high-resolution image data from car and satellite, plus a 3D point cloud, results in Apple now being able to produce full orthogonal reconstructions of city streets with textures in place. This is massively higher resolution and easier to see, visually. And it’s synchronized with the ‘panoramic’ images from the car, the satellite view and the raw data. These techniques are used in self-driving applications because they provide a really holistic view of what’s going on around the car. But the ortho view can do even more for human viewers of the data by allowing them to ‘see’ through brush or tree cover that would normally obscure roads, buildings and addresses.

This is hugely important when it comes to the next step in Apple’s battle for supremely accurate and useful Maps: human editors.

Apple has had a team of tool builders working specifically on a toolkit that can be used by human editors to vet and parse data, street by street. The editor’s suite includes tools that allow human editors to assign specific geometries to flyover buildings (think Salesforce tower’s unique ridged dome) that allow them to be instantly recognizable. It lets editors look at real images of street signs shot by the car right next to 3D reconstructions of the scene and computer vision detection of the same signs, instantly recognizing them as accurate or not.

Another tool corrects addresses, letting an editor quickly move an address to the center of a building, determine whether it’s misplaced and shift it around. It also allows for access points to be set, making Apple Maps smarter about the ‘last 50 feet’ of your journey. You’ve made it to the building, but what street is the entrance actually on? And how do you get into the driveway? With a couple of clicks, an editor can make that permanently visible.

“When we take you to a business and that business exists, we think the precision of where we’re taking you to, from being in the right building,” says Cue. “When you look at places like San Francisco or big cities from that standpoint, you have addresses where the address name is a certain street, but really, the entrance in the building is on another street. They’ve done that because they want the better street name. Those are the kinds of things that our new Maps really is going to shine on. We’re going to make sure that we’re taking you to exactly the right place, not a place that might be really close by.”

Water, swimming pools (new to Maps entirely), sporting areas and vegetation are now more prominent and fleshed out thanks to new computer vision and satellite imagery applications. So Apple had to build editing tools for those as well.

Many hundreds of editors will be using these tools, in addition to the thousands of employees Apple already has working on maps, but the tools had to be built first, now that Apple is no longer relying on third parties to vet and correct issues.

And the team also had to build computer vision and machine learning tools that allow it to determine whether there are issues to be found at all.

Anonymous probe data from iPhones, visualized, looks like thousands of dots, ebbing and flowing across a web of streets and walkways, like a luminescent web of color. At first, chaos. Then, patterns emerge. A street opens for business, and nearby vessels pump orange blood into the new artery. A flag is triggered and an editor looks to see if a new road needs a name assigned.

A new intersection is added to the web and an editor is flagged to make sure that the left turn lanes connect correctly across the overlapping layers of directional traffic. This has the added benefit of massively improved lane guidance in the new Apple Maps.

Apple is counting on this combination of human and AI flagging to allow editors to first craft base maps and then also maintain them as the ever-changing biomass wreaks havoc on roadways, addresses and the occasional park.
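
To make the flagging idea concrete, here is one way such a trigger could work in principle: bucket anonymized probe samples into coarse grid cells and flag any cell that sees sustained traffic but contains no known road. The grid size, threshold and data shapes below are assumptions for illustration, not a description of Apple’s system.

    # Sketch: raise an editor flag where probe activity appears off the known road graph.
    from collections import Counter

    CELL = 0.001  # grid cell of roughly 100 meters, in degrees; purely illustrative

    def cell_of(lat, lon):
        return (round(lat / CELL), round(lon / CELL))

    def flag_candidate_roads(probe_points, road_cells, threshold=50):
        """probe_points: (lat, lon) samples; road_cells: set of cells the map already covers."""
        hits = Counter(cell_of(lat, lon) for lat, lon in probe_points)
        return [cell for cell, count in hits.items()
                if count >= threshold and cell not in road_cells]

    # Cells returned here would be queued for a human editor to review and,
    # if a road really has opened, to name and connect into the network.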

Here there be Helvetica

Apple’s new Maps, like many other digital maps, display vastly differently depending on scale. If you’re zoomed out, you get less detail. If you zoom in, you get more. But Apple has a team of cartographers on staff that work on more cultural, regional and artistic levels to ensure that its Maps are readable, recognizable and useful.

These teams have goals that are at once concrete and a bit out there — in the best traditions of Apple pursuits that intersect the technical with the artistic.

The maps need to be usable, but they also need to fulfill cognitive goals on cultural levels that go beyond what any given user might know they need. For instance, in the US, it is very common to have maps that have a relatively low level of detail even at a medium zoom. In Japan, however, the maps are absolutely packed with details at the same zoom, because that increased information density is what is expected by users.

This is the department of details. They’ve reconstructed replicas of hundreds of actual road signs to make sure that the shield on your navigation screen matches the one you’re seeing on the highway road sign. When it comes to public transport, Apple licensed all of the typefaces that you see on your favorite subway systems, like Helvetica for NYC. And the line numbers are in the exact same order that you’re going to see them on the platform signs.

It’s all about reducing the cognitive load that it takes to translate the physical world you have to navigate through into the digital world represented by Maps.

Bottom line

The new version of Apple Maps will be in preview next week with just the Bay Area of California going live. It will be stitched seamlessly into the ‘current’ version of Maps, but the difference in quality level should be immediately visible based on what I’ve seen so far.

Better road networks, more pedestrian information, sports areas like baseball diamonds and basketball courts, more land cover, including grass and trees, represented on the map, as well as buildings with shapes and sizes that are more accurate. A map that feels more like the real world you’re actually traveling through.

Search is also being revamped to make sure that you get more relevant results (on the correct continents) than ever before. Navigation, especially pedestrian guidance, also gets a big boost. Parking areas and building details to get you the last few feet to your destination are included as well.

What you won’t see, for now, is a full visual redesign.

“You’re not going to see huge design changes on the maps,” says Cue. “We don’t want to combine those two things at the same time because it would cause a lot of confusion.”

Apple Maps is getting the long awaited attention it really deserves. By taking ownership of the project fully, Apple is committing itself to actually creating the map that users expected of it from the beginning. It’s been a lingering shadow on iPhones, especially, where alternatives like Google Maps have offered more robust feature sets that are so easy to compare against the native app but impossible to access at the deep system level.

The argument has been made ad nauseam, but it’s worth saying again that if Apple thinks that mapping is important enough to own, it should own it. And that’s what it’s trying to do now.

“We don’t think there’s anybody doing this level of work that we’re doing,” adds Cue. “We haven’t announced this. We haven’t told anybody about this. It’s one of those things that we’ve been able to keep pretty much a secret. Nobody really knows about it. We’re excited to get it out there. Over the next year, we’ll be rolling it out, section by section in the US.”

Source: Mobile – TechCrunch

Bag Week 2018: WP Standard’s Rucksack goes the distance

WP Standard – formerly called Whipping Post Leather – makes rugged leather bags, totes and briefcases, and its Rucksack is one of my favorites. Designed to look like something a Pony Express rider would slip on for a visit to town, this $275 satchel is sturdy, handsome and ages surprisingly well.

There are some trade-offs, however. Except for two small front pouches there are no hidden nooks and crannies in this spare 15×15 inch sack. The main compartment can fit a laptop and a few notebooks and the front pouches can hold accessories like mice or a little collection of plugs. There is no fancy nylon mesh or gear organizers here, just a brown expanse of full grain leather.

I wore this backpack for a few months before writing this and found it surprisingly comfortable and great for travel. Because it is so simple, I forced myself to pare down my gear slightly, and I was able to consolidate my cables and other accessories into separate pouches. I could fit a laptop, iPad Pro and a paperback alongside multiple notebooks and planners, and I could even overstuff the thing on long flights. As long as I was able to buckle the front strap, nothing fell out or was lost.
This bag assumes that you’re OK with thick, heavy leather and that you’re willing to forgo a lot of the bells and whistles you get with more modern styles. That said, it has a great classic look and it’s very usable. I suspect this bag would last decades longer than anything you could buy at Office Depot and it would look good doing it. At $275 it’s a bit steep but you’re paying for years – if not decades – of regular use and abuse. It’s worth the investment.

Source: Gadgets – techcrunch

Apple slapped with $6.6M fine in Australia over bricked devices

Apple has been fined AUS$9M (~$6.6M) by a court in Australia following a legal challenge by a consumer rights group related to the company’s response after iOS updates bricked devices that had been repaired by third parties.
The Australian Competition and Consumer Commission (ACCC) investigated a series of complaints relating to an error (‘error 53’) which disabled some iPhones and iPads after owners downloaded an update to Apple’s iOS operating system.
The ACCC says Apple admitted that, between February 2015 and February 2016 — via Apple’s US website, Apple Australia’s in-store staff and customer service phone calls — it had informed at least 275 Australian customers affected by error 53 that they were no longer eligible for a remedy if their device had been repaired by a third party.
Image credit: 70023venus2009 via Flickr under license CC BY-ND 2.0
The court judged Apple’s conduct to have breached the Australian Consumer Law.
“If a product is faulty, customers are legally entitled to a repair or a replacement under the Australian Consumer Law, and sometimes even a refund. Apple’s representations led customers to believe they’d be denied a remedy for their faulty device because they used a third party repairer,” said ACCC commissioner Sarah Court in a statement.
“The Court declared the mere fact that an iPhone or iPad had been repaired by someone other than Apple did not, and could not, result in the consumer guarantees ceasing to apply, or the consumer’s right to a remedy being extinguished.”
The ACCC notes that after it notified Apple about its investigation, the company implemented an outreach program to compensate individual consumers whose devices were made inoperable by error 53. It says this outreach program was extended to approximately 5,000 consumers.
It also says Apple Australia offered a court enforceable undertaking to improve staff training, audit information about warranties and Australian Consumer Law on its website, and improve its systems and procedures to ensure future compliance with the law.
The ACCC further notes that a concern addressed by the undertaking is that Apple was allegedly providing refurbished goods as replacements, after supplying a good which suffered a major failure — saying Apple has committed to provide new replacements in those circumstances if the consumer requests one.
“If people buy an iPhone or iPad from Apple and it suffers a major failure, they are entitled to a refund. If customers would prefer a replacement, they are entitled to a new device as opposed to refurbished, if one is available,” said Court.
The court also held the Apple parent company, Apple US, responsible for the conduct of its Australian subsidiary. “Global companies must ensure their returns policies are compliant with the Australian Consumer Law, or they will face ACCC action,” added Court.
We’ve reached out to Apple for comment on the court decision and will update this post with any response.
A company spokeswoman told Reuters it had had “very productive conversations with the ACCC about this” but declined to comment further on the court finding.
More recently, Apple found itself in hot water with consumer groups around the world over its use of a power management feature that throttled performance on older iPhones to avoid unexpected battery shutdowns.
The company apologized in December for not being more transparent about the feature, and later said it would add a control allowing consumers to turn it off if they did not want their device’s performance to be impacted.

Source: Gadgets – techcrunch

How to watch the live stream for today’s Apple WWDC keynote

Apple is holding a keynote today at the San Jose Convention Center, and the company is expected to unveil new updates for iOS, macOS, tvOS, watchOS and maybe also some new hardware. At 10 AM PT (1 PM in New York, 6 PM in London, 7 PM in Paris), you’ll be able to watch the event as the company is streaming it live.
Apple is likely to talk about some new features for all its software platforms — WWDC is a developer conference, after all. Rumor has it that Apple could also unveil a MacBook Pro update with new Intel processors.
If you have the most recent Apple TV, you can download the Apple Events app in the App Store. It lets you stream today’s event and rewatch old events. Users with old Apple TVs can simply turn on their devices. Apple is pushing out the “Apple Events” channel so that you can watch the event.
And if you don’t have an Apple TV, the company also lets you live-stream the event from the Apple Events section on its website. This video feed works in Safari and Microsoft Edge. And for the first time, Apple says that the video should also work in Google Chrome and Mozilla Firefox.
So to recap, here’s how you can watch today’s Apple event:

Safari on the Mac or iOS.
Microsoft Edge on Windows 10.
Maybe Google Chrome or Mozilla Firefox.
An Apple TV gen 4 with the Apple Events app in the App Store.
An Apple TV gen 2 or 3, with the Apple Events channel that arrives automatically right before the event.

Of course, you also can read TechCrunch’s live blog if you’re stuck at work and really need our entertaining commentary track to help you get through your day. We have a big team in the room this year.

Source: Gadgets – techcrunch