The new era in mobile

A future dominated by autonomous vehicles (AVs) is, for many experts, a foregone conclusion. Declarations that the automobile will become the next living room are almost as common, but they are imprecise. In our driverless future, the more apt comparison is to the mobile device. As with smartphones, operating systems will go a long way toward determining what autonomous vehicles are and what they could be. For mobile app companies hoping to seize the coming AV opportunity, much depends on how the OS landscape shapes up.

By most measures, the mobile app economy is still growing, yet growth in the time people spend using apps is slowing. One recent study reported that overall app session activity grew only 6 percent in 2017, down from 11 percent growth in 2016. This trend suggests users are reaching a saturation point in how much time they can devote to apps. The AV industry could reverse that. But just how mobile apps will penetrate this market, and who will hold the keys in this new era of mobility, is still very much in doubt.

When it comes to a driverless future, multiple factors are now converging. Over the last few years, while app usage showed signs of stagnation, the push for driverless vehicles has only intensified. More cities are live-testing driverless software than ever, and investments in autonomous vehicle technology and software by tech giants like Google and Uber (measured in the billions) are starting to mature. And, after some reluctance, automakers have now embraced this idea of a driverless future. Expectations from all sides point to a “passenger economy” of mobility-as-a-service, which, by some estimates, may be worth as much as $7 trillion by 2050.

For mobile app companies, this suggests several interesting questions: Will smart cars, like smartphones before them, be forced to go “exclusive” with a single OS of record (Google, Apple, Microsoft, Amazon/AGL), or will they be able to offer multiple OSes/platforms of record based on app maturity or functionality? Or will automakers simply step in to create their own closed-loop operating systems, fragmenting the market completely?

Complicating the picture even further is the potential significance of an OS’s ability to support multiple Digital Assistants of Record (independent of the OS), as we see with Google Assistant now running on iOS. Voice NLP/NLU will be even more critical for smart car applications than it is for smart speakers and phones, and even in those established arenas the battle for OS dominance is only just beginning. Opening a new front in driverless vehicles could have a fascinating impact. Either way, the implications for mobile app companies are significant.

Looking at the driverless landscape today, there are several indications as to which direction the OSes in AVs will ultimately go. For example, after some initial inroads developing its own fleet of autonomous vehicles, Google has now focused almost all its efforts on autonomous driving software while striking numerous partnership deals with traditional automakers. Some automakers, however, are moving forward with their own OSes. Volkswagen, for instance, announced that vw.OS will be introduced in VW-brand electric cars from 2020 onward, with an eye toward autonomous driving functions. (VW also plans to launch a fleet of autonomous cars in 2019 to rival Uber.) Tesla, a leader in AV, is building its own unified hardware-software stack. Companies like Udacity, meanwhile, are building open-source self-driving car technology. And Mobileye and Baidu have a partnership in place to provide software for automobile manufacturers.

Clearly, most smartphone apps would benefit from native integration, but there are several categories beyond music, voice and navigation that require significant hardware investment to natively integrate. Will automakers be interested in the Tesla model? If not, how will smart cars and apps (independent of OS/voice assistant) partner up? Given the hardware requirements necessary to enable native app functionality and optimal user experience, how will this force smart car manufacturers to work more seamlessly with platforms like AGL to ensure competitive advantage and differentiation? And, will this commoditize the OS dominance we see in smartphones today?

It’s still early days and — at least in the near term — multiple OS solutions will likely be employed until preferred solutions rise to the top. Regardless, automakers and tech companies clearly recognize the importance of “connected mobility.” Connectivity and vehicular mobility will very likely replace traditional auto values like speed, comfort and power. The combination of Wi-Fi hotspots and autonomous vehicles (let alone consumer/business choice of on-demand vehicles) will propel instant conversion and personalization of smart car environments to passenger preferences. And while questions remain around the how and the who in this new era in mobile, it’s not hard to see the why.

Americans already spend an average of 293 hours per year inside a car, and the average commute time has jumped around 20 percent since 1980. In a recent survey conducted by Ipsos/GenPop, researchers found that in a driverless future people would spend roughly a third of their in-car time communicating with friends and family, doing business or shopping online. By 2030, it’s estimated that autonomous cars “will free up a mind-blowing 1.9 trillion minutes for passengers.” Another analysis suggested that even with just 10 percent adoption, driverless cars could account for $250 billion in driver productivity alone.
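
Those headline figures can at least be checked against one another. Below is a minimal back-of-envelope sketch, using only the numbers quoted above; treating the 1.9 trillion minutes as multiples of one person’s current annual in-car time is my own framing for scale, not the analysts’.

```python
# Back-of-envelope check, using only the figures quoted in the article.
HOURS_PER_YEAR_IN_CAR = 293     # average American hours/year in a car
MINUTES_FREED_BY_2030 = 1.9e12  # the "1.9 trillion minutes" estimate

minutes_per_person_year = HOURS_PER_YEAR_IN_CAR * 60  # ~17,580 minutes

# How many person-years of today's in-car time does the estimate represent?
person_year_equivalents = MINUTES_FREED_BY_2030 / minutes_per_person_year

print(f"{minutes_per_person_year:,} minutes per person per year")               # 17,580
print(f"~{person_year_equivalents / 1e6:.0f} million person-year equivalents")  # ~108
```

By that framing, the estimate corresponds to roughly 108 million people’s entire current in-car time, a plausible order of magnitude for a large fleet of AVs.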

Productivity in this sense extends well beyond personal entertainment and commerce and into the realm of business productivity. Integrated displays (screen and heads-up) and voice will enable business multitasking, from video conferencing and search to messaging, scheduling, travel booking, e-commerce and navigation. First-mover advantage goes to the mobile app companies that first bundle information density, content access and mobility into a single compelling package. An app company that can claim 10 to 15 percent of this market will be a significant player.

For now, investors are throwing lots of money at possible winners in the autonomous automotive race, who, in turn, are beginning to define the shape of the mobile app landscape in a driverless future. In fact, what we’re seeing now looks a lot like the early days of smartphones, with companies like Tesla applying an Apple-esque strategy to the smart car rather than the smartphone. Will these OS/app marketplaces be dominated by a Tesla — or a Google, for that matter — that commands a 30 percent revenue share from apps, or will auto manufacturers with proprietary platforms capitalize on this opportunity? Questions like these, alongside uncertainty about just who the winners and losers in AV will be, make investment and entrepreneurship in the mobile app sector an extremely lucrative but risky gamble.

Source: Mobile – Techcrunch

Uber in fatal crash detected pedestrian but had emergency braking disabled

The initial report by the National Transportation Safety Board on the fatal self-driving Uber crash in March confirms that the car detected the pedestrian as early as 6 seconds before the crash, but did not slow or stop because its emergency braking systems were deliberately disabled.
Uber told the NTSB that “emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior,” in other words, to ensure a smooth ride. “The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.” It’s not clear why the emergency braking capability even exists if it is disabled while the car is in operation. The Volvo model’s built-in safety systems — collision avoidance and emergency braking, among other things — are also disabled while in autonomous mode.
It appears that in an emergency situation like this, the “self-driving car” is no better than, and possibly substantially worse than, many normal cars already on the road.
It’s hard to understand the logic of this decision. An emergency is exactly the situation when the self-driving car, and not the driver, should be taking action. Its long-range sensors can detect problems accurately from much farther away, while its 360-degree awareness and route planning allow it to make safe maneuvers that a human would not be able to do in time. Humans, even when their full attention is on the road, are not the best at catching these things; relying only on them in the most dire circumstances that require quick response times and precise maneuvering seems an incomprehensible and deeply irresponsible decision.

According to the NTSB report, the vehicle first registered Elaine Herzberg on lidar six seconds before the crash — at the speed it was traveling, that puts first detection at about 378 feet away. She was classified first as an unknown object, then as a vehicle, then as a bicycle over the next few seconds (the report doesn’t state exactly when each classification took place).
The car following the collision
During these six seconds, the driver could and should have been alerted to an anomalous object ahead on the left — whether it was a deer, a car or a bike, it was entering or could enter the road and should be attended to. But the system did not warn the driver, and apparently had no way to.
Then, 1.3 seconds before impact, which is to say about 80 feet away, the Uber system decided that an emergency braking procedure would be necessary to avoid Herzberg. But it did not hit the brakes, as the emergency braking system had been disabled, nor did it warn the driver because, again, it couldn’t.
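
As a consistency check on the figures above, here is a quick sketch assuming roughly constant speed from detection to impact (that assumption is my simplification, not the report’s):

```python
# Consistency check of the timeline figures quoted above, assuming
# (my simplification) roughly constant speed from detection to impact.
FIRST_DETECTION_S = 6.0     # seconds before impact when lidar registered her
FIRST_DETECTION_FT = 378.0  # distance quoted above
BRAKE_DECISION_S = 1.3      # seconds before impact

speed_fps = FIRST_DETECTION_FT / FIRST_DETECTION_S  # 63 ft/s
speed_mph = speed_fps * 3600 / 5280                 # ~43 mph

# Range when the system decided emergency braking was needed:
brake_decision_ft = speed_fps * BRAKE_DECISION_S    # ~82 ft, i.e. "about 80 feet"

print(f"Implied speed: {speed_fps:.0f} ft/s ({speed_mph:.0f} mph)")
print(f"Range at braking decision: {brake_decision_ft:.0f} ft")
```
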
Less than a second before impact, the driver happened to look up from whatever she was doing and saw Herzberg, whom the car had in some way known about for five long seconds by then. The car struck and killed her.
It reflects extremely poorly on Uber that it had disabled the car’s ability to respond in an emergency — even though the vehicle was authorized to operate at speed at night — and provided no method for the system to alert the driver should it detect something important. This isn’t just a safety issue, like going on the road with a sub-par lidar system or without checking the headlights — it’s a failure of judgment by Uber, and one that cost a person’s life.
Arizona, where the crash took place, barred Uber from further autonomous testing, and Uber yesterday ended its program in the state.
Uber offered the following statement on the report:
Over the course of the last two months, we’ve worked closely with the NTSB. As their investigation continues, we’ve initiated our own safety review of our self-driving vehicles program. We’ve also brought on former NTSB Chair Christopher Hart to advise us on our overall safety culture, and we look forward to sharing more on the changes we’ll make in the coming weeks.

Source: Gadgets – techcrunch

Waymo reportedly applies to put autonomous cars on California roads with no safety drivers

Waymo reportedly applies to put autonomous cars on California roads with no safety drivers
Waymo has become the second company to apply for the newly available permit to deploy autonomous vehicles without safety drivers on some California roads, the San Francisco Chronicle reports. It would be putting its cars — well, minivans — on streets around Mountain View, where it already has an abundance of data.
The company already has driverless cars in play over in Phoenix, as it showed in a few promotional videos last month. So this isn’t the first public demonstration of its confidence.
California made it possible to grant these permits only on April 2; one other company besides Waymo has applied, but it’s unclear which. The new permit type also allows for vehicles lacking any kind of traditional manual controls, but for now the company is sticking with its modified Chrysler Pacificas. Hey, they’re practical.
The recent fatal collision of an Uber self-driving car with a pedestrian, plus another fatality in a Tesla operating in semi-autonomous mode, make this something of an awkward time to introduce vehicles to the road minus safety drivers. Of course, it must be said that both of those cars had people behind the wheel at the time of their crashes.
Assuming the permit is granted, Waymo’s vehicles will be limited to the Mountain View area, which makes sense — the company has been operating there essentially since its genesis as a research project within Google. So there should be no shortage of detail in the data, and the company will be familiar with the local authorities and the people necessary for handling any issues like accidents, permit problems and so on.
No details yet on what exactly the cars will be doing, or whether you’ll be able to ride in one. Be patient.

Source: Gadgets – techcrunch

Luminar puts its lidar tech into production through acquisitions and smart engineering

When Luminar came out of stealth last year with its built-from-scratch lidar system, it seemed to beat established players like Velodyne at their own game — but at great expense and with no capability to build at scale. After the tech proved itself on the road, however, Luminar got to work making its device better, cheaper, and able to be assembled in minutes rather than hours.
“This year for us is all about scale. Last year it took a whole day to build each unit — they were being hand assembled by optics PhDs,” said Luminar’s wunderkind founder Austin Russell. “Now we’ve got a 136,000 square foot manufacturing center and we’re down to 8 minutes a unit.”
Lest you think the company has sacrificed quality for quantity, be it known that the production unit is about 30 percent lighter and more power-efficient, can see a bit farther (250 meters vs. 200) and can detect objects with lower reflectivity (think people wearing black clothes in the dark).
The secret — to just about the whole operation, really — is the sensor. Luminar’s lidar systems, like all others, fire out a beam of light and essentially time its return. That means you need a photosensitive surface that can discern just a handful of photons.
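
That “time its return” step is the classic time-of-flight calculation: range is the speed of light multiplied by the round-trip time, divided by two. A minimal sketch, plugging in Luminar’s quoted 250-meter range:

```python
# Time-of-flight ranging: distance = speed of light * round-trip time / 2.
C_M_PER_S = 299_792_458  # speed of light, m/s

def tof_distance_m(round_trip_s: float) -> float:
    """Range implied by a laser pulse's round-trip time."""
    return C_M_PER_S * round_trip_s / 2

# Light reflected from a target at the quoted 250 m maximum range
# comes back in under two microseconds:
round_trip_s = 2 * 250 / C_M_PER_S
print(f"Round trip: {round_trip_s * 1e6:.2f} microseconds")      # ~1.67
print(f"Recovered range: {tof_distance_m(round_trip_s):.0f} m")  # 250
```

Those microsecond-scale round trips are also why the receiver, not the laser, is the hard part: a single nanosecond of timing error corresponds to about 15 cm of range error.
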
Most photosensors, like those found in digital cameras and in other lidar systems, use a silicon-based photodetector. Silicon is well-understood, cheap, and the fabrication processes are mature.
Luminar, however, decided to start from the ground up with its system, using an alloy called indium gallium arsenide, or InGaAs. An InGaAs-based photodetector works at a different wavelength of light (1,550 nm rather than ~900 nm) and is far more efficient at capturing it.
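
One standard piece of semiconductor physics (textbook values, not from the article) helps explain the choice: a 1,550 nm photon carries less energy than silicon’s ~1.12 eV bandgap, so a silicon detector is effectively blind at that wavelength, while InGaAs, with a bandgap around 0.75 eV, can still absorb it.

```python
# Photon energy E = h * c / wavelength, converted to electron-volts.
# Bandgaps for comparison (textbook values): Si ~1.12 eV, InGaAs ~0.75 eV.
H_JS = 6.626e-34      # Planck constant, J*s
C_M_PER_S = 2.998e8   # speed of light, m/s
J_PER_EV = 1.602e-19  # joules per electron-volt

def photon_energy_ev(wavelength_nm: float) -> float:
    """Energy of a single photon at the given wavelength, in eV."""
    return H_JS * C_M_PER_S / (wavelength_nm * 1e-9) / J_PER_EV

print(f"~900 nm:  {photon_energy_ev(900):.2f} eV")   # ~1.38 eV: silicon absorbs it
print(f"1,550 nm: {photon_energy_ev(1550):.2f} eV")  # ~0.80 eV: below Si's bandgap
```
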
The more light you’ve got, the better your sensor — that’s usually the rule. And so it is here: Luminar’s InGaAs sensor and a single laser emitter produced images tangibly superior to those of devices of a similar size and power draw, despite having fewer moving parts.
The problem is that indium gallium arsenide is like the Dom Pérignon of sensor substrates: it’s expensive as hell, and designing for it is a highly specialized field. Luminar got away with it by minimizing the amount of InGaAs used: a tiny sliver sits only where it’s needed, and the company engineered around that rather than using the arrays of photodetectors found in many other lidar products. (This restriction goes hand in glove with the “fewer moving parts” and single-laser method.)
Last year Luminar was working with a company called Black Forest Engineering to design these chips; finding their paths inextricably linked (unless someone in the office wanted to volunteer to build InGaAs ASICs), Luminar bought the firm. The 30 employees at Black Forest, combined with the 200 hired since coming out of stealth, bring the company to 350 total.
By bringing the designers in house and building their own custom versions of not just the photodetector but also the various chips needed to parse and pass on the signals, they brought the cost of the receiver down from tens of thousands of dollars to… three dollars.
“We’ve been able to get rid of these expensive processing chips for timing and stuff,” said Russell. “We build our own ASIC. We only take like a speck of InGaAs and put it onto the chip. And we custom fab the chips.”
“This is something people have assumed there was no way you could ever scale it for production fleets,” he continued. “Well, it turns out it doesn’t actually have to be expensive!”
Sure — all it took was a bunch of geniuses, five years and a substantial budget (and I’d be surprised if the $36M in seed funding was all they had to work with). But let’s not quibble.
Quality inspection time in the clean room.
It’s all being done with a view to the long road ahead, though. Last year the company demonstrated that its systems not only worked, but worked well, even if there were only a few dozen of them at first. And they could get away with it, since as Russell put it, “What everyone has been building out so far has been essentially an autonomous test fleet. But now everyone is looking into building an actual, solidified hardware platform that can scale to real world deployment.”
Some companies, like Toyota and a couple of other unnamed partners, took a leap of faith, even though it might have meant temporary setbacks.
“It’s a very high barrier to entry, but also a very high barrier to exit,” Russell pointed out. “Some of our partners, they’ve had to throw out tens of thousands of miles of data and redo a huge portion of their software stack to move over to our sensor. But they knew they had to do it eventually. It’s like ripping off the band-aid.”
We’ll soon see how the industry progresses — with steady improvement but also intense anxiety and scrutiny following the fatal crash of an Uber autonomous car, it’s difficult to speculate on the near future. But Luminar seems to be looking further down the road.

Source: Gadgets – techcrunch

Massterly aims to be the first full-service autonomous marine shipping company

Logistics may not be the most exciting application of autonomous vehicles, but it’s definitely one of the most important. And the marine shipping industry — one of the oldest industries in the world, as you can imagine — is ready for it. Or at least two major Norwegian shipping companies are: they’re building an autonomous shipping venture called Massterly from the ground up.
“Massterly” isn’t just a pun on mass; “Maritime Autonomous Surface Ship” (MASS) is the term Wilhelmsen and Kongsberg coined to describe the self-captaining boats that will ply the seas of tomorrow.
These companies, with “a combined 360 years of experience,” as their video puts it, are trying to get the jump on the next phase of shipping, starting with the world’s first fully electric and autonomous container ship, the Yara Birkeland. It’s a modest vessel by shipping standards — 250 feet long and capable of carrying 120 containers, according to the concept — but it will be capable of loading, navigating and unloading without a crew.
The Yara Birkeland, as envisioned in concept art.
(One assumes there will be some people on board or nearby to intervene if anything goes wrong, of course. Why else would there be railings up front?)
Each vessel will carry major radar and lidar units, visible-light and IR cameras, satellite connectivity and so on.
Control centers will be on land, where the ships will be administered much like air traffic, and ships can be taken over for manual intervention if necessary.
At first there will be limited trials, naturally: the Yara Birkeland will stay within 12 nautical miles of the Norwegian coast, shuttling between Larvik, Brevik and Herøya. It’ll only be going 6 knots — so don’t expect it to make any overnight deliveries.
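
For scale: a knot is one nautical mile per hour, so leg times are easy to estimate. A tiny sketch; the 30-nautical-mile leg below is purely illustrative, not a published route distance:

```python
# Transit-time math at the Yara Birkeland's quoted 6-knot speed.
# (A knot is one nautical mile per hour.)
SPEED_KNOTS = 6.0

def transit_hours(leg_nm: float) -> float:
    """Hours to cover a leg of the given length in nautical miles."""
    return leg_nm / SPEED_KNOTS

# Hypothetical 30 nm coastal leg, for illustration only:
print(f"{transit_hours(30):.0f} hours")  # 5 hours
```

At that pace, even short coastal hops take hours, which fits the ship’s role as a slow, crewless freight shuttle rather than an express service.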

“As a world-leading maritime nation, Norway has taken a position at the forefront in developing autonomous ships,” said Wilhelmsen group CEO Thomas Wilhelmsen in a press release. “We take the next step on this journey by establishing infrastructure and services to design and operate vessels, as well as advanced logistics solutions associated with maritime autonomous operations. Massterly will reduce costs at all levels and be applicable to all companies that have a transport need.”
The Yara Birkeland is expected to be seaworthy by 2020, though Massterly should be operating as a company by the end of the year.

Source: Gadgets – techcrunch