Uber in fatal crash detected pedestrian but had emergency braking disabled

The initial report by the National Transportation Safety Board on the fatal self-driving Uber crash in March confirms that the car detected the pedestrian as early as 6 seconds before the crash, but did not slow or stop because its emergency braking systems were deliberately disabled.
Uber told the NTSB that “emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior,” in other words, to ensure a smooth ride. “The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.” It’s not clear why the emergency braking capability even exists if it is disabled while the car is in operation. The Volvo model’s built-in safety systems — collision avoidance and emergency braking, among other things — are also disabled while in autonomous mode.
It appears that in an emergency situation like this one, the “self-driving car” is no better than, and arguably substantially worse than, many ordinary cars already on the road.
It’s hard to understand the logic of this decision. An emergency is exactly the situation when the self-driving car, and not the driver, should be taking action. Its long-range sensors can detect problems accurately from much farther away, while its 360-degree awareness and route planning allow it to make safe maneuvers that a human would not be able to do in time. Humans, even when their full attention is on the road, are not the best at catching these things; relying only on them in the most dire circumstances that require quick response times and precise maneuvering seems an incomprehensible and deeply irresponsible decision.

According to the NTSB report, the vehicle first registered Elaine Herzberg on lidar six seconds before the crash — at the speed it was traveling, that puts first detection at about 378 feet out. Over the next few seconds she was classified first as an unknown object, then as a vehicle, then as a bicycle (the report doesn’t say exactly when each classification took place).
[Image: The car following the collision]
During those six seconds, the driver could and should have been alerted to an anomalous object ahead on the left — whether it was a deer, a car or a bike, it was entering or could enter the road and needed attention. But the system did not warn the driver, and apparently had no way to.
Then, 1.3 seconds before impact, which is to say about 80 feet away, the Uber system decided that an emergency braking procedure would be necessary to avoid Herzberg. But it did not hit the brakes, as the emergency braking system had been disabled, nor did it warn the driver because, again, it couldn’t.
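For scale, those distances follow directly from speed times time. Here's a quick sanity check, assuming the roughly 43 mph that the report's figures imply:

```python
# Sanity-checking the NTSB timeline: distance = speed x time.
# Assumes the ~43 mph implied by the figures above.
MPH_TO_FPS = 5280 / 3600        # 1 mph = ~1.47 ft/s

speed_fps = 43 * MPH_TO_FPS     # about 63 ft/s

events = [("first lidar detection", 6.0),
          ("emergency-braking decision", 1.3)]

for label, seconds_before_impact in events:
    distance_ft = speed_fps * seconds_before_impact
    print(f"{label}: {distance_ft:.0f} ft out")

# first lidar detection: 378 ft out
# emergency-braking decision: 82 ft out
```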
Less than a second before impact, the driver happened to look up from whatever she was doing and saw Herzberg, whom the car had known about in some form for five long seconds by then. The car struck and killed her.
It reflects extremely poorly on Uber that it disabled the car’s ability to respond in an emergency, even as the vehicle was authorized to travel at speed at night, and provided no way for the system to alert the driver should it detect something important. This isn’t just a safety issue, like going on the road with a sub-par lidar system or without checking the headlights — it’s a failure of judgment by Uber, and one that cost a person’s life.
Arizona, where the crash took place, barred Uber from further autonomous testing, and Uber yesterday ended its program in the state.
Uber offered the following statement on the report:
Over the course of the last two months, we’ve worked closely with the NTSB. As their investigation continues, we’ve initiated our own safety review of our self-driving vehicles program. We’ve also brought on former NTSB Chair Christopher Hart to advise us on our overall safety culture, and we look forward to sharing more on the changes we’ll make in the coming weeks.


Waymo reportedly applies to put autonomous cars on California roads with no safety drivers

Waymo has become the second company to apply for the newly available permit to deploy autonomous vehicles without safety drivers on some California roads, the San Francisco Chronicle reports. It would be putting its cars — well, minivans — on streets around Mountain View, where it already has an abundance of data.
The company already has driverless cars in play over in Phoenix, as it showed in a few promotional videos last month. So this isn’t the first public demonstration of its confidence.
California only just made it possible to grant permits allowing autonomous vehicles without safety drivers on April 2; one other company has applied for it in addition to Waymo, but it’s unclear which. The new permit type also allows for vehicles lacking any kind of traditional manual controls, but for now the company is sticking with its modified Chrysler Pacificas. Hey, they’re practical.
The recent fatal collision of an Uber self-driving car with a pedestrian, plus another fatality in a Tesla operating in semi-autonomous mode, make this something of an awkward time to introduce vehicles to the road minus safety drivers. Of course, it must be said that both of those cars had people behind the wheel at the time of their crashes.
Assuming the permit is granted, Waymo’s vehicles will be limited to the Mountain View area, which makes sense — the company has been operating there essentially since its genesis as a research project within Google. So there should be no shortage of detail in its data, and the company already knows the local authorities it would need to work with on any issues like accidents, permit problems and so on.
No details yet on what exactly the cars will be doing, or whether you’ll be able to ride in one. Be patient.


Luminar puts its lidar tech into production through acquisitions and smart engineering

When Luminar came out of stealth last year with its built-from-scratch lidar system, it seemed to beat established players like Velodyne at their own game — but at great expense and with no capability to build at scale. After the tech proved itself on the road, however, Luminar got to work making its device better, cheaper, and able to be assembled in minutes rather than hours.
“This year for us is all about scale. Last year it took a whole day to build each unit — they were being hand assembled by optics PhDs,” said Luminar’s wunderkind founder Austin Russell. “Now we’ve got a 136,000 square foot manufacturing center and we’re down to 8 minutes a unit.”
Lest you think the company has sacrificed quality for quantity, be it known that the production unit is about 30 percent lighter and more power efficient, sees a bit further (250 meters vs. 200) and detects objects with lower reflectivity (think people wearing black clothes in the dark).
The secret — to just about the whole operation, really — is the sensor. Luminar’s lidar systems, like all others, fire out a beam of light and essentially time its return. That means you need a photosensitive surface that can discern just a handful of photons.
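The ranging math itself is simple; the hard part is catching the faint return. A minimal sketch of the time-of-flight calculation (the numbers are illustrative, not Luminar's internal specs):

```python
# Time-of-flight ranging: the pulse travels out and back, so the
# range is half the round-trip time multiplied by the speed of light.
C_M_PER_S = 299_792_458

def tof_range_m(round_trip_s: float) -> float:
    return C_M_PER_S * round_trip_s / 2

# A return from 250 m away (Luminar's claimed range) takes ~1.67 µs:
print(tof_range_m(1.67e-6))   # ~250 m

# Timing precision is everything: 1 ns of timing error is ~15 cm
# of range error, which is why the receiver electronics matter
# as much as the laser.
print(tof_range_m(1e-9))      # ~0.15 m
```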
Most photosensors, like those found in digital cameras and in other lidar systems, use a silicon-based photodetector. Silicon is well-understood, cheap, and the fabrication processes are mature.
Luminar, however, decided to start from the ground up with its system, using an alloy called indium gallium arsenide, or InGaAs. An InGaAs-based photodetector works at a different wavelength of light (1,550 nm rather than ~900 nm) and is far more efficient at capturing it.
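The physics comes down to band gaps: a photodetector can only absorb photons whose energy exceeds its band gap, and silicon's (~1.12 eV) is too large for 1,550 nm light, while InGaAs's (~0.75 eV) is not. A quick check with the standard E[eV] ≈ 1240/λ[nm] approximation, using 905 nm as the typical silicon-lidar wavelength:

```python
# Which detector materials can absorb which lidar wavelengths?
# A photon is absorbed only if its energy exceeds the band gap.
def photon_energy_ev(wavelength_nm: float) -> float:
    return 1240 / wavelength_nm   # standard approximation

BAND_GAP_EV = {"silicon": 1.12, "InGaAs": 0.75}  # typical textbook values

for wavelength_nm in (905, 1550):
    energy = photon_energy_ev(wavelength_nm)
    absorbers = [m for m, gap in BAND_GAP_EV.items() if energy > gap]
    print(f"{wavelength_nm} nm ({energy:.2f} eV): {absorbers}")

# 905 nm (1.37 eV): ['silicon', 'InGaAs']
# 1550 nm (0.80 eV): ['InGaAs']
```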
The more light you’ve got, the better your sensor — that’s usually the rule. And so it is here: Luminar’s InGaAs sensor, paired with a single laser emitter, produced images tangibly superior to those from devices of similar size and power draw, and with fewer moving parts.
The problem is that indium gallium arsenide is like the Dom Perignon of sensor substrates. It’s expensive as hell and designing for it is a highly specialized field. Luminar only got away with it by minimizing the amount of InGaAs used: only a tiny sliver of it is used where it’s needed, and they engineered around that rather than use the arrays of photodetectors found in many other lidar products. (This restriction goes hand in glove with the “fewer moving parts” and single laser method.)
Last year Luminar was working with a company called Black Forest Engineering to design these chips, and finding their paths inextricably linked (unless someone in the office wanted to volunteer to build InGaAs ASICs), Luminar bought the firm. Black Forest’s 30 employees, combined with the 200 people hired since coming out of stealth, bring the company to 350 total.
By bringing the designers in house and building their own custom versions of not just the photodetector but also the various chips needed to parse and pass on the signals, they brought the cost of the receiver down from tens of thousands of dollars to… three dollars.
“We’ve been able to get rid of these expensive processing chips for timing and stuff,” said Russell. “We build our own ASIC. We only take like a speck of InGaAs and put it onto the chip. And we custom fab the chips.”
“This is something people have assumed there was no way you could ever scale it for production fleets,” he continued. “Well, it turns out it doesn’t actually have to be expensive!”
Sure — all it took was a bunch of geniuses, five years and an eight-figure budget (and I’d be surprised if the $36M in seed funding was all they had to work with). But let’s not quibble.
[Photo: Quality inspection time in the clean room]
It’s all being done with a view to the long road ahead, though. Last year the company demonstrated that its systems not only worked, but worked well, even if there were only a few dozen of them at first. And they could get away with it, since as Russell put it, “What everyone has been building out so far has been essentially an autonomous test fleet. But now everyone is looking into building an actual, solidified hardware platform that can scale to real world deployment.”
Some companies, like Toyota and a couple of others that remain unnamed, took that leap of faith, even though it might have meant temporary setbacks.
“It’s a very high barrier to entry, but also a very high barrier to exit,” Russell pointed out. “Some of our partners, they’ve had to throw out tens of thousands of miles of data and redo a huge portion of their software stack to move over to our sensor. But they knew they had to do it eventually. It’s like ripping off the band-aid.”
We’ll soon see how the industry progresses — with steady improvement but also intense anxiety and scrutiny following the fatal crash of an Uber autonomous car, it’s difficult to speculate on the near future. But Luminar seems to be looking further down the road.


Massterly aims to be the first full-service autonomous marine shipping company

Logistics may not be the most exciting application of autonomous vehicles, but it’s definitely one of the most important. And the marine shipping industry — as you can imagine, one of the oldest industries in the world — is ready for it. Or at least two major Norwegian shipping companies are: they’re building an autonomous shipping venture called Massterly from the ground up.
“Massterly” isn’t just a pun on mass; “Maritime Autonomous Surface Ship” is the term Wilhelmsen and Kongsberg coined to describe the self-captaining boats that will ply the seas of tomorrow.
These companies, with “a combined 360 years of experience,” as their video puts it, are trying to get the jump on the next phase of shipping, starting with the world’s first fully electric and autonomous container ship, the Yara Birkeland. It’s a modest vessel by shipping standards — 250 feet long and capable of carrying 120 containers, according to the concept — but it will be able to load, navigate and unload without a crew.
[Concept art: The Yara Birkeland]
(One assumes there will be some people on board or nearby to intervene if anything goes wrong, of course. Why else would there be railings up front?)
Each ship will carry major radar and lidar units, visible-light and IR cameras, satellite connectivity and so on.
Control centers will be on land, where the ships will be administered much like air traffic, and a ship can be taken over for manual intervention if necessary.
At first there will be limited trials, naturally: the Yara Birkeland will stay within 12 nautical miles of the Norwegian coast, shuttling between Larvik, Brevik and Herøya. It’ll only be going 6 knots — so don’t expect it to make any overnight deliveries.
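For anyone not fluent in nautical units, here's what those trial constraints work out to:

```python
# Converting the trial limits to land units.
KM_PER_NAUTICAL_MILE = 1.852      # also km/h per knot

speed_knots = 6
range_nm = 12

speed_kmh = speed_knots * KM_PER_NAUTICAL_MILE
print(f"{speed_knots} knots = {speed_kmh:.1f} km/h "
      f"({speed_kmh / 1.609:.1f} mph)")   # 11.1 km/h (6.9 mph)

range_km = range_nm * KM_PER_NAUTICAL_MILE
print(f"{range_nm} nm = {range_km:.1f} km "
      f"({range_km / 1.609:.1f} miles)")  # 22.2 km (13.8 miles)
```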

“As a world-leading maritime nation, Norway has taken a position at the forefront in developing autonomous ships,” said Wilhelmsen group CEO Thomas Wilhelmsen in a press release. “We take the next step on this journey by establishing infrastructure and services to design and operate vessels, as well as advanced logistics solutions associated with maritime autonomous operations. Massterly will reduce costs at all levels and be applicable to all companies that have a transport need.”
The Yara Birkeland is expected to be seaworthy by 2020, though Massterly should be operating as a company by the end of the year.
