Head of Ford autonomy says self-driving systems need ‘safe and reliable development processes’

As Ford prepares to launch its Mustang Mach-E electric car, the first in its portfolio to benefit from a new highway semi-self-driving suite, its executives are arguing for a cautious approach to autonomy in the personal vehicle space, with safety as the main driver of both developing the technology and keeping it under control.

Scott Griffith, who oversees Ford’s autonomy and mobility division, made the case Tuesday in a Medium post unequivocally titled “Playing it Safe. There’s no other way to launch self-driving cars.” In this, Ford joins Daimler, Volkswagen, General Motors and others in what amounts to a rejection of Tesla’s ambitious (and, as some have argued, misleading) public rollout of the “Full Self Driving” variant of its Autopilot system.

“Over the decades, Ford and the rest of the automotive industry have devoted a tremendous amount of resources to developing robust, comprehensive processes to ensure we design and deploy safe vehicles because we care about the safety of our customers,” Griffith said.

Most discussions of autonomous and semi-autonomous technology essentially boil down to the idea that for the public to broadly accept these systems, the risks inherent in the technology must be much lower than those posed by the unreliability of human drivers. As long as that is not the case, potential buyers are more likely to trust human judgment than that of a computer.

“That’s one area where we want to improve our understanding – learning lessons from the very best human drivers and turning those lessons into algorithms that can be applied to self-driving cars,” Griffith said. “That way we can make sure that if there is an unexpected icy road or a pedestrian suddenly stepping into traffic, the car can use all the friction on the road very reliably to get out of the danger zone.”

“Of course there is a clear distinction between vehicles with people at the helm and vehicles without,” he said. “Human-driven vehicles have been refined and improved over time to help improve driving behavior, but the challenge for self-driving cars will be to manage all driving activities themselves – making decisions and maneuvers to navigate numerous scenarios.”

“Just as decades of experience have given us safe and reliable development processes for human-driven cars, we must draw on that experience and develop the same processes for self-driving cars,” said Griffith.

Daimler, along with Volkswagen, BMW, FCA, Continental and several other auto and technology companies, published a white paper in 2019 entitled “Safety First for Automated Driving,” which argued that the industry should move well beyond that tipping point, referring to the concept as a “positive risk balance.” Mercedes-Benz plans to release its Drive Pilot system to a wider audience next year.

“The introduction of automated driving has great potential to reduce the number of accidents. However, there are also major challenges in realizing the full safety benefit of automated driving to achieve the objective of ‘positive risk balance versus human driving performance,’ as recommended by the German Ethics Commission,” the paper reads.

The benefits of this approach have already been borne out to some extent by GM, whose Super Cruise suite received higher marks from Consumer Reports than Tesla’s Autopilot. GM plans to expand the technology across its entire lineup in the coming years.

Griffith’s post addresses other aspects of the technological and ethical obstacles to autonomy, and we recommend that you read it in its entirety.