Automakers and tech companies have sought to cultivate consumer trust as they develop and launch driver-assist systems that many consider a prelude to self-driving vehicles. Those efforts may be misguided.
Consumers are often confused about the capabilities and limitations of driver-assist systems, which can handle some steering, acceleration and braking tasks. Motorists who place too much trust in these features may find themselves in dangerous, even deadly, situations.
Instead of seeking trust, automakers should emphasize the collaborative nature of driver-assist features, said Matthew Young, director of research at Thatcham Research, an automotive safety agency based in the United Kingdom that develops repair methods and conducts crash tests.
"Automakers will ask me, 'Are you asking us to make these systems so the driver doesn't trust it?' and to some degree the answer is yes," he told Automotive News. "You don't want to sell a car that randomly decides to get one thing wrong. That spooks the driver. You want to try and make sure the driver is always engaged in the driving process."
To help drivers understand the life-and-death distinctions between driver-assist and self-driving technologies, and to assess the competence of these fledgling systems, Thatcham Research recently helped Euro NCAP, the European New Car Assessment Program, develop a grading structure believed to be the first of its kind.
Results from the inaugural testing round of driver-assist systems in highway applications were released this month. The Mercedes-Benz GLE earned a "very good" rating and scored 174 points, the highest of the 10 vehicles tested. Systems on the BMW 3-Series and the Audi Q8 also earned "very good" marks.
Beyond the particulars of the initial results, what's most interesting may be how Thatcham and Euro NCAP devised the testing regimen.
Researchers measured performance across three categories, analyzing how well subsystems worked together to control speed and steering, how effectively vehicles monitored human drivers to ensure they remained engaged, and how backup systems responded in emergencies or during malfunctions.
Establishing a healthy friction and collaboration between human and machine, rather than an implicit trust, is an underlying tenet of the ratings. Young describes this as a careful balancing act. If there's not enough automation, drivers won't appreciate the systems or find them helpful.
"If your keyboard only has half the letters in the alphabet, that doesn't help you write an email," he said. "However, with too much automation, users will come to rely on it even if it's not competent. So there is a critical sweet spot with the person and car working together that provides the safest and most efficient operation."
Without reaching that sweet spot, advanced driver-assist systems can lull drivers into automation complacency, a phenomenon that federal crash investigators have cited as a contributor to a range of U.S. transportation crashes, including fatal incidents involving Tesla's Autopilot system.
Thatcham tested Autopilot in a Model 3, and its findings illustrate the complexity of devising a robust system. Autopilot earned the highest marks of any of the evaluated technologies in two of the three categories. But it earned the lowest score in driver engagement.
While many other automakers use inward-facing cameras to ensure drivers are watching the road ahead, Tesla relies on steering-wheel torque to gauge attention. National Transportation Safety Board investigations have found that to be an inadequate measure of engagement.
Young finds it problematic for more mundane reasons.
"The driver engagement is really small," he said. "During the driving task, the system provides very strong resistance on the steering wheel. If you want to make a minor course correction, the car actually fights you. With other systems, you can interact more and drive around a pothole if you want. Tesla fights this and disconnects, which is psychologically telling the driver, 'If you fight me, I'm going to turn off.' It's the completely wrong ethos with assisted driving."
Overall, Tesla earned a "moderate" grade on the tests.
The Thatcham and Euro NCAP tests come as transportation officials in Britain and Germany propose regulations that would permit automakers to deploy more highly automated systems with SAE Level 3 capabilities, in which responsibility for the driving task can pass between human and machine. Systems currently on the road, including those tested by Thatcham and Euro NCAP, are Level 2, in which humans retain responsibility for vehicle operations at all times.
Safety organizations in the U.S. have embarked on similar efforts that scrutinize driver-assist systems and some of their features.
AAA, the nation's largest motorist organization, has conducted road tests that examine how often driver-assist systems require human intervention and spotlighted the shortcomings of the technology in preventing pedestrian crashes. The Insurance Institute for Highway Safety has developed assessments of front-crash avoidance and vehicle-to-pedestrian front-crash prevention.
IIHS has done research into driver complacency while using driver-assist systems, and a spokesperson says a ratings program for Level 2 automated systems likely will be introduced in the first half of 2021.
As these systems evolve, Young says, so will the testing. For now, he wants motorists to distinguish between driver-assist systems and self-driving technology.
"The key thing is, if you are saying it's assistance, then it involves a driver driving," he said. "You don't want to allow the driver to come out of the loop, and that's where manufacturers want to go. If the systems are so robust, then you don't need the human. We want to very, very clearly draw that line."