The Tesla Autopilot crash fatality in May points to how safe, not unsafe, self-driving cars can be in the future, and how effective driver assistance features already are at preventing crashes by warning drivers or intervening. The very real danger is that knuckleheads today are mistaking individual driver assist features for an integrated system that allows your attention to drift safely for extended periods of time.
Experts say the development of self-driving cars over the coming decade depends on an unreliable assumption by many automakers: that the humans in them will be ready to step in and take control if the car’s systems fail.
Instead, experience with automation in other modes of transportation, such as aviation and rail, suggests that this strategy will lead to more deaths like that of the Florida Tesla driver in May.
Decades of research show that people have a difficult time keeping their minds on boring tasks, like monitoring systems that rarely fail and hardly ever require them to take action. The human brain continually seeks stimulation. If the mind isn't engaged, it will wander until it finds something more interesting to think about. The more reliable the system, the more likely it is that attention will wane.