Thursday, April 05, 2018

The Deadly Flaw Hiding in Self-Driving Cars and Pilotless Airplanes (Hint: It's Humans)

Photo: Timtempleton  CC BY-SA 4.0

A common theme on this blog has been the promise, and pitfalls, of automation in aviation. Pilotless airplanes have been trumpeted simultaneously as the final nail in the coffin of aviation accidents and as the solution to the ongoing worldwide pilot shortage.

Not to be outdone in the hyperbole of the future department, driverless cars are heralded as the end of everything from traffic jams and fatalities to the need to even own an automobile. Simply summon one on your smartphone and away you go to the opera or to work.

The reality of the future, while not thwarting all those dreams outright, may be riding the brakes a bit.

The fatal collision between a driverless Uber car and a pedestrian last month is calling into question the idea that driverless technology is ready for prime time. And the interesting part is that Uber, for their part, thought the same thing. Their driverless car wasn't really driverless, but had a driver hired for the purpose of sitting behind the wheel to take over if the machine made a mistake.

Well, the machine did make a mistake: a woman crossing the road outside of a crosswalk was hit and later died of her injuries. Tragic as that was, it is inevitable that these types of accidents are going to occur. Sensor technology, while good and getting better, still has a long way to go. If you find it difficult to drive in heavy rain or snow, machines have even more difficulty.

These problems will eventually be solved, but in the interim, it will be up to humans, whether in the car or at a remote facility, to monitor the machines. In this case, the human monitor was not able to avert the crash. This, then, is the flaw in the system: humans make lousy monitors of machines, be it an autonomous car or an automation-flown airliner.

A recent article in the WSJ highlighted the stressful nature of the job for which Uber's monitor/drivers were responsible:

“The computer is fallible, so it’s the human who is supposed to be perfect,” one former Uber test driver said. “It’s kind of the reverse of what you think about computers.”

Also, as autonomous technology improves, the need for drivers to take action diminishes, making it harder to stay focused, test drivers said.

Humans, being human, become bored and distracted after a very short period of time. Well, then, you might say, we should employ other machines to watch the machines. This raises the question of what the monitor machine (or more likely software) should watch, and why that functionality couldn't simply be incorporated into the primary control software.

This also gets to the nature of how machines think versus how humans think. Humans are better than machines at processing ambiguous information and confronting situations which are new to them. AI, or artificial intelligence, is how software engineers hope to emulate the human ability to make decisions when confronted with novel situations that haven't been pre-programmed.

This capability is getting better all the time, but it has a way to go before humans can be completely written out of the equation. In the meantime, humans will need to be somewhere in the control loop. We should all hope that the human monitor isn't dozing when the sun gets in the eyes of the computer-driven car while we're crossing the street.


I welcome feedback. If you have any comments, questions or requests for future topics, please feel free to comment. Comment moderation is on to reduce spam, but I'll post all legit comments. Thanks for stopping by and don't forget to visit my Facebook page!

Capt Rob