Autonomous vehicles, guidelines and challenges

By Ian Raper, on 11 September 2017

The Conversation recently published an article about the world’s first ethical guidelines for driverless cars.

It is good to see that people are trying to influence how this new world will look, and that this is being seriously considered by legislative bodies. The description in the article raises some important questions which I’m sure the autonomy community is already thinking about, but I suspect it may take some crashes and some court cases before we really understand what is acceptable.

Based on the article, there is an important issue around the transition period before the technology is mature enough for full autonomy: who is in control? The suggestion that the system could hand back control in morally ambiguous dilemma situations immediately raises the question of how much time would be left between the handover of control to the human driver and the expected incident. As autonomy takes over more of the driving function, we can expect the human occupant to become de-skilled. So we may effectively have a low-skilled operator being asked to react, within a short time period, to a challenging situation. Scientific American has already published an article on this: What NASA could teach Tesla about autopilot’s limits.
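To make the timing concern concrete, here is a minimal sketch in Python of the kind of check a handover policy would need. The reaction times and the de-skilling factor are entirely hypothetical numbers of my own, not figures from the guidelines; the point is only that the time available before the anticipated incident must exceed the reaction time we can realistically expect from an out-of-practice driver.

```python
# Illustrative sketch only: the de-skilling factor and the times below
# are hypothetical assumptions, not values from the guidelines.

ALERT_DRIVER_REACTION_S = 1.5   # assumed reaction time for a practised driver
DESKILLING_FACTOR = 2.0         # assumed penalty for an out-of-practice driver

def handover_is_plausible(time_to_incident_s: float) -> bool:
    """Return True only if a de-skilled driver could plausibly react in time."""
    expected_reaction_s = ALERT_DRIVER_REACTION_S * DESKILLING_FACTOR
    return time_to_incident_s > expected_reaction_s

# A handover two seconds before a predicted incident fails this test:
print(handover_is_plausible(2.0))   # False
print(handover_is_plausible(10.0))  # True
```

Under these assumed numbers, any handover offering less than about three seconds of warning would be asking too much of the driver, which is exactly the scenario the guidelines aim to avoid.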

The article also notes that the car will have a “black box” style recorder so that it is clear who was in control of the vehicle at the time of a collision. To the suspiciously minded, this suggests that manufacturers could try to pass the blame for accidents to the nominal human driver whenever their autonomy system can no longer arrive at an unambiguous and ‘safe’ answer. For courts to be able to rule on such cases, I expect that both the level of skill a driver can reasonably be expected to have and the time required after handover of control to avoid an incident will need to be established.
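At its core, the recorder only needs to answer one question: who held control at a given moment? Below is a minimal sketch of such a control log, again with hypothetical names of my own; a real recorder would capture far more (sensor state, warnings issued, driver responses, and so on).

```python
from dataclasses import dataclass
from bisect import bisect_right

@dataclass(frozen=True)
class ControlEvent:
    timestamp_s: float   # seconds since journey start
    controller: str      # "system" or "driver"

class ControlLog:
    """Append-only record of control transitions, as a black box might keep."""

    def __init__(self):
        self._events: list[ControlEvent] = []

    def record(self, timestamp_s: float, controller: str) -> None:
        self._events.append(ControlEvent(timestamp_s, controller))

    def controller_at(self, timestamp_s: float) -> str:
        """Who held control at a given moment, e.g. the time of a collision."""
        times = [e.timestamp_s for e in self._events]
        idx = bisect_right(times, timestamp_s) - 1
        return self._events[idx].controller if idx >= 0 else "unknown"

log = ControlLog()
log.record(0.0, "system")
log.record(845.2, "driver")       # handover 845.2 s into the journey
print(log.controller_at(846.0))   # "driver"
```

Note that such a log only establishes who was nominally in control; it says nothing about whether the handover gave that person a fair chance to act, which is why the skill and timing questions above still need answering.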

However, it is good to see that the guidelines themselves are clear that ‘abrupt handover of control to the driver (“emergency”) is virtually obviated’, though this requires the software and technology to be appropriately designed. I expect there is much work still to do to characterise the capabilities of the technology across a sufficiently wide range of real-world scenarios to reach this aspiration. It is also good to see that the onus is on developers to adapt their systems to human capabilities, rather than expecting humans to enhance theirs. From the discussion above it seems clear that this must take into account the potentially de-skilled human driver, rather than assuming skill levels will be maintained at the level of today’s drivers. Finally, the guidelines also anticipate that in such an emergency situation the system should adopt a “safe condition”, while acknowledging that the definition of this has not been established.
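As a thought experiment, the interplay between handover and the undefined “safe condition” can be expressed as a simple state machine. This is purely a sketch under my own assumptions; the states, the timeout, and the fallback manoeuvre are all hypothetical, not anything prescribed by the guidelines.

```python
from enum import Enum, auto

class Mode(Enum):
    AUTONOMOUS = auto()
    HANDOVER_REQUESTED = auto()   # driver asked to take over, clock running
    MANUAL = auto()
    SAFE_CONDITION = auto()       # undefined in the guidelines; assumed here
                                  # to mean, e.g., pull over and stop

TAKEOVER_BUDGET_S = 10.0  # hypothetical time allowed for the driver to respond

def next_mode(mode: Mode, driver_confirmed: bool, elapsed_s: float) -> Mode:
    """Advance the handover state machine one step."""
    if mode is Mode.HANDOVER_REQUESTED:
        if driver_confirmed:
            return Mode.MANUAL
        if elapsed_s > TAKEOVER_BUDGET_S:
            # Never force an abrupt emergency handover: fall back instead.
            return Mode.SAFE_CONDITION
    return mode

print(next_mode(Mode.HANDOVER_REQUESTED, driver_confirmed=False, elapsed_s=12.0))
# Mode.SAFE_CONDITION
```

The attraction of this structure is that the driver is never made responsible by default: if they do not confirm the takeover in time, the system keeps responsibility and seeks its safe condition, whatever that turns out to mean in practice.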

We could look to the aviation sector and note that most commercial aircraft have utilised autopilot for some decades, with an improving safety record. But we must also remember that pilots typically spend many hours in a simulator, where they can be taken through a variety of incident scenarios to maintain and enhance their skills. For our driverless cars, how are we going to maintain human skill levels so that drivers can react in a reasoned and safe way?

For those who enjoy grappling with such ethical dilemmas in this age of technology, and indeed for those who will be responsible for implementing such systems, I can recommend going back to the likes of Asimov’s robot series (“I, Robot” and “The Rest of the Robots”) to understand that, however hard we try to foresee and control this new world, there is always ambiguity to catch us out.