UCLse Blog

Thoughts from the staff of the UCL Centre for Systems Engineering

Autonomous vehicles, guidelines and challenges

By Ian Raper, on 11 September 2017

There has been a recent article on The Conversation about the world’s first ethical guidelines for driverless cars.

It is good to see that people are trying to influence how this new world will look, and that this is being taken seriously by legislative bodies. The description in the article raises some important questions which I am sure the autonomy community is already considering, but I suspect it may take some crashes and some court cases before we really understand what is acceptable.

Based on the article, there is an important issue around the transition period, before the technology is mature enough for full autonomy, concerning who is in control. The suggestion that the system could hand back control in morally ambiguous dilemma situations immediately raises the question of how much time would be left between the hand-over of control to the human driver and the expected incident. As autonomy takes over more of the function of driving, we can expect the human occupant to become de-skilled. So we may effectively have a low-skilled operator being asked to react to a challenging situation in a short time period. Scientific American has already published an article on this: What NASA could teach Tesla about autopilot’s limits.
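To make the timing concern concrete, here is a minimal sketch in Python of the arithmetic involved. All the figures are my own illustrative assumptions, not data from the guidelines or from any study: a hazard detected some distance ahead at motorway speed leaves only a few seconds, which may well be less than a disengaged, de-skilled occupant needs to take over.

```python
# A rough, illustrative calculation of the time available after a
# handover of control. All figures are assumptions for the sake of
# the example, not data from the guidelines or any study.

def time_to_incident(distance_m: float, speed_kmh: float) -> float:
    """Seconds until the vehicle reaches a hazard, at constant speed."""
    speed_ms = speed_kmh / 3.6  # convert km/h to m/s
    return distance_m / speed_ms

# Hypothetical scenario: hazard detected 120 m ahead at motorway speed.
available = time_to_incident(distance_m=120, speed_kmh=110)

# Hypothetical takeover time for a disengaged driver: regaining
# situational awareness, hands to wheel, deciding what to do.
assumed_takeover_s = 7.0

print(f"Time to incident: {available:.1f} s")            # ~3.9 s
print(f"Assumed takeover time: {assumed_takeover_s:.1f} s")
print("Handover leaves enough time" if available > assumed_takeover_s
      else "Handover leaves too little time")
```

On these invented numbers the handover fails by a wide margin, which is exactly the scenario the Scientific American piece worries about.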

The article also notes that the car will have a “black box” style recorder so that it is clear who was in control of the vehicle at the time of a collision. To the suspiciously minded, this suggests that manufacturers could try to pass the blame for accidents to the nominal human driver whenever their autonomy system can no longer arrive at an unambiguous and ‘safe’ answer. For courts to be able to rule on such cases, I expect that both the level of skill a driver can reasonably be expected to have, and the time required after handover of control to avoid an incident, will need to be established.
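As a thought experiment, a recorder that could settle the “who was in control” question would need to log rather more than a single mode flag. Here is a hedged sketch of the kind of record it might keep; the field names and structure are my own invention, not anything specified in the guidelines:

```python
# A hypothetical sketch of what a "black box" control-handover record
# might capture. Field names are illustrative only; the guidelines
# do not specify a format.
from dataclasses import dataclass

@dataclass(frozen=True)
class HandoverRecord:
    timestamp_utc: str           # when the handover request was issued
    mode_before: str             # "autonomous" or "manual"
    mode_after: str
    warning_lead_time_s: float   # warning issued this long before handover
    driver_acknowledged: bool    # did the driver confirm taking control?
    incident_followed: bool      # was there a collision after handover?

# A court would presumably compare warning_lead_time_s against the
# (yet to be established) time a reasonably skilled driver needs.
example = HandoverRecord(
    timestamp_utc="2017-09-11T10:15:42Z",
    mode_before="autonomous",
    mode_after="manual",
    warning_lead_time_s=4.0,
    driver_acknowledged=True,
    incident_followed=False,
)
```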

However, it is good to see that the guidelines themselves are clear that “abrupt handover of control to the driver (‘emergency’) is virtually obviated”, though this requires the software and technology to be appropriately designed. I expect there is much work still to do to characterise the capabilities of the technology in a sufficiently wide range of real-world scenarios to reach this aspiration. It is also good to see that the onus is on developers to adapt their systems to human capabilities rather than expecting humans to enhance theirs. From the discussion above, it seems clear that this must take into account the potentially de-skilled human driver, rather than assuming skill levels will be maintained at the level of today’s drivers. Finally, the guidelines also anticipate that in such an emergency situation the system should adopt a “safe condition”, but acknowledge that the definition of this has not yet been established.

We could look to the aviation sector and note that most commercial aircraft have used autopilot for some decades, with an improving safety record. But we must also remember that pilots typically spend many hours in a simulator, with the opportunity to be taken through a variety of incident scenarios to maintain and enhance their skills. For our driverless cars, how are we going to maintain human skill levels such that drivers can react in a reasoned and safe way?

For those who enjoy grappling with such ethical dilemmas in this age of technology, and indeed those who will be responsible for implementing such systems, I can recommend going back to the likes of Asimov’s robot series (“I, Robot” and “The Rest of the Robots”) to understand that, however hard we try to foresee and control this new world, there is always ambiguity to catch us out.

Engineering, Ethics & Risk

By Ian Raper, on 30 September 2015

The recent issue with emissions testing has highlighted some matters which are very important within the field of systems engineering, and indeed engineering in general.

The first is ethics, which is considered important by the various professional bodies representing the engineering professions. For example, to quote from the Institution of Engineering and Technology (IET), “Being a professional engineer means that the wider public trust you to be competent and to adhere to certain ethical standards”.

We therefore have to question how the ‘cheat device’ software came to be present in those vehicles and used in operation. There is various speculation in the press about these matters, and it is not the purpose of this article to comment on such media reporting. It can be presumed, though, that engineers either chose to, or were coerced into, using software functionality designed to aid factory testing beyond that design intent.

The second issue relates to the risk culture of the organisation. Did anyone in the organisation make the association between the inappropriate use of this software and the potential impact of it being discovered? In risk management terms, this event would probably have been hoped to have a very low likelihood (i.e. they hoped they would not be found out), but the consequence was always going to be huge ($billions wiped off market value, massive loss of trust in the brand). Was this assessment of the risk ever made, was it captured, and if so, how far up the organisation did the risk review go?
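A minimal sketch of the standard likelihood-times-consequence view of risk makes the point. The scales and scores below are my own illustrative choices, not a real assessment of the case:

```python
# A minimal sketch of a likelihood x consequence risk assessment.
# Scales and scores are illustrative assumptions, not real data.

def risk_score(likelihood: int, consequence: int) -> int:
    """Simple risk-matrix score; both inputs on a 1 (low) to 5 (high) scale."""
    return likelihood * consequence

# How the 'cheat device' risk might have looked if anyone had scored it:
as_hoped = risk_score(likelihood=1, consequence=5)   # "we won't be found out"
in_hindsight = risk_score(likelihood=4, consequence=5)

print(as_hoped)       # 5  - easy to wave through a risk review
print(in_hindsight)   # 20 - should have been escalated to the top
```

The consequence score never changes; only the optimism about the likelihood does, which is why the question of who reviewed the assessment, and how far up it went, matters so much.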

Organisations are complex systems in their own right, and the culture of the organisation is an emergent property of the interactions of the various parts (management, departments, employees, suppliers, etc.). Culture can also be affected by a reinforcing feedback loop, i.e. behaviour begets the same kind of behaviour. So any review of the organisation needs to recognise these factors.
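The reinforcing loop can be illustrated with a toy model, with entirely hypothetical parameters: if each instance of a behaviour slightly raises the chance that colleagues copy it, its prevalence grows on itself until something external checks it.

```python
# A toy model of a reinforcing feedback loop in organisational culture:
# the more widespread a behaviour is, the faster it spreads. Parameters
# are invented for illustration, not calibrated to any organisation.

def simulate(prevalence: float, reinforcement: float, steps: int) -> list[float]:
    """Logistic-style growth: behaviour begets the same kind of behaviour."""
    history = [prevalence]
    for _ in range(steps):
        prevalence += reinforcement * prevalence * (1.0 - prevalence)
        history.append(prevalence)
    return history

# Start with 5% of staff cutting a corner; each step it reinforces itself.
for step, level in enumerate(simulate(0.05, reinforcement=0.5, steps=10)):
    print(f"step {step:2d}: {level:.0%}")
```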

This is just a very brief look at the complexity of two of the issues that surround this situation. It will be interesting over the coming weeks, months and even years to see whether the true root causes are identified and addressed. It is a useful wake-up call to all organisations that ethics are important and that appropriate risk management might help avoid making the worst decisions.