
UCLse Blog


Thoughts from the staff of the UCL Centre for Systems Engineering


Autonomous vehicles, guidelines and challenges

By Ian Raper, on 11 September 2017

There has been a recent article on The Conversation about the world’s first ethical guidelines for driverless cars.

It is good to see that people are trying to influence how this new world is going to look, and that this is being seriously considered by legislative bodies. The description in the article raises some important questions which I'm sure the autonomy community is already thinking about, but I suspect it may take some crashes and some court cases before we really understand what is acceptable.

Based on the article, there is an important issue around the transition period, before the technology is mature enough for full autonomy, concerning who is in control. The suggestion that the system could hand back control in morally ambiguous dilemma situations immediately raises the question of how much time would be left between the hand-over of control to the human driver and the expected incident. As autonomy takes over more of the function of driving we can expect the human occupant to become de-skilled. So we may effectively have a low-skilled operator being asked to react, within a short time period, to a challenging situation. Scientific American has already published an article on this: What NASA could teach Tesla about autopilot’s limits.
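As a rough illustration of why the length of that hand-over window matters, the sketch below works out how far a vehicle travels while a disengaged occupant takes back control; the speed and takeover times are assumed figures for illustration, not values taken from the guidelines or the studies mentioned.

```python
# Back-of-the-envelope illustration only: the speed and takeover times are assumptions.
MPH_TO_MS = 0.44704

speed_mph = 70                    # assumed motorway speed
takeover_times_s = [2, 5, 10]     # assumed range of times for a human to re-engage

speed_ms = speed_mph * MPH_TO_MS
for t in takeover_times_s:
    distance_m = speed_ms * t
    print(f"At {speed_mph} mph, a {t} s takeover covers roughly {distance_m:.0f} m "
          "before the human is back in control.")
```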

The article also notes that the car will have a “Black Box” style recorder so that it is clear who was in control of the vehicle at the time of a collision. To the suspiciously minded, this suggests that manufacturers could try to pass the blame for accidents to the nominal human driver whenever their autonomy system can no longer arrive at an unambiguous and ‘safe’ answer. For courts to be able to rule on such cases, I expect that both the level of skill a driver can reasonably be expected to have and the time required between handover of control and the incident for it to be avoidable will need to be established.

However, it is good to see that the guidelines themselves are clear that “abrupt handover of control to the driver (‘emergency’) is virtually obviated”, though this requires the software and technology to be appropriately designed. I expect there is much work still to do to characterise the capabilities of the technology across a sufficiently wide range of real-world scenarios to reach this aspiration. It is also good to see that the onus is on developers to adapt their systems to human capabilities rather than expecting humans to enhance their capabilities. From the discussion above it seems clear that this must take into account the potentially de-skilled human driver, rather than assuming that skill levels will be maintained at the level of today’s drivers. Finally, it seems that the guidelines also anticipate that in such an emergency situation the system should adopt a “safe condition”, but acknowledge that the definition of this has not been established.

We could look to the aviation sector and note that most commercial aircraft have utilised autopilot for some decades with an improving safety record. But we must also remember that pilots typically spend many hours in a simulator with the opportunity to be taken through a variety of incident scenarios to enhance their skills. For our driverless cars how are we going to maintain the human skill level such that they can react in a reasoned and safe way?

For those who enjoy grappling with such ethical dilemmas in this age of technology, and indeed those who will be responsible for implementing such systems, I can recommend going back to read the likes of Asimov’s robot series (“I, Robot” and “The Rest of the Robots”) to understand that, however hard we try to foresee and control this new world, there is always ambiguity to catch us out.

Finding the similarities, not the differences

By Ian Raper, on 21 December 2016

I recently read an article on the Nesta blog, “Fall in love with the solution, not the problem”, which is a phrase to immediately get a systems engineer’s interest.

I’ll immediately say that I’ve not worked in the development sector and so I’m not going to comment on the efficacy of the proposed approach in that context, but it did get me thinking (particularly the diagram ‘3 possible strategies for problem solving‘) about lifecycle models in general.

The first strategy, a problem focused approach, is what some might say Systems Engineering espouses. The traditional lifecycle model of choice in the SE world is the Vee model.  If you simply assume that the left hand side of the Vee is a linear process then it would indeed look like you spend a lot of time fully exploring the problem and then move on to creating a solution to solve the defined problem.

But the Vee model is just a model, and like all models it is an abstraction of reality. It is useful in providing some scaffolding around which systems engineers can communicate, but competent engineers will recognise that there are many subtle nuances that come to play in the real world. They will also recognise that this is not the only lifecycle model and will be able to blend lifecycle thinking and mix elements of different approaches as appropriate (e.g. based on development risk).

These models can also be useful education tools in that you have to get people on the first rung of the ladder of understanding before showing them the possible complexity in application. (If you want to read another author’s views on the Vee model then try The Design of Design by Frederick P. Brooks Jr. It should certainly get you thinking even if you don’t agree with all he says.)

If we consider the process of architecting within this stage of the lifecycle, then authors such as Maier & Rechtin (The Art of Systems Architecting, CRC Press, 2009) show a process model that includes parallel activities of problem structuring and solution structuring. And if we consider an approach such as the Seven Samurai of Systems Engineering (Proceedings of the International Council on Systems Engineering (INCOSE) International Symposium, 2004), then we recognise that not only do we have to consider the ‘solution’ (the system of interest) but also how that solution is created, how it is supported and maintained, and how the solution will change the context into which it is deployed (which might create new problems to be solved).

This implies fully immersing yourself in the context where the problem/solution exists, and engaging with the wide range of stakeholders that are, or will be, concerned with the solution once deployed. A design on the bench is not the same as a design in the environment where it is meant to operate. It also means understanding what already exists and the benefits and challenges of those existing solutions.

When I think about it in these terms, then what I understand as good systems engineering starts to look more like the co-evolution approach (at least in the pictorial form) from the Nesta article.

As always, these thoughts are my own and thoughts can change with time and additional input. So please feel free to comment and add your own perspective.

UCLse selected to deliver Space Systems Engineering training to the European Space Agency

By Ian Raper, on 11 March 2016

UCL’s Mullard Space Science Laboratory (MSSL) has won a major two-year contract to provide space systems engineering training to the next generation of systems engineers at the European Space Agency (ESA).

MSSL’s Technology Management Group will deliver the training at a venue close to ESA’s European Space Research and Technology Centre (ESTEC) in the Netherlands, with a team primarily consisting of Dr Michael Emes (Programme Manager), Ian Raper and Dr Doug Cowper.

Dr Emes said: “We are excited to be able to continue our relationship with ESA, following on from the successful project management training now in its 2nd year. Winning this contract is a testimony to the quality of the training we offer, the strength of our staff in the subject area, and recognition of the strong space systems engineering heritage at MSSL.

“The UCL Centre for Systems Engineering, hosted within MSSL, has been training systems engineers for over 17 years through its MSc programmes and directly to industry with clients in many sectors. We will be drawing on all of our experience in both teaching and space systems engineering, together with experts from ESA, in order to ensure that this development opportunity works in supporting the delivery of ESA Agenda 2015.”

The major part of the training will be aimed at ESA staff who are technical discipline engineers, space scientists and newly appointed systems engineers. We will also be delivering targeted training for space systems engineers with several years of experience who are looking for the latest trends and techniques in systems engineering.


 

The ESA Space Systems Engineering Training Course is being carried out under a programme of, and funded by, the European Space Agency. The views expressed herein are those of the UCL Centre for Systems Engineering and can in no way be taken to reflect the official opinion of the European Space Agency.

Teaching compared with Curating

By Ian Raper, on 3 December 2015

Following my PGCert in Teaching and Learning in Higher and Professional Education, I’ve taken to thinking more about the profession that I have joined. I like finding metaphors and analogies to understand the world around me, and this time I thought about an analogy between curators and teachers.

This article is based on my incomplete knowledge of at least one of these roles, and I hope others will contribute their thoughts.

A curator is an expert in their subject field and also in the art of curating. They have a passion for the subject and wish to promote that through displaying the artefacts using their skills to make them engaging, interesting and relevant.

No one museum will have all of the artefacts related to their subject field, and within an exhibition they can only display a subset of what they do have.

If an attendee wishes to know more than is on display, or if the question exceeds the curator’s own knowledge, then they will know how to help the enquirer discover what they are looking for.

And I see parallels with teaching on an MSc level programme.

Within a lecture we have a limited window in which to display the ‘artefacts’ of the topic at hand. We are the curators of the knowledge of our field and we must use our teaching and learning skills to convey these to the students in an interesting and engaging way.

No one lecturer can hold all the knowledge but they should know the core structure of the knowledge and have the skills to research areas they don’t yet understand.

Engineering, Ethics & Risk

By Ian Raper, on 30 September 2015

The recent issue with emissions testing has highlighted a few issues which are very important within the field of systems engineering, and indeed engineering in general.

The first is ethics, which is considered important by the various professional bodies representing the engineering professions. For example, to quote the Institution of Engineering and Technology (IET): “Being a professional engineer means that the wider public trust you to be competent and to adhere to certain ethical standards”.

We therefore have to question how the ‘cheat device’ software came to be present in, and used during the operation of, those vehicles. There is much speculation in the press about these matters, and it is not the purpose of this article to comment on such media reporting. It can be presumed, though, that engineers either chose to, or were coerced into, using software functionality designed to aid factory testing beyond that design intent.

The second issue relates to the risk culture of the organisation. Did anyone in the organisation make the association between the inappropriate use of this software and the potential impact of it being discovered? In risk management terms this event would probably have been hoped to have a very low likelihood (i.e. they hoped they wouldn’t be found out), but the consequence was always going to be huge ($billions wiped off market value, massive loss of trust in the brand). Was this assessment of the risk ever made, was it captured, and if so how far up the organisation did the risk review go?
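To see why that trade should never have looked acceptable in simple risk terms, a back-of-the-envelope expected-exposure calculation helps; the probability and consequence figures below are invented purely for illustration.

```python
# Illustrative figures only: both numbers are assumptions, not estimates of the real case.
probability_of_discovery = 0.05        # assumed: treated internally as "very unlikely"
consequence_if_discovered = 30e9       # assumed: tens of $billions in lost value, fines and trust

expected_exposure = probability_of_discovery * consequence_if_discovered
print(f"Expected exposure: ${expected_exposure / 1e9:.1f}bn")
# Even at a 5% likelihood, the expected exposure runs into $billions: a risk that
# should have been captured, escalated and reviewed at the top of the organisation.
```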

Organisations are complex systems in their own right, and the culture of the organisation is an emergent property of the interactions of the various parts (management, departments, employees, suppliers, etc.). Culture can also be affected by a reinforcing feedback loop, i.e. behaviour begets the same kind of behaviour. So any review of the organisation needs to recognise these factors.

This is just a very brief highlight of the complexity of two of the issues that surround this situation. It will be interesting over the coming weeks, months and even years to see whether the true root causes are identified and addressed. It is a useful wake up call to all organisations that ethics are important and that appropriate risk management might help avoid making the worst decisions.

Exploring risk management

By Michael Emes, on 21 January 2015

At UCLse we are interested in managing risk in complex projects. It’s a topic that we cover in many of our courses in Systems Engineering, Technology Management and Project Management. But we are also interested on a practical level since we are part of the Mullard Space Science Laboratory (MSSL), responsible for the design and manufacture of instrumentation for scientific spacecraft for the world’s major space agencies.

This week I’ve been in the Netherlands with Prof Alan Smith delivering a training course to European Space Agency project managers. We ask the trainees to ‘imagine there’s a bear sitting on your shoulder reminding you to think about risk’.

Many people think that risk management revolves around the practice of producing risk logs or risk registers. At its worst, this view of risk management assumes that once a basic list of risks is compiled at the start of a project, risk management has been completed.

Done properly, though, risk registers can effectively summarise the range of risk events that might affect a project’s objectives, together with the probability and impact of the events, the ownership of the risks, the actions taken to mitigate the risks and the cost and effectiveness of the mitigation (how much risk will remain – the residual risk – after mitigation).
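To make that list of fields concrete, here is a minimal sketch of what a single risk register entry might look like in code; the field names, scoring scheme and example values are my own illustrative assumptions rather than any standard template.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RiskEntry:
    """One illustrative row of a risk register (assumed fields, not a standard)."""
    description: str
    owner: str
    probability: float                            # chance the risk event occurs (0-1)
    impact_cost: float                            # impact on objectives if it occurs
    mitigation: str = ""
    mitigation_cost: float = 0.0
    residual_probability: Optional[float] = None  # after mitigation, if different
    residual_impact_cost: Optional[float] = None

    def exposure(self) -> float:
        # Simple point-estimate exposure: probability times impact.
        return self.probability * self.impact_cost

    def residual_exposure(self) -> float:
        p = self.residual_probability if self.residual_probability is not None else self.probability
        i = self.residual_impact_cost if self.residual_impact_cost is not None else self.impact_cost
        return p * i


# Hypothetical example entry.
risk = RiskEntry(
    description="Key detector supplier delivers late",
    owner="Instrument project manager",
    probability=0.3,
    impact_cost=500_000,
    mitigation="Qualify a second supplier early",
    mitigation_cost=50_000,
    residual_probability=0.1,
)
print(f"Exposure before mitigation: {risk.exposure():,.0f}")
print(f"Residual exposure after:    {risk.residual_exposure():,.0f}")
```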

The risk register is a vehicle for communicating risk exposure within and outside the project team. But even here, the risk register is merely a window onto the risk management process. Risk events cannot really be represented by simple point estimates of probability and impact. Risk events are by definition uncertain and are best described by a range of possible outcomes, each with an associated likelihood. The risk register has a place as a simple summary for ease of communication, but does not tell the whole story.
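To show the difference between a point estimate and a range of outcomes, here is a small Monte Carlo sketch; the two risks, their probabilities and their cost distributions are invented purely for illustration.

```python
import random

random.seed(1)
N = 10_000
overruns = []
for _ in range(N):
    total = 0.0
    # Risk A: assumed 30% chance of occurring, cost roughly triangular between 100k and 900k.
    if random.random() < 0.3:
        total += random.triangular(100_000, 900_000, 300_000)
    # Risk B: assumed 10% chance, cost anywhere between 50k and 400k.
    if random.random() < 0.1:
        total += random.uniform(50_000, 400_000)
    overruns.append(total)

overruns.sort()
mean = sum(overruns) / N
p80 = overruns[int(0.8 * N)]
print(f"Expected overrun: ~{mean:,.0f}; 80th percentile: ~{p80:,.0f}")
# A point estimate (probability x impact) collapses each risk to a single number;
# the simulated distribution shows the spread of outcomes the project actually faces.
```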

One of our PhD students, Zakari Tsiga, is exploring the relationship between risk management and project success. The first part of his research is a survey of the critical success factors of projects.

If you’re interested in taking part in the survey, please follow this link:

https://opinio.ucl.ac.uk/s?s=35076

 

The importance of measuring the right thing

By Ian Raper, on 22 December 2014

Within the world of projects there is a phrase that says “what gets measured gets managed”. Whilst such one-liners can be used as lazy justification for poorly thought-through management techniques, if we explore the meaning more deeply then I think it reveals some interesting points.

The first thing I’d observe is that the saying is broadly true. It’s like shining a torch beam on something: when others can see the areas being highlighted, they will focus their attention there too. This can be both a good thing and a bad thing. Are we sure that we are shining our torch on the right areas, and are we confident that those areas left unilluminated aren’t adding unacceptable risk? Is there a monster growing in the shadows that will leap out and bite you further on in the project?

An important thing to think about now is what you are measuring. Projects are frequently driven by the achievement of milestones and this is taken as a measurement of progress. But I have seen many projects where the milestone is simply the delivery of a design artefact, e.g. “have we delivered the systems requirements specification”, and the milestone on its own tells us nothing about the quality of the deliverable. It is the quality of the deliverable that is the true measure of progress (i.e. the maturity of the design) and not the existence of the deliverable itself.

Significant design milestones of course tend not to be just the delivery of a document, but are also attended by an in-depth review. Provided the review is well conducted by staff competent in the area then poor quality in the design should be trapped and actioned for correction. But the risk is that we become reliant on the review to perform this function rather than building it in to the design process. And of course if we only detect poor quality at the review then we are already late. One way of ‘building it in’ can be through appropriate measurement.

So what is ‘appropriate measurement’ if it’s not just about delivering ‘things’? I believe it’s about thinking what the desired outcome of the ‘thing’ is and trying to develop measures that can monitor the progress towards that outcome. Measuring something of the quality of ‘things’ is much harder than measuring the existence of these ‘things’, but these are the only really meaningful measures.

So, an example may help. In a previous role I was responsible for delivering the Strategy and Process Improvement programme for our department, and this required a rigorous approach to measurement (for which we happened to have selected the balanced scorecard approach, but I won’t dwell on that in this post).

One area of improvement was our relationship with our customer, for which we developed a series of training materials to develop this skill in our staff. The leader of the improvement activity proposed measuring the delivery of the pack of materials and the number of staff who had been through the training. I argued that this would tell us nothing about our progress towards actually improving our relationship with the customer. This outcome is of course much harder to measure, and the data that is available (e.g. the number of customer accolades) may be the result of a number of actions that the business has taken, but it is still this outcome that the business really cares about.

This example illustrates the tension between those who are responsible for delivering products (whether individual process products or the overall project), and who may want to be measured on delivery of the ‘thing’, and those who understand the responsibility of delivering a ‘thing’ that meets the stakeholders’ needs.

By way of concluding this post, I will mention that the International Council on Systems Engineering (INCOSE) has developed a product called the “SE Leading Indicators Guide”, which provides some possible ways of measuring the activities of a complex engineering project, and plenty of food for thought.
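As a flavour of what an outcome-oriented, leading-indicator style measure can look like, here is a small sketch of a requirements volatility calculation, a proxy for the maturity of the specification rather than its mere existence; the counts are invented and the calculation is deliberately simplified.

```python
# Hypothetical review-period counts: (period, added, changed, deleted, baseline size).
history = [
    ("Q1", 40, 15, 5, 200),
    ("Q2", 10, 25, 8, 210),
    ("Q3",  2,  6, 1, 212),
]

for period, added, changed, deleted, baseline in history:
    volatility = (added + changed + deleted) / baseline
    print(f"{period}: requirements volatility = {volatility:.0%}")
# A falling trend suggests the requirements set is stabilising; a specification that has
# been "delivered" while volatility is still high is not, on its own, evidence of progress.
```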

New Start for Technology Management

By Matt Whyndham, on 7 November 2014

 The MSc in Technology Management, recently introduced at UCL by the Technology Management Group at the Department of Space and Climate Physics, accepted its first intake in September. Matt Whyndham, the Programme Tutor, reflects on the start of the programme.

The last couple of months witnessed a concerted campaign of activity in TMG, as the new academic year got started with increased enrolments. This year, we are starting a new track in our academic programme: MSc Technology Management. Over the summer period, we had received a good number of applications for this new degree, and 15 new students eventually enrolled in September.


The importance of thinking

By Ian Raper, on 26 June 2014

Yesterday I had another opportunity to deliver my course on Developing the Engineering Management Plan for Complex Systems Projects, and the discussion with the delegates once again reinforced for me the importance of thinking in the early stages of projects.

It seems that my own experience is echoed by others: too often the detailed planning of the engineering activities consists of getting the plan from the last project off the shelf, updating it to reflect the new project and then putting it back on the shelf. Of course that is a very wide broom I’m using, and I am aware that there are shades from no planning to very detailed planning in industry.

When you consider that around 80% of the final project costs are committed in the first 20% of engineering activity you would expect that more effort would be put into planning how to make best use of those early activities.

The problem is that this means

  • thinking through the entire lifecycle of the system you’re designing, from cradle to grave
  • working out all the risks to success that might hit you along the way
  • figuring out what you need to do to reduce or remove those risks
  • working out whether you can do those things given the constraints of your project, and if you can’t, figuring out what residual risk your project carries
  • understanding what value each activity adds in moving you towards success, which means understanding the principle behind doing the activity
  • understanding how all the activities interrelate and how they sit in the wider business context (the project is a system too, with functions and interfaces, so needs to be understood in the same way)
  • figuring out the methods and tools you’ll need to do each activity and the skills of the people required to deliver the desired outcomes
  • and then working out how to measure progress, not in terms of deliverables (like the system requirements document) but in terms of real outcomes (such as the quality of the system requirements)
  • and a bunch of other stuff too…

Which is a lot of thinking. But if you don’t do this then really you’re walking blindly into your project with wishful thinking that everything will be OK.

Then you need to re-evaluate and update the plan at least at every major review point. It’s like a navigation plan. Events will blow you off course and you need way-points to check whether you’re still on track or need to adjust your plan to reach your goal.

The development of the Engineering Management Plan is also a great opportunity to build a shared understanding of the project with the team. Used intelligently the plan should greatly ease the journey of design and development of complex systems.

Business optimisation

By Raúl Leal, on 14 May 2014

We were recently invited to participate in a bid to provide consultancy to an organisation. They are looking to consultants in the hope of gaining a step change in their own capabilities when confronted with their very difficult business challenges.

This call for proposals made me think: what are the necessary conditions for external consultants to be effective in their contribution to the management of an organisation, and how do we understand their input in terms of the systemicity of the organisation?

Sometimes consultants are brought into organisations when they are faced with very difficult problems and feel overwhelmed by the situation or somehow unable to resolve it. I wonder if the idea of solving some difficult problems through consultants can be seen, under certain circumstances, as a ‘disturbance’ that shakes the organisation into new areas where it finds better answers.

I would argue this can be partly understood as an optimisation problem, in which you are searching for the most appropriate variables, and their most appropriate combination, to find the optimum of a function. Here the problem is the most appropriate way to deliver a complex project (for example), the variables are the exact number and identity of the resources (with their capabilities) involved, and the function to be optimised is the set of business performance metrics.

Of course, the situation arises not only when we get to the point of bringing in consultants; sometimes organisations face difficult problems and embark on solving them themselves. But the question is still relevant: if the organisation is a system, is the consultant (or the ‘hero’ within the organisation) anti-systemic? Are they part of the system even if their participation is sporadic and not in keeping with the internal dynamics? Or are they perhaps just bringing to light dynamics (or parts) of the system that had not been identified? Finally, is there anything we can learn from the field of optimisation in maths and search algorithms (in numerical computing) that we can transfer to the management and design of complex systems, including organisations?

It will be nigh on impossible to transfer an organisational problem directly into a numerical optimisation problem, because many of the variables at play are non-quantifiable, being human in nature. Nevertheless, I reckon that making the analogy with numerical optimisation gives you a very good chance of understanding the underlying and overarching dynamics of the organisation.
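To make the analogy slightly more concrete, here is a toy local-search sketch; the ‘performance’ function, the skill labels and the numbers are entirely invented, and the occasional random restart simply stands in for the kind of external ‘disturbance’ a consultant might provide.

```python
import random

SKILLS = ["systems", "software", "test", "operations", "finance"]

def performance(team):
    # Invented stand-in for the business performance metrics:
    # reward breadth of skills, penalise oversized teams.
    return len(set(team)) * 10 - max(0, len(team) - 6) * 5

def local_step(team):
    # Small internal change: one team member changes skill.
    new = list(team)
    new[random.randrange(len(new))] = random.choice(SKILLS)
    return new

def search(iterations=200, restart_prob=0.05):
    random.seed(0)
    team = [random.choice(SKILLS) for _ in range(4)]
    best = list(team)
    for _ in range(iterations):
        if random.random() < restart_prob:
            # The external "shake": jump to a quite different configuration.
            candidate = [random.choice(SKILLS) for _ in range(random.randint(3, 8))]
        else:
            candidate = local_step(team)
        if performance(candidate) >= performance(team):
            team = candidate
        if performance(team) > performance(best):
            best = list(team)
    return best, performance(best)

print(search())
```

The interesting question in the prose maps onto whether the search ever escapes a local optimum without the restarts, which is the role the ‘disturbance’ plays in this sketch.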

Just systems thinking…