UCL Department of Science, Technology, Engineering and Public Policy

The importance of collaboration to advance digital health

By luis.lacerda, on 27 March 2024

Earlier this month the Government announced a package of more than £3 billion to update fragmented and outdated IT systems across the NHS and transform the use of data to ease administrative burdens. That same week, the Policy Impact Unit (PIU) hosted a roundtable on digital health in the UK, bringing together colleagues from across UCL (see co-authors) as well as visiting researchers from the FioCruz Oswaldo Cruz Foundation in Brazil.

FioCruz is a federal public research foundation that operates with academic autonomy under Brazil's Ministry of Health and was responsible for coordinating the country's COVID-19 vaccination campaign. The Brazilian delegation were keen to hear about UK experiences of health digitisation, its challenges and opportunities, and to develop a deeper understanding of the context and evaluation of several commitments agreed under the Brazil-UK High-Level Strategic Dialogues from 2020, some of which focussed on health cooperation and were funded by Official Development Assistance (ODA) [1].

The main challenges discussed in the meeting, in relation to the digitisation of the NHS, were systems interoperability, training and workforce capacity. Although there has been a push towards the adoption of federated data platforms (FDPs), which will sit across NHS trusts and integrated care systems and allow them to connect the data they already hold in a safe and secure environment, these platforms are still disjointed: connecting them relies on platform providers talking to each other, which often does not happen.

Common challenges: interoperability, training and workforce capacity

The adoption of new digital health approaches also relies on healthcare professionals trained to understand the power of data and new technologies. Particularly in primary care and general practice, it is essential to have digitally literate colleagues who can engage communities, be clear and transparent about how health data are used, and input data correctly to build FDPs that can then be used for research and to invest in the health of the nation [2]. Programmes like the NHS's "Developing healthcare workers' confidence in artificial intelligence" and inclusive digital healthcare initiatives are important, because there is a risk that well-intended ambitions to digitise the NHS could exacerbate existing health inequalities and exclude some groups.

At the same time, there is still a lack of progress in de-identifying General Practice data and in addressing low levels of confidence in new technologies among diverse communities, as highlighted in the Health and Social Care Committee's recent evaluation. This undermines trust, and with it the societal buy-in needed to deliver on ambitions for a more digital NHS.

Opportunities and way forward: innovation in regulatory mechanisms

On the flip side, there is an opportunity to bring people early into discussions about how AI tools are being used in medical devices, and how to properly balance the risks and benefits such technologies may bring. The recent launch of the UK Regulatory Science and Innovation Networks was discussed, as well as the launch of an MHRA roadmap to create a framework for medical devices in the UK. Including patients, researchers and industry representatives in this process, and being clear about how data can be used for research, offers a great opportunity to bring real impact to clinical practice in the diagnosis, treatment and monitoring of diseases.

Including other global partners in this conversation is essential, both because of the importance of sharing learnings across different contexts and because of the increasingly important role of international recognition in the medical domain as a way to evidence impact. For global issues such as AI and post-market surveillance in particular, where it is very difficult for regulators to know how new tools will perform before they are deployed, there is now a chance for new standards to emerge that shape digital health strategies across countries. We hope that visits like this inspire colleagues to work collaboratively, and we look forward to hearing from FioCruz how their visit is supporting Brazilian policy decisions on the development of digital health strategies.

Authors' Note

Written by Dr. Luís Lacerda, Policy Impact Unit and co-authored by Professor Amitava Banerjee, UCL Institute of Health Informatics, Professor Derek Hill, UCL Dept of Medical Physics & Biomedical Engineering and Professor Patty Kostkova, UCL Institute for Risk & Disaster Reduction.

References

[1] For a list of projects funded under the scheme, please visit https://devtracker.fcdo.gov.uk/

[2] A particularly good example was the COVID-19 registry, where data such as vaccination rates and long COVID reports from different countries were brought together in one place.

What can we do to decrease the cost of advanced cancer therapies and make them available for all?

By luis.lacerda, on 9 February 2024

There are 3 million people living with cancer in the UK, a number predicted to rise to 4 million by 2030 [1]. Different societal groups are affected differently, in particular ethnic minorities, who experience poorer outcomes [2]. Health inequalities are complex and their root causes diverse, including the fact that some cancers are more prevalent in specific communities [3]. Advanced research on targeted and personalised treatments can therefore bring hope of improving outcomes in the future and of "closing the gap" in access to cancer care. But how can these treatments be made more affordable and included in holistic government strategies to manage cancer care?

At UCL, the Future Targeted Healthcare Manufacturing Hub (FTHM Hub), which brings together academics, manufacturers, and policymakers, has been addressing manufacturing, business, and regulatory challenges to ensure that new targeted biological medicines can be developed quickly and manufactured at a cost affordable to society. This includes innovative research on the manufacture of promising cancer therapies, ranging from chimeric antigen receptor T-cell (CAR-T) therapies through to targeted drug therapies such as antibody-drug conjugates and cancer vaccines. The Hub engages with and supports several clinical groups at UCL that develop advanced therapy medicinal products (ATMPs), some of which have been commercialised or are being translated into the clinic.

The FTHM Hub’s work also includes more fundamental research into optimising manufacturing by innovating processes and finding new ways of reducing the production costs of these therapies. Examples of this activity include manufacturing autologous CAR-T therapy at the patient’s bedside or in an automated “GMP-in-a-box” system [4], which can reduce costs, accelerate bench-to-bedside innovation, and mitigate risks generated by market shortages [5].

The Hub has worked closely with healthcare specialists and regulatory authorities to analyse how CAR-Ts and other high-cost therapies affect NHS England’s ability to resource other health services. It has conducted detailed supply chain economics analyses to identify the key cost-of-goods drivers for CAR-T therapies, to optimise supply chains, and to assess the risk-reward trade-offs between centralised and distributed manufacture.

The recent agreement reached between the Department of Health and Social Care (DHSC), NHS England and the Association of the British Pharmaceutical Industry (ABPI) on a voluntary scheme for branded medicines pricing, access and growth is a welcome programme for exploring how industry and government can work better together to support the delivery of new advanced treatments for cancer, but it is not enough.

Furthermore, for this important work to continue, investment in and support for advanced manufacturing are required to understand the possible implementation challenges of novel options such as GMP-in-a-box in clinical settings. The UK’s new life sciences manufacturing funding to build resilience for future health emergencies is a good opportunity to expand on the FTHM Hub’s work and ensure that every patient living with cancer can access treatment irrespective of geographical location.

In addition, the time and cost of travel to specialised centres can pose an economic burden on patients and carers and contribute to disparities in cancer care. New centres will also need dedicated staff to help deliver advanced therapies, and the FTHM Hub is training a new generation of professionals to enable the rollout of these therapies to patients.

In the week that marks World Cancer Day, the FTHM Hub continues its important work on treatments for patients with cancer, and it is our hope at the Policy Impact Unit that we can work together towards imagining new futures, closing the care gap, and bringing better outcomes for all of those living with cancer.

 

References

[1] https://www.macmillan.org.uk/dfsmedia/1a6f23537f7f4519bb0cf14c45b2a629/11424-10061/Macmillan%20statistics%20fact%20sheet%20February%202023

[2] Martins, T., Abel, G., Ukoumunne, O.C. et al. Ethnic inequalities in routes to diagnosis of cancer: a population-based UK cohort study. Br J Cancer 127, 863–871 (2022). https://doi.org/10.1038/s41416-022-01847-x

[3] Delon, C., Brown, K.F., Payne, N.W.S. et al. Differences in cancer incidence by broad ethnic group in England, 2013–2017. Br J Cancer 126, 1765–1773 (2022). https://doi.org/10.1038/s41416-022-01718-5

[4] Pereira Chilima, T. & Farid, S. A roadmap to successful commercialization of autologous CAR T-cell products with centralized and bedside manufacture. Cell Gene Therapies VI 73 (2019); Comisel, R. Decisional Tools for Supply Chain Economics of Cell and Gene Therapy Products. Doctoral thesis, UCL (University College London) (2022).

[5] Bicudo, E. & Brass, I. Advanced therapies, hospital exemptions & marketing authorizations: the UK’s emerging regulatory framework for point-of-care manufacture. Cell and Gene Therapy Insights 9(1), 101–120 (2023).

Adversarial Attacks, Robustness and Generalization in Deep Reinforcement Learning

By Ezgi Korkmaz, on 20 December 2023

Reinforcement learning has achieved substantial progress in completing tasks, from solving complex games to training large language models (e.g. GPT-4), across fields ranging from medical applications to self-driving vehicles and finance, by learning from raw high-dimensional data with deep neural networks as function approximators.

The vulnerability of deep reinforcement learning policies to adversarial attacks has been demonstrated in prior studies [1,2,3,4]. However, a recent study takes these vulnerabilities one step further and introduces natural attacks (i.e. natural changes to the environment that remain imperceptible), while drawing a contradistinction between adversarial attacks and natural attacks. Instances of such changes include, but are not limited to, blurring, the introduction of compression artifacts, or a perspective projection of the state observations, applied at a level at which humans cannot perceive the change.
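To make this concrete, here is a minimal sketch (not taken from the paper) of how such natural transformations might be applied to an agent's image observations at evaluation time. The `env`, `policy`, transformation parameters and helper names are hypothetical stand-ins for any gym-style environment and trained agent, and OpenCV is assumed to be available for the image operations.

```python
# A minimal sketch, assuming a gym-style `env` and a trained `policy` (both hypothetical),
# of how imperceptible "natural" transformations could be applied to image observations.
# OpenCV (cv2) is assumed available; parameter values are illustrative only.
import numpy as np
import cv2


def blur_observation(obs: np.ndarray, sigma: float = 0.5) -> np.ndarray:
    """Mild Gaussian blur over an (H, W, C) uint8 image observation."""
    return cv2.GaussianBlur(obs, ksize=(3, 3), sigmaX=sigma)


def compress_observation(obs: np.ndarray, quality: int = 90) -> np.ndarray:
    """Introduce JPEG compression artifacts at a high quality setting."""
    _, encoded = cv2.imencode(".jpg", obs, [cv2.IMWRITE_JPEG_QUALITY, quality])
    return cv2.imdecode(encoded, cv2.IMREAD_UNCHANGED)


def perspective_observation(obs: np.ndarray, shift: float = 1.0) -> np.ndarray:
    """Tiny perspective projection: corners move by roughly a pixel."""
    h, w = obs.shape[:2]
    src = np.float32([[0, 0], [w, 0], [0, h], [w, h]])
    dst = src + np.float32([[shift, 0], [-shift, 0], [0, shift], [0, -shift]])
    return cv2.warpPerspective(obs, cv2.getPerspectiveTransform(src, dst), (w, h))


def evaluate_under_natural_attack(env, policy, attack=blur_observation, episodes=10):
    """Run the policy while it only ever sees the naturally perturbed observation."""
    returns = []
    for _ in range(episodes):
        obs, done, total = env.reset(), False, 0.0
        while not done:
            action = policy(attack(obs))              # the attack never touches the policy itself
            obs, reward, done, _ = env.step(action)
            total += reward
        returns.append(total)
    return float(np.mean(returns))
```

The parameter values above are placeholders; the study's point is that even changes this small, kept below the threshold of human perception, can substantially degrade a policy's return.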

Intriguingly, the reported results demonstrate that these natural attacks are at least as imperceptible as adversarial attacks, and often more so, while causing a larger drop in policy performance. While these results raise significant concerns regarding artificial intelligence safety [5,6,7], they also raise questions about model security. Note that prior studies on adversarial attacks on deep reinforcement learning rely on the strong adversary assumption, in which the adversary has access to the policy's perception system and to the training details of the policy (e.g. algorithm, neural network architecture, training dataset), as well as the ability to alter observations in real time through computationally demanding adversarial formulations applied to the policy's observation system. Thus, the fact that the natural attacks described in [8] are black-box adversarial attacks, i.e. the adversary does not have access to the training details of the policy or to the policy's perception system in order to compute adversarial perturbations, raises further questions about machine learning safety and responsible artificial intelligence.
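As a rough illustration of the difference in access the two threat models assume (not the paper's exact formulations), the sketch below contrasts an FGSM-style white-box perturbation, which needs the policy's network and a backward pass through it, with a fixed blur standing in for a black-box natural attack, which touches only the observation. `policy_net` and the batched float observation `obs` are hypothetical.

```python
# A hedged sketch contrasting adversary access; `policy_net` maps a batched image
# tensor (N, C, H, W), float in [0, 1], to action logits.
import torch
import torch.nn.functional as F


def fgsm_whitebox_attack(policy_net: torch.nn.Module,
                         obs: torch.Tensor,
                         epsilon: float = 0.01) -> torch.Tensor:
    """White-box: requires the policy's weights and gradients (the strong adversary assumption)."""
    obs = obs.clone().detach().requires_grad_(True)
    logits = policy_net(obs)                       # forward pass through the agent's own network
    chosen = logits.argmax(dim=-1)                 # action the unperturbed policy would take
    F.cross_entropy(logits, chosen).backward()     # impossible without access to the model
    return (obs + epsilon * obs.grad.sign()).detach()


def natural_blackbox_attack(obs: torch.Tensor) -> torch.Tensor:
    """Black-box: a fixed, imperceptible blur; no access to the policy is needed at all."""
    channels = obs.shape[1]
    kernel = torch.tensor([[1., 2., 1.],
                           [2., 4., 2.],
                           [1., 2., 1.]]) / 16.0
    kernel = kernel.view(1, 1, 3, 3).repeat(channels, 1, 1, 1)  # depthwise 3x3 blur per channel
    return F.conv2d(obs, kernel, padding=1, groups=channels)
```

Because the second function never sees `policy_net`, it could in principle be applied to any deployed agent, which is what makes the black-box results in [8] concerning from a safety standpoint.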

Furthermore, the second part of the paper investigates the robustness of adversarially trained deep reinforcement learning policies (i.e. robust reinforcement learning) under natural attacks, and demonstrates that vanilla-trained deep reinforcement learning policies are more robust to them than adversarially (i.e. robustly) trained policies. While these results reveal further security concerns regarding robust reinforcement learning algorithms, they also demonstrate that adversarially trained deep reinforcement learning policies cannot generalize to the same level as straightforward vanilla-trained deep reinforcement learning algorithms.

Overall, this study, while providing a contradistinction between adversarial attacks and natural black-box attacks, further reveals the connection between generalization in reinforcement learning and the adversarial perspective.

Author’s Note: This blog post is based on the paper ‘Adversarial Robust Deep Reinforcement Learning Requires Redefining Robustness’ published in AAAI 2023.
References:
[1] Adversarial Attacks on Neural Network Policies, ICLR 2017.
[2] Investigating Vulnerabilities of Deep Neural Policies. Conference on Uncertainty in Artificial Intelligence (UAI), PMLR 2021.
[3] Deep Reinforcement Learning Policies Learn Shared Adversarial Features Across MDPs. AAAI Conference on Artificial Intelligence, AAAI 2022. [Paper Link]
[4] Detecting Adversarial Directions in Deep Reinforcement Learning to Make Robust Decisions. International Conference on Machine Learning, ICML 2023. [Paper Link]
[5] New York Times. Global Leaders Warn A.I. Could Cause ‘Catastrophic’ Harm, November 2023.
[6] The Washington Post. 17 fatalities, 736 crashes: The shocking toll of Tesla’s Autopilot, June 2023.
[7] The Guardian. UK, US, EU and China sign declaration of AI’s ‘catastrophic’ danger, November 2023.
[8] Adversarial Robust Deep Reinforcement Learning Requires Redefining Robustness, AAAI Conference on Artificial Intelligence, AAAI 2023. [Paper Link]
[9] Understanding and Diagnosing Deep Reinforcement Learning. International Conference on Machine Learning, ICML 2024. [Paper Link]

Blog Series – Breaking BEIS: Risks & Opportunities for Engineering Policy (4/4)

By laurent.liote.19, on 8 March 2023

This 4-part blog series covers the recent dismantling of the UK government’s Department for Business, Energy and Industrial Strategy (BEIS) and what it means for engineering policy. We take this opportunity to look at what we can learn from the creation and internal organisation of BEIS to reflect on how machinery of government changes affect engineering in and for policy. This blog series is written by final year PhD candidate Laurent Lioté, working on engineering advice for energy policy and part of STEaPP’s Engineering Policy Group.

Science, Innovation and Technology… but still no engineering

My final point is somewhat more conceptual than my previous ones (post 1, post 2, post 3) but just as important. Engineering is clearly key for energy and innovation policy, so why does it not get an explicit mention in the new ministries’ names or remits? Perhaps because the concepts of science, innovation and technology are thought to cover engineering – but this is not exactly true, and it has an important impact on engineering policy. All the arguments made in this post are adapted from this article written with Adam Cooper and Chloé Colomer, where we discuss this topic in more detail.

Science is often thought to include engineering because of the common belief that engineers “apply science” in the process of innovation or technology creation. But this is not always the case: a lot of engineering focuses on maintaining systems and optimising existing processes. Moreover, much science happens in publicly funded academic research institutions, whereas most engineers work in private sector companies.

Taking a narrower view, we can also make the case that engineering and science advice for energy policy (now in the Department for Energy Security and Net Zero’s portfolio) are different too. Within energy policy, science advice focuses on the biological (like the types of organisms in an anaerobic digester or the amount of gas emitted), while engineering focuses on the physical features of the reactor (like how the reactor and engine are built). Engineering is about objects and their performance, whereas science is about bio- and ecosystems. Science advice, because it is concerned with biological systems and ecosystems, is methodologically driven by a hypothesis that measurements can validate or invalidate. Engineering advice, on the other hand, is outcome-driven or solution-oriented, where measurements help achieve a goal that best meets project design criteria.

Focusing on innovation or technology doesn’t do engineering justice either. Such a focus is necessarily rooted in objects, with the engineers in orbit, whereas a focus on engineers is rooted in their skills, knowledge and practices, often with technologies in orbit. Exploring engineering practice surfaces how engineers draw on existing and new knowledge, and how they communicate amongst themselves and with others, in a way that exploring technology does not.

If assumptions and concepts from “science, innovation and technology” do not apply equally to engineering, perhaps a distinction in policy terms is important if engineering is to be governed effectively. But perhaps we’ll have to wait for the next reshuffle to see engineering pop up in the name or remit of a UK government department!

Blog Series – Breaking BEIS: Risks & Opportunities for Engineering Policy (3/4)

By laurent.liote.19, on 1 March 2023

This 4-part blog series covers the recent dismantling of the UK government’s Department for Business, Energy and Industrial Strategy (BEIS) and what it means for engineering policy. We take this opportunity to look at what we can learn from the creation and internal organisation of BEIS to reflect on how machinery of government changes affect engineering in and for policy. This blog series is written by final year PhD candidate Laurent Lioté, working on engineering advice for energy policy and part of STEaPP’s Engineering Policy Group.

“Growth, bills and inflation”: making sure economics is not at odds with engineering

Back in January the Prime Minister articulated his vision for the country, stressing the need to halve inflation, grow the economy and reduce debt. I am not going to discuss this in this post… However, of particular relevance for us today is how these promises have made it into the new “post-BEIS” ministerial remits.

The Department for Energy Security and Net Zero is tasked with “securing the UK’s long-term energy supply, bringing down bills and halving inflation”. The Department for Science, Innovation and Technology’s remit is to “drive innovation, create new and better-paid jobs and grow the economy”. As we pointed out last week, both ministries are responsible for technically oriented policy fields (energy and innovation, respectively), yet their mission statements are very focused on economic concerns. Again, the economy is important, no doubt about that, but this economics-driven framing of energy and innovation policy could be at odds with engineering expertise. Let’s take a historical look at how this might manifest itself.

Going back in time, the mission of DECC (the Department of Energy and Climate Change, one of BEIS’s predecessors) was to establish the UK as a world leader in the fight against climate change. As we established, this led to an increase in in-house engineering expertise. Later, with the Conservative Party in power, the vision shifted to “how can we use climate policy efforts for economic advantage”, which was closer to the view that BIS (the Department for Business, Innovation and Skills, BEIS’s other predecessor) had of the issue. When DECC and BIS merged to form BEIS, the focus of the energy and innovation portfolio thus became more about economics than science and engineering.

This shift in focus had two impacts on engineering advice for energy policy. First, it meant that engineers (as opposed to economists) were less involved in setting the policy vision, which constrained policy options down the line. Indeed, at BEIS, engineers often mentioned that alternative technical options could have been viable, but they weren’t able to suggest them early enough in the process to shape policy direction accordingly. Second, and linked to the first point, the engineers were sometimes at odds with the policy advisers when the technical solution proposed did not match the economics-driven policy imperative (i.e. the technical solution was too costly).

This doesn’t mean that engineering and economic advice are mutually exclusive; far from it. However, there are a few lessons to learn from DECC and BEIS that might prove useful for the new departments. First, despite the economic framing of their missions, the new ministries will benefit from involving engineers (and leveraging engineering expertise) when setting policy directions. Second, and this is more for the engineers and policy advisers working within the new departments, it is always useful to be clear from the start on what the policy is aiming to achieve. I talk about recognising mutual expertise and developing interactional expertise in more detail in this article (if of interest!).

No matter what, energy and innovation policy will always require a mix of engineering and economics (and many more disciplines) – it’s just a matter of recognising the importance of both and acting accordingly. Which brings me to my final question: engineering is undeniably important to energy and innovation policy, so why does it still not get an explicit mention in the ministries’ names or remits?

Find out in next week’s episode of Breaking BEIS!