UKRI Centre for Doctoral Training in Foundational AI

Archive for the 'Discussion series' Category

CDT Foundational Artificial Intelligence Showcase: London, 22-24 July

By Claire Hudson, on 27 August 2024

This year, CDT students, academics and speakers, along with staff and students from the prestigious Erasmus Mundus Joint Master’s Programme in AI, gathered at the AI Centre for a journey into research, innovation and collaboration at the annual CDT Showcase.
The event kicked off with a session focusing on “The future of AI: Forming your own opinion on what’s coming, when it’s coming, and what we should or shouldn’t do about it”. Here we explored AI Bias, AI and Warfare, and AI Regulation – topics which sparked some lively debates and fostered a spirit of critical thinking amongst attendees.

After lunch, we heard from Dr Anthony Bourachad with his talk titled “ART: AI’s final frontier”, in which Dr Bourachad presented AI’s foray into the world of art and the Pandora’s box of questions this opens. His talk delved into the philosophical debates about what it means to create, the legal intricacies of ownership and moral rights, and the use of AI as a tool to analyze historical art.
This led to the next session, in which we had the opportunity to view participants’ AI and Art entries during a mini museum experience. On display were over 40 entries from current CDT students and students from the Erasmus Mundus Joint Master’s Programme. All artwork had to be original and created by the submitting artist, and there were many impressive submissions, each telling a story and highlighting a wealth of creativity and innovation from the artists.
More on the winners later…
To conclude the first day, attendees were treated to a vibrant and engaging social event at Immersive Gamebox. This immersive activity provided a welcome break from the day’s sessions and created many memorable moments whilst fostering relationships between participants. It was truly a wonderful way to close the first day of the Showcase and a chance to solidify connections made during the conference sessions.
Day two started with a morning of informative presentations from CDT students, in which we heard more about their research. Topics ranged from “A Human-Centric Assessment of the Usefulness of Attribution Methods in Computer Vision” to “Latent Attention for Linear Time Transformers” to a talk on the “Theory of generative modelling – rethinking generative modelling as optimization in the space of measures”. The range of topics provided a reminder of the diversity of exciting research being conducted by students, and demonstrated why centres such as the FAI CDT are crucial for fostering interdisciplinary research in this ever-changing landscape.
One of the highlights of the showcase was the afternoon’s visit to the offices of Conception X.
Conception X is the UK’s leading PhD deeptech venture programme, helping PhD students launch deeptech startups based on their research. There are two tracks available: “Project X”, for PhD students interested in developing business skills through training designed for STEM researchers, and “Startup X”, aimed at PhD students ready to build startups.
During our visit, we enjoyed a welcome introduction from Dr Riam Kanso, Chief Executive Officer, who spoke about how Conception X is leading the way in enabling scientists to create companies from their research. This was followed by presentations from entrepreneurs who have successfully launched their companies with the support of Conception X, and concluded with a host of questions from students, all seemingly keen to find out more about the Conception X programme and how they too might begin their entrepreneurial journey.
Day three started with a visit to the Intelligent Robotics Lab at UCL East, where the group enjoyed a fast-paced morning with Professor Igor Gaponov.

The lab is a world-leading research centre of excellence dedicated to autonomous robotics, specializing in robots that can make decisions in the real world and act on them. Its work spans mechatronics and control to robot vision and learning, and our group were delighted to hear more about the fascinating research that is emerging. We would like to thank Professor Gaponov for providing such a wonderful opportunity.

The final afternoon was filled with keynote talks on a range of AI-related topics. First up was Avanade’s Emerging Technology R&D Engineering lead, Fergus Kidd, with his talk titled “The road to General Artificial Intelligence”. Next was Professor Niloy Mitra with his talk “What are Good Representations for 3D-aware Generative Models?”, and we concluded with a presentation from Sophia Banno, Assistant Professor in Robotics and Artificial Intelligence at UCL, looking at the future of AI and robotics in surgical interventions.
All of these talks emphasized the importance of sustained innovation and collaboration in this rapidly evolving world and provided an intriguing end to the formal presentations of the CDT Showcase.

The final session was an opportunity to view and discuss a variety of posters that students had produced to represent their research. Poster sessions are always a great opportunity for researchers to share their findings in a visual format and encourage observers to delve deeper into specific areas of interest. It was inspiring to witness this session buzzing with an energy that underscored the collaborative spirit that defines the CDT Showcase experience.

To close, our sponsor G-Research presented prizes for the AI and Art competition and the Best Poster award to the following recipients:
AI & ART
Judging was based on three key criteria: (i) Description – a compelling description and the ability to explain the concept; (ii) Novelty – the originality of the idea; and (iii) Aesthetics.
1st: Romy Williamson – “the convergence of perception”
2nd: Reuben Adams – “Nook”
3rd: Pedro José Ferreira Moreira – “UCL Summer School”
4th: Kai Biegun – “In With The New”
5th: Roberta Chissich – “Fores Escape”
POSTER SESSION
1st: Adrian Gheorghiu & Pedro Moreira
Joint 2nd: Lorenz Wolf; Mirgahney Mohamed & Jake Cunningham
4th: Sierra Bonilla
5th: Bernardo Perrone De Menezes Bulcao Ribeiro & Roberta Chissich

We would like to take this opportunity to thank G-Research for their generous sponsorship of the AI & Art competition and the Best Poster award.

Looking ahead, the connections made and ideas exchanged during these three days will continue to develop, shaping the future of AI. The Foundational Artificial Intelligence CDT Annual Conference is a platform for researchers and academics to showcase their research and innovation, and this year’s event proved to be a melting pot of ideas, insights, and networking opportunities.

We look forward to hosting the event again next year!

Understanding and Navigating the Risks of AI – By Reuben Adams

By sharon.betts, on 19 October 2023

It is undeniable at this point that AI is going to radically shape our future. After decades of effort, the field has finally developed techniques that can be used to create systems robust enough to survive the rough and tumble of the real world. As academics we are often driven by curiosity, yet rather quickly the curiosities we are studying and creating have the potential for tremendous real-world impact.

It is becoming ever more important to keep an eye on the consequences of our research, and to try to anticipate potential risks.

This has been the purpose of our AI discussion series that I have organised for the members of the AI Centre, especially for those on our Foundational AI CDT.

I kicked off the discussion series with a talk outlining the ongoing debate over whether there is an existential risk from AI “going rogue,” as Yoshua Bengio has put it. By this I mean a risk of humanity as a whole losing control over powerful AI systems. While this sounds like science fiction at first blush, it is fair to say that this debate is far from settled in the AI research community. There are very strong feelings on both sides, and if we are to cooperate as a community in mitigating risks from AI, it is urgent that we form a consensus on what these risks are. By presenting the arguments from both sides in a neutral way, I hope I have done a small amount to help those on both ends of the spectrum understand each other. You can watch my talk here: https://www.youtube.com/watch?v=PI9OXHPyN8M

Ivan Vegner, PhD student in NLP at the University of Edinburgh, was kind enough to travel down for our second talk, on properties of agents in general, both biological and artificial. He argued that sufficiently agentic AI systems, if created, would pose serious risks to humanity, because they may pursue sub-goals such as seeking power and influence or increasing their resistance to being switched off – after all, almost any goal is easier to pursue if you have power and cannot be switched off! Stuart Russell pithily puts this as “You can’t fetch the coffee if you’re dead.” Ivan is an incredibly lucid speaker. You can see his talk, “Human-like in Every way?”, here: https://www.youtube.com/watch?v=LGeOMA25Xvc

For some, a crux in this existential risk question might be whether AI systems will think like us, or in some alien way. Perhaps we can more easily keep AI systems under control if we can create them in our own image? Or could this backfire – could we end up with systems that have the understanding to deceive or manipulate us? Professors Chris Watkins and Nello Cristianini dug into this question for us by debating the motion “We can expect machines to eventually think in a human-like way” (Chris for, Nello against). There were many, many questions afterwards, and Chris and Nello very kindly stayed around to continue the conversation. Watch the debate here: https://www.youtube.com/watch?v=zWCUHmIdWhE

Separate from all of this is the question of misuse. Many technologies are dual-use, but their downsides can be successfully limited through regulation. With AI it is different: the scale can be enormous and rapidly increased (often the bottleneck is simply buying or renting more GPUs), there is a culture of immediately open-sourcing software so that anyone can use it, and AI models often require very little expertise to run or adapt to new use cases. Professor Mirco Musolesi outlined a number of risks he perceives from using AI systems to autonomously make decisions in economics, geopolitics, and warfare. His talk was incredibly thought-provoking; you can watch it here: https://youtu.be/QH9eYPglgt8

This series has helped foster an ongoing conversation in the AI Centre on the risks of AI and how we can potentially steer around them. Suffice it to say, it is a minefield.

We should certainly not forget the incredible potential of AI to have a positive impact on society, from automated and personalised medicine, to the acceleration of scientific and technological advancements aimed at mitigating climate change. But there is no shortage of perceived risks, and currently a disconcerting lack of technical and political strategies to deal with them. Many of us at the AI Centre are deeply worried about where we are going. Many of us are optimists. We need to keep talking and increase our common ground.

We’re racing into the future. Let’s hope we get what AI has been promising society for decades. Let’s try and steer ourselves along the way.

 

Reuben Adams is a final year PhD student in the UKRI CDT in Foundational AI.