
AI Ethics, Risks and Safety Conference

Thursday 15th May 2025

Watershed, Bristol, UK


Following the success of its first edition, the AI Ethics, Risks, and Safety Conference returns in 2025 to bring together leading experts to discuss the challenges and opportunities of AI.

The programme includes presentations from experts from around the UK who will gather in Bristol to share their insights on current and future developments in the field of Artificial Intelligence.


Featuring speakers from industry and government, this year’s conference will explore AI assurance, business applications of AI, the ethical implications of AI agents, the future of AI, and the resources available for building safe and responsible Artificial Intelligence.

Join the Innovation Showcase!

This year, we are opening the floor at the Watershed to businesses and organisations to showcase their AI solutions to industry leaders, researchers, and practitioners.

Fill out this form to receive more information.



Conference Speakers 2025


Dr. Amy Dickens

AI Assurance Lead

Department for Science, Innovation and Technology


Dr. Owen Parsons

Lead Machine Learning Scientist

Mind Foundry


Dr. Geoff Keeling

Senior Research Scientist

Google


Arcangelo Leone de Castris

Research Associate

The Alan Turing Institute

Karin Rudolph

Founder

Collective Intelligence

Samuel Young

AI Practice Manager

Energy Systems Catapult


Ben Byford

Podcast Host

Machine Ethics Podcast


Dr. Richard Gilham

Supercomputer Infrastructure Specialist

University of Bristol


Sophie Bliss

Senior Delivery Manager

Incubator for AI


Dr. Simon Fothergill

Lead AI Engineer

Lucent AI


Lucy Mason

Director

Capgemini Invent

Conference Programme

  • AI assurance for industry

Speaker: Dr. Amy Dickens. AI Assurance Lead. Department for Science, Innovation and Technology.

Tools for trustworthy AI, including AI assurance, are crucial to ensuring the responsible use of AI systems.

In this talk, Amy will provide an overview of the UK government’s approach to AI assurance and introduce AI Management Essentials, the new self-assessment tool it is developing to support industry, particularly SMEs and scale-ups, to establish robust management practices for the AI systems they develop and/or deploy.
 

  • Using AI-in-the-loop to support human oversight in high-stakes applications.

Speaker: Dr. Owen Parsons. Lead Machine Learning Scientist. Mind Foundry

In high-stakes applications, AI-driven systems often rely on human-in-the-loop oversight, but this approach risks limiting human control and potentially causing harm. A safer way to frame this is through human-driven decision-making where AI supports rather than overrules human expertise. One effective method is using semi-supervised learning to guide expert human labellers, ensuring that machine learning models learn from efficiently sampled human expert data. By integrating AI-in-the-loop, we can enhance human oversight to help make decision-making processes more efficient and reliable.

This talk will explore how these approaches harness and amplify human expertise, leading to safer and more accountable AI systems.
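The sampling pattern the abstract describes can be sketched in a few lines. This is an illustrative toy only, not Mind Foundry's implementation: `predict_proba` is a hypothetical stand-in for a trained classifier, uncertainty sampling routes the expert's attention to the examples the model is least sure about, and the final decision stays with the human below a confidence threshold.

```python
# Illustrative sketch only: the shape of AI-in-the-loop oversight, where the
# model filters and prioritises but the human expert remains the decision-maker.

def predict_proba(example):
    """Stand-in for a trained classifier's confidence in the positive class.
    A real system would call an actual model; here each example carries a
    toy pre-computed score."""
    return example["score"]

def select_for_expert_review(pool, budget):
    """Uncertainty sampling: pick the `budget` examples whose predicted
    probability is closest to 0.5, i.e. where the model is least sure and
    an expert label adds the most information."""
    ranked = sorted(pool, key=lambda ex: abs(predict_proba(ex) - 0.5))
    return ranked[:budget]

def ai_in_the_loop_decision(example, expert_label_fn, threshold=0.9):
    """Route the decision: the model acts only when highly confident in
    either direction; otherwise the human expert makes the call."""
    p = predict_proba(example)
    if p >= threshold or p <= 1 - threshold:
        return "model", p >= 0.5
    return "expert", expert_label_fn(example)
```

Running `select_for_expert_review` over the unlabelled pool each round and retraining on the newly labelled examples gives the efficient-sampling loop the abstract refers to.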

  • Advancing responsible AI adoption for business: An introduction to the AI use case framework 

Speaker: Arcangelo Leone de Castris. Research Associate. The Alan Turing Institute.

This session will offer practical insights into how business organisations can identify possible uses of AI in their area, understand the nature of relevant technological solutions, and develop a sound strategy to operationalise those solutions responsibly.

The session will provide a detailed overview of the AI use case framework, a tool recently published by researchers at The Alan Turing Institute to support businesses in their AI adoption journey.

The framework lays out some of the key information to consider when assessing the suitability of an AI use case and provides concrete foundations that organisations can build on based on case-specific needs and conditions.

  • Does using AI to accelerate decarbonisation outweigh its environmental impacts? 

Speaker: Sam Young. AI Practice Manager. Energy Systems Catapult.

Tech optimists claim AI will solve climate change if we just invest enough in it. Environmentalists claim AI’s voracious appetite for energy is making it all the harder to achieve Net Zero. How do we trade off the potential for AI to help solve some of society’s big challenges with the risk that it will greatly exacerbate them if not handled well?

This talk will bring a systems thinking perspective to that question, and explore how we might tip the AI scales in favour of decarbonisation.

  • Going fast with the UK's fastest supercomputer, Isambard-AI

Speaker: Dr. Richard Gilham. Supercomputer Infrastructure Specialist. University of Bristol.

Richard is a Supercomputer Infrastructure Specialist and a key member of the team building Isambard-AI at the Bristol Centre for Supercomputing (BriCS), within the University of Bristol. Isambard-AI is due to be the fastest supercomputer in the UK when it becomes fully operational in summer 2025 and is a critical element of the Government's recently announced AI Opportunities Action Plan. The session will explore the innovative approaches that allowed the design and build to proceed very rapidly for a project of this size, and the design decisions made to make this game-changing supercomputer accessible to a diverse range of researchers beyond traditional supercomputing communities.

  • Ethics and the agentic turn.

Speaker: Dr. Geoff Keeling. Senior Research Scientist. Google.

AI development is undergoing an agentic turn. Large models, including large language and multi-modal models, are not inherently agentic but instead constitute powerful cultural technologies drawing upon web-scale human data as a knowledge base. Yet leading AI labs are combining large models with affordances such as memory, reasoning, and planning modules, alongside application programming interfaces (APIs) for “tool” use. Their goal is to deliver a technological paradigm shift: a new generation of AI agents that can access vast knowledge repositories, automate tasks, and generate novel and creative solutions to complex problems. AI agents present a complex and largely unexplored ethical landscape.

In this talk, Geoff will examine four aspects of the ethics of AI agents: safety, relationships, multi-agent interaction, and safe exploration.
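The affordances the abstract lists (memory, planning, and tool use via APIs) compose into a simple control loop. The sketch below is a toy illustration of that loop, not any lab's actual agent: `model` is a hypothetical stand-in policy that a real system would replace with a large model, and the step budget is a minimal safety rail of the kind the safety discussion concerns.

```python
# Toy illustration of the agentic pattern: a base model augmented with
# tool use, memory, and a plan-act loop. All names here are invented
# for demonstration.

TOOLS = {
    "add": lambda a, b: a + b,       # an API the agent may call
    "upper": lambda s: s.upper(),
}

def model(goal, memory):
    """Stand-in policy: given the goal and the memory of past observations,
    return the next action. Hard-coded here purely for demonstration."""
    if not memory:
        return ("call", "add", (2, 3))             # step 1: use a tool
    return ("finish", f"result={memory[-1]}")      # step 2: report back

def run_agent(goal, max_steps=5):
    memory = []                                    # scratchpad of observations
    for _ in range(max_steps):
        action = model(goal, memory)
        if action[0] == "finish":
            return action[1]
        _, tool_name, args = action
        memory.append(TOOLS[tool_name](*args))     # act, observe, remember
    return "step budget exhausted"                 # minimal safety rail
```

The ethical questions in the talk arise precisely because real agents run this loop with far more powerful policies and far richer tool sets.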

  • Living with AI: the next five years. Panel discussion and podcast recording.

Host: Ben Byford. Machine Ethics Podcast.

Panellists:

Sophie Bliss. Senior Delivery Manager. Incubator for AI.

Dr. Simon Fothergill. Lead AI Engineer. Lucent AI.

Lucy Mason. Director. Capgemini Invent.

Join a lively discussion with a panel of experts exploring the next five years of AI development.

Sponsored by

University of Bristol

Content Partner

BI Foresight

Exhibitors

CAI
Tikos
UWE

We are grateful for the support of the following organisations:

Department for Science, Innovation and Technology (DSIT)
CyNam

Join the AI Innovation Showcase!

Showcase your AI Innovation at the AI Ethics, Risks, and Safety Conference!

This year, we are opening the floor at the Watershed for businesses to showcase their AI solutions to industry leaders, researchers, and practitioners.

What Are We Looking For?

We’re seeking innovators and researchers who are shaping the future of AI, particularly in:

  • Applied AI in fields such as sustainability, healthcare, education and cybersecurity.

  • Generative AI: chatbots and virtual assistants, image generation, synthetic media, etc.

 

This space is for start-ups, businesses and universities to showcase innovative solutions and research that consider ethical implications, safety, and societal impact.

Fill out the form below to receive more information.

 

  
