
A cross-disciplinary collaboration designing, evaluating, and translating trustworthy AI to enhance patient safety.

SHAIPE - Safer Healthcare AI through Psychology and Ergonomics


Engineering

Engineers at the Institute of Biomedical Engineering, led by Professor Alison Noble, develop robust algorithms and open, reproducible workflows for clinical AI. See Noble Group.

Psychology

Researchers in the Department of Experimental Psychology, led by Professor Nick Yeung, study how clinicians use AI tools in decision-making and how to support effective human–AI teaming. See Attention & Cognitive Control Lab.

Clinical practice

Clinicians at OxSTaR, led by Professor Helen Higham, focus on human factors and simulation-based training to evaluate AI-enabled pathways and improve safety in real clinical settings. See OxSTaR.

Research themes

  • Trustworthy and transparent AI
  • Human factors and clinical simulation
  • Decision support and workflow integration
  • Evaluation frameworks and safety metrics

Recent highlights

  • Collaborative projects across engineering, psychology, and clinical simulation
  • Workshops on safe deployment and evaluation of AI tools
  • Preprints and publications on human–AI decision-making

 

About SHAIPE

SHAIPE (Safer Healthcare AI through Psychology and Ergonomics) is a cross-disciplinary initiative uniting:

  • Engineers at the Institute of Biomedical Engineering (IBME), led by Professor Alison Noble;
  • Psychologists in the Department of Experimental Psychology, led by Professor Nick Yeung;
  • Clinicians at OxSTaR, led by Professor Helen Higham, focusing on human factors and simulation.

We design, evaluate, and translate AI technologies to improve patient safety and support clinical decision-making.

 

SHAIPE team at the 3rd Human–AI for Clinical Decision-Making Workshop, IBME, University of Oxford, 12 September 2025

 

People

SHAIPE is a collaborative group spanning engineering, psychology, and clinical practice.

Leads


Alison Noble

Institute of Biomedical Engineering, University of Oxford.


Nick Yeung

Department of Experimental Psychology, University of Oxford.


Helen Higham

OxSTaR Centre, University of Oxford — Clinical Lead.


Members

 

Joshua Strong

DPhil Student — Engineering

Interests: Human–AI Collaboration, healthcare, LLMs

Working on: Deferral applications in healthcare

Email: joshua.strong@eng.ox.ac.uk


Harry Rogers

Postdoc — Engineering

Interests: Human–AI Collaboration, VLMs

Working on: AI as an Instructor


Nicholas Phillips

Academic Specialised Foundation Doctor — Clinical

Interests: AI in healthcare and education

Working on: Predictive Melanoma Models, AI in Dermatology

Email: nicholas.phillips@msd.ox.ac.uk

Open to collaboration

LinkedIn


Rubeta Matin

Consultant Dermatologist (OUHFT) & Honorary Senior Clinical Lecturer (RDM) — Clinical

International academic clinical expert in diagnosis and management of skin cancer and skin disease in immunosuppressed patients, with a strong background in AI for dermatology. Committed to safe, equitable, and ethically responsible AI.


Ben Attwood

Chief Digital & Information Officer (OUHFT) & Consultant in Anaesthesia & Intensive Care — Clinical

Ben is Chief Digital and Information Officer for OUH NHS Foundation Trust and a Consultant in Anaesthesia and Intensive Care. He is passionate about ensuring information and data are presented to the right people in the right place to deliver higher-quality care.


Anna Louise Todsen

DPhil Student — Psychology

Interests: Cognitive and systems-level perspectives on clinical AI development and implementation

Working on: Evidence from real clinical environments across multiple specialties

Email: anna.todsen@lincoln.ox.ac.uk

LinkedIn


Pramit Saha

Postdoc — Engineering

Interests: Federated learning, human–AI collaboration, multimodal learning, agentic AI in healthcare

LinkedIn


Research

 

Themes

  • Trustworthy, transparent AI methods for clinical settings
  • Human factors, simulation, and evaluation
  • Decision support and workflow integration
  • Safety metrics, audit, and governance

Projects

Understanding AI integration in dermatology

An ethnographic study at Oxford University Hospitals NHS Foundation Trust on how AI fits into dermatology practice, aligned with the NHS 10‑year plan.

  • Focus: feasibility, acceptability, and clinical need (the “applicability of AI”).
  • Methods: observations, interviews, focus groups, and co‑design with clinicians; guided by the In‑Situ System Observation Guide (I‑SOG).
  • Outcomes: workflow fit, clinician trust, and adoption barriers and enablers.

AI‑assisted assessment in VR paediatric training

We explore a Human–AI Collaboration (HAIC) approach to assessing medical students in VR paediatric emergency simulations. The system auto‑grades clear performances and flags ambiguous cases for expert review, aiming for scalable, consistent, and objective skills assessment with high‑quality feedback.

 

Publications

Selected publications and preprints relevant to clinical AI and human–AI decision-making.

  • Coming soon.

Welcome to SHAIPE

Posted 2026

We are launching a cross-disciplinary effort to improve safety in clinical AI. Watch this space for updates.

Events

Upcoming talks, workshops, and symposia on clinical AI safety.

  • LLMs@Oxford: “LLMs as clinical decision support tools” — talk by Professor Helen Higham.
  • Third Human–AI for Clinical Decision‑Making Workshop — hosted by SHAIPE at the Institute of Biomedical Engineering, University of Oxford.
