National Commission into the Regulation of AI in Healthcare - PRSB's response

The National Commission into the Regulation of AI in Healthcare brings together global AI leaders, clinicians and regulators to advise on AI regulation in healthcare. The Commission aims to support the development of a new regulatory framework for AI in healthcare and will make recommendations to the MHRA in 2026.

As part of its research and engagement programme, a Call for Evidence invited contributions from individuals and organisations across the UK and internationally. We contributed to the Call for Evidence and set out our response below.


Data quality must come first

The rapid advancement of artificial intelligence (AI) in healthcare presents both significant opportunities and complex challenges for patient safety, care continuity, and the delivery of personalised care. This response is submitted to the National Commission into the Regulation of AI in Healthcare, drawing on the perspectives of PRSB’s members.

PRSB recognises the necessity of standardised health information as a foundation for safe and effective AI deployment in healthcare. Standardisation ensures that AI systems can reliably interpret and share clinical data, reducing the risk of errors and facilitating interoperability across care settings. PRSB’s members call for regulatory frameworks that mandate adherence to recognised standards, promote data quality, and support the safe integration of AI with existing electronic health record systems.

By embedding information standards into the design and implementation of AI solutions, regulators can drive innovation while protecting patient safety and enabling coordinated, personalised care. PRSB also recommends ongoing collaboration between standards bodies, regulators, clinicians, and technology developers to ensure that standards remain relevant and responsive to evolving AI capabilities and clinical needs.

In summary, for the safe use of AI in healthcare, we should start with standards to ensure data quality.

Patient safety

Effective information sharing underpins patient safety by reducing the risk of inadvertent harm, such as missed diagnoses, inappropriate procedures, or incorrect prescriptions. AI systems could be used to bring together structured and unstructured information from across fragmented clinical record systems, enabling clinicians to access comprehensive, up-to-date information at the point of care. This would reduce the time clinicians spend pulling together information before an appointment and would surface information that they would not routinely be able to access. For example, safeguarding concerns are often recorded as narrative and are difficult to share; making this information accessible to clinicians could enable them to exercise their professional curiosity, substantiate their concerns, escalate faster, and prevent harm.

However, the deployment of AI introduces new risks, including algorithmic bias, data quality issues, ‘LLM hallucinations’ and the potential for opaque decision-making processes. These risks can be mitigated through rigorous independent validation and transparency (including disclosure of how an AI model was trained and what data was used). It is essential that regulatory frameworks require AI systems to be trained on data that conforms to established health information and terminology standards, to help minimise output errors. Where gaps exist in information standards (e.g. for specific care processes or disease/care areas), development should be nationally funded alongside implementation, so that data generated through AI can be used in the record with professional requirements and clinical safety embedded.

Clinical consensus on the appropriate use of AI tools should also be prioritised. For example, the use of Ambient Voice Technology (AVT) may help improve data quality in electronic health records if the outputs of the technology conform to nationally agreed information standards. However, there is a risk that it may not provide an accurate summary of the views expressed during an encounter; it would therefore still need clinical review before saving to the record, and appropriate ‘guard rails’ should be required. As more autonomous AI is used in the NHS, with AI potentially undertaking certain routine clinical tasks independently in future, the buy-in of clinicians and confidence in regulation will be paramount. A prerequisite should be that any data passed to or from the decision-making agent, whether drawn from or written to the clinical record, is founded on clinically endorsed information standards to support patient safety.

PRSB recommends that an ‘Accelerator’ be established to convene industry and clinicians, shaping how AI can be used, providing guidance, and facilitating professional body endorsement for implementation.

Continuity of care

Continuity of care is increasingly reliant on the seamless transfer of patient information across different professionals and settings, especially as health and care delivery expands beyond traditional boundaries (and therefore beyond traditional record-keeping systems) to support neighbourhood care models. AI has the potential to facilitate this by automating the extraction, analysis, and sharing of relevant patient data, ensuring every clinician and professional involved in a patient’s journey is fully informed about their background, current condition, and treatment plans.

Fragmented clinical record systems and inconsistent use of standards can undermine these benefits. We advocate for health information standards that ensure the data stored in clinical records is structured and coded in a consistent way, so that AI solutions summarising or analysing that data are working from high-quality inputs. The output of AI solutions should also conform to information standards so that it is semantically interoperable and can be shared between different clinical systems to support coordinated and cohesive care. The BMA has recognised that AI may ultimately improve continuity of care by freeing up time for longer or more frequent appointments, but advises that it cannot replace the doctor-patient relationship. Regulatory measures should prioritise interoperability and mandate compliance with recognised standards to maximise the continuity benefits delivered through AI technologies.

Personalised care

Modern healthcare increasingly prioritises personalised, proactive care tailored to individual patient histories, preferences, and needs. AI-powered tools, combined with timely and reliable access to comprehensive health records, empower clinicians and multidisciplinary teams to develop bespoke care plans that anticipate risks and respond to changing circumstances. This supports a shift from reactive to proactive treatment, improving outcomes and patient experiences.

To realise the full potential of personalised care, AI systems must be designed to respect patient preferences, integrate diverse data sources, and adhere to the highest standards of data governance. PRSB’s standards provide a critical foundation for this, ensuring that health information is accurate, relevant, and accessible when needed. Regulators should ensure that AI solutions facilitate, rather than hinder, the delivery of truly personalised care.

Royal College of Physicians (RCP) perspective on Digital and AI

RCP (a PRSB member) published the ‘RCP View on Digital and AI’ in January 2026. They recognise the transformative potential of digital AI in healthcare, especially in enhancing diagnostic accuracy, streamlining workflows, and supporting clinical decision-making.

The report advocates for the responsible adoption of AI, emphasising the need for robust evidence of efficacy, safety, and ethical use. Their recommendations include ongoing professional education about AI, multidisciplinary involvement in system design, and clear accountability frameworks to address clinical risk.

Furthermore, the RCP underscores the importance of transparency, explainability, and patient involvement in the deployment of AI. Systems should be subject to rigorous clinical validation and post-market surveillance to monitor real-world impacts. The RCP supports regulation that balances innovation with patient protection, ensuring that digital AI tools enhance, rather than replace, the clinician’s judgement and the patient’s voice.

We support RCP’s report and recommendations.

PRSB recommendations

Our recommendations align to the stated goals of the MHRA’s National Commission, and we are exploring these further with our members through our Advisory Board (which we will share with the Commission in due course):

  • Mandate compliance with recognised health information standards for all AI solutions deployed in healthcare, as advocated by the PRSB and our members.
  • Establish a (funded) Accelerator for PRSB to convene industry and professional bodies that will agree consensus on care processes appropriate for AI, and those where clinical safety ‘guard rails’ are required.
  • Ensure that information standards evolve in tandem with AI tools to support safety, accuracy, and interoperability (and that AI does not become a de facto substitute for the development of standards, producing outputs that are not clinically validated and could compromise patient safety).
  • Require transparent, explainable AI systems to ensure clinicians and patients understand how decisions are made and can challenge or override them when necessary.
  • Implement rigorous validation and post-market surveillance to monitor AI performance, safety, and real-world impact with a clear feedback loop into improving information standards.
  • Promote interoperability by prioritising standards that enable seamless data sharing across care settings and multidisciplinary teams.
  • Support ongoing professional education about AI, including its limitations, risks, and opportunities, for all healthcare staff.
  • Encourage independent professional, patient and public involvement in the design, evaluation, and governance of AI systems, ensuring that solutions reflect diverse needs and preferences.
  • Establish clear accountability and liability frameworks for the deployment and oversight of AI, delineating responsibilities among clinicians, developers, and regulators.
  • Support the development of a regulatory framework that ensures the safety and efficacy of AI applications without stifling innovation, and that reflects the capacity of AI to improve and learn over time.