How will standards keep AI safe in the NHS?
A blog co-written with our partner, HD Labs
In this year’s budget, the Government pledged an additional £3.4bn to ramp up digitisation and automation in the NHS, motivated by the estimated £35bn in productivity savings technology offers over the next parliament.
Whilst technology and automation are crucial enablers of this short-term opportunity, they also present the NHS with a far bigger one. Over the next 10 to 15 years, technology can help the NHS shift from a reactive model of care to a proactive, preventative one, building a more sustainable service.
This is possible now thanks to the vast amounts of personal health data captured by smartphones, wearables and other IT systems, coupled with the acceleration in computing power available to process that data.
Central to automation is Artificial Intelligence (AI), which can increasingly use large volumes of novel structured and unstructured datasets to uncover hidden patterns, trends, preferences and other insights that help inform better decisions.
For example:
- To keep people healthy: AI-powered smartphone apps allow people to monitor their blood pressure at home, arming them with the knowledge to take necessary steps.
- To identify disease earlier: processing of primary care data enables secondary care teams to better identify post-natal depression amongst new mothers with no previous history of mental health issues.
- To reduce the impact of disease: for those living with diabetes, devices continuously monitor glucose, letting people know their current level and where they’re heading, improving management of their condition.
- To improve mid-office productivity: applying large language models and prompt engineering to Patient Advice and Liaison Service (PALS) communications, to increase quality, reduce demands on clinicians' time, enhance patient satisfaction, and cut response backlogs (a minimal sketch follows this list).
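To make the last example concrete, here is a minimal sketch of what such a workflow might look like. It assumes the OpenAI Python client; the model name, system prompt and enquiry are purely illustrative, and in any real service the data flows would need information governance approval, with a PALS officer reviewing every draft before it is sent.

```python
# Illustrative sketch only: drafting a PALS response with a large language
# model. Model name and prompt are hypothetical choices, not recommendations,
# and a human reviews every draft -- nothing is sent automatically.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You draft empathetic, plain-English replies to NHS Patient Advice and "
    "Liaison Service (PALS) enquiries. Do not give clinical advice. "
    "Flag anything that needs a clinician's input."
)

def draft_pals_reply(enquiry: str) -> str:
    """Return a draft reply for human review -- never send it automatically."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model would do
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": enquiry},
        ],
    )
    return response.choices[0].message.content

draft = draft_pals_reply("I have waited six weeks for my referral letter.")
print(draft)  # a PALS officer edits and approves this before anything goes out
```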
The safety concerns of AI
AI is not new: for decades it has helped us diagnose illness more accurately, customise treatments, and predict and prevent disease. But the explosion of available data and continued advances in computing power are accelerating and amplifying its abilities and usage.
This increased implementation of AI, be it for productivity gains or to deliver proactive healthcare, can be either great or terrible news for the safety of our health and care services, depending on how we develop and implement it.
AI isn’t inherently risky, but the data and information feeding into it, and how accurately existing standards are applied to them, can be. And any error is amplified, scaled faster and further than a mistake by a single clinician. So, to uphold the safety of our services, we must make conscious decisions about all aspects of the data.
HD Labs aims to make digital public services more effective. Their data orchestration experts look for every advantage that digital processes can offer toward that goal, but they also know that a rush to embrace innovative technology brings risk. The system’s quest to innovate can have disastrous consequences if we do not apply the triple aim of healthcare – the ‘duty to have regard to the wider effect of decisions’ on the health and wellbeing of people, the quality of services provided, and the sustainable and efficient use of resources – to every algorithm we develop and every application of automation.
Without this approach, unintended consequences can appear without us even realising it. For example, bias may persist and be amplified, as in the much-cited example where images of mainly white patients were used to train algorithms to spot melanoma, potentially leading to missed diagnoses for Black patients. Another example is that AI needs to keep pace with the systems it serves. Medicine is an ever-changing field in which clinical and operational practices constantly evolve; the input data may change until it no longer resembles the data used to train the model. There are infinitely more ways in which such unconscious errors can occur. A minimal sketch of one check for the second failure mode follows.
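One way to catch that drift is to routinely compare live input data against the data the model was trained on. The sketch below uses a two-sample Kolmogorov–Smirnov test from scipy; the feature, the synthetic numbers and the alert threshold are illustrative assumptions, not a prescribed method.

```python
# Illustrative sketch: detecting data drift between training data and live
# inputs with a two-sample Kolmogorov-Smirnov test. The feature, numbers and
# threshold are assumptions for illustration, not a prescribed method.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Stand-ins for one model feature (e.g. blood glucose, mmol/L) at training
# time and in live use after clinical practice has shifted.
training_values = rng.normal(loc=5.5, scale=1.0, size=5000)
live_values = rng.normal(loc=6.3, scale=1.2, size=500)

result = ks_2samp(training_values, live_values)

ALPHA = 0.01  # illustrative threshold; a real service would tune and audit this
if result.pvalue < ALPHA:
    print(f"Possible drift (KS={result.statistic:.3f}, p={result.pvalue:.4f}): "
          "review the model before trusting its outputs.")
else:
    print("No significant drift detected in this feature.")
```

In practice a check like this would run per feature on a schedule, with alerts feeding into the clinical safety process rather than silently retraining the model.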
Use of standards to keep AI safe
Before building or implementing AI, it is important to thoroughly research the data, content and information against the system’s people, processes and technical infrastructure. It’s also vital to build a clinician into the loop, deciding at what point in the automation a clinician will be in command of the decision-making.
Whilst that may sound daunting or frustrating, the good news is that much of this research has already been done for you. Since 2013, PRSB has collaborated with people, health and social care professionals, system suppliers and others to develop evidence-based standards for delivering better care digitally. Applying relevant standards to your AI provides a shortcut to plugging in the existing thinking of experts and upholding quality across the NHS and social care.
AI is an enabler of digital change, but we can’t grasp its full potential without information standards – they are the drivers of that change, while also supporting clinical safety. By defining what information should be recorded and shared in health and care, standards help ensure that clinicians have the right data in front of them to enable informed decision-making and effective delivery of care. Importantly, recording and sharing standardised information also helps reduce the burden of data interpretation that is often placed on clinicians, while reducing the risks of errors and duplications. The PRSB has published a position statement which defines how AI relates to its information standards and how marrying up the two will help us navigate the rapidly changing world of data and deliver digital advances.
There are three specific areas where standards will help improve AI safety:
Interoperability: standards facilitate safe and seamless data exchange between systems, ensuring AI functions effectively across different settings (see the sketch after this list);
Ethics: standards support implementations that respect patient confidentiality, ensure informed consent, and are transparent about how data will be used in AI solutions;
Collaboration: fostering collaboration with technology developers, healthcare providers and service users, to shape health information standards that cater to their needs.
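To illustrate the interoperability point, here is a minimal, hypothetical example of what a standards-based record looks like: an HL7 FHIR Observation carrying a LOINC-coded heart-rate reading, built as a plain Python dictionary. The patient reference and values are made up; the point is that any system that speaks FHIR, including an AI consuming the data, can agree on what each field means.

```python
# Illustrative sketch: a minimal HL7 FHIR "Observation" resource as a plain
# Python dict. Identifiers and values are hypothetical; the LOINC code
# 8867-4 ("Heart rate") is real. Shared codes are what let different systems,
# and any AI reading from them, agree on what a value means.
import json

observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "8867-4",
            "display": "Heart rate",
        }]
    },
    "subject": {"reference": "Patient/example-123"},  # hypothetical patient
    "effectiveDateTime": "2024-05-01T09:30:00Z",
    "valueQuantity": {
        "value": 72,
        "unit": "beats/minute",
        "system": "http://unitsofmeasure.org",
        "code": "/min",
    },
}

print(json.dumps(observation, indent=2))
```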
Learn more about AI and standards with HD Labs and PRSB
There is huge optimism for automation and AI. The data available today from all aspects of life offers the potential of far greater personalisation and prediction than previously possible, transforming how the system delivers everything from diagnostics to treatment.
The confidence in automation is equally matched by caution. But this concern shouldn’t stop progress; it should act as a warning sign to stop and think about how we apply such powerful processing. Standards are a useful tool to do this, helping invention to become innovation. With the scale of automation and new rapidly developing technologies on the horizon, the adoption of standards is more important than ever.
Across a series of AI blogs, we will outline the risks and how standards are a key tool that everyone should use to keep AI safe in the NHS, and take a deep dive into the three areas where standards can help AI deliver the productivity gains sought by the Government today – interoperability, ethics, and collaboration. We will also outline how a standards roadmap can guide us to the realisation of a proactive, sustainable health system.
HD Labs and PRSB will be running a series of webinars to discuss this topic in more detail and answer any questions you may have. Please keep an eye on the PRSB’s social media and website for the webinar announcements.