More with Less? AI and health care

Summary

Jacob West – More with Less? How AI could revolutionise productivity in health care – and beyond.

AI now seems to be everywhere. With frequent attention-grabbing headlines about its impact, the fast pace of AI development in 2022 and 2023 has permeated every corner of the earth and every industry. Microsoft’s partner, OpenAI, is behind the hugely popular and category-defining ChatGPT.

But AI is not a phenomenon of the last year alone. The term ‘Artificial Intelligence’ can be broadly traced back to the 1950s and the mission to develop computer science to the point where a computer could pass the Turing Test (https://en.wikipedia.org/wiki/Turing_test). In 1959, a subset called Machine Learning emerged, which sought to ‘learn’ from the data already shared with the programme and use it to make decisions or predictions, extrapolating or recommending outcomes based on patterns and probabilities.

Over the next 40 years, computer science progressed in leaps and bounds, helped by bigger, faster and more affordable compute power, and of course the internet. In the 2000s the state of the art was Deep Learning, which took Machine Learning to a new level, using layers of ‘neural networks’ to process data and make decisions with greater accuracy and sophistication.

Fast forward to the 2020s and you have the explosion of ‘Generative AI’ onto the tech scene, with ChatGPT the first to be released to the public in an accessible ‘chat’-style format. What is so powerful about this latest development is that it requires no experience or understanding of any coding language, and it can attempt to answer any question without being ‘trained’ on a sector- or industry-specific dataset. The combination of natural language prompts (ie, talking to it as if you were chatting to a real person) and the reasoning engine that does the work behind the scenes is what has taken AI from a niche, scientific and highly specialised tool to the mass market.

Microsoft already has a series of products available to customers, including the flagship ‘Microsoft 365 Copilot’. This intelligent assistant uses AI to respond to your requests, helping to complete tasks within Microsoft 365 programmes (such as Word and PowerPoint) and making additions and edits to improve your work. Adding new slides, drafting text, editing, reformatting, translating and more can be done via a chat in the sidebar. The applications for this sort of technology are endless, and for the first time it is affordable and accessible to every industry.

As the public, we are still feeling our way through this new landscape, and new use cases are being found every day. The top use cases so far for Generative AI are:

  • Content generation – writing and drafting text, generating images, and chatting with customers via messages in a natural, human-sounding way.
  • Summarisation – extracting insights from call logs, chat transcripts or social media posts and likes.
  • Code generation/modernisation – converting natural language into programming languages and writing code documentation.
  • Semantic search – searching reviews for a specific product or service (see the sketch after this list).
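
As an illustration of the last item, below is a minimal semantic search sketch in Python. It assumes the open-source sentence-transformers library; the model name, the example reviews and the query are purely illustrative and are not drawn from the talk.

```python
# Minimal semantic search sketch: embed reviews and a query, rank by cosine similarity.
# Assumes the open-source sentence-transformers library; reviews and query are illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer

reviews = [
    "Battery life is excellent, easily lasts two days.",
    "The screen cracked within a week of normal use.",
    "Customer support resolved my billing issue quickly.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")   # small general-purpose embedding model
review_vecs = model.encode(reviews)               # one vector per review
query_vec = model.encode(["complaints about build quality"])[0]

# Cosine similarity between the query and every review
sims = review_vecs @ query_vec / (
    np.linalg.norm(review_vecs, axis=1) * np.linalg.norm(query_vec)
)

for idx in np.argsort(-sims):                     # best match first
    print(f"{sims[idx]:.2f}  {reviews[idx]}")
```

The point of ‘semantic’ search is that the query and the reviews are compared in an embedding space rather than by keywords, so a review about a cracked screen can match ‘build quality’ even though the two share no words.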

In healthcare, Jacob gave examples of Generative AI use cases already spanning from R&D to care delivery. Within R&D these include drug discovery and development, scientific literature reviews, and clinical trial recruitment and management; AI has also been used to spot breast cancers and to better target radiotherapy. In care delivery, applications include patient engagement and communication, operational workflow automation, population insights and personalisation.

DAX Copilot is a sector-specific application of these generic tools, trained to automate the drafting of clinical summaries. In seconds, the programme can take audio files or transcripts and turn them into summaries in a consistent, standard format, with the benefit of having already ‘read’ millions of training consultations from thousands of other professionals. This can improve the consistency and quality of note-taking, while significantly reducing the clinician time spent on this crucial drafting task.
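
To make that workflow concrete, here is a heavily simplified Python sketch of the transcript-to-summary step. It is not the DAX Copilot implementation, which is proprietary: transcribe_audio and generate_draft are hypothetical stubs standing in for whatever speech-to-text and language-model services a real deployment would use, and the note headings are illustrative.

```python
# Simplified sketch of an ambient clinical documentation workflow.
# NOT the DAX Copilot implementation: `transcribe_audio` and `generate_draft` are
# hypothetical stubs for real speech-to-text and language-model services.

NOTE_TEMPLATE = (
    "Draft a clinical note from the consultation transcript below, using the headings "
    "Presenting complaint, History, Examination, Assessment and Plan. "
    "Do not add information that is not in the transcript.\n\nTranscript:\n{transcript}"
)

def transcribe_audio(audio_path: str) -> str:
    """Placeholder: a real system would call a speech-to-text service here."""
    return "Patient reports two weeks of intermittent chest pain on exertion..."

def generate_draft(prompt: str) -> str:
    """Placeholder: a real system would call a large language model here."""
    return "Presenting complaint: intermittent exertional chest pain (2 weeks)..."

def draft_clinical_summary(audio_path: str) -> str:
    transcript = transcribe_audio(audio_path)                            # consultation audio -> text
    return generate_draft(NOTE_TEMPLATE.format(transcript=transcript))   # draft in a standard format

if __name__ == "__main__":
    print(draft_clinical_summary("consultation_001.wav"))
```

The design point to note is that the output is only a draft in a standard format, intended for the clinician to review and sign off rather than to replace their judgement, as the limitations below make clear.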

There are, however, limitations which should be fully considered when taking AI from an academic or hypothetical experiment into a real healthcare or research setting. These products are not a substitute for the guidance of a licensed health professional and are not regulated as medical devices. They should be used only as a general-purpose tool, perhaps to produce a first draft of a discharge letter or a first cut of the analysis of a large research dataset.

As the world gets wrapped up in the benefits and risks of AI and what it can do for us, it is important to lay the groundwork of trust. Jacob shared that significant work has gone into establishing a foundation of trust at Microsoft: your data will always be your data; your data will not be used to train OpenAI foundation models without your permission; and your data is protected by comprehensive enterprise compliance and security controls. In healthcare specifically, Microsoft has established six principles of responsible AI: fairness, reliability & safety, privacy & security, inclusiveness, transparency, and accountability.

Questions from the audience of fellow Harkness Fellow alumni prompted a lively discussion. The topics covered in the Q&A included: 

  • The bias within training datasets perpetuating inequalities based on gender, ethnicity and other protected characteristics, and how to counteract this effect.
  • The eco-credentials of the scale of computing resource required to power AI programmes, and the interplay with our net zero carbon goals.
  • How healthcare (and other) professionals are going to be educated to use Generative AI appropriately and ethically, and what the public might or should expect when it comes to healthcare professionals using AI as part of their care. 
  • The need for regulation of AI, and the role that governments should, could or might play in regulating what is already a transnational phenomenon.

Jacob’s final recommendation was to give it a go: to (responsibly) try different applications of AI and explore how it could help improve productivity for the benefit of our patients in healthcare.

About the Speaker

Jacob (HF 2014/15) leads Microsoft’s UK healthcare and local government business. A former adviser to two UK Prime Ministers, Jacob has worked in healthcare at local, national and international levels, in the UK and overseas. He was a Harkness Fellow at the Harvard School of Public Health in 2014/15 and is a Visiting Senior Research Fellow at King’s College London’s Public Policy Institute.