Fear of Artificial Intelligence in Medicine and Healthcare
“But your GP can make a mistake in your diagnosis, and you would still prefer her? You realise that the AI will give you the correct diagnosis 99.9% of the time!” I exclaimed in disbelief.
“Yes, I know. But I still want someone to speak with, explain my symptoms to … you know … someone to have a discussion with, not a machine,” Laura replied.
Laura’s ‘illogical’ reasoning for rejecting an AI primary care service (or, more accurately, its potential application) prompts us to review and assess the technological revolution currently taking place in medicine and healthcare. A great deal is changing in these fields, yet the general public, who will be the most affected by the changes, have rarely been part of this discourse. So let us review a small part of the technological disruption currently underway in medical diagnosis and treatment.
A 2017 Mayo Clinic research study published in the Journal of Evaluation in Clinical Practice found that 20% of patients who sought a second medical opinion at one of the US’s leading medical institutions had been misdiagnosed by their primary care doctors. While this percentage cannot be generalised across all outpatient visits, it hints at the scale of the problem. In 2016 alone, there were approximately 863 million outpatient visits across the United States. If only 5% were misdiagnosed (setting aside the recorded 20%, since in all likelihood those were patients with serious conditions that warranted a second opinion) and placed on an incorrect treatment path, then potentially 43 million visitors received incorrect medical advice. In fact, the US National Academy of Medicine reported in 2015 that each and every one of us will receive an incorrect or delayed diagnosis at least once in our lifetime.
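The back-of-envelope estimate above can be checked in a few lines. The visit count and the assumed 5% error rate are the figures quoted in the text, not independent data:

```python
# Back-of-envelope check of the misdiagnosis estimate quoted in the text.
outpatient_visits = 863_000_000  # approximate US outpatient visits in 2016 (figure from the text)
assumed_error_rate = 0.05        # conservative 5% misdiagnosis rate assumed in the text

misdiagnosed = outpatient_visits * assumed_error_rate
print(f"Estimated visits with incorrect advice: {misdiagnosed:,.0f}")
# About 43 million, matching the figure quoted above.
```

Even at this deliberately conservative rate, a quarter of the 20% measured in the Mayo Clinic sample, the absolute number remains enormous, which is the point of the estimate.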
General Practitioners (GPs) follow a set of protocols when presented with symptoms. Based on their learned knowledge and acquired experience, they draw up a treatment plan or refer patients for further investigation or to specialists. Machine-learning algorithms are now capable of the same, and they tend to fare better, since ‘real-life’ variables such as high patient volumes, examination times shortened by quick-turnaround targets, a patient’s state of mind and other external factors have no bearing on their performance. Perhaps the best illustration is IBM Watson’s ability to analyse the genomic data of tumour cells and healthy cells to harvest actionable insights. IBM Watson took 10 minutes; a human needs 160 hours of dedicated time to arrive at the same conclusion. In real life, the luxury of dedicated time and distraction-free environments does not exist. Mistakes happen, oversights occur, and biases exist.
Most doctors are aware of the above-mentioned margin of error. Medical institutions understand the adverse consequences, and insurance providers pay the bill. It is no wonder that we see multiple initiatives across the world in which collaborations are forged between technology providers, research scientists and medical institutions to harness the power of machine-learning algorithms and big data to improve diagnostics and care. Take, for example, the initiative launched in May 2018 by University College London Hospitals, UK, which announced a three-year partnership with The Alan Turing Institute, the UK’s national institute for data science and artificial intelligence. Their objective is to bring the benefits of AI to the NHS through multiple projects, one of which is AI-enabled diagnosis and treatment.
“Imagine a world where we could use this data to develop algorithms to rule out diseases, suggest treatment plans or predict behaviour… that is more than possible with the wealth of data we have available and the expertise at The Alan Turing Institute,” stated Professor Bryan Williams, Director of Research at University College London Hospitals NHS Foundation Trust, about the new partnership.
However, the convergence of AI with clinical practice should not mean the end of human-delivered healthcare. Laura’s emotional response to the suggestion of AI in medical care stems from two concerns: first, a trepidation towards AI, fuelled by dystopian narratives in which machines develop consciousness and take over the world; and second, a worry that AI’s lack of consciousness will hinder its capacity for empathy and prevent it from establishing a relationship with her.
In fact, her emotional response to what should be a purely functional problem (identifying symptoms and reaching a diagnosis) echoes Harry Harlow’s infamous monkey experiment of 1959. In that experiment, infant monkeys were separated from their mothers within 12 to 14 hours of birth and raised by a pair of surrogate mothers: one made of mesh wire with a feeding bottle, the other a wooden frame covered in terry cloth but with no food supply. Harlow noted that the infant monkeys preferred the terry cloth-covered mother even though she could not feed them. He concluded that physiological needs (the functional solutions) are not the only variables necessary in the upbringing and care of primates.
This appears to be the case with humans and healthcare. We cannot simply remove the ‘care’ component from healthcare, even if doing so leads to more efficient services. The human element, which nurtures the empathic sensitivities wired into our neurocircuitry, needs to be present. Almost all AI-related healthcare entities are aware of this, and none suggests replacing GPs with AI doctors. It is not a question of one replacing the other, but rather of developing a new workflow in which AI complements humans in diagnostics and clinical care to deliver better results.
People in the field need to devote as much time and effort to exploring AI application models and clinical workflows as they currently expend on AI development. Investigating processes and potential new roles that highlight the uniquely human skills necessary in healthcare should have the same priority as our fervour to mine big data and expand its input sources.
Above all, Laura and the rest of the general public need to be kept up to date and correctly informed about the ongoing changes. Their views should be taken into account when developing the new augmented systems, and their concerns must be addressed before implementation. Unlike entertainment, healthcare is part of a nation’s core services, and the public have a right to understand its ongoing evolution.