“Medicine is not an area where you can be disruptive”
Medical ethics expert Alena Buyx explains that we must weigh the advances accompanying technological developments in healthcare against the potential harm they could cause.
Many have high hopes that Big Data, AI and other technological developments will revolutionise healthcare. But with the advances that come with technologies like the genome editing tool CRISPR, we also need to weigh the potential harm, argues medical ethics expert Alena Buyx.
How has AI changed the field of medical ethics?
ALENA BUYX: Not that much has changed, but there has been a clear progression. We have addressed Big Data approaches in genomics and personalised or precision medicine. Slowly, but quite intensely, deep learning and machine learning entered the field of data-rich medicine, and people discovered that if you have medical, lifestyle, genomic data and so on, you can mine it quite successfully with machine-learning algorithms.
Now you have these machine learning-based algorithms that can make autonomous decisions: diagnosing symptoms, suggesting treatments, or responding in a therapeutic fashion to patients. That is the stage we are currently in.
Medical ethics has always looked at such developments and tried to understand what they mean for clinical practice, for doctor-patient relationships, and what kind of advantages and problems there might be.
Proponents of these developments say they will change the whole system, that we will be going from a treatment-focused medicine that looks at a sick patient to an antecedent health-propelling system.
Yes, with the predictive power that these algorithms bring, we have better chances of making strides towards prevention and understanding the many predispositions for illness. Of course, all the data-driven approaches are very powerful, and I hate to be the party pooper, but I don’t think we will see a radical transformation of the medical system. Hopefully, though, we will get to a place where we are much better at catching and treating things a lot earlier.
Yet for the first time in history, we have data that can make projections into the future and enable us to change our lifestyles in advance.
Some will be able to do that, but many people won’t because it’s very hard to change your lifestyle. Plus, illness is complex, multidimensional and multi-factorial. Often, it doesn’t just have one cause that we can avoid or address, like a particular behaviour or a specific genetic component. With many illnesses, particularly those that kill most people, it’s usually a confluence of a variety of factors and often happens over time. Even with new data-driven approaches, we can often only predict and address part of that complexity.
But even if we do have highly predictive power, which we will have down the road, it won’t be available everywhere, and many won’t be able to respond to it, or won’t have the resources to do so. My hope is that we will make this technology work for the widest swath of society possible, but my worry is that it might widen health inequalities. It could go both ways. I strongly hope it will be very broadly distributed, but I’m not holding my breath.
Robotics and AI-powered care
With people reaching more advanced ages than ever before, new diseases will emerge. Researching and treating them costs a lot of money. There’s not much data available from the past, since people didn’t live as long. Do you have high hopes for this particular niche of medicine?
Yes, it is a promising group, and there are some interesting assistive technologies. Robotics and AI-powered systems could really help people live at home for longer and be fitter and healthier. I expect it to be a very promising area but, again, ageing and illness are complex. Elderly people usually aren’t digital natives, so there’s a bit of a digital divide. Whether they will be able to quickly adopt these therapeutic and preventive measures is an open question.
But that’s more in the realm of care than in medicine, isn’t it?
Yes, and there are other questions. A big issue we have regarding the ageing population is loneliness. It’s a “silent killer” and a huge factor for morbidity and early death. We know about it, but it’s not really at the forefront of policymaking yet. Some say we will never be able to solve it with machines, and that we need to be better at integrating older people into society. Others say we might not always be able to do that with everyone and that if we can provide people with a social experience, it could help, even if it’s not with another human. It’s an interesting area for us to see if we can address some of these health determinants and do so in a responsible way.
The unknown unknowns
Change over the last 10 years has been a mixture of hardware and software. In the health industry, it has also been multifaceted, with multiple factors at play. It seems reasonable to assume that there will be other changes in society related to these algorithms.
Yes, you’ll have the unknown unknowns. You’ll have benefits that we can’t even anticipate yet. You’ll have synergies, you’ll have sudden, unexpected interplay of certain interventions you didn’t expect. And AI used in the workplace could even suddenly improve health for some reason. Of course, the same is true for risks and unanticipated harm. We’ll have to be vigilant.
In what way?
We have to perform a careful analysis of the potential benefits and dangers with each application. That’s something we have done for decades with new medical technology, so we know how to do it. We need to anticipate how the technology will help, who will benefit, and who will not. We have to anticipate potential dangers and try our best to predict the unintended consequences. With these algorithms, we know that there are areas where they can do harm, such as built-in bias, which would be very problematic in medicine. And there are, of course, a number of data ethics questions. So, we have to assess all this for the applications we want to develop and implement.
Lacking technological infrastructures
When will consumers begin to notice the effects of AI in the way they are treated or in the ways in which their health is monitored?
One thing I want to say, because I’m based in Germany, is that we are far away from anything like using this kind of data as part of a digital infrastructure in health. I’m not talking about consumer products. I hear a lot of, “But oh, we have smartphones, we have all of this technology, we can do anything.” Yes, in principle we do have the technology. But in health, especially in countries like Germany, we don’t yet have the infrastructure needed to actually use the data for clinical applications. One of the worries I often hear from developers is that this will all happen, and the tech will be great, but it will all be developed in China or the United States. That means we won’t know if and how these applications will work with our German or Austrian populations.
Could you elaborate a bit more on what you mean by “unknown unknowns”?
Medicine as a business, but also as a profession, is always on the lookout for something new and, at times, we find something in unexpected corners. Medicine just takes whatever there is, even if it’s a robot with an algorithm, and tries to see whether that can help patients. To me, that impulse is a wonderful thing. Physicians are usually very open to all kinds of innovations and technological developments. If you’ve spent at least a decade in medicine, you’ve seen some great advances, but also a lot of things that didn’t work. Every once in a while, though, something comes along. Suddenly you can treat hepatitis C, people are surviving certain cancer treatments, and you have the first target molecule for amyotrophic lateral sclerosis (ALS), which nobody thought would ever be possible. That’s cool!
With the algorithms, probably the biggest impact so far is that they allow the recognition of new patterns. In research, they help us study data in new ways that provide all kinds of ideas about which kind of molecules could work, or what kind of causal pathways we might not even have been aware of. It’s a mix between a scientific and an entrepreneurial spirit, and it’s very much geared towards innovation, which has probably been a part of medicine since the days of Hippocrates.
The dark side of medical innovation
There has been a lot of news lately about CRISPR and genome editing.
As a tech-friendly ethicist, I’m so excited about a new technology like this. There’s so much potential to do good. But it does have a dark side. Medicine is not an area where you can be “disruptive”. Nor should you be! You can’t do innovation the way you do it in Silicon Valley, because you can’t “break things and move fast”. Many are familiar with the Theranos case, the failed health tech company led by Elizabeth Holmes, and its attempt to use this kind of rapid, transformative innovation framework to disrupt the entire laboratory medicine market. None of it worked – it was a huge fraud. It also highlighted that, at the end of the day, innovation in medicine is quite slow, and trying to jump ahead in a “disruptive way” can harm people. Whether you like it or not, it takes at least a decade to get from proof of principle to deployment in routine care – and for good reason.
What He Jiankui did in China with CRISPR and what Denis Rebrikov now intends to do in Russia – editing embryos and implanting them in women – is incredibly irresponsible. The technology is nowhere near the stage where we know if it’s safe enough to try it in the clinic. This is unethical on so many levels and, again, an example of trying to be “disruptive”, trying to be first, and accepting a level of potential harm to those involved that should not be accepted. That’s the dark side of this kind of innovation, and it is the reason we need to focus more on the ethics of innovation. We cannot just be medical innovators – we have to be medical ethicists at the same time, because if we are not, we will harm people.
As you pointed out, this isn’t necessarily new. The novelty here is the scale. And the pace. Hardly a week goes by without a report coming out of Silicon Valley or elsewhere suggesting we can now predict how long we have to live – to the point that people are asking themselves what the future of medicine will look like. What can we really expect in the years to come? Will we have the ability to predict the exact date of our deaths?
No, not for quite some time. This is such a meta answer, but the marketing of new studies and major papers has gotten very good. That didn’t used to happen. This is where the media, innovation and medicine converge. There’s a lot of news out there about medical advances, and some of it is grossly overhyped, driven by the desire to be recognised as the first, as the next innovator. We must be aware that we are not talking about just selling phones or cars, we are talking about living people. We should be a bit more responsible, also in communications.
Not too long ago, news broke that a way to reverse ageing had been discovered.
And yes, it was true to some degree. I’m not denying that there are steps being made, but what I am worried about is that with the overhyped reporting, people will get scared. They will think, “What the hell? This sounds terrible, I don’t want to know when I’m going to die! And do I actually want to be able to reverse ageing?” That kind of fear also doesn’t help build trust.
Frameworks and principles
What are the cornerstones of the ethos you are advocating? And how can we ensure that this ethos is also heard in countries whose systems are based on human dignity?
That’s what the theoretical part of my work is about. We have our frameworks and principles that we employ in medical ethics. We also have principles that we use in research, based on the Declaration of Helsinki, and a mix of human rights-based frameworks, application-oriented or sector-oriented ethical frameworks, and of course regulations and laws. With regard to the overarching principles, we are fine. We have the main principles and we have the frameworks, and these also differ between places. To some degree, that’s good. Ethical frameworks need to be built on universal principles, but they should be responsive to environments, resources and settings.
The task now is to understand what the principles mean in each context and how the frameworks apply.
So, when I ask: “You’ve written this algorithm. Can you make sure it respects patient autonomy?” – what does that mean concretely? Asking these questions and considering potential answers is something medical ethicists have been doing since the field has existed. It’s a very exciting time for us, and there is going to be plenty to do.
These days there are not only lots of science and health stories every week, but also many ethics stories. That is new. Something has changed, and I think it has come from an erosion of trust. There is a higher interest in ethics. I wish people didn’t have to be afraid and that this interest could come from a more positive place, but I’ll take it.
In your opinion, will doctors develop a deeper sense of how their work affects ethics?
Yes, because ethics has been part of medicine for a long time and is part of the curriculum, even if only a small one. Every doctor is trained in medical ethics. I always tell my students that every doctor is also a practical ethicist, whether they know it or not. But, of course, it’s not something that people are always conscious of.
Ethics has always been a part of the fabric of medicine. Maybe that wasn’t always the case in the tech area. People have thought about ethics for decades there as well, but it wasn’t such an accepted part of the field. This is changing now, and that’s something I can only applaud.
This interview is part of a collection of essays and interviews by Alexander Görlach: