Hello and welcome. Today, we're thinking about artificial intelligence in health care, and how you put it into practice in the health system. We're delighted to be joined by Dr. Indra Joshi, who is the director of artificial intelligence for NHSX here in the English health system. Hello.

Hello.

Can we start with an easy one? What do you think are the potential benefits for the health system that come from artificial intelligence?

So there are lots of benefits. I think the first thing to do is probably roll back slightly and ask: what is artificial intelligence? There are lots of definitions of artificial intelligence, and none has been clearly agreed as the true one. But you could think of it as automating systems, and you can use different types of technology, such as machine learning, to deliver artificial intelligence. You would also use it to see things that the human brain can't; that's where we get the intelligence part of artificial intelligence. Some of that may be in imaging, so seeing things that the human brain can't see in an image, or on the computational side, looking at lots of sets of data and interpreting them in a way that the human brain couldn't. We can see some really good examples of these types of technology in health and care (we mustn't forget care as well). Some good examples we've seen so far are in imaging, which is a no-brainer, and the reason for that is that imaging already has lots of good, quite quantified datasets. The other is around diagnostics. When I talk about diagnostics, I'm not actually talking about making the diagnosis, but about helping the clinician, whoever that clinician might be, assisting them to make a diagnosis.
That could be in the triage sense, so normal versus abnormal, or it could be in decision support, so looking at lots of different factors or datasets and saying: from an interpretation of all these different factors, here is a calculation of risk or mortality.

Bringing it down to specifics, which technology do you think is going to be useful first? My money is on screening. What do you reckon?

We've seen a lot of good things. We actually did a study with a group called NIHRIO, which stands for the National Institute of Health Research Innovation Observatory. They looked across all the databases internationally, and found that there are about 130-odd technologies which either have a European CE mark or have been through some other form of market authorization. The majority were in imaging, but they range from cardiology to brain, and some were in breast and eye imaging. Yes, in the UK we have a program for breast screening, so we could see some good advances there, but we mustn't forget what the rest of the data shows us, which is that cardiology and brain imaging are also a good bet.

Excellent. Okay. So what do you think the challenges are to implementing AI at the clinical front line?

This is where the tricky stuff really gets out into the open. Part of my role is to look at the technologies out there, to go and talk to people about what they're doing, but also to listen to what their problems are. Often people will come to me and say, "Well, what are the rules of the game?" Number one: when we're developing something, how do we know that it's okay to do, or that this is how you do it? The second is at the other end. So we've got a technology; somebody has come to us and said, we've got this beautiful algorithm that's going to triage your images, or help read your lung nodules, for example. What people forget is the challenge of then implementing that into their own system.
So something might be market approved, but it might be market approved on a different dataset in a different country. When you come to your own host situation, wherever that may be, a GP practice or a hospital trust, you then have to adapt that model to your own dataset, and that's a big challenge. It's a big challenge not just from a data and infrastructure perspective, but also from a skills perspective. Whose job is it to adapt that model? Whose job is it then to say, yes, you're good to go with that? These are things that people come to us and ask: "Can you help us? How do we do that?" We also look out to the community; we're trying to build a community of experts that can help in the areas I've mentioned.

That's great. So what skills do you think the clinician of the future is going to need to be able to use all this AI technology?

That's a good question. We had a recent review called the Topol Review, where Eric Topol came and looked at three different aspects: digital medicine, genomics, and AI and robotics. He listed a set of skills. But we've also gone out and spoken to the communities. Recently, we held a day at NHSX with anybody who's interested in AI, and about 80 people turned up, so it's quite exciting. Most of them said: understanding the basics. What are the basic statistics you need to know? What are the basics of critiquing these products? We all know how to critically appraise a drug paper, for example, but how do you critically appraise a paper that's talking about machine learning models and the datasets behind them? Those were the skills they named. Then they flipped over from the clinical sense to the technical sense: how do you actually adapt a model and plug it into your system? How do you understand what your system is?
Quite often people refer to that as the CIO's, the Chief Information Officer's, or the Chief Technology Officer's role, and say, "Well, we need to work in partnership here, so we can actually adapt the code to embed it into the system." So it's a range of skills that you need, and we should really be thinking of this as a new age now. What we haven't spoken about, and what people are still concerned about, is what happens to that thing, whatever it might be, a device, a model, an algorithm, as it starts learning on the dataset that you've put it into. How do you then start appraising it?

Great. Okay. So are there any particular aspects of regulation that we need to think about with artificial intelligence? Are there any challenges in that space?

The regulation at the moment is okay. We have something called the Medical Device Regulation, which looks at how a product that classifies itself as a medical device, using what we call software as a medical device, gets regulated. The challenges lie in what happens when we actually plug those stand-alone products into a live system. At the moment you can regulate a product that stands alone over here, but once it gets plugged into a system and starts learning on a live dataset, that is where the regulation hasn't quite caught up. Those are things we're grappling with now. The questions range from the technical, at what point do you know that the model is still the model you plugged in? When is that model going to decay? Models decay over time as the data varies, so what are the levels of decay that are acceptable? Then there's the safety perspective; in clinical practice, safety is paramount. Whose job is it to understand that the model has decayed, or that it needs to be turned off, and what is the operating procedure around that? This is what we call post-market surveillance in the current regulation system.
So how do we build that post-market surveillance into our operating procedures, on top of the bit I mentioned before, and do it in real time?

You've recently written some advice, Artificial Intelligence: How to Get It Right, and you've also released a code of conduct for data-driven technologies. Can you tell me a bit about what led you to produce those documents and what they advise?

Absolutely. We did it slightly in reverse order. First, we published the code of conduct, and we did this because a couple of years ago we had quite a few people come to us (we work at the center, in central government and at the center of the NHS) and say, "We're thinking about this, can you give us some advice, and what are the things we should think about?" We looked out, and there are quite a few codes of conduct and quite a few ethical frameworks, but none of them had put it all into one piece of paper, so to speak, very simply. So we went out, talked to a few people and asked, "Would this be a useful thing to do? Just put into one paper some principles and guidelines on the behaviors you should have when you're designing this type of technology." There had been some recent events at the time, not just with technology vendors but in the wider community, showing some of the bad things that can happen with AI when it runs wild, so to speak, and in health we think it's vitally important that you do have a code of practice, a code of conduct, on how you behave. So we outline ten principles. They basically go from: one, why are you designing this product? What's the point? What's the user need? What's the problem it's solving? Two, what are the regulations you need to consider? What are the ethical implications? If you are replacing somebody's job, have you thought about that? What's the impact your technology is going to have on the workforce?
Then the third is something people don't always think about: the commercial aspects. If you're replacing somebody, or you want a gain-share model, what are the commercial models in this? Also, how are you going to adapt as you're learning on your own dataset versus a separate dataset? We put all of those together. It's a live document; we aim to update it every year, and we've given it one refresh already since we published it last year. The idea is that as regulations adapt, which they will do, and as the market adapts too, we keep this code of conduct, as a set of behaviors and principles, live.

The second is the report, and the thinking there was: there is so much noise around artificial intelligence, in both health care and the wider market. What we wanted to do was cut through a bit of the noise, but also bring together this amazing community of people. So whilst we're only a couple of people in NHSX, we're actually a huge number of people; I think we had over 65 people contribute to the report. They range from people doing technology development and making products to people working in the wider community: internationally, there's the WHO and ITU working group, as well as something called the Global Digital Health Partnership, and within the UK we've got partners like HDR UK, which stands for Health Data Research UK, who are doing some fantastic work around creating pools of data and then doing exciting things with them on the front line. We wanted to put all of this together in a report. I wanted to say it was a short, sharp and sweet report, but it turned out to be about 100 pages long. That does include quite a few case studies, and it gives you a general overview of what's out there. What are the rules of the game? What are the regulations? Who should you go and talk to for data? And what are good examples of actually doing this stuff?
So I highly recommend you read it. Indra, do you think AI will replace doctors in certain fields?

I think this is a really tricky question, and I fundamentally believe doctors do much more than read an image or make a diagnosis, so my answer would be no, it won't replace doctors. What it will do is replace certain mundane tasks that any clinician, doctor or not, does. Some of those may be filling out forms; that's quite a mundane task. Some may be reading the size of a nodule, clicking a point here and a point there; there's no reason a human should do that when a machine could. And I think it will help in assisting triage: this is worrying, you should look at it now, versus this is not so worrying, you can look at it in three hours' time. That is what I think we will see. I would be highly skeptical if anybody said yes, AI is going to replace doctors, because I don't believe that.

What are your hopes for the future of artificial intelligence in health?

My personal hope is that this becomes a much more symbiotic relationship, and it's not just about artificial intelligence; I'd say it's about technology as a whole. Health, compared to other markets, has traditionally been quite skeptical, and there have to be a lot of layers of evidence and evaluation to make even the simplest thing, like booking an appointment, work in a digital space. So I would hope that, maybe not next year, but in the near future, we will see a much more symbiotic relationship, where the workforce and the market work together easily, there's less antagonism, and much more joint working on developing things that actually solve problems, versus just being a nice shiny toy.
One of the things I've heard people talking about is the potential issues and biases that emerge from the data we've got and how we train algorithms, and real potential issues around diversity and equality. What do you think the pressing challenges are in that space?

This is a really important point, and quite often it gets glossed over, because people don't always think about it; it's one of those unconscious biases or unknown unknowns. One of the things we try to push, when we talk about the proportionality or the quality of the data, is to make sure it's also diverse and inclusive. If your product is trying to solve a problem for a certain condition, you can't just train it on a very homogeneous dataset. For example, if you are using image recognition for a certain lung nodule, you can't have trained it only on a Caucasian population; it must have been trained on a diverse population, because when you then put that model into the real world, it might not give you the results you want. There's also been a lot of noise around people being excluded, and the machine can only do what you program it to do. If you have an unconscious bias, or a bias that is not unconscious but quite conscious, you will plug that into your system. If you don't recognize that there's a whole culture you've not thought about, then it will only ever do what it's programmed to do. A good example of that was a couple of years ago, when a medical school here in the UK created a program to try and sift through applications. It sifted through the applications quite well, but it completely instilled the bias that the humans had, and so they found that people from ethnic minorities and BAME backgrounds, but also people from lower socioeconomic classes, didn't make it through. That's a classic example of human bias coming back as computer bias.
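The kind of selection bias in that medical school example can at least be surfaced with a simple audit of selection rates per group. Here is a minimal sketch, using the "four-fifths rule" that is common in fairness auditing; the function names, group labels, and the 0.8 threshold are illustrative conventions, not anything from the interview.

```python
from collections import Counter

def selection_rates(outcomes):
    """outcomes: iterable of (group, was_selected) pairs.
    Returns the fraction of applicants selected in each group."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(outcomes, reference_group):
    """Ratio of each group's selection rate to the reference group's rate.
    Under the "four-fifths rule", a ratio below 0.8 is a common red flag."""
    rates = selection_rates(outcomes)
    reference_rate = rates[reference_group]
    return {g: rate / reference_rate for g, rate in rates.items()}
```

Running an audit like this on a sifting program's decisions will not explain where a bias came from, but it makes a disparity visible before the model is trusted, which is exactly the kind of interrogation of the data Dr. Joshi recommends next.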
So I would suggest that anybody who's creating a model or training on their data really thinks about the things they might not think about, and if they're not sure, goes and asks other people. Go and ask people who don't look like you, for example, to look at your dataset, and let them interrogate it and say, "Is it from a diverse source?", if it's supposed to work on a diverse population.

Fascinating. So we could be teaching our algorithms our own biases, teaching them all the worst stuff from our history.

Yeah. So we need to be alert to it, and we need to do something about it.

Be really conscious of it. So there we are: artificial intelligence in healthcare. So much promise, so many interesting new things it can do, so many tasks that automation is really going to help with, and at the same time, big issues about getting clinical engagement, working out how to fit it into the workflow, and some big societal issues about how to deal with bias and regulation. So there are new challenges from this field. Indra, thanks again so much. I hope you found this interesting: lots and lots of challenges ahead in a fast-moving field.

Yeah, and thank you for having me, and thank you for listening.