Scotland’s fortnightly political & current affairs magazine

What will AI mean for the health service?


Alana’s career has been going well over the last couple of years. In fact, walk into Heriot-Watt University’s state-of-the-art GRID facility, home to its innovation research, and you can meet her.

Currently working under the title of ‘intelligent assistant’, she gives visitors directions around the building and helps students find spare desks and free computers.

But unlike the other staff at the university, Alana is not a person. Instead, she is an advanced form of Artificial Intelligence (AI) software, who can understand and respond to human conversation. Smart speakers dotted around the building allow Alana to provide live updates to the people using it.

Beyond that, Alana is also highly skilled in regular conversation – she reached the final of the International Amazon Alexa Prize in both 2017 and 2018 – and she is getting better all the time. Professor Oliver Lemon, Director of the Interaction Lab in Heriot-Watt University’s School of Mathematical and Computer Sciences, says the team have learned a huge amount over the last couple of years.

He told Holyrood: “We have collected six million data points of users talking to our system, so we have got a lot better at keeping conversations more interesting and more engaging for longer. Everyone in AI is continually trying to push the boundaries on this, and we would never claim it is by any means perfect. Sometimes it doesn’t quite understand what the human is saying, but the more data we collect, the more we can work on it, because it gets better as we learn.

“It is particularly useful for finding out new information, and for things like browsing news and Wikipedia. So it’s quite different to Siri and Alexa, where you say a command – to set a timer, or ask about the weather – it responds, and that’s the end of the conversation. What we’re doing is long, multi-turn conversations over multiple topics.”

He adds: “If we want these systems to advance, you have to make them much more engaging, so that people want to use them more. Our system is one of the few that can blend together these task-based things – like getting the news – but in the background, it is far more social. The system is learning what you’re interested in, what you’re not interested in, and it is proactively suggesting topics for conversation. Taking initiative in the conversation is something Alexa and Siri don’t do.”

It is easy to see how useful Alana could be in a health environment, either as an intelligent assistant in a hospital or public building, or in more tailored applications. In fact, the team are currently in discussions with health bodies over its use. “The idea is to create support for elderly care,” Lemon explains. “Using conversational AI systems that can help support people but also work on loneliness, isolation.

“One of the main things we want to do is explore the healthcare applications of conversational AI. We are starting to work with the Royal Blind. It’s early days but we are in negotiations with them about testing versions of Alana for helping blind people. It’s a similar group, in a way, because a lot of blind people are also elderly, so there’s a big overlap.”

Meanwhile, a group of researchers is also working on hooking Alana up to a vision system.

“Potentially, a blind person would be able to ask, ‘what’s in front of me?’. It could be used in the home, or imagine you came into a new building and you either didn’t know where to go, or you couldn’t see, then it could give you directions,” Lemon says.

AI clearly has huge potential to change the way we provide, or even think about, healthcare. But what will it mean for health professionals? What will an expert system mean for a human clinician?

“I think it will free up their time and allow them to focus on the more interesting and complex cases,” Lemon says. “For example, you could imagine a 111 service being at least partially automated, so a doctor would be given a kind of summary of what a patient’s said so far, and then be able to flag up if this is something that requires an immediate ambulance or if it’s something that requires more human intervention. I don’t think it should be seen as threatening anyone’s job or livelihood, it should be seen as an assistant which can help you get to where the more interesting, or problematic cases are more quickly.”

Clinical trials have shown AI can be as good as doctors in identifying lung cancer and skin cancer, while a study by Moorfields Eye Hospital in London and the Google company DeepMind found a machine could learn to read complex eye scans and detect more than 50 eye conditions as effectively as clinicians.

HIV Scotland is keen to explore the role AI could play in supporting people living with long-term conditions, including HIV, in the hope that new technology might improve access and reduce dependency on services.

Other tools have been developed that can predict ovarian cancer survival rates and help choose which treatment should be given.

Meanwhile, because of its power in data processing and modelling, AI can also help filter out unnecessary or less urgent requests for imaging and other diagnostics, so that only essential cases are sent to an expert, and the less important ones can be triaged elsewhere.

Yet the effectiveness of AI relies heavily on the quality of the data it receives, and – with a range of public bodies having faced criticism for recent data breaches – some have expressed concern over the way that data will be handled, particularly in health, where information on patients is likely to be sensitive.

And although its ability to process and model data means AI can save time in a huge range of activities, the process has faced criticism for being biased in terms of characteristics such as race, gender and age.

The House of Lords Select Committee on AI has warned that the datasets used to train AI systems are often poorly representative of the wider population, leading them to make unfair decisions that reflect wider prejudices in society. The committee’s report also found that biases can be embedded in the algorithms themselves, reflecting the beliefs and prejudices of AI developers.

Yet researchers are working to address these failings, with experts from Queen’s University Belfast developing a new algorithm that changes how AI clusters data, in order to correct biases in areas such as recruitment.

Dr Deepak Padmanabhan said: “AI techniques for data processing, known as clustering algorithms, are often criticised as being biased in terms of ‘sensitive attributes’ such as race, gender, age, religion and country of origin. It is important that AI techniques are fair.”

He added: “When a company is faced with a process that involves lots of data, it is impossible to manually sift through this. Clustering is a common process to use in processes such as recruitment where there are thousands of applications submitted. While this may cut back on time in terms of sifting through large numbers of applications, there is a big catch. It is often observed that this clustering process exacerbates workplace discrimination by producing clusters that are highly skewed.

“Our fair clustering algorithm, called FairKM, can be invoked with any number of specified sensitive attributes, leading to a much fairer process.”
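The skew Padmanabhan describes can be made concrete. FairKM itself builds fairness constraints into the clustering step; as a loose illustration only, the following Python sketch (the `cluster_balance` function, its name and its toy data are this article’s invention, not part of FairKM) measures how far each cluster drifts from the overall rate of a binary sensitive attribute – a common way of quantifying the kind of skew fair clustering algorithms try to prevent.

```python
def cluster_balance(assignments, sensitive):
    """Balance of a clustering with respect to a binary sensitive attribute.

    For each cluster, compare its fraction of protected-group members (p)
    with the overall population fraction (P). Balance is the minimum over
    clusters of min(p/P, P/p): 1.0 means every cluster mirrors the
    population; values near 0 mean at least one cluster is highly skewed.
    """
    overall = sum(sensitive) / len(sensitive)
    balance = 1.0
    for c in set(assignments):
        members = [s for a, s in zip(assignments, sensitive) if a == c]
        p = sum(members) / len(members)
        if p == 0 or overall == 0:
            return 0.0  # a cluster with no protected members is maximally skewed
        balance = min(balance, p / overall, overall / p)
    return balance

# Skewed clustering: cluster 0 holds both protected members, cluster 1 none.
skewed = cluster_balance([0, 0, 0, 1, 1, 1], [1, 1, 0, 0, 0, 0])
# Fair clustering: each cluster mirrors the overall 1-in-3 rate.
fair = cluster_balance([0, 0, 0, 1, 1, 1], [1, 0, 0, 1, 0, 0])
print(skewed, fair)  # prints "0.0 1.0"
```

A fair clustering algorithm such as FairKM, in effect, trades a little clustering quality for a guarantee that this kind of balance stays high across all the sensitive attributes it is given.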

And so, while concerns remain, proponents argue that the bias comes from the people who design the systems, rather than from a failing in the technology itself.

But it is also clear that there are limits to the uses of AI in a health setting. As Lemon explains, in discussing Alana’s potential use in social care, the aim is not to replace human contact, but instead provide an additional tool in supporting people.  

“One of the things we need to be careful about is, we are not trying to make elderly people just sit and talk to AIs all day, we actually want the AI to support elderly people to talk to each other, and maybe do things in real life – go to community events and things like that. We see these systems as a kind of way of enhancing real human contact. It’s not about elderly people sitting in their bedrooms talking to these devices all day – we want to put humans in touch with each other, and get them to talk.”

Read the most recent article written by Liam Kirkaldy - On pause: How coronavirus is hitting the hospitality sector


