Associate feature: Being digitally inclusive
A healthcare app that lets you track everything from your height and weight to your blood alcohol level, but not your menstrual cycle. A soap dispenser that only works on white hands. Facial-recognition software that doesn’t recognise black faces. A hiring algorithm that ends up recruiting mostly white men.
There are countless examples of bias in artificial intelligence (AI) and other technology.
“AI has the ability to exponentially increase discrimination,” legal expert Ishbel Macpherson explains.
“Where bias occurs – whether it’s intentional or unintentional – there will be consequences. It will be a breach of the Equality Act, it will be a breach of the public sector equality duty, it will be a breach of administrative law.”
Macpherson is speaking as part of an all-female panel of tech industry experts, brought together at the Scottish Parliament to discuss digital inclusion and the ethics of AI.
Technology company Leidos partnered with Scotland Women in Technology (SWiT) to host the event, which was held against the backdrop of a Scottish Parliament debate and the Scottish Government’s emerging strategy on the potential of AI and data technologies.
On the topic of AI bias, panellist Data Lab chief executive Gillian Docherty admitted: “There is one thing that’s certain – we’re going to get it wrong. It’s how do you deal with it when it’s wrong, and how we can try and build those robust questions as early as possible, so that you’re minimising the chance of getting it wrong.”
Ahead of the panel, Holyrood spoke to Leidos data science manager Shirley Cavin, SWiT chair Lynsey Campbell and SWiT founder and Leidos senior marketing manager Silka Patel, about how the tech industry, and society, can become more digitally inclusive.
Campbell, who works for Pivotal Software, says it’s “concerning that some of the world’s biggest tech companies, with employees from the best universities in the world, have often been getting it wrong”.
“It’s really interesting that the smartest people in the world can get this wrong. People think that the more intelligent you are, the more ethical you will be. I just don’t believe that’s the case,” she says.
“It’s not about intellect, it’s about empathy and consideration and knowing the right thing to do.”
Cavin agrees: “Having technical skills and academic records does not indicate that you are empathetic by default. So that’s the reason that it is very important, especially with this technology, that it’s trained and tested properly, and also verified and validated by the right amount of people. It has to be formed by a multidisciplinary and diverse group.”
However, Campbell also says that it’s not “realistic” for every company to be able to hire an ethicist to look out for bias. She says the industry cannot quickly resolve the challenges with having balanced and diverse engineering teams, because “it’s going to take years and years to recruit and train”.
“So, what do we do right now with the majority white male engineering workforce that exists today, to ensure companies avoid making biased errors that impact minority groups?” she says, adding that companies are only now realising that they need to employ more people with ethics and empathy as key skills.
Yet it is not realistic that every single AI implementation or method of automation will have access to a completely unbiased set of cases and data. One solution is for the teams developing these systems to allocate a “designated time, where you collaborate together and ensure you develop user personas and test conditions targeted at inclusion”.
Some of the questions that should be asked are: “Is this an ethical solution? Are we making sure that the test cases we’re using are broad enough to cover society as a whole?”
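What “test conditions targeted at inclusion” might look like in practice can be sketched with a hypothetical example (the validator, persona names and rules below are invented for illustration, not drawn from the panel): a form-field check that a homogeneous team might write for ASCII names only, exercised against personas chosen to represent a broader slice of society.

```python
# Hypothetical sketch of inclusion-targeted test cases: validate a
# (made-up) name field against personas the team's defaults would miss.
import re

def is_valid_name(name: str) -> bool:
    # A first draft might have been r"[A-Za-z ]+" (ASCII letters only),
    # which rejects accented and non-Latin names. This version accepts
    # any Unicode letters, joined by spaces, hyphens or apostrophes.
    return bool(re.fullmatch(r"[^\W\d_]+([ '\-][^\W\d_]+)*", name))

# Personas deliberately chosen to cover names an ASCII-only rule rejects.
inclusive_personas = [
    "Aoife Ní Bhraonáin",      # Irish, accented characters
    "José García",             # Spanish, accented characters
    "Nguyễn Thị Minh Khai",    # Vietnamese, combining diacritics
    "Saoirse O'Connor",        # apostrophe
    "Jean-Luc Picard",         # hyphenated
]

for persona in inclusive_personas:
    assert is_valid_name(persona), f"rejected: {persona}"
print("all inclusion personas pass")
```

The point is not the regex itself but the habit: the personas are written down as explicit test conditions, so a narrow assumption fails loudly instead of silently excluding users.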
Cavin says: “We have to have these conversations, like what are we trying to achieve with AI/ML systems? Are we trying to change our society patterns to make fairer decisions, to be able to bring equality and diversity within our communities? Or are we trying to represent the ecosystem where they are intended to be used?
“We have to be totally transparent and honest on the capabilities of AI systems, because a clear understanding of those limitations will enable us to solve any problems or mitigate any potential issues.”
In a recent blog post on the Leidos website, Cavin wrote: “We can’t develop technology first and think about its consequences later. It’s both together.”
She said the industry must be aware of the damage that its technology can cause. “As technologists, researchers, and scientists, sometimes we get lost in the depth of the challenge itself,” Cavin said.
“We become so focused on solving the problem that we don’t fully think through how it will be used. AI is powerful because of its potential to solve a lot of our complex problems, but when we include technology in human activities, we need to think about serious things like safety, security, reliability, and fairness.”
Cavin tells Holyrood there are three main areas where bias occurs.
First, AI systems aim to replicate human behaviour and are developed in the context of the community in which the technology will be deployed. If that community holds discriminatory attitudes towards certain sections of the population, then “AI systems will adopt those and sometimes enhance them, so we have to be quite vigilant of those issues”.
The second bias can come from the developers themselves. “When we are designing models, we are humans and we have our own bias in the way that we have been learning and living in the community as a whole. As a designer, you could insert this type of bias in the models that you are generating,” she says.
Finally, the third type of bias is “because of the data”. “AI systems are trained and tested based on data sets and if those data sets are not diverse, and don’t have the right amount of information and equality, it could bring results that are discriminatory to certain sectors, opinions or ideas,” Cavin says.

“It’s a conversation that needs to be more inclusive. I’m so pleased to know that the Scottish Government is opening up these conversations; they’re setting up an AI strategy and calling for people to be part of the forums.
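Cavin’s third source of bias can be made concrete with a toy example (the groups, numbers and the “four-fifths” selection-rate guideline referenced in the comments are illustrative, not from the article): a model that simply learns historical hire rates per group will reproduce whatever imbalance the training data contains.

```python
# Illustrative sketch: a skewed data set produces a skewed model.
from collections import defaultdict

# Hypothetical historical hiring records as (group, hired) pairs.
# Group A dominates the positive examples.
training_data = (
    [("A", True)] * 80 + [("A", False)] * 120 +
    [("B", True)] * 10 + [("B", False)] * 90
)

def fit_base_rates(records):
    """'Train' the simplest possible model: per-group hire rates."""
    hired = defaultdict(int)
    total = defaultdict(int)
    for group, was_hired in records:
        total[group] += 1
        hired[group] += was_hired
    return {g: hired[g] / total[g] for g in total}

rates = fit_base_rates(training_data)
# The model recommends candidates at the historical rate, so the
# disparity in the data becomes a disparity in its output.
disparity = rates["B"] / rates["A"]  # demographic-parity ratio

print(f"Group A rate: {rates['A']:.2f}")  # 0.40
print(f"Group B rate: {rates['B']:.2f}")  # 0.10
print(f"Parity ratio: {disparity:.2f}")   # 0.25, far below the 0.8 guideline
```

No malicious intent is needed anywhere in this pipeline; the unrepresentative data alone is enough to make the output discriminatory, which is why Cavin stresses diverse, well-balanced data sets.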
“I think that’s the right approach, because AI/ML systems are going to impact our life and impact our communities in different ways than current technologies are doing right now.”
The conversation turns to diversity in the tech sector, and how to both attract and retain women.
Campbell, who spends time travelling all over the UK in her current role, says she finds the topic of attracting a diverse workforce via workplace flexibility “so interesting right now”.
“Many years ago, flexibility was only ever spoken about at women’s events, or associated to women thinking of having children… back then, people didn’t seem to acknowledge that children have two parents,” she says. “And it’s been really exciting to see that the conversation around flexibility has changed from being aimed at women to everyone being included.
“I’ve noticed that when men get behind the movement, the movement changes.”
While the need for flexible working practices has never gone away, she says now that more men in the office are saying they want to work from home and spend time with their children, “all of a sudden, the support for it grows”.
Campbell says her current role is very flexible, but getting to that point has taken years of building trust in her abilities. Her advice for young women entering the job market is to “quickly establish trust”. “You need to prove that you are available, you need to prove that your contribution is impactful, and then people don’t care where you’re working, they just know they want you to work for them.”
Patel agrees: “It does come down to trust. And I’m in a similar boat as Lynsey, where, as a 40-year-old woman, I’ve built that up over the years.”
She adds: “There are no questions where I am, as long as I’m available and I get the job done. For some of the graduates and apprentices that I speak to, it is more about that they’re in the office every day, that they’re still proving themselves, in order to earn that trust.”
Cavin says that although she never felt discriminated against working in tech, “sometimes you need to train harder” because of the stereotypes attached to being a woman.
“In a male-dominated environment, the stereotypes are sometimes built up. But when you show that you’re good at what you do, that is passed by and then they see you as a great professional. Because of that, I would say that I’ve had great opportunities,” she says.
Cavin has worked in academic research, for heavy defence companies like Lockheed Martin, and is now in a leadership position at Leidos, which she says is “a company that’s enabling people to showcase their capabilities and their skills”.
“When I see another young woman looking at me and talking to me and saying to me, ‘you’re confident, you feel that you’re enjoying what you’re doing, I want to be like you’, you feel that responsibility as a role model. And you just want to say to them, ‘you can do that too’.
“As long as you enjoy it and you’re passionate about what you do, you can demonstrate it and you can thrive.”
Looking ahead, all three agree that in the future they want to see more diverse technology teams.
“There’s gender diversity, and then there’s diversity and inclusion on top of that. For me, [it’s about] just wanting to see more people who look like me in this sector,” Patel says.
Campbell wants to see a focus on “experiential diversity” and a move towards treating staff as human beings, without “labels”.
“What are our life experiences, what have we learned? I do believe that we’re at a juncture now where we have not solved the gender challenge, we haven’t built an equal workforce, in that respect, but what we are seeing is that we’re moving quickly beyond into things becoming more fluid.”
She also wants to see a different dialogue around technology: “Technology is not coding any longer. Technology is not just about going and working in a tech company or going and sitting in a tech team just developing algorithms. I would love in the future for technology to be a core fundamental consideration for every single career, which starts with education.”
For Cavin, the future is a question: “How can we make a technology that is more diverse, that enables better care for the environment, and better care for our way of life? There’s so much potential in AI/ML systems to solve the complex problems that we’re facing now and that are worrying us, like climate change, population, shortage of food,” she says.
“If we think how to capitalise on technology… we can have a bright future, we can have the future that we’ve all dreamed of, and we are all part of it.”