Artificial Intelligence (AI) is one of the defining technologies of the fourth industrial revolution (4IR). AI systems are becoming smarter every day, identifying tumours in medical scans more accurately than human radiologists, devising new engineering solutions for infrastructure and spotting patterns in vast data sets that no human could sift through. Such is its potential that some rank AI alongside other “general purpose technologies” such as the steam engine or electricity.
But despite all the world-changing benefits AI affords, some negative elements are beginning to emerge.
Because AI systems are built on the idea of analysing data to spot trends, anomalies and solutions, their answers tend to reflect the biases of the data used to train them. As a result, we are building the foundations of a transformative technology on shaky ground.
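The mechanism is easy to demonstrate. Below is a deliberately minimal sketch, using invented, hypothetical hiring records, of how even the simplest “model” trained on a skewed history reproduces that skew in its recommendations; real systems are far more complex, but the principle is the same.

```python
from collections import Counter

# Hypothetical, deliberately skewed "historical hiring" records:
# each entry is (gender, hired). The imbalance is the point --
# 80% of the men in this toy history were hired, versus 20% of the women.
history = (
    [("male", True)] * 80 + [("male", False)] * 20 +
    [("female", True)] * 20 + [("female", False)] * 80
)

def train(records):
    """'Learn' a hire rate per gender by simple counting -- a stand-in
    for the statistical patterns a real model absorbs from its data."""
    hired = Counter(gender for gender, was_hired in records if was_hired)
    total = Counter(gender for gender, _ in records)
    return {gender: hired[gender] / total[gender] for gender in total}

def recommend(model, gender, threshold=0.5):
    """Recommend an interview if the learned hire rate clears the threshold."""
    return model[gender] >= threshold

model = train(history)
print(model)                        # {'male': 0.8, 'female': 0.2}
print(recommend(model, "male"))     # True
print(recommend(model, "female"))   # False -- bias in, bias out
```

Nothing in the code is malicious; the discrimination comes entirely from the data it was given, which is precisely why skewed training sets are so dangerous.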
AI developers are predominantly male. A review published by the AI Now Institute found that fewer than 20% of the researchers applying to prestigious AI conferences are women, and that only a quarter of undergraduates studying AI at Stanford and the University of California, Berkeley are female. This matters because research on collective decision-making and creativity shows that cognitively diverse groups tend to make better decisions, so the current lack of gender diversity in the community developing AI systems is itself a source of risk. To address the risk of bias, then, we must increase the diversity of the teams involved in AI development.
Rather than improving the world by stripping out historical gender biases, research has found that machine learning is further entrenching discrimination, with potentially harmful consequences. As Caroline Criado Perez, author of ‘Invisible Women: Exposing Data Bias in a World Designed for Men’, explains: “fallibility of AI is based on ‘garbage in, gender bias out’.” She says: “You would hope that artificial intelligence would improve things. But unfortunately, humans are the ones making the algorithms, and humans are the ones feeding those algorithms data: we are creating biased algorithms based on biased data.”
AI is increasingly used in medical diagnosis, so if AI is trained on biased data sets, misdiagnosis becomes more likely, putting lives at risk. Perez notes that medical research is already systematically skewed towards men: female heart attacks are routinely misdiagnosed because the “typical” heart attack symptoms are typical only of men, leaving women dangerously exposed. AI failures due to bias are cropping up across a multitude of sectors. An Amazon recruitment system, for example, was shown to discriminate against job applicants with female names due to bias within the system.
There is no easy fix to eradicate bias in AI research. System-wide changes aimed at creating safe and inclusive spaces that support and promote researchers from underrepresented groups, a shift in attitudes and cultures in research and industry, and better communication of the transformative potential of AI could all play a part. For example, organisations and the education sector need to understand and counter both pervasive gender stereotypes and teaching environments that undermine the confidence of girls more than boys. Diverse role models who use AI to make a positive difference to global prosperity must also be given far greater visibility.
Government-level policy interventions could also help. In the UK, for example, the government provided a £13.5m investment to boost diversity in AI roles through new conversion degree courses. Yet whilst this will go some way towards improving the situation, broader-scale interventions are needed to create better links between the arts, humanities and AI, changing the image of who can work in AI.
In addition to mirroring bias, machines are replicating our value systems to shape their own behaviour. This has led to an alarming trend: as AI gathers data from the internet to ‘learn’ how we interact, it is exposed to the raft of negativity and violence present online and is incorporating it into its own decision-making.
This creates an urgency to reassess not only the ethical, social, cultural and technological impacts and risks of AI, but also the “value system” imparted to these machines. As Mo Gawdat, former Chief Business Officer at Google [X] and founder of the #onebillionhappy movement, which seeks to prioritise happiness and so draw a safe path for AI’s development, explains: “Just like an 18-month-old infant, machines are learning by observing a modern world full of illusions, greed, obsessions, ego and disregard for other species. We need to fill the world with compassion and kindness if this is what we want to pass on to future generations.”
In essence, Gawdat believes that we need to act now to shape AI and ensure its positive contribution in the future, because once these machines are smarter than us in all domains – which is predicted to happen within the next 15 years – it may be too late for us to contain or control how they behave.
In a speech on his website, he clarifies this point, saying: “It is fair to imagine that AI might be the last technology we humans invent, because our AI newborn infants, the computers that we’re teaching now, once they are smart enough, will solve the next problem on our behalf. Give them a problem like global warming and ask them to solve it. They will see a much larger data set, much larger set of opportunities, they can mix physics, maths, biology in so many different ways that we may not be able to comprehend as a single human brain. But we need to make sure that those machines work on our side.”
Artificial Intelligence is transforming every industry and every niche of the modern business world. It is our responsibility to teach machines and systems how to behave correctly, but to do so we need to treat each other fairly. The consequences of teaching them poor attitudes and values are dire; it is therefore worth applying real effort to transforming society for the better.