Why AI needs diversity – and fast
It’s easy to assume that the decisions produced by an algorithm would be neutral. If an uninvested, objective machine decides which applicant to hire or who is guilty in a criminal trial, the outcome feels as though it should be more accurate than one reached by a biased and subjective human. It seems, however, that this is not the case: Artificial Intelligence (AI) is inheriting the biases of its human creators.
There are numerous examples of AI reflecting the discrimination prevalent in society. There’s the Microsoft AI bot that started spouting racial hatred after learning from Twitter users, or the Google Photos application that labelled black people as gorillas. Facial recognition services have a 1% error rate for light-skinned men, compared to 35% for dark-skinned women. Clearly, AI is fast becoming part of the problem when it comes to structural inequality, rather than the solution.
AI makes decisions based on algorithms, which are constantly fine-tuned according to the data they are fed. As we live in a structurally racist and sexist society, it is not surprising that the data machine learning algorithms have access to reflects these implicit biases too. This is not helped by the fact that the technology industry is overwhelmingly white and male: more than 80% of AI professors are men, and just 2.5% of Google’s workforce is black. The lack of diversity among those building the systems means their algorithms inherit implicit biases from the information they are presented with.
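This mechanism is easy to demonstrate. Below is a minimal sketch, using entirely hypothetical numbers, of a naive model "trained" on historically biased hiring records: because the records under-hire one group, the model learns to do the same.

```python
# Toy illustration (hypothetical data) of a model inheriting bias
# from historical decisions it is trained on.

# Hypothetical historical hiring decisions: (group, hired)
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 30 + [("B", False)] * 70

# "Training": learn the historical hire rate for each group.
hire_rate = {}
for group in ("A", "B"):
    decisions = [hired for g, hired in history if g == group]
    hire_rate[group] = sum(decisions) / len(decisions)

# "Prediction": a naive model that mimics the majority historical outcome.
def predict(group):
    return hire_rate[group] > 0.5

print(hire_rate)      # {'A': 0.8, 'B': 0.3}
print(predict("A"))   # True
print(predict("B"))   # False
```

No one programmed the model to discriminate; it simply reproduced the pattern in its training data. Real machine learning systems are far more sophisticated, but the underlying failure mode is the same.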
The evident solution is to increase diversity in the technology industry. Hiring diversely is incredibly important for this: tech firms need to keep the value of a workforce spanning multiple races, genders, ethnicities and backgrounds in mind when developing their hiring strategies. With more kinds of people offering different perspectives, not only does the data feeding these programs become more representative, but those who face discrimination are also best placed to ensure AI doesn’t perpetuate prejudice. Giving people with a variety of lived experiences seats at the table is a powerful way to produce software that better reflects society.
However, diverse hiring alone isn’t enough to correct the bias in AI. The solution must also include improved education and structural changes that allow a more diverse group of people to enter STEM fields. This can be done in various ways, from ensuring young children have role models in technology who look like and represent them, to teaching basic technology skills in early education. Ridding the technology industry of its race and gender biases requires a diverse cohort of qualified applicants, which can only be achieved by increasing access to technical skills from the ground up.
As AI becomes more and more integrated into society, it is crucial that this power is not just held by a privileged few. Diversifying the technology industry is essential in ensuring that machine learning is used as a positive and egalitarian force in society, rather than perpetuating existing inequalities.