From 16 May to 26 August 2019, the Barbican is hosting an exploration and celebration of artificial intelligence.
Facial recognition technology has multiple flaws. One of the biggest: it’s not very good at recognising faces.
It’s easy to assume that the decisions produced by an algorithm would be neutral. By letting an uninvested, objective machine decide which applicant to hire, or who is guilty in a criminal trial, it feels as though the outcome should be more accurate than one reached by a biased, subjective human. It seems, however, that this is not the case: Artificial Intelligence (AI) is inheriting the biases of its human creators.
NASA is shining a light on the once hidden figures who powered its missions, and it’s opening its doors even wider to let in more light (and people).
The development of new technology means that AI is being used to counter bias in the US criminal justice system. But what about human accountability? Should the criminal justice system take responsibility for individual and institutional prejudice or hide implicit bias behind technology?
Machine learning and carbon capture and storage (CCS) have the potential to transform Earth’s ecosystem – but both must be adopted cautiously.