
Recognizing Algorithmic Biases
Transparency, accountability, and reliability are the hallmarks of trustworthy AI. By keeping inclusivity and equity at the heart of AI standards development, organizations can address the implicit biases in their algorithms.
It took foresight and wise judgement to see that “AI is the new electricity”, and fortunately, British-American computer scientist Andrew Ng had both.
Today's artificial intelligence (AI) tools power innumerable processes across industries. Over the years, AI has helped organizations large and small build thriving ecosystems that clear tedious tasks out of people's way, giving them room to innovate, grow and reach greater heights.
This emerging symbiosis between AI and humankind is a near-utopian manifestation of the partnership between human and machine. Facets of AI such as machine learning (ML), natural language processing (NLP) and deep learning (DL) have made their way into core organizational functions, including auditing, customer service, operations management and, more recently, talent acquisition.
AI-powered automation has taken business processes by storm. Thanks to rapid advances in technology, process automation has become affordable and widely accessible.
Artificial intelligence tools have earned a reputation as groundbreaking technologies that will shape the future of our economy. However, it is imperative to note that AI can reflect the idiosyncratic biases of the humans behind it.

Professional networking platform LinkedIn recently discovered its AI recommendation algorithm was producing biased results while matching candidates with opportunities. (Image: Unsplash)
Today, many of the process automation tools we use exhibit biases passed down from humans and duly inherited by the tools themselves. This is broadly referred to as ‘algorithmic bias’.
Biases are instilled in artificial intelligence tools during the training process. When a model is trained on a biased data set, those biases can manifest in its decision-making, compelling the model to reinforce and perpetuate inequities historically exhibited by its human counterparts.
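To make that mechanism concrete, here is a minimal, hypothetical sketch. The data set, feature names and numbers are invented for illustration only; it simply shows how a simple model trained on skewed historical hiring decisions ends up penalizing a group-related attribute.

```python
# Illustrative sketch only: a toy "hiring" data set in which past decisions
# were biased against one group. All features and values are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

experience = rng.normal(5, 2, n)   # years of experience (the merit signal)
group = rng.integers(0, 2, n)      # 0 = group A, 1 = group B (protected attribute)

# Historical labels: hires were driven by experience, but group B candidates
# were systematically less likely to be hired, irrespective of merit.
hired = (experience + rng.normal(0, 1, n) - 1.5 * group) > 5

X = np.column_stack([experience, group])
model = LogisticRegression().fit(X, hired)

print("weight on experience:", model.coef_[0][0])
print("weight on group membership:", model.coef_[0][1])  # strongly negative: the bias has been learned
```

Nothing in the code asks the model to discriminate; the skew in the historical labels is enough for it to learn that group membership predicts the outcome.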
When biased artificial intelligence solutions are deployed in areas such as recruitment and hiring, they pose a substantial risk of taking actions that are systematically unfair to marginalized groups of people.
In October 2018, Reuters reported that engineers at the e-commerce giant Amazon had unearthed a major problem with their machine-learning-powered recruiting engine: it predominantly preferred male candidates over female ones.
According to the Reuters report, the algorithm was biased in its reasoning insofar as it penalized any résumé containing the word ‘women’s’ or related terms. As in most instances of algorithmic bias, this was unintended: the algorithm had simply inherited the bias from the very data set that shaped the recruiting engine.
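One common, if coarse, way to surface this kind of problem before deployment is to compare a model's selection rates across groups. The sketch below is purely illustrative; the data and column names are assumptions rather than any vendor's real audit code, and it applies the widely used ‘four-fifths’ rule of thumb as a first-pass check.

```python
# Illustrative bias audit: compare selection rates across groups and compute
# the disparate-impact ratio. The data frame and column names are hypothetical.
import pandas as pd

predictions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [ 1,   1,   1,   0,   1,   0,   0,   0 ],
})

rates = predictions.groupby("group")["selected"].mean()
ratio = rates.min() / rates.max()   # disparate-impact ratio

print(rates)
print(f"Disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:                     # the 'four-fifths' rule of thumb
    print("Warning: selection rates differ enough to warrant investigation.")
```

A check like this does not prove a model is fair, but a ratio well below 0.8 is a strong signal that the training data and the model's decisions deserve closer scrutiny.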

Gartner has predicted that 85% of AI projects will deliver erroneous results due to bias in data through 2022. (Image: Markus Spiske on Unsplash)
The AI revolution is as imminent as tomorrow’s sunrise. Artificial intelligence is projected to add $13 trillion to the global economy over the next decade—with the technology becoming as ubiquitous as electricity in the process.
As AI becomes ever more present in the organizational fabric, it is important to introduce safeguards against algorithmic bias. Only through transparency can an organization ensure accountability, and only full accountability can deliver reliability.
Recognizing the pressing need for transparency in algorithmic decision-making, the UK government last week announced a new algorithmic transparency standard, which complements the commitments made in the National AI Strategy.
As humans, we let biases slip unconsciously into our reasoning, but algorithms must be held to a different standard if they are to achieve the moral and ethical soundness the world is striving for. To move forward as a society, we must dispel the biases we have subconsciously inherited, for our own sake and for the tools that will eventually help shape our world.
After all, if the reasoning of artificial intelligence remains clouded by the same bias that shrouds human vision, then are we getting better or… just faster?