Artificial Intelligence (AI) has become seamlessly integrated into our everyday lives, a constant presence we’ve quickly adapted to.
From hiring practices to healthcare and even dating apps, algorithmic decision-making has quietly woven itself into the fabric of modern life, making AI an invisible yet powerful influence on our routines.
But as AI’s prominence in our daily lives grows, a disturbing question arises: Is AI the puppet or the puppeteer?
The question is increasingly important: users can’t assume that algorithmic decisions are neutral, and the answer only becomes more blurred as the technology advances.
Algorithmic biases — arising from inequities in the data that models are trained on — can perpetuate and even amplify the systemic inequalities experienced by marginalised people.
For example, a 2019 NIST study found that facial recognition systems misidentify Black and Asian faces 10 to 100 times more often than white faces.
AI ethicist and founder of Justice AI, Christian Ortiz, remarks on his LinkedIn: “Artificial Intelligence is not artificial. AI merely inherits biases and, without careful revision, embeds them into our future.”
What are algorithmic biases in AI?
The phrase “rubbish in, rubbish out” is often applied to data and analytics: algorithmic bias occurs when an AI model produces skewed results because of imbalanced training data or flawed design.
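To make that point concrete, here is a minimal, purely illustrative sketch in Python (synthetic data and scikit-learn; it does not reproduce any system mentioned in this article). A model trained on data dominated by one group scores noticeably worse on the group it rarely saw.

```python
# Illustrative only: synthetic data, not any real system discussed here.
# It shows how underrepresentation in training data can skew error rates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Two features whose relationship to the label differs slightly by group."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > shift).astype(int)
    return X, y

# Group A dominates the training set; group B is barely represented.
Xa, ya = make_group(5000, shift=0.0)
Xb, yb = make_group(100, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on balanced, held-out samples from each group:
# accuracy for group B typically lags well behind group A.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    X_test, y_test = make_group(2000, shift)
    print(name, "accuracy:", round(model.score(X_test, y_test), 3))
```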
At London’s Black Tech Fest 2024, Yetunde Dada, senior director of product management at QuantumBlack, AI by McKinsey, shared her insights on the real-world impacts of biased AI.
“We’re talking about decision-making being made at the highest level, whether or not you get a job, or a loan, or are considered for specific decisions. What we’re doing now is automating human decisions,” she said.
“One example is misidentification. This might occur when a data set is missing information about minoritised groups and, therefore, doesn’t know how to make decisions properly.
“For example, one major tech company created an app that would scan your skin for different skin conditions and give you a probability percentage of what’s happening — but the app’s data set did not include black skin.
“Immediately, the model becomes highly inaccurate when looking at black skin tones,” she explained.
Dada adds that the app had used historical medical records, so it wasn’t the developers’ fault that the data didn’t include black skin. However, she argues it is their responsibility to fix it.
Dada also mentions misrepresentation, referencing her personal experience using a dating app that rotates your profile photos and tells you which gets you the most likes.
“The photo it picked as my most popular was the only one I had with a friend who just happened to be white. Maybe it was the most interesting profile photo, I don’t know. I have to consider different reasons before jumping to any conclusions.
“But I didn’t expect the software to crop me out of the picture, even though the app knows what I look like.
“When designing software, you think about test cases and edge cases from the beginning. Why are we edge cases in this discussion? Why do we have to find bugs in the system? Because it wasn’t designed for us.”
When businesses get it wrong
In 2018, it emerged that Amazon had scrapped an experimental tool built to streamline recruitment by automatically evaluating CVs. The tool had developed a bias against women because it was trained on ten years of CVs submitted predominantly by men in the heavily male-dominated tech industry.
Algorithmic biases can have serious professional consequences for minoritised individuals, such as marketing algorithms that misinterpret cultural nuances or financial models that penalise historically underserved communities.
In a conversation with TI, Hemant Patel, CEO of data and analytics consultancy Anumana, described how financial services often reinforce socioeconomic biases.
“In my experience, you can see the issues with how machine learning and AI are utilised by the inherent bias within the customer base of financial services institutions,” he said, suggesting data about location or behaviour can create self-fulfilling prophecies, trapping communities in cycles of disadvantage.
“It’s almost a chicken-and-egg situation. Are these algorithms predicting that communities will default on their loans accurately, or is it because societal factors have led to a situation where those communities can’t break out of the norms of historical decision-making?”
Patel highlights the systemic challenges AI systems face when trained on historically biased data — a challenge echoed in marketing, where biased algorithms risk alienating entire demographics.
These biases are more than ethical missteps for enterprises, says Kiki Punian, founder of accessible AI marketing software ItoA.
“Bias in AI can significantly impact customer loyalty and brand reputation and ultimately compromise long-term business viability,” she says.
“When AI algorithms produce biased results, they risk damaging the connection with audiences, particularly with Gen Z and Baby Boomers, who increasingly prioritise diversity and inclusion in the brands they support.”
A 2022 Deloitte report found that 57% of Gen Z and Millennials prefer to buy from brands that align with their social values, including diversity and inclusion.
Why bother decolonising the algorithm?
The only way to make AI impartial is to actively dismantle systemic biases by questioning data assumptions, embedding inclusivity in design, and prioritising diverse representation.
Sandra Masiliso, global DEI lead at DEPT, an agency specialising in AI adoption, says: “It’s about acknowledging the inherent biases present within current data sets, which often fail to represent the diversity of our global communities.”
She adds: “Decolonising algorithms is about first recognising this imbalance and then actively working to reshape it. It means prioritising high-quality, representative data collection before advancing AI development.
“To make this meaningful, we must involve individuals from these communities within AI and data teams, ensuring a diversity of lived experiences is embedded in the very foundation of algorithm development.”
She proposes a novel approach to diversifying data: “Imagine a framework similar to nutritional labels on food items. A ‘data transparency label’ would detail the quality, sources, and any known biases in the data.
“It’s an approach where policy can play a proactive role, setting clear standards for data accountability.”
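There is no standard format for such a label yet. As a purely hypothetical sketch, it could be as simple as a small structured record published alongside a dataset; the field names and example values below are illustrative assumptions, not an established schema.

```python
# Hypothetical sketch of a "data transparency label"; not an existing standard.
from dataclasses import dataclass, asdict
import json

@dataclass
class DataTransparencyLabel:
    dataset_name: str
    sources: list[str]                      # where the records came from
    collection_period: str                  # e.g. "2010-2020"
    demographic_coverage: dict[str, float]  # share of records per group
    known_gaps_and_biases: list[str]        # documented limitations
    last_audited: str

label = DataTransparencyLabel(
    dataset_name="dermatology-images-v2",
    sources=["hospital archive", "public research sets"],
    collection_period="2010-2020",
    demographic_coverage={"Fitzpatrick I-III": 0.92, "Fitzpatrick IV-VI": 0.08},
    known_gaps_and_biases=["darker skin tones severely underrepresented"],
    last_audited="2024-10-01",
)

# Published alongside the dataset, much like a nutrition label on packaging.
print(json.dumps(asdict(label), indent=2))
```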
Yetunde Dada adds that the case for decolonising AI extends beyond risk mitigation, reframing the narrative as an opportunity to innovate and create.
She highlighted the untapped potential of serving underrepresented groups by drawing parallels to Rihanna’s Fenty Beauty creating 43 different shades of makeup.
“She said, ‘I’m going to serve a group that has historically been left out of the conversation.’ And she completely changed the game. Inclusivity drives innovation, and the same applies to AI.”
Dada explains that inclusivity is not just ethical; it’s profitable, citing McKinsey research that found the most ethnically diverse companies are 36% more likely to outperform their peers on profitability.
Ultimately, decolonising algorithms is not only a moral imperative but a financial one that businesses shouldn’t ignore.
How can algorithms be decolonised?
Decolonising algorithms shouldn’t be a checklist activity. It is a continuous, multifaceted effort that requires businesses to address biases at every stage — from the data used to train models to the teams building them.
Peter Wood, CTO at Web3, crypto, and blockchain recruitment agency Spectrum Search, echoed the importance of varied data sources: “At Spectrum Search, we’re careful to bring in information from all sorts of backgrounds — cultural, geographic, and economic — so all voices are heard at the table.”
Speaking to his own experience, Wood explains: “During the early testing of an AI model we’d developed for recruitment, we noticed a subtle trend in which candidates from less conventional educational backgrounds were ranked lower for technical roles, despite their high skill levels and experience.
“This experience completely changed my approach to AI deployment. Now, I insist on multi-layered testing phases to catch potential biases before scaling. We use diverse datasets, scrutinise the weighting given to particular attributes, and continuously monitor for patterns.”
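As a rough illustration of the kind of monitoring Wood describes (not Spectrum Search’s actual pipeline; the candidates, groups, and threshold below are hypothetical), one simple audit step is to compare shortlisting rates across candidate groups after each model run and flag large gaps for investigation.

```python
# Hypothetical bias-monitoring check for a recruitment model's shortlist.
from collections import defaultdict

def selection_rates(candidates, shortlisted_ids, group_key="education_background"):
    """Share of candidates shortlisted, broken down by group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for c in candidates:
        totals[c[group_key]] += 1
        if c["id"] in shortlisted_ids:
            selected[c[group_key]] += 1
    return {g: selected[g] / totals[g] for g in totals}

def flag_disparities(rates, threshold=0.8):
    """Flag groups whose rate falls below `threshold` times the best group's
    rate (a 'four-fifths'-style heuristic, used here only as an example)."""
    best = max(rates.values())
    return [g for g, r in rates.items() if best > 0 and r < threshold * best]

candidates = [
    {"id": 1, "education_background": "university"},
    {"id": 2, "education_background": "university"},
    {"id": 3, "education_background": "bootcamp"},
    {"id": 4, "education_background": "bootcamp"},
    {"id": 5, "education_background": "self-taught"},
]
rates = selection_rates(candidates, shortlisted_ids={1, 2, 3})
print(rates)                    # {'university': 1.0, 'bootcamp': 0.5, 'self-taught': 0.0}
print(flag_disparities(rates))  # groups to investigate before scaling the model
```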
Wood’s approach underscores the importance of active vigilance in AI development — a principle that Vaibhav Puranik, EVP of engineering at AI-driven contextual advertising firm GumGum, believes should be embedded in the broader frameworks governing AI.
“Innovative technological advancements should not be made at a cost to any part of society,” said Puranik. “Policies should ensure that AI acknowledges historical context, addressing past inequities while promoting progress.”
“They should ensure that AI systems are designed with an understanding of real-world biases and prevent models from replicating or amplifying these flaws. They should require clear explanations of AI decisions, allowing stakeholders to assess potential biases.”
Decolonisation in action
While the challenges of algorithmic bias are significant, businesses are stepping up with tangible solutions. Across industries, innovative efforts to decolonise algorithms are setting new benchmarks for inclusivity and fairness.
Technological innovation plays a critical role in mitigating bias. Explainable AI (XAI) and Federated Learning are two approaches gaining traction, as Kiki Punian explains.
“Explainable AI increases transparency by making the decision-making process more understandable to humans, especially in critical fields like healthcare.
“Federated Learning builds models across decentralised data, meaning models can learn from diverse sources without centralising sensitive data or compromising privacy.”
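As a rough sketch of the federated idea, the toy example below (an illustrative FedAvg-style loop in plain NumPy, not a description of any vendor’s implementation) has each “client” fit a model on its own private data, with only the resulting weights averaged centrally.

```python
# Minimal federated-averaging sketch: raw data never leaves each client.
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])

def local_data(n, noise):
    """Synthetic private dataset held by one client."""
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=noise, size=n)
    return X, y

def local_update(w, X, y, lr=0.1, steps=20):
    """A few gradient steps of linear regression on one client's data."""
    for _ in range(steps):
        w = w - lr * (2 * X.T @ (X @ w - y) / len(y))
    return w

clients = [local_data(50, 0.1), local_data(200, 0.3), local_data(30, 0.2)]
global_w = np.zeros(2)

for _ in range(10):
    # Each client refines the global model locally, then shares only weights.
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    # The server averages the weights, weighted by each client's data size.
    sizes = [len(y) for _, y in clients]
    global_w = np.average(local_ws, axis=0, weights=sizes)

print("learned weights:", np.round(global_w, 2))  # approaches [2.0, -1.0]
```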
Krishna Sai, senior VP of technology and engineering at SolarWinds, shared how his company equips its systems with feedback and validation tools to capture and address negative user experiences.
“Human agents review and refine AI-generated responses before sending them to the end user. Any anomalies detected are documented to drive further refinements,” he notes, emphasising the importance of human oversight in ensuring fair outcomes.
At Kinhub, CEO Erika Brodnock’s team focuses on reducing cultural biases by building datasets beyond dominant cultural norms.
“We regularly audit our algorithms to identify and mitigate potential cultural biases,” she explained.
“We maintain Human-in-the-Loop oversight of our AI systems, particularly in critical decision-making processes. This ensures that human judgment and ethical considerations are always factored in, especially when cultural nuances may be involved.”
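In code, a human-in-the-loop gate can be as simple as routing low-confidence or culturally sensitive outputs to a reviewer before anything reaches the end user. The sketch below is hypothetical (the threshold and topic list are made up for illustration) and does not describe Kinhub’s or SolarWinds’ actual systems.

```python
# Hypothetical human-in-the-loop gate for AI-generated responses.
def route_response(response, confidence, topics, sensitive_topics, threshold=0.85):
    """Send the response only if confidence is high and no sensitive topic is touched."""
    needs_review = confidence < threshold or any(t in sensitive_topics for t in topics)
    if needs_review:
        return {"status": "queued_for_human_review", "draft": response}
    return {"status": "sent", "response": response}

result = route_response(
    response="Suggested wellbeing plan ...",
    confidence=0.72,
    topics=["family structure"],
    sensitive_topics={"religion", "family structure", "dietary practice"},
)
print(result["status"])  # queued_for_human_review
```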
Brodnock also emphasised the importance of intentional recruitment.
“We’ve implemented blind recruitment processes and actively seek candidates from underrepresented groups,” she says.
To that point, Hemant Patel highlighted the systemic barriers preventing underprivileged individuals from entering tech: “Only 8% of individuals in technical roles come from underprivileged backgrounds, compared to around 40-50% of the population.
“That diversity issue is only growing because of the barriers to getting into technology.”
To address this gap, Anumana launched the Anumana Code Academy, a not-for-profit program teaching Python to disadvantaged youth.
“We aim to spark interest in technology careers at an early age and open up opportunities for underrepresented communities,” Patel explains.