Acknowledging Biases: A Call for Ethical and Inclusive AI Development and Applications
- Alina MacDonald
- Nov 14, 2024
- 5 min read
Updated: Jun 26

Story Time
One of the most formative experiences of my life happened in 2011, during a lecture at York University, where I was studying Environmental Studies. At that time, my goal was to become an architect specializing in sustainable development. I wanted a post-secondary education in the humanities, centred on environmental issues, to balance my foundational knowledge in the sciences so I could make better decisions toward a more equitable and sustainable world.
One day, during a lecture in a course I was enrolled in – Taking Action: Engaging People in our Environment – our professor asked us to take 30 minutes to author an exhaustive list of all of our personal biases.
Everyone started enthusiastically, but after only about 5-10 minutes most people felt they were done, and the class of 500 started to become restless. That's when our professor told us all to pause and flip our papers over, and he delved into his lecture about biases: the difficulties associated with defining them, the importance of understanding them in the context of the world as well as the self, and the dangers of overlooking them, particularly when it comes to contributing to constructive conversations about solving problems for others.
It was enlightening, thought-provoking – and, as a white woman descended from settlers who grew up middle class in Canada, oftentimes quite uncomfortable.
After the lecture, we were asked to return to the exercise and continue writing down as many personal biases as we could. The room fell silent again as everyone scribbled away with this newfound understanding of personal biases, and many stayed after the time allotted (45 minutes) to complete their thoughts.
I'll always remember that day as one where many of us grew up a bit. The air in the room seemed to change, and we started to interact with one another differently. Perhaps memory exaggerates the turning point, but afterwards I remember noticeably more not just empathy but compassion offered amongst the students, and discussions in tutorials became better informed and less contentious. As a group, we had become a little more mature; in better understanding ourselves, we seemed better able to understand each other and, in doing so, create better solutions together.
As a designer of over 10 years, AI is far from a new topic for me – I've been using it to support the completion of mundane tasks for many years. What is new, however, is the breadth of tasks it's now able to help people complete, which means it's more helpful for more types of roles, particularly less technical ones.
For the last couple of years I've been quietly observing and purposefully holding off on forming a strong opinion on the latest iterations and applications. I've been waiting to see how people without high levels of technical skill are utilizing and talking about the technology, as well as how the technology itself is changing based on this new group's adoption.
AI: An Amplification of Our Biases
That lesson still resonates deeply with me today as I observe the rapid evolution of artificial intelligence. As a designer with over a decade of experience, I've watched AI transform from a specialized tool into a technology that touches nearly every profession.
While its capabilities are impressive, two things concern me:
The Tech Itself – Invisible Algorithm Creators: The biases of those creating AI algorithms are baked into the technology, yet we can't trace or understand these biases as we could with human creators. This opacity creates a new challenge for accountability and fairness.
Popular Applications – The Diversity Deficit: When AI replaces human collaboration, we lose the natural diversity of perspectives that comes from different people working together. As IBM states, "biased results due to human biases that skew the original training data or AI algorithm [lead] to distorted outputs and potentially harmful outcomes."
Real-World Implications
The consequences of biased AI have been visible, and affecting us all, for a long time.
In 2016, Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner researched and wrote Machine Bias, which showed that:
- "The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants."
- "White defendants were mislabeled as low risk more often than black defendants."
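The disparity ProPublica measured can be made concrete with a small sketch: the false positive rate – people flagged as high risk who did not go on to reoffend – computed separately for each group. The data below is entirely made up for illustration; it is not the COMPAS dataset.

```python
def false_positive_rate(records):
    """records: list of (predicted_high_risk, reoffended) boolean pairs."""
    # People who did not reoffend...
    negatives = [r for r in records if not r[1]]
    # ...but were nonetheless flagged as high risk
    false_flags = [r for r in negatives if r[0]]
    return len(false_flags) / len(negatives)

# Hypothetical predictions keyed by group (illustrative only)
data = {
    "group_a": [(True, False), (True, False), (False, False), (True, True)],
    "group_b": [(True, False), (False, False), (False, False), (True, True)],
}

for group, records in data.items():
    print(group, round(false_positive_rate(records), 2))
```

In this toy example, group_a is falsely flagged at twice the rate of group_b even though both contain the same number of actual reoffenders – the same shape of disparity the article describes, visible only once the metric is broken out by group.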
In 2021, in a paper entitled On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?, the authors found that "...there are several factors which narrow Internet participation, the discussions which will be included via the crawling methodology, and finally the texts likely to be contained after the crawled data are filtered. In all cases, the voices of people most likely to hew to a hegemonic viewpoint are also more likely to be retained."
In 2023, Marie Lamensch wrote Generative AI Tools Are Perpetuating Harmful Gender Stereotypes, which explores the oftentimes racist, sexist and hypersexualized outputs of many image generators, and how their use can be harmful to society.
A 2023 Harvard Medical School study, Demographic Representation in 3 Leading Artificial Intelligence Text-to-Image Generators, found that AI-generated images of physicians were predominantly white (82%) and male (93%), far exceeding the actual demographics of U.S. physicians (63% white, 62% male).
These biases don't just affect the individuals directly impacted; they can have far-reaching societal implications. Particularly because most people understand so little about this technology, its influence more often than not goes unchecked or misattributed.
The Path Forward: Ethical and Inclusive AI
Addressing algorithmic bias is a complex issue that demands a comprehensive strategy.
It starts with diverse and inclusive teams of AI developers, bringing together a variety of perspectives and real-life experiences to the table. By involving underrepresented voices in the creation and implementation of AI systems, we can more effectively recognize and alleviate potential biases.
Data accuracy and inclusivity are also paramount. The impartiality of AI models is contingent on the quality of the data used for their training. Developers must meticulously review their datasets to ensure they mirror the diversity of the communities they intend to assist; this might require actively seeking out and integrating data from marginalized groups, not just once but on an ongoing basis. Systems need to be rigorously assessed for biases throughout the development process, utilizing diverse evaluation criteria and testing methodologies, if we want to lessen the amplification of bias. Transparent and responsible procedures for recognizing and rectifying biases should be established and shared.
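One concrete form such a dataset review can take is comparing group representation in the training data against the population the system is meant to serve. The sketch below is a minimal, assumed example – the group labels, reference shares, and the idea of a single "gap" number are all simplifications for illustration, not a complete auditing methodology.

```python
from collections import Counter

def representation_gap(samples, reference):
    """Compare group shares in a dataset against reference population shares.

    samples:   list of group labels found in the dataset
    reference: dict mapping group -> expected share of the population
    Returns a dict mapping group -> (dataset share - reference share);
    large negative values flag underrepresented groups.
    """
    counts = Counter(samples)
    total = len(samples)
    return {g: counts.get(g, 0) / total - share for g, share in reference.items()}

# Hypothetical labels and reference shares, purely for illustration
dataset = ["a"] * 80 + ["b"] * 15 + ["c"] * 5
reference = {"a": 0.5, "b": 0.3, "c": 0.2}
gaps = representation_gap(dataset, reference)
```

A check like this is cheap to run every time the training data changes, which is one way to make bias assessment part of the development process rather than a one-off exercise.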
Although contemplating the technologies and applications of modern AI can be overwhelming, it's worth noting that numerous individuals and organizations globally are already keenly aware of the potential challenges and are diligently striving to ensure that the advancement and adoption of this technology are conducted as socially responsibly and sustainably as possible.
Ultimately, policymakers will establish the regulatory frameworks that will oversee the implementation of AI technologies and restrict applications as necessary. Through close collaboration with developers and engagement with the wider public, they can ensure that AI is utilized for the common good, rather than for the benefit of a privileged few.
Nevertheless, it is crucial that the general public is also equipped with the knowledge and awareness to interact meaningfully with AI technologies. This requires a concerted effort to enhance the accessibility and inclusivity of education, guaranteeing that everyone has the chance to comprehend and influence the future of AI, which already impacts all of us.


