No STEM, No Stress: How to Tackle Bias in AI

Practical tips for building fair and inclusive AI models

Hello! I’m Megan, a Code First Girls ambassador and recent MSc graduate in Human Centred AI. I’m passionate about responsible AI and tackling bias so we can build fair, inclusive technology for our future.

If you’re thinking of building an AI project, whether a small side project or as part of your career, there’s one thing you can’t afford to overlook: bias.

In fact, you’ve likely experienced it without realising, whether through job applications filtered by an algorithm or recommendations that just aren’t your cup of tea. It’s something we must acknowledge and learn from so we can actively design against it.

For my master’s in Human-Centred AI, my final project was a decision support tool for maternity care, focused on ethnic disparities. Delving deep into the world of data and fairness, I found that AI bias isn’t just a tech problem: it affects real people. More on this later…

What are the different types of bias?

A simple definition of bias is when an AI system produces unfair outcomes that disadvantage certain groups. It’s not always intentional; it often creeps in quietly through data or design choices.

Common types of bias

📌 Data bias: When some groups are missing or underrepresented in the data. For example, medical datasets underrepresenting ethnic minority women, meaning tools may work less accurately for them.

📌 Sampling bias: When the training sample doesn’t reflect reality. A well-known example is CV screening tools trained mostly on male candidates, which ended up downgrading women’s applications. 

📌 Measurement bias: When the quality or accuracy of measurements varies across groups. For example, facial recognition systems that are less accurate for women or people from ethnic minority backgrounds.

All of these biases reinforce inequalities and, over time, chip away at our trust in technology.

What is fairness in an AI context?

Fairness in AI means designing and developing systems that support equitable treatment for all individuals and groups. It shouldn’t be an afterthought; it’s a crucial part of design that should be considered from the very start of your project.

Here are a few principles you could apply to any AI project:

  • Define fairness goals early. Decide what fairness means in your context. Is it equal opportunity across groups? Minimising disparities in error rates? (A short code sketch after this list shows how these goals can map to measurable metrics.)
  • Build with diverse personas in mind. In my own project, I completed research and a thematic analysis of literature to build four diverse personas. This helped keep inclusivity front and centre.
  • Involve end users in testing and feedback loops. Fairness is best evaluated with the people who will actually use or be affected by the system.
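As a hedged illustration of how those goals can be made measurable, the Fairlearn library (which I come back to below) exposes ready-made disparity metrics. The variable names here are assumptions, not code from a real project:

```python
from fairlearn.metrics import (
    demographic_parity_difference,
    equalized_odds_difference,
)

# Different fairness goals map to different formal metrics. Demographic parity
# compares selection rates across groups; equalized odds compares error rates.
# y_test, y_pred and df_test["ethnicity"] are assumed to already exist.
dpd = demographic_parity_difference(
    y_test, y_pred, sensitive_features=df_test["ethnicity"]
)
eod = equalized_odds_difference(
    y_test, y_pred, sensitive_features=df_test["ethnicity"]
)

print(f"Demographic parity difference: {dpd:.3f}")
print(f"Equalized odds difference:     {eod:.3f}")
```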

Now you know what it is – how do we actually tackle it?

Steps to reduce bias in AI

1. Audit your data. Explore it – who’s missing? Is anyone underrepresented? Overrepresented? 
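For example, a quick representation check with pandas might look something like this; the file name and column names are hypothetical:

```python
import pandas as pd

# Hypothetical dataset and columns, purely to illustrate the idea
df = pd.read_csv("maternity_records.csv")

# How is each group represented in the data?
print(df["ethnicity"].value_counts(normalize=True))

# Do outcome rates differ noticeably between groups?
print(df.groupby("ethnicity")["high_risk_label"].mean())
```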

2. Use fairness toolkits. I have personally used the Fairlearn library in Python, which can flag where a model’s performance differs across demographic groups.
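As a rough sketch (not a full recipe), Fairlearn’s MetricFrame can break a model’s performance down by group; y_test, y_pred and the sensitive feature column below are assumed to exist already:

```python
from fairlearn.metrics import MetricFrame, selection_rate
from sklearn.metrics import accuracy_score, recall_score

# Compare the model's behaviour across a sensitive feature such as ethnicity
metrics = MetricFrame(
    metrics={
        "accuracy": accuracy_score,
        "recall": recall_score,
        "selection_rate": selection_rate,
    },
    y_true=y_test,
    y_pred=y_pred,
    sensitive_features=df_test["ethnicity"],
)

print(metrics.by_group)      # performance broken down per group
print(metrics.difference())  # largest gap between groups for each metric
```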

3. Apply explainability methods. Libraries like SHAP or LIME can help spot if a model is overly reliant on sensitive features.
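For instance, a minimal SHAP sketch, assuming you already have a fitted model and a test set X_test, could look like this:

```python
import shap

# Build an explainer for the fitted model and compute per-feature contributions
explainer = shap.Explainer(model, X_test)
shap_values = explainer(X_test)

# A global view of which features drive predictions: if a sensitive feature
# (or an obvious proxy for it) dominates, that's a signal to dig deeper
shap.plots.bar(shap_values)
```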

4. Diversify your design process. If you’re designing a product or tool, include user personas with different lived experiences to challenge any blind spots.

5. Iterate and monitor! Fairness isn’t a one-off fix; models need to be revisited, tested, and updated.

Looking at a real-world example: bias in healthcare

In my MSc project, I focused on designing a decision support tool for maternity care. It took me on a deep dive into the world of bias in healthcare. 

When designing my user personas, I started by researching the systemic disparities in UK maternity outcomes. The reality is stark: according to the latest MBRRACE-UK report, Black women in the UK are more than twice as likely to die during pregnancy or shortly afterwards compared to white women.

When designing a tool that aims to alleviate these disparities, addressing bias is clearly essential.

Here’s how I approached bias in my project:

  • Diverse user personas. I built four synthetic personas from a thematic analysis of literature, representing different ethnicities, languages, and healthcare experiences. This helped me design a tool that didn’t assume a one-size-fits-all patient.
  • Fairlearn fairness library. As mentioned above, I used Fairlearn to build a ‘fairness audit’ that triggers an alert in the tool when a group is underrepresented (a rough sketch of this kind of check follows this list).
  • Explainability with SHAP. I wanted clinicians to not only see a risk score, but also why the tool made that prediction. This added transparency and trust, and revealed patterns in risk factors across different groups.
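To make the ‘fairness audit’ idea a little more concrete, here is a minimal sketch of the kind of underrepresentation check it describes. This is an illustration only, not the exact implementation from my project; the threshold and column name are placeholders:

```python
import pandas as pd

MIN_SHARE = 0.10  # illustrative threshold: flag groups below 10% of the data


def fairness_audit(df: pd.DataFrame, group_col: str = "ethnicity") -> None:
    """Print an alert if any group is underrepresented in the data."""
    group_shares = df[group_col].value_counts(normalize=True)
    underrepresented = group_shares[group_shares < MIN_SHARE]
    if not underrepresented.empty:
        print("⚠️ Fairness audit alert: underrepresented groups detected")
        print(underrepresented)
    else:
        print("No groups fall below the representation threshold.")
```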

This project was just a prototype, but it revealed a lot: not only how many people have been failed by technology, but also how, with the right tools, we can try our best to prevent those failures. Fairness and bias simply cannot be treated as secondary features; they are the foundation of responsible AI.

Conclusion

Bias isn’t just a technical problem; it’s also about values, culture, and the choices we make at every stage of design. Even if your model’s outputs seem fair or neutral, question them. Be the person on your team, or within your own project, who advocates for inclusivity.

As researcher Kate Crawford reminds us, ‘histories of discrimination can live on in digital systems, and if they go unquestioned, they become part of the logic of the AI we build’. That’s why we must design with everyone in mind, monitor continuously, and never treat fairness as a checkbox.

Every small step counts. Whether you’re auditing your dataset, adding diverse personas, or simply asking the hard questions, you are helping create AI that’s fair to everyone, and that’s the tech that’s worth building!
