
Practical tips for building fair and inclusive AI models
Hello! I’m Megan, a Code First Girls ambassador and recent MSc graduate in Human Centred AI. I’m passionate about responsible AI and tackling bias so we can build fair, inclusive technology for our future.
If you’re thinking of building an AI project, whether a small side project or as part of your career, there’s one thing you can’t afford to overlook: bias.
In fact, you’ve likely experienced it without realising, whether through job applications filtered by an algorithm or recommendations that just aren’t your cup of tea. Bias is something we must acknowledge and learn from so we can actively design against it.
For my master’s in Human-Centred AI, my final project focused on developing a decision support tool for maternity care, with a focus on ethnic disparities. Delving deep into the world of data and fairness, I found that AI bias isn’t just a tech problem: it affects real people. More on this later…
What are the different types of bias?
A simple definition of bias is when an AI system produces unfair outcomes that disadvantage certain groups. It’s not always intentional; it often creeps in quietly through the data or design choices.
Common types of bias
📌 Data bias: When some groups are missing or underrepresented in the data. For example, medical datasets underrepresenting ethnic minority women, meaning tools may work less accurately for them.
📌 Sampling bias: When the training sample doesn’t reflect reality. A well-known example is CV screening tools trained mostly on male candidates, which ended up downgrading women’s applications.
📌 Measurement bias: This is when the accuracy of data varies across groups. For example, facial recognition systems that are less accurate for people from ethnic minority backgrounds or for women.
All of these biases reinforce inequalities, and over time they chip away at our trust in technology.
What is fairness in an AI context?
Fairness in AI means designing and developing systems that support equitable treatment for all individuals and groups. It shouldn’t be an afterthought; it’s a crucial part of design that should be considered from the very start of your project.
Here are a few principles you could apply to any AI project:
- Define fairness goals early. Decide what fairness means in your context: is it equal opportunity across groups? Minimising disparities in error rates? (A sketch of how these goals can be measured follows this list.)
- Build with diverse personas in mind. In my own project, I completed research and a thematic analysis of literature to build four diverse personas. This helped keep inclusivity front and centre.
- Involve end users in testing and feedback loops. Fairness is best evaluated with the people who will actually use or be affected by the system.
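To make that first principle concrete, here’s a minimal sketch of how two common fairness goals can be turned into measurable metrics with the Fairlearn library. The toy data below is purely illustrative; swap in your own labels, predictions, and sensitive feature.

```python
import numpy as np
from fairlearn.metrics import demographic_parity_difference, equalized_odds_difference

# Toy placeholder data standing in for real labels, predictions, and a sensitive feature
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)
y_pred = rng.integers(0, 2, size=200)
gender = rng.choice(["female", "male"], size=200)

# Demographic parity: how much do selection rates differ between groups?
dp_gap = demographic_parity_difference(y_true, y_pred, sensitive_features=gender)

# Equalized odds: the largest gap in true/false positive rates between groups
eo_gap = equalized_odds_difference(y_true, y_pred, sensitive_features=gender)

print(f"Demographic parity difference: {dp_gap:.3f}")
print(f"Equalized odds difference: {eo_gap:.3f}")
```

Neither metric is ‘the’ definition of fairness; the point is to pick the one that matches the goal you defined for your context, and keep tracking it.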
Now you know what it is – how do we actually tackle it?
Steps to reduce bias in AI
1. Audit your data. Explore it – who’s missing? Is anyone underrepresented? Overrepresented?
2. Use fairness toolkits. I have personally used the Fairlearn library in Python, which can flag where models show performance differences across demographic groups (see the sketch after this list).
3. Apply explainability methods. Libraries like SHAP or LIME can help spot if a model is overly reliant on sensitive features.
4. Diversify your design process. If you’re designing a product or tool, include user personas with different lived experiences to challenge any blind spots.
5. Iterate and monitor! Fairness isn’t a one-off fix; models need to be revisited, tested, and updated.
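If you want to see steps 1 and 2 in practice, here’s a rough sketch using pandas and Fairlearn. The DataFrame, the ‘ethnicity’ column, and the labels and predictions are all made-up placeholders for your own data and model outputs.

```python
import numpy as np
import pandas as pd
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, false_negative_rate

# Toy placeholder data standing in for your dataset and model outputs
rng = np.random.default_rng(0)
df = pd.DataFrame({"ethnicity": rng.choice(["A", "B", "C"], size=300, p=[0.7, 0.25, 0.05])})
y_true = rng.integers(0, 2, size=300)
y_pred = rng.integers(0, 2, size=300)

# Step 1: audit the data. Who is under- or over-represented?
shares = df["ethnicity"].value_counts(normalize=True)
print(shares)
underrepresented = shares[shares < 0.10]  # illustrative 10% threshold
if not underrepresented.empty:
    print("Underrepresented groups:", list(underrepresented.index))

# Step 2: compare model performance across demographic groups
audit = MetricFrame(
    metrics={"accuracy": accuracy_score, "false_negative_rate": false_negative_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=df["ethnicity"],
)
print(audit.by_group)      # per-group metrics
print(audit.difference())  # largest gap between groups for each metric
```

A big gap in the per-group metrics doesn’t automatically mean the model is unusable, but it does tell you exactly where to dig deeper.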
Looking at a real-world example: Bias in healthcare
In my MSc project, I focused on designing a decision support tool for maternity care. It took me on a deep dive into the world of bias in healthcare.
When designing my user personas, I started by researching the systemic disparities in UK maternity outcomes. The reality is stark: according to the latest MBRRACE-UK report, Black women in the UK are more than twice as likely to die during pregnancy or shortly afterwards compared to white women.
When designing a tool to try to alleviate these disparities, it is obviously important to address bias.
Here’s how I approached bias in my project:
- Diverse user personas. I built four synthetic personas from a thematic analysis of literature, representing different ethnicities, languages, and healthcare experiences. This helped me design a tool that didn’t assume a one-size-fits-all patient.
- Fairlearn fairness library. As mentioned above, I used Fairlearn to build a ‘fairness audit’ that triggered an alert in the tool when a group was underrepresented.
- Explainability with SHAP. I wanted clinicians to not only see a risk score, but also why the tool made that prediction. This added transparency and trust, and revealed patterns in risk factors across different groups.
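Here’s roughly what that kind of explainability step can look like in code. This is a minimal sketch rather than my project code: the dataset and model below are stand-ins for your own features and risk model.

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor

# Stand-in data and model: replace with your own features and risk model
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Let SHAP pick an appropriate explainer for the model
explainer = shap.Explainer(model, X)
shap_values = explainer(X)

# Global view: which features drive the model's predictions overall
shap.plots.beeswarm(shap_values)

# Local view: why the model produced this score for one individual
shap.plots.waterfall(shap_values[0])
```

Showing a local explanation alongside the risk score is one way to give clinicians the ‘why’ as well as the ‘what’.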
This project was just a prototype, but it revealed a lot: not only that so many people have been failed by technology, but also that, with the right tools, we can do our best to prevent those failures. Fairness and bias mitigation simply cannot be treated as secondary features; they are the foundation of responsible AI.
Conclusion
Bias isn’t just a technical element; it’s also about values, culture, and the choices we make at every stage of design. Even if your model’s outputs seem fair or neutral, question them. Be the person on your team, or within your own project, who advocates for inclusivity.
As researcher Kate Crawford reminds us, ‘histories of discrimination can live on in digital systems, and if they go unquestioned, they become part of the logic of the AI we build’. That’s why we must design with everyone in mind, monitor continuously, and never treat fairness as a checkbox.
Every small step counts. Whether you’re auditing your dataset, adding diverse personas, or simply asking the hard questions, you are helping create AI that’s fair to everyone – and that’s the tech that’s worth building!

