![]() | ||
Accelerating action against gender bias in AIWhile celebrating International Women's Day on Saturday (8 March), the cynic in me couldn't help but wonder how much of a difference one day would really make. But this year’s theme – accelerate action for gender equality – really resonated with me and got me thinking about ways in which I could potentially heed this call as a woman, as a human and as a UX writer. ![]() Author: Henda Scott, senior UX writer at Helm If you’re wondering what my job title has to do with it, the answer is, in fact, a whole lot. Years of crafting content and conversations has shown me how language can serve to either enforce or break gender (and other) stereotypes. So, if I am to make a real difference, or at least contribute meaningfully to this cause, a commitment to gender equality should not only come through in my writing, but also in the solutions we design and develop at Helm. Finding ourselves at the forefront of the AI revolution in South Africa, our solutions are becoming increasingly reliant on AI technology, such as LLMs. However, advanced as they may be, these models have proven (quite embarrassingly, at times) that they are not immune to biases. Who could forget Amazon’s AI-powered recruitment engine that discriminated against female applicants with alarming efficiency? Or Stability AI’s Stable Diffusion text-to-image model that associated high-powered jobs almost exclusively with males? I can ramble off a long list of instances where AI got it wrong, but we’ve all read the case studies and laughed (or cringed) at the anecdotes. Instead, I’d like to look at how AI can get it right and do it in a way that doesn’t just facilitate change, but actually accelerates it. You see, I believe in the power of AI and its ability to speed up processes that traditionally took more time to complete. And since the World Economic Forum estimated that it’ll take 133 years to achieve true gender parity, I’d like to believe that – with a little help from AI and the people working in the field – we can shorten this timespan by a generation or three. So, where do we start? A natural starting point is our own models and the ethical frameworks that guide their development. If our Machine Learning team can ensure that our models are completely devoid of all gender bias – through a continuous process of testing and tweaking – we’re one solid step closer to the goal of gender parity. However, as a smaller player without the limitless resources of an OpenAI, for example, it’s simply not practical or viable for us to develop all AI models from scratch. Incorporating an existing model from one of the major players in the industry is therefore inevitable, but we cannot do so without acknowledging our lack of control over it. In the wake of a scathing report on gender biases in LLMs by Unesco in 2024, it’s been encouraging to see the likes of OpenAI and Google make great advances in reducing and eliminating biases in some of their most popular models – in fact, I tried tricking Gemini into using gender stereotypes and failed miserably. In the end, the responsibility remains with us to vet any external models we integrate into our solutions. Only through thorough testing (and the odd attempt to trick) can we identify potential biases and either select a better model or tweak our application of it to mitigate these. Despite their past shortcomings, or perhaps because of these, AI models are constantly evolving and improving, thanks to a renewed focus on ethics in AI development, accompanied by clearer, more stringent guidelines on application. This, coupled with the commitment I’ve seen from people I work with daily, leaves me with no doubt that AI can be an excellent tool to accelerate action against bias and achieve gender parity in our lifetime. About the authorHenda Scott is a senior UX writer at Helm
| ||