Gender bias in AI: what can we do about it?
One thing generative artificial intelligence (AI) can’t do is advocate for gender equity in tech. Eleni Sarri of Tug explains how to tackle this issue in AI.
The tech industry must encourage more women, queer, and non-binary-identifying people to enter the field / Delia Giandeini
Usage of generative AI has exploded in the last couple of years. Developments like OpenAI's ChatGPT have become the latest hot topic in the industry, with fierce competition among tech giants trying to get ahead of the game.
From helping non-coders build apps and virtual assistants to automating a huge range of processes, AI has brought immense transformations to tech. Currently, we’re seeing the industry use ChatGPT to help build tools. From keyword research to web scraping, content writing, and even image production - you name it, someone’s likely using ChatGPT to help automate it.
The tool is now available to everyone. However, while AI and its emerging evolutions offer huge potential, in a culture where sexism thrives, malicious outcomes darken this otherwise exciting space.
Underrepresentation makes for biased tech
Currently, there's a big gap between the proportion of programmers who are male and those who are female, non-binary, gender-queer or gender non-conforming. Men make up 91.88% of the AI workforce, while 5.17% are women and 1.67% fall into the categories of non-binary, gender-queer and gender non-conforming. Consequently, AI is largely being defined by one group, leading to the underrepresentation of others.
Gender bias is manifesting in the space as a result. Innovations like deepfakes have long been used predominantly against women, while gender stereotyping is emulated in much generative art, in AI text outputs, and in voice recognition systems.
Examples of gender bias in AI include image generators being more likely to produce images of men when prompted with the word ‘doctor’, and images of women when prompted with ‘nurse’.
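One simple way to make this kind of skew visible is to audit a model's outputs directly: collect many outputs for a gender-neutral prompt and count the gendered terms they contain. The sketch below is purely illustrative; the sample outputs are invented stand-ins for what a real image-captioning or text model might return for the prompt ‘doctor’.

```python
from collections import Counter
import re

# Hypothetical outputs for the neutral prompt "doctor". In a real
# audit these would come from the model under test, not a hand-written list.
sample_outputs = [
    "A man in a white coat examines an X-ray.",
    "He listens to the patient's heartbeat.",
    "A male physician reviews the chart.",
    "She explains the diagnosis to the family.",
]

MASCULINE = {"he", "him", "his", "man", "male"}
FEMININE = {"she", "her", "hers", "woman", "female"}

def gender_term_counts(texts):
    """Count masculine vs feminine terms across a batch of outputs."""
    counts = Counter()
    for text in texts:
        for token in re.findall(r"[a-z']+", text.lower()):
            if token in MASCULINE:
                counts["masculine"] += 1
            elif token in FEMININE:
                counts["feminine"] += 1
    return counts

print(gender_term_counts(sample_outputs))
```

A large imbalance between the two counts for a gender-neutral prompt is a signal worth investigating, though not proof of bias on its own; real audits use far larger samples and richer term lists.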
With so few women, non-binary and queer AI programmers, underrepresentation is widening at a pivotal moment for AI, when the foundations of its ecosystem are being defined. AI systems often treat gender as binary, and if models are trained on data that doesn't recognize certain groups, those groups can be left feeling unrepresented.
Discrimination is also present due to the lack of data representing these groups. The problem isn't just the gender imbalance of the people training AI, but also the actual data that exists on gender, and its potential to reinforce inequality. Not only is this producing discriminatory outcomes, it's also failing to accommodate the needs of underrepresented groups.
The healthcare industry is a stark example of how this tech can jeopardize the well-being of women and other minority groups. As Caroline Criado Perez describes in her book ‘Invisible Women: Exposing Data Bias in a World Designed for Men’, models trained to conduct medical diagnoses draw on historical medical data that was generated with a focus on male health characteristics.
This is due to the lack of health data that is currently available for women or other groups. The result for these underrepresented groups? Inadequate healthcare advice, misdiagnosis, and poor treatment.
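The mechanism behind this can be shown with a toy sketch: a screening threshold fitted to male-dominated records is applied to everyone, so healthy women whose baseline differs get misclassified. Every number below is invented for illustration only; real clinical data and models are far more complex.

```python
# Toy illustration (all values invented): suppose a biomarker's healthy
# baseline differs by sex, and readings well below the learned "normal"
# level are flagged as abnormal.
male_healthy = [100, 102, 98, 101, 99, 100, 103, 97, 100, 101]
female_healthy = [85, 87, 84, 86, 85]  # far fewer records collected

# Threshold fitted on a dataset that is ~95% male, mirroring the
# imbalance in historical medical data.
training = male_healthy * 9 + female_healthy
threshold = sum(training) / len(training) - 10  # flag readings this far below the mean

def flagged_as_abnormal(value, threshold):
    """Flag a reading as abnormal if it falls below the fitted threshold."""
    return value < threshold

male_false_alarms = sum(flagged_as_abnormal(v, threshold) for v in male_healthy)
female_false_alarms = sum(flagged_as_abnormal(v, threshold) for v in female_healthy)

# Healthy men are never flagged; every healthy woman is misclassified,
# because "normal" was defined almost entirely by male data.
print(male_false_alarms, female_false_alarms)
```

The point of the sketch is structural, not numerical: when one group dominates the training data, the model's notion of ‘normal’ is that group's normal, and everyone else pays for the gap.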
How do we tackle bias in AI?
The tech industry needs to encourage more women, queer, and non-binary-identifying people to enter the field. What's more, it needs to start regulating any malpractices that discriminate against or target certain groups. Initiatives could include helping these groups get into tech-related higher education and increasing their awareness of the jobs available across AI.
Historically, these groups have been divided and conditioned into a binary model in which the typical image of a tech employee is male. This has fed the misconception that men are more mathematical and tech-savvy than other groups.
Although queer coders may be more likely to fall outside the binary, the data still lacks accuracy from a representation point of view. This is to be expected: queer people may feel fear or discomfort about vocalizing their identity in the workplace, and are hence less likely to design ‘queer-friendly’ code in a system directed by a larger group.
AI's evolution can be great for our society. But when it is used in a myopic way that undermines these minorities, the dangers may far outweigh the benefits. Any new tech has to be treated consciously and used with the intention of having a positive impact on the world. Companies that use AI systems should be open about the data sets and algorithms used to train them, and should be willing to make changes if bias is detected.
Governance has become – and will continue to be – an immense challenge in the coming years. The tech industry has a strong voice in influencing the direction of the field. Because of this, we must ensure we participate in decision-making, promote the importance of regulation and apply best practices for representation. This will be key to tackling discrimination and preventing gender bias from spreading further.
By promoting diversity in the AI workforce, designing inclusive AI systems, and being transparent about the data and algorithms being used, we can help shape a healthy, safe and nonbiased environment.
Content by The Drum Network member:
Tug is a performance-driven, global digital marketing agency, optimised to grow ambitious brands through the smart combination of data, media, content and technology.
Our offices in London, Berlin, Toronto and Sydney mix local capabilities with international scale to drive real business advantage for our clients, removing barriers to siloed thinking for optimal outcomes.
Tug supports brands - across a wide variety of verticals - looking to capitalise on the growing number of consumers online, with integrated, data-driven digital campaigns.
The team focuses on driving results and holds itself accountable for each client's performance.
Proudly working with clients such as Muller, WWF, Zipcar, Compare the Market and Norton.