Artificial Intelligence (AI) is used every day to analyze data that supports decision making. One might assume that the data and the programming that train these systems are completely objective. But what if the data sets, the inputs, or the instructions are biased? The decisions rendered will be skewed, leading to suboptimal outcomes.
According to Gartner, 85% of AI projects will deliver erroneous outcomes due to bias in data, algorithms, or the teams responsible for managing them.1 Why are these systems flawed?
Data sets dominated by male data are problematic. For example, research reported in the Stanford Social Innovation Review found that among 133 machine-learning systems deployed across industries from 1988 to the present, 44.2% demonstrated gender bias, which resulted in 70% lower quality of service delivery for women.2
In healthcare, the problem is magnified by gender-biased data points. For example, AI systems that improve detection and diagnosis of skin cancer are a boon to dermatology. Yet those systems struggle to detect melanoma in dark skin; for Black women in particular, this puts lives at risk.3
Alarmingly, women are 60% more likely to have an adverse reaction to prescription drugs, which could be due to the lack of medical studies performed on women.4 A 2018 literature review analyzing gender norms and biases concluded that women with chronic pain tend to be prescribed less pain medication.5 Additional research has revealed that AI used to diagnose heart attacks was 50% more likely to misdiagnose women.6
But there is hope on the horizon. Most recently, MIT created a deep learning algorithm to read mammograms to assess breast density,7 and the NIH and Intellectual Ventures’ Global Good Fund (now part of the Bill & Melinda Gates Foundation) created an algorithm to analyze images of a woman’s cervix to identify precancerous changes.8
What to do about gender bias in AI?
Your program is only as good as your data. AI relies on the data sets provided to program designers, who develop the algorithm and the taxonomy underpinning the program. If the data sets, instructions, categories, or labels are flawed, the results will be wrong.
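To make that point concrete, a data set can be audited for representation gaps before it ever reaches a training pipeline. The sketch below is illustrative only: the field name, threshold, and records are hypothetical assumptions, not taken from any system described in this article.

```python
# Minimal sketch: auditing a data set for representation imbalance
# before training a model. Field name, threshold, and records are
# hypothetical, for illustration only.
from collections import Counter

def audit_representation(samples, field="sex", threshold=0.40):
    """Return each group's share of the data set and whether it
    falls below the chosen representation threshold."""
    counts = Counter(s[field] for s in samples)
    n = sum(counts.values())
    return {group: (count / n, count / n < threshold)
            for group, count in counts.items()}

# Hypothetical clinical records, skewed toward male subjects
samples = [{"sex": "male"}] * 80 + [{"sex": "female"}] * 20

for group, (share, flagged) in audit_representation(samples).items():
    print(f"{group}: {share:.0%}" + ("  <- underrepresented" if flagged else ""))
```

A check like this catches the skew at the input stage, before flawed labels or categories are baked into the algorithm.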
When building AI systems, the team should include women. Stakeholders from diverse backgrounds and disciplines should review the framework and data sets to detect bias and ensure the machine-learning protocols support accurate decision-making.
The future of work for women in AI
Novel methods utilizing AI to improve women's health begin with the creation of accurate algorithms and require more women involved in AI development. Dr. Caitlin Pley, a WHO consultant and associate with Women in Global Health Policy, addressed gender equality during a 2019 meeting of the global AI health community at the Intelligent Health Summit in Basel, Switzerland.9
Among her recommendations were consideration of gender inequalities in data sources; use of unbiased datasets and disaggregation of results by sex, age, and other factors; opening up access to AI software and data; and ensuring all health professionals have equal access to and training in AI for better health outcomes.10
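Disaggregating results, one of the recommendations above, can be sketched in a few lines: instead of reporting a single aggregate accuracy, compute the metric separately for each group. The data and group labels below are hypothetical, for illustration only.

```python
# Minimal sketch: disaggregating model evaluation results by sex
# to surface performance gaps. All records below are hypothetical.
from collections import defaultdict

def accuracy_by_group(records):
    """Compute accuracy separately for each demographic group.

    records: list of (group, true_label, predicted_label) tuples.
    Returns a dict mapping group -> accuracy.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, pred in records:
        total[group] += 1
        if truth == pred:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical diagnostic predictions: (group, actual, predicted)
records = [
    ("female", 1, 0), ("female", 1, 1), ("female", 0, 0), ("female", 1, 0),
    ("male", 1, 1), ("male", 0, 0), ("male", 1, 1), ("male", 0, 0),
]
print(accuracy_by_group(records))  # {'female': 0.5, 'male': 1.0}
```

Here the aggregate accuracy is 75%, which would mask the fact that the hypothetical model is right only half the time for women; the disaggregated view makes the gap visible.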
A UNESCO report, I’d Blush If I Could, examines the biases in digital skills education and AI. It offers recommendations for future action, such as strengthening gender equality in AI principles and creating realistic action plans.11,12
Such discussions have helped spur recent developments in AI for women's health, a sector known as Femtech. One example of better data gathering is wearable tech designed with a deeper understanding of women's physiology.