As the use of artificial intelligence (AI) proliferates, more instances of the inequitable design, use, and impact of AI-enabled tools in developing countries are coming to light. Many of these tools and approaches can generate inequitable outcomes across genders due to bias embedded in AI technology through data collection, model design, or end-use applications. These tools often pose the greatest risk of harm and missed opportunities to those who have historically been subject to bias. These inequities require creative solutions to ensure that everyone has a chance to benefit from AI technology. To foster an equitable and inclusive digital ecosystem, more efforts are needed to identify innovative and timely approaches to help decision-makers address gender biases, harms, and inequitable outcomes resulting from AI technology.
About the Competition
Through the Equitable AI Challenge, the U.S. Agency for International Development (USAID) is investing in innovative approaches to help identify and address actual and potential gender biases within AI systems, in particular those relevant to global development. USAID is supporting approaches to increase the prevention, identification, transparency, monitoring, and accountability of AI systems so that their outputs do not produce gender-inequitable results. While some of these approaches use technology, others do not. In the first round of the challenge, Digital Frontiers disbursed five grants using over $570,000 in award funding. Before making awards, the challenge included the following steps:
- Phase One: An open call for short concept notes. Concepts were accepted under two tracks — “Prevention,” for approaches to prevent AI systems from incorporating gender bias into their outputs, and “Response,” for approaches to inspire transparency and accountability in existing AI systems. Strong applications from Phase One were designated semi-finalists and invited to attend a co-creation workshop in early 2022 to discuss the context in which different instances of inequity arise, showcase different interventions, and encourage collaboration and partnership among semi-finalists within and across the tracks.
- Phase Two: Semi-finalists were then invited to submit revised concept notes updating their original submissions as relevant. Five winning teams have been awarded grant funds to design, test, scale, and/or socialize their approaches over the course of one year.
Digital Frontiers encouraged proposals from:
- All organizations regardless of size, including technology firms, startups, small and medium enterprises, civil society organizations, and researchers from around the world.
- Diverse groups that have clear, strategic, and collaborative approaches or frameworks to tackle the complexity of the gender bias and inequity occurring as a result of AI applications, including, and especially, collaboration between developers, AI tool buyers, and civil society organizations.
- Organizations that present a business case for why addressing gender-inequitable outcomes from AI is both valuable to their organization and sustainable after the grant ends.
- Proposals that represent a good-faith effort from developers to serve as an industry example of the steps needed to identify and eliminate instances of gender inequity in their AI tools.
- Proposals that address the diversity of AI-enabled bias or inequity women face across intersectional identities (disability, sexual minority, language minority, caste status, etc.).
Meet our Winners
USAID chose 28 diverse semi-finalists to attend a three-week virtual co-creation event, held between February 14 and March 1, 2022, which brought together select technology firms, startups, small and medium enterprises, civil society organizations, and researchers from around the world. The co-creation focused on the need for close collaboration between the public and private sectors, which allows for diverse perspectives, local solutions, and partnerships to form among AI technology developers, investors, donors, and users. With a desire to address AI’s most critical issues, including bias and inequity within AI systems, participants were encouraged to collaborate on solutions, identify partnerships, and strengthen their proposals — all while forming a larger community of practice.
In October 2022, USAID and Digital Frontiers selected five proposals to receive grant awards and implement their approaches in alignment with the challenge’s objectives. Learn more about the winners of the Equitable AI Challenge as their work gets underway!
A Due Diligence Tool for Investors to Examine Algorithms
Investors play an important role in digital ecosystems and in determining how future innovations take shape. Yet investors have little visibility into how companies use algorithms and AI in their financial tools. For example, companies may deploy AI-based credit scoring to expand access to loans or to target financial products; these efforts may be well-intentioned but still produce inequitable results. The Accion Center for Financial Inclusion (CFI) will develop a due diligence tool to help investors make better, gender-informed decisions about where and how to invest their money and establish means to verify whether any algorithmic tools perpetuate women’s historical marginalization within the financial sector. The tool will allow impact investors and donors to push product designers to build better, more equitable processes for algorithm development. Through this approach, CFI expects to give investors confidence that the AI products and services they fund do not exacerbate women’s inequity.
Gender Accessibility in Health Chatbots
Mobile chatbots are an increasingly common way for healthcare companies to reach a greater number of patients. They can reduce the burden on healthcare providers by automatically triaging patients based on their symptoms or by providing rapid advice for routine health concerns. These chatbots often rely on a type of AI, natural language processing (NLP), to engage patients in an automated way. However, if not designed with different genders in mind, these chatbots may not appropriately register the symptoms women are experiencing or provide advice that is well-suited for female bodies. Through this project, the University of Lagos and Nivi are partnering to create a gender-aware auditing tool within their existing health chatbot deployment in Nigeria. The tool will evaluate customer interactions with the chatbot (in English and Hausa) along with how the bot interprets and then responds to each conversation. User feedback will also be gathered and incorporated into the chatbot’s programming to better respond to user needs. By building an AI-based system that is more attuned to the needs of each local population, Nivi’s health chatbot and digital health services will ideally reach more women, help them make better-informed health decisions, and reduce overall healthcare costs.
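The kind of gender-aware audit described above can be sketched in a few lines. Everything here is hypothetical: the log fields, gender labels, and values are invented for illustration and are not Nivi’s actual schema.

```python
# Hypothetical sketch of a gender-aware chatbot audit.
# Fields and values are illustrative, not Nivi's actual data model.
from collections import defaultdict

def audit_by_gender(conversations):
    """Compute the bot's intent-misinterpretation rate per gender group.

    Each conversation is a dict with a 'gender' label and a boolean
    'intent_correct' flag (did the bot interpret the message correctly?).
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for conv in conversations:
        totals[conv["gender"]] += 1
        if not conv["intent_correct"]:
            errors[conv["gender"]] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy log in which the bot misreads women's symptom descriptions more often.
log = [
    {"gender": "female", "intent_correct": False},
    {"gender": "female", "intent_correct": True},
    {"gender": "female", "intent_correct": False},
    {"gender": "female", "intent_correct": True},
    {"gender": "male", "intent_correct": True},
    {"gender": "male", "intent_correct": True},
    {"gender": "male", "intent_correct": False},
    {"gender": "male", "intent_correct": True},
]
rates = audit_by_gender(log)
# female error rate 0.5 vs. male 0.25: a gap worth investigating
```

A real audit would, of course, draw on annotated conversation transcripts rather than pre-labeled flags, but the core comparison is the same: disaggregate the bot’s error rate by gender and flag gaps.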
Improving Access to Credit with Gender-differentiated Credit Scoring Algorithms
For those without formal financial records or history with banks, AI is increasingly being used to determine a loan applicant’s creditworthiness based on alternative data. This data can include details such as someone’s mobile phone usage or whether they capitalize the names of their contacts. While this approach has the potential to expand access to credit, these alternative credit-scoring models tend to pool data from men and women and claim to take a “gender-neutral” approach. Because of historical gender biases, this can place women at a disadvantage when seeking access to credit. The University of California, Berkeley, Northwestern University, and Texas A&M University are partnering with RappiCard Mexico to address this challenge. They will develop an AI model that differentiates creditworthiness between men and women, aiming to increase both fairness and efficiency in credit scoring. This research will inform policymakers and practitioners about whether (and how) gender-aware, rather than gender-neutral, algorithms are fairer and more effective for women seeking access to credit. The study findings will be shared with RappiCard, as well as other fintech partners, to apply the algorithm in their digital credit products.
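The difference between a pooled, “gender-neutral” cutoff and gender-differentiated calibration can be sketched with toy numbers. All scores and cutoffs below are invented for illustration and have nothing to do with RappiCard’s actual models or data.

```python
# Illustrative sketch (toy numbers): one pooled score cutoff vs.
# gender-differentiated cutoffs calibrated per group.

def approval_rate(scores, cutoff):
    """Fraction of applicants whose score meets the cutoff."""
    return sum(s >= cutoff for s in scores) / len(scores)

# Suppose repayment behaviour is identical across groups, but
# alternative-data scores run systematically lower for women
# (e.g. thinner mobile-phone usage histories).
men   = [620, 650, 680, 700, 720, 740]
women = [560, 590, 620, 640, 660, 680]

pooled_cutoff = 650                    # one "gender-neutral" bar
rate_men_pooled   = approval_rate(men, pooled_cutoff)    # 5/6
rate_women_pooled = approval_rate(women, pooled_cutoff)  # 2/6

# A gender-aware model instead calibrates each group's cutoff so that
# equally creditworthy applicants are approved at equal rates.
cutoffs = {"men": 650, "women": 590}   # toy per-group calibration
rate_women_aware = approval_rate(women, cutoffs["women"])  # 5/6
```

The design point is that “ignoring” gender does not make a model neutral when the input data already encode gendered differences; differentiating the model is one hypothesis, to be tested in this research, for correcting that.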
Preventing Gender Bias in AI-based Early Alert Systems in Higher Education
Governments around the world are increasingly turning to AI for automating service delivery, increasing efficiency, and improving transparency. But what controls are there to ensure that these systems operate equitably? Itad, in partnership with Women in Digital Transformation, PIT Policy Lab, and Athena Infonomics, will work with the Mexican state of Guanajuato’s Ministry of Education on a pioneering initiative called Educational Paths to identify and mitigate gender bias within Guanajuato’s new AI-based early alert system. This system uses AI to better identify higher education students who are at risk of failing. The grant will support Educational Paths in identifying and mitigating potential gender bias in the government’s databases and AI performance. Specifically, the team will develop an Ethical Guide and Checklist for decision-makers to ensure responsible and equitable deployment of AI systems. They will also leverage IBM’s AI Fairness 360 toolkit to detect AI bias, providing the toolkit to the Ministry of Education. Lessons from this case study will be presented in a workshop with stakeholders from the Ministry of Education in the state of Tamil Nadu, India, to explore how lessons learned from the Mexico experience could transfer to the Indian context.
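As a rough illustration of the group-fairness checks that a toolkit such as AI Fairness 360 automates, two standard metrics can be computed by hand. This is not the toolkit’s API, and the student data below are invented.

```python
# Hand-rolled illustration of two group-fairness metrics (statistical
# parity difference and disparate impact) that toolkits like AI Fairness
# 360 also provide. Data are invented; this is not the toolkit's API.

def rate(flags):
    """Fraction of students flagged 'at risk' (1 = flagged)."""
    return sum(flags) / len(flags)

flags_female = [1, 1, 1, 0, 1, 0, 1, 1, 0, 1]   # 70% flagged
flags_male   = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]   # 40% flagged

# Statistical parity difference: gap in flag rates between groups;
# values far from 0 signal a disparity to investigate.
spd = rate(flags_female) - rate(flags_male)      # ~0.3

# Disparate impact: ratio of flag rates; values far from 1.0
# signal that one group is flagged disproportionately.
di = rate(flags_female) / rate(flags_male)       # ~1.75
```

Note that a gap in flag rates is not automatically unfair (base rates may genuinely differ); the point of such metrics is to surface disparities that decision-makers must then examine against ground truth.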
Evaluating Gender Bias in AI Applications using Household Survey Data
Household survey data are increasingly being used to build AI tools that can better estimate poverty around the world. But if the underlying data are biased, any AI tools built from them will reflect those biases. AidData at William & Mary, in partnership with the Ghana Center for Democratic Development (CDD-Ghana), will evaluate the impact of gender bias on poverty estimates generated using AI and USAID’s Demographic and Health Surveys (DHS) data. This project will inform AI developers, researchers, development organizations, and decision-makers who produce or use poverty estimates. Using AidData’s expertise in AI, geospatial data, and household surveys, as well as CDD-Ghana’s knowledge of the local context, this project will produce a novel public good that elevates equity discussions surrounding AI tools in poverty alleviation. Overall, this work will encourage deeper consideration of potential bias in the data and the resulting AI models, while also providing a practical roadmap for evaluating bias in other applications.
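One core check of this kind, comparing a model’s estimation error across household groups, can be sketched as follows. All figures are invented; the project’s actual DHS data, models, and evaluation design will differ.

```python
# Hypothetical sketch: does a poverty model systematically under- or
# over-estimate poverty for one group? All numbers are invented.

def mean_error(pairs):
    """Mean of (estimate - ground truth) over (est, true) pairs."""
    return sum(est - true for est, true in pairs) / len(pairs)

# (model estimate, survey ground truth) poverty rates,
# split by the gender of the household head
male_headed   = [(0.30, 0.31), (0.26, 0.24), (0.40, 0.41)]
female_headed = [(0.30, 0.38), (0.28, 0.35), (0.33, 0.40)]

bias_male   = mean_error(male_headed)     # ~0.00: well calibrated
bias_female = mean_error(female_headed)   # ~-0.07: poverty underestimated
```

In this toy case the model looks well calibrated overall while consistently underestimating poverty in female-headed households, exactly the kind of disparity that aggregate accuracy metrics hide and that group-disaggregated evaluation reveals.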
What’s Next: The Journey Towards Implementation
Through these diverse concepts spanning geographic regions and types of approaches—from improving AI fairness tools and data systems, to strengthening the evidence base for AI fairness in development contexts, to developing and testing more equitable algorithms—the winners of the Equitable AI Challenge will help USAID and its partners better address and prevent gender biases in AI systems in countries where USAID works.
Over the next year, these awardees will work with USAID and its partners to implement their approaches and generate new technical knowledge, lessons learned, and tested solutions for addressing gender bias in AI tools. Through this implementation phase, USAID seeks to foster a diverse and more inclusive digital ecosystem where all communities can benefit from emerging technologies like AI and, most importantly, where members of these communities are not harmed by them. This effort will inform USAID and the development community; provide a greater understanding of AI fairness tools and approaches; better determine what these tools capture and what they leave out; and inform what tactics are needed to update, adapt, and socialize them for broader use.
Stay tuned as we share the ongoing progress of the challenge winners and build a stronger community of practice to learn and work towards a more equitable AI-powered future.