DEPUTY ADMINISTRATOR ISOBEL COLEMAN: Thank you, Teresa [Hutson]. And thank you to our partners at Microsoft and Truepic for bringing us together today.
Over the last few years, artificial intelligence has taken the world by storm. And its recent advancements have been unprecedented.
This past July, UN Secretary-General António Guterres remarked to the General Assembly that generative AI has been compared to the introduction of the printing press. “But,” he qualified, “while it took more than 50 years for printed books to become widely available across Europe, ChatGPT reached 100 million users in just two months.”
The conversation in the news, in academic circles, and in the policy world has been largely polarized. AI is either an existential risk to be reckoned with, or the most promising technological advancement humanity has seen since the dawn of the internet.
And both arguments ring true. AI – especially unregulated AI – presents significant risks, ethical concerns, and threats to democracy.
It opens up new avenues for authoritarians to suppress and surveil their populations; it equips malign actors with tools to propagate misinformation and disinformation, and to spread online hate speech and harassment at scale; and, when left unchecked, it can be used to target civilians, silence dissenting voices, and manipulate public opinion.
But AI also opens up a world of potential, and presents immense opportunities for good. It can be used to protect citizens and help prevent conflicts.
It can assess complex social and behavioral phenomena – from human trafficking and transnational crime, to violence and extremist activity – rapidly and at a massive scale, to enable the creation of early warning systems and help protect civilians in conflict zones.
It can provide avenues for civil society, investigative journalists, independent media, and human rights defenders to network and communicate more effectively.
It can be a pivotal tool in strategically countering misinformation, disinformation, and hate speech. And in times of conflict, it can help us document atrocities and hold perpetrators accountable.
At USAID, we are leveraging AI tools to help our partners improve their communities, and we are seeing powerful examples of AI at work for good.
In Ukraine, USAID partnered with the international nonprofit Pact to connect a group of Ukrainian documenters associated with the Ukrainian NGO Anti-Corruption Headquarters with our trusted partners Microsoft and Truepic – here with us today – to create an AI-enabled system for documenting damage to cultural heritage and national infrastructure across Ukraine.
This tool enables users to verify images and capture precise data in order to accurately document damage to Ukrainian cultural sites.
The image authentication software has been used to document over 600 attacks on cultural heritage and nationally significant sites, and images have been included in at least ten potential war crimes investigations by prosecutors throughout Ukraine.
During my own travels to Ukraine, I’ve seen firsthand the catastrophic damage Russia has inflicted. Through the power of AI, we are shedding light on the destruction of Ukrainian cultural identity with the transparency and authenticity that will be critical to pursuing reparations and restoring these treasured landmarks in the future.
In Belarus, USAID supports the use of AI to bring truthful, authoritative information to citizens on social media. With USAID support, our partner created an AI-enabled bot that converts news web pages into an accessible form that cannot be blocked in Belarus.
When users send the bot a link to so-called “extremist” content that is blocked by the regime, the bot converts it into a link that can be freely accessed. This is a win for countering disinformation, and is equipping journalists and advocates for democracy with the tools they need to succeed.
This work builds on the efforts of brave champions for progress like Sviatlana Tsikhanouskaya, opposition leader for Democratic Belarus, who is here with us today.
And globally, USAID has supported the Machine Learning for Peace Project, which uses AI to analyze millions of news articles to predict closures of civic space in 54 countries, equipping researchers, policymakers, and activists around the world with rich, high-frequency data on political regimes to inform their work to defend democracy and free speech.
At USAID, we are looking toward the future with hope and optimism, and striving to prepare for the challenges and seize the opportunities of AI. In the face of polarization and disinformation, we are taking a sensible, middle path on this technology.
While AI does indeed pose risks, especially in sensitive contexts such as conflict settings, we are optimistic about the responsible use of AI to help citizens in those settings, counter disinformation, and preserve records and artifacts.
We are working to codify this measured approach.
Our USAID AI Action Plan lays out a vision for responsible use of AI in international development by delineating steps for responsible engagement. USAID also promotes the Donor Principles for Human Rights in the Digital Age, a nonbinding set of principles drafted multilaterally with the Freedom Online Coalition, which seek to advance an affirmative, rights-respecting agenda for our collective digital future, upholding our commitment to “do no harm.”
And we do so in conjunction with this administration, with civil society, and with the private sector.
Just last week, the Biden administration secured voluntary commitments from eight additional AI companies – including Microsoft – that design, develop and deploy AI to manage the associated risks and help move toward the safe, secure, rights-respecting and transparent development of AI technology.
By understanding both the risks and benefits of a technology, and putting people at the center of our designs, we can maximize the benefits and plan ways to mitigate the risks.
We can leverage this transformative technology for profound good, in ways that preserve cultures, protect citizens, and benefit humanity.