Entitled “AI as a Public Good: Ensuring Democratic Control of AI in the Information Space,” the policy framework issued on 28 February 2024 contains more than 200 recommendations addressed to States and AI companies. It takes a comprehensive approach, calling for safe and inclusive AI systems, accountability mechanisms and incentives for ethical AI, and governance mechanisms.

The policy recommendations were developed by an international Policy Working Group with 14 members from diverse disciplines and 13 countries across all continents. The Group was co-chaired by Laura Schertel Mendes, Professor of Law, and Jonathan Stray, Senior Scientist, and composed of Rachel Adams, Linda Bonyo, Marta Cantero Gamito, Alistair Knott, Syed Nazakat, Alice Oh, Alejandro Pisanty, Gabriela Ramos, Achim Rettinger, Edward Santow, Suzanne Vergnolle and Claes de Vreese. Over six months, they worked through an inclusive and consultative process, receiving input from more than 150 experts worldwide.

The report outlines key recommendations to governments, industry and relevant stakeholders, notably:

  • Foster the creation of a tailored certification system for AI companies inspired by the success of the Fair Trade certification system.
  • Establish standards governing content authenticity and provenance, including for author authentication.
  • Implement a comprehensive legal framework that clearly defines the rights of individuals, including the right to be informed, to receive an explanation, to challenge a machine-generated outcome, and to non-discrimination.
  • Provide users with an easy and user-friendly way to choose alternative recommender systems that do not optimize for engagement but instead rank content in support of positive individual and societal outcomes, such as reliable information, bridging content or diversity of information.
  • Set up a participatory process to determine the rules and criteria guiding dataset provenance and curation, human labeling for AI training, alignment, and red-teaming to build inclusive, non-discriminatory and transparent AI systems.

“Democracies must stop allowing tech companies to dictate the trajectory of technology, to capture the policy narrative and to set the agendas. Solutions exist to build a global information and communication space conducive to democracy, one that creates value for people not only as consumers, but first and foremost as citizens. We are presenting these solutions today. They call for a comprehensive framework encouraging companies developing and deploying AI to implement democratic procedures, suggesting measures to incentivize the ethical development and use of AI, and setting a framework for accountability, governance and oversight,” highlights Michael Bąk, Executive Director of the Forum on Information and Democracy. “We call this Fair Trade AI.”

Recent events have shown the destructive power that AI can have on political processes. Deepfakes of political actors can influence voting behavior, and AI systems can amplify content that escalates conflict and crises. Chatbots have already provided incorrect information about elections. AI systems can reproduce existing inequalities and cultural hegemonies and lead to discrimination and bias. Yet, AI also presents untapped possibilities to strengthen news production, data analysis and access to information. 

“If AI development and use continue on their current course, they pose major challenges to the information environment that powers democratic processes. We are on the verge of a major shift in the AI governance landscape from ideas to regulation. It is time for States to act, and our roadmap is intended to guide policy-makers in defending democracy,” explain Laura Schertel Mendes and Jonathan Stray, co-chairs of the Working Group.

The report was launched on the occasion of a global event held in five cities on four continents. The event featured an online launch and local panel debates, organized in collaboration with several partners: the Center for Human-Compatible Artificial Intelligence (UC Berkeley, United States), the Florence School of Transnational Governance (European University Institute, Italy), the Institute of Education, Development and Research (Brazil), Research ICT Africa (South Africa), and the Paris School of International Affairs, Tech and Global Affairs Innovation Hub (SciencesPo, France).

Christophe Deloire, Chair of the Forum on Information and Democracy, introduced the report’s main takeaways from Paris, followed by remarks from co-chairs Jonathan Stray and Laura Schertel Mendes in Berkeley and Brasilia respectively. Scott Timcke joined from Cape Town and Marta Cantero Gamito from Florence, before the floor was given to Gabriela Ramos, a member of the Working Group who led the development of the Ethics of AI recommendations at UNESCO.