Artificial intelligence and the information space

Artificial intelligence is increasingly transforming the way we create, disseminate and consume information. While it provides new tools for investigative reporting, it also facilitates deepfakes and disinformation campaigns, and diminishes trust.

A trust crisis

Generative artificial intelligence is revolutionizing the production and distribution of information. Creating a manipulated picture or drafting a convincing false text is now within anyone’s reach. The internet is already flooded with fully generated “news sites” and deepfakes. Even without malicious intent, generative artificial intelligence is not flawless: it hallucinates, spreading false information and potentially damaging media outlets’ reputations.

It is becoming increasingly difficult to distinguish trustworthy from untrustworthy content, and artificially fabricated from human-created material, deepening the trust crisis. News media are struggling to survive economically in this context: they compete against artificial content for attention and advertising revenue. As with social media platforms, the large AI companies operate with little transparency or accountability, yet they are transforming our global information space.

At the same time, AI offers opportunities to transform public interest media and access to reliable information. Realizing them requires reliable, transparent and responsible systems, as well as the capacity to use them adequately.

 

Our work on this theme

Policy brief

OSCE Policy Manual: Safeguarding media freedom in the age of Big Tech Platforms and AI – in partnership with the Forum on Information and Democracy

This Policy Manual highlights how the current digital information ecosystem — dominated by Big Tech platforms (very large social media and search engines, and increasingly also AI companies) — has become increasingly captured in ways that undermine media freedom. It underscores the need for democratic state intervention, based on the rule of law, to ensure an enabling environment for independent and pluralistic journalism.

The Manual offers a vision for healthy online information spaces, where the availability and accessibility of public interest information are ensured. It puts forward mitigation measures and key recommendations for States to implement long-term structural reforms and sustained investments to address the distortions in today’s online information ecosystem.

The recommended mitigation measures cover three key areas:
• Visibility of journalism and public interest information online
• Media viability and funding models that support public interest information
• Vigilance, or the online safety of journalists

The core of this Policy Manual lies in the guidance it provides on how to enable healthy information spaces online by freeing the ecosystem from heavily concentrated gatekeeping power, and instead fostering an enabling environment for media freedom in the algorithmic and artificial intelligence (AI) era.

It concludes that for media freedom to be safeguarded, addressing platform-related challenges alone is not sufficient. Instead, it calls for more ambitious structural reforms — to move beyond merely mitigating media dependency and towards building an independent, pluralistic online information and media landscape that can sustain democratic debate and societal resilience.

This publication is part of the project “Healthy Online Information Spaces – SAIFE Renewed”. It was produced in collaboration with the Forum on Information and Democracy.

Insight

Legislative trends in the regulation of artificial intelligence in Central America and the Dominican Republic – in cooperation with IPANDETEC

Since 2023, a legislative trend toward the regulation of artificial intelligence has emerged in Central America and the Dominican Republic. Bills have been presented in three countries (Costa Rica, Panama and the Dominican Republic) and remain under discussion in their respective legislatures. A model law proposed by the Latin American and Caribbean Parliament also promotes AI regulation. Yet progress is slow, and existing data protection laws are too outdated to address the new challenges posed by AI.

This analysis was developed in cooperation with IPANDETEC.

Policy brief

A Voluntary Certification Mechanism for Public Interest AI

 

Latest news on the same theme

The evolving nature of private messaging platforms and its impact on information integrity

On November 6, 2025, the Partnership for Information and Democracy held the fourth and last meeting of its workstream on “Strengthening Information Integrity on Private Messaging…

Published on November 6, 2025

AI and democracy in West Africa: recommendations for Benin, Côte d’Ivoire and Senegal

The Forum on Information and Democracy and its partners in three West African countries are publishing recommendations to ensure that Artificial Intelligence serves the public good…

Published on May 7, 2025

AI Action Summit: making information integrity a priority

As the Summit for AI Action opens this Thursday, February 6, in Paris, the Forum on Information and Democracy reiterates the urgent need to establish regulatory…

Published on February 5, 2025

Paris Peace Forum Event: Exploring the creation of a voluntary certification mechanism for public interest AI

In the framework of the Paris Peace Forum Official Side Event on the Road to AI Summit the Forum on Information and Democracy organized a workshop…

Published on November 25, 2024