
Generative artificial intelligence is revolutionizing the production and distribution of information. Anyone can now create a manipulated image or draft a convincing false text. The internet is already flooded with fully generated “news sites” and deepfakes. Even absent malicious intent, generative artificial intelligence is not flawless: it hallucinates, spreading false information and potentially damaging media outlets’ reputations.
It is becoming increasingly difficult to distinguish trustworthy from untrustworthy content, and artificially fabricated from human-created content, deepening the crisis of trust. News media are struggling to survive economically in this context: they compete against artificial content for attention and advertising revenue. As with social media platforms, the large AI companies operate with little transparency or accountability, even as they transform our global information space.
At the same time, AI offers opportunities to transform public interest media and access to reliable information. Realizing them requires reliable, transparent and accountable systems, as well as the capacity to use them adequately.
This Policy Manual highlights how the current digital information ecosystem — dominated by Big Tech platforms (very large social media and search engines, and increasingly also AI companies) — has become increasingly captured in ways that undermine media freedom. It underscores the need for democratic state intervention, based on the rule of law, to ensure an enabling environment for independent and pluralistic journalism.
The Manual offers a vision for healthy online information spaces, where the availability and accessibility of public interest information are ensured. It puts forward mitigation measures and key recommendations for States to implement long-term structural reforms and sustained investments to address the distortions in today’s online information ecosystem.
The recommended mitigation measures cover three key areas:
• Visibility of journalism and public interest information online
• Media viability and funding models that support public interest information
• Vigilance, or the online safety of journalists
The core of this Policy Manual lies in the guidance it provides on how to enable healthy information spaces online by freeing the ecosystem from heavily concentrated gatekeeping power, and instead fostering an enabling environment for media freedom in the algorithmic and artificial intelligence (AI) era.
It concludes that for media freedom to be safeguarded, addressing platform-related challenges alone is not sufficient. Instead, it calls for more ambitious structural reforms — to move beyond merely mitigating media dependency and towards building an independent, pluralistic online information and media landscape that can sustain democratic debate and societal resilience.
This publication is part of the project “Healthy Online Information Spaces – SAIFE Renewed”. It was produced in collaboration with the Forum on Information and Democracy.
Since 2023, a legislative trend towards regulating artificial intelligence has emerged in the Central American region and the Dominican Republic. Bills have been presented in three countries, Costa Rica, Panama and the Dominican Republic, and remain under discussion in their respective legislatures. A model law proposed by the Latin American and Caribbean Parliament also promotes AI regulation. Yet progress is slow, and data protection laws are too outdated to address the new challenges posed by AI.
This analysis was developed in cooperation with IPANDETEC.
On November 6, 2025, the Partnership for Information and Democracy held the fourth and last meeting of its workstream on “Strengthening Information Integrity on Private Messaging…