Working Group on Artificial Intelligence and its Implications for the Information and Communication Space

The rapid development of artificial intelligence (AI), including generative AI and potentially artificial general intelligence (AGI), is transforming the global information and communication space at a pace nearly unmatched among recent technological innovations.

Generative AI tools enable anyone to create content with ease. Yet AI can also invent sources, generate misinformation and deepfakes, and amplify the dangers of disinformation and information chaos, placing increasing strain on our democratic institutions.

AI applications are taking crucial decisions in the information space, as the sheer volume of information available and content created exceeds human capacity to consume, sort, moderate and verify it. Currently, it is mainly private enterprises that decide the rules of the game, including which safety and ethical guardrails to implement.

Our democratic institutions must lead the development and implementation of democratic principles and rules to govern the development, deployment and use of all aspects of AI in the information space. Without guidance – including regulatory incentives – from our democratic institutions, developers and deployers of AI models and tools risk undermining the very foundations of our democracies, which are rooted in a credible and legitimate information ecosystem.


The working group has commenced a research agenda that includes gathering broad input from experts around the world in the following three critical areas:

Development and deployment of AI systems

Provide recommendations for putting in place guardrails in the design, development and deployment of AI systems to reduce their risks to the information space, respect the data privacy of AI subjects and intellectual property rights, and promote the transparency, explainability and contestability of AI systems for AI subjects.

Accountability regimes

Provide recommendations for putting in place accountability regimes for the developers, deployers, users, and subjects of AI systems with regards to the outputs generated and decisions taken by AI.

Governance of AI

Provide recommendations on governance options for the deployment, use and monitoring of AI systems.


Rachel Adams


Director, African Observatory on Responsible AI, Research ICT Africa

Linda Bonyo


Founding Director at Africa Law Tech and Founder of the Lawyers Hub

Daniel Innerarity & Marta Cantero


Chair in AI & Democracy at the School of Transnational Governance, European University Institute

Alistair Knott

New Zealand

School of Engineering and Computer Science, Victoria University of Wellington

Syed Nazakat


Media entrepreneur, founder and CEO of DataLEADS

Alice Oh


Professor of Computer Science, Korea Advanced Institute of Science and Technology

Alejandro Pisanty


Professor at the Facultad de Química, National Autonomous University of Mexico

Gabriela Ramos


Assistant Director-General for Social and Human Sciences, UNESCO

Achim Rettinger


Professor, Knowledge Representation Learning, University of Trier

Edward Santow


Industry Professor and Director, Policy and Governance, Human Technology Institute, University of Technology Sydney

Laura Schertel Mendes


Lawyer and Professor of Civil Law at the University of Brasilia and Senior Researcher at Goethe University, Frankfurt am Main

Jonathan Stray


Senior Scientist at the Berkeley Center for Human-Compatible AI

Suzanne Vergnolle


Associate Professor in Technology Law at the Cnam Institute

Claes de Vreese


Professor of Artificial Intelligence and Society, University of Amsterdam
