
For more than a decade, the spotlight has largely been on the harmful impacts of open social media platforms. Yet the most effective disinformation campaigns now unfold out of public view, on private messaging platforms. This shift exposes a significant gap in how we understand and regulate online information integrity.
Recent evidence sheds light on how private messaging apps are being exploited for political manipulation. A striking example occurred during Brazil’s 2024 municipal elections, where deceptive political content circulated widely through WhatsApp groups. Meanwhile, in Ukraine, Telegram served a dual purpose during wartime: it acted as a lifeline for essential communication while simultaneously channeling Russian disinformation. Similar patterns have been observed in Lebanon and beyond. Despite these pressing issues, existing regulatory frameworks often overlook the challenges posed by private messaging platforms.
A recent report by the Forum on Information and Democracy delves into this regulatory blind spot and proposes a new approach to governing these platforms. Co-chaired by the governments of Luxembourg and Ukraine, and backed by the NYU Stern Center for Business and Human Rights, this comprehensive initiative brought together public authorities, civil society, and academic experts to address the complexities of information integrity in a private messaging context.
The Regulatory Challenge of Private Messaging Platforms
Many existing governance frameworks rest on an outdated distinction between "public" and "private" communication. Messaging platforms were originally designed for one-to-one exchanges or small group conversations. Today, these applications offer broadcast channels, group chats with thousands of participants, business messaging functions, and sophisticated AI-driven capabilities. This evolution allows information to spread broadly while retaining the appearance of private communication.
Regulatory measures addressing disinformation frequently fail to encompass the hybrid nature of messaging services. Most frameworks focus narrowly on clearly illegal content, such as material linked to terrorism or child exploitation, or limit their disinformation policies to platforms designated as “public.” Consequently, private conversations are often excluded from regulations designed to mitigate misinformation risks.
Among twelve jurisdictions surveyed, the UK Online Safety Act (OSA) stands out as one of the few regulatory frameworks that covers "user-to-user services." It imposes duties to tackle foreign interference and misinformation that may cause physical or psychological harm. Yet ambiguities persist, particularly over how user-to-user services with encrypted messaging features can comply without weakening encryption.
First Recommendation: Feature-Based Regulation
One of the primary recommendations in the report is that lawmakers should develop regulations that impose obligations tied to specific platform features rather than categorizing services as entirely “public” or “private.” This reframing allows for a more nuanced approach to regulation that acknowledges the diverse functionalities offered by messaging apps.
For instance, a secure, one-on-one encrypted conversation does not present the same systemic risks as a large, searchable broadcast channel or a mass-forwarding option. Treating them differently isn’t a violation of privacy; instead, it’s essential for creating proportionate governance. An example of this shift is the European Commission’s designation of WhatsApp Channels as a Very Large Online Platform (VLOP) under the Digital Services Act, signaling a step toward recognizing the hybrid nature of these services.
Platforms can further mitigate systemic risks by clearly separating genuinely private features from broadcast and AI-driven functions, improving transparency and giving users more meaningful control over their choices.
Second Recommendation: Protecting Encryption
The report emphasizes the importance of safeguarding encryption in private communications. End-to-end encryption plays a critical role in ensuring privacy, freedom of expression, and security, particularly for vulnerable populations such as journalists, activists, and users in conflict zones. Protecting encryption does not require abandoning oversight altogether: regulatory obligations related to transparency and risk mitigation can be confined to non-encrypted or public-facing functionalities.
In practice, this means regulations should explicitly exclude any obligations that would undermine robust end-to-end encryption in private communications.
Third Recommendation: Strengthening Societal Resilience
Another critical area of focus should be on building societal resilience and empowering users through initiatives that enhance information literacy. Governments are encouraged to support and implement programs aimed at educating users about misinformation and disinformation. Notable examples include Ukraine’s “Filter” project, which combines formal education with fact-checking partnerships, and Ireland’s Media Literacy Ireland Network, which promotes multi-stakeholder coordination among broadcasters, NGOs, and regulators.
Additionally, messaging platforms should invest in in-app features designed to increase user agency and awareness, such as fact-checking tools or “tiplines” that provide users with reliable information sources.