By opting out of sharing content via X (formerly Twitter) or Meta platforms, I am making a deliberate choice based on the growing body of evidence that these infrastructures are no longer simply neutral conduits for information. Instead, they have become active participants in the deliberate degradation of digital discourse and the promotion of specific, often illiberal, political agendas.
The erosion of digital discourse
Recent scholarship highlights a significant shift in how these platforms manage public conversation. The transformation of Twitter into X, followed by Meta's policy realignments in 2025, represents a move toward what researchers call "platform illiberalism": a system in which the rules of speech are governed not by transparent community standards but by the whims of individual owners or corporate interests seeking political favour.
Studies published in New Media & Society (2026) indicate that X has moved away from traditional content moderation toward a model that prioritises "engagement metrics" over accuracy, a shift that frequently boosts antidemocratic attitudes and partisan animosity. Research conducted by teams at Stanford and Northeastern University in late 2025 demonstrated that even subtle algorithmic changes on X can shift a user's feelings toward political opponents as much in a single week as they would historically have shifted over three years. By providing content to these platforms, we inadvertently fuel the algorithms that drive this radicalisation.
Political alignment as corporate strategy
Choosing to use these platforms is increasingly viewed as an act of political support. In early 2025, Meta made high-profile decisions to dismantle independent fact-checking and remove long-standing content restrictions on sensitive topics. Academic analysis suggests these moves were not neutral “pro-speech” updates, but proactive attempts to align the company with specific political administrations to protect commercial interests.
When we share through these channels, we:
- Subsidise algorithmic harms: Our data and content become the “fuel” for AI systems that have been shown to disproportionately silence minoritised voices while amplifying hateful rhetoric.
- Validate platform monopolies: Continued use reinforces the market dominance of companies that have effectively dissolved the line between democratic discourse and private greed.
- Support the “masculinisation” of Big Tech: Scholars have noted that the current leadership styles of these companies often normalise exclusionary and patriarchal power structures, directly impacting the safety of the digital public square.
To share on these platforms is, at least for me, to accept a digital environment where "truth" is secondary to "virality". I believe that maintaining a healthy digital culture requires moving to decentralised or more ethically governed spaces that do not profit from the fragmentation of society.
Bibliography
Fisher, Alice, et al. "The High-Choice Paradox: Social Media as Gateway and Echo Chamber." Journal of Digital Media 14, no. 2 (2025): 112–134.
Jia, Chenyan, et al. "Reranking Partisan Animosity in Algorithmic Social Media Feeds Alters Affective Polarization." Science 390, no. 6778 (2025): 450–458.
Magalhães, João C. "Platform Illiberalism and the New Regimes of Social Media Governance." New Media & Society 28, no. 3 (2026): 567–589.
Olson, P. "Digital Resilience: Beyond the Meta-X Duopoly." Technology in Society 42, no. 1 (2025): 15–32.
Zuckerberg, Mark. "The State We're In: Meta and the Entanglement of Discourse, Greed and Political Self-Interest." University of Birmingham Research Report, 2025, 1–12.
