From Platforms to Protocols: The Problem With Decentralized Social Media
Originally published on my Substack.
Building social media is a human problem before it is a technological one. A decentralized architecture introduces new problems without solving the old ones.
The Spotlight on Platform Protocols
Social media is having a moment, with some big players building alternatives to lure Twitter-quitters. Among them are Jack Dorsey’s Bluesky, Meta’s new Twitter competitor (codenamed Project 92), and Mastodon, which now has 12M users.
There’s one thing these platforms have in common: they are all building on top of open protocols intended to make social media decentralized and interoperable. Bluesky has created its AT Protocol, while Mastodon and Meta are relying on ActivityPub, an older, W3C-standardized protocol. Then there are Nostr, which calls itself ‘censorship-resistant’, Farcaster, and others.
The discussion about social media seems to be shifting from how to build platforms to how to design protocols.
In theory, open protocols and interoperable platforms are a good thing. They break down the walled gardens of closed platforms and reduce the chance that any one platform will monopolize social media or exclusively control user data.
Decentralized social networking can, in simple terms, be thought of as separating the user interface from the underlying data (The Verge has a useful explainer piece on this). Joining a new network doesn’t require rebuilding your social graph from scratch, and you can communicate with your connections on any platform. Email is a good example of an everyday technology that works on this principle: your friends can email you even though you use Gmail and they use Yahoo Mail.
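To make that separation concrete, here is a minimal sketch in TypeScript. Nothing in it is a real protocol’s schema; the Post type, the handle format and the render functions are purely illustrative of the idea that the data can live on one server while any client, anywhere, renders it.

```typescript
// Hypothetical sketch: the data (a post with a globally addressable author)
// is independent of the client that renders it, much like email.

type FederatedHandle = `${string}@${string}`; // e.g. "alice@server-a.example"

interface Post {
  author: FederatedHandle; // who wrote it, and which server their data lives on
  content: string;
  publishedAt: string;     // ISO 8601 timestamp
}

// Two different "apps" (user interfaces) rendering the same underlying data.
function renderAsTimelineCard(post: Post): string {
  return `${post.author}: ${post.content}`;
}

function renderAsNotification(post: Post): string {
  return `New post from ${post.author} at ${post.publishedAt}`;
}

const post: Post = {
  author: "alice@server-a.example",
  content: "Hello from my home server!",
  publishedAt: "2023-06-20T12:00:00Z",
};

// A user on an entirely different server can read this post in whichever client they prefer.
console.log(renderAsTimelineCard(post));
console.log(renderAsNotification(post));
```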
Who could argue against the benefit of an open, social internet? After all, the end goal is noble: to make the social web bigger than any single company.
But there’s a problem. This shift from platforms to protocols ignores and defers the fundamental issue of safety from online harms (such as disinformation and abuse), which is a prerequisite for engagement.
The Fediverse Delegates Safety to Users
If we want a new digital square at Twitter’s scale, one that keeps all users safe, elevates the quality and reliability of content, and creates healthier online norms, then open protocols do not get us closer to that goal. The Fediverse is, in fact, more vulnerable to many of the issues social media incumbents are already struggling with.
The biggest danger is that moderation and verification are much harder with decentralized protocols, which means we are prioritizing protocols over safety.
Several experts have spoken about how moderation will be the Fediverse’s thorniest problem, because it is significantly harder to design safety tools and moderation for a distributed architecture that does not allow for platform-wide actions or policies. But moderation is not just a challenge. It is the foundation upon which any platform with user-generated content operates.
Bluesky opened its beta and had over 60K users without a block button. When users asked why this had not been prioritized, the CEO posted this.
Platform safety doesn’t just happen: it is proactively created by policies, design, architecture, norms and culture. Trust and safety experts, social media researchers and critical internet studies scholars have, over decades, built knowledge and evidence of what works. This is lost when a platform delegates safety to volunteer moderators, thereby reducing platform safety to the removal of posts, the banning of users, or whatever other actions moderators are authorized to take.
Compare this to the methods available to a platform that sees safety as its responsibility and priority: granular, customizable built-in safety tools, architectural and algorithmic decisions, UI/UX signposting, incentive and disincentive structures, and the setting of online norms, all rooted in domain expertise.
Bluesky delegates both content curation and moderation—the two key factors that define safety on social platforms—to users. As the company’s CEO stated, it is “designed for people to be able to create separate instances with different approaches to moderation”.
In other words, if you don’t like the experience of being on an instance, spin off your own server and post there. Create your own moderation and curation. No global rules. No platform-wide actions or tools. No hierarchy of accountability.
Giving users control to shape their own experience is desirable in principle, but Bluesky has executed it with no clear goal or thesis in mind, and without the prerequisite foundation of safety.
This hands-off approach led to Black users on Bluesky facing harassment before the platform hit the 100K user mark. In a recent TechCrunch piece, internet culture reporter Morgan Sung questions the viability of federation as a solution to bigotry and toxicity on platforms:
Though there are benefits to that level of independence, the approach to community-led moderation is often optimistic at best, and negligent at worst. Platforms can absolve themselves of the burden of moderation — which is labor intensive, costly and always divisive — by letting users take the wheel instead.
While the wisdom of crowds sounds like a pleasant concept, we have no examples of it working well, in practice and at scale, for moderating sensitive content or protecting frequently targeted groups. We must consider who is disadvantaged when we place the burden of self-protection on vulnerable users yet again.
A Bluesky post calls this style of moderation a form of techno-optimism, whereby we continue to throw more technology at human and social problems.
While it sounds great that a user can move to a new server ‘instance’ with their relationships and data intact, what is data portability worth if those instances are not safe by design? If decentralization makes it harder to achieve safety from harassment and misinformation, should it be the priority?
And then there are verification and discovery: these protocols offer no universal system for either. How do you find out where the NYT is on Mastodon? There is no good way. You have to guess, or already know where on the Fediverse it sits and which account is the “real” one.
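For a sense of how discovery works today: Mastodon accounts can be looked up with the WebFinger standard (RFC 7033), but only if you already know the exact username and the server it lives on. A minimal sketch follows, assuming a modern runtime with a global fetch; the handle below is just a guess and may resolve to nothing, which is exactly the problem.

```typescript
// Sketch of account discovery on the Fediverse via WebFinger (RFC 7033).
// Note the catch: you must already know both the username and the server.

async function lookupFediverseAccount(handle: string): Promise<unknown> {
  const [user, host] = handle.split("@");
  if (!user || !host) throw new Error(`Expected a user@host handle, got "${handle}"`);

  const url = `https://${host}/.well-known/webfinger?resource=acct:${user}@${host}`;
  const response = await fetch(url);
  if (!response.ok) throw new Error(`WebFinger lookup failed: ${response.status}`);

  return response.json(); // JSON Resource Descriptor: subject, aliases, links, ...
}

// There is no global index: guessing the wrong server or the wrong handle simply
// returns nothing useful, and nothing tells you which account is the "real" one.
lookupFediverseAccount("nytimes@mastodon.social")
  .then(console.log)
  .catch(console.error);
```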
Fediverse Moderation Is a Costly Job
If the experience of Mastodon moderators is anything to go by, moderation becomes increasingly time-consuming as more users join. Moderation decisions, feature requests, dispute resolution, the availability and reliability of moderators: all of this demands a significant investment of time, resources and even money. Your server will also be hosting media from the servers with which you are federated, so as you gain users, the demand for file storage on your server grows as well.
If next-gen social media platforms live on the Fediverse, who will have the time and financial resources to invest in moderation? Who are we ultimately benefiting and empowering?
There is a valid concern that companies building decentralized architecture are creating the technical tools for bad-faith actors and extremist groups to organize more easily. This already played out in 2019, when Gab, a social network frequently used by neo-Nazi terror groups, migrated to Mastodon and became its largest node. When protests followed, Mastodon’s founder told reporters that his hands were tied. “You have to understand it’s not actually possible to do anything platform-wide because it’s decentralized,” he said at the time. “I don’t have the control.”
We Still Need the Global Digital Square
At best, decentralized protocols could help build smaller Reddit-style sub-communities with their own moderation that runs effectively for some period of time. Indeed, this is the vision that some platforms have launched with.
But, as I wrote in an earlier Substack piece, scale still matters for certain critical use cases and contexts. The unique power of Twitter—and the reason it became the site for social movements like MeToo and BLM when other platforms couldn’t—was in its reachability and serendipity, in being a global, open digital square where anyone could be heard and amplified, in being a hotline to celebrities, politicians and influencers, in giving journalists access to audiences they couldn’t otherwise reach. A closed sub-community does not have this power.
In other words, you still need a global information channel.
Furthermore, if Reddit and Mastodon are two examples of sub-communities with decentralized moderation, it is telling that the former is among the most racist and sexist social platforms ever to have existed (one only has to look at the Wikipedia entry on ‘controversial’ Reddit communities to get a sense of the horror), and that the latter, in its short lifespan, has already seen major moderation challenges outlined by users who run instances.
This week’s Reddit fiasco over API pricing also shows how the idea of community moderation is attractive in principle—but in reality leaves neither the community nor the platform in control of policy and vision.
Decentralization Puts Targeted Users at a Bigger Disadvantage
Users targeted at higher volumes and with greater ferocity because of their identities or professions—people of color, women, LGBTQ+ people, journalists, activists and others—have consistently voiced how current platforms protect harassers instead of the harassed.
When platforms are not proactive about safety, they make abusive behavior easier than reporting or avoiding it. Targeted groups currently spend large amounts of time blocking, reporting, sifting through or otherwise dealing with the attacks that come their way. They are already at a disadvantage compared to users who can spend that same time creating content, building audiences, and feeling psychologically safe, supported and welcome.
Federation increases the load on targeted users by passing yet more responsibility to the community to make decisions, take action and create rules to stay safe. Even the insufficient, centralized moderation of legacy platforms is taken away.
This disproportionate load has received little attention in discussions about the viability of decentralized protocols.
What Are We Solving For?
The question we should be asking is: what are we solving for when we build a federated social architecture? If it’s data portability and privacy, what is that data worth if we can’t build safe online spaces for users to inhabit?
Protocols are not the same as platforms. Is decentralized architecture well-suited for creating safe spaces online? Building healthier digital norms? Designing user safety? Reducing disinformation in the network?
Are these protocols doing anything to counter known online harms, or are they asking us to lean further into the myth of platform neutrality? As Princeton CS professor Arvind Narayanan lays out in this important piece on social media recommendation algorithms, when we build technology intended to be ‘neutral’, it almost never is. Instead, it rewards people who benefit from social biases or who have figured out how to hack engagement, leading to a ‘rich get richer’ ecosystem.
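As a toy illustration of that dynamic, and nothing more: the sketch below is not any platform’s actual ranking algorithm, and every number in it is made up. It simply shows that when exposure follows current engagement, a post that starts with a tiny edge ends up far ahead.

```typescript
// Toy simulation of position bias in an engagement-ranked feed (all numbers
// hypothetical). The post ranked first gets the most impressions, which earns
// it more engagement, which keeps it ranked first.

function simulateFeedRanking(rounds: number): number[] {
  const engagement = [10, 9, 8];          // three posts with nearly identical starts
  const exposureByRank = [0.6, 0.3, 0.1]; // share of impressions by feed position
  const impressionsPerRound = 1000;
  const clickThroughRate = 0.05;

  for (let round = 0; round < rounds; round++) {
    // Rank posts by current engagement (highest first).
    const ranked = engagement
      .map((value, index) => ({ value, index }))
      .sort((a, b) => b.value - a.value);

    // Exposure follows rank, so the current leader compounds its lead.
    ranked.forEach((post, rank) => {
      const impressions = impressionsPerRound * exposureByRank[rank];
      engagement[post.index] += impressions * clickThroughRate;
    });
  }
  return engagement.map(Math.round);
}

// The post that started with a marginal edge ends up far ahead of the others.
console.log(simulateFeedRanking(50));
```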
What social media needs is a human thesis—an opinion on the type of content and user that should thrive on the platform—and a design built around that thesis (I’ll write a future Substack post on this).
A decentralized architecture has many benefits and use cases in social spaces, but it’s important to weigh the impact on existing online harms before we decide that building a Twitter replacement should be one of them.