The internet — arguably the greatest invention in human history — has gone awry. We can all feel it. It is harder than ever to tell if we are engaging with friends or foes (or bots), we know we are being constantly surveilled in the name of better ad conversion, and we live in constant fear of clicking something and being defrauded.
The failures of the internet largely stem from the refusal of large tech monopolies, particularly Google and Facebook, to verify and protect our identities. Why don't they?
The answer is that they have no incentive to do so. In fact, the status quo suits them, thanks to Section 230 of the Communications Decency Act, passed by the United States Congress in 1996.
But things may be about to change. This term, the Supreme Court will hear Gonzalez v. Google, a case that has the potential to reshape or even eliminate Section 230. It is hard to envision an outcome along those lines that would not kill the social media platforms we use today. That would present a golden opportunity for blockchain technology to replace them.
How did we get here?
A key facilitator of the internet’s early development, Section 230 states that web platforms are not legally liable for content posted by their users. As a result, social media networks like Facebook and Twitter are free to publish (and profit from) anything their users post.
The plaintiff in the case now before the court believes internet platforms bear responsibility for the death of his daughter, who was killed by Islamic State-affiliated attackers in a Paris restaurant in 2015. He believes algorithms developed by YouTube and its parent company Google “recommended ISIS videos to users,” thereby driving the terrorist organization’s recruitment and ultimately facilitating the Paris attack.
Section 230 gives YouTube a lot of cover. If a user posts defamatory or, as in the case above, violent content, the platform can serve it to huge audiences before any action is taken. A lot of damage can be done while the platform determines whether the content violates the law or its terms of service. But Section 230 shields the platform.
Imagine YouTube after Section 230 is struck down. Does it have to put the 500 hours of content uploaded every minute into a review queue before any other human is allowed to watch them? That wouldn't scale, and it would strip away much of the immediacy that makes the site attractive. Or would it keep publishing content as it does now and simply assume legal liability for every incitement to violence or defamatory word uttered in one of its billions of videos?
Once you pull the Section 230 thread, platforms like YouTube start to unravel quickly.
Global implications for the future of social media
The case turns on a U.S. law, but the issues it raises are global. Other countries are also grappling with how best to regulate internet platforms, particularly social media. France recently required manufacturers to install easily accessible parental controls on all computers and devices and outlawed the collection of minors' data for commercial purposes. In the United Kingdom, a coroner formally found that Instagram's algorithm contributed to the death of a teenage girl.
Then there are the world’s authoritarian regimes, whose governments are intensifying censorship and manipulation efforts by leveraging armies of trolls and bots to sow disinformation and mistrust. The lack of any workable form of ID verification for the vast majority of social media accounts makes this situation not just possible but inevitable.
And the beneficiaries of a post-Section 230 economy may not be who you'd expect. Far more individuals would bring suits against the major tech platforms. In a world where social media companies could be held legally liable for content posted on their platforms, they would need to assemble armies of editors and content moderators to review every image and word posted to their sites. Given the volume of content posted to social media in recent decades, the task seems almost impossible; the likely winners would be traditional media organizations.
Looking a little further out, Section 230's demise would completely upend the business models that have driven the growth of social media. Platforms would suddenly be liable for an almost limitless supply of user-made content even as ever-stronger privacy laws squeeze their ability to collect massive amounts of user data. It would require a total re-engineering of the social media concept.
Many misunderstand platforms like Twitter and Facebook. They think the software they use to log in to those platforms, post content, and see content from their network is the product. It is not. The moderation is the product. And if the Supreme Court overturns Section 230, that completely changes the products we think of as social media.
This is a tremendous opportunity.
In 1996, the internet consisted of a relatively small number of static websites and message boards. It was impossible to predict that its growth would one day cause people to question the very concepts of freedom and safety.
People have fundamental rights in their digital activities just as much as in their physical ones — including privacy. At the same time, the common good demands some mechanism to sort facts from misinformation, and honest people from scammers, in the public sphere. Today’s internet meets neither of these needs.
Some argue, either openly or implicitly, that a saner and healthier digital future requires hard tradeoffs between privacy and security. But if we’re ambitious and intentional in our efforts, we can achieve both.
Blockchains make it possible to protect and prove our identities at the same time. Zero-knowledge technology means we can verify a claim, such as a person's age or professional qualifications, without revealing any of the underlying data. Soulbound Tokens (SBTs), Decentralized Identifiers (DIDs) and some forms of nonfungible tokens (NFTs) will soon enable a person to carry a single, cryptographically provable identity across any digital platform, current or future.
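To make the idea concrete, here is a minimal sketch, in TypeScript, of the selective-disclosure flow these standards aim to support. It is not a real zero-knowledge proof, and the names (issueAgeCredential, did:example:alice) are hypothetical: an issuer privately checks a birth year, signs only the derived claim "over 18" bound to the holder's DID, and a platform verifies the signature without ever receiving the birthdate.

```typescript
// Minimal sketch of selective disclosure with a signed credential.
// This is NOT a zero-knowledge proof; it only illustrates the flow:
// a verifier checks one claim ("over 18") without ever seeing a birthdate.
import { generateKeyPairSync, sign, verify } from "node:crypto";

// Hypothetical trusted issuer (e.g., an ID authority) with an Ed25519 key pair.
const issuer = generateKeyPairSync("ed25519");

interface AgeCredential {
  subjectDid: string; // the holder's decentralized identifier
  claim: "over18";    // the only attribute that is disclosed
  issuedAt: string;
  signature: string;  // issuer's signature over the payload, base64-encoded
}

// The issuer checks the birth year privately, then signs only the derived claim.
function issueAgeCredential(subjectDid: string, birthYear: number): AgeCredential | null {
  if (new Date().getFullYear() - birthYear < 18) return null;
  const issuedAt = new Date().toISOString();
  const payload = JSON.stringify({ subjectDid, claim: "over18", issuedAt });
  const signature = sign(null, Buffer.from(payload), issuer.privateKey).toString("base64");
  return { subjectDid, claim: "over18", issuedAt, signature };
}

// A platform verifies the issuer's signature; the birthdate never leaves the issuer.
function verifyAgeCredential(cred: AgeCredential): boolean {
  const payload = JSON.stringify({ subjectDid: cred.subjectDid, claim: cred.claim, issuedAt: cred.issuedAt });
  return verify(null, Buffer.from(payload), issuer.publicKey, Buffer.from(cred.signature, "base64"));
}

const credential = issueAgeCredential("did:example:alice", 1990);
console.log(credential !== null && verifyAgeCredential(credential)); // true
```

In a production system, the issuer's key would be anchored in a DID registry and the presentation would use an actual zero-knowledge proof or verifiable-credential standard, but the principle is the same: prove the claim, not the data.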
This is good for us all, whether in our work, personal, or family lives. Schools and social media will be safer places, adult content can be reliably age-restricted, and deliberate misinformation will be easier to trace.
The end of Section 230 would be an earthquake. But if we adopt a constructive approach, it can also be a golden chance to improve the internet we know and love. With our identities established and cryptographically verifiable on-chain, we can better show who we are, where we stand, and whom we can trust.
Nick Dazé is the co-founder and CEO of Heirloom, a company dedicated to providing no-code tools that help brands create safe environments for their customers online through blockchain technology. Dazé also co-founded PocketList and was an early team member at Faraday Future ($FFIE), Fullscreen (acquired by AT&T) and Bit Kitchen (acquired by Medium).
This article is for general information purposes and is not intended to be and should not be taken as legal or investment advice. The views, thoughts, and opinions expressed here are the author’s alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.