National laws across the European Union — with the exception of states that have implemented their own digital legislation, such as Germany and France — are very difficult to enforce online. This is because, in the absence of overarching legislation governing the digital space, tech giants implement community standards that may sometimes contradict the laws of the countries in which they operate.
A Lawless Web
It is undeniable that digital platforms and social media networks provide us with essential services. Yet these services are not free, as we all pay for them with our data. As Scott Goodson puts it in an article for Forbes, “If you’re not paying for it … you are the product.”
The subsequent use of data by tech giants and third-parties alike is shrouded in mystery, which Shoshana Zuboff, the author of “The Age of Surveillance Capitalism,” calls “moats of secrecy, indecipherability, and expertise.” While our data might be used in a non-transparent way for machine learning and political campaigning through ad targeting, democracies worldwide are grappling with the fallout.
The consequences of the growing influence of the “wild wild web” are so difficult to address because there is no basic framework for tackling lawlessness online, which until now has been governed mostly by arbitrary guidelines defined by the tech giants themselves. Legal experts, scholars and policymakers have been largely absent from the conversation, as the prevailing narrative until recently has been that any such regulation would represent unhealthy government interference in business and innovation.
The recently announced Facebook Oversight Board on Removing Objectionable Content promises to go some way to remedy this problem, and observers are eager to see its impact in action.
What’s Not Acceptable Offline Is Acceptable Online
The discussion surrounding Holocaust denial illustrates the inconsistencies imposed on states and markets in which unregulated social media platforms operate. The national legislation of many countries, such as Germany, Austria, Spain, Israel, France, Slovakia and the Czech Republic, considers denying the Holocaust a crime. In Slovakia, for example, it is punishable by up to three years in prison.
Yet content on Holocaust denial is widespread on Slovak pages on Facebook despite users reporting it as harmful. Furthermore, the European Court of Human Rights ruled in a landmark 2019 case, Pastörs v. Germany, that Holocaust denial is not protected by free speech.
This understanding is not shared by digital platforms in any straightforward way. In 2018, Mark Zuckerberg, the CEO of Facebook, insisted that content denying the Holocaust should not be taken down from the platform. There is a mismatch between what is acceptable online and offline — in other words, what is illegal offline is not illegal online.
Theoretically, national authorities of the above-mentioned states could try to prosecute social media users for sharing content denying the Holocaust. (Some countries, such as Germany, have adopted legislation to force digital platforms to comply.)
Yet, in reality, that is nearly impossible, as states simply do not have the resources to track every piece of content and prosecute everyone who has shared it. Furthermore, by the time any case is closed, the content would still be online, having undoubtedly been copied and shared far and wide. This also raises an interesting question about state sovereignty and the potential complicity of service providers in criminal behavior, as they defy the national laws of the countries in which they operate.
The Tide Is Turning in the EU
Partial answers on a European level may come with the passing of the Digital Services Act (DSA), which is postponed until the first quarter of 2021 due to the coronavirus pandemic. The recently published draft report on the DSA recommends that “the principle of ‘what is illegal offline is also illegal online,’ as well as the principles of consumer protection and user safety, should also become guiding principles of the future regulatory framework.”
If this becomes a guiding principle of the DSA, digital platforms will no longer be able to tailor their community standards arbitrarily. Instead, social networks like Facebook would have to comply with national and European legislation.
Such a development would be welcomed not only by those who care about the quality of democracy in the digital age, but also by digital platforms themselves. For years, social media networks have faced intense criticism and scrutiny for haphazard policy decisions that, in some places, have had devastating consequences.
In 2019, Zuckerberg gave his two cents about making rules for the internet and who should be responsible for doing so. “Every day, we make decisions about what speech is harmful, what constitutes political advertising, and how to prevent sophisticated cyberattacks,” he wrote in an op-ed for The Washington Post. “These are important for keeping our community safe. But if we were starting from scratch, we wouldn’t ask companies to make these judgments alone.” He further reiterated this point in a recent video conference with Thierry Breton, the EU commissioner for the internal market, asking for European leadership on platform regulation.
Zuckerberg is right. Policy frameworks and regulations are not the main areas of expertise of tech companies, nor should they be. Such efforts should be led by national and international institutions in cooperation with tech companies, civil society actors and researchers to ensure that any future framework strikes the right balance between stakeholders’ diverse interests. With the EU single market increasingly fragmented by new digital laws at the national level, and with hate speech prevalent online, time is of the essence.
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.