The UK government is considering new laws to regulate problematic content online, ranging from terrorist propaganda to fake news. A proposal unveiled on Monday would impose a new “duty of care” on websites hosting user-submitted content. Under the plan, a new UK agency would develop codes of practice explaining how sites should deal with different types of harmful content.
The new proposal follows last month’s mass shooting in Christchurch, New Zealand, which left 50 people dead. After that attack, Australia passed a new law requiring social platforms to quickly remove violent online material or face stiff fines and possibly even jail time. On Monday, a committee of the European Parliament approved a law that would fine online platforms up to 4 percent of their revenue if they fail to take down terrorist content within an hour.
The British proposal is very broad, requiring tech companies to police their platforms for a wide variety of harmful content. Companies could face fines if they don’t remove harmful material quickly.
A 100-page white paper from Theresa May’s government lays out several categories of content that would be governed by the new rules, including child pornography, revenge pornography, cyberstalking, hate crimes, encouragement of suicide, the sale of illegal goods, and sexting by teenagers. The proposal would also try to stop inmates from posting content online in violation of prison rules.
Such a sweeping proposal would be unlikely to pass in the United States, where the First Amendment limits government regulation of online content. But America is unusual; most countries have a narrower conception of free speech that leaves governments greater latitude to regulate content they deem harmful.
However, a big question is how to crack down on harmful content without suppressing the content of legitimate users, and without unfairly burdening the operators of small websites. Broadly, regulators have two options here. They can require online operators to take down content only after they have been notified of its existence, or they can require platforms to proactively monitor the content users upload.
Current UK law, based on the EU E-Commerce Directive, protects online service providers from liability for content unless they have actual knowledge of its existence. But the UK government is rethinking that approach.
“The existing liability regime only forces companies to take action against illegal content once they have been notified of its existence,” the white paper said. In the government’s view, that reactive approach will not be enough.
Instead, the UK government said it was opting for a “comprehensive approach,” requiring tech companies to “ensure they have effective and appropriate processes and controls in place to reduce risk of illegal and harmful activity on their platforms.”
Of course, forcing technology companies to proactively monitor their platforms for objectionable content can create its own problems, leading to the unnecessary removal of legitimate content or compromising user privacy.
UK regulators say there is no need to worry about this. “The regulator will not force companies to conduct general monitoring of all communications on their online services, as this would be an unreasonable burden on companies and would raise concerns about user privacy,” the document said. However, it added, “there is a strong case for mandating specific, targeted surveillance where there is a threat to national security or the physical safety of children.”
Vague by design
If that sounds vague, that’s by design. Rather than spelling out the precise obligations of online service providers in this initial proposal, the government plans to create a new regulatory agency and have it write specific guidelines covering the types of inappropriate content that may appear on tech platforms.
Monday’s publication of the online harms white paper is the first step toward developing these new guidelines. The public now has 12 weeks to comment on the proposal. The government will then take those comments into account as it develops a final legislative proposal.
If something like this proposal becomes law, it could have significant effects beyond the borders of the United Kingdom. The Internet is global, and we can expect the United Kingdom to demand that objectionable content be made inaccessible in the UK regardless of who originally posted it. In principle, major platforms could use geoblocking technology to prevent British users from accessing objectionable content hosted in the United States or elsewhere. But tech companies may decide it’s easier to simply take objectionable content down globally, especially if other jurisdictions pass similar laws.
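For readers unfamiliar with how geoblocking works in practice: platforms typically map a visitor’s IP address to a country code and withhold restricted content for certain countries. The sketch below illustrates the idea in Python. It is a hypothetical, minimal example, not any platform’s actual implementation; real systems use a commercial GeoIP database (such as MaxMind’s), so a hard-coded prefix table stands in here to keep the example self-contained.

```python
# Minimal sketch of IP-based geoblocking (hypothetical example).
# Real platforms resolve IP -> country with a GeoIP database; this
# hard-coded prefix table is a stand-in for illustration only.

BLOCKED_COUNTRIES = {"GB"}  # assume this content is restricted for UK visitors

# Stand-in for a GeoIP lookup: maps /24 IP prefixes to ISO country codes.
FAKE_GEOIP = {
    "81.2.69": "GB",      # documentation/test range often attributed to GB
    "203.0.113": "US",    # TEST-NET-3 range, labeled US here for the example
}

def country_for_ip(ip: str) -> str:
    """Return the ISO country code for an IPv4 address, or 'UNKNOWN'."""
    prefix = ip.rsplit(".", 1)[0]  # drop the last octet to get the /24 prefix
    return FAKE_GEOIP.get(prefix, "UNKNOWN")

def is_blocked(ip: str) -> bool:
    """True if content should be withheld from a visitor at this address."""
    return country_for_ip(ip) in BLOCKED_COUNTRIES
```

A request handler would call `is_blocked()` before serving the content and return an error page for blocked regions; note that visitors can evade such checks with VPNs, which is one reason platforms may prefer global takedowns.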
As a result, America’s free-speech tradition may matter less and less online, as countries with more activist approaches expand their regulation of online content.