Skepticism over Ofcom’s promise to use its new internet powers lightly
Ofcom, Britain’s newly empowered online safety regulator, has set out its plans for its first 100 days in the role, telling tech companies they should start preparing for the new online safety rules now.
In a statement on Thursday, Ofcom said it would “not censor online content” because the bill does not give it the power to moderate content or to respond to complaints from individuals about individual pieces of content. It added that it would require the “biggest and riskiest companies” to be transparent and consistent about how they “deal with legal but harmful material” viewed by adults.
But free speech advocates still have major concerns about censorship further down the line, owing to Ofcom’s oversight role.
“You should do more censorship”
“So they’re not actually censoring individual posts, that’s absolutely right, but nonetheless what they’re doing is telling these providers, ‘We think you should do more censorship,’” Andrew Tettenborn, a scholar of common law and continental jurisdictions and an adviser to the Free Speech Union, told The Epoch Times.
“The internet in Britain will look rather tamer; it will only affect people who haven’t found the time to get a VPN,” Tettenborn added.
“The government recognises, and we agree, that the sheer volume of online content would make this impractical. Rather than focusing on the symptoms of online harm, we will tackle the causes by ensuring companies design their services with safety in mind from the start,” Ofcom said.
The upcoming Online Safety Bill (pdf) aims to “protect children from harmful content such as pornography and limit people’s exposure to illegal content, while protecting freedom of expression”.
To help it do this, the government has announced that it is giving Ofcom new responsibilities and powers, along with a wide range of compliance tools, fines, and penalties, so that it can become the regulator for online harms. Ofcom already has experience in this field through its role regulating telecommunications and broadcasting (TV, radio, and video-on-demand services).
“It is not Ofcom’s job to adjudicate”
Ofcom has said it expects the Online Safety Bill to pass by early 2023 at the latest, with its powers taking effect two months later. It released a roadmap asking tech companies to start preparing now for the new online safety rules.
Mark Bunting, Ofcom’s director of online safety policy, told The Telegraph on Thursday that the regulator “will not censor online content,” adding: “I don’t think we should expect companies to be able to completely eliminate hate speech, at least not without very significant unintended consequences for freedom of expression.”
“It’s not Ofcom’s job to comment on particular pieces of content,” he said. “Consumers can complain to us, and we can bring it to the attention of businesses, but we can’t ask them to remove individual pieces of content.
“The second aspect of not being a censor relates to the so-called legal but harmful parts of the regime. It is really important that services understand that it is not Ofcom’s role to dictate what they can and cannot host in terms of legal material,” he added.
“What services need to do is recognise that there may be risks associated with legal content, take appropriate steps to address those risks, and then clearly state in their terms of service the steps they have taken,” Bunting said.
Victoria Hewson, head of regulatory affairs at the free-market think tank the Institute of Economic Affairs, explored the subject and the risks of the bill’s unintended consequences in a report titled “An Unsafe Bill: How the Online Safety Bill Threatens Freedom of Expression, Innovation, and Privacy.”
“That mantra that it’s about systems and processes, rather than individual pieces of content, has really never held true, I think, because how are you going to know whether a system or process is working effectively, as the regulator would see it, except by reference to how it handles individual content items?” she told The Epoch Times.
In her report, Hewson noted that “a likely result is that those who are easily offended or act in bad faith will get deletions by claiming material is intentionally false or psychologically distressing to a ‘likely audience.’”
“This places the burden on the platform to remove it or risk failing to comply with its duty, as well as potential fines and other sanctions from Ofcom,” she added.
Hewson questioned Ofcom’s assertion that it is not going to censor content.
“They might not take regulatory action just because a particular piece of content slips through the net, but they will judge compliance by reference to individual pieces of content in the round, so they will clearly be looking at individual pieces of content to judge systems and processes,” Hewson said.
“As to whether they would impose fines for an individual piece of content as opposed to a systemic violation, I don’t think that makes much difference to platform incentives, because platforms know that if they routinely allow, quote, ‘illegal content’ to be encountered on their platforms, then they will face all the penalties and liabilities. This largely comes down to making judgments about individual pieces of content,” she added.
The Epoch Times has contacted Ofcom for comment.