
UK plans social media and internet watchdog

Online harms white paper

Internet sites could be fined or blocked if they fail to tackle “online harms” such as terrorist propaganda and child abuse, under government plans.

The Department for Digital, Culture, Media and Sport (DCMS) has proposed an independent watchdog that will write a “code of practice” for tech companies.

Senior managers could be held liable for breaches, with a possible levy on the industry to fund the regulator.

But critics say the plans threaten freedom of speech.

The Online Harms White Paper is a joint proposal from the DCMS and the Home Office. A public consultation on the plans will run for 12 weeks.

The paper suggests:

  • establishing an independent regulator that can write a “code of practice” for social networks and internet companies
  • giving the regulator enforcement powers including the ability to fine companies that break the rules
  • considering additional enforcement powers such as the ability to fine company executives and force internet service providers to block sites that break the rules

Outlining the proposals, Culture Secretary Jeremy Wright said: “The era of self-regulation for online companies is over.

“Voluntary actions from industry to tackle online harms have not been applied consistently or gone far enough.”

Discussing potential penalties on BBC Breakfast, he said: “If you look at the fines available to the Information Commissioner around the GDPR rules, that could be up to 4% of a company’s turnover… we think we should be looking at something comparable here.”

What are ‘online harms’?

The plans cover a range of issues that are clearly defined in law, such as spreading terrorist content, child sex abuse, so-called revenge pornography, hate crimes, harassment and the sale of illegal goods.

But the paper also covers harmful behaviour with a less clear legal definition, such as cyber-bullying, trolling and the spread of fake news and disinformation.

It says social networks must tackle material that advocates self-harm and suicide, which became a prominent issue after 14-year-old Molly Russell took her own life in 2017.

After she died her family found distressing material about depression and suicide on her Instagram account. Molly’s father holds the social media giant partly responsible for her death.

Home Secretary Sajid Javid said tech giants and social media companies had a moral duty “to protect the young people they profit from”.

“Despite our repeated calls to action, harmful and illegal content – including child abuse and terrorism – is still too readily available online.”

What do the proposals say?

The plans call for an independent regulator to hold internet companies to account.

It would be funded by the tech industry. The government has not decided whether a new body will be established, or an existing one handed new powers.

The regulator will define a “code of best practice” that social networks and internet companies must adhere to.

As well as Facebook, Twitter and Google, the rules would apply to messaging services such as Snapchat and cloud storage services.

The regulator will have the power to fine companies and publish notices naming and shaming those that break the rules.

The government says it is also considering fines for individual company executives and making search engines remove links to offending websites.

It is also consulting over blocking harmful websites.

On the face of it, this is a tough new regime – and ministers have acted upon the demands of charities like the NSPCC, which want what they regard as the “Wild West Web” to be tamed.

But a closer look reveals all sorts of issues yet to be settled.

Will a whole new organisation be given the huge job of regulating the internet? Or will the job be handed to the media regulator Ofcom?

What sort of sanctions will be available to the regulator? And will they apply equally to giant social networks and to small organisations such as parents’ message boards?

Most tricky of all is how the regulator is going to rule on material that is not illegal but may still be considered harmful.

Take this example. Misinformation is listed as a potential harm, and Health Secretary Matt Hancock has talked about the damaging effects anti-vaccination campaigners have had.

So will the regulator tell companies that their duty of care means they must remove such material?

The government now plans to consult on its proposals. It may yet find that its twin aims of making the UK both the safest place in the world online and the best to start a digital business are mutually incompatible.

What will the ‘code of practice’ contain?

The white paper offers some suggestions that could be included in the code of best practice.

It suggests the spread of fake news could be tackled by forcing social networks to employ fact-checkers and promote legitimate news sources.

But the regulator will be allowed to define the code by itself.

The white paper also says social media companies should produce annual reports revealing how much harmful content has been found on their platforms.

The children’s charity NSPCC has been urging new regulation since 2017 and has repeatedly called for a legal duty of care to be placed on social networks.

A spokeswoman said: “Time’s up for the social networks. They’ve failed to police themselves and our children have paid the price.”

How have the social networks reacted?

Rebecca Stimson, Facebook’s head of UK policy, said in a statement: “New regulations are needed so that we have a standardised approach across platforms and private companies aren’t making so many important decisions alone.

“New rules for the internet should protect society from harm while also supporting innovation, the digital economy and freedom of speech.”

Twitter’s head of UK public policy Katy Minshall said in a statement: “We look forward to engaging in the next steps of the process, and working to strike an appropriate balance between keeping users safe and preserving the open, free nature of the internet.”

TechUK, an umbrella group representing the UK’s technology industry, said the government must be “clear about how trade-offs are balanced between harm prevention and fundamental rights”.

Jim Killock, executive director of Open Rights Group, said the government’s proposals would “create state regulation of the speech of millions of British citizens”.

Matthew Lesh, head of research at free market think tank the Adam Smith Institute, went further.

He said: “The government should be ashamed of themselves for leading the western world in internet censorship.

“The proposals are a historic attack on freedom of speech and the free press.

“At a time when Britain is criticising violations of freedom of expression in states like Iran, China and Russia, we should not be undermining our freedom at home.”

And freedom of speech campaigners Article 19 warned that the government “must not create an environment that encourages the censorship of legitimate expression”.

A spokesman said it opposed any duty of care being imposed on internet platforms.

They said that would “inevitably require them to proactively monitor their networks and take a restrictive approach to content removal”.

“Such actions could violate individuals’ rights to freedom of expression and privacy,” they added.

The BBC has a digital guide to life online for parents and young people: BBC Own It
