Britain plans a social media regulator to combat harmful content
A week after Australia introduced unprecedentedly broad social media legislation, the UK is considering something similar.
A new UK government white paper, released on Monday, seeks to address online content that “threatens our way of life in the UK, either by undermining national security, or by reducing trust and undermining our shared rights.”
The white paper goes on to suggest creating a government regulator that would impose a duty of care on internet platforms, making them responsible for restricting “behaviours which are harmful but not necessarily illegal”. Failure to do so could lead to fines, criminal charges or restrictions.
The proposed policy has already drawn criticism for being too vague in what it designates as harmful content. “Trolling,” for example, is listed as harmful but is not defined.
The new regulator could also place disproportionate burdens on small organizations. Big companies like Facebook already have large moderation infrastructures in place, making it easier for them to adjust to new requirements, while smaller companies might struggle. Dom Hallas, the executive director of the startup-focused Coalition for a Digital Economy, told the Guardian that more regulation “is in Facebook’s business interest.” Indeed, Mark Zuckerberg called for more social media regulation late last month.
The white paper does anticipate this problem, promising that “we will minimise excessive burdens according to the size and resources of organisations.”
The UK government’s white paper comes after a national outcry around a teenager’s suicide in 2017. Her father has said that disturbing content on Instagram “helped kill my daughter.”
But the proposal is also part of a growing international trend. In addition to Australia, Singapore introduced an online content bill earlier this month, and Vladimir Putin signed major online content legislation in March.