Britain just laid out plans to end the internet's Wild West days and take a world-leading role in regulating big tech
- Britain wants to end the internet's days as the Wild West by taking a world-leading role in regulating the world's biggest tech companies.
- The UK government said it wants to legislate for a new independent regulator that will oversee "harmful content" on social media, search engines, messaging, and file-sharing platforms.
- It would be the first time the UK has regulated online safety, an area of the internet conventionally seen as ungovernable.
- If these platforms fail to keep hate speech and content relating to sexual abuse, violence, terrorism, or self-harm off their services, they will face huge fines under the proposed new rules.
- Industry lobbying bodies representing Facebook, Google, and other big tech firms say the proposed laws are too vague and may harm competition.
The internet's days as the Wild West may be numbered.
The British government has laid out its blueprint for groundbreaking new laws that will regulate social media, search, messaging, and even file-sharing platforms for content that causes "online harm." That's an umbrella term that includes content relating to sexual abuse, violence, hate speech and terrorism, self-harm, and underage sexting.
The proposals, dubbed "world first online safety laws" in an emailed statement, coincide with global pressure on US tech firms to prevent self-harm, terrorist, and hate speech content from appearing on their sites. They also come at a time when Silicon Valley leaders, such as Mark Zuckerberg, are calling for regulation.
Facebook, YouTube, and the more niche 8chan came in for severe criticism just last month when the suspected Christchurch shooter attacked two mosques and livestreamed the whole event. In February, Instagram banned "extreme" images of self-harm after the suicide of British teenager Molly Russell.
The proposals, put forward in a white paper by the UK's Department for Digital, Culture, Media and Sport, include a new independent regulator that would police these platforms for harmful content.
It would have the power to issue major fines and even hold individual executives responsible for failing to comply with any new laws. Fines could reach billions of dollars for the biggest companies, Culture Minister Margot James told Business Insider in February.
Tech firms would also need to obey a "duty of care," which would require them to take steps to keep users safe and to deal with illegal or harmful content.
Other proposals include:
- Forcing social media firms to publish transparency reports about harmful content on their services, and the measures they take to combat it.
- Compelling companies to respond quickly to user complaints, possibly akin to Germany's controversial "NetzDG" law.
- Codes of practice that might require tech firms to minimise the spread of misinformation during elections.
- A framework to help tech firms build safety features into their apps from the start.
- A media literacy strategy to help people recognise misinformation and malicious behaviours.
In a statement, British Prime Minister Theresa May said: "The internet can be brilliant at connecting people across the world - but for too long these companies have not done enough to protect users, especially children and young people, from harmful content.
"That is not good enough, and it is time to do things differently. We have listened to campaigners and parents, and are putting a legal duty of care on internet companies to keep people safe. Online companies must start taking responsibility for their platforms, and help restore public trust in this technology."
Big tech wants more detail about how the new laws will actually work
The proposals are still some way off becoming legislation, and what actually ends up becoming law may look quite different from Monday's policy paper after further industry and public consultation.
The Internet Association, a lobby group which counts Facebook, Google, Snap, Reddit, and Twitter among its members, said the proposals needed tightening up.
UK executive director Daniel Dyeball said in a statement: "The internet industry is committed to working together with government and civil society to ensure the UK is a safe place to be online. But to do this, we need proposals that are targeted and practical to implement for platforms both big and small.
"We also need to protect freedom of speech and the services consumers love. The scope of the recommendations is extremely wide, and decisions about how we regulate what is and is not allowed online should be made by parliament."
Coadec, a group which lobbies on behalf of startups, said overly strict regulation could punish smaller firms that don't have the money and clout of Facebook and Google.
Executive director Dom Hallas said: "Everyone, including British startups, shares the goal of a safer internet - but these plans will entrench the tech giants, not punish them.
"The vast scope of the proposals means they cover not just social media but virtually the entire internet - from file sharing to newspaper comment sections. Those most impacted will not be the tech giants the government claims they are targeting, but everyone else. It will benefit the largest platforms with the resources and legal might to comply - and restrict the ability of British startups to compete fairly."