Tech giants like Facebook and Google came under increasing pressure in Europe on Monday as countries proposed stricter rules to force them to block extreme material such as terrorist propaganda and child pornography.
Britain called for a first-of-its-kind watchdog for social media that could fine executives and even ban companies. And a European Union parliamentary committee approved a bill giving internet companies an hour to remove terror-related material or face fines that could reach into the billions.
"We are forcing these firms to clean up their act once and for all," said British Home Secretary Sajid Javid, whose department collaborated on Britain's proposal.
Opponents warned the British and EU measures could stifle innovation and strengthen the dominance of technology giants because smaller companies won't have the money to comply. That, in turn, could turn Google and Facebook into the web's censors, they said.
The push to make the big companies responsible for the torrent of material they carry has largely been driven by Europeans. But it picked up momentum after the March 15 mosque shootings in New Zealand that killed 50 people and were livestreamed for 17 minutes. Facebook said it removed 1.5 million videos of the attacks in the 24 hours afterward.
The U.S., where government action is constrained by the First Amendment right to free speech and freedom of the press, has taken a more hands-off approach, though on Tuesday, a House committee will press Google and Facebook executives on whether they are doing enough to curb the spread of hate crimes and white nationalism.
Australia last week made it a crime for social media platforms not to quickly remove "abhorrent violent material." The offense would be punishable by three years in prison and a fine of 10.5 million Australian dollars ($7.5 million), or 10% of the platform's annual revenue, whichever is larger. New Zealand's Privacy Commissioner wants his country to do the same.
The British plan would require social media companies such as Facebook and Twitter to protect people who use their sites from "harmful content." The plan, which includes the creation of an independent regulator funded by a tax on internet companies, will be subject to public comment for three months before the government publishes draft legislation.
"No one in the world has done this before, and it's important that we get it right," Culture Secretary Jeremy Wright told the BBC.
Facebook's head of public policy in Britain, Rebecca Stimson, said the goal of the new rules should be to protect society while also supporting innovation and freedom of speech.
"These are complex issues to get right, and we look forward to working with the government and Parliament to ensure new regulations are effective," she said.
Britain will consider imposing financial penalties similar to those under the EU's online data privacy law, which permits fines of up to 4% of a company's annual worldwide revenue, Wright said. In extreme cases, the government may also seek to fine individual company directors and prevent companies from operating in Britain.
Under the EU legislation that cleared an initial hurdle in Brussels, any internet company that fails to remove terrorist content within an hour of being notified by authorities would face similar penalties of up to 4% of annual worldwide revenue. EU authorities came up with the idea last year after attacks highlighted the growing trend of online radicalization.
The bill would apply to companies providing services to EU citizens, whether or not those businesses are based in the EU's 28 member countries. It still needs further approval, including from the full European Parliament.
It faces heavy opposition from digital rights organizations, tech industry groups and some lawmakers, who said the 60-minute deadline is impractical and would lead companies to go too far and remove even lawful material.
"Instead, we call for a more pragmatic approach with removals happening 'as soon as possible,' to protect citizens' rights and competitiveness," said EDIMA, a European trade group for new media and internet companies.
Opponents said the measure also places a bigger burden on smaller internet companies than on giants like Facebook and Google, which already have automated content filters. To help smaller web companies, the bill was modified to give them an extra 12 hours for their first offense, a measure opponents said didn't go far enough.
Mark Skilton, a professor at England's Warwick Business School, urged regulators to pursue new methods such as artificial intelligence that could do a better job of tackling the problem.
"Issuing large fines and hitting companies with bigger legal threats is taking a 20th-century bullwhip approach to a problem that requires a nuanced solution," he said. "It needs machine learning tools to manage the 21st-century problems of the internet."
Wright said Britain's proposed social-media regulator would be expected to take freedom of speech into account while trying to prevent harm.
"What we're talking about here is user-generated content, what people put online, and companies that facilitate access to that kind of material," he said. "So this is not about journalism. This is about an unregulated space that we need to control better to keep people safer."