The European Union on Saturday reached a deal on landmark legislation that would force Facebook, YouTube and other internet services to combat misinformation, disclose how their services amplify divisive content and stop targeting online ads based on a person’s ethnicity, religion or sexual orientation.
The law, called the Digital Services Act, is intended to address social media’s societal harms by requiring companies to more aggressively police their platforms for illicit content or risk billions of dollars in fines. Tech companies would be compelled to set up new policies and procedures to remove flagged hate speech, terrorist propaganda and other material defined as illegal by countries within the EU.
The law aims to end an era of self-regulation in which tech companies set their own policies about what content could stay up or be taken down. It stands out from other regulatory attempts by addressing online speech, an area that is largely off limits in the United States because of First Amendment protections. Google, which owns YouTube, and Meta, owner of Facebook and Instagram, would face yearly audits for “systemic risks” linked to their businesses, while Amazon would confront new rules to stop the sale of illegal products.
The Digital Services Act is part of a one-two punch by the EU to address the societal and economic effects of the tech giants. Last month, the 27-nation bloc agreed to a different sweeping law, the Digital Markets Act, to counter what regulators see as anti-competitive behavior by the biggest tech firms, including their grip over app stores, online advertising and internet shopping.
Together, the new laws underscore how Europe is setting the standard for tech regulation globally. Frustrated by anti-competitive behavior, social media’s effect on elections and privacy-invading business models, officials spent more than a year negotiating policies giving them broad powers to crack down on tech giants worth trillions of dollars and used by billions of people for communication, entertainment, payments and news.
The deal was reached by European policymakers in Brussels early Saturday after 16 hours of negotiations.
“Platforms should be transparent about their content moderation decisions, prevent dangerous disinformation from going viral and avoid unsafe products being offered on marketplaces,” said Margrethe Vestager, who has spearheaded much of the bloc’s work to regulate the tech industry as executive vice president of the European Commission, the executive arm of the EU.
This article originally appeared in The New York Times.