First Amendment may stand in way of regulating social media companies


Texas, Florida and other Republican-led states are passing laws that prohibit tech companies from “censoring” users — laws that Republican leaders say are meant to protect their constituents’ rights to free speech.

In the view of the tech companies, however, it’s those Republican lawmakers who are actually censoring. And the victims are not the everyday users of their social networks, but the companies themselves.

As tech-interest groups fight regulations in court battles across the country, they are advancing arguments that cast their content moderation decisions and even their ranking algorithms — the software that decides which posts each user sees when they open the app or website — as a form of expression in its own right. And they’re calling on the First Amendment, which protects American citizens and companies alike from government restraints on speech, to keep states’ hands off.

From Texas to Florida to Ohio to the U.S. Supreme Court, the nation’s judges and justices are wrestling with gnarly new questions about just what constitutes free speech, and whose rights are really at stake when lawmakers try to regulate social media feeds. Hanging in the balance are not only efforts by the right to impose political neutrality on Silicon Valley giants, but efforts by the left and center to require greater transparency and to hold them accountable for amplifying speech that may be harmful or illegal.

“The First Amendment is to some degree up for grabs,” says Genevieve Lakier, a University of Chicago law professor and senior visiting research scholar at the Knight First Amendment Institute. “These old principles are being pushed and pulled and reimagined in light of changing technological conditions and changing political alignments.”

The legal battles have their roots in controversies over social media’s ever-growing role in shaping political discourse. As platforms such as Facebook, Twitter, YouTube and even TikTok have become influential forums for politicians, activists and the media, they’ve been criticized — often, though not exclusively, by the left — for fanning misinformation, bigotry and division.

In response, those platforms have developed increasingly sophisticated systems — combining automation with human oversight — to detect and remove posts that violate their rules. In some cases, they’ve also adjusted their feed-ranking and recommendation algorithms to try to avoid highlighting content that could be problematic. But those moves have their own critics, especially on the right.

On May 11, a federal appeals court stunned the legal establishment by allowing Texas to move forward with a law that bans large Internet sites from “censoring” — whether by removing or algorithmically demoting — users’ posts based on their viewpoint. While the 5th Circuit Court didn’t explain its decision, the ruling seemed to support Texas Republicans’ argument that individual users’ right to be heard on social media platforms could trump tech companies’ right to decide which posts to display.

Tech companies quickly appealed to the Supreme Court, asking it to put the law back on hold while the lawsuit unfolds in a lower court. Justice Samuel A. Alito Jr. is expected to issue a ruling on that request in the coming days. While that ruling won’t resolve the case, it will be closely watched as a signal of how the broader debate is likely to play out in cases across the country.

Meanwhile, on May 23, another federal appeals court took a very different stand on Florida’s social media law, which is similar in spirit to Texas’s but differs in the details. In that case, the 11th Circuit upheld a lower court’s decision to suspend large swaths of the Florida law, on the grounds that tech companies’ algorithms and content moderation decisions amount to “constitutionally protected expressive activity.”

That ruling was broadly in keeping with decades of legal precedent holding that the best way to protect free speech is for governments to stay out of it. But it was noteworthy in affirming that social media sites’ “curation” of content is itself a form of protected speech.

It was also nuanced. While the appeals court judges found that many of the Florida law's provisions were likely to be unconstitutional, they reinstated portions of the law that require tech companies to disclose certain types of information relevant to their content moderation processes.

For instance, they found that Florida's requirements that social media platforms spell out their content moderation standards, show users the view counts on their posts, and give suspended users access to their data might be permissible. Those provisions will now take effect while a lower court continues to hear the case. But the court rejected a provision that would have required platforms to explain to users their reasoning for suppressing any given post, ruling that it would be too burdensome.

Importantly, it also swatted away a provision requiring platforms to offer their users the ability to opt out of algorithmic ranking and see every post in their feed in chronological order. That decision, again, rested on First Amendment grounds, suggesting platforms have a constitutional right to rank posts algorithmically and even to engage in "shadow banning," a colloquial term for hiding posts from certain users or making them harder to find, often without the user knowing.

Mary Anne Franks, a University of Miami law professor and author of the book “The Cult of the Constitution,” is a critic of what’s sometimes called “First Amendment absolutism” — the idea that the government can almost never interfere with even the most abhorrent speech. She argues there should be room for reforms that allow tech companies to be held responsible when they host or promote certain types of harmful content.

Yet Franks believes the 11th Circuit was correct to find much of the Florida law unconstitutional. Requiring social media platforms to offer a chronological feed, she said, would be analogous to requiring bookstores to arrange every book in chronological order in their storefront window — a violation of their right to decide which works to highlight.

That opinion could have implications not only for attempts by the right to restrict content moderation, but also for bipartisan and progressive proposals to promote more and better content moderation. Those include a bevy of bills that surfaced or gained momentum after the Facebook whistleblower Frances Haugen called attention to how that company’s algorithms prioritized engagement and profits over social responsibility.

Some of those bills would remove the liability shield that Internet platforms enjoy under Section 230 of the Communications Decency Act if their algorithms play a role in amplifying certain categories of speech. Others would require social media sites to offer “transparent” alternatives to their default recommendation algorithms. Still others would require them to submit their ranking algorithms to researchers or even the Federal Trade Commission.

Based on the recent federal court opinions, most, if not all, would likely prompt lawsuits from tech groups alleging that they violate the First Amendment. Exactly where courts will draw the line remains to be seen.

“What the 11th Circuit opinion does is start from the presumption that algorithmic ranking and recommendation and amplification is part of the First Amendment-protected conduct or speech that a platform engages in,” said Emma Llanso, director of the Free Expression Project at the nonprofit Center for Democracy and Technology, which receives funding from tech companies as well as other sources. “And so any regulation of that aspect of what platforms do will potentially face the same First Amendment scrutiny.”

That doesn’t mean regulating social media algorithms is impossible, Llanso said. But it sets a “very high bar” for the government to show a compelling interest in doing so, and to avoid making any such regulations overly burdensome.

In the wake of the recent court opinions, the kinds of regulations that would seem to have the best chance of surviving judicial scrutiny are those that focus on transparency, Llanso and other experts agreed. For instance, a bipartisan bill in Congress that would require large platforms to share data with approved researchers might stand a solid chance of surviving the level of scrutiny that the 11th Circuit applied.

But they cautioned that the big, underlying legal questions remain open for now, especially after the 5th and 11th circuits took such different stands on the Texas and Florida laws.

At the core of the debate is whether it’s only the tech companies’ speech rights that are at issue when the government attempts to regulate them, or whether some of those tech companies now have such power over individuals’ speech that the speech rights of users should come into play.

Historically, conservative thinkers held that “the best way to protect users’ speech rights is to give a lot of speech rights to platforms,” Lakier said, while some on the left worried that individuals’ speech rights were being given short shrift. Now, a new breed of Trump-aligned Republicans has taken up the view that individuals may need speech protections from corporations, not just the government. Those include Texas Gov. Greg Abbott, Florida Gov. Ron DeSantis, and Supreme Court Justice Clarence Thomas.

“It’s a live question,” Lakier said. While she believes the Texas and Florida laws go too far in restricting platforms, she added, “I will say as a progressive, I’m quite sympathetic to this turn to users’ speech rights. I think we should be thinking about that a lot more than we have in the past.”

Cat Zakrzewski and Cristiano Lima contributed to this report.
