Senators Renew Attack on Section 230 Following Election

WASHINGTON — Amid a nationwide spike in COVID-19 cases, and only hours after the Georgia Secretary of State accused him of attempting to interfere with the presidential election results, Senator Lindsey Graham (R-S.C.) revived his ongoing crusade against Section 230 protections — the so-called First Amendment of the internet — during a highly politicized interrogation of two tech executives before the Senate Judiciary Committee.

Graham presided over today’s Senate Judiciary Committee hearing, which featured testimony from Facebook CEO Mark Zuckerberg and Twitter CEO Jack Dorsey.

“We have to find a way when Twitter and Facebook make a decision about what’s reliable and what’s not, what to keep up and what to keep down, that there is transparency in the system,” Graham said as the hearing began. “Section 230 has to be changed because we can’t get there from here without change.”

During the presidential campaign, as XBIZ has been reporting, Congress was flooded with a smörgåsbord of proposals seeking to curtail free speech online and digital rights in the name of various causes.

No two of these proposals are identical, and each prioritizes the specific interests of its sponsors: from Graham’s insistence on creating a new government bureaucracy to decide what deserves protection from liability and what does not, to the folksy cluelessness of Senator John Kennedy’s bizarre obsession with mind control and manipulation, to the more bipartisan PACT Act, which many observers consider the "adults-in-the-room" option among this colorful carnival of election-year legislative ingenuity.

The latest attempt to abolish Section 230 protections was introduced the Friday before the election by Representative Greg Steube (R-Fla.), who included the legislative novelty of attempting to define adult content in explicit and broad terms.

Bipartisan Attack on Section 230

At today’s hearing, anti-Section 230 senators referred to its protections as “a golden goose legal shield” that favored tech companies.

Senators from both parties lambasted Section 230 and Zuckerberg and Dorsey, albeit for different reasons.

“Change is going to come, no question,” said Senator Richard Blumenthal (D-Conn.), who has found agreement with Graham on this topic. “And I plan to bring aggressive reform to 230.”

Blumenthal added that he was “not, and nor should we be in this committee, interested in being a member of the speech police.” Yet his own version of Section 230 reform, ostensibly targeting “human trafficking,” appears to create a government office devoted to making decisions about different kinds of sexual content posted online and to adjudicating Section 230 protections based on vague standards.

As the New York Times reported after the hearing, “Republicans have pointed to the law as a crutch for online platforms to censor conservative content, claims that are not founded. Democrats have agreed that the law needs reform, but they have taken the opposite position on why. Democrats have said Section 230 has caused disinformation and hate to flourish on the social media sites.”

Jack Dorsey's Proposal

Zuckerberg appeared to be asking for government regulation of content moderation to take the heat off Facebook’s questioned practices, which have been denounced by sex worker groups for years.

Dorsey, on the other hand, offered the following thoughts via Twitter:

Thank you members of the Judiciary Committee for the opportunity to speak with the American people about Twitter and your concerns around censorship and suppression of a specific news article, and generally what we saw in the 2020 U.S. Elections conversation.

We were called here today because of an enforcement decision we made against the New York Post, based on a policy we created in 2018 to prevent Twitter from being used to spread hacked materials. This resulted in us blocking people from sharing a New York Post article, publicly or privately.

We made a quick interpretation, using no other evidence, that the materials in the article were obtained through hacking, and according to our policy, blocked them from being spread. Upon further consideration, we admitted this action was wrong, and corrected it within 24 hours.

We informed the New York Post of our error and policy update, and how to unlock their account by deleting the original violating tweet, which freed them to tweet the exact same content and news article again. They chose not to, instead insisting we reverse our enforcement action.

We did not have a practice around retroactively overturning prior enforcement. This incident demonstrated that we needed one, and so we created one we believe is fair and appropriate.

In response, we’re updating our practice of not retroactively overturning prior enforcement.

Decisions made under policies that are subsequently changed and published can now be appealed if the account at issue is a driver of that change. We believe this is fair and appropriate.

I hope this illustrates the rationale behind our actions, and demonstrates our ability to take feedback, admit mistakes, and make changes, all transparently to the public. We acknowledge there are still concerns around how we moderate content, and specifically our use of Section 230.

Three weeks ago we proposed three solutions to address the concerns raised, and they all focus on services that decide to moderate or remove content. They could be expansions to §230, new legislative frameworks or a commitment to industry-wide self-regulation best practices.

Requiring, 1) moderation process and practices to be published; 2) a straightforward process to appeal decisions; and 3) best efforts around algorithmic choice, are suggestions to address the concerns we all have going forward. And they all are achievable in short order.

It’s critical as we consider these solutions, we optimize for new startups and independent developers. Doing so ensures a level playing field that increases the probability of competing ideas to help solve problems going forward. We mustn’t entrench the largest companies further.

Finally, before I close, I wanted to share some reflections on what we saw during the U.S. Presidential election. We focused on addressing attempts to undermine civic integrity, providing informative context and product changes to encourage greater consideration.

We updated our civic integrity policy to address misleading or disputed information that undermines confidence in the election, causes voter intimidation or suppression or confusion about how to vote, or misrepresents affiliation or election outcomes.

More than a year ago, the public asked us to offer additional context to help make potentially misleading information more apparent. We did exactly that, applying labels to over 300K tweets from Oct. 27-Nov. 11, which represented 0.2% of all U.S. election-related tweets.

Copyright © 2026 Adnet Media. All Rights Reserved. XBIZ is a trademark of Adnet Media.
Reproduction in whole or in part in any form or medium without express written permission is prohibited.
