WASHINGTON — During a nationwide spike in COVID-19 cases, and only hours after the Georgia Secretary of State accused him of tampering with the presidential election results, Senator Lindsey Graham (R-S.C.) revived his ongoing crusade against Section 230 protections — the so-called First Amendment of the internet — during a highly politicized interrogation of two tech entrepreneurs before the Senate Judiciary Committee.
Graham presided over today’s hearing, at which Facebook co-founder Mark Zuckerberg and Twitter co-founder Jack Dorsey testified.
“We have to find a way when Twitter and Facebook make a decision about what’s reliable and what’s not, what to keep up and what to keep down, that there is transparency in the system,” Graham said as the hearing began. “Section 230 has to be changed because we can’t get there from here without change.”
During the presidential campaign, as XBIZ has been reporting, Congress was flooded with a smörgåsbord of proposals seeking to curtail free speech and digital rights online in the name of various causes.
No two of these proposals are identical, and all of them prioritize the specific interests of their sponsors, from Graham’s insistence on creating a new government bureaucracy to decide what deserves protection from liability and what does not, to the folksy cluelessness of Senator John Kennedy’s bizarre obsession with mind control and manipulation, to the more bipartisan PACT Act, which many observers consider the "adults-in-the-room" option among this colorful carnival of election-year legislative ingenuity.
The latest attempt to abolish Section 230 protections was introduced on the Friday before the election by Representative Greg Steube (R-Fla.), who included the legislative novelty of attempting to define adult content in explicit and broad terms.
Bipartisan Attack on Section 230
At today’s hearing, anti-Section 230 senators referred to its protections as “a golden goose legal shield” that favored tech companies.
Senators from both parties lambasted Section 230, as well as Zuckerberg and Dorsey, albeit for different reasons.
“Change is going to come, no question,” said Senator Richard Blumenthal (D-Conn.), who has found common ground with Graham on this topic. “And I plan to bring aggressive reform to 230.”
Blumenthal added that he was “not, and nor should we be in this committee, interested in being a member of the speech police,” although his version of Section 230 reform, supposedly targeting “human trafficking,” appears to create a state office devoted to making decisions about different kinds of sexual content posted online, and to adjudicating Section 230 protections based on vague standards.
As the New York Times reported after the hearing, “Republicans have pointed to the law as a crutch for online platforms to censor conservative content, claims that are not founded. Democrats have agreed that the law needs reform, but they have taken the opposite position on why. Democrats have said Section 230 has caused disinformation and hate to flourish on the social media sites.”
Jack Dorsey's Proposal
Zuckerberg appeared to be asking for government regulation of content moderation to take the heat off Facebook’s much-questioned practices, which sex worker groups have denounced for years.
Dorsey, on the other hand, offered the following thoughts via Twitter:
Thank you members of the Judiciary Committee for the opportunity to speak with the American people about Twitter and your concerns around censorship and suppression of a specific news article, and generally what we saw in the 2020 U.S. Elections conversation.
We were called here today because of an enforcement decision we made against the New York Post, based on a policy we created in 2018 to prevent Twitter from being used to spread hacked materials. This resulted in us blocking people from sharing a New York Post article, publicly or privately.
We made a quick interpretation, using no other evidence, that the materials in the article were obtained through hacking, and according to our policy, blocked them from being spread. Upon further consideration, we admitted this action was wrong, and corrected it within 24 hours.
We informed the New York Post of our error and policy update, and how to unlock their account by deleting the original violating tweet, which freed them to tweet the exact same content and news article again. They chose not to, instead insisting we reverse our enforcement action.
We did not have a practice around retroactively overturning prior enforcement. This incident demonstrated that we needed one, and so we created one we believe is fair and appropriate.
In response, we’re updating our practice of not retroactively overturning prior enforcement.
Decisions made under policies that are subsequently changed and published can now be appealed if the account at issue is a driver of that change. We believe this is fair and appropriate.
I hope this illustrates the rationale behind our actions, and demonstrates our ability to take feedback, admit mistakes, and make changes, all transparently to the public. We acknowledge there are still concerns around how we moderate content, and specifically our use of Section 230.
Three weeks ago we proposed three solutions to address the concerns raised, and they all focus on services that decide to moderate or remove content. They could be expansions to §230, new legislative frameworks or a commitment to industry-wide self-regulation best practices.
Requiring, 1) moderation process and practices to be published; 2) a straightforward process to appeal decisions; and 3) best efforts around algorithmic choice, are suggestions to address the concerns we all have going forward. And they all are achievable in short order.
It’s critical as we consider these solutions, we optimize for new startups and independent developers. Doing so ensures a level playing field that increases the probability of competing ideas to help solve problems going forward. We mustn’t entrench the largest companies further.
Finally, before I close, I wanted to share some reflections on what we saw during the U.S. Presidential election. We focused on addressing attempts to undermine civic integrity, providing informative context and product changes to encourage greater consideration.
We updated our civic integrity policy to address misleading or disputed information that undermines confidence in the election, causes voter intimidation or suppression or confusion about how to vote, or misrepresents affiliation or election outcomes.
More than a year ago, the public asked us to offer additional context to help make potentially misleading information more apparent. We did exactly that, applying labels to over 300K tweets from Oct. 27-Nov. 11, which represented 0.2% of all U.S. election-related tweets.