Social media
Many social media sites, notably "big tech" companies such as Facebook, Inc., Google, and Twitter, came under scrutiny following the alleged Russian interference in the 2016 United States elections, in which Russian agents allegedly used the sites to spread propaganda and fake news to swing the election in favor of Donald Trump.
These platforms were also criticized for not taking action against users who exploited them for harassment and hate speech against others. Shortly after the passage of the FOSTA-SESTA acts, some in Congress recognized that further changes could be made to Section 230 to require service providers to deal with such bad actors, beyond what Section 230 already provided.[35]
Platform neutrality
Some politicians, including Republican senators Ted Cruz and Josh Hawley, have accused major social networks of displaying a bias against conservative perspectives when moderating content (such as Twitter suspensions).[36][37][38] In a Fox News op-ed, Cruz argued that Section 230 should only apply to providers that are politically "neutral", suggesting that a provider "should be considered to be a [liable] 'publisher or speaker' of user content if they pick and choose what gets published or spoke."[39] Section 230 does not contain any requirement that moderation decisions be neutral.[39] Hawley alleged that Section 230 immunity was a "sweetheart deal between big tech and big government".[40][41]
In December 2018, Republican representative Louie Gohmert introduced the Biased Algorithm Deterrence Act (H.R. 492), which would remove all Section 230 protections from any provider that used filters or any other type of algorithm to display user content when not otherwise directed by a user.[42][43]
In June 2019, Hawley introduced the Ending Support for Internet Censorship Act (S. 1914), which would remove Section 230 protections from companies whose services have more than 30 million active monthly users in the U.S. and more than 300 million worldwide, or over $500 million in annual global revenue, unless they obtain a certification from a majority of the Federal Trade Commission that they do not moderate against any political viewpoint and have not done so in the past two years.[44][45]
The proposed bill drew both criticism and support from various points on the political spectrum. A poll of more than 1,000 voters gave Hawley's bill a net favorability rating of 29 points among Republicans (53% favor, 24% oppose) and 26 points among Democrats (46% favor, 20% oppose).[46] Some Republicans feared that by adding FTC oversight, the bill would fuel fears of a big government with excessive oversight powers.[47]
Democratic Speaker of the House Nancy Pelosi has indicated support for the same approach Hawley has taken.[48] The chairman of the Senate Judiciary Committee, Senator Lindsey Graham, has also voiced support for that approach, saying "he is considering legislation that would require companies to uphold 'best business practices' to maintain their liability shield, subject to periodic review by federal regulators."[49]
Legal experts have criticized the Republicans' push to make Section 230 encompass platform neutrality. Wyden stated in response to potential law changes that "Section 230 is not about neutrality. Period. Full stop. 230 is all about letting private companies make their own decisions to leave up some content and take other content down."[50]
Law professor Jeff Kosseff, who has written extensively on Section 230, has stated that the Republican proposals rest on a "fundamental misunderstanding" of Section 230's purpose, as platform neutrality was not a consideration at the time of passage.[51] According to Kosseff, the framers' intent was not political neutrality but ensuring that providers could make content-removal judgments without fear of liability.[2] There have been concerns that any attempt to weaken Section 230 could actually increase censorship once services lose their liability protections.[41][52]
Hate speech
In the wake of the 2019 mass shootings in Christchurch, New Zealand; El Paso, Texas; and Dayton, Ohio, questions were raised about Section 230 and platforms' liability for online hate speech. In both the Christchurch and El Paso shootings, the perpetrator had posted a hate-filled manifesto to 8chan, an imageboard known for hosting extreme views.
Concerned politicians and citizens called on large tech companies to remove hate speech from the Internet; however, hate speech is generally protected under the First Amendment, and Section 230 shields these tech companies from liability for such content, and for their moderation decisions about it, as long as it is not illegal.
This has created the appearance that tech companies need not be proactive against hateful content, allowing it to proliferate online and contribute to such incidents.[53][5]
Notable articles on these concerns were published after the El Paso shooting by The New York Times,[53] The Wall Street Journal,[54] and Bloomberg Businessweek,[5] among other outlets. These were criticized by legal experts including Mike Godwin, Mark Lemley, and David Kaye, because the articles implied that hate speech was protected by Section 230 when it is in fact protected by the First Amendment. The New York Times subsequently issued a correction affirming that the First Amendment, not Section 230, protects hate speech.[55][56][57]
Members of Congress have indicated they may pass a law changing how Section 230 applies to hate speech, so as to make tech companies liable for it. Wyden, now a Senator, stated that he had intended Section 230 to be both "a sword and a shield" for Internet companies: the sword allowing them to remove content they deem inappropriate for their service, and the shield letting them keep offensive content off their sites without incurring liability.
However, Wyden argued that because tech companies have been unwilling to use the sword to remove objectionable content, it may be necessary to take away the shield.[53][5] Some have compared Section 230 to the Protection of Lawful Commerce in Arms Act, which grants gun manufacturers immunity from certain lawsuits when their weapons are used in criminal acts.
According to law professor Mary Anne Franks, "They have not only let a lot of bad stuff happen on their platforms, but they’ve actually decided to profit off of people's bad behavior."[5]
Former representative Beto O'Rourke stated his intent, as part of his 2020 presidential campaign, to introduce sweeping changes to Section 230 that would make Internet companies liable for failing to proactively take down hate speech.[58] O'Rourke later dropped out of the race.
Fellow candidate and former vice president Joe Biden has similarly called for Section 230 protections to be weakened or otherwise "revoked" for "big tech" companies—particularly Facebook—having stated in a January 2020 interview with The New York Times that "[Facebook] is not merely an internet company. It is propagating falsehoods they know to be false", and that the U.S. needed to "[set] standards" in the same way that the European Union's General Data Protection Regulation (GDPR) set standards for online privacy.[59][60]
Terrorism-related content
In the aftermath of the Backpage trial and the subsequent passage of FOSTA-SESTA, others have observed that Section 230 appears to shield tech companies from liability for hosting content that is otherwise illegal under United States law.
Professor Danielle Citron and journalist Benjamin Wittes found that, as late as 2018, several groups designated as terrorist organizations by the United States had been able to maintain social media accounts on services run by American companies, despite federal laws that make providing material support to terrorist groups subject to civil and criminal penalties.[61]
However, case law from the Second Circuit has ruled that under Section 230, technology companies are generally not liable for civil claims based on terrorism-related content.[62]
2020 Department of Justice review
In February 2020, the United States Department of Justice held a workshop on Section 230 as part of an ongoing antitrust probe into "big tech" companies. Attorney General William Barr said that while Section 230 was needed to protect the Internet's growth when most Internet companies were still fledgling, "No longer are technology companies the underdog upstarts...They have become titans of U.S. industry", and questioned the need for Section 230's broad protections.[63]
Barr said that the workshop was not meant to make policy decisions on Section 230 but was part of a "holistic review" of Big Tech, since "not all of the concerns raised about online platforms squarely fall within antitrust", and that the Department of Justice would rather see reform and better incentives for tech companies to improve online content within the scope of Section 230 than change the law directly.[63]
Observers of the sessions stated that the talks focused only on Big Tech and on small sites engaged in revenge porn, harassment, and child sexual abuse, without considering the many intermediate uses of the Internet.[64]