Proof the Federal Government Tells Social Media Whom to Silence

The Long Fuse is 1984-style Deep State collusion: Republican and Democrat government working with social media to silence Americans, effectively removing free speech.

Text inside quotation marks (" ") is quoted verbatim from The Long Fuse.

"The Long Fuse: Misinformation and the 2020 Election. The Election Integrity Partnership: Digital Forensic Research Lab, Graphika, Stanford Internet Observatory, UW Center for an Informed Public.

Executive Summary On January 6, 2021, an armed mob stormed the US Capitol to prevent the certification of what they claimed was a “fraudulent election.” Many Americans were shocked, but they needn’t have been. The January 6 insurrection was the culmination of months of online mis- and disinformation directed toward eroding American faith in the 2020 election. 

US elections are decentralized: almost 10,000 state and local election offices are primarily responsible for the operation of elections. Dozens of federal agencies support this effort, including the Cybersecurity and Infrastructure Security Agency (CISA) within the Department of Homeland Security, the United States Election Assistance Commission (EAC), the FBI, the Department of Justice, and the Department of Defense. However, none of these federal agencies has a focus on, or authority regarding, election misinformation originating from domestic sources within the United States. This limited federal role reveals a critical gap for non-governmental entities to fill. Increasingly pervasive mis- and disinformation, both foreign and domestic, creates an urgent need for collaboration across government, civil society, media, and social media platforms."

Translation: The Long Fuse group is a coalition of government agents pressuring social media companies to SILENCE FREE SPEECH! The Long Fuse group is taking down Trump supporters and anti-deep-state operatives. NOTE: Disinformation is false information intended to mislead, especially propaganda issued by a government organization to a rival power or the media. Is "disinformation" a loaded partisan term used to defame or destroy opponents? YES!!!

"The Election Integrity Partnership, comprising organizations that specialize in understanding those information dynamics, aimed to create a model for whole-of-society collaboration and facilitate cooperation among partners dedicated to a free and fair election. With the narrow aim of defending the 2020 election against voting-related mis- and disinformation, it bridged the gap between government and civil society, helped to strengthen platform standards for combating election-related misinformation, and shared its findings with its stakeholders, media, and the American public. This report details our process and findings, and provides recommendations for future actions. 

Executive Summary 

Who We Are: EIP and Its Members 

The Election Integrity Partnership was formed to enable real-time information exchange between election officials, government agencies, civil society organizations, social media platforms, the media, and the research community. It aimed to identify and analyze online mis- and disinformation, and to communicate important findings across stakeholders. It represented a novel collaboration between four of the nation’s leading institutions focused on researching mis- and disinformation in the social media landscape:

• The Stanford Internet Observatory (SIO)

 • The University of Washington’s Center for an Informed Public (CIP) 

• Graphika 

• The Atlantic Council’s Digital Forensic Research Lab (DFRLab) 

What We Did 

The EIP’s primary goals were to:

 (1) identify mis- and disinformation before it went viral and during viral outbreaks, 

(2) share clear and accurate countermessaging, and 

(3) document the specific misinformation actors, transmission pathways, narrative evolutions, and information infrastructures that enabled these narratives to propagate. To identify the scope of our work, we built a framework to compare the policies of 15 social media platforms across four categories:

 • Procedural interference: misinformation related to actual election procedures 

• Participation interference: content that includes intimidation to personal safety or deterrence to participation in the election process

 • Fraud: content that encourages people to misrepresent themselves to affect the electoral process or illegally cast or destroy ballots 

• Delegitimization of election results: content aiming to delegitimize election results on the basis of false or misleading claims

The EIP used an innovative internal research structure that leveraged the capabilities of the partner organizations through a tiered analysis model based on “tickets” collected internally and from our external stakeholders. Of the tickets we processed, 72% were related to delegitimization of the election. 

My translation: If you speak out against voter fraud, you will be destroyed. This group decides what is truth; free speech is dead.

"Key Takeaways

Misleading and false claims and narratives coalesced into the meta-narrative of a “stolen election,” which later propelled the January 6 insurrection. 

Right-leaning “blue-check” influencers transformed one-off stories, sometimes based on honest voter concerns or genuine misunderstandings, into cohesive narratives of systemic election fraud. 

• Warped stories frequently centered on mail-in voting and accusations of found, discarded, or destroyed ballots, particularly in swing states. Misleading framing of real-world incidents often took the form of falsely assigning intent, exaggerating impact, falsely framing the date, or altering locale.

 • The meta-narrative of a “stolen election” coalesced into the #StopTheSteal movement, encompassing many of the previous narratives. The narrative appeared across platforms and quickly inspired online organizing and offline protests, leading ultimately to the January 6 rally at the White House and the insurrection at the Capitol.

 • Fact-checking of narratives had mixed results; non-falsifiable narratives presented a particular challenge. In some cases, social media platform fact-checks risked drawing further attention to the claims they sought to debunk. The production and spread of misinformation was multidirectional and participatory.

 • Individuals participated in the creation and spread of narratives. Bottom-up false and misleading narratives started with individuals identifying real-world or one-off incidents and posting them to social media. Influencers and hyperpartisan media leveraged this grassroots content, assembling it into overarching narratives about fraud, and disseminating it across platforms to their large audiences. Mass media often picked up these stories after they had reached a critical mass of engagement.

• Top-down mis- and disinformation moved in the opposite direction, with claims first made by prominent political operatives and influencers, often on mass media, which were then discussed and shared by people across social media properties. Narrative spread was cross-platform: repeat spreaders leveraged the specific features of each platform for maximum amplification.

• The cross-platform nature of misinformation content and narrative spread limited the efficacy of any single platform’s response.

• Smaller, niche, and hyperpartisan platforms, which were often less moderated or completely unmoderated, hosted and discussed content that had been moderated elsewhere. Parler in particular saw a remarkable increase in its active user base, as users rejected the “censorship” they perceived on other platforms. The primary repeat spreaders of false and misleading narratives were verified, blue-check accounts belonging to partisan media outlets, social media influencers, and political figures, including President Trump and his family.

• These repeat spreaders amplified the majority of the investigated incidents aggressively across multiple platforms.

• Repeat spreaders often promoted and spread each others’ content. Once content from misleading narratives entered this network, it spread quickly across the overlapping audiences. Many platforms expanded their election-related policies during the 2020 election cycle. However, application of moderation policies was inconsistent or unclear.

• Platforms took action against policy violations by suspending users or removing content, downranking or preventing content sharing, and applying informational labels. However, moderation efforts were applied inconsistently on and across platforms, and policy language and updates were often unclear. 

• Account suspensions and content removal or labeling sometimes contributed to conspiratorial narratives that platforms were “covering up the truth,” entangling platforms with the narratives they wished to eliminate. 

• Lack of transparency and access to platform APIs hindered external research into the effectiveness of platform policies and interventions. 

Key Recommendations 

Federal Government  

Establish clear authorities and roles for identifying and countering election-related mis- and disinformation. Build on the federal interagency movement toward recognizing elections as a national security priority and critical infrastructure.

• Create clear standards for consistent disclosures of mis- and disinformation from foreign and domestic sources as a core function of facilitating free and fair elections, including via CISA’s Rumor Control and joint interagency statements.

Congress

• Pass existing bipartisan proposals for increased appropriations marked for federal and state election security.

• Codify the Senate Select Committee on Intelligence’s bipartisan recommendations related to the depoliticization of election security and the behavior of public officials and candidates for federal office noted in Volumes 3 and 5 of the Committee’s report on foreign influence in 2016 elections.

State and Local Officials

 • Establish trusted channels of communication with voters. This should include a .gov website and use of both traditional and social media. 

• Ensure that all votes cast are on auditable paper records and that efficient, effective, and transparent post-election audits are conducted after each election.

 Platforms 

• Provide proactive information regarding anticipated election misinformation. For example, if researchers expect a narrative will emerge, platforms should explain that narrative’s history or provide fact-checks or context related to its prior iterations.

 • Invest in research into the efficacy of internal policy interventions (such as labeling) and share those results with external researchers, civil society, and the public. 


• Increase the amount and granularity of data regarding interventions, takedowns, and labeling to allow for independent analysis of the efficacy of these policies.

• Impose clear consequences for accounts that repeatedly violate platform policies. These accounts could be placed on explicit probationary status, facing a mixture of monitoring and sanctions.

 • Prioritize election officials’ efforts to educate voters within their jurisdiction and respond to misinformation. This could include the promotion of content from election officials through curation or advertisement credits, especially in the lead-up to Election Day. 

Conclusion 

The 2020 election demonstrated that actors—both foreign and domestic—remain committed to weaponizing viral false and misleading narratives to undermine confidence in the US electoral system and erode Americans’ faith in our democracy. Mis- and disinformation were pervasive throughout the campaign, the election, and its aftermath, spreading across all social platforms. The Election Integrity Partnership was formed out of a recognition that the vulnerabilities in the current information environment require urgent collective action. While the Partnership was intended to meet an immediate need, the conditions that necessitated its creation have not abated, and in fact may have worsened. Academia, platforms, civil society, and all levels of government must be committed, in their own ways, to truth in the service of a free and open society. All stakeholders must focus on predicting and pre-bunking false narratives, detecting mis- and disinformation as it occurs, and countering it whenever appropriate."

Dr. Shiva (quotation marks continue to be verbatim from The Long Fuse)

"Dr. Shiva Ayyadurai (a coronavirus and election-related conspiracy theorist and anti-vaccine activist, also known as Dr. Shiva), the same video is mobilized (re-introduced and widely spread) multiple times. Information cascades related to content from Project Veritas and Ayyadurai."

My translation: Dr. Shiva is an enemy of the "status quo" who must be destroyed.

"Shiva Ayyadurai posted a fraught analysis, choosing variables that artificially created the impression that Trump did more poorly than expected in more Republican areas to suggest voting machines were changing votes to Joe Biden. He further used the imposed negative slope to estimate purported switched votes, which fed into misleading narratives about Dominion voting software."

My translation: Dr. Shiva was exactly right; exposing the crime of the century would lead to exposing the deep state actors, including those involved in The Long Fuse.



