The latest evidence that Section 702 of the Foreign Intelligence Surveillance Act (FISA) must be ended or drastically reformed came last month in the form of a newly unsealed order from the Foreign Intelligence Surveillance Court (FISC) detailing massive violations of Americans’ privacy by the FBI.
The FISC order is replete with problems. It describes the government’s repeated, widespread violations—over a seven-year period—of procedures for searching its databases of internet communications involving Americans, all without a warrant. These searches included especially sensitive people and groups, including donors to a political campaign. And it shows the FISC giving the FBI all-but-endless do-overs, each time proclaiming that the executive branch has made “promising” steps toward compliance with procedures that are largely left up to government attorneys to design.
Perhaps most shocking, however, is the court’s analysis of how the Fourth Amendment should apply to the FBI’s “backdoor searches” of Americans’ communications. These searches occur when the FBI queries, without a warrant, Section 702 data that was ostensibly collected for foreign intelligence purposes but includes communications involving people on U.S. soil.
Although the court acknowledged that the volume of Americans’ private communications collected using Section 702 is “substantial in the aggregate,” and that the FBI searches these communications without a warrant even in routine matters, it held that the government’s oft-broken safeguards are consistent with the Fourth Amendment and “adequately guard against error and abuse.” When EFF writes that Section 702 and similar programs have created a “broad national security exception to the Constitution,” this is what we mean.
As long as Section 702 has been debated, its defenders have assured the public that the FISC is just like any other federal court: independent from the executive branch under Article III of the Constitution and charged with protecting individual rights. But as this latest order shows, the FISC’s performance of this duty bears no resemblance to how other Article III courts have treated the same questions, even when those courts have been hamstrung by unwarranted secrecy around the facts of national security surveillance.
Case in point is the U.S. Court of Appeals for the Second Circuit’s 2019 opinion in United States v. Hasbajrami. Hasbajrami was a criminal case in which government agents read a U.S. resident’s emails collected using Section 702 and charged him with supporting a terrorist organization. As with every other criminal prosecution involving FISA, the defense did not have access to evidence about how the government actually used Section 702 to surveil Hasbajrami. Yet even with this unfairly narrow review, on appeal the Second Circuit pressed the government on important constitutional questions, including backdoor searches. It even ordered the government to submit additional briefing on why backdoor searches did not violate the Fourth Amendment.
In its Hasbajrami opinion, the Second Circuit wrote that regardless of the procedures the FBI put in place for backdoor searches, these searches must be treated as “separate Fourth Amendment events.” In other words, each and every time the government runs one of these searches, it must ensure it is not unreasonably violating Americans’ privacy. The court’s reasons for reaching this conclusion are noteworthy:
(1) Under Supreme Court precedent, the government’s mere possession of a person’s private communications—as the NSA routinely obtains under Section 702—does not necessarily entitle it to read them without getting a warrant.
(2) The “vast technological capabilities” of Section 702 mean that the government can simply throw Americans’ communications into databases and search them at a later date for a purpose unrelated to the original “incidental collection.”
(3) Even though Section 702 prohibits directly targeting U.S. residents, “the NSA may have collected all sorts of information about an individual, the sum of which may resemble what the NSA would have gathered if it had directly targeted that individual in the first place.”
(4) The agency running the searches matters. The example the court gave that would raise Fourth Amendment concerns? “FBI queries directed to a larger archive of millions of communications collected and stored by the NSA for foreign intelligence purposes, on the chance that something in those files might contain incriminating information about a person of interest to domestic law enforcement.” That’s exactly the issue that was before the FISC in this latest opinion.
Clearly, the Second Circuit opinion raises serious questions about whether even a single backdoor search is constitutional. That concern is compounded by the government’s aggregate querying under Section 702: hundreds of thousands of searches that, taken together, represent a massive violation of Americans’ privacy.
Even if the FISC did not wrestle with these questions adequately in the past—and it didn’t—you would expect the court to take notice of the Hasbajrami opinion and offer its own analysis. You’d be wrong. The newly unsealed opinion is apparently the first time the FISC has considered Hasbajrami, and in just over a page, the FISC wrote that it “respectfully” disagreed that each search should be viewed as a separate Fourth Amendment event. Instead, it “adhered” to its previous conclusion that the government’s own procedures safeguard privacy “as a whole.” So the scope of the collection and searching was irrelevant, as was the government’s consistent inability to even follow its procedures. But as we’ve said before, allowing the government to claim that protocols are sufficient to protect our constitutional rights turns the Fourth Amendment on its head.
The FISC’s treatment of backdoor searches makes a mockery of the right to privacy. In Hasbajrami, the Second Circuit did not have a record of backdoor searches run against Mr. Hasbajrami, meaning that it could not say definitively what the Fourth Amendment required. In this FISC opinion, however, the court was presented with an extensive record of backdoor searches—as well as the ability to supplement the factual record to its satisfaction—and it nevertheless refused to confront what was staring it in the face.
The FISC’s refusal to enforce the Fourth Amendment is yet another reason the surveillance enabled by Section 702 needs to be ended or drastically reformed. A starting point is a requirement in the law itself that the government obtain a warrant before searching its databases for Americans’ communications, which would address the Second Circuit’s concerns in Hasbajrami. Our privacy should not depend on the FBI’s self-policing and the secret court’s contorted interpretation of the Constitution.
In response to Representative LaHood asserting during a House Intelligence Committee hearing that he was the subject of an unlawful Foreign Intelligence Surveillance Act (FISA) search, Demand Progress Senior Policy Counsel Sean Vitka issued the following statement:
“Representative LaHood’s assertion that he was the subject of an unlawful search of 702 information, which Demand Progress first unearthed, is all the more stunning considering he sits on one of the committees that oversees the Intelligence agencies — and has been designated by Congressional leadership as the Intelligence Committee’s point person in charge of the 702 reauthorization. If the Biden administration wants to see this authority reauthorized at all it must embrace a complete overhaul of privacy protection for all Americans — members of Congress and beyond.”
U.S. Special Operations Command, responsible for some of the country’s most secretive military endeavors, is gearing up to conduct internet propaganda and deception campaigns online using deepfake videos, according to federal contracting documents reviewed by The Intercept.
The plans, which also describe hacking internet-connected devices to eavesdrop in order to assess foreign populations’ susceptibility to propaganda, come at a time of intense global debate over technologically sophisticated “disinformation” campaigns, their effectiveness, and the ethics of their use.
While the U.S. government routinely warns against the risk of deepfakes and is openly working to build tools to counter them, the document from Special Operations Command, or SOCOM, represents a nearly unprecedented instance of the American government — or any government — openly signaling its desire to use the highly controversial technology offensively.
SOCOM’s next-generation propaganda aspirations are outlined in a procurement document that lists the capabilities it is seeking in the near future and solicits pitches from outside parties that believe they can build them.
“When it comes to disinformation, the Pentagon should not be fighting fire with fire,” Chris Meserole, head of the Brookings Institution’s Artificial Intelligence and Emerging Technology Initiative, told The Intercept. “At a time when digital propaganda is on the rise globally, the U.S. should be doing everything it can to strengthen democracy by building support for shared notions of truth and reality. Deepfakes do the opposite. By casting doubt on the credibility of all content and information, whether real or synthetic, they ultimately erode the foundation of democracy itself.”
“When it comes to disinformation, the Pentagon should not be fighting fire with fire.”
Meserole added, “If deepfakes are going to be leveraged for targeted military and intelligence operations, then their use needs to be subject to review and oversight.”
The pitch document, first published by SOCOM’s Directorate of Science and Technology in 2020, established a wish list of next-generation toys for the 21st century special forces commando, a litany of gadgets and futuristic tools that will help the country’s most elite soldiers more effectively hunt and kill their targets using lasers, robots, holographs, and other sophisticated hardware.
Last October, SOCOM quietly released an updated version of its wish list with a new section: “Advanced technologies for use in Military Information Support Operations (MISO),” a Pentagon euphemism for its global propaganda and deception efforts.
The added paragraph spells out SOCOM’s desire to obtain new and improved means of carrying out “influence operations, digital deception, communication disruption, and disinformation campaigns at the tactical edge and operational levels.” SOCOM is seeking “a next generation capability to collect disparate data through public and open source information streams such as social media, local media, etc. to enable MISO to craft and direct influence operations.”
SOCOM typically fights in the shadows, but its public reputation and global footprint loom large. Composed of elite units from the Army, Marine Corps, Navy, and Air Force, SOCOM leads the most sensitive military operations of the world’s most lethal nation.
While American special forces are widely known for splashy exploits like the Navy SEALs’ killing of Osama bin Laden, their history is one of secret missions, subterfuge, sabotage, and disruption campaigns. SOCOM’s “next generation” disinformation ambitions are only part of a long, vast history of deception efforts on the part of the U.S. military and intelligence apparatuses.
Special Operations Command, which is accepting proposals on these capabilities through 2025, did not respond to a request for comment.
Though Special Operations Command has for years coordinated foreign “influence operations,” these deception campaigns have come under renewed scrutiny. In December, The Intercept reported that SOCOM had convinced Twitter, in violation of its internal policies, to permit a network of sham accounts that spread phony news items of dubious accuracy, including a claim that the Iranian government was stealing the organs of Afghan civilians. Though the Twitter-based propaganda offensive didn’t make use of deepfakes, researchers found that Pentagon contractors employed machine learning-generated avatars to lend the fake accounts a degree of realism.
Provocatively, the updated capability document reveals that SOCOM wants to boost these internet deception efforts with the use of “next generation” deepfake videos, an increasingly effective method of generating lifelike digital video forgeries using machine learning. Special forces would use this faked footage to “generate messages and influence operations via non-traditional channels,” the document adds.
While deepfakes have largely remained fodder for entertainment and pornography, the potential for more dire applications is real. At the onset of Russia’s invasion of Ukraine, a shoddy deepfake of Ukrainian President Volodymyr Zelenskyy ordering troops to surrender began circulating on social media channels. Ethical considerations aside, the legality of militarized deepfakes in a conflict remains an open question, and it is not addressed in the SOCOM document.
As with foreign governmental “disinformation” campaigns, the U.S. has spent the past several years warning against the potent national security threat represented by deepfakes. The use of deepfakes to deliberately deceive, government authorities warn regularly, could have a deeply destabilizing effect on civilian populations exposed to them.
At the federal level, however, the conversation has revolved exclusively around the menace foreign-made deepfakes might pose to the U.S., not the other way around. Previously reported contracting documents show SOCOM has sought technologies to detect deepfake-augmented internet campaigns, a tactic it now wants to unleash on its own.
Perhaps as provocative as the mention of deepfakes is the section that follows, which notes SOCOM wishes to finely tune its offensive propaganda seemingly by spying on the intended audience through their internet-connected devices.
Described as a “next generation capability to ‘takeover’ Internet of Things (IoT) devices for collect [sic] data and information from local populaces to enable breakdown of what messaging might be popular and accepted through sifting of data once received,” the document says that the ability to eavesdrop on propaganda targets “would enable MISO to craft and promote messages that may be more readily received by local populace.” In 2017, WikiLeaks published pilfered CIA files that revealed a roughly similar capability to hijack household devices.
The technology behind deepfake videos first arrived in 2017, spurred by a combination of cheap, powerful computer hardware and research breakthroughs in machine learning. Deepfake videos are typically made by feeding images of an individual to a computer and using the resultant computerized analysis to essentially paste a highly lifelike simulacrum of that face onto another.
“The capacity for societal harm is certainly there.”
Once the software has been sufficiently trained, its user can crank out realistic fabricated footage of a target saying or doing virtually anything. The technology’s ease of use and increasing accuracy has prompted fears of an era in which the global public can no longer believe what it sees with its own eyes.
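To make the mechanics concrete, here is a minimal, hypothetical PyTorch sketch of the classic face-swap architecture this description refers to: one shared encoder learns a common representation of faces, a separate decoder is trained per identity, and the “swap” happens by routing one person’s encoding through the other person’s decoder. The random tensors stand in for aligned face crops; a real system would train far longer on thousands of images per identity, and nothing here reflects any specific tool named in the reporting.

```python
# Hypothetical sketch of the shared-encoder / per-identity-decoder design
# commonly used for face-swap deepfakes. Placeholder data, not a real tool.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity
params = (list(encoder.parameters())
          + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.L1Loss()

faces_a = torch.rand(8, 3, 64, 64)  # stand-in for person A's face crops
faces_b = torch.rand(8, 3, 64, 64)  # stand-in for person B's face crops

for step in range(100):
    opt.zero_grad()
    # Each decoder learns to reconstruct its own identity from the shared code.
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    opt.step()

# The "swap": encode person A's footage, decode it as person B.
fake_b = decoder_b(encoder(faces_a))
```

The design choice is what makes the forgery work: because both identities pass through the same encoder, the shared representation captures pose and expression, while each decoder supplies the appearance of its own identity.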
Though major social platforms like Facebook have rules against deepfakes, given the inherently fluid and interconnected nature of the internet, Pentagon-disseminated deepfakes might also risk flowing back to the American homeland.
“If it’s a nontraditional media environment, I could imagine the form of manipulation getting pretty far before getting stopped or rebuked by some sort of local authority,” Max Rizzuto, a deepfakes researcher with the Atlantic Council’s Digital Forensic Research Lab, told The Intercept. “The capacity for societal harm is certainly there.”
SOCOM’s interest in deploying deepfake disinformation campaigns follows recent years of international anxiety about forged videos and digital deception from international adversaries. Though there’s scant evidence Russia’s efforts to digitally sway the 2016 election had any meaningful effect, the Pentagon has expressed an interest in redoubling its digital propaganda capabilities, lest it fall behind, with SOCOM taking on a crucial role.
At an April 2018 hearing of the Senate Armed Services Committee, Gen. Kenneth Tovo of the Army Special Operations Command assured the assembled senators that American special forces were working to close the propaganda gap.
“We have invested fairly heavily in our psy-op operators,” he said, “developing new capabilities, particularly to deal in the digital space, social media analysis and a variety of different tools that have been fielded by SOCOM that allow us to evaluate the social media space, evaluate the cyber domain, see trend analysis, where opinion is moving, and then how to potentially influence that environment with our own products.”
While military propaganda is as old as war itself, deepfakes have frequently been discussed as a sui generis technological danger, the existence of which poses a civilizational threat.
At a 2018 Senate Intelligence Committee hearing discussing the nomination of William Evanina to run the National Counterintelligence and Security Center, Sen. Marco Rubio, R-Fla., said of deepfakes, “I believe this is the next wave of attacks against America and Western democracies.” Evanina, in response, reassured Rubio that the U.S. intelligence community was working to counter the threat of deepfakes.
The Pentagon is also reportedly hard at work countering the foreign deepfake threat. According to a 2018 news report, the Defense Advanced Research Projects Agency, the military’s tech research division, has spent tens of millions of dollars developing methods to detect deepfaked imagery. Similar efforts are underway throughout the Department of Defense.
In 2019, Rubio and Sen. Mark Warner, D-Va., wrote to 11 American internet companies urging them to draft policies to detect and remove deepfake videos. “If the public can no longer trust recorded events or images,” read the letter, “it will have a corrosive impact on our democracy.”
Nestled within the National Defense Authorization Act for Fiscal Year 2021 was a directive instructing the Pentagon to complete an “intelligence assessment of the threat posed by foreign government and non-state actors creating or using machine-manipulated media (commonly referred to as ‘deep fakes’),” including “how such media has been used or might be used to conduct information warfare.”
Just a couple years later, American special forces seem to be gearing up to conduct the very same.
“It’s a dangerous technology,” said Rizzuto, the Atlantic Council researcher.
“You can’t moderate this tech the way we approach other sorts of content on the internet,” he said. “Deepfakes as a technology have more in common with conversations around nuclear nonproliferation.”
Are you ready for “brain transparency?” That’s the question posed in a lecture given by Duke University professor Nita Farahany at this year’s annual meeting of the World Economic Forum in Davos, Switzerland. And she doesn’t mean your head looking like one of those see-through fish at the bottom of the ocean.
Instead, Farahany, a high-profile scholar and legal ethicist focused on emerging tech, rather glibly predicts a future in which corporations and governments will be able to read your mind. In fact, that technology — the “ability to decode brainwave activity” — is already here, she claims.
“We’re not talking about implanted devices of the future,” she tells her audience. “I’m talking about wearable devices that are like FitBits for your brain,” that can pick up your mind’s emotional states, simple shapes you may be thinking of, or even faces.
Farahany adds, though, that “we can’t literally decode complex thoughts just yet.”
To illustrate her vision for the tech, she invokes a tragic vehicular accident caused by a trucker falling asleep at the wheel. If only he had been wearing a fancy hat with embedded electrode sensors that tell his employer, on a scale of one through five, how alert he was, the company could have avoided an accident that was “disastrous for the company and cost many lives” (note the order of priorities).
“Which is why in 5,000 companies across the world, employees are already having their brainwave activity monitored to test for their fatigue levels,” Farahany says. She cites mining operations — including one of the biggest mining companies in the world — that have their employees wear hardhat- and baseball-cap-like devices that detect fatigue. No mention, of course, of alleviating the conditions that lead to overfatigued workers in the first place.
But never mind safety — she quickly pivots into the all-important metric of productivity.
“Surveillance for productivity is part of what has become the norm in the workplace — and maybe with good reason,” she avers, citing a survey that found nine out of ten employees admitted to the cardinal sin of wasting “at least some time” at work each day — ample justification for the growing ubiquity of bossware, a type of software that’s typically used to surveil what employees (especially those that work from home) do on their computers.
And don’t worry: the tech to monitor employees’ thoughts already exists, she notes, like ear pods that purport to detect if an employee’s mind is wandering, and can even distinguish between the types of tasks they’re focusing on, e.g. doing work versus idly browsing the web.
Farahany believes the optimal path forward is a “responsive” workplace where “humans, robots, and AI work seamlessly together.” One example she cites: Penn State researchers who created an overlord robot AI that monitors a worker’s stress levels via brainwaves and other metrics, and calibrates the rate at which it assigns them new tasks.
She acknowledges that the technology “also has a dystopian possibility.” “Done well, neurotechnology has extraordinary promise,” she says. “Done poorly, it could become the most oppressive technology we’ve ever introduced in a wide scale across society. We still have the chance to make it right.”
“But we can make a choice to use it well,” Farahany proclaims. “We can make a choice to have it be something that empowers individuals.”
Her enthusiasm for this nightmarish tech is off-putting, but befitting of an economic forum. Yet perhaps the most sinister thing Farahany presents us with is a false dichotomy, as if our only choices are between employers using brain monitoring technology in an evil way and employers using it in a good way that “empowers individuals.” In her telling, if employees choose to opt into using invasive brain tech to hold themselves more accountable, rather than being formally required to by their employer, the ethical dilemma is averted. But if employees don’t get to make those decisions for themselves now, what makes her think they will be able to in the future?
Ultimately, her rhetoric and the false dichotomy she presents serve to placate us into accepting a future where the widespread use of increasingly invasive surveillance devices is the norm. Accept it now, with naive and vague promises of accountability, and we can avoid a dystopian future. The “choice” doesn’t matter. All that matters is that you’re willing to embrace the technology, one way or another.
Of course, she’s not addressing the working class masses here, but a highly select group of businesspeople, investors, economists, and world leaders who will want to make that “choice” for you. And whether through their own mouths or carefully orchestrated marketing, it’ll likely be sold to you using the same rhetoric used here. Better to recognize it now in the hopes of one day making a third choice for ourselves.
The rising tide of policing by robots and drones may seem relentless or even inevitable. But activism, legislative advocacy, and public outrage can do a lot to protect our safety and freedom from these technologies.
This year began with a report that elucidated what police are doing with drones. The answer? Not much, for now. A law in Minnesota mandates that police departments report every drone deployment and the reason for it. We’ve long suspected that police have few clear uses for drones other than invasive surveillance, and the Minnesota report reveals that drones were deployed mostly for training purposes.
One purpose Axon hoped to find for drones this year was stopping school shooters. The company announced it was developing a drone with a mounted taser for subduing people in dangerous situations. The backlash was immediate: after a majority of Axon’s ethics board resigned, the company paused the project.
In Oakland and in San Francisco, activists defeated municipal plans to authorize police to use deadly force with remote-controlled robots. In Oakland, police hoped to use a shotgun-mounted robot, a plan that received so much backlash the proposal was pulled in just a few days. In San Francisco, it took a little longer. After the Board of Supervisors voted 8-to-3 to authorize police to use robots strapped with bombs to deploy deadly force, an EFF-led coalition mobilized. After one week, which included a rally and international press attention, the Board of Supervisors reversed course.
Of course, no fight stays won. Robot companies still want to make money. Police still want to send robots to do their work. The Department of Homeland Security still has plans to test autonomous robot dogs on the U.S. border as part of its massive infrastructure of border surveillance. But, with enough organizing, lobbying, and a fair bit of outrage, we can resist and often win.
Twitter executives have claimed for years that the company makes concerted efforts to detect and thwart government-backed covert propaganda campaigns on its platform.
Behind the scenes, however, the social networking giant provided direct approval and internal protection to the U.S. military’s network of social media accounts and online personas, whitelisting a batch of accounts at the request of the government. The Pentagon has used this network, which includes U.S. government-generated news portals and memes, in an effort to shape opinion in Yemen, Syria, Iraq, Kuwait, and beyond.
The accounts in question started out openly affiliated with the U.S. government. But then the Pentagon appeared to shift tactics and began concealing its affiliation with some of these accounts — a move toward the type of intentional platform manipulation that Twitter has publicly opposed. Though Twitter executives maintained awareness of the accounts, they did not shut them down, but let them remain active for years. Some remain active.
The revelations are buried in the archives of Twitter’s emails and internal tools, to which The Intercept was granted access for a brief period last week alongside a handful of other writers and reporters. Following Elon Musk’s purchase of Twitter, the billionaire started giving access to company documents, saying in a Twitter Space that “the general idea is to surface anything bad Twitter has done in the past.” The files, which included records generated under Musk’s ownership, provide unprecedented, if incomplete, insight into decision-making within a major social media company.
Twitter did not provide unfettered access to company information; rather, for three days last week, they allowed me to make requests without restriction that were then fulfilled on my behalf by an attorney, meaning that the search results may not have been exhaustive. I did not agree to any conditions governing the use of the documents, and I made efforts to authenticate and contextualize the documents through further reporting. The redactions in the embedded documents in this story were done by The Intercept to protect privacy, not Twitter.
The direct assistance Twitter provided to the Pentagon goes back at least five years.
On July 26, 2017, Nathaniel Kahler, at the time an official working with U.S. Central Command — also known as CENTCOM, a division of the Defense Department — emailed a Twitter representative with the company’s public policy team, with a request to approve the verification of one account and “whitelist” a list of Arab-language accounts “we use to amplify certain messages.”
“We’ve got some accounts that are not indexing on hashtags — perhaps they were flagged as bots,” wrote Kahler. “A few of these had built a real following and we hope to salvage.” Kahler added that he was happy to provide more paperwork from his office or SOCOM, the acronym for the U.S. Special Operations Command.
Twitter at the time had built out an expanded abuse detection system aimed in part toward flagging malicious activity related to the Islamic State and other terror organizations operating in the Middle East. As an indirect consequence of these efforts, one former Twitter employee explained to The Intercept, accounts controlled by the military that were frequently engaging with extremist groups were being automatically flagged as spam. The former employee, who was involved with the whitelisting of CENTCOM accounts, spoke with The Intercept under condition of anonymity because they were not authorized to speak publicly.
In his email, Kahler sent a spreadsheet with 52 accounts. He asked for priority service for six of the accounts, including @yemencurrent, an account used to broadcast announcements about U.S. drone strikes in Yemen. Around the same time, @yemencurrent, which has since been deleted, had emphasized that U.S. drone strikes were “accurate” and killed terrorists, not civilians, and promoted the U.S. and Saudi-backed assault on Houthi rebels in that country.
Other accounts on the list were focused on promoting U.S.-supported militias in Syria and anti-Iran messages in Iraq. One account discussed legal issues in Kuwait. Though many accounts remained focused on one topic area, others moved from topic to topic. For instance, @dala2el, one of the CENTCOM accounts, shifted from messaging around drone strikes in Yemen in 2017 to Syrian government-focused communications this year.
On the same day that CENTCOM sent its request, members of Twitter’s site integrity team went into an internal company system used for managing the reach of various users and applied a special exemption tag to the accounts, internal logs show.
One engineer, who asked not to be named because he was not authorized to speak to the media, said that he had never seen this type of tag before, but upon close inspection, said that the effect of the “whitelist” tag essentially gave the accounts the privileges of Twitter verification without a visible blue check. Twitter verification would have bestowed a number of advantages, such as invulnerability to algorithmic bots that flag accounts for spam or abuse, as well as other strikes that lead to decreased visibility or suspension.
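For illustration only, here is a small, hypothetical Python sketch of how an exemption tag like the one the engineer describes could work in principle: an automated classifier scores accounts for spam, but a whitelist-style tag short-circuits enforcement entirely. Every name in this sketch (Account, score_spam, “whitelist_exemption”) is invented for the example and does not reflect Twitter’s actual systems.

```python
# Hypothetical illustration of an exemption tag bypassing automated
# enforcement. All names are invented; this is not Twitter's real code.
from dataclasses import dataclass, field

@dataclass
class Account:
    handle: str
    tags: set = field(default_factory=set)

def score_spam(account: Account) -> float:
    # Stand-in for an ML spam/abuse classifier.
    return 0.9 if account.handle.endswith("_bot") else 0.1

def should_limit_reach(account: Account, threshold: float = 0.5) -> bool:
    # A whitelist-style tag skips the classifier entirely, giving the
    # account verification-like immunity from automated flagging.
    if "whitelist_exemption" in account.tags:
        return False
    return score_spam(account) > threshold

flagged = Account("suspicious_bot")
exempt = Account("suspicious_bot", tags={"whitelist_exemption"})
print(should_limit_reach(flagged))  # True: the classifier flags it
print(should_limit_reach(exempt))   # False: the exemption tag overrides
```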
Kahler told Twitter that the accounts would all be “USG-attributed, Arabic-language accounts tweeting on relevant security issues.” That promise fell short, as many of the accounts subsequently deleted disclosures of affiliation with the U.S. government.
The Internet Archive does not preserve the full history of every account, but The Intercept identified several accounts that initially listed themselves as U.S. government accounts in their bios, but, after being whitelisted, shed any disclosure that they were affiliated with the military and posed as ordinary users.
This appears to align with a major report published in August by online security researchers affiliated with the Stanford Internet Observatory, which reported on thousands of accounts that they suspected to be part of a state-backed information operation, many of which used photorealistic human faces generated by artificial intelligence, a practice also known as “deep fakes.”
The researchers connected these accounts with a vast online ecosystem that included “fake news” websites, meme accounts on Telegram and Facebook, and online personalities that echoed Pentagon messages often without disclosure of affiliation with the U.S. military. Some of the accounts accuse Iran of “threatening Iraq’s water security and flooding the country with crystal meth,” while others promoted allegations that Iran was harvesting the organs of Afghan refugees.
The Stanford report did not definitively tie the sham accounts to CENTCOM or provide a complete list of Twitter accounts. But the emails obtained by The Intercept show that the creation of at least one of these accounts was directly affiliated with the Pentagon.
“It’s deeply concerning if the Pentagon is working to shape public opinion about our military’s role abroad and even worse if private companies are helping to conceal it.”
One of the accounts that Kahler asked to have whitelisted, @mktashif, was identified by the researchers as appearing to use a deep-fake photo to obscure its real identity. Initially, according to the Wayback Machine, @mktashif did disclose that it was a U.S. government account affiliated with CENTCOM, but at some point, this disclosure was deleted and the account’s photo was changed to the one Stanford identified as a deep fake.
The new Twitter bio claimed that the account was an unbiased source of opinion and information, and, roughly translated from Arabic, “dedicated to serving Iraqis and Arabs.” The account, before it was suspended earlier this year, routinely tweeted messages denouncing Iran and other U.S. adversaries, including Houthi rebels in Yemen.
Another CENTCOM account, @althughur, which posts anti-Iran and anti-ISIS content focused on an Iraqi audience, changed its Twitter bio from a CENTCOM affiliation to an Arabic phrase that simply reads “Euphrates pulse.”
The former Twitter employee told The Intercept that they were surprised to learn of the Defense Department’s shifting tactics. “It sounds like DOD was doing something shady and definitely not in line with what they had presented to us at the time,” they said.
Twitter did not respond to a request for comment.
“It’s deeply concerning if the Pentagon is working to shape public opinion about our military’s role abroad and even worse if private companies are helping to conceal it,” said Erik Sperling, the executive director of Just Foreign Policy, a nonprofit that works toward diplomatic solutions to foreign conflicts.
“Congress and social media companies should investigate and take action to ensure that, at the very least, our citizens are fully informed when their tax money is being spent on putting a positive spin on our endless wars,” Sperling added.
Nick Pickles, public policy director for Twitter, speaks during a full committee hearing on “Mass Violence, Extremism, and Digital Responsibility,” in Washington, D.C., on Sept. 18, 2019.
Photo: Olivier Douliery/AFP via Getty Images
For many years, Twitter has pledged to shut down all state-backed disinformation and propaganda efforts, never making an explicit exception for the U.S. In 2020, Twitter spokesperson Nick Pickles, in testimony before the House Intelligence Committee, said that the company was taking aggressive action to shut down “coordinated platform manipulation efforts” attributed to government agencies.
“Combatting attempts to interfere in conversations on Twitter remains a top priority for the company, and we continue to invest heavily in our detection, disruption, and transparency efforts related to state-backed information operations. Our goal is to remove bad-faith actors and to advance public understanding of these critical topics,” said Pickles.
In 2018, for instance, Twitter announced the mass suspension of accounts tied to Russian government-linked propaganda efforts. Two years later, the company boasted of shutting down almost 1,000 accounts for association with the Thai military. But rules on platform manipulation, it appears, have not been applied to American military efforts.
The emails obtained by The Intercept show that not only did Twitter whitelist these accounts in 2017 explicitly at the behest of the military, but also that high-level officials at the company discussed the accounts as potentially problematic in the following years.
In the summer of 2020, officials from Facebook reportedly identified fake accounts attributed to CENTCOM’s influence operation on its platform and warned the Pentagon that if Silicon Valley could easily out these accounts as inauthentic, so could foreign adversaries, according to a September report in the Washington Post.
Twitter emails show that during that time in 2020, Facebook and Twitter executives were invited by the Pentagon’s top attorneys to attend classified briefings in a sensitive compartmented information facility, also known as a SCIF, used for highly sensitive meetings.
“Facebook have had a series of 1:1 conversations between their senior legal leadership and DOD’s [general counsel] re: inauthentic activity,” wrote Yoel Roth, then the head of trust and safety at Twitter. “Per FB,” continued Roth, “DOD have indicated a strong desire to work with us to remove the activity — but are now refusing to discuss additional details or steps outside of a classified conversation.”
Stacia Cardille, then an attorney with Twitter, noted in an email to her colleagues that the Pentagon may want to retroactively classify its social media activities “to obfuscate their activity in this space, and that this may represent an overclassification to avoid embarrassment.”
Jim Baker, then the deputy general counsel of Twitter, in the same thread, wrote that the Pentagon appeared to have used “poor tradecraft” in setting up various Twitter accounts, sought to potentially cover its tracks, and was likely seeking a strategy for avoiding public knowledge that the accounts are “linked to each other or to DoD or the USG.” Baker speculated that in the meeting the “DoD might want to give us a timetable for shutting them down in a more prolonged way that will not compromise any ongoing operations or reveal their connections to DoD.”
What was discussed at the classified meetings — which ultimately did take place, according to the Post — was not included in the Twitter emails provided to The Intercept, but many of the fake accounts remained active for at least another year. Some of the accounts on the CENTCOM list remain active even now — like this one, which includes affiliation with CENTCOM, and this one, which does not — while many were swept off the platform in a mass suspension on May 16.
In a separate email sent in May 2020, Lisa Roman, then a vice president of the company in charge of global public policy, emailed William S. Castle, a Pentagon attorney, along with Roth, with an additional list of Defense Department Twitter accounts. “The first tab lists those accounts previously provided to us and the second, associated accounts that Twitter has discovered,” wrote Roman. It’s not clear from this single email what Roman is requesting (she references a phone call preceding the email), but she notes that the second tab of accounts, the ones that had not been explicitly provided to Twitter by the Pentagon, “may violate our Rules.” The attachment included a batch of accounts tweeting in Russian and Arabic about human rights violations committed by ISIS. Many accounts in both tabs were not openly identified as affiliated with the U.S. government.
Twitter executives remained aware of the Defense Department’s special status. This past January, a Twitter executive recirculated the CENTCOM list of Twitter accounts originally whitelisted in 2017. The email simply read “FYI” and was directed to several Twitter officials, including Patrick Conlon, a former Defense Department intelligence analyst then working on the site integrity unit as Twitter’s global threat intelligence lead. Internal records also showed that the accounts that remained from Kahler’s original list are still whitelisted.
Following the mass suspension of many of the accounts this past May, Twitter’s team worked to limit blowback from its involvement in the campaign.
Shortly before publication of the Washington Post story in September, Katie Rosborough, then a communications specialist at Twitter, wrote to alert Twitter lawyers and lobbyists about the upcoming piece. “It’s a story that’s mostly focused on DoD and Facebook; however, there will be a couple lines that reference us alongside Facebook in that we reached out to them [DoD] for a meeting. We don’t think they’ll tie it to anything Mudge-related or name any Twitter employees. We declined to comment,” she wrote. (Mudge is a reference to Peiter Zatko, a Twitter whistleblower who filed a complaint with federal authorities in July, alleging lax security measures and penetration of the company by foreign agents.)
After the Washington Post’s story published, the Twitter team congratulated one another because the story minimized Twitter’s role in the CENTCOM psyop campaign. Instead, the story largely revolved around the Pentagon’s decision to begin a review of its clandestine psychological operations on social media.
“Thanks for doing all that you could to manage this one,” wrote Rebecca Hahn, another former Twitter communications official. “It didn’t seem to get too much traction beyond verge, cnn and wapo editors promoting.”
CENTCOM did not initially provide comment to The Intercept. Following publication of this story, CENTCOM’s media desk referred The Intercept to Brigadier Gen. Pat Ryder’s comments in a September briefing, in which he said that the Pentagon had requested “a review of Department of Defense military information support activities, which is simply meant to be an opportunity for us to assess the current work that’s being done in this arena, and really shouldn’t be interpreted as anything beyond that.”
The U.S. military and intelligence community have long pursued a strategy of fabricated online personas and third parties to amplify certain narratives in foreign countries, the idea being that an authentic-looking Persian-language news portal or a local Afghan woman would have greater organic influence than an official Pentagon press release.
Military online propaganda efforts have largely been governed by a 2006 memorandum. The memo notes that the Defense Department’s internet activities should “openly acknowledge U.S. involvement” except in cases when a “Combatant Commander believes that it will not be possible due to operational considerations.” This method of nondisclosure, the memo states, is only authorized for operations in the “Global War on Terrorism, or when specified in other Secretary of Defense execute orders.”
In 2019, lawmakers passed a measure known as Section 1631, a reference to a provision of the National Defense Authorization Act, further legally affirming clandestine psychological operations by the military in a bid to counter online disinformation campaigns by Russia, China, and other foreign adversaries.
In 2008, the U.S. Special Operations Command opened a request for a service to provide “web-based influence products and tools in support of strategic and long-term U.S. Government goals and objectives.” The contract referred to the Trans-Regional Web Initiative, an effort to create online news sites designed to win hearts and minds in the battle to counter Russian influence in Central Asia and global Islamic terrorism. The contract was initially carried out by General Dynamics Information Technology, a subsidiary of the defense contractor General Dynamics, in connection with CENTCOM communication offices in the Washington, D.C., area and in Tampa, Florida.
A program known as “WebOps,” run by a defense contractor known as Colsa Corp., was used to create fictitious online identities designed to counter online recruitment efforts by ISIS and other terrorist networks.
The Intercept spoke to a former employee of a contractor — on the condition of anonymity for legal protection — engaged in these online propaganda networks for the Trans-Regional Web Initiative. He described a loose newsroom-style operation, employing former journalists, operating out of a generic suburban office building.
“Generally what happens, at the time when I was there, CENTCOM will develop a list of messaging points that they want us to focus on,” said the contractor. “Basically, they would, we want you to focus on say, counterterrorism and a general framework that we want to talk about.”
From there, he said, supervisors would help craft content that was distributed through a network of CENTCOM-controlled websites and social media accounts. As the contractors created content to support narratives from military command, they were instructed to tag each content item with a specific military objective. Generally, the contractor said, the news items he created were technically factual but always crafted in a way that closely reflected the Pentagon’s goals.
“We had some pressure from CENTCOM to push stories,” he added, while noting that he worked at the sites years ago, before the transition to more covert operations. At the time, “we weren’t doing any of that black-hat stuff.”
Update: December 20, 2022, 4:17 p.m.
This story has been updated with information provided by CENTCOM following publication.