The rising tide of policing by robots and drones may seem relentless or even inevitable. But activism, legislative advocacy, and public outrage can do a lot to protect our safety and freedom from these technologies.
This year began with a report that shed light on what police are doing with drones. The answer? Not much, for now. A Minnesota law requires police departments to report every drone deployment and the reason for it. We have long suspected that police have few clear uses for drones other than invasive surveillance, and the Minnesota report shows that drones were deployed mostly for training purposes.
One purpose Axon hoped to find for drones this year was stopping school shooters. The company announced it was developing a drone with a mounted Taser for subduing people in dangerous situations. The backlash was immediate. After a majority of Axon’s ethics board resigned, the company paused the project.
In Oakland and in San Francisco, activists defeated municipal plans to authorize police to use deadly force with remote-controlled robots. In Oakland, police hoped to use a shotgun-mounted robot, a plan that received so much backlash the proposal was pulled in just a few days. In San Francisco, it took a little longer. After the Board of Supervisors voted 8-to-3 to authorize police to use robots strapped with bombs to deploy deadly force, an EFF-led coalition mobilized. After one week, which included a rally and international press attention, the Board of Supervisors reversed course.
Of course, no fight stays won. Robot companies still want to make money. Police still want to send robots to do their work. The Department of Homeland Security still has plans to test autonomous robot dogs on the U.S. border as part of its massive infrastructure of border surveillance. But, with enough organizing, lobbying, and a fair bit of outrage, we can resist and often win.
Twitter executives have claimed for years that the company makes concerted efforts to detect and thwart government-backed covert propaganda campaigns on its platform.
Behind the scenes, however, the social networking giant provided direct approval and internal protection to the U.S. military’s network of social media accounts and online personas, whitelisting a batch of accounts at the request of the government. The Pentagon has used this network, which includes U.S. government-generated news portals and memes, in an effort to shape opinion in Yemen, Syria, Iraq, Kuwait, and beyond.
The accounts in question started out openly affiliated with the U.S. government. But then the Pentagon appeared to shift tactics and began concealing its affiliation with some of these accounts — a move toward the type of intentional platform manipulation that Twitter has publicly opposed. Twitter executives were aware of the accounts, yet rather than shutting them down, they let them remain active for years. Some remain active today.
The revelations are buried in the archives of Twitter’s emails and internal tools, to which The Intercept was granted access for a brief period last week alongside a handful of other writers and reporters. Following Elon Musk’s purchase of Twitter, the billionaire started giving access to company documents, saying in a Twitter Space that “the general idea is to surface anything bad Twitter has done in the past.” The files, which included records generated under Musk’s ownership, provide unprecedented, if incomplete, insight into decision-making within a major social media company.
Twitter did not provide unfettered access to company information; rather, for three days last week, they allowed me to make requests without restriction that were then fulfilled on my behalf by an attorney, meaning that the search results may not have been exhaustive. I did not agree to any conditions governing the use of the documents, and I made efforts to authenticate and contextualize the documents through further reporting. The redactions in the embedded documents in this story were done by The Intercept to protect privacy, not Twitter.
The direct assistance Twitter provided to the Pentagon goes back at least five years.
On July 26, 2017, Nathaniel Kahler, at the time an official working with U.S. Central Command — also known as CENTCOM, a division of the Defense Department — emailed a Twitter representative on the company’s public policy team with a request to approve the verification of one account and “whitelist” a list of Arabic-language accounts “we use to amplify certain messages.”
“We’ve got some accounts that are not indexing on hashtags — perhaps they were flagged as bots,” wrote Kahler. “A few of these had built a real following and we hope to salvage.” Kahler added that he was happy to provide more paperwork from his office or SOCOM, the acronym for the U.S. Special Operations Command.
Twitter at the time had built out an expanded abuse detection system aimed in part at flagging malicious activity related to the Islamic State and other terror organizations operating in the Middle East. As an indirect consequence of these efforts, one former Twitter employee explained to The Intercept, accounts controlled by the military that were frequently engaging with extremist groups were being automatically flagged as spam. The former employee, who was involved with the whitelisting of CENTCOM accounts, spoke with The Intercept on condition of anonymity because they were not authorized to speak publicly.
In his email, Kahler sent a spreadsheet with 52 accounts. He asked for priority service for six of the accounts, including @yemencurrent, an account used to broadcast announcements about U.S. drone strikes in Yemen. Around the same time, @yemencurrent, which has since been deleted, had emphasized that U.S. drone strikes were “accurate” and killed terrorists, not civilians, and promoted the U.S. and Saudi-backed assault on Houthi rebels in that country.
Other accounts on the list were focused on promoting U.S.-supported militias in Syria and anti-Iran messages in Iraq. One account discussed legal issues in Kuwait. Though many accounts remained focused on one topic area, others moved from topic to topic. For instance, @dala2el, one of the CENTCOM accounts, shifted from messaging around drone strikes in Yemen in 2017 to Syrian government-focused communications this year.
On the same day that CENTCOM sent its request, members of Twitter’s site integrity team went into an internal company system used for managing the reach of various users and applied a special exemption tag to the accounts, internal logs show.
One engineer, who asked not to be named because he was not authorized to speak to the media, said that he had never seen this type of tag before, but upon close inspection, said that the effect of the “whitelist” tag essentially gave the accounts the privileges of Twitter verification without a visible blue check. Twitter verification would have bestowed a number of advantages, such as invulnerability to algorithmic bots that flag accounts for spam or abuse, as well as other strikes that lead to decreased visibility or suspension.
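For readers unfamiliar with how such exemptions work, an allowlist tag of this kind can be thought of as a check that runs before automated enforcement ever applies. The sketch below is a hypothetical, heavily simplified illustration of that idea only; every name, threshold, and rule in it is invented, and none of it reflects Twitter’s actual code or systems.

```python
# Hypothetical illustration of an allowlist ("whitelist") exemption check.
# All names, data, and logic here are invented; this is not Twitter's code.

WHITELISTED_ACCOUNTS = {"@example_account"}  # accounts granted the exemption tag

def spam_score(account: str) -> float:
    """Stand-in for an automated spam/abuse classifier (fabricated logic)."""
    return 0.9 if account.endswith("_bot") else 0.2

def should_limit_visibility(account: str, threshold: float = 0.8) -> bool:
    # The exemption short-circuits enforcement: flagged behavior never
    # reduces the reach of an allowlisted account.
    if account in WHITELISTED_ACCOUNTS:
        return False
    return spam_score(account) >= threshold

print(should_limit_visibility("@example_account"))  # False, regardless of score
print(should_limit_visibility("@suspicious_bot"))   # True
```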
Kahler told Twitter that the accounts would all be “USG-attributed, Arabic-language accounts tweeting on relevant security issues.” That promise fell short, as many of the accounts subsequently deleted disclosures of affiliation with the U.S. government.
The Internet Archive does not preserve the full history of every account, but The Intercept identified several accounts that initially listed themselves as U.S. government accounts in their bios and, after being whitelisted, shed any disclosure that they were affiliated with the military and posed as ordinary users.
This appears to align with a major report published in August by online security researchers affiliated with the Stanford Internet Observatory, which reported on thousands of accounts that they suspected to be part of a state-backed information operation, many of which used photorealistic human faces generated by artificial intelligence, a practice also known as “deep fakes.”
The researchers connected these accounts with a vast online ecosystem that included “fake news” websites, meme accounts on Telegram and Facebook, and online personalities that echoed Pentagon messages, often without disclosure of affiliation with the U.S. military. Some of the accounts accused Iran of “threatening Iraq’s water security and flooding the country with crystal meth,” while others promoted allegations that Iran was harvesting the organs of Afghan refugees.
The Stanford report did not definitively tie the sham accounts to CENTCOM or provide a complete list of Twitter accounts. But the emails obtained by The Intercept show that the creation of at least one of these accounts was directly affiliated with the Pentagon.
One of the accounts that Kahler asked to have whitelisted, @mktashif, was identified by the researchers as appearing to use a deep-fake photo to obscure its real identity. Initially, according to the Wayback Machine, @mktashif did disclose that it was a U.S. government account affiliated with CENTCOM, but at some point, this disclosure was deleted and the account’s photo was changed to the one Stanford identified as a deep fake.
The new Twitter bio claimed that the account was an unbiased source of opinion and information, and, roughly translated from Arabic, “dedicated to serving Iraqis and Arabs.” The account, before it was suspended earlier this year, routinely tweeted messages denouncing Iran and other U.S. adversaries, including Houthi rebels in Yemen.
Another CENTCOM account, @althughur, which posts anti-Iran and anti-ISIS content focused on an Iraqi audience, changed its Twitter bio from a CENTCOM affiliation to an Arabic phrase that simply reads “Euphrates pulse.”
The former Twitter employee told The Intercept that they were surprised to learn of the Defense Department’s shifting tactics. “It sounds like DOD was doing something shady and definitely not in line with what they had presented to us at the time,” they said.
Twitter did not respond to a request for comment.
“It’s deeply concerning if the Pentagon is working to shape public opinion about our military’s role abroad and even worse if private companies are helping to conceal it,” said Erik Sperling, the executive director of Just Foreign Policy, a nonprofit that works toward diplomatic solutions to foreign conflicts.
“Congress and social media companies should investigate and take action to ensure that, at the very least, our citizens are fully informed when their tax money is being spent on putting a positive spin on our endless wars,” Sperling added.

Nick Pickles, public policy director for Twitter, speaks during a full committee hearing on “Mass Violence, Extremism, and Digital Responsibility,” in Washington, D.C., on Sept. 18, 2019.
Photo: Olivier Douliery/AFP via Getty Images
For many years, Twitter has pledged to shut down all state-backed disinformation and propaganda efforts, never making an explicit exception for the U.S. In 2020, Twitter spokesperson Nick Pickles, in testimony before the House Intelligence Committee, said that the company was making aggressive efforts to shut down “coordinated platform manipulation efforts” attributed to government agencies.
“Combatting attempts to interfere in conversations on Twitter remains a top priority for the company, and we continue to invest heavily in our detection, disruption, and transparency efforts related to state-backed information operations. Our goal is to remove bad-faith actors and to advance public understanding of these critical topics,” said Pickles.
In 2018, for instance, Twitter announced the mass suspension of accounts tied to Russian government-linked propaganda efforts. Two years later, the company boasted of shutting down almost 1,000 accounts for association with the Thai military. But rules on platform manipulation, it appears, have not been applied to American military efforts.
The emails obtained by The Intercept show that not only did Twitter whitelist these accounts in 2017 explicitly at the behest of the military, but also that high-level officials at the company discussed the accounts as potentially problematic in the following years.
In the summer of 2020, officials from Facebook reportedly identified fake accounts attributed to CENTCOM’s influence operation on its platform and warned the Pentagon that if Silicon Valley could easily out these accounts as inauthentic, so could foreign adversaries, according to a September report in the Washington Post.
Twitter emails show that during that time in 2020, Facebook and Twitter executives were invited by the Pentagon’s top attorneys to attend classified briefings in a sensitive compartmented information facility, also known as a SCIF, used for highly sensitive meetings.
“Facebook have had a series of 1:1 conversations between their senior legal leadership and DOD’s [general counsel] re: inauthentic activity,” wrote Yoel Roth, then the head of trust and safety at Twitter. “Per FB,” continued Roth, “DOD have indicated a strong desire to work with us to remove the activity — but are now refusing to discuss additional details or steps outside of a classified conversation.”
Stacia Cardille, then an attorney with Twitter, noted in an email to her colleagues that the Pentagon may want to retroactively classify its social media activities “to obfuscate their activity in this space, and that this may represent an overclassification to avoid embarrassment.”
Jim Baker, then the deputy general counsel of Twitter, in the same thread, wrote that the Pentagon appeared to have used “poor tradecraft” in setting up various Twitter accounts, sought to potentially cover its tracks, and was likely seeking a strategy for avoiding public knowledge that the accounts are “linked to each other or to DoD or the USG.” Baker speculated that in the meeting the “DoD might want to give us a timetable for shutting them down in a more prolonged way that will not compromise any ongoing operations or reveal their connections to DoD.”
What was discussed at the classified meetings — which ultimately did take place, according to the Post — was not included in the Twitter emails provided to The Intercept, but many of the fake accounts remained active for at least another year. Some of the accounts on the CENTCOM list remain active even now — like this one, which includes affiliation with CENTCOM, and this one, which does not — while many were swept off the platform in a mass suspension on May 16.
In a separate email sent in May 2020, Lisa Roman, then a vice president of the company in charge of global public policy, emailed William S. Castle, a Pentagon attorney, along with Roth, with an additional list of Defense Department Twitter accounts. “The first tab lists those accounts previously provided to us and the second, associated accounts that Twitter has discovered,” wrote Roman. It’s not clear from this single email what Roman was requesting — she references a phone call preceding the email — but she notes that the second tab of accounts — the ones that had not been explicitly provided to Twitter by the Pentagon — “may violate our Rules.” The attachment included a batch of accounts tweeting in Russian and Arabic about human rights violations committed by ISIS. Many accounts in both tabs were not openly identified as affiliated with the U.S. government.
Twitter executives remained aware of the Defense Department’s special status. This past January, a Twitter executive recirculated the CENTCOM list of Twitter accounts originally whitelisted in 2017. The email simply read “FYI” and was directed to several Twitter officials, including Patrick Conlon, a former Defense Department intelligence analyst then working on the site integrity unit as Twitter’s global threat intelligence lead. Internal records also showed that the accounts that remained from Kahler’s original list are still whitelisted.
Following the mass suspension of many of the accounts this past May, Twitter’s team worked to limit blowback from its involvement in the campaign.
Shortly before publication of the Washington Post story in September, Katie Rosborough, then a communications specialist at Twitter, wrote to alert Twitter lawyers and lobbyists about the upcoming piece. “It’s a story that’s mostly focused on DoD and Facebook; however, there will be a couple lines that reference us alongside Facebook in that we reached out to them [DoD] for a meeting. We don’t think they’ll tie it to anything Mudge-related or name any Twitter employees. We declined to comment,” she wrote. (Mudge is a reference to Peiter Zatko, a Twitter whistleblower who filed a complaint with federal authorities in July, alleging lax security measures and penetration of the company by foreign agents.)
After the Washington Post’s story published, the Twitter team congratulated one another because the story minimized Twitter’s role in the CENTCOM psyop campaign. Instead, the story largely revolved around the Pentagon’s decision to begin a review of its clandestine psychological operations on social media.
“Thanks for doing all that you could to manage this one,” wrote Rebecca Hahn, another former Twitter communications official. “It didn’t seem to get too much traction beyond verge, cnn and wapo editors promoting.”
CENTCOM did not initially provide comment to The Intercept. Following publication of this story, CENTCOM’s media desk referred The Intercept to Brigadier Gen. Pat Ryder’s comments in a September briefing, in which he said that the Pentagon had requested “a review of Department of Defense military information support activities, which is simply meant to be an opportunity for us to assess the current work that’s being done in this arena, and really shouldn’t be interpreted as anything beyond that.”
The U.S. military and intelligence community have long pursued a strategy of fabricated online personas and third parties to amplify certain narratives in foreign countries, the idea being that an authentic-looking Persian-language news portal or a local Afghan woman would have greater organic influence than an official Pentagon press release.
Military online propaganda efforts have largely been governed by a 2006 memorandum. The memo notes that the Defense Department’s internet activities should “openly acknowledge U.S. involvement” except in cases when a “Combatant Commander believes that it will not be possible due to operational considerations.” This method of nondisclosure, the memo states, is only authorized for operations in the “Global War on Terrorism, or when specified in other Secretary of Defense execute orders.”
In 2019, lawmakers passed a measure known as Section 1631, a reference to a provision of the National Defense Authorization Act, further legally affirming clandestine psychological operations by the military in a bid to counter online disinformation campaigns by Russia, China, and other foreign adversaries.
In 2008, the U.S. Special Operations Command opened a request for a service to provide “web-based influence products and tools in support of strategic and long-term U.S. Government goals and objectives.” The contract referred to the Trans-Regional Web Initiative, an effort to create online news sites designed to win hearts and minds in the battle to counter Russian influence in Central Asia and global Islamic terrorism. The contract was initially carried out by General Dynamics Information Technology, a subsidiary of the defense contractor General Dynamics, in connection with CENTCOM communication offices in the Washington, D.C., area and in Tampa, Florida.
A program known as “WebOps,” run by a defense contractor known as Colsa Corp., was used to create fictitious online identities designed to counter online recruitment efforts by ISIS and other terrorist networks.
The Intercept spoke to a former employee of a contractor — on the condition of anonymity for legal protection — engaged in these online propaganda networks for the Trans-Regional Web Initiative. He described a loose newsroom-style operation, employing former journalists, operating out of a generic suburban office building.
“Generally what happens, at the time when I was there, CENTCOM will develop a list of messaging points that they want us to focus on,” said the contractor. “Basically, they would, we want you to focus on say, counterterrorism and a general framework that we want to talk about.”
From there, he said, supervisors would help craft content that was distributed through a network of CENTCOM-controlled websites and social media accounts. As the contractors created content to support narratives from military command, they were instructed to tag each content item with a specific military objective. Generally, the contractor said, the news items he created were technically factual but always crafted in a way that closely reflected the Pentagon’s goals.
“We had some pressure from CENTCOM to push stories,” he added, while noting that he worked at the sites years ago, before the transition to more covert operations. At the time, “we weren’t doing any of that black-hat stuff.”
Update: December 20, 2022, 4:17 p.m.
This story has been updated with information provided by CENTCOM following publication.
Imagine, for a moment, the near future Amazon dreams of.
Every morning, you are gently awakened by the Amazon Halo Rise. From its perch on your nightstand, the round device has spent the night monitoring the movements of your body, the light in your room, and the space’s temperature and humidity. At the optimal moment in your sleep cycle, as calculated by a proprietary algorithm, the device’s light gradually brightens to mimic the natural warm hue of sunrise. Your Amazon Echo, plugged in somewhere nearby, automatically starts playing your favorite music as part of your wake-up routine. You ask the device about the day’s weather; it tells you to expect rain. Then it informs you that your next “Subscribe & Save” shipment of Amazon Elements Super Omega-3 softgels is out for delivery. On your way to the bathroom, a notification bubbles up on your phone from Amazon’s Neighbors app, which is populated with video footage from the area’s Amazon Ring cameras: Someone has been overturning garbage cans, leaving the community’s yards a total wreck. (Maybe it’s just raccoons.)
Standing at the sink, you glance at the Amazon Halo app, which is connected to your Amazon Halo fitness tracker. You feel awful, which is probably why the wearable is analyzing your tone of voice as “low energy” and “low positivity.” Your sleep score is dismal. After your morning rinse, you hear the Amazon Astro robot chasing your dog, Fred, down the hallway; you see on the Astro’s video feed that Fred is gnawing on your Amazon Essentials athletic sneaker. Your Ring doorbell sounds. The pills have arrived.
It would be a bit glib—and more than a little clichéd—to call this some kind of technological dystopia. Actually, dystopia wouldn’t be right, exactly: Dystopian fiction is generally speculative, whereas all of these items and services are real. At the end of September, Amazon announced a suite of tech products in its move toward “ambient intelligence,” which Amazon’s hardware chief, Dave Limp, described as technology and devices that slip into the background but are “always there,” collecting information and taking action against it.
This intense devotion to tracking and quantifying all aspects of our waking and non-waking hours is nothing new—see the Apple Watch, the Fitbit, social media writ large, and the smartphone in your pocket—but Amazon has been unusually explicit about its plans. The Everything Store is becoming an Everything Tracker, collecting and leveraging large amounts of personal data related to entertainment, fitness, health, and, it claims, security. It’s surveillance that millions of customers are opting in to.
I won’t be one of them. Growing up in Detroit under the specter of the police unit STRESS—an acronym for “Stop the Robberies, Enjoy Safe Streets”—armed me with a very specific perspective on surveillance and how it is deployed against Black communities. A key tactic of the unit was the deployment of surveillance in the city’s “high crime” areas. In two and a half years of operation during the 1970s, the unit killed 22 people, 21 of whom were Black. Decades later, Detroit—with its Project Greenlight web of cameras and a renewed commitment to ShotSpotter microphones, which purport to detect gunfire and help police respond without a 911 call—continues to be one of the Blackest and most surveilled cities in America. My work concentrates on how surveillance mechanisms are disproportionately deployed against Black folks; think of facial recognition falsely incriminating Black men, or the Los Angeles Police Department requesting Ring-doorbell footage of Black Lives Matter protests.
The conveniences promised by Amazon’s suite of products may seem divorced from this context; I am here to tell you that they’re not. These “smart” devices all fall under the umbrella of what the digital-studies scholar David Golumbia and I call “luxury surveillance”—that is, surveillance that people pay for and whose tracking, monitoring, and quantification features are understood by the user as benefits. These gadgets are analogous to the surveillance technologies deployed in Detroit and many other cities across the country in that they are best understood as mechanisms of control: They gather data, which are then used to affect behavior. Stripped of their gloss, these devices are similar to the ankle monitors and surveillance apps such as SmartLINK that are forced on people on parole or immigrants awaiting hearings. As the author and activist James Kilgore writes, “The ankle monitor—which for almost two decades was simply an analog device that informed authorities if the wearer was at home—has now grown into a sophisticated surveillance tool via the use of GPS capacity, biometric measurements, cameras, and audio recording.”
The functions Kilgore describes mirror those offered by wearables and other trackers that many people are happy to spend hundreds of dollars on. Gadgets such as Fitbits, Apple Watches, and the Amazon Halo are pitched more and more for their ability to gather data that help you control and modulate your behavior, whether that’s tracking your steps, looking at your breathing, or analyzing the tone of your voice. The externally imposed control of the formerly incarcerated becomes the self-imposed control of the individual.
Amazon and its Ring subsidiary deny allegations that their devices enable harmful surveillance and deepen racial inequities. “Ring’s mission is to make neighborhoods safer, and that means for everyone—not just certain communities,” Emma Daniels, a spokesperson for Amazon Ring, said in response to a request for comment. “We take these topics seriously, which is why Ring has conducted independent audits with credible third-party organizations like the NYU School of Law to ensure that the products and services we build promote equity, transparency, and accountability. With respect to Halo, no one views your personally identifiable Halo health data without your permission, and Halo Band and Halo View do not have GPS and cannot be used to track individuals.”
Here, it’s useful to remember that contexts shift very quickly when technology is involved. Ring approached the NYU School of Law in 2020 to audit its products—specifically, their impacts on privacy and policing. That report came out in December 2021 and promised to produce greater “transparency” where the company’s partnerships with law enforcement are concerned. This past July—just seven months later—Senator Edward Markey released a letter indicating that the company had given doorbell footage to police without the owners’ consent 11 times this year alone. (Amazon did not deny this in a statement to Politico, but it stressed that it does not give “anyone unfettered access to customer data or video.”)
And remember, GPS tracking isn’t the only form of surveillance. Health-monitoring and smart-home devices all play a role. Consumers may believe that they have nothing to fear (or hide) from these luxury-surveillance devices, or that adopting this technology could only benefit them. But these very devices are now leveraged against people by their employers, the government, their neighbors, stalkers, and domestic abusers. To buy into these ecosystems is to tacitly support their associated harms.
Hidden below all of this is the normalization of surveillance that consistently targets marginalized communities. The difference between a smartwatch and an ankle monitor is, in many ways, a matter of context: Who wears one for purported betterment, and who wears one because they are having state power enacted against them? Looking back to Detroit, surveillance cameras, facial recognition, and microphones are supposedly in place to help residents, although there is scant evidence that these technologies reduce crime. Meanwhile, the widespread adoption of surveillance technologies—even ones that offer supposed benefits—creates an environment where even more surveillance is deemed acceptable. After all, there are already cameras and microphones everywhere.
The luxury-surveillance market is huge and diverse—it is not just Amazon, of course. But Amazon is the market leader in key categories, and its language and product announcements paint a clear picture. (Note also that Apple and Google have yet to advertise an airborne security drone that patrols your hallways, as Amazon has.)
At the bottom of its press releases, Amazon reminds us that it is guided by four tenets, the first of which is “customer obsession rather than competitor focus.” It would be wise to remember that this obsession takes the form of rampant data gathering. What does it mean when one’s life becomes completely legible to tech companies? Taken as a whole, Amazon’s suite of consumer products threatens to turn every home into a fun-house-mirror version of a fulfillment center. Ultimately, we may be managed as consumers the way the company currently manages its workers—the only difference being that customers will pay for the privilege.
Today’s New York Times has a story about Russia’s powerful internet regulator, Roskomnadzor, whose collection of personal data about average Russians has, in the Times’s words, “catapulted Russia, along with authoritarian countries like China and Iran, to the forefront of nations that aggressively use technology as a tool of repression.”
A few weeks ago, the Times ran a story about China’s collection of personal data on its citizens through phone-tracking devices, voice prints, one of the largest DNA databases in the world, facial recognition technology, and more than half of the world’s nearly one billion surveillance cameras.
This is important and useful reporting. But pardon me if I ask an impertinent question: Why aren’t we hearing more about corporate surveillance of employees in the United States? Or about corporate surveillance of Americans in general? Or how this corporate surveillance is being used by the US and state governments?
Even if Russia’s and China’s surveillance states are far more dangerously intrusive than America’s surveillance capitalism, shouldn’t we know more about how the same or very similar technologies are being utilized here?
Since I was secretary of labor, I’ve seen American companies load up on monitoring software — to watch what workers are doing every minute of the day. Workers are now subject to trackers, scores, and continuous surveillance of their hands, eyes, faces, and bodies. And increasingly, they’re paid only for the minutes (or seconds) when the systems detect they’re actively working.
Kroger cashiers, UPS drivers, and millions of others are monitored by the minute. Amazon measures seconds. J.P. Morgan — the largest bank in the United States — tracks how its workers spend time, from making phone calls to composing emails. At UnitedHealth Group, low keyboard activity can affect compensation and sap bonuses. In Amazon warehouses, some workers don’t get enough time to go to the bathroom. ESW Capital, a Texas-based business software company, tracks workers in 10-minute intervals during which — at some moment that workers can’t anticipate — cameras take snapshots of their faces and screens.
“Digital productivity monitoring” — isn’t that an innocent-sounding phrase? — is spreading even to white-collar jobs requiring graduate degrees. Radiologists get scoreboards showing “inactivity” time and comparing productivity to their colleagues’. Doctors and nurses describe increasing electronic surveillance over workdays. Even lawyers are being closely monitored.
Firms selling all this monitoring technology gush with testimonials from supervisors describing newfound powers of “near X-ray vision” into what workers are doing other than working: watching porn, playing video games, using bots to mimic typing, two-timing. Dystopia now!
Russia’s and China’s growing surveillance systems seem more dangerous and intrusive than America’s increasing surveillance of our workers because the information Russia and China collect can stifle dissent.
But are the surveillance systems really that far apart? Big corporations that gather loads of data on exactly what their workers do all day (and sometimes into the night) — including in their purview the growing ranks of remote or gig workers — can stifle workers’ efforts to form labor unions or show any disgruntlement at all.
Russia’s and China’s surveillance of their inhabitants and America’s surveillance of our workers are starting to overlap because the technologies are starting to overlap.
A technology company in eastern China even designs “smart” cushions for office chairs that record when workers are absent from their desks. How long before we see smart cushions in American offices?
And more and more, we’re being surveilled without knowing it. Delta Air Lines boasts that its Atlanta airport’s Terminal F is the “first biometric terminal” in the United States where passengers can use facial recognition technology “from curb to gate.”
The Financial Times reports that a Microsoft facial recognition training database of 10 million images, drawn from the internet without anyone’s knowledge, is used by agencies including the U.S. and Chinese militaries.
A new joint report from the Associated Press and Electronic Frontier Foundation highlights a major surveillance tool, known as “Fog Reveal,” now being used by dozens of local law enforcement agencies across the United States to collect personal data without a warrant. The tool makes use of advertising data — including location, timestamp, and a unique advertising ID tied to individual devices — to construct a searchable database that enables law enforcement to either track an individual device or see which devices passed through a certain area.
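In principle, a tool like this amounts to a geo-temporal query over commercially collected location records. The sketch below is a hypothetical, simplified illustration of the two query patterns described above: following a single advertising ID over time, and listing every device seen in an area during a time window. The table schema, field names, and sample rows are invented for illustration and are not drawn from Fog Reveal itself.

```python
import sqlite3

# Hypothetical schema: one row per ad-SDK location "ping" from a device.
# (advertising_id, lat, lon, ts) -- invented for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE pings (
        advertising_id TEXT,
        lat REAL,
        lon REAL,
        ts INTEGER          -- Unix timestamp
    )
""")
conn.executemany(
    "INSERT INTO pings VALUES (?, ?, ?, ?)",
    [
        ("ad-id-001", 33.7490, -84.3880, 1660000000),  # fabricated sample rows
        ("ad-id-001", 33.7495, -84.3870, 1660000600),
        ("ad-id-002", 33.7510, -84.3901, 1660000300),
    ],
)

# Query 1: follow a single device over time.
device_track = conn.execute(
    "SELECT lat, lon, ts FROM pings WHERE advertising_id = ? ORDER BY ts",
    ("ad-id-001",),
).fetchall()

# Query 2: "geofence" -- every device seen inside a bounding box during a time window.
devices_in_area = conn.execute(
    """
    SELECT DISTINCT advertising_id FROM pings
    WHERE lat BETWEEN ? AND ? AND lon BETWEEN ? AND ?
      AND ts BETWEEN ? AND ?
    """,
    (33.74, 33.76, -84.40, -84.38, 1660000000, 1660001000),
).fetchall()

print(device_track)
print(devices_in_area)
```

The point of the sketch is only that once such records exist, reconstructing a device’s movements or canvassing an area requires nothing more than ordinary database queries.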
Where does this end?
A few years back, Mark Zuckerberg predicted that “Facebook will know every book, film, and song you ever consumed, and its predictive models will tell you what bar to go to when you arrive at a strange city, where the bartender will have your favorite drink waiting.”
Well, that day has just about arrived.
Google’s Eric Schmidt has said, “We know where you are. We know where you’ve been. We more or less know what you’re thinking about.”
With Google using my search data and its high-tech trucks surveilling my neighborhood, I’m sure Schmidt is right.
As Shoshana Zuboff noted in her brilliant book The Age of Surveillance Capitalism, we once celebrated these new digital services as free, but we are learning that the platforms are hyper-velocity global bloodstreams into which almost anyone may introduce a dangerous virus without a vaccine, or from which big corporations and government can draw anything they’d like to know about us.
I’m not so sure we should be so disdainful of Russia’s and China’s surveillance systems, given what’s happening in the United States.
Isn’t it time we got serious about protecting our freedom from being watched, monitored, examined, and exposed? Otherwise, the surveillance state and surveillance capitalism merge — and we’ll have no place to hide.
Amazon wants to be your retailer, educator, grocery store, security system, bank and now your healthcare provider: A few days ago, reports came out that the tech giant is looking to buy One Medical for $4 billion, its latest foray into the healthcare business. At the same time, Amazon provides law enforcement with easy access to massive amounts of data and the tools to parse it. Of course it’s not just Amazon: Facebook, Google, Microsoft, Twitter and every big tech company obey the basic law of capitalism: grow or perish. They all rely on one shared resource: our data. And they all cozy up to the U.S. government when it comes to information sharing.