Just Because We’re Paranoid Doesn’t Mean Uber’s Not Spying on Us

March 6, 2017


Over the summer, when Michelle Miller and I were working on our paper, “New Frontiers of Worker Power: Challenges and Opportunities in the Modern Economy,” we deliberated over whether to include the following paragraph, which outlines the potential for gig economy platforms and other technologically advanced firms to digitally monitor their employees and crack down on those deemed an organizing threat.

The post-Snowden era has unleashed a torrent of “insider threat” tracking software that collects interpersonal communications content, time management data, and physical location data to track employees (Goldhill 2016)…[these] systems can also be used to profile potential internal agitators or organizers within a firm and target them for dismissal. Nathan Newman (2016) argues that this presents a “collective harm to the workforce” as the “benefits gained by internal agitators are extended to the general workforce” when these employers speak up for wage increases or improved safety protocols. Meanwhile, such software can simultaneously be used to identify workers who are unlikely to protest wage stagnation or a decline in conditions, due to a combination of personal circumstances, economic liabilities or emotional disposition that may surface in a firm’s analysis of behavioral data…

The idea, widely discussed among tech-savvy labor experts but new to the wider workplace dialogue, is that digital platforms such as Uber or TaskRabbit, or even technologically advanced non-platform firms, could deploy monitoring software and combine it with data from public sources, such as social media posts, in order to identify and weed out “problematic” employees.

We debated the paragraph’s inclusion because we worried that it might come off as a bit far-fetched and alarmist. After all, the argument was more theoretical, based more on what would be possible than on what had actually occurred. Ultimately we decided to include it, reasoning that reports of discrimination among several gig economy firms, combined with growing opportunities for surveillance, provided enough substantiation for the idea.

Now, just six months later, Uber has made us (and more specifically, Michelle, who wrote that section) look extremely prescient. Here is a paragraph from Friday’s New York Times report on Uber’s “Greyball”—the recently exposed surveillance and profiling program Uber has used to block potential law enforcement investigators from hailing Ubers in order to gather information and issue tickets. From the report:

One technique involved drawing a digital perimeter, or “geofence,” around the government offices on a digital map of a city that Uber was monitoring. The company watched which people were frequently opening and closing the app — a process known internally as eyeballing — near such locations as evidence that the users might be associated with city agencies…Other techniques included looking at a user’s credit card information and determining whether the card was tied directly to an institution like a police credit union.

The methods described here, deployed in Uber’s attempts to circumvent law enforcement, are strikingly similar to those Michelle outlined in the paper. And while Uber has not yet been accused of directing a Greyball-type program against its own workers, its possession of the means, motive, and opportunity to do so should give regulators and legislators all the impetus they need to seek greater insight into the company’s data collection and monitoring activities.
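To make the geofencing and “eyeballing” techniques concrete, here is a minimal sketch of how such a check could work in principle. Everything here is invented for illustration — the function names, thresholds, and data format are assumptions, and none of it reflects Uber’s actual code.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def flag_frequent_openers(app_opens, geofences, radius_m=200, threshold=5):
    """Return user IDs whose app-open events land inside any geofence
    at least `threshold` times. `app_opens` is a list of
    (user_id, lat, lon) tuples; `geofences` is a list of (lat, lon)
    centers. All parameters are hypothetical."""
    counts = {}
    for user_id, lat, lon in app_opens:
        if any(haversine_m(lat, lon, g_lat, g_lon) <= radius_m
               for g_lat, g_lon in geofences):
            counts[user_id] = counts.get(user_id, 0) + 1
    return {uid for uid, n in counts.items() if n >= threshold}
```

The unsettling point is how little code this takes: a distance formula, a counter, and a threshold are enough to turn routine location pings into a watchlist.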

(As a side note: While it has not so far been identified as a systemic problem, there was a high-profile case in which Uber employees accessed a special “God View” in order to spy on journalists, celebrities, and ex-girlfriends.)

It’s also important to acknowledge that the issue goes well beyond Uber, and indeed well beyond other gig economy platforms like it. Surveillance and digital discrimination can happen in any workplace, and they are especially hard to identify in our contemporary digital workplaces, in which many complex automated tasks go unnoticed, hidden within the depths of our ubiquitous devices.

To illustrate how this surveillance and discrimination might work, let’s take the example of the favors-for-hire platform, TaskRabbit. By monitoring client reviews, social media posts, and statistics such as the number and quality of jobs received and completed, TaskRabbit HQ could identify potentially disgruntled workers and deactivate them or de-prioritize their profile in matching algorithms so that these threatening workers would receive fewer gigs, eventually becoming discouraged and leaving the platform.
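The de-prioritization scenario above can be sketched in a few lines. To be clear, this is a hypothetical illustration of the mechanism, not a description of TaskRabbit’s actual matching system; the field names and penalty factor are invented.

```python
def match_score(worker, flagged_ids, penalty=0.5):
    """Hypothetical relevance score for a matching algorithm.
    Flagged workers are silently discounted, which shows them
    fewer gigs without any visible deactivation."""
    score = worker["rating"] * worker["completion_rate"]
    if worker["id"] in flagged_ids:
        score *= penalty  # invisible to the worker
    return score

def rank_workers(workers, flagged_ids):
    """Order workers for gig matching, best score first."""
    return sorted(workers,
                  key=lambda w: match_score(w, flagged_ids),
                  reverse=True)
```

The key feature is that the penalty is indistinguishable, from the worker’s side, from ordinary bad luck in the matching queue — which is precisely what makes this form of retaliation so hard to detect or prove.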

A more straightforward example would be a company that tracks employee communications carried out on office Wi-Fi. Scanning for keywords that indicate general dissatisfaction or intentions to organize (or even metadata, like frequency of communication with other coworkers), the employer could identify and monitor potential threats, dismissing them if necessary.
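A crude version of that keyword-and-metadata scan might look like the following. The keyword list, message format, and thresholds are all invented for illustration; real systems would be far more sophisticated, but the basic shape is the same.

```python
import re
from collections import Counter

# Hypothetical watchlist of organizing-related terms (illustrative only).
ORGANIZING_KEYWORDS = {"union", "organize", "walkout", "grievance"}

def scan_messages(messages, keyword_hits=2, contact_breadth=3):
    """Flag senders who either use several watchlist keywords or
    message an unusually broad set of coworkers (pure metadata).
    `messages` is a list of dicts with "sender", "recipient", "text"."""
    hits = Counter()
    contacts = {}
    for msg in messages:
        words = set(re.findall(r"[a-z']+", msg["text"].lower()))
        hits[msg["sender"]] += len(words & ORGANIZING_KEYWORDS)
        contacts.setdefault(msg["sender"], set()).add(msg["recipient"])
    return {s for s in hits
            if hits[s] >= keyword_hits
            or len(contacts.get(s, ())) >= contact_breadth}
```

Note that the metadata check flags a worker without reading a single word of their messages — breadth of contact alone is treated as suspicious, which is exactly the kind of analysis that is invisible to the people being watched.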

What both of these examples describe is the ability of modern firms to leverage advanced technology into a strategic advantage over their workers, forestalling both formal and informal attempts at organizing for better conditions. As the specter of market power casts a shadow over the U.S. economy and its workers, these sorts of anti-competitive practices should be of increasing concern to workers and to everyone invested in their welfare and the health of the economy.

When we wrote our paper, Michelle and I thought scenarios like this were possible but unlikely, and that we were mostly warning of a world of digital worker abuse that could develop. It appears that world might have arrived sooner than we anticipated.

In case there was ever any doubt, concerned policymakers need to begin taking digital threats to workers very seriously.