Patchwork Nation: From Code to Cleaning, AI Is Beginning to Devalue Everyone’s Work
August 7, 2025
By Samantha Shorey
Fireside Stacks is a weekly newsletter from Roosevelt Forward about progressive politics, policy, and economics. We write on the latest with an eye toward the long game. We’re focused on building a new economy that centers economic security, shared prosperity, and rebalanced power.
When Anthropic’s CEO warned that AI could cause massive white-collar job loss as soon as next year, people of every political stripe got the message. Steve Bannon was quoted in the article forecasting an “evisceration” of entry-level tech jobs. Bernie Sanders referenced it in an Instagram post calling for productivity gains to benefit working people, not just Wall Street. Barack Obama declared at the Connecticut Forum a few weeks later: “I guarantee you, you’re going to start seeing shifts in white-collar work as a consequence of these AI models.”
AI anxiety is becoming a bipartisan issue. The tech sector was once a seemingly unshakable source of solid employment. Now people who can code are facing the same threats of potential obsolescence as people with manufacturing, service, and administrative skills.
These stories frame AI as a force that will reshape white-collar work in unforeseen ways—but when we look at the shifts that have already occurred for low-wage workers in other sectors, we find a likely road map for the future.
Automation Doesn’t Arrive Overnight
It requires what researchers call a “pre-automation” phase: a period when the work that humans do is reorganized to enable future automation. First, companies create new forms of labor by taking stable jobs and outsourcing them into piecework. Then, they figure out how to automate those pieces.
Taxi drivers become Uber drivers, who may be replaced by self-driving cars. Government postal service couriers become contract Amazon delivery workers, who may be replaced by aerial drones.
White-collar workers in the technology industry are right to be worried, because pre-automation in their field has already begun. One signal: The share of job listings for contract workers in the tech sector tripled in 2022. Then, contract workers were the first people laid off in May of this year, reportedly due to advances in generative AI. A report from TechEquity Collaborative indicates that those most affected in these initial waves of automation, who are disproportionately contractors, are the same people already underrepresented in higher-level STEM professions: women and Black, Hispanic, and Asian workers.
Yet, sooner or later, this automation cycle will come for us all.
Devaluing Work and Workers Leads to AI Implementation Failures
Fractured forms of work, like contracts and platform gigs, contribute to the perception that complex jobs are just an accumulation of tasks that can be interchangeably done by anyone. This devaluation is compounded when occupations are feminized and racialized, as jobs that are predominantly done by women (and especially women of color) often have lower pay and social status.
Technology designers misinterpret devaluation as an indicator that a job is simple, so they create automated systems that can’t contend with the actual complexity of this work. Computer science researchers at Carnegie Mellon University call this the “AI failure loop”—where a failure to recognize the skills of people who perform a job also means that AI implementation is prone to fail. When this happens, human labor is mobilized to retrain, supervise, and patch together systems so that they appear intelligent.
When AI Fails, Workers Still Have to Fix Its Mistakes
My collaborators and I use the term “patchwork” to describe the labor required to fill the gaps between AI hype and the realities of day-to-day work.
Airport janitors babysit floor cleaning robots, mopping up the water the robots spill on the floor to protect passengers from slip-and-fall hazards. Recycling sorters compensate for robotic arms that regularly miss objects when they become jammed, manually sorting in the robot’s place to keep the system from shutting down completely. Patchworkers prep materials and environments so that robots can operate in them more effectively. They repair the robots when they break down. Though this material work may seem unrelated to the digital technologies that are transforming the intellectual labor of white-collar workers, it isn’t.
White-collar workers of all kinds are already overseeing AI and correcting its outputs. As we document in Roosevelt’s recent report on AI and government workers, public administrators are handed frustrated constituents whose questions have befuddled AI chatbots. They detect mistranslations in permit and benefit applications, knowledge that is then incorporated into AI models to improve them. This kind of patchwork is framed by technology designers as temporary or transitional, but technology’s problems are persistent.
A decade ago, reporters at Wired wrote about the army of contractors in the Philippines who were tasked with removing violent imagery from social media sites. Today, contractors in Kenya are removing the same kind of content from the data used to train generative AI. In the process, labor that is essential to technology production has been rebranded as “clickwork,” emptied of benefits, and stripped of intellectual credit.
Rather than being made obsolete, tech and other white-collar workers will be pushed to the margins and hidden by outsourcing, made responsible for overseeing and improving the very tools meant to replace them.
Though layoffs have already begun, recent benchmarking studies show that AI cannot effectively complete project management and software engineering tasks 70 percent of the time. Another recent study similarly found that even state-of-the-art AI models were only able to correctly calculate US federal income taxes less than a third of the time. Tech workers—whose jobs will likely be transformed to enable AI’s future advances and to compensate for its current shortcomings—should be at the center of AI policy.
What Worker-Centered AI Policy Could Look Like
The best way to begin addressing AI anxiety is intentional policymaking, and strong proposals are already on the table.
First, we need greater transparency in the technology production process. AI ethicist Margaret Mitchell cautions against the “recent regulatory pitfall of centering on technology” and instead urges legislators to focus on the people in the AI pipeline. She argues technology developers should be required to produce documentation that establishes that their products meet safety, security, and nondiscrimination standards. Though her framework focuses on the importance of these rights for users, the same documentation could also be required for the safety of workers.
One place this documentation could be mandated is in government contracts with technology companies, which are already subject to established federal regulations requiring vendors to meet other responsibility standards for integrity and ethics. Harvard’s Center for Labor & a Just Economy observes that procurement policies “act as a lever for greater oversight and accountability” by creating standards that are enforced outside the technology industry, at the state and local levels as well.
We also need greater transparency in outsourcing to establish industry-wide criteria for the fair pay and health protections that tech contract workers around the globe are organizing to win. Analysts at Brookings observe that the industries currently facing the greatest threat from generative AI have some of the lowest rates of unionization. Without traditional methods of workplace representation, tech workers are further disempowered by AI hype that obscures their contributions from the public.
The lessons learned from industries as different as recycling and online content moderation point to a central truth: Addressing the risks that AI poses to workers across our economy doesn’t just require better technology, but also requires the same solutions we know provide good work and fair pay, like union representation and basic employment protections for contractors. We need interventions that place the social conditions of workers front and center, before unrestricted AI degrades skills and magnifies inequality across the economy.