Democratically Deploying AI Means Letting Labor Lead

April 29, 2025

This essay is part of Roosevelt’s 2025 collection, Restoring Economic Democracy: Progressive Ideas for Stability and Prosperity.


The Trump administration’s so-called “Department of Government Efficiency” (DOGE) claims its plan to downsize the civil service and deploy new technologies like AI will provide better government service more efficiently, with savings to taxpayers. But the administration is undermining its own purported efforts by firing civil servants without strategy; interfering with backend technological systems that manage payments, staffing, and basic infrastructure; and even shuttering entities like 18F, which housed our nation’s brightest public technologists. This suggests that DOGE’s deployment of AI is less about improving government services and more about deploying untested, speculative software that gobbles up massive troves of sensitive data and puts them into the hands of private actors.

These actions could force us into a false binary around the future of AI. One path is outright refusal in the face of this effort, denying any potential prosocial use of AI to reduce friction for people accessing government services. Another is acquiescence—bargaining or debating how to minimize the worst effects without addressing the bigger questions around who decides what problems AI is deployed to solve, who should benefit from those solutions, and how to deal fairly with those impacted by the consequences of AI. 

As progressives, it is our responsibility to refuse these constraints as we grapple with what could be some of the most consequential technological advances in a generation. Reimagining the purpose and function of science and innovation policy will be a core component of asserting our bigger vision of democracy and daily democratic practice. We can envision a future in which all of us, not just a few tech billionaires and their investors, decide how to put advanced computational power to use and make the tough decisions about how we want to govern ourselves. 

This practice can, and should, start in our workplaces.1 In the workplace, we are faced with two interrelated challenges: who makes decisions about how AI is deployed and what happens to workers displaced by AI. Educators, health-care providers, clerical workers, customer service agents, legal professionals, and creatives already are inundated with AI-backed software that promises to “enhance” or replace much of the work that they have traditionally performed. Most of the software was developed by Silicon Valley engineers who have little to no experience performing the work they are promising to fundamentally alter. The software is typically built to hoover up data in a given environment and indiscriminately automate whatever it can. Often, it is—in this phase of development—software looking for a purpose. 

The progressive answer to who makes the decisions about how AI is used in the workplace should be clear: Workers themselves. Centering workers will lead to not only better outcomes for labor but more effective deployment of technology. For this to happen, workers’ voice must be imbued with sufficient power to preclude employer domination, and it must be an authentic, collective voice. Workers must have avenues into the decision-making process at a point in time and in a manner that allows for meaningful intervention. Too often today, even when workers and their unions have an opportunity to influence technology issues, that opportunity comes too late to be meaningful.

This is how workers like Ylonda Sherrod, a call center worker in Mississippi who has done her job for 20 years, end up managing an AI assistant that “nudges” her about how to interact with customers.2 Ylonda’s union, the Communications Workers of America, has positioned her to bargain over the use of the technology her employer, AT&T, has already developed and is attempting to implement. But what if Ylonda, and other workers like her, had been at the table earlier? What if she had been able to identify the best use of an AI system in her workflow? Would she have chosen to have a robot tell her things she already knows about how to do her job? Or would she have been more savvy about testing software that could actually provide help where she needs it—AI to actually augment human potential—and put guardrails on what kind of data it collects from her? If Ylonda were the one identifying the problems AI could solve, she might like to use it to problem-solve in real time with customers on the line, determining where more information might help, instead of “nudging” her about how to behave on the phone or producing inaccurate call transcripts.

Workers across industries have experience and insights about the best use of labor-saving software that can increase productivity, safety, and job satisfaction while protecting their data, privacy, and dignity. In our report, Worker Power and Voice in the AI Response, published by Harvard Law School’s Center for Labor and a Just Economy, we suggest some approaches to ensuring workers have a real voice and influence over the way that AI is deployed and what problems it is used to solve.3 We advocate for the creation of roles such as AI impact monitors: experienced technologists who can act as partners to worker committees that audit and evaluate new technologies on an ongoing basis. This, in combination with mandates around meaningful transparency and strict prohibitions on automated firing and limitless surveillance, creates a structure for workers to be true partners in developing their employers’ technology procurement and implementation policies in ways that ensure everyone benefits.

But we also must be realistic. Although many predictions that an AI-driven worker-free future is just around the corner may be overblown, there is a strong likelihood of significant displacement of workers in the not-too-distant future. In previous periods of displacement, such as the one caused by the advance of free trade, too many leaders tried to assuage workers’ anxieties about the future by directing their attention to predictions about aggregate economic growth. But workers do not live their lives in the aggregate. And not enough attention was paid to disaggregated impact—what would happen to workers who found themselves on the downside of displacement. A pro-worker agenda for the future cannot repeat that mistake.

So what does a pro-worker approach to displacement look like? One possibility could be to ramp up and expand traditional forms of transitional support for workers, like those extended to workers displaced by trade. These supports include enhanced access to unemployment insurance, generous retraining subsidies, and wage replacement for workers compelled to accept reemployment at lower wages. More robust supports could include incentives for downsizing employers to convert continuing positions to part-time positions, with public income supports making up the difference in income. An even more ambitious response would be to create a system of good-quality public employment to provide services that only humans can deliver, such as home care, childcare, and medical services, and that are vital supports for the rest of the economy. And if large-scale displacement comes to pass, we must consider a system of job guarantees to ensure that technological progress does not result in an economy that cements the primacy of a tech oligarchy, leaving behind the security and dignity of everyone else.

Footnotes
  1.  Much of what follows aligns with the Department of Labor’s response to President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (EO 14110, October 2023), entitled “Artificial Intelligence and Worker Well-being: Principles and Best Practices for Developers and Employers” (DOL.gov archive, 2025).
  2.  Emma Goldberg, “‘Training My Replacement’: Inside a Call Center Worker’s Battle With AI,” New York Times, July 19, 2023, https://www.nytimes.com/2023/07/19/business/call-center-workers-battle-with-ai.html.
  3.  Center for Labor and a Just Economy, Worker Power and Voice in the AI Response, Cambridge, MA: Harvard Law School, 2024, https://clje.law.harvard.edu/worker-power-and-voice-in-the-ai-response.

Sharon Block

Sharon Block is a professor of practice and executive director of the Center for Labor and a Just Economy at Harvard Law School, where she teaches labor, employment, and administrative law. For twenty years, Block has held key labor policy positions across the legislative and executive branches of the federal government, including as head of the Office of Information and Regulatory Affairs in President Joe Biden’s White House and on the Biden-Harris transition team. Block writes frequently on labor, employment, and administrative law topics. She is a senior contributor to OnLabor.org, and her opinion pieces have appeared in the New York Times, Washington Post, Fortune, The American Prospect, The Hill, USA Today, Forbes, and Newsweek.

Michelle Miller

Michelle Miller is the director of innovation for the Center for Labor and a Just Economy at Harvard Law School. She joined the Center after a decade as the cofounder and codirector of Coworker, an organization that nurtures early-stage worker-led organizing across multiple industries. During this time, Miller also pioneered the labor movement’s research and response to the proliferation of software and technology tools being used to manage and surveil workers and working-class people. Miller is also a visiting social innovator with the Social Innovation + Change Initiative at the Harvard Kennedy School. She is on the boards of the Brooklyn Institute for Social Research and Arts and Democracy.