A Call for Proactive, Worker-Powered AI Governance

March 10, 2026

For millions of workers, AI is not a thought experiment. It is a manager, a monitor, and sometimes a pink slip.

AI is already reshaping work in real time. Industries are being restructured, jobs rewritten, and livelihoods reimagined. The private sector is moving fast, deploying AI with few checks and often without input from the people most affected. The public sector, too, has begun incorporating AI tools—though not always in ways that boost efficiency or make services more accessible to the public. Internationally, workers are addressing exploitative working conditions in the tech industry, forming groups like the Global Trade Union Alliance of Content Moderators (GTUACM). Meanwhile, US regulation is fragmented and reactive, presenting a chaotic environment for labor to organize in.

AI-driven disruption will impact workers across the spectrum—blue-collar and white-collar employees alike: warehouse workers tracked by AI-powered productivity systems, screenwriters whose studios see AI as a way to cut costs, and teachers navigating classroom tech.

In early 2025, union leaders came together in California to define the impact of AI on their members’ physical, mental, and emotional well-being. The quantitative picture is still emerging, but the qualitative results are damning: One Amazon worker described feeling like “a robot,” worn down by oversight so intense he quit—only to come back because he couldn’t find another job. This kind of experience is becoming increasingly common.

Tech-optimists argue that this wave of automation will ultimately boost productivity and lift all boats. But MIT economists Daron Acemoglu and Pascual Restrepo have found that even when automation increases productivity, most gains flow to capital owners, not workers. Displaced workers often experience long-term wage loss and job instability, and many move into lower-paid work.

The question isn’t whether AI will change the workforce. It already is. But who will shape that change?

AI’s Power Problem

Across the country, workers are raising the alarm not only about job loss and surveillance, but about exclusion. They want a say in when and how AI is introduced into their workplaces. But decades of weakened labor law and declining union density have left workers starting from a structurally disadvantaged position. To make matters worse, the sectors most exposed to AI disruption, including legal services, finance, and office administration, are among the least unionized. Molly Kinder, Mark Muro, and Xavier de Souza Briggs at the Brookings Institution call this the “Great Mismatch”: AI is advancing fastest where worker power is thinnest.

The policy landscape compounds this imbalance. Congressional leaders have backed efforts to limit state regulation through federal preemption, even as federal oversight remains minimal. Last year’s executive action on AI governance, which articulates principles but creates few enforceable protections for workers, directs the Attorney General to “challenge State AI laws inconsistent with the policy set forth” in the order. If states are blocked from acting, one of the only arenas producing concrete safeguards will disappear.

And states are acting. In the 2025 legislative session alone, more than 350 AI-related bills addressing workplace technology and worker impacts were introduced across legislatures in more than 40 states, tackling algorithmic management, automated decision-making, electronic monitoring, worker data protections, and transparency requirements. A number of states have already enacted workplace-specific protections aimed at limiting discriminatory systems, improving transparency, and regulating surveillance technologies. California and New York, for example, are advancing proposals and laws focused on algorithmic fairness, employment data privacy, and automated decision-system oversight. Halting this momentum would not create clarity; it would leave workers exposed to technologies and default norms being shaped without their voice.

Unions and worker-led organizations cannot sit back. We must negotiate guardrails around surveillance, data use, displacement, and algorithmic management. Just as important, we must organize in the very sectors where AI exposure is highest and union density is lowest. Collective bargaining, sectoral standards, and new forms of worker governance are essential to giving non-unionized workers leverage. If AI is reshaping work where power is weakest, rebuilding that power is the core of the fight.

Cautious Optimism and a Chance to Lead

The full picture of worker sentiment around AI is still emerging. Brookings finds that some workers, particularly in roles like HR and logistics, are already using generative AI to write job descriptions, draft onboarding materials, and communicate with coworkers. And according to Deloitte’s cross-country study, 77 percent of early-career workers believe AI will improve their job quality, even as 55 percent worry it may weaken essential skills like writing and critical thinking.

Workers are not uniformly fearful of the technology itself. Many see practical upside in tools that reduce repetitive tasks, ease burnout, or expand access to knowledge. What they resist is unilateral deployment. They do not want AI introduced without input into how it shapes their workload, safety, evaluation, or pay.

For labor and social movements, the lesson is clear. Disengagement is not a strategy. In a recent essay, Lee Anderson and Oluwakemi Oso argue that movements must engage AI directly or risk ceding its design and governance to corporate actors. The question is not whether AI will shape work, but who will shape AI.

Unions and worker organizations are uniquely positioned to lead, not just as defenders but as designers of solutions. We can channel curiosity to build worker-led AI governance models that center equity, transparency, and inclusion. If organizational leaders let fear and discomfort around AI dominate, private companies will define the terms of this period while we deal with the fallout.

What Does Worker-Led AI Governance Look Like?

A growing set of examples offers a practical roadmap. We can think of proactive AI governance as a spectrum, ranging from the early stages of basic guardrails to building the future via co-led innovation.

Tier 1: Drawing Red Lines

This is where many labor efforts start: defining what is unacceptable AI use and demanding immediate protections.

National Nurses United (NNU)

In 2024, NNU conducted a national survey of more than 2,300 registered nurses. Over 60 percent said they do not trust their employers to prioritize patient safety when rolling out AI tools. Many reported that AI systems were undermining clinical judgment and potentially worsening patient care. NNU responded by calling for the “precautionary principle”: no unproven or unsafe AI in clinical settings.

Nurses clarified that they’re not anti-tech; in fact, they’ve consistently adopted and mastered new technologies. But AI must augment, not override, human care.

The NewsGuild-CWA

At the end of 2025, the NewsGuild launched a campaign to secure ethical guidelines for the use of AI in journalism. The Guild’s demands include transparency when publishing AI-assisted news stories, consent before using journalists’ original work for training data, and no AI-driven layoffs or wage reductions.

But first on the list is an industry-specific redline: AI may be used to assist work, but not to write and publish original work without human oversight.

Tier 2: Embedding Worker Voice

In addition to blocking harmful use cases of AI, some unions are proactively shaping how AI is implemented over time by bargaining for greater transparency and worker input. These efforts don’t just react to harms after they happen, but create systems for monitoring and negotiating AI’s evolving role in the workplace.

Communications Workers of America (CWA)

CWA has taken a proactive, member-led approach to AI governance, rooted in members’ collective bargaining and expertise. In 2023, its Committee on Artificial Intelligence (composed of workers from journalism, telecom, and tech) issued a set of bargaining principles to guide future negotiations.

These principles include protections against surveillance, automated decision-making, and job displacement, as well as a requirement for worker voice in the development and deployment of AI tools.

The Writers Guild Strike

When Hollywood writers walked off the job in 2023, generative AI was at the center of the dispute. Their concern wasn’t just about job loss; it was about authorship, control, and dignity. After five months, they won a historic contract: Writers, not studios, decide whether and how to use AI in their creative process, and AI cannot be credited as a writer or used to undermine compensation minimums. The contract protected credits, residuals, and job security. It didn’t ban AI, but it enabled safe experimentation. This contract created space for bottom-up innovation, where writers can safely explore AI tools without risking their livelihoods.

This model of embedding worker voice into evolving AI practice has parallels in adjacent sectors of the entertainment industry. Video game actors represented by SAG-AFTRA ended a nearly yearlong strike by ratifying the 2025 Interactive Media Agreement, which secures informed-consent and disclosure requirements for AI digital replica use, along with the ability for performers to suspend consent during strikes, ensuring that AI cannot be deployed without performer agreement. The agreement pairs these protections with improved compensation and benefits, embedding worker oversight into how AI technologies are used and governed in their work.

Tier 3: Building with Workers

The boldest efforts go a step further, ensuring that workers are cocreators of AI systems that meet their needs and reflect their values, from design to deployment.

National Education Association (NEA) and American Federation of Teachers (AFT)

Educator unions like the AFT and NEA have begun translating principles into implementation tools. AFT, a cofounder of the National Academy of AI Instruction, launched AI Educator Brain on its curriculum and training portal, Share My Lesson, in 2024. In 2025, NEA produced vetting and implementation guides for assessing third-party tools through the lenses of equity, data privacy, accessibility, and pedagogy, as well as an AI glossary and best-practice materials that define core concepts and establish boundaries for instructional use. Both organizations have created partner resource collections that offer replicable templates, examples, and planning artifacts for school-level deployment, such as NEA’s TeachAI tool kit.

In June 2024, NEA’s AI Task Force released a comprehensive, worker-informed report acknowledging the threats (bias, surveillance, dehumanization) and laying a vision for how AI could support students and teachers, especially those with disabilities. Their five guiding principles call for centering educators and students, ensuring ethical development, protecting data privacy, promoting equitable access, and supporting AI literacy. It’s a model for those seeking to find a balance between uncertainty and possibility.

National Domestic Workers Alliance (NDWA)

AI systems are already influencing home care and domestic work through algorithmic scheduling, shift-matching, data collection, and client monitoring tools used by agencies and platforms. Cornell researchers have documented that many home care workers are unaware that AI is being used in their workplaces, even as algorithms shape their hours, pay, and access to clients. Absent worker participation, these systems risk reinforcing existing power inequalities rather than improving care. The researchers’ proposed democratic governance models include routine audits of AI impacts, shared oversight boards, and dedicated training pathways so that frontline care workers can influence rather than simply absorb new workplace technologies.

NDWA has begun operationalizing this vision from the worker side. Instead of positioning domestic workers as passive recipients of technology, the organization has built in-house AI development capacity, convened worker councils to shape product and governance decisions, and launched Ask Aya, an AI-powered workplace and rights tool developed with and for domestic workers. Ask Aya supports Know Your Rights education, workplace negotiation and communication coaching, and multilingual writing assistance, grounded in NDWA organizing expertise and real worker scenarios. This combination of worker-led design, internal technical capacity, ongoing field testing, and explicit governance structures offers a rare model in which a labor organization is not only responding to AI but actively building it, allowing workers to shape how AI enters their industry rather than being shaped by it.

What Leaders Can Do Now

These examples represent just a fraction of the work that the labor movement has been doing to engage with the rise of the AI industry. Learning from these and other cases, movement leaders can work to develop

  • boundaries to define unacceptable uses;
  • worker testimony to capture on-the-ground impacts and surface unexpected harms or benefits;
  • governance structures that give workers ongoing review and influence over AI deployments;
  • organizational AI principles backed by bargaining, campaigns, or public guidance; and
  • codesign and experimentation that builds with workers before technologies are fully deployed.

We’ve been here before. The industrial revolution and the dawn of the internet sparked seismic responses in labor. Now is the time to prepare, organize, and build to grapple with new unknowns.

Thank you to Kary Perez at the National Domestic Workers Alliance for assisting with this piece.

March 18, 2026: This blog post has been updated with additional information about AFT and NEA.