The Critics are Right: The Left Risks Losing the Political Battle on AI
There's no reason Progressives shouldn't be able to define the political terrain on the most consequential economic and social issue in a generation—but we need to get our story straight.
There’s a hot take circulating in AI and policy circles: The Left is failing at the politics of AI. Dan Kagan-Kans argued in Transformer.ai recently that The Left, by dismissing AI’s transformative potential, is ceding the political conversation to a populist Right that takes it seriously. By classifying AI as “fancy autocomplete” that can’t accomplish what its boosters claim, the Left is making itself irrelevant to a public that now largely considers AI a revolutionary force.
Thoughtful rebuttals have emerged (including this persuasive one from Brian Merchant) that make a strong case that the Left actually owns this space right now. So it might surprise some of my colleagues and partners that I (mostly) come down on the side of the critics.
Many on the Left believe that AI isn’t the world-shaking technology both accelerationists and doomers proclaim it to be. In their view, we are being fooled by the hypemen who run and invest in AI companies. The chatbots trick us into thinking they're intelligent when they're really just making statistically informed predictions about how sentences fit together. Anyone working on the Left side of the AI policy, research and advocacy ecosystem has to admit this sentiment is widespread.
Are they wrong? Here’s where both the Left and its critics are off base: it doesn’t really matter. At least not politically. Whether we’re on the verge of AGI or not, the technology is already reshaping society in ways that have profound political consequences. We on the Left are going to need a political program—a compelling narrative, a coherent agenda, a vision for the future—that reckons with AI, both what it is now and what it could be.
As Merchant points out, the Left isn’t losing this battle as badly as its critics think—or at least not in the way its critics perceive. At TechEquity we have been advancing a policy agenda in California that is premised on the assumption that AI will have profound ramifications for society and the economy. We lead a coalition of 40+ left-leaning groups from labor, civil rights, economic justice, consumer and privacy rights, and more—a coalition that has made real progress toward establishing guardrails around AI in the state. We have worked with groups like Demand Progress, who are leading similar federal campaigns.
But to the extent that we are gaining ground in the court of public opinion, it has a lot to do with how poorly the AI industry is conducting itself, and the almost comically villainous way AI billionaires talk about what they are inflicting on society. Where the Left has failed is in moving that advocacy into the public sphere, and tying our policy message to a political one that can galvanize ordinary people. The public is absolutely repelled by what it sees coming out of the industry, and is primed for a story that helps people understand what’s happening and an agenda that helps them turn anger into action. These campaigns are expensive, and we’re simply not as well funded as groups on the Right—or the AI safety groups who seem to save their harshest critiques for us.
Even if we had the money to do it, though, I think we Progressives would struggle to develop a shared narrative, in large part because our work lacks soul. The Left’s conversation on AI is overwhelmingly focused on structural economic issues. On the rare occasions when we do connect those issues to a larger emotional story, it tends not to engage with the real questions people are grappling with. What does it mean to be human in a world where robots can mimic us? What is consciousness, really? How do we create real meaning in a world where robots can do everything for us? What becomes of human agency when AI rules everything around us?
The Right has been doing this very effectively for years. The most compelling argument I’ve heard for regulating AI came from Steve Bannon in a 2025 interview with Ross Douthat. The National Conservatism Conference that year focused heavily on AI, featuring a speech from Senator Josh Hawley about AI’s threats to working people. Faith leaders—most of whom are from traditionally conservative denominations—have been speaking eloquently about AI’s potential consequences for humanity.
One reason the Right has found it easier to build an emotionally resonant AI message is intrinsic to our identities: conservative politics has always been more attuned to the personal than the structural. The most visceral reactions people have to AI stem from where they see it interfering in their personal lives—how it steals their children’s attention, claims to be a replacement for a loving partner, or robs them of the dignity derived from quality employment. The Right meets people where AI touches their lives: their kids, their relationships, their sense of self. The Left, when we’re not being dismissive, responds with governance frameworks and economic theories.
This is where the instinct by some on the Left to dismiss AI becomes a real political liability. To advance a policy agenda that reins in corporate power and protects workers—probably the two most consistent goals the Left is advocating for—we have to be able to hang it on a narrative framework that tells a story about what it means to be human. And we have to point to a vision of what the world can be—in terms that are tangible in the life of a normal person—if our version of the future plays out. It is impossible to do this if you’ve already conceded that the technology isn’t important.
When the Left does take AI seriously, it defaults to a structural critique that focuses on abstractions like corporate power and governance. We spend our time on the technicalities of policy and speak in terms like “algorithmic bias” and “surveillance capitalism” that don’t hold emotional resonance for the average person.
The structural analysis is critical; we won’t win without it. But if we can’t connect it to the more emotional questions about what these tools mean for the human experience, we will lose the battle of ideas on what I believe will be the primary political terrain for the next generation.
We’re not starting entirely from scratch. There are storytellers out there making compelling progressive arguments about AI. First among them, the Left’s standard-bearer, Bernie Sanders, has been far ahead of his peers in the Senate in raising concerns about not just the structural economic risks AI poses but also the human impacts. Chris Murphy and Pete Buttigieg (both of whom are likely running for President in 2028) have written thoughtful essays about AI’s connection to human dignity and self-determination. Tristan Harris of the Center for Humane Technology has thoughtfully tied together an argument about corporate economic incentives and matters of human dignity.
But these narratives aren’t in conversation with each other. And they haven’t been consistently adopted across the ecosystem of groups who are actually doing the advocacy work. What’s more, they don’t tell a story about what AI could be if we do it right.
So how do we solve this? I recently attended a conference in New Orleans hosted by the Future of Life Institute about “Pro-Human AI.” The attendees, in the words of the organizers, spanned the “Bernie-to-Bannon” continuum but shared one thing in common: we all had the same values about how AI should be governed.
What struck me was that I had never met 97% of the people in that room—religious leaders, parents whose children had been hurt or killed by AI, right-leaning influencers—and yet we were all remarkably aligned. Conversations about structural economics and corporate power sat alongside inquiries into faith and human consciousness. A mother whose child died after an encounter with a chatbot spoke in a way that moved me to tears. I left contemplating questions of the soul and how they connected to TechEquity’s work. How had I never fully considered these before? Why didn’t I know these people and their organizations existed?
The outcome of that meeting was a shared declaration that we co-created over the two days: the Pro-Human AI Declaration. It weaves together the personal and the structural, connecting the need to address corporate power with the desire to protect the family. While it isn’t meant to be a political statement—it lacks the necessary narrative coherence to serve as one—it nonetheless shows that there is a foundation on which to build a movement that the Bernie end of the Bernie-to-Bannon spectrum should champion.
And there is potential, if we can grab it, for it to be the basis for a new politics that upends our traditional left/right understanding. The Bannon end of the spectrum sees it, and is building its own program. Our side can’t afford to keep clinging to the idea that AI systems are fake or inconsequential. They are here. They are only getting more powerful. And the political battle over who gets to define them is already underway.
