AI is a Whip to Discipline Labor
Capital got so obsessed with breaking knowledge workers that it accidentally surrendered cultural power
When the power went out in San Francisco last December, the Waymo robotaxis died in the street. Not dramatically. No explosions, no crashes. They just… stopped. Billions of dollars in autonomous vehicle technology, stranded in intersections like expensive paperweights. Blocking traffic. Waiting for humans with keys to come push them out of the way. The artificial intelligence revolution, in miniature: impressive until the infrastructure fails, then just another obstacle in the road.
But the Waymo shutdown wasn’t a malfunction. It was a preview. Because AI has never been about working. It’s been about disciplining workers.
How did we get here? Shoshana Zuboff warned that the age of surveillance capitalism would sell us algorithmic efficiency. What we got instead was the surveillance-authoritarian dystopia she described—and it's not even working. The AI revolution, breathlessly covered by every tech journalist with a LinkedIn account, generates less consumer revenue than smartwatches, despite maintenance costs that are orders of magnitude higher (and two trillion dollars already invested in AI). Nobody's writing thinkpieces about the "smartwatch revolution." But we're supposed to believe that AI will cure cancer overnight, solve war, and make you hotter.
The truth is simpler and stupider: AI isn’t about productivity. It’s a whip. And capital has been cracking it so frantically, so single-mindedly, that they’ve lost their grip on culture.
I. The Whip Is Real (The Revenue Isn’t)
Let’s start with what’s actually happening, stripped of the mythology.
AI as sold to us—the revolution, the transformation, the inevitable future—is generating somewhere between $30-40 billion annually, depending on whose numbers you trust and how generously you define “AI revenue.” The global smartwatch market? About $40 billion. No one is treating Fitbit sales as civilizational infrastructure. No one’s demanding government subsidies for Apple Watch data centers. But AI gets the Genesis Mission: a literal government-backed initiative to pour public money into AI-energy infrastructure, positioned as “national security.” The argument is so flimsy it’s almost refreshing. Capital isn’t even trying anymore. They’re just pointing at GPUs and saying “infrastructure” until someone writes a check.
And yes, AI sales to businesses rather than consumers are larger, but the sector is far from profitable. Revenue and profitability are not the point. They never were. The point was always discipline.
Zoom exposed something dangerous during the pandemic: without their big offices and executive suites, senior managers were revealed as shallow platitude-generators. Meanwhile, knowledge workers—the people who actually produce tangible things—discovered they could do their jobs from anywhere. Some even started enjoying life. That was unacceptable.
AI is capital’s revenge. Not because it works, but because it scares. Every announcement of a new model, every demo of code generation, every leaked memo about “AI efficiency gains” is a message to knowledge workers: you are replaceable. The threat doesn’t have to be real. It just has to be loud enough to get people back in line.
The whip works in two directions. First, it threatens replacement—making workers accept worse conditions out of fear. Second, it provides cover for the actual extraction: offshoring, especially in elite employment sectors like finance and tech. Actual automation of knowledge work hasn't happened at any meaningful scale. But "AI transformation" makes a convenient excuse when jobs disappear to Bangalore or Manila. The remaining US workers see "efficiency gains." The executives see labor arbitrage. The AI gets the credit for what's really just old-fashioned exploitation of underpaid overseas labor with a new PR strategy.
In tech, junior developers have been replaced to an extent: unemployment rates for CS graduates have spiked. But there's substantial evidence that large language models (LLMs)—the main AI technology being deployed today—create software problems: when used to write code, they fail test cases at higher rates than junior developers do. The code looks clean. Professional, even. Which makes the bugs harder to find. So companies are generating massive technical debt, building systems on foundations that will crack under pressure. But that's a future problem. In the present, executives get cheaper labor costs and threatened senior developers. That's all that matters.
And when the AI actually needs to work? It doesn’t.
I found this out preparing for a lecture during a long drive. I wanted my notes read aloud—basic text-to-speech, technology that’s existed since the 1990s. I tried three different “AI” apps. One had Gwyneth Paltrow’s voice and tried to extract $140 from me for this decades-old feature. None of them worked. I finally asked ChatGPT to read its own generated notes back to me. It just buffered. Indefinitely. So I switched to Claude, which I pay for, and it finally worked. The whole ordeal took thirty minutes—longer than it would’ve taken to just read the notes out loud myself.
This is the revolution. This is what gets billions in private investment and government backing. Technology that sometimes performs basic tasks, wrapped in enough hype that you’re supposed to feel grateful when it eventually works. But capital doesn’t even need AI to work. They just needed people to believe it might. Because the whip isn’t about productivity. It’s about fear.
And it’s working. Remote work is collapsing. Return-to-office mandates are spreading. Workers are taking jobs they’re overqualified for because the market is “uncertain.” The fear is doing exactly what it was designed to do.
But here’s the thing about wielding a whip that frantically: you can’t hold anything else.
II. The Cultural Surrender
While capital spent four years terrorizing knowledge workers, something strange happened: they lost control of culture.
Not all at once. Not dramatically. But the reins slipped.
Bad Bunny became the biggest artist in the world. Not just in Latin America—globally. And he uses that platform to talk about Puerto Rico's colonial history, about gringos moving there for tax breaks, about "que se vayan ellos"—let them leave. Imagine a seaside club in San Juan packed with White gentrifiers, dancing and shouting the inoffensive parts of Bad Bunny songs, vibing to verses about the sun and el perreo. But when collaborator Berlingeri's outro arrives—"que se vayan ellos"—it doesn't register that the line is about them. They just quietly keep dancing.
Capital greenlit this for the Super Bowl Halftime Show. Not because they’re generous, but because they were distracted.
The war on knowledge work—the RTO mandates, the layoff threats, the AI hype cycle, the obsessive need to break worker leverage gained during COVID—it required total focus. And in that distraction, they accidentally gave up their tastemaker role, outsourcing it instead to opaque engagement metrics that accidentally amplified voices like Benito, Rosalía, and Stromae.
The same layoffs that terrorized software engineers also gutted the cultural gatekeepers—the A&R managers, the playlist curators, the editors who used to decide what got amplified. Algorithms stepped in to fill the gap, optimizing for engagement metrics rather than ideological control. And engagement, it turns out, rewards genuine cultural power over manufactured pop.
In other words, an opening appeared in the vaulted arches of Hollywood gatekeeping. And artists ran through it.
The same pattern shows up everywhere. Netflix doesn’t have to create good quality film or TV anymore because it’s optimizing for time on app. Background viewing counts. So they can distribute relatively cheap international productions—and accidentally amplify genuine art like Emilia Pérez and Money Heist alongside whatever algorithm-optimized dreck they’re pushing this week.
K-pop, Afro-French music (see: the Arcane soundtrack), the resurgence of British indie—they’re all thriving in the gap capital left when they decided knowledge workers were the real enemy.
Even TikTok, that demonic miracle of addictive UI design, accidentally became a platform where working-class voices can reach millions. Not because ByteDance is benevolent, but because the algorithm needs content, and capital was too busy cracking the whip to micromanage what got through. And the US TikTok algorithm just got lobotomized this month—a reminder that when capital notices the gap, they close it.
The most revealing part about this cultural shift? The algorithm is eating its own tail.
Capital—executives, tech founders, finance guys—they’re on these platforms too. And they can’t help themselves. They post their wealth. The private jets. The empty, cold homes with that stupid $8,000 chair in the middle of a marble void. They do it for reach, for influence, for the legitimacy tax they’ve always paid.
But now those displays travel everywhere. The global poor see them. The working class sees them. And every “how I optimize my morning routine with 10 AI agents” post is another data point in the case against them.
They’re literally paying for their own demobilization. Broadcasting their spiritual emptiness to an audience that’s learning, in real time, that these people have nothing to offer but extraction.
III. Even Their Art Is Begging Them to Stop
If you want to see capital’s panic made visible, watch Wicked.
I'm serious. The Broadway adaptation—specifically part two, Wicked: For Good, released late last year—is a $150 million mea culpa aimed squarely at elites. Capital not only let it through; they marketed it so hard the campaign reached a London fireworks show. Because they genuinely don't understand what they've made.
The politics are not subtle. The Wizard is a thinly veiled Trump, bragging about being adored no matter what he does. The flying monkeys are ICE agents, pressuring animals to self-deport. There are literal tiki-torch mobs chanting "melt her" instead of "lock her up." Glinda—played by Ariana Grande—is an audience proxy for the white women who voted for Trump in 2016 and 2024, reflecting the neoliberal fantasy that "moderates" can still be swung without material concessions to the working class.
Grande's casting becomes grimly appropriate. She's been so thoroughly processed by algorithmic beauty standards that her reactions arrive with perceptible lag, her face struggling to execute the broad theatrical expressions the role demands. She's been engineered into a walking screenshot—her physique so petite it's impossible not to think of the diet abuse Judy Garland suffered at the hands of the studio behind The Wizard of Oz (Wicked's predecessor)—and asked to fill a cinema screen for two hours. And yet Grande rises to the occasion. She is delightful to watch, even though she was probably hungry as hell.
What’s genuinely strange is that the movie doesn’t earn its emotions. Erivo and Grande barely share screen time. The nuance and joy in friendship between women of different social standings, especially in the face of a shared romantic interest—what made Wicked, one of only two Broadway shows I’ve seen in New York, uniquely magical—is sidelined for yet another Dark Scary Autocrat plot. The leading hunk gets no character development (though he is, bafflingly, extremely hot even as a green nutcracker who wears a one-arm blazer). The result is narratively incoherent.
But three scenes work. Really work.
Fiyero’s capture — a small, intimate scene where he points a gun at Glinda to save Elphaba. No VFX. Just acting.
“No Good Deed” — Erivo alone, interrogating the contradiction of hero narratives while surrounded by CGI chaos, but cutting straight through it: Was I really seeking good, or just seeking attention? (Ouch.)
The final separation — Elphaba and Glinda singing across distance, as a Black woman effectively says: I tried protest. This is your problem now. I’m leaving with my hot scarecrow.
These scenes work because they’re cheap to produce or otherwise overlooked. Executives don’t micromanage the monologues because they don’t think women’s interiority matters. So the art survives in the cracks.
The rest is a movie about how capital needs to stop propping up fascists, told through Oz, on a budget that could’ve funded public housing. It’s aimed at people in cold marble lounges, no doubt the inspiration for Glinda’s abode. The working class—the people actually living under fascism—don’t get to speak.
The theory of change isn’t wrong. Capital does control both parties. Our ruling elites are Glindas: heirs to unearned political influence, bought with campaign contributions. But the movie can’t say redistribute wealth because the people who paid for it would never allow that. So instead it pleads: please, just stop being this evil.
Capital will watch it, feel briefly moved during Erivo’s monologue, and then go back to optimizing labor costs with AI fear campaigns.
IV. The Executives Are Automating Themselves Into Hell
Here’s the sickest irony: the jobs most easily automated by AI are executive jobs.
Think about what executives actually do. They attend meetings. They synthesize information from reports. They make decisions based on incomplete data. They communicate directives. They manage their calendars. They perform confidence.
You know what’s really good at that? LLMs and other statistics-driven technologies. Not because they’re intelligent, but because executive work is mostly bullshit.
It’s knowledge workers—engineers, researchers, designers, writers—who do things that are actually hard to automate. Creative problem-solving. Novel solutions to undefined problems. Work that requires genuine expertise and contextual judgment.
But executives are paid exorbitantly. Automating them would save millions per company. So why aren’t they automating themselves?
Because they’re the ones holding the whip.
And deep down, they know it. You can hear it in the anxious undertone of every claim that AI will turn humanity into “managers of infinite minds”. The fear isn’t that AI will replace workers. The fear is that everyone will notice executives are already replaceable.
So they focus the threat downward. Use AI to discipline labor. Announce “efficiency gains” that mean layoffs. Create fear. Maintain control.
But it’s making them miserable.
Satya Nadella talking about using 10 AI agents in parallel to start his morning isn't even efficiency porn, annoying as that would be coming from a man who can afford help with every imaginable chore. It's a confession. Multiple chat boxes as part of your morning routine is not a life worth living. That's a description of hell. You're the CEO of Microsoft, one of the most powerful companies in history, and you're… having chatbots summarize your email? That's the dream?
V. The Bubble Is Popping (Capital Is Building the Alibi)
We're already in an AI winter—the kind of collapse in investment that has hit the field, which has been around for decades, several times before. We just don't have the language for it yet because Uncle Sam is delaying the crash with federal R&D dollars. But Sundar Pichai, CEO of Google, went on record saying there are "elements of irrationality" in AI investment. He is pre-building the alibi.
When the bubble pops—and it will—Pichai will be able to say "I warned you." Never mind that Google has thrown billions at AI. Never mind that he personally benefited, extracting eye-popping compensation while laying off thousands of workers he had hired only a year or two earlier to meet the pandemic's peak in online shopping. He'll be the grown-up who saw it coming, not one of the bubble's main inflators.
This is Manufacturing Consent 101. Capital is softening the ground for the narrative they’ll need when the crash happens:
∙ “We always knew it was overheated.”
∙ “The market got ahead of the fundamentals.”
∙ “No one could have predicted this.”
Except we did. We are. The revenue numbers are public. The exorbitant costs are known. Working people are widely skeptical of AI hype and see it as oversold, burdensome, and threatening rather than beneficial. The lack of product-market fit is obvious to anyone not actively lying.
But the narrative is already being written. And when it collapses, the people who pumped billions into stochastic parrots will walk away clean while the workers who got laid off “for efficiency” stay unemployed.
The crash itself won’t matter much—AI is a small industry. But the narrative opportunity will be enormous. Every company with a bloated executive layer will “restructure.” Every municipality with underfunded services will “streamline.” Every institution will use the chaos to consolidate power upward.
That’s the plan, anyway.
VI. But the Plan Is Already Failing
Here’s what capital didn’t anticipate:
They’ve been running the same playbook for 40 years. Manufacture crisis, impose austerity, consolidate power. It’s worked every time. “There is no alternative,” as Margaret Thatcher told us in the 1980s.
But this time, people can see it.
The 2008 bailouts were confusing. The mechanisms were opaque. “Credit default swaps” and “mortgage-backed securities” sound complicated. People didn’t understand what happened, so they accepted the narrative: “everyone was greedy, everyone made mistakes, now we all have to tighten our belts.”
But AI is simple. It’s matrix multiplication. It’s autocomplete on steroids. The demos are impressive but the substance is thin. And crucially, everyone has used it. Everyone’s tried ChatGPT. Everyone knows it’s useful but limited. Everyone’s seen it hallucinate.
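The "matrix multiplication" claim can be made literal. Here is a minimal toy sketch (invented four-word vocabulary, made-up weights, nothing from a real model) of what "autocomplete on steroids" means mechanically: one matrix-vector multiplication produces scores, a softmax turns them into probabilities, and the most likely next token is emitted. Real LLMs just stack billions of these operations.

```python
import math

# Toy illustration, NOT a real LLM: the vocabulary, hidden state,
# and weights below are all invented for demonstration.
vocab = ["the", "cat", "sat", "mat"]

# Hidden state summarizing the current context (3 dims for brevity).
h = [0.2, -0.1, 0.9]

# Output projection: one column of weights per vocabulary word.
W = [
    [0.1, 0.4, -0.3, 0.0],
    [0.2, -0.5, 0.1, 0.3],
    [-0.1, 0.9, 0.6, -0.2],
]

# Logits: a single matrix-vector multiplication.
logits = [sum(h[i] * W[i][j] for i in range(len(h))) for j in range(len(vocab))]

# Softmax turns raw scores into a probability distribution over tokens.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# "Autocomplete": emit the most probable next token.
next_token = vocab[max(range(len(vocab)), key=probs.__getitem__)]
print(next_token)
```

Sampling, temperature, and attention add sophistication, but the core loop is exactly this: multiply, normalize, pick a token, repeat.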
So when capital says “we need to restructure because of AI,” people aren’t confused. They’re mad as hell. Because they know it’s bullshit.
VII. The Master’s Tools (And I’m Using Them Too)
I have to acknowledge something uncomfortable:
There’s no way my voice and prose and cadence would have emerged this quickly without LLMs. I’m milking the same capital AI mirage I’m critiquing. “The Master’s Tools Will Never Dismantle the Master’s House,” as Audre Lorde warned.
And yet—I use Claude, my AI of choice, to think faster, write clearer, organize the chaos in my head into something coherent. Every day. I’m a beneficiary of the bubble. My productivity has genuinely increased.
But here’s the thing: that doesn’t invalidate the critique. It confirms it.
Because I’m using AI the way it actually works: as a tool for someone with expertise to move faster. I’m not replacing labor. I’m augmenting my own. And I’m doing it in narrow domains where I already have the knowledge to evaluate outputs. For instance, I didn’t blindly accept ChatGPT’s script for my lecture, which it generated from course slides. I listened to the script and then delivered a real lecture of my own.
That’s the real use case. Not “replace workers,” but “help experts move faster.”
But that’s not what’s being sold. What’s being sold is fear. The threat of replacement. The promise that you won’t need experts anymore. And that threat is the whip.
The fear worked. Knowledge workers are back in line, for now. But to achieve that, capital lost control of the culture. They lost narrative coherence. They even lost the ability to enjoy their own wealth in peace. And most dangerously: they lost the invisibility that made the system work.
VIII. Coda: Sci-Fi Fell Short
When Hollywood depicts a futuristic AI dystopia, it usually involves a hyper-intelligent entity bent on driving humanity to extinction: from The Terminator to The Matrix to Westworld and hundreds more in between.
But AI is here, and reality is somehow darker. Empty cars blocking streets in San Francisco during a blackout. A chatbot built by New York City which encouraged illegal activity. A delivery robot malfunctioning on rail tracks in Miami, getting crushed by a train. We live in the stupidest timeline, as kids these days say on TikTok.
The whip used to discipline labor is also so resource-intensive that it is causing spikes in electricity prices and draining lakes. We are being driven to ecological disaster by a tiny group of overcompensated men bent on... what, exactly? Beating each other at being exorbitantly rich?
Everyone can see the puppeteers now. And they’re boring. Running the same plays, wielding the same threats, demanding the same sacrifices, all for what? Bigger yachts? Colder, even more humongous homes? Paris Paloma gets it:
I knew one day I’d have to watch powerful men burn the world down
I just didn’t expect them to be such losers
~