…
The hour of the barbarian is at hand. The modern barbarian. The American hour. Violence, excess, waste, mercantilism, bluff, conformism, stupidity, vulgarity, disorder.
| Aimé Césaire, Discourse on Colonialism (1950)
…
Events of recent weeks confirm Césaire’s anticipation of the 20th-century neoliberal world order, and of today’s 21st-century post-neoliberalism, which looks a great deal like the original form of imperialism: piracy1.
AI, AI, AI
The title above can be pronounced in two ways: the first is ‘eh-eye, eh-eye, eh-eye’, alluding to artificial intelligence, which is everywhere in the news (but not really in the stats?). The second pronunciation is ‘aye-ee, aye-ee, aye-ee’, like someone screaming because their hair is on fire.
Also, I confess that I am not researching these stories as they flit across my radar screen. I am merely repeating anecdotal evidence: mere anecdittos.
…
A whopping 3 percent.
“I can’t really remember a boom with such active hostility to it,” William Quinn, co-author of the 2020 history tome “Boom and Bust: A Global History of Financial Bubbles,” told the NYT. “People usually find new technology exciting. It happened with electricity, bicycles, motorcars. There were fears but also hopes. AI is notable, perhaps unique, for the lack of enthusiasm.”
As consumer sentiment goes from sour to moldy, the CEOs behind the bubble only seem to be doubling down.
“It’s extremely hurtful, frankly,” said Nvidia chief executive Jensen Huang in a January interview about the “battle of [AI] narratives.”
Huang insisted that AI is suffering a “lot of damage” from “very well-respected people who have painted a doomer narrative, end-of-the-world narrative, science fiction narrative.”
OpenAI CEO Sam Altman has concurred, lamenting pushback against the “diffusion, the absorption” of AI in broader society. “Looking at what’s possible, it does feel sort of surprisingly slow,” he said at the recent Cisco AI Summit.
While AI boosters could argue we’re simply living under the tyranny of a vocal, AI-hating minority, evidence suggests the public’s aversion runs deep — and not just against the tech itself. As one Pew Research survey from 2025 found, about 60 percent of respondents said they’d like “more control” over how AI is used in their lives, while only 17 percent are “comfortable” with AI remaining in the hands of a few tech billionaires.
Consumer data paints an even more dramatic story. In mid-2025, when mainstream analyst firms were still parroting uncritical AI hype before investor sentiment turned cold in December, the number of US AI users who regularly paid for the privilege stood at a whopping 3 percent.
| Joe Wilkins, Tech CEOs Confused by Why Everybody Hates AI So Much
I know why people hate AI so much: the billionaires who are steering the economy into a dystopian future, or the ditch.
…
42%
The approximate share of tech-industry workers who said their direct manager expects AI use in day-to-day work as of last October, up from 32% just eight months before, according to a survey from AI consulting firm Section.
…
But what incentives?
Fifty-five percent of companies surveyed say they are offering no premiums, no bonuses, and no equity for employees who have built out their AI skillset. Only 14% offer higher base pay, 10% offer bonuses, and 9% offer long‑term incentives.
AI adoption in exchange for equity might be a sensible model, come to think of it.
…
Blunt-force trauma.
Thirty-six percent of chief marketing officers expect to reduce head count over the next 12 to 24 months “by utilizing AI or eliminating redundancies,” according to a new survey from executive search firm Spencer Stuart based on November interviews with approximately 90 CMOs and other marketing leaders.
At larger companies, the outlook was grimmer. Forty-seven percent of respondents at companies with $20 billion or more in revenue said they expect to cut staff over the next 12 to 24 months, and 32% already did so this year, the survey found.
The key factor is growing pressure to show returns on companies’ significant investments in AI, said Richard Sanderson, who leads Spencer Stuart’s marketing, sales and communications officer practice.
“We’re hearing, particularly from the largest … companies, that they have to deliver, and it may have to be through blunt-force of head-count reduction,” Sanderson said.
So, AI isn’t taking on enough work to justify the spend, so, of course, companies will cut jobs instead of cutting back on AI. Totally reasonable. Uh huh. Sure. Got it.
…
Job applicants want to know why they are rejected by AI.
Applicants for AI-screened jobs are suing a recruitment tool company, Eightfold AI, for failing to share the scores and results for their job applications. The novel twist is comparing this to credit ratings and the Fair Credit Reporting Act. It seems reasonable that they should know why the system is rejecting them, in some cases across thousands of applications.
| Stacy Cowley2
Seems fair to me. But the companies behind this software, like Eightfold AI, don’t want to share how they do what they do. Or maybe they don’t know how the AI is doing it.
…
And job hunters are using AI, too.
It’s easy to spot when candidates over-rely on AI, some employers said. Oftentimes, executive summaries will look eerily similar to each other, odd phrases that people wouldn’t normally use in conversation creep into descriptions, fancy vocabulary appears, and someone with entry-level experience uses language that indicates they are much more senior, they added.
It’s worse when they use auto-apply AI tools, which will find jobs, fill out applications and submit résumés on the candidate’s behalf, some employers said. Those tend to misinterpret some of the application questions and fill in the wrong information in inappropriate spots. If these applications were evaluated alone, employers say they’d have a harder time identifying AI usage. But when hundreds of applications all have the same issue, they said, AI’s role in it becomes obvious.
Joseph Eitner, chief human resources officer for New York-based investment firm Eaton Capital Management, said he has no issue with candidates turning to AI to add some keywords, clean up their grammar, or even help them think through a question on the application. But ultimately, he said, candidates should do the writing themselves, express their own ideas and personalities, and take the time to manually submit their applications.
“If that’s how you apply and how you work, I don’t want to hire you,” he said. AI auto-apply services are “snake oil. It’s a disservice to yourself and to the people you’re applying to.”
| Danielle Abril, Employers to job seekers: Your AI résumé isn’t fooling anyone3
I’ve been writing and speaking about the future of work for over 30 years. Along the way, I’ve consulted with and written for Microsoft, IBM, Google, Dell, Cisco, and dozens more. In 2007, I coined the term ‘hashtag’ (yes, hashtag), and many other terms that have shaped the way we think about work, like ‘work management’, ‘social tools’, ‘work media’, and others. If you want to help me continue my work, consider a paid subscription.