What Are People For?
Stowe Boyd | AI Hallucinations | Cognitive Bias Cheat Sheet | Productivity versus Engagement
The central question of 2025 will be: What are people for in a world that does not need their labor, and where only a minority are needed to guide the 'bot-based economy?
| Stowe Boyd, Pew Research: The Future of AI, Robotics and Jobs
Way back in 2014, I was one of a large panel of ‘experts’ selected by the Pew Research folks to answer some questions about the impact of AI and robots on work, looking ahead to 2025.
Here we are, ten years after.
I think the question I formulated has stood up better than some of my other pronouncements. If we are headed to a world where tens (hundreds?) of millions of people will lose their jobs to AI-driven automation, what are people for?
I was influenced by two Oxford University researchers — Carl Benedikt Frey and Michael A. Osborne — who had recently (2013) released their controversial report — The Future of Employment: How Susceptible Are Jobs To Computerization? — analyzing that impact by job category:
We examine how susceptible jobs are to computerisation. To assess this, we begin by implementing a novel methodology to estimate the probability of computerisation for 702 detailed occupations, using a Gaussian process classifier. Based on these estimates, we examine expected impacts of future computerisation on US labour market outcomes, with the primary objective of analysing the number of jobs at risk and the relationship between an occupation’s probability of computerisation, wages and educational attainment. According to our estimates, about 47 percent of total US employment is at risk. We further provide evidence that wages and educational attainment exhibit a strong negative relationship with an occupation’s probability of computerisation.
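For the technically curious, here is a minimal sketch of the kind of approach the abstract describes: a Gaussian process classifier trained on a small set of hand-labeled occupations and then used to score the probability of computerisation for the rest. Everything below — the feature names, the toy labeling rule, the data — is synthetic and invented for illustration; it is not Frey and Osborne's actual O*NET feature set, labels, or pipeline.

```python
# Illustrative sketch of a Frey-Osborne style analysis: fit a Gaussian process
# classifier on a hand-labeled subset of occupations, then estimate the
# probability of computerisation for all of them. All data here is synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Hypothetical O*NET-style scores per occupation, e.g. "manual dexterity",
# "originality", "social perceptiveness" -- three made-up features.
n_occupations = 702
X = rng.uniform(0, 1, size=(n_occupations, 3))

# Pretend a small subset has been hand-labeled as automatable (1) or not (0),
# using an arbitrary toy rule purely so the example runs end to end.
labeled_idx = rng.choice(n_occupations, size=70, replace=False)
y_labeled = (X[labeled_idx, 0] > X[labeled_idx].mean(axis=1)).astype(int)

# Fit the Gaussian process classifier on the labeled subset.
gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0), random_state=0)
gpc.fit(X[labeled_idx], y_labeled)

# Probability of computerisation for every occupation.
p_computerisation = gpc.predict_proba(X)[:, 1]

# Share of occupations at "high risk" (probability > 0.7), unweighted here;
# the paper weights by employment, which this sketch does not attempt.
print(f"Share at high risk: {np.mean(p_computerisation > 0.7):.0%}")
```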
I don’t know if an occupation-by-occupation analysis would be helpful today. My sense is that AI has already become so diffuse and distributed that we’d find it hard to segregate its impact by occupation. But of course, that’s what we’d all like to see. That’s why these next bits are so perfectly apt, and so comprehensively humorous.
…
The Little AI Study That Wasn’t There
Of course, you know it had to happen.
One 26-year-old researcher, Aidan Toner-Rodgers, wrote an article called Artificial Intelligence, Scientific Discovery, and Product Innovation, posted it on arXiv, and submitted it to The Quarterly Journal of Economics, where it was on track to be published. The article covered all the methodological bases, with data drawn from 1,000 materials researchers, and it was heralded as a breakthrough by notable economists like Daron Acemoglu (2024 Nobel laureate in economics) and David Autor. Except it was all ginned up by AI.
Ben Shindel has a great write-up:
Unfortunately for everyone involved, the work is entirely fraudulent. MIT put out a press release this morning stating that they had conducted an internal, confidential review and that they have “no confidence in the veracity of the research contained in the paper.” The WSJ has covered this development as well. The econ department at MIT sent out an internal email so direly-worded on the matter that on first glance, students reading the email had assumed someone had died.
In retrospect, there had been omens and portents. I wish I had read the article at the time of publication, because I suspect my BS detector would have risen to an 11 out of 10 if I’d given it a close read. It really is the perfect subject for this blog: a fraudulent preprint called “Artificial Intelligence, Scientific Discovery, and Product Innovation,” with a focus on materials science research.
The only good news is that this was caught before we traveled too far down the path.
But it raises the question: how much spurious science is being hallucinated into existence by well-guided AIs?
I Would Have Read That Book
Along similar lines, I heard through Bluesky that the Chicago Sun-Times published a summer reading list where fake books hallucinated by some AI were attributed to real authors.
I learned from r/chicago’s xxxlovelit that only five of the fifteen books are real:
Brit Bennett doesn't have a book titled Hurricane Season. The fuck? Isabel Allende has no book named Tidewater Dreams. The Last Algorithm by Andy Weir, The Collector's Piece by Taylor Jenkins Reid, Nightshade Market by Min Jin Lee, The Longest Day by Rumaan Alam, Boiling Point by Rebecca Makkai, Migrations by Maggie O'Farrell, The Rainmakers by Percival Everett, and Salt and Honey by Delia Owens are all books that DO NOT EXIST!!!
[How the fuck did] the editors at the Sun Times not catch this? Do they use AI consistently in their work?
I would have been eager to read at least one of those fictional books:
"Nightshade Market" by Min Jin Lee - The author of "Pachinko" delivers a riveting tale set in Seoul's underground economy. Following three women whose paths intersect in an illegal night market, the novel examines class, gender and the shadow economies beneath prosperous societies.
Perhaps someone can point an AI at that description and get at least a first draft?
This is likely linked to the recent head chopping at the paper, as reported by David Roeder:
Thirty employees of the Chicago Sun-Times — around 1 in 5 on its payroll — have agreed to resign under buyout terms the paper’s nonprofit ownership offered in hopes of stanching persistent financial deficits.
The departures consist mostly of writers and editors — many with decades of experience. The cuts are the most drastic the oft-imperiled Sun-Times has faced in several years and will bring about recognizable changes to its content, although top leaders said the buyouts ensure there will be no layoffs in the near future.
Those leaving include most of the paper’s editorial board. Editorial Page Editor Lorraine Forte and board members Tom Frisbie and Marlen Garcia are leaving, causing staff speculation that the Sun-Times will no longer post editorials reflecting board members’ views of various issues. Company leadership would not confirm that plan. If implemented, it would be a major break with newspaper tradition.
So the answer to xxxlovelit’s question is that there were no editors left standing to fact-check the summer reading list. And of course, we now live in a world where summer reading lists need to be fact-checked.
Elsewhere
Cognitive Bias Cheat Sheet
I wrote this in 2018, in Know Your Limitations, but couldn’t find it here. (Now published.)
…
Buster Benson has done us a great service by organizing the world's tally of cognitive biases into a few manageable categories in his Cognitive Bias Cheat Sheet, because, as he puts it, 'thinking is hard'. How true.
His post is exhaustive, but I will just reproduce his categorization, which is based on four classes of mental shortcuts and their rationale (I have reshuffled his explanations into a more compact format):
Information overload sucks, so we aggressively filter. Noise becomes signal. In order to avoid drowning in information overload, our brains need to skim and filter insane amounts of information and quickly, almost effortlessly, decide which few things in that firehose are actually important and call those out. We don’t see everything. Some of the information we filter out is actually useful and important. (Examples: Baader-Meinhof Phenomenon, and Confirmation bias.)
Lack of meaning is confusing, so we fill in the gaps. Signal becomes a story. In order to construct meaning out of the bits and pieces of information that come to our attention, we need to fill in the gaps, and map it all to our existing mental models. In the meantime we also need to make sure that it all stays relatively stable and as accurate as possible. Our search for meaning can conjure illusions. We sometimes imagine details that were filled in by our assumptions, and construct meaning and stories that aren’t really there. (Examples: Gambler’s fallacy and Stereotyping.)
Need to act fast lest we lose our chance, so we jump to conclusions. Stories become decisions. In order to act fast, our brains need to make split-second decisions that could impact our chances for survival, security, or success, and feel confident that we can make things happen. Quick decisions can be seriously flawed. Some of the quick reactions and decisions we jump to are unfair, self-serving, and counter-productive. (Examples: Dunning-Kruger effect and Sunk cost fallacy.)
This isn’t getting easier, so we try to remember the important bits. Decisions inform our mental models of the world. And in order to keep doing all of this as efficiently as possible, our brains need to remember the most important and useful bits of new information and inform the other systems so they can adapt and improve over time, but [to remember] no more than that. Our memory reinforces errors. Some of the stuff we remember for later just makes all of the above systems more biased, and more damaging to our thought processes. (Examples: Suggestibility and Next-in-line Effect.)
Supplemental information for sponsors past the paywall.