IT HAPPENED: AN AI WROTE A STORY “GOOD” ENOUGH TO FOOL AN EDITOR
- Allan Dyen-Shapiro
I had predicted that the first time a large language model generated a short story good enough to fool an editor into not only thinking it human-written but also buying it, the genre would be one of the formulaic ones where very specific rules generate a standardizable product: Harlequin-style romance or cozy mystery, for example. I figured that speculative fiction would be the toughest nut to crack because it has no strict rules. I was wrong.
Today, in the email sent to free-tier subscribers of The Daily Tomorrow, a pro-rate-paying venue for speculative fiction that sends serialized stories directly to your inbox, the editor said that he ran no story today because he’d nearly been duped into buying a 100% AI-generated story. The only thing that tipped the editor off was email correspondence showing the “author” clearly had no grasp of standard written English.
Soon after ChatGPT’s first public release, I was at a Zoom-based meeting of our local fiction writing group, and the topic of whether an AI could write stories came up for the first time. I decided to put it to the test. With the rest of the group eagerly waiting, I gave the AI a prompt chosen largely in the expectation that it would generate some silly fun for the group: I asked it to write a 1000-word romance story in which the protagonist was a duck. While I admit I was amused by the name the AI chose for the male love interest (Drake), for the most part, the story was absolute dreck. It followed all the standard romance beats (in a formulaic fashion, as I had anticipated), sure, but nothing that could be considered creative was in evidence.
The AI could generate grammatical sentences; however, they were uninspired, with nothing resembling a character voice, let alone an authorial voice. Although at this short length it could generate a plot that evolved as a logical progression, there was no story: no character development, no arc that leads the reader on an emotional journey and says something unique about an underlying theme. I hadn’t asked it to write what science fiction excels at, a story that also provokes deep thought, shining a funhouse mirror on reality and asking the reader to step outside the story and question their own society, values, or existence. (Brechtian alienation is the theater term; cognitive estrangement is the phrase science fiction theory prefers.) As such, that wouldn’t be a fair criticism. Still, I couldn’t imagine ChatGPT being capable of such a thing. I decided ChatGPT was worthless, at least for creative endeavors, and never tried this again, even as a goof.
So, how did the Daily Tomorrow story succeed? It wrote weird fiction.
Weird fiction is the subgenre of speculative fiction where things aren’t supposed to make sense. Weird fiction readers enjoy being pulled into a lushly drawn world where rationality is put aside.
Much of the story was written to simulate a dream state. Dream logic is not mathematical logic; the progression reflects psychology that ignores the daytime rules of logical progression. Nearly all such stories qualify as “weird.”
If the goal of this subgenre is to subvert logic, how can an editor tell gold from garbage? As the fictional David St. Hubbins agonizes in the movie This Is Spinal Tap, “There’s such a fine line between stupid and clever.”
Making the editor’s job even harder, non-Western traditions have different rules for storytelling. An editor who demands strict adherence to the “rules” (laid out in craft books; one recommended to me when I first started writing was Debra Dixon’s GMC: Goal, Motivation and Conflict: The Building Blocks of Good Fiction) risks privileging colonialist narratives and repressing the voices of those other than white men. Yet demanding that every small press editor have the equivalent of a Ph.D. in non-Western storytelling traditions is unrealistic.
I don’t have an answer to this conundrum (aside from being glad I don’t earn my living as an editor). The Daily Tomorrow editor says he relies on a version of Goethe’s three questions for critiquing art: “What was the creator trying to achieve?” “Was the creator successful?” and “Was it worth doing?” The editor concludes that this story failed the test because an AI wasn’t “trying” to create anything; yet when he initially assumed it was human-written, he thought the writing was great.
The editor’s answer suffices for stories that follow Darko Suvin’s concept of cognitive estrangement: if the story says nothing about the human condition, it’s worthless. It also covers stories where the goal is “escapism” or “fun” (desired by several prominent publications). Or literary fiction where the story is a puzzle to be solved by the reader (e.g., David Mitchell’s Cloud Atlas, which I loved). Or “upmarket” fiction where the primary goal is to allow the reader to experience tension (following the advice from Dwight Swain’s Techniques of the Selling Writer or anything more recent that builds on his recipes). But if the goal is to allow the reader to lose themselves in a story and not worry about anything making sense, the recipe falls short.
So, what would I do if I were an editor? Perhaps I’d engage the author in email correspondence, asking the first of Goethe’s questions. This “author” was, as the Daily Tomorrow editor points out, too dumb to use AI to write his emails, so he was caught. But many wouldn’t be. ChatGPT excels at crafting bullshit answers to questions like this.
It’s now entirely possible that an AI-crafted piece would pass Goethe’s test. Clearly, something else is required. I propose a fourth question: “How unique is the story?”
In a way, this is an unfair test. Top journals routinely publish space opera, or secondary world fantasy, or space-based hard science fiction (among other things) where my first reaction as a reader is that there’s nothing new, just one more example of something already done to death.
Here enters capitalism. If an American magazine fails to at least break even, it tends to disappear. Putting the name of a prominent author on the cover, even if their story is mediocre, helps an issue to sell. Including accessible stories that appeal to the magazine’s reader base, even if they aren’t unique, does too. If art is beholden to the rules of capitalism, AI-created art is inevitable.
The American answer to this problem is to tell readers that they should support these magazines with subscriptions and donations. With fewer and fewer Americans reading fiction (one recent survey put the percentage of Americans who do at 2%), this route will lead to most of these magazines disappearing.
Canadian magazines are often supported by government grants. Perhaps the arts are an endeavor more worthy of government support than endless drone and missile strikes on an ever-increasing number of “enemies”? I see Ursula K. Le Guin’s The Dispossessed coming to life, with Canada as Anarres and the US as Urras.
Weird fiction editors, this is your problem to solve. I can’t imagine any of you relishing becoming the lapdog of the tech bros, relegated to publishing their AI slop. I don’t enjoy most weird fiction; I have never tried writing a pure form of it. The weird fiction I have liked had enough structure and/or internal logic to satisfy Goethe’s three conditions as well as the one I added. But I am just one person. I can’t argue that my views are any better than those of weird fiction enthusiasts, any more than I can act superior to those who purposely write fiction for Americans who read below a high school level. (I’m looking at you, Stephen King.)
I’d love it if folks reading this would share their views either as comments on this post or on the social media platforms where I share the link. I don’t have all the answers. I’d like to hear yours.