The Fine Art of AI Literature

In recent months, literature created with the assistance, or with the full participation, of artificial intelligence (in the form of so-called large language models) has become both a subject of debate in the field of electronic literature and an area of practical application.

Language AI models are becoming active creative platforms, allowing authors of electronic literature to explore this still uncharted territory. One example of the debate on the effects of artificial intelligence on literary production, artistic education, and even the creation of text in general is the discussion in the Electronic Book Review initiated by Matthew Kirschenbaum's essay in the popular magazine The Atlantic, "Prepare for the Textpocalypse." Kirschenbaum sketches scenarios in which conversational systems feed on text and respond to prompts written not just by humans but by other machines. From there, the researcher suggests, it is a short step to the exponential growth of a "gray goo" of machine-generated text.

John Cayley, Scott Rettberg, and Davin Heckman respond to this framing in ebr. Cayley places systems like ChatGPT in the context of the philosophy of language, the history of text generators, and e-literature. The poet treats artistic work with generative models, especially image- and illustration-making systems such as DALL-E or Midjourney, as fundamentally transmedial practices that assign an important role to language. "Modelit," as Cayley calls the new genre, faces two major problems that follow from the "black box" systems it relies on: first, the hermeticism of its own rules of production, and second, copyright infringement not yet covered by law. The statements, texts, and whole books found on the internet on which models like ChatGPT have been trained can be considered intercepted, hijacked, and stolen. The builders of these models, Cayley warns, owe compensation to their creators, especially if the services turn out to be available only on commercial terms in the future.

Davin Heckman examines the political aspects of the discourse on large language models. In today's highly polarized context of American academia and society, it is difficult to be at once an enthusiast for the new medium and a voice of moral opposition to it. Language models of artificial intelligence should therefore be treated as a classic pharmakon, that is, both remedy and poison, as a "transitional object" that "demystifies one thing in the process of mystifying another," Heckman urges.

For Scott Rettberg, linguistic artificial intelligence introduces new forms of cyborg authorship into the circulation of digital literature, creating a special kind of creative environment. Within it, human and machine conduct a narrative game with human language, understood as a probabilistic cognitive system. The task of creators and teachers will be to understand the affordances and limitations of these new ways of writing. Rettberg points to Stephen Wolfram's recommendable book-length blog post, What Is ChatGPT Doing … and Why Does It Work?, in which the author tries to open up the hermeticism Cayley describes by walking through the text-generating processes inside the "black box" (the title illustration is Wolfram's graphic rendering of the "computational irreducibility" of textual artificial intelligence).
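As a toy illustration of the kind of process Wolfram walks through, in which each next word is sampled from a conditional probability distribution, the Python sketch below replaces a trained model with a hand-written probability table; it is an analogy for the generation loop, not a description of ChatGPT's internals.

```python
import random

# A hand-written table of next-word probabilities stands in for a trained language model.
NEXT_WORD = {
    "the":     [("machine", 0.5), ("text", 0.5)],
    "machine": [("writes", 0.7), ("answers", 0.3)],
    "text":    [("grows", 0.6), ("ends", 0.4)],
    "writes":  [("the", 1.0)],
    "answers": [("the", 1.0)],
    "grows":   [("the", 1.0)],
}

def generate(word="the", max_words=12):
    """Repeatedly sample the next word from its conditional distribution."""
    output = [word]
    for _ in range(max_words):
        choices = NEXT_WORD.get(word)
        if not choices:          # no known continuation: stop
            break
        words, weights = zip(*choices)
        word = random.choices(words, weights=weights)[0]
        if word == "ends":       # a stand-in for the model's end-of-text token
            break
        output.append(word)
    return " ".join(output)

print(generate())
```

Each run produces a different word chain, which is the point: the text is not retrieved but assembled step by step from probabilities, only at a vastly larger scale and with far richer context in a real model.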

In response to Rettberg's call to understand the affordances and limitations of the new medium, Mark Marino has published a series of practical tips for writing with systems like ChatGPT on Medium.com. The key to using artificial intelligence effectively is a properly written prompt that gives the model context, purpose, and direction for our text. In the article "Secrets of Writing Chat-GPT Prompts," Marino advises that we first endow the impersonal interlocutor with personality traits: we tell the machine who it is and assign it a role, which in turn entails a certain tone, diction, and narratorial attitude. It is worth defining good and bad writing so the program knows what to avoid; worth showing a sample of the text on which the output should be modeled; and worth anchoring it in quotations and facts (historical, cultural, statistical). Such measures should go a long way toward preventing the so-called "hallucinations" of large language models. Marino backs his tips with his own experience and his own publication: the book Hallucinate This!, a brilliant, highly readable postmodernist-cyberpunk metafiction written "in collaboration" with ChatGPT.
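To make the advice concrete, here is a minimal sketch of a Marino-style prompt, written against the OpenAI Python client (v1.x); the persona, sample sentence, fact, and model name are illustrative placeholders rather than examples taken from Marino's article.

```python
# A minimal sketch of a Marino-style prompt, using the OpenAI Python client (v1.x).
# All concrete details (persona, sample text, fact, model name) are placeholders.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

system_prompt = (
    # 1. Give the impersonal interlocutor a persona and a role.
    "You are a weary noir detective dictating your own case notes. "
    # 2. Define good and bad writing so the model knows what to avoid.
    "Good writing here is concrete, ironic, and rhythmic; "
    "bad writing is vague, cliché-ridden, and padded with adjectives. Avoid it."
)

user_prompt = (
    # 3. Show a sample of the text the output should be modeled on.
    "Model the voice on this sample: 'The rain filed a report on the window, "
    "and I pretended not to read it.'\n"
    # 4. Anchor the scene in a checkable fact.
    "Ground the scene in this fact: the 1929 stock market crash began on October 24.\n"
    "Write the opening paragraph of a chapter set during that week."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ],
)

print(response.choices[0].message.content)
```

The same division of labor applies in the chat interface itself: the "system" half of the sketch is what Marino asks us to state up front (role, tone, standards), while the "user" half supplies the sample and the facts that keep the model from inventing its own.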