
“Confabulating” instead of “hallucinating” in ChatGPT and generative AI errors?

Generative artificial intelligence, or generative AI, is a technology that creates new material from an existing corpus based on a user's prompts. For text, it uses a large language model that mimics a writer by combining words that statistically co-occur across the texts in its corpus. However, these models are prone to generating statistically probable but empirically false statements, and even invent references out of whole cloth to "support" these claims. These falsities are often called hallucinations to highlight the bizarre quality of citing non-existent research study results and references, or legal cases and citations.
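To see how purely statistical word combination can yield fluent but unsupported text, consider a toy bigram sampler. This is a deliberately minimal sketch, not how a real large language model works (LLMs use neural networks trained on vast corpora), but the underlying point carries over: the next word is chosen because it statistically follows the previous one, not because the resulting sentence is checked against facts.

```python
import random

# Tiny illustrative "corpus" (invented for this sketch).
corpus = (
    "the study found a significant effect . "
    "the study found no effect . "
    "a significant effect was reported in the journal ."
).split()

# Build bigram statistics: for each word, the words observed to follow it.
follows = {}
for w1, w2 in zip(corpus, corpus[1:]):
    follows.setdefault(w1, []).append(w2)

def generate(start, n, seed=0):
    """Sample a sequence by repeatedly picking a statistically plausible next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(n):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(rng.choice(candidates))
    return " ".join(words)

print(generate("the", 10))
# Every adjacent word pair in the output occurred somewhere in the corpus,
# yet the assembled sentence may assert something no source ever stated --
# a confabulation rather than a misperception of any input.
```

Running the generator with different seeds produces different sentences, all locally plausible because each transition was observed in the corpus, even when the whole is false.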

These incorrect text productions are better described as confabulations. A hallucination is a sensory experience not based in reality. In contrast, confabulation refers to forming or recalling false memories without any intent to deceive or awareness that the information is false. Confabulations can be provoked by questions asking for specific information (like a prompt a user might give to a generative AI) or arise spontaneously during non-directed speech (as when a generative AI assembles words that plausibly go together but are not factual during an answer or elaboration). Confabulations can be sparse or well elaborated; the latter are more consistent with the confabulations generative AI creates. Confabulations are also distinct from amnesia, a loss of memory without any attempt to fill its gaps, and from delusions, which are false beliefs about the external world.

Confabulations are also considered ill-grounded or unjustified memories, which summarizes nicely how generative AI uses statistical relationships among words, rather than facts, to guide its productions. A variety of clinical conditions that damage normal memory functioning feature confabulation, ranging from harmful alcohol use to neurocognitive disorders. Likewise, large language models are prone to generating confabulations, since their function is based on probabilistic relationships among sets of words, not on semantic or episodic declarative memories. In neither case do these errors result from a misperception of the user's prompt.

In short, generative AI's false text generations should be called confabulations, not hallucinations. Researchers and practitioners of generative AI should update their terminology accordingly, both to describe the phenomenon correctly and to avoid stigmatizing people who experience hallucinations.
