when in time would an LLM be "magic"?
an exploration of how far back in time you'd need to go for an OpenAI product to be considered magic
OpenAI's latest release posed the same question to each generation of ChatGPT, showing the quality of the responses improving with each iteration.
As impressive as the evolution is, that got me thinking about this quote:
“Any sufficiently advanced technology is indistinguishable from magic.”
- Arthur C. Clarke
So I asked myself: if you were to abstract away the actual UX/UI and take only the responses (say, spoken aloud from a cave mouth, or whatever you think would avoid questions of literacy, screens, anachronistic knowledge, etc.), how far back in time would you need to go for these LLMs to be considered “magic”?
I’m not talking about knowledge transfer from the future here; obviously that would confer an advantage in any era.
GPT-2
I think GPT-2 would maybe never be considered “magic” except by very superstitious civilizations (say, the 11th century). This period is when some of the founding ideas of modern civilization were first taking shape (the first assembly line in Venice, the first universities, etc.). The ability to autocomplete biblical passages, converse in multiple languages, and even influence religious experiences would have been utterly alien. In a society steeped in superstition, where the mysterious was often invoked to explain the inexplicable, such an intelligence would likely be regarded as an oracle or even a divine messenger.
GPT-3
Starting in about 1850, GPT-3’s output could be construed as magical. During this time, society was in the throes of the Industrial Revolution, a period characterized by radical invention. GPT-3, with its solid knowledge and near-limitless capacity to generate nuanced language, could be interpreted as a mechanical marvel, almost magical in its persistence and versatility. There is an argument that this holds in 1950 as well, but I suspect the “magical” quality would have been absent during the wartime period, and by 1950 an LLM’s ability to articulate scientific output might have been dwarfed by techno-optimism in general. Still, in an era marked by burgeoning scientific inquiry, GPT-3’s ability to articulate and expound upon scientific concepts with clarity might have been seen as a form of magic: a tool that synthesizes and transmits knowledge effortlessly.
GPT-3.5
In the mid-1970s, computers were beginning to permeate academic and industrial environments, yet they were still largely enigmatic to the general public. GPT-3.5, emerging in this context, would have felt like magic. With personal computers and early digital communication on the horizon, GPT-3.5’s ability to converse naturally would have been revolutionary, pushing the boundaries of what people thought machines could do.
GPT-4
By the 2010s, technology had become an integral part of daily life. Yet even in an age of smartphones and constant connectivity, GPT-4 stands out for near-magical prowess. Stripped of its digital veneer, its ability to interpret and generate context-rich responses could be seen as magic. In an era when artificial intelligence was just beginning to touch everyday experiences, GPT-4’s capacity to assist in scientific research, creative writing, and everyday problem-solving underscores a versatility that, even today, borders on the magical when you consider how it distills knowledge.
GPT-5
Looking ahead, GPT-5 might be the first model that feels as if it belongs to a civilization far beyond our current time. Much like how past generations marveled at technologies that were later demystified by science, GPT-5 may well be viewed by contemporary society as an oracle from the future, its workings beyond the grasp of today’s understanding.
Feeling AGI as “Magic”
I recently attended an event where we spoke about agents for some five hours, and the overwhelming feeling was that the “magic” moment is nearer than we think. In my continued struggle to really nail down a definition for AGI, I think this is another neat perspective: when the model feels like magic.