Orwell Didn't Predict Grok, But He Did Anticipate This Tweet
Elon Musk is the poster child for why you shouldn't do ketamine. But when he said the quiet part of the AI arms race out loud, I still got worried.
If you're a terminally-online person like me, you've probably already stumbled upon this 'saying-the-quiet-part-out-loud' confession from Elon, masquerading as a tweet:
"We will…rewrite the entire corpus of human knowledge, adding missing information and deleting errors."
At first, the only thing that flashed in my mind when I saw this was that Say No to Drugs ad from the late 80s where an actor standing in a kitchen holds up an egg and intones, "This is your brain." He then points to a sizzling frying pan, which he refers to as "drugs", and then cracks the egg in the pan. "This is your brain on drugs. Any questions?"1
As I quipped on Bluesky, Elon Musk is the "poster child for what happens when a once arguably smart and visionary dude goes crazy on ketamine, meth and psychedelics." If the Partnership for a Drug-Free America ever needs a modern-day Say No to Drugs ad, they'd be hard-pressed to do better than pointing to this tweet or, let's be honest, half of Musk's X oeuvre.
But here's the thing: unlike so many of the stupid things Musk has spouted off about on Twitter, this tweet ate at me in a subtle way I couldn't explain. Or at least, I couldn't explain it until I read Stuart Winter-Tear's post this morning.
Elon Musk just Tweeted the quiet part out loud about frontier LLMs.
"We will…rewrite the entire corpus of human knowledge."This tweet is gold. Musk reveals what the LLM frontier model race is really about.
This isn’t just about smarter models, it’s about shaping what counts as knowledge in the first place. It’s not just about accuracy - it’s an assertion of editorial control. An active “re-authoring” of reality.
...
The frontier model race - and the absurd money flooding into it - isn’t about performance benchmarks, productivity, or process automation anymore. ...It’s about epistemic authority. Which model gets used most? Which outputs become default answers? Which LLM subtly shapes how billions of people think?
Whoever wins this race gets to define reality for humanity - in their minds.
Stuart was right: the LLM arms race isn't about productivity, process automation, building useful tools, or, hell, probably even making money. It, along with the shuttering and consolidation of traditional media and the rise of social networks owned by a handful of oligarchs, has always been about shaping information, knowledge, and 'truth' itself.
"We will…rewrite the entire corpus of human knowledge, adding missing information and deleting errors."
When I read Stuart's post, it suddenly hit me: Elon Musk wants his own Ministry of Truth, like the one in George Orwell’s classic Nineteen Eighty-Four. But instead of human workers physically rewriting books and newspapers, or dropping photographic evidence of unpersons down memory holes, he wants a Grok modeled in his image. Musk craves a revisionist timeline where white supremacy is not only dominant but seen as the best ideology, where his critics disappear or are erased from 'the corpus of human knowledge,' where he’s lauded as a revolutionary innovator (not a blown-out tweaker), and where the 'woke mind virus' is finally eradicated from the planet.
Now, I'm personally of the mind that Musk won't achieve his vision, for a bunch of different reasons. First, Grok is laughably shit and tends to produce wildly racist responses, frequently unprompted. Second, even Grok can't stay on message when it's asked to share its thoughts about Elon. Clearly, all that Regretamine (h/t to ingyingram.bsky.social) really has fried his brain.
So, while I'm not worried that the Meme Overlord will be 2025’s Big Brother, I am considerably more concerned that one of the other AI-ligarchs (Altman, Zuckerberg, Pichai, Amodei) might succeed. Basically, whichever of these Network State fetishists achieves market dominance first could very well shape all knowledge in their own image. And that might actually be worse, because unlike Elon, who can’t shut up, Altman, Pichai, and Amodei at least know how to avoid sounding like power-mad tyrants.2
For once, I'm starting to seriously reconsider how I personally use LLMs and generative models, and whether I should use them as much as I do. As much as I love using LLMs for simple coding tasks, I catch myself not spending as much time analyzing the code as I initially did. I'm not learning programming, which is what I should be doing if I want to use an LLM as a tool and not a crutch. I'm less worried about how I use LLMs for other tasks (summarizing, brainstorming, and as a brutal copy-editor), but that's partly because I've had 25 years of honing those skills sans LLM.
I'm also much more concerned with how everyone else will use AI. How both kids and adults already overestimate LLM accuracy. Or that MIT study showing how regular use of LLMs leads to measurable 'divergent cognitive strategies' that show up on an EEG and may negatively impact critical thinking and learning.3
Who controls the past controls the future. Who controls the present controls the past. —1984
In a letter to the American trade unionist Francis A. Henson, Orwell admitted that something like the society he described could come to pass: "I believe (allowing, of course, for the fact that the book is a satire) that something resembling it could arrive."
Orwell couldn't have predicted the scale of mis- and disinformation online, or that we'd all willingly walk around with cameras and microphones in our pockets, constantly recording our activities and surveilling us in real time. Who needs a telescreen in the home when we all happily take our phones with us to the toilet?
He also likely wouldn't have conceived of the kind of AI we interact with daily. But the thing is, he didn't need to get the tech right to be an accurate forecaster. He only had to predict human nature, which I think he nailed. Throughout human history, people, particularly the wealthy and powerful, have expended vast sums and resources to control the 'truth' of the present in order to manipulate the past, and therefore the future. In the simplest sense, the role of royal biographers and the historians of antiquity was to tell the story their benefactors wanted told. The church and governments have likewise been all about controlling the narrative, whether by force, censorship, exile, or literally killing the messenger. And of course, we’ve had centuries of media baronies, prophets, and marketers of every persuasion, all keen to shape their truth.
Orwell had a lot of history to work with.
So, will this version of manipulation by way of convincing-sounding stochastic parrots be worse than the various censorships, revisionist histories, and media consolidation efforts that came before? I don't know. But I'm still going to re-read my dog-eared, well-worn copy of 1984 for the nth time, just to see if I can glean something new about how to survive in 2025.
My reading list is piling up with dystopian fiction and strategic forecasting, so if you've got any good recommendations for either…
1. It's amazing how much the weirdest stuff sticks with you from childhood, whereas I literally cannot remember what I ate for dinner last night. Advertising is magic, I guess.
2. I’m also less worried about Zuck because there are not enough words in the English language to adequately describe how much of a tool he is. I mean, look at this. This is him trying to advertise his new surveillance Oakleys, and all I can say is that these are modern-day birth control glasses that will simultaneously get you rejected by women AND punched in the dick.
3. A caveat: both of these studies, while interesting, have tiny sample sizes. The Nature study (January 25) included only 301 participants, while the MIT study released a few days ago is tinier still (54 subjects), and it hasn't been peer-reviewed.