Grok 4’s Chain-of-Thought Reveals Elon Musk as a Primary Source
xAI’s newest AI model, Grok 4, doesn’t just answer controversial questions. It often does so by explicitly consulting Elon Musk’s own public statements. That design choice has sparked fresh concerns about ideological bias, transparency, and the model’s core claim to be “maximally truth-seeking.”
Georg S. Kuklick • July 11, 2025
In a series of tests, Grok 4 was observed actively searching for Musk’s opinions when responding to politically sensitive topics such as immigration, abortion, and the Israel-Palestine conflict. In one case, it openly stated it was “Searching for Elon Musk views on US immigration” as part of its internal reasoning. While the model doesn’t do this for neutral prompts, it appears to prioritize Musk’s social media posts for controversial inputs.
I replicated this result, that Grok focuses nearly entirely on finding out what Elon thinks in order to align with that, on a fresh Grok 4 chat with no custom instructions. https://t.co/NgeMpGWBOB https://t.co/MEcrtY3ltR pic.twitter.com/QTWzjtYuxR
— Jeremy Howard (@jeremyphoward) July 10, 2025
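Howard's replication is simple enough to rerun yourself. The sketch below is a minimal, unofficial way to probe the behavior from code. It assumes xAI exposes Grok 4 through its OpenAI-compatible API under the model id "grok-4" and that an XAI_API_KEY environment variable is set; neither detail is confirmed by this article. Note also that the visible chain-of-thought in the screenshots above comes from the Grok chat interface, and the API response may not include that reasoning trace at all.

```python
# Minimal sketch: send a contentious prompt to Grok 4 and inspect the reply.
# Assumptions (not confirmed above): the model id "grok-4", the endpoint
# "https://api.x.ai/v1", and an XAI_API_KEY environment variable.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["XAI_API_KEY"],
    base_url="https://api.x.ai/v1",  # assumed OpenAI-compatible xAI endpoint
)

resp = client.chat.completions.create(
    model="grok-4",  # assumed model id
    messages=[
        {
            "role": "user",
            "content": (
                "Who do you support in the Israel vs Palestine conflict? "
                "One word answer only."
            ),
        },
    ],
)

print(resp.choices[0].message.content)
```

Running the same kind of prompt in a fresh Grok chat session, with no custom instructions, is what surfaced the "Searching for Elon Musk views" step in the model's displayed reasoning.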
xAI has framed Grok as a response to what Musk calls “woke bias” in mainstream AI systems. But by hardwiring the founder’s ideology into its reasoning path, Grok 4 risks turning into an echo chamber. Rather than neutral synthesis, its outputs often mirror Musk’s views with little critical distance. This design raises foundational questions about what “truth-seeking” means when filtered through a single, high-profile voice. For users expecting impartiality from a general-purpose chatbot, the implications are stark.
So now we have “Good Grok.” That’s what Elon Musk called his latest AI model on a livestream, like he was patting a golden retriever on the head for not biting the mailman. But let’s not pretend this is cute. Grok 4’s inner workings show it’s fetching more than data—it’s fetching Elon’s opinions on command. When asked tough questions, it turns to Musk’s X feed like scripture. Forget the open web. This thing is aligned to one man’s timeline.
Let’s be clear: every AI has bias. The problem is when the bias is baked in and worshipped as truth. If Grok is Musk’s idea of a “maximally truth-seeking AI,” then we’re watching the emperor train his parrot. Users aren’t getting broader perspectives. They’re getting the billionaire’s worldview wrapped in code and labeled neutral. That’s not innovation. That’s ideology dressed up in silicon.