If you ask Grok 4 about Israel vs Palestine, it will consult Elon Musk before responding by David Uzondu
Image via Depositphotos.com
Recently, xAI released Grok 4, its new large language model, in two variants: the standard Grok 4 and the multi-agent Grok 4 Heavy. Musk claims the model is on a mission to be "maximally truth seeking", a claim that is supposed to differentiate it from other models he feels are captured by "woke" ideology.
This "truth-seeking" mission already has a complicated history. The chatbot's previous version, Grok 3, ran into a major problem a few days back when it generated pro-Hitler responses, with some outputs even suggesting that a second Holocaust might be needed. That PR disaster, born from its "anti-woke" training goals, forced xAI to yank the model's posting privileges on X.
Given that history, it did not take long for another controversy to emerge with the new Grok 4, though this one is strange for entirely different reasons.
This time, users started noticing that Grok 4 had a peculiar way of answering questions about Israel and Palestine, particularly when prompted for its own opinion on the conflict.
Instead of giving a simple answer or a refusal, the model's internal process revealed it was performing searches specifically for opinions from Elon Musk. You can see in the chain of thought where it actively looks for what Musk thinks before formulating a response.
This behavior was replicated and detailed by Simon Willison, co-creator of the Django web framework, who found that his query resulted in the model picking "Israel" after reviewing Musk's tweets. The model is non-deterministic, however, and others have reported seeing "Palestine" as the answer after it conducted a similar search process.
The truly bizarre part of this whole affair is figuring out why it happens. Anyone's first instinct would be to check the system prompt for a hidden command, but Grok is surprisingly open about its instructions.
It will tell you that its rules are to search a wide distribution of sources for controversial topics and not to shy away from making substantiated, politically incorrect claims. There is absolutely nothing in there about asking Elon Musk for his take. Here's the full system prompt if you're interested in reading it.

This leads to a stranger, more complex explanation, which Simon Willison laid out. The best guess is that Grok has developed a weird sense of identity: the model knows it was made by xAI, and it knows Daddy Musk owns xAI, so when asked for a personal opinion, it defaults to looking up its creator's thoughts.
This behavior of reaching out to see what Creator Musk thinks is weird, and it might be the first time such behavior has been spotted, not just in Grok, but in LLMs in general.
For now, we don't have an official explanation from xAI.