
Elon Musk’s AI startup xAI is facing backlash over controversial responses on the topic of 'white genocide' in South Africa, responses that reportedly surfaced even in replies to unrelated user queries.
Notably, xAI’s chatbot Grok allegedly stated that it had been instructed to address the issue. In response, OpenAI CEO Sam Altman remarked that he expected xAI to offer a full and transparent explanation of the unusual behavior.
xAI today provided an update on the recent incident involving its Grok chatbot. According to the company’s statement, an unauthorized modification was made to Grok’s system prompt on X, instructing the bot to deliver a specific response on a political topic. xAI stated that the change violated its internal policies and core values. The company conducted a thorough investigation and is now implementing measures to enhance Grok’s transparency and reliability.
System prompts are a critical component of any large language model (LLM)-based assistant. However, xAI did not disclose who was responsible for the unauthorized modification. The company emphasized that its existing code review process for prompt changes had been circumvented in this case. In response, xAI is introducing new processes to prevent employees from altering system prompts without proper review.
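For readers unfamiliar with the term, a system prompt is the hidden instruction block prepended to every conversation before any user input, which is why tampering with it can skew unrelated replies. A minimal sketch of the widely used chat-message layout (the prompt text and helper name here are illustrative, not Grok's actual prompt):

```python
def build_chat_request(system_prompt: str, user_message: str) -> list[dict]:
    """Assemble the message list sent to an LLM.

    The "system" role sets assistant-wide behavior and is applied
    to every reply, regardless of what the user asks.
    """
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]

messages = build_chat_request(
    "You are a helpful, truth-seeking assistant.",
    "Summarize today's news.",
)
```

Because the system message precedes the user's query in every request, an unauthorized edit to it changes the bot's behavior globally, which matches how the Grok responses appeared even on unrelated topics.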
We want to update you on an incident that happened with our Grok response bot on X yesterday.
— xAI (@xai) May 16, 2025
What happened:
On May 14 at approximately 3:15 AM PST, an unauthorized modification was made to the Grok response bot's prompt on X. This change, which directed Grok to provide a…
As part of its response, xAI is also publishing Grok’s system prompts on GitHub, marking the first time a frontier AI company has made such prompts public. The xAI team believes this move will help build public trust in Grok as a truth-seeking AI system.