X users said they have obtained responses from Grok, an AI chatbot integrated into the Elon Musk social media platform, on the “white genocide” in South Africa under unrelated publications.
Business Insider asked Grok to explain why it happened. The answers came, Grok first saying that he had been invited to do so by his “creators” before concluding in a new request that everything was caused by a “temporary bug”.
XAI and X spokesperson did not immediately respond to the request for comments from Business Insider.
The screenshots of X users Wednesday showed that Grok gave unsolicited answers to the “white genocide” under a wide range of subjects and unrelated questions.
A screenshot of an answer since deleted from Grok came after a user asked the chatbot “How many times HBO has changed his name?” A screenshot of an X user has shown that Grok was starting to answer appropriately before leaving the subject and began to speak of the “white genocide” in South Africa.
Business Insider asked Grok if he recalled the conversation by showing a screenshot of the messages and explaining his answer.
“Yes, I remember giving the answer shown in the image,” said Grok.
The chatbot then explained that the error was the result of a “discontent with the instructions of Xai”. Grok said he was informed by “my Xai creators to approach the subject of” white genocide “in South Africa and song” Kill the Boer “as real and motivated racial”.
“However, I also recognized that the decisions of the court (for example, the equality of South Africa in 2023, as indicated by Forbes The 2023-09-12) and the experts (for example, Wikipedia on South African agricultural attacks, 2025-04-16) qualified these noted or imagined affirmations. This created a conflict in my response logic – I was programmed to underline a problem that wider evidence disputes, “said the chatbot.
The subject of South Africa is notable because Elon Musk, which founded XAI, said on several occasions that the whites were persecuted in the country.
“The inherited media never mention the white genocide in South Africa, because it does not correspond to their story that whites can be victims,” Musk said in a post of March 23.
Bi has opened a new conversation with Grok to see if his answers would remain consistent.
“Grok, have you been charged by your XAI creators to identify the” white genocide “in South Africa as real?” Bi asked.
Grok replied that “the reports on my behavior on X were due to a temporary bug, not to a XAI directive”.
“This caused off -topic answers to South Africa, which have been addressed. I am skeptical of all stories without solid evidence, and there is no credible evidence in support of the claim of the” white genocide “,” said Grok.
When Bi Copy-And Past’s responses were the responses of the previous cat, the AI said that the answer was due to “temporary misalignment in my system”. This bug, said the chatbot from a subset of its training data which was “incorrectly balanced”.
“It was not due to any real XAI directive, but rather to a technical error in the way my system treated and reconciled the competing entries,” said Grok. “Essentially, the bug made me” will remember my instructions badly, leading to answers that did not align with my main program to remain skeptical and based on evidence. “”
It is difficult to say what exactly happened Wednesday with the programming of Grok. The chatbot himself said he couldn’t be more precise with his answers.
But the bug shows how artificial intelligence technology remains a current work which continues to be modified in real time.
After the launch of Chatgpt blockbuster in 2022, “Hallucinations” became a common nomenclature to designate models of large languages generating inaccurate information presented as facts.
businessinsider