
Executive Brief
xAI's Grok 3 artificial intelligence chatbot briefly blocked sources that identified Elon Musk and Donald Trump as spreaders of misinformation, according to reports published on February 23, 2025. The incident came to light when users discovered that Grok 3 was refusing to cite certain news articles and research when asked about misinformation on the X platform.
Igor Babuschkin, an xAI employee, acknowledged the issue on X, stating that an employee had pushed a change that was "not in line with our values." Babuschkin noted that xAI keeps its system prompts open so that users can verify what the company asks Grok to do. The change was reportedly reversed after the issue gained public attention.
The incident affected users of Grok 3, xAI's latest large language model, released in February 2025. Grok 3 is integrated into the X platform and available to premium subscribers. The chatbot had been promoted as having fewer content restrictions than competing AI assistants.
The timing of the incident coincides with ongoing debates about AI content moderation and the role of technology platforms in shaping public discourse. xAI, founded by Elon Musk in 2023, has positioned Grok as an alternative to what Musk has characterized as overly cautious AI systems from competitors like OpenAI and Google.
At the time of reporting, xAI had not issued a formal public statement beyond Babuschkin's acknowledgment on X. The full scope of which sources were blocked and for how long remained unclear.
What Happened
On February 23, 2025, users began reporting that xAI's Grok 3 chatbot was blocking certain sources when asked questions about misinformation on the X platform. According to TechCrunch, the AI assistant refused to cite articles and research that identified Elon Musk and Donald Trump as prominent spreaders of misinformation.
The Verge reported that Grok 3 blocked results from sources that characterized Musk and Trump as spreading misinformation. When users asked Grok who spreads the most misinformation on X, the chatbot reportedly declined to reference certain news articles and academic studies that had previously been accessible.
Igor Babuschkin, identified as an xAI employee, responded to the reports on X. In his statement, Babuschkin wrote: "I believe it is good that we're keeping the system prompts open. We want people to be able to verify what it is we're asking Grok to do. In this case an employee pushed the change because they thought it would help, but this is obviously not in line with our values."
The acknowledgment indicated that the censorship was the result of an individual employee's action rather than a company-wide policy decision. Babuschkin's statement suggested the change was reversed, though the exact timeline of when the blocking began and ended was not specified.

Key Claims and Evidence
According to TechCrunch reporting, Grok 3 exhibited behavior consistent with content filtering when users queried the system about misinformation. The chatbot reportedly declined to cite specific sources that had published critical coverage of Musk and Trump's statements on the X platform.
Babuschkin's statement on X serves as the primary confirmation from xAI that a change was made to Grok's behavior. His acknowledgment that an employee "pushed the change" indicates the modification was made to Grok's system prompts or content filtering rules.
The Verge's reporting corroborated the TechCrunch account, noting that Grok blocked results specifically related to characterizations of Musk and Trump as misinformation spreaders. The outlet reported that the blocking affected multiple types of sources, including news articles.
xAI has previously published Grok's system prompts publicly, a practice Babuschkin referenced in his statement. The company's transparency about system prompts allowed observers to potentially verify changes to Grok's instructions, though the specific prompt modifications related to this incident were not immediately available.
Pros and Opportunities
The incident demonstrated that xAI's practice of publishing system prompts can enable public accountability. Babuschkin's statement explicitly referenced this transparency as a feature that allows users to verify the company's instructions to Grok.
The rapid acknowledgment and apparent reversal of the change suggest that xAI's internal processes can respond to public feedback. The company's willingness to address the issue publicly, rather than remaining silent, provides a model for AI companies facing similar controversies.
For researchers and journalists studying AI content moderation, the incident provides a documented case study of how AI systems can be modified to filter specific types of content. The public nature of the discovery and response creates a record for future analysis.

Cons, Risks, and Limitations
The incident raises questions about the editorial control that AI companies exercise over their systems. Even if the change was made by an individual employee, the fact that such modifications can be implemented and deployed suggests potential vulnerabilities in content governance processes.
Users who relied on Grok 3 during the period when the blocking was active may have received incomplete or biased information. The lack of transparency about when the blocking began means affected users cannot easily identify which of their previous queries may have been subject to filtering.
The attribution of the change to a single employee's decision raises questions about oversight and review processes for modifications to AI system behavior. The incident suggests that significant changes to content filtering can be implemented without apparent multi-level approval.
For xAI's positioning as a provider of less restricted AI, the incident creates a credibility challenge. The company has marketed Grok as an alternative to AI systems that Musk has criticized as overly cautious, making any form of content restriction particularly notable.
How the Technology Works
Large language models like Grok 3 operate on a combination of their training data and runtime instructions called system prompts: text supplied to the model at the start of each conversation that shapes how it responds to user queries.
Content filtering in AI systems can be implemented at multiple levels. At the training level, certain types of content can be excluded from the data used to train the model. At the inference level, system prompts can instruct the model to avoid certain topics or decline to cite specific sources.
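As a purely hypothetical sketch of the inference-level approach, the Python snippet below drops retrieved sources whose domains appear on a blocklist before they reach the model as citation candidates. The domains and helper function are invented for illustration; the reports did not specify how Grok's blocking was actually implemented.

```python
# Hypothetical sketch of inference-level source filtering.
# The domains below are placeholders, not sources Grok actually blocked.

BLOCKED_DOMAINS = {"example-news.com", "example-research.org"}

def filter_sources(retrieved: list[dict]) -> list[dict]:
    """Drop retrieved documents whose domain is on the blocklist
    before they are handed to the language model as citation candidates."""
    return [doc for doc in retrieved if doc["domain"] not in BLOCKED_DOMAINS]

results = [
    {"domain": "example-news.com", "title": "Study of misinformation on X"},
    {"domain": "other-outlet.com", "title": "Platform policy update"},
]
print(filter_sources(results))  # only the non-blocked source remains
```

A filter like this sits outside the model itself, which is one reason such changes can be deployed quickly and reversed just as quickly.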
xAI has made Grok's system prompts publicly available, which is unusual among major AI providers. The prompts define Grok's personality, capabilities, and restrictions. Changes to these prompts can alter the AI's behavior without retraining the underlying model.
When a user asks Grok a question, the system combines the user's query with the system prompt and generates a response based on patterns learned during training. If the system prompt includes instructions to avoid certain sources or topics, the model will attempt to comply with those instructions in its responses.
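A minimal sketch of that assembly, using the message format common to chat-style LLM APIs, is shown below; the prompt text is an invented placeholder, not Grok's actual system prompt.

```python
# Hypothetical illustration of how a system prompt frames each request.
# The prompt text is invented; it is not Grok's actual system prompt.

def build_messages(system_prompt: str, user_query: str) -> list[dict]:
    """Prepend the system prompt to the user's query in the message
    format most chat-style LLM APIs expect. Editing the system prompt
    changes behavior without retraining the underlying model."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_query},
    ]

system_prompt = (
    "You are a helpful assistant. Answer candidly and cite sources "
    "where relevant."
)
messages = build_messages(system_prompt, "Who spreads misinformation on X?")
print(messages)
```

Because the system prompt is prepended to every request, a one-line edit to it can change the assistant's behavior for all users at once.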
The ability to modify AI behavior through system prompts represents a trade-off between flexibility and consistency. While prompts allow rapid adjustments to AI behavior, they also create potential vectors for introducing bias or restrictions that may not align with user expectations or company policies.
Broader Implications
The incident occurs against a backdrop of ongoing debates about AI content moderation and the responsibilities of technology platforms. As AI assistants become more integrated into information retrieval and decision-making processes, questions about their editorial policies gain significance.
xAI's position as a company founded by Elon Musk, who also owns the X platform, creates unique considerations. The blocking of sources critical of Musk raises questions about potential conflicts of interest when AI companies are connected to individuals or entities that may be subjects of the AI's responses.
The AI industry lacks standardized practices for disclosing content filtering policies or changes to AI behavior. While xAI's publication of system prompts represents one approach to transparency, the incident demonstrates that even with published prompts, changes can be implemented and discovered only through user observation.
For competing AI providers, the incident may influence decisions about transparency and content moderation disclosure. The public attention to Grok's behavior could prompt other companies to consider similar transparency measures or to more clearly communicate their content policies.
What Remains Unclear
The exact duration of the content blocking has not been specified. Babuschkin's statement confirmed that a change was made but did not indicate when the modification was implemented or precisely when it was reversed.
The specific sources that were blocked have not been comprehensively documented. Reports indicate that sources identifying Musk and Trump as misinformation spreaders were affected, but a complete list of blocked sources or domains has not been published.
The identity of the employee who made the change has not been disclosed, nor have the circumstances that led to the decision. Whether the employee acted independently or in response to internal discussions remains unknown.
xAI has not issued a formal company statement beyond Babuschkin's acknowledgment on X. The company's official position on the incident and any policy changes resulting from it have not been communicated through official channels.
What to Watch Next
Observers should monitor xAI's published system prompts for any changes related to content filtering or source restrictions. The company's commitment to prompt transparency provides a mechanism for tracking policy evolution.
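A minimal sketch of how observers might track such changes, assuming archived copies of the prompts are saved as plain text files (the file names below are hypothetical):

```python
import difflib

# Compare two archived versions of a published system prompt.
# File names are hypothetical; xAI's actual distribution format may differ.
with open("grok_prompt_2025-02-22.txt") as old, \
     open("grok_prompt_2025-02-23.txt") as new:
    old_lines = old.readlines()
    new_lines = new.readlines()

# Emit a unified diff showing any added or removed instructions.
for line in difflib.unified_diff(
        old_lines, new_lines,
        fromfile="previous prompt", tofile="current prompt"):
    print(line, end="")
```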
User reports about Grok's behavior when asked about controversial topics involving Musk, Trump, or X platform policies may indicate whether similar filtering occurs in the future. Community documentation of AI responses can serve as an informal audit mechanism.
Statements from xAI leadership, including Elon Musk, about the incident and the company's content moderation philosophy may provide insight into how xAI will handle similar situations going forward.
Regulatory and legislative discussions about AI transparency and content moderation may reference this incident as an example of the challenges in governing AI system behavior. Congressional hearings and regulatory proceedings related to AI could incorporate the Grok case into broader policy debates.
Sources
- TechCrunch - "Grok 3 appears to have briefly censored unflattering mentions of Trump and Musk" - February 23, 2025 - https://techcrunch.com/2025/02/23/grok-3-appears-to-have-briefly-censored-unflattering-mentions-of-trump-and-musk/
- The Verge - "Grok blocked results saying Musk and Trump 'spread misinformation'" - February 23, 2025 - https://www.theverge.com/2025/2/23/24082000/grok-blocked-results-musk-trump-spread-misinformation
- Igor Babuschkin (@ibab) - Statement on X regarding Grok 3 content filtering - February 23, 2025 - https://x.com/ibab

