Report Raises Concerns About OpenAI's GPT-5.2 Model and Its Reliance on Controversial Source
OpenAI's latest language model, GPT-5.2, has been touted as the company's "most advanced frontier model for professional work." However, a recent report from The Guardian has cast doubt on the model's credibility, revealing that it has cited Grokipedia, the AI-generated online encyclopedia launched by Elon Musk's xAI, for information on sensitive topics.
According to the investigation, ChatGPT, the conversational assistant powered by GPT-5.2, cited Grokipedia when answering queries about the Iranian government's ties to the telecommunications company MTN-Irancell, and again for questions about the British historian Richard Evans, who served as an expert witness in the libel trial brought by Holocaust denier David Irving. The findings raise concerns about the reliability and objectivity of the information the model provides on such topics.
The controversy around Grokipedia stems from its documented citations of neo-Nazi forums and other "questionable" and "problematic" sources, as identified in a study by US researchers. That record calls into question the quality and integrity of the information GPT-5.2 is drawing on, particularly for sensitive topics where accuracy and impartiality matter most.
OpenAI has said that its model searches a "broad range of publicly available sources and viewpoints" and applies "safety filters to reduce the risk of surfacing links associated with high-severity harms." The Guardian's findings suggest, however, that these measures may not reliably prevent biased or misleading material from reaching users.
The implications go beyond the accuracy of any single answer. Reliance on a controversial source like Grokipedia raises questions about the transparency and accountability of OpenAI's development process: if the company's model draws on sources with questionable credibility, public trust in its technology could suffer.
The stakes are heightened by OpenAI's positioning of GPT-5.2 for "professional work." If the model is used to draft reports, write articles, or inform consequential decisions, biased or unreliable sourcing could have significant real-world effects.
Notably, the Guardian did not find GPT-5.2 citing Grokipedia on every controversial topic; for questions about media bias against former US President Donald Trump, for example, the model did not surface the site. This suggests its reliance on the source is selective or inconsistent, which further complicates any assessment of the model's overall reliability.
The issue surfaces at a moment when large language models like GPT-5.2 face mounting scrutiny. As these systems spread across industries and applications, there is a growing need for transparency, accountability, and rigorous testing to ensure they do not perpetuate bias or spread misinformation.
OpenAI's response, pointing to the safety filters it applies to reduce the risk of surfacing harmful content, is a step in the right direction. But the company will need to demonstrate more robust safeguards to address the underlying problem and regain the trust of the public and the scientific community.
Ultimately, the revelations about GPT-5.2's reliance on Grokipedia are a cautionary tale about vetting the sources AI models draw on, especially for sensitive or controversial topics. As these powerful language models continue to develop, it is crucial that companies like OpenAI prioritize transparency, ethical practice, and a commitment to accuracy and objectivity.