OpenAI and Anthropic cross-test models for hallucination and security issues
Jinse Finance reported that OpenAI and Anthropic recently evaluated each other's models to identify issues that may have been missed during their own internal testing. In blog posts published on Wednesday, both companies said that this summer they ran safety tests on each other's publicly available AI models, examining whether the models exhibited hallucination tendencies and so-called "misalignment," meaning the models did not behave as their developers intended. The evaluations were completed before OpenAI launched GPT-5 and before Anthropic released Opus 4.1 in early August. Anthropic was founded by former OpenAI employees.