
OpenAI’s Next Flagship AI Model Reportedly Struggling to Outperform Older Models in Certain Tasks

OpenAI is rumoured to be working on the next generation of its flagship large language model (LLM); however, it might have hit a bottleneck. As per a report, the San Francisco-based AI firm is struggling to considerably upgrade the capabilities of its next AI model, internally codenamed Orion. The model is said to outperform older models on language-based tasks but is underwhelming in certain other areas, such as coding. Notably, the company is also said to be struggling to accumulate enough training data to properly train its AI models.

OpenAI’s Orion AI Model Reportedly Fails to Show Significant Improvements

The Information reported that the AI firm's next major LLM, Orion, is not performing up to expectations on coding-related tasks. Citing unnamed employees, the report claimed that the AI model has shown a considerable upgrade on language-based tasks, but its performance on certain other tasks, such as coding, is underwhelming.

This is considered a major issue because Orion is reportedly more expensive to run in OpenAI's data centres than older models such as GPT-4 and GPT-4o. The upcoming LLM's cost-to-performance ratio could make it harder for the company to pitch the model to enterprises and subscribers.

Additionally, the report claimed that the overall quality jump from GPT-4 to Orion is smaller than the jump from GPT-3 to GPT-4. This is a worrying development; however, a similar trend has been noticed in recently released AI models from competitors such as Anthropic and Mistral.

The benchmark scores of Claude 3.5 Sonnet, for instance, show that the quality improvement with each new foundation model is becoming more iterative. However, competitors have largely avoided scrutiny by shifting their focus to developing new capabilities such as agentic AI.

The publication also highlighted that, to tackle this challenge, the industry is opting to improve AI models after their initial training is complete. This could be done by fine-tuning the model's output or by adding additional filters. However, this is a workaround that does not offset the underlying limitation, which stems from either the model architecture or the lack of sufficient training data.
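For illustration, here is a minimal sketch of what such post-generation output filtering could look like in practice. Everything in it, from the filter patterns to the function names and the reject-and-resample loop, is a hypothetical assumption made for the sake of the example, not a detail from the report.

```python
import re

# Hypothetical quality filters applied after generation; real systems
# would use far more sophisticated checks (classifiers, verifiers, etc.).
BANNED = [re.compile(r"\bTODO\b"), re.compile(r"\bFIXME\b")]

def passes_filters(text: str) -> bool:
    """An output passes only if no filter pattern matches it."""
    return not any(pattern.search(text) for pattern in BANNED)

def generate_filtered(candidates: list[str]) -> str | None:
    """Return the first model sample that survives the filters, else None."""
    for text in candidates:
        if passes_filters(text):
            return text
    return None  # caller would fall back, e.g. resample or use an older model

# Simulated model samples: the first is rejected, the second passes.
samples = [
    "def add(a, b):\n    return a + b  # TODO: handle overflow",
    "def add(a, b):\n    return a + b",
]
print(generate_filtered(samples))
```

The key design choice in this kind of approach is to reject and resample rather than retrain, which is why the report describes it as a workaround: the underlying model itself is left unchanged.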

While the former is more of a technological and research challenge, the latter is largely down to the limited availability of free and licenced data. To solve this, OpenAI has reportedly created a foundations team tasked with finding a way to deal with the lack of training data. However, it remains unclear whether this team will be able to procure enough data in time to further train and improve Orion's capabilities.


