OpenAI, a leading company in the artificial intelligence sector, is reportedly experiencing delays in the development of its next-generation flagship model, GPT-5. Despite significant investment, the results so far have yet to justify the financial outlay.
According to internal sources, OpenAI has conducted at least two large-scale training runs aimed at improving the model's performance by ingesting vast amounts of data. The first run fell behind schedule, however, suggesting that training at greater scale is proving both time-consuming and costly.
While GPT-5 shows some improvement over its predecessor, the gains have not been large enough to justify the model's high operating costs.
In terms of data acquisition, OpenAI has taken a multifaceted approach. The company is drawing on a wide range of public and licensed data sources, while also hiring experts to create new training content by solving complex mathematical problems and writing code. Additionally, OpenAI is augmenting its datasets with synthetic data generated by another of its models, o1.
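OpenAI has not disclosed how its synthetic-data pipeline actually works, but the basic idea can be illustrated with a minimal sketch: use an existing model to produce worked solutions to seed problems, then collect the prompt-and-answer pairs as training examples. Everything below, including the seed problems, the prompt wording, and the output format, is an assumption for illustration only.

```python
# Hypothetical sketch of synthetic training-data generation with an
# existing model such as o1. This is NOT OpenAI's actual pipeline;
# the prompts, seed problems, and record format are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Assumed seed tasks of the kind the article describes (math and coding).
SEED_PROBLEMS = [
    "Prove that the sum of two odd integers is even.",
    "Write a Python function that merges two sorted lists in O(n) time.",
]

def generate_synthetic_example(problem: str) -> dict:
    """Ask the model for a worked solution, yielding a (prompt, completion) pair."""
    response = client.chat.completions.create(
        model="o1",  # the reasoning model named in the article
        messages=[{"role": "user", "content": f"Solve step by step:\n{problem}"}],
    )
    return {"prompt": problem, "completion": response.choices[0].message.content}

# Each record could later be filtered and folded into a training corpus.
dataset = [generate_synthetic_example(p) for p in SEED_PROBLEMS]
```

In practice, any such pipeline would presumably add quality filtering and deduplication before the generated examples were used for training, but those steps are likewise undisclosed.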
Notably, OpenAI's CEO, Sam Altman, has publicly acknowledged that as the complexity of AI models increases, the company faces unprecedented challenges in managing many projects in parallel, particularly in allocating computational resources efficiently among them. As a result, OpenAI has indicated that GPT-5 will not be released in the coming year.
Kevin Weil, Chief Product Officer at OpenAI, elaborated on the company's current strategic focus, emphasizing its commitment to improving the model's safety, simulation accuracy, and ability to scale compute before undertaking major upgrades to its video model, Sora, with the goal of ensuring that all of these meet top industry standards.