SAN FRANCISCO – OpenAI on Dec 20 unveiled a new artificial intelligence system, OpenAI o3, that is designed to “reason” through problems involving maths, science and computer programming.
The company said the system, which it is currently sharing only with safety and security testers, outperformed the industry’s leading AI technologies on standardised benchmark tests that rate skills in maths, science, coding and logic.
The new system is the successor to o1, the reasoning system the company introduced in 2024. OpenAI said o3 was more than 20 per cent more accurate than o1 on a series of common programming tasks, and that it even outperformed the company's chief scientist, Jakub Pachocki, on a competitive programming test.
OpenAI said it plans to roll the technology out to individuals and businesses in early 2025.
“This model is incredible at programming,” said OpenAI CEO Sam Altman during an online presentation to reveal the new system. He added that at least one OpenAI programmer could still beat the system on this test.
The new technology is part of a wider effort to build AI systems that can reason through complex tasks. This week, Google unveiled similar technology, called Gemini 2.0 Flash Thinking Experimental, and shared it with a small number of testers.
These two companies and others aim to build systems that can carefully and logically solve a problem through a series of steps, each one building on the last. These technologies could be useful to computer programmers who use AI systems to write code or to students seeking help from automated tutors in areas such as maths and science.
With the debut of the ChatGPT chatbot in late 2022, OpenAI showed that machines could handle requests more like people, answering questions, writing term papers and generating computer code. But the responses were sometimes flawed.
ChatGPT learned its skills by analysing enormous amounts of text culled from across the internet, including news articles, books, computer programs and chat logs. By pinpointing patterns, it learned to generate text on its own.
Because the internet is filled with untruthful information, the technology learned to repeat the same untruths. Sometimes, it made things up – a phenomenon that scientists called “hallucination”.
OpenAI built its new system using what is called "reinforcement learning". Through this process, a system can learn behaviour through extensive trial and error. By working through various maths problems, for instance, it can learn which techniques lead to the right answer and which do not. If it repeats this process with a very large number of problems, it can identify patterns.
Although systems such as o3 are designed to reason, they are based on the same core technology as the original ChatGPT. That means they may still get things wrong or hallucinate.
The system is designed to “think” through problems. It tries to break the problem down into pieces and look for ways to solve it, which can require much larger amounts of computing power than is needed for ordinary chatbots. That can also be expensive.
In December, OpenAI began selling OpenAI o1 to individuals and businesses. One service, aimed at professionals, was priced at $200 a month.
The New York Times sued OpenAI and Microsoft in December, alleging copyright infringement of news content related to AI systems. The companies have denied the claims. NY TIMES