OpenAI’s latest reasoning-focused models with visible chain-of-thought (CoT) are called o3 and o4-mini. The San Francisco-based AI company said these models have visual reasoning capability, meaning they can analyze and “think” about an image to respond to more complex user queries. The models succeed o1 and o3-mini and are currently available to ChatGPT’s paid subscribers. Notably, the company also released the GPT-4.1 series of AI models earlier this week.
The announcement of the latest large language models (LLMs) was made via OpenAI’s official handle on X (formerly Twitter). The AI company called them its “smartest and most capable models” and said they can now reason visually.
In essence, visual reasoning means these AI models are better able to analyze images and extract implicit and contextual information from them. According to OpenAI’s website, these are the company’s first models capable of combining and using all of ChatGPT’s tools in an agentic manner, including image analysis, file interpretation, web search, Python, and image generation.
The reasoning models, according to OpenAI, can now agentically use and combine all of ChatGPT’s tools, including web search, Python-based file and data analysis, deep reasoning about visual inputs, and even image generation. Importantly, they are trained to reason about when and how to use these tools, so they can produce thoughtful, detailed answers in the right output formats, usually in under a minute, for more complex problems. OpenAI described o3 and o4-mini as the latest in its o-series of models trained to think for longer before responding, and the smartest models it has released to date, representing a step change in ChatGPT’s capabilities for everyone from curious users to advanced researchers. As a result, they are better equipped to handle complex queries, a step toward ChatGPT becoming more agentic and capable of carrying out tasks on the user’s behalf. The company said the combination of cutting-edge reasoning with full tool access sets a new threshold for intelligence and utility, translating into noticeably better performance on real-world tasks and academic benchmarks.
This means the o3 and o4-mini models can search for an image online; manipulate it by flipping, cropping, zooming, and enhancing it; and even execute Python code to retrieve data from it. According to OpenAI, this lets the models extract information from images that are far from ideal.
These models can now read handwriting in an upside-down notebook, decipher a distant sign with barely legible lettering, pick out a specific question from a long list, work out a timetable from a photo of a bus, solve puzzles, and more.
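For readers who want to try this kind of visual query outside ChatGPT, below is a minimal sketch of what it might look like against OpenAI’s Chat Completions API using the official Python SDK. The image URL and prompt are illustrative placeholders, and model availability depends on your API access.

```python
# Minimal sketch: asking o4-mini to reason about an image via the
# OpenAI Python SDK's Chat Completions API. The image URL and prompt
# are illustrative placeholders; substitute your own inputs.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o4-mini",
    messages=[
        {
            "role": "user",
            "content": [
                # A text part and an image part in the same user message
                {"type": "text", "text": "Based on this photo of the timetable, when does the next bus to downtown leave?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/bus-timetable.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```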

In terms of performance, OpenAI claimed that the o3 and o4-mini models beat the GPT-4o and o1 models on the CharXiv, MathVista, MMMU, and VLMs Are Blind benchmarks. The company did not disclose performance comparisons with external AI models.
OpenAI also pointed out a number of the models’ drawbacks. The models can perform unnecessary image-manipulation steps and tool calls, which can result in overly long chains of thought. They are also prone to perception problems: o3 and o4-mini may give inaccurate answers by misinterpreting visual cues. The company further noted that the models may have reliability issues.
ChatGPT Plus, Pro, and Team users can now access both o3 and o4-mini, which replace the o1, o3-mini, and o3-mini-high models in the model selector. Enterprise and Edu users will gain access next week, and developers can reach the models through the Chat Completions and Responses application programming interfaces (APIs).
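As a rough illustration of the API access mentioned above, the sketch below calls o4-mini through the newer Responses API with a built-in web search tool enabled, so the model can decide on its own whether to look something up. The tool name and its availability are assumptions that may vary by account; treat this as an illustration, not a definitive integration.

```python
# Minimal sketch: calling o4-mini via the Responses API with the
# built-in web search tool enabled. Tool availability may vary by
# account; the prompt is an illustrative placeholder.
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="o4-mini",
    tools=[{"type": "web_search_preview"}],  # assumed built-in tool type
    input="Summarize today's top AI research news in two sentences.",
)

# output_text is the SDK's convenience accessor for the model's reply
print(response.output_text)
```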
OpenAI said this release reflects the direction its models are heading in: combining the specialized reasoning capabilities of the o-series with more of the natural conversational abilities and tool use of the GPT-series. By combining these strengths, the company said, its future models will support advanced problem-solving and proactive tool use in addition to smooth, natural conversations.
More details about the introduction of o3 and o4-mini can be found on OpenAI’s blog.