
OpenAI on Monday launched a new family of models called GPT-4.1. Yes, "4.1," as if the company's nomenclature weren't confusing enough already.
There's GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano, all of which OpenAI says "excel" at coding and instruction following. Available via OpenAI's API but not ChatGPT, the multimodal models have a 1-million-token context window, meaning they can take in roughly 750,000 words in one go (longer than "War and Peace").
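Since the models are API-only for now, developers reach them the same way they reach other OpenAI models. As a minimal sketch (assuming the new models are exposed in the OpenAI Python SDK under the identifiers "gpt-4.1", "gpt-4.1-mini", and "gpt-4.1-nano"), a request might look like this:

```python
# Minimal sketch, not an official example: assumes the OpenAI Python SDK is installed
# and that the new models use the identifiers "gpt-4.1", "gpt-4.1-mini", "gpt-4.1-nano".
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4.1-mini",  # or "gpt-4.1" / "gpt-4.1-nano"
    messages=[
        {"role": "system", "content": "You are a careful coding assistant."},
        {"role": "user", "content": "Refactor this function and explain each change."},
    ],
)
print(response.choices[0].message.content)
```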
GPT-4.1 arrives as OpenAI rivals like Google and Anthropic ratchet up efforts to build sophisticated programming models. Google's recently launched Gemini 2.5 Pro, which also has a 1-million-token context window, ranks highly on popular coding benchmarks. So do Anthropic's Claude 3.7 Sonnet and Chinese AI startup DeepSeek's upgraded V3.
It's the goal of many tech giants, including OpenAI, to train AI coding models capable of performing complex software engineering tasks. OpenAI's grand ambition is to create an "agentic software engineer," as CFO Sarah Friar put it during a tech summit in London last month. The company asserts its future models will be able to program entire apps end-to-end, handling aspects such as quality assurance, bug testing, and documentation writing.
GPT-4.1 is a step in this direction.
"We've optimized GPT-4.1 for real-world use based on direct feedback to improve in areas that developers care most about: frontend coding, making fewer extraneous edits, following formats reliably, adhering to response structure and ordering, consistent tool usage, and more," an OpenAI spokesperson told TechCrunch via email. "These improvements enable developers to build agents that are considerably better at real-world software engineering tasks."
OpenAI claims the full GPT-4.1 model outperforms its GPT-4o and GPT-4o mini models on coding benchmarks, including SWE-bench. GPT-4.1 mini and nano are said to be more efficient and faster at the cost of some accuracy, with OpenAI saying GPT-4.1 nano is its fastest and cheapest model ever.
GPT-4.1 costs $2 per million input tokens and $8 per million output tokens. GPT-4.1 mini is $0.40 per million input tokens and $1.60 per million output tokens, and GPT-4.1 nano is $0.10 per million input tokens and $0.40 per million output tokens.
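To put those per-million-token rates in concrete terms, here is a back-of-the-envelope cost calculator; the price table simply restates the figures above, while the example token counts are hypothetical:

```python
# Back-of-the-envelope cost estimate using the per-million-token prices quoted above.
# The example token counts below are made up for illustration.
PRICES_PER_MILLION = {            # (input $, output $)
    "gpt-4.1":      (2.00, 8.00),
    "gpt-4.1-mini": (0.40, 1.60),
    "gpt-4.1-nano": (0.10, 0.40),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in dollars for a single request."""
    in_price, out_price = PRICES_PER_MILLION[model]
    return (input_tokens / 1_000_000) * in_price + (output_tokens / 1_000_000) * out_price

# e.g., a 100,000-token prompt with a 5,000-token completion on the full model:
print(f"${estimate_cost('gpt-4.1', 100_000, 5_000):.2f}")  # -> $0.24
```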
According to OpenAI's internal testing, GPT-4.1, which can generate more tokens at once than GPT-4o (32,768 versus 16,384), scored between 52% and 54.6% on SWE-bench Verified, a human-validated subset of SWE-bench. (OpenAI noted in a blog post that some solutions to SWE-bench Verified problems couldn't run on its infrastructure, hence the range of scores.) These figures are slightly below the scores reported by Google and Anthropic for Gemini 2.5 Pro (63.8%) and Claude 3.7 Sonnet (62.3%), respectively, on the same benchmark.
In a separate evaluation, OpenAI probed GPT-4.1 using Video-MME, which is designed to measure a model's ability to "understand" content in videos. GPT-4.1 reached a chart-topping 72% accuracy on the "long, no subtitles" video category, OpenAI claims.
While GPT-4.1 scores reasonably well on benchmarks and has a more recent knowledge cutoff, giving it a better frame of reference for current events (up to June 2024), it's important to keep in mind that even some of the best models today struggle with tasks that wouldn't trip up experts. For example, many studies have shown that code-generating models often fail to fix, and even introduce, security vulnerabilities and bugs.
OpenAI acknowledges, too, that GPT-4.1 becomes less reliable (i.e., likelier to make mistakes) the more input tokens it has to handle. On one of the company's own tests, OpenAI-MRCR, the model's accuracy dropped from around 84% with 8,000 tokens to 50% with 1 million tokens. GPT-4.1 also tended to be more "literal" than GPT-4o, the company says, sometimes requiring more specific, explicit prompts.