The new OpenAI model leaps forward in programming capabilities, but raises unprecedented cybersecurity risks

OpenAI believes it has finally pulled ahead in one of the most closely watched races in AI: AI-powered programming. Its latest model, GPT-5.3-Codex, posts significantly higher scores on coding benchmarks and reported results than previous generations of OpenAI and Anthropic models, signaling a long-awaited advantage in a category that could reshape how software is built.

But the company is rolling out the model with unusually strict controls and delaying full access for developers as it confronts a harder reality: the same capabilities that make GPT-5.3-Codex so effective at writing, testing, and reasoning about code also raise serious cybersecurity concerns. In winning the race to build the most powerful programming model, OpenAI has also created new risks it must manage at launch.

GPT-5.3-Codex is available to paid ChatGPT users, who can use the model for everyday software development tasks such as writing, debugging, and testing code through OpenAI’s Codex tools and the ChatGPT interface. But for now, the company isn’t opening up unrestricted access for high-risk cybersecurity uses, and it isn’t immediately enabling the full API access that would allow the model to be automated at scale. These more sensitive applications are gated behind additional safeguards, including a new trusted access program for vetted security professionals, reflecting OpenAI’s view that the model has crossed the threshold for new cybersecurity risks.

A company blog post accompanying the model’s release on Thursday said that while there was no “conclusive evidence” that the new model could fully automate cyberattacks, “we are taking a precautionary approach and deploying our most comprehensive cybersecurity suite to date. Our mitigations include safety training, automated monitoring, trusted access to advanced capabilities, and execution pipelines including threat intelligence.”

In a post on X, OpenAI CEO Sam Altman addressed the concerns, saying that GPT-5.3-Codex is “our first model to achieve a score of ‘High’ for cybersecurity in our preparedness framework,” an internal risk rating system that OpenAI uses for model releases. In other words, this is the first model that OpenAI believes is capable enough at programming and reasoning to meaningfully enable real-world cyber harm, especially if automated or used at scale.
