Some developers say they have so far had largely positive experiences with GPT-5. Jenny Wang, an engineer, investor, and creator of the personal styling agent Alta, told Wired that the model seems to be better at completing complex coding tasks in one shot than other models. She compared it to OpenAI's o3 and 4o, which she frequently uses for code generation and simple fixes "like formatting, or if I want to create an API endpoint similar to what I already have," Wang says.
In her tests of GPT-5, Wang says she asked the model to generate code for a press page for her company's website, including specific design elements that would match the rest of the site's aesthetic. GPT-5 finished the task in a single shot, whereas in the past Wang would have had to revise her prompts along the way. There was one significant error, however: "It hallucinated the URL," Wang explains.
Another developer, who spoke on the condition of anonymity because their employer has not authorized them to speak with the press, says that GPT-5 excels at solving deep technical problems.
The developer's current hobby project is a programmatic network analysis tool, which would require isolating the code for security purposes. "I basically presented my project and some paths I was considering, and GPT-5 took it all in and made some recommendations, along with a realistic timeline," the developer explains. "I'm impressed."
A handful of OpenAI's enterprise partners and customers, including Cursor, Windsurf, and Notion, have publicly vouched for GPT-5's coding and reasoning abilities. (OpenAI included many of these remarks in its own blog post announcing the new model.) Notion also shared on X that the model is "fast, thorough, and handles complex work 15 percent better than the other models we've tested."
But within days of GPT-5's release, some developers were weighing in online with complaints. Many said that GPT-5's coding abilities seemed behind the curve for what was supposed to be an ultra-capable, cutting-edge model from the buzziest AI company in the world.
"OpenAI's GPT-5 is very good, but it feels like something that would have been released a year ago," says Kieran Klassen, a developer who has been building an AI assistant for email inboxes. "Its coding capabilities remind me of Sonnet 3.5," he adds, referring to an Anthropic model launched in June 2024.
Amir Salihefendić, founder of the startup Doist, said in a social media post that he had been using GPT-5 in Cursor and found it "pretty disappointing" and "especially bad at coding." He said the GPT-5 release felt like a "Llama 4" moment, referring to Meta's AI model, which had also disappointed some people in the AI community.
On X, the developer McKay Wrigley wrote that GPT-5 is a "phenomenal everyday chat model," but when it comes to coding, "I'll still be using Claude Code + Opus."
Other developers describe GPT-5 as "exhaustive": sometimes helpful, but often irritating in its long-windedness. Wang, who was pleased with the front-end coding project she assigned to GPT-5, says she noticed that the model was "more redundant. It clearly could have found a cleaner or shorter solution." (Kapoor stresses that GPT-5's verbosity can be adjusted, so users can ask it to be less chatty, or even to do less reasoning, in exchange for faster performance or a lower price.)
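For readers who want to try that trade-off themselves, here is a minimal sketch of how it might look with OpenAI's Python SDK. The model name and the reasoning-effort and verbosity settings reflect OpenAI's published GPT-5 API materials rather than anything described in this article, so treat the exact parameter names as assumptions.

```python
# A minimal sketch, assuming OpenAI's Python SDK and its Responses API,
# which exposes controls for reasoning effort and output verbosity.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="gpt-5",
    # Lower reasoning effort trades depth for speed and cost.
    reasoning={"effort": "minimal"},
    # Lower verbosity asks the model for shorter, less chatty answers.
    text={"verbosity": "low"},
    input="Explain dependency injection in two sentences.",
)

print(response.output_text)
```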