It was a busy week in AI, with major companies shipping new tools, models, and research.
Here is an overview of what happened.
OpenAI's image generator broke the internet
On Tuesday, OpenAI rolled out native image generation in ChatGPT – and the internet immediately jumped on it.
The new tool, powered by the GPT-4o model, lets users generate images directly in the chatbot without routing through DALL-E.
It became an instant hit, with users transforming real photos into soft-focus anime-style portraits, often imitating the look of Studio Ghibli films.
By Wednesday evening, users noticed that some prompts referencing Ghibli and other artists' styles were being blocked. OpenAI later confirmed that it had added a refusal that "triggers when a user attempts to generate an image in the style of a living artist."
Demand grew so strong that OpenAI CEO Sam Altman said temporary rate limits would be introduced while his team worked to make image generation more efficient.
"It's super fun seeing people love images in ChatGPT. But our GPUs are melting," Altman wrote. "ChatGPT's free tier will soon get 3 generations per day."
The feature wasn't without problems: one user pointed out that the model struggled to generate "sexy women." Altman said on X that this was "a bug" that would be fixed.
Things also took a darker turn during the week.
Google dropped its most advanced model to date
While OpenAI dominated the headlines, Google unveiled Gemini 2.5 on Tuesday – a new family of AI reasoning models designed to "pause" and think before responding.
The first release, Gemini 2.5 Pro Experimental, is a multimodal model built for logic, STEM tasks, coding, and agentic applications. It can process text, audio, images, video, and code.
The model is available to subscribers of the $20-per-month Gemini Advanced plan.
Gemini 2.5 Pro is now *easily* the best model for code.
– It's extremely powerful
– The 1M token context is legit
– Doesn't just agree with you 24/7
– Shows flashes of real insight / brilliance
– One-shots entire tickets
Google delivered a real winner here.
– McKay Wrigley (@mckaywrigley) March 27, 2025
Google says all new Gemini models will include reasoning by default.
Anthropic's report on how people use AI at work
On Thursday, Anthropic published the second report from its Economic Index – a project tracking AI's impact on jobs and the economy.
The report analyzes 1 million anonymized conversations with Anthropic's Claude 3.7 Sonnet model and maps them to more than 17,000 job tasks in the US Department of Labor's O*NET database.
It offers a detailed overview of how people use AI at work.
A key takeaway was that augmentation still edged out automation, accounting for 57% of usage. In other words, most users are not offloading their work to AI, but working alongside it.
The data also suggested that how users interact with AI differs across professions and tasks. Tasks associated with copywriters and editors showed the highest rates of task iteration – where human and model write together.
In contrast, tasks associated with translators and interpreters showed the heaviest reliance on directive use, where the model completes the task with minimal human involvement.