It was a busy week in AI, as major companies rolled out new tools, models and research.
Here is an overview of what happened.
On Tuesday, OpenAI rolled out native image generation in ChatGPT – and the internet immediately jumped on it.
The new tool, powered by the GPT-4o model, lets users generate images directly in the chatbot without routing through DALL-E.
It became an instant hit, with users transforming real photos into soft-focus, anime-style portraits that often imitate the look of Studio Ghibli films.
By Wednesday evening, users noticed that some prompts referencing Ghibli and other artists’ styles were being blocked. OpenAI later confirmed that it had added a “refusal which triggers when a user attempts to generate an image in the style of a living artist”.
Demand grew so heavy that OpenAI CEO Sam Altman said temporary rate limits would be introduced while his team worked to make image generation more efficient.
“It’s super fun seeing people love images in ChatGPT. But our GPUs are melting,” Altman wrote. “ChatGPT’s free tier will get 3 generations per day soon.”
The feature was not without problems, with one user pointing out that the model struggled to generate “sexy women”. Altman said on X that this was “a bug” that would be fixed.
While OpenAI dominated the headlines, Google unveiled Gemini 2.5 on Tuesday – a new family of AI reasoning models designed to “pause” and think before responding.
The first release, Gemini 2.5 Pro Experimental, is a multimodal model built for logic, STEM tasks, coding and agentic applications. It can process text, audio, images, video and code.
The model is available to subscribers of the $20-per-month Gemini Advanced plan.
Google says all new Gemini models will include reasoning by default.
On Thursday, Anthropic published the second report of its Economic Index – a project tracking the impact of AI on jobs and the economy.
The report analyzes 1 million anonymized conversations with Anthropic’s Claude 3.7 Sonnet model and maps them to more than 17,000 job tasks in the US Department of Labor’s O*NET database.
It offers a detailed picture of how people use AI at work.
One key takeaway was that “augmentation” still edged out “automation”, accounting for 57% of usage. In other words, most users are not handing work off to AI, but working with it.
The data also suggested that how users interact with AI differs across occupations and tasks. Tasks associated with copywriters and editors showed the highest rates of task iteration – where human and model write together.
By contrast, tasks associated with translators and interpreters showed the heaviest reliance on directive use, where the model completes the task with minimal human involvement.