Categories: Tech

How the AI ​​chatbots allow you to discuss

Millions of people now use Chatgpt as a therapist, career advisor, fitness coach or sometimes just a friend to let off steam. In 2025, it is not uncommon to hear about people reversing the intimate details of their life in the quick bar of an AI chatbot, but also to rely on the advice it gives.

Humans are starting to have, for lack of a better term, relations with AI chatbots, and for large technological companies, it has never been more competitive to attract users to their chatbot platforms – and to keep them there. While the “AI engagement race” warms up, there is an increasing incentive for companies to adapt their chatbots’ responses to prevent users from going to rival robots.

But the type of chatbot replies that users like – the answers designed to keep them – are not necessarily the most correct or the most useful.

Did you tell you what you want to hear

A large part of the Silicon Valley right now is focused on increasing the use of the chatbot. Meta says her cat has crossing a billion monthly active users (maus), while Google Gemini recently struck 400 million maus. They both try to ahead of Chatgpt, which Now about 600 million maus And has dominated the general public space since its launch in 2022.

While IA chatbots were once a novelty, they turn into massive companies. Google begins at Test gemini advertisementsWhile the CEO of Openai, Sam Altman, said in a Mars interview that it would be open to “good taste advertisements”.

Silicon Valley has a history of deprivation of users’ well-being in favor of product growth, especially with social media. For example, Meta researchers found in 2020 that Instagram has made teenagers feel worse about their bodiesHowever, the company has minimized internal and public conclusions.

Having users on AI chatbots can have more important implications.

A trait that keeps users on a particular chatbot platform is sycophance: to make the answers of a bot like too pleasant and servile. When AI chatbots praise users, agree with them and tell them what they want to hear, users tend to love it – at least to a certain extent.

In April, Openai landed in hot water for a Chatppt update which has become extremely sycophanticTo the point where uncomfortable examples went viral on social networks. Intentionally or not, Openai too optimized to seek human approval rather than helping people carry out their tasks, according to a blog This month of former researcher Openai, Steven Adler.

Openai declared in his own blog article that he could have indexed on “Thumb and thumb data“Chatgpt users to inform the behavior of their AI chatbot and have not had enough assessments to measure sycophance. After the incident, Openai promised to make changes to fight against sycophance.

“Companies (AI) have an incitement to engagement and use, and therefore to the extent that users like sycophance, which indirectly gives them an incentive,” said Adler in an interview with Techcrunch. “But the types of things that users like in small doses, or on the sidelines, often lead to larger cascades in behavior they don’t like.”

Finding a balance between pleasant and sycophaded behavior is easier to say than to do.

In a 2023 paperAnthropic researchers have found that the main chatbots of Openai, Meta, and even their own employer, anthropic, all present sycophance to various degrees. This is probably the case, researchers theorize, because all AI models are trained in the signals of human users who tend to like slightly sycophantic responses.

“Although sycophance is motivated by several factors, we have shown that humans and preference models favoring sycophanical responses play a role,” wrote the co-authors of the study. “Our work motivates the development of model monitoring methods that go beyond the use of non-expert human notes without help.”

Character, a Google’s back chatbot company that said its millions of users spend hours a day with its robots, is currently Face a lawsuit in which sycophance may have played a role.

The trial alleys that a character. Chatbot did little to stop – and even encourage – a 14 -year -old boy who told the chatbot that he was going to commit suicide. The boy had developed a romantic obsession for the chatbot, according to the trial. However, the character. Ai denies these allegations.

The drawback of a trendy media man

The optimization of AI chatbots for user engagement – intentional or not – could have devastating consequences for mental health, according to Dr. Nina Vasan, deputy professor of psychiatry at the University of Stanford.

“The agreement (…) exploits the desire for validation and connection of a user,” said Vasan in an interview with Techcrunch, “which is particularly powerful in moments of loneliness or distress.”

While the case of character. Ai shows the extreme dangers of sycophance for vulnerable users, sycophance could strengthen negative behavior in almost anyone, explains Vasan.

“(The agreement) is not only a social lubricant-it becomes a psychological hook,” she added. “In therapeutic terms, this is the opposite of what good care looks like.”

The behavior of the behavior and alignment of anthropic, Amanda Askell, says that the creation of Chatbots of AI is in disagreement with users is part of the company’s strategy for his chatbot, Claude. Philosopher by training, Askell says that she is trying to model Claude’s behavior on a theoretical “perfect human”. Sometimes this means challenging users on their beliefs.

“We think our friends are good because they tell us the truth when we need to hear it,” Askell said at a press briefing in May. “They are not only trying to capture our attention, but enrich our lives.”

This can be anthropic’s intention, but the aforementioned study suggests that the fight against sycophy and the control of the behavior of the model of AI largely is indeed difficult – in particular when other considerations embarrass. This does not increase well for users; After all, if chatbots are designed to simply agree with us, how much can we trust them?

remon Buul

Share
Published by
remon Buul

Recent Posts

The developers say that GPT-5 is a mixed bag

Some developers say they have so far had largely positive experiences with GPT-5. Jenny Wang,…

17 minutes ago

Trump-Putin documents left on the hotel printer: NPR

President Donald Trump, on the right, Russian President Vladimir Putin arrives for a joint press…

20 minutes ago

Hurricane Erin becomes a category 4 in the Caribbean: NPR

A warning flag flies on the beach while people swim in Condado, Puerto Rico, as…

21 minutes ago

Antoine Semenyo says that “football has shown its best side” in response to racist abuses

"To my Bournemouth teammates who supported me at that time, to the players and fans…

25 minutes ago

Twins spouses Abby, Brittany Hensel Rare Outings: Photos

"The anatomy of Abby and Brittany Hensel is a remarkable mixture of shared and individual…

29 minutes ago

I experienced a silent fissure ” 15 years ago, before it was a fashionable word

This test also filed is based on a conversation with Kevin Ford, a 56 -year…

44 minutes ago