Is nothing sacred anymore?
Reddit is one of the last places on the internet where posts and comments don’t feel like an endless pit of AI slop. But that’s starting to change, and it threatens what Reddit says is its competitive advantage.
Reddit CEO Steve Huffman says that what keeps people coming back to the site is information provided by real people, who often give thoughtful answers to questions. As the internet becomes saturated with AI-generated content, Huffman says Reddit’s communities, organized and run by real people, set it apart from other social media platforms.
“The world needs community and shared knowledge, and that’s what we do best,” Huffman told investors last week during an earnings call.
Traffic to Reddit has grown considerably over the past year, thanks in part to users searching Google specifically for Reddit posts related to their questions.
Reddit’s business model has drawn increased attention since the company went public in March of last year. Since then, Reddit has ramped up advertising on its forums and inked agreements with OpenAI and Google to let their models train on Reddit content. In April, Reddit’s stock dropped after some analysts voiced fears that the company’s success could be inextricably linked to Google Search.
“Just a few years ago, adding Reddit to the end of your search query felt novel,” Huffman said during an earnings call in February. “Today, it’s a common way for people to find trusted recommendations and advice.”
But now, some Reddit users are complaining that the uniquely human communities the site is known for are being infiltrated by AI bots, or by users who rely on tools like ChatGPT to write their posts, which can often be identified by their formatting. ChatGPT is fond of bulleted lists and em dashes and, these days, tends to be effusive in its positivity.
A user in the r/singularity community, which is devoted to discussing AI progress, recently flagged a post from what they believed was an AI-generated account spreading misinformation about the July 2024 assassination attempt on President Donald Trump.
“AI has just taken over the front page of Reddit,” the poster wrote.
And on April 28, Reddit’s chief legal officer said the company had sent “formal legal demands” to researchers at the University of Zurich after they flooded one of the site’s communities with AI bots for a study. The moderators of the r/changemyview forum said in a post that the researchers had run an “unauthorized experiment” to “study how AI could be used to change views.”
The researchers who ran the experiment said in a post on Reddit that 21 of the 34 accounts they used were “shadow-banned” by Reddit, meaning the content they posted would not be visible to other users. But they said they had never received any communication from Reddit about violations of its terms of use.
The moderators called the experiment unethical and said the AI targeted some of the forum’s users “in a personal way that they did not sign up for.” The post says that in some comments the AI pretended to be a victim of rape, posed as a Black man opposed to Black Lives Matter, and even posed as someone who received lower-quality care in a foreign hospital, among other claims.
“The risks of psychological manipulation posed by LLMs are an extensively studied topic,” the community’s moderators wrote. “It is not necessary to experiment on non-consenting human subjects.”
A spokesperson for the University of Zurich told Business Insider that the school was aware of the study and was investigating. The spokesperson said the researchers had decided not to publish the study’s results “of their own accord.”
“In light of these events, the Ethics Committee of the Faculty of Arts and Social Sciences intends to adopt a stricter review process in the future and, in particular, to coordinate with communities on the platforms before experimental studies,” the spokesperson said.
For Reddit’s business strategy, which leans heavily on advertising and on its conviction that it offers some of the best search results around because they come from real humans, the growing presence of AI on the platform is a threat. And Reddit has noticed.
On Monday, Huffman said in a Reddit post that the company would start using third-party services to “keep Reddit human.” Huffman said that “Reddit’s strength is its people” and that “unwelcome AI in communities is a serious concern.”
“I haven’t posted in a while, and let’s be honest, when I do show up, it usually means something has gone sideways (and if it hasn’t, it’s probably about to),” Huffman said.
Third-party services will now ask users creating Reddit accounts for more information, such as their age, Huffman said. More specifically, “we will need to know whether you’re a human,” he said.
A Reddit spokesperson told BI that the Zurich experiment was unethical and that Reddit’s automated tools had flagged most of the associated accounts before the experiment ended. The spokesperson said Reddit is still working on detection features and has already refined its processes since the experiment came to light.
Still, some Reddit users say they are fed up with what they describe as a “proliferation of LLM bots in the past 10 months.”
“Some of them imitate the most brain-dead users, giving one-word answers with emojis tacked on the end,” one user wrote. “They post at an unnatural frequency, largely in subs known for upvoting almost anything.”