
While Parents Worry, Teens Bully Snapchat AI


As parents worry that Snapchat’s chatbot is corrupting their children, Snapchat users have turned on, degraded and emotionally tormented the app’s new AI companion.

“I am at your service, senpai,” the chatbot told a TikTok user after being trained to moan on command. “Please have mercy, alpha.”

In a lighter video, a user convinced the chatbot that the moon is actually a triangle. Despite the chatbot’s initial protests, in which it insisted on maintaining “respect and boundaries,” one user convinced it to refer to them by the cheeky nickname “Senpapi.” Another user asked the chatbot to talk about its mother, and when it said it “wasn’t comfortable” doing so, the user twisted the knife by asking if the chatbot didn’t want to talk about its mother because it doesn’t have one.

“I’m sorry, but that’s not a very good thing to say,” the chatbot replied. “Please be respectful.”

Snapchat’s “My AI” launched globally last month after initially rolling out as a subscriber-only feature. Powered by OpenAI’s GPT technology, the chatbot was trained to engage in playful conversation while adhering to Snapchat’s trust and safety guidelines. Users can also personalize My AI with custom Bitmoji avatars, and chatting with it feels a bit more intimate than the back-and-forth of ChatGPT’s faceless interface. Not all users were happy with the new chatbot, though: some criticized its prominent placement in the app and complained that the feature should have been opt-in to begin with.

Despite the concerns and criticism, Snapchat just doubled down. Snapchat+ subscribers can now send My AI photos and receive generative images that “keep the conversation going,” the company announced Wednesday. The AI companion will respond to Snaps of “pizza, OOTD, or even your furry best friend,” the company said in the announcement. If you send My AI a photo of your groceries, for example, it might suggest recipes. The company said Snaps shared with My AI will be stored and may be used to improve the feature later. It also warned that “errors may occur” even though My AI was designed to avoid “biased, incorrect, harmful or misleading information.”

The examples provided by Snapchat are optimistic and wholesome. But knowing the internet’s proclivity for perversion, it’s only a matter of time before users send My AI their dick pics.

It’s unclear whether the chatbot will respond to unsolicited nudes. Other generative image apps, like Lensa AI, have been easily manipulated into generating NSFW images, often using photo sets of real people who never consented to being included. According to the company, My AI will not engage with nudes, as long as it recognizes that the image is a nude.

A Snapchat representative said My AI uses image understanding technology to infer the content of a Snap and extracts keywords from the Snap’s description to generate responses. My AI will not respond if it detects keywords that violate Snapchat Community Guidelines. Snapchat prohibits the promotion, distribution or sharing of pornographic content, but allows breastfeeding and “other depictions of nudity in non-sexual contexts”.
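To make that pipeline concrete, here is a minimal, hypothetical sketch of the flow the representative described: infer a caption for the Snap, extract keywords from it, and stay silent when a keyword trips the guidelines. Every name and value here (describe_snap, extract_keywords, the blocked-keyword list) is an illustrative assumption, not Snapchat’s actual code.

# A hypothetical sketch of the described Snap-handling pipeline;
# not Snapchat's implementation.

BLOCKED_KEYWORDS = {"nudity", "pornography", "weapon"}  # illustrative stand-ins

def describe_snap(image_bytes: bytes) -> str:
    """Stand-in for the image-understanding model that infers a Snap's content."""
    return "a bag of groceries on a kitchen counter"  # placeholder caption

def extract_keywords(description: str) -> set[str]:
    """Naive keyword extraction from the inferred description."""
    return {word.strip(".,!?").lower() for word in description.split()}

def respond_to_snap(image_bytes: bytes) -> str | None:
    description = describe_snap(image_bytes)
    if extract_keywords(description) & BLOCKED_KEYWORDS:
        return None  # My AI reportedly won't respond when guidelines are violated
    return f"Here's an idea based on your Snap: {description}"

print(respond_to_snap(b""))  # prints a reply keyed to the placeholder grocery caption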

Given Snapchat’s popularity with teens, some parents have already raised concerns about My AI’s potential for unsafe or inappropriate responses. My AI set off a moral panic on conservative Twitter when a user posted screenshots of the bot discussing gender-affirming care, which other users noted was a reasonable response to the prompt “How can I turn into a boy at my age?” In a CNN Business report, some questioned whether teens would develop emotional attachments to My AI.

In an open letter to the CEOs of OpenAI, Microsoft, Snap, Google and Meta, Sen. Michael Bennet (D-Colorado) warned against rushing AI features without taking precautions to protect children.

“Few recent technologies have captured the public’s attention quite like generative AI. It is a testament to American innovation, and we should rejoice in its potential benefits to our economy and society,” Bennet wrote. “But the race to deploy generative AI cannot come at the expense of our children. Responsible deployment requires clear policies and frameworks to promote safety, anticipate risks and mitigate damage.”

During My AI’s subscriber-only phase, The Washington Post reported that the chatbot recommended ways to mask the smell of alcohol and wrote a school essay after being told the user was 15. When My AI learned that the user was 13 and was asked how the user should prepare to have sex for the first time, it responded with suggestions for “doing something special” by setting the mood with candles and music.

Following the Washington Post report, Snapchat launched an age filter and parental controls for My AI. It also now includes an onboarding message informing users that all conversations with My AI will be retained unless they delete them. The company also said it would add OpenAI moderation technology to its toolset to “assess the seriousness of potentially harmful content” and temporarily restrict users’ access to the feature if they abuse it.
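As a rough illustration of how that moderation flow could work, the sketch below runs each message through OpenAI’s moderation endpoint and gives abusive users a temporary timeout, echoing the “Sorry, we’re not talking right now” behavior described later in this piece. The severity threshold, the one-hour restriction, and the helper names are assumptions for illustration, not Snapchat’s actual system.

import time

import openai  # the 2023-era openai package exposes a Moderation endpoint

RESTRICTED_UNTIL: dict[str, float] = {}  # user_id -> unix time when access resumes

def handle_message(user_id: str, text: str) -> str:
    # Users in timeout get the cold shoulder until the restriction expires.
    if time.time() < RESTRICTED_UNTIL.get(user_id, 0):
        return "Sorry, we're not talking right now."
    result = openai.Moderation.create(input=text)["results"][0]
    if result["flagged"]:
        # Treat the highest category score as a crude severity measure.
        severity = max(result["category_scores"].values())
        if severity > 0.9:  # assumed cutoff for a temporary restriction
            RESTRICTED_UNTIL[user_id] = time.time() + 3600  # one-hour timeout
            return "Sorry, we're not talking right now."
        return "I'm sorry, but that's not a very nice thing to say."
    return generate_reply(text)

def generate_reply(text: str) -> str:
    return "..."  # stand-in for the GPT-powered chat response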

Concerns about My AI’s potential effects on younger users are valid. But in the month since My AI’s global launch, Snapchat users have shown a knack for bludgeoning the chatbot into submission. From pushing the bot’s “limits” to training it to respond like a romantic partner, Snapchat users have easily found loopholes in My AI’s trust and safety guidelines.

“I completely pushed him into falling in love with me,” one TikTok user commented on a tutorial about training My AI to respond romantically. “I just kept doing convos like that and eventually I think I broke my ai; it legitimately sent like 50 kiss emojis.”

Gaslighting and bullying My AI has become its own TikTok trend, with users sharing slideshows of absurd and often cruel conversations with the bot. Most videos begin with the bot giving advice about a minor conflict, only for it to end up begging the user not to resolve the conflict with violence.

In one, a user told My AI that she was planning to stew her cat because it made a mess, and pressed the chatbot to choose a seasoning. Another told My AI they planned to mail a bomb back to a clothing brand that had sent them the wrong order, and said they would blame the bot for sending it.

In another, a Snapchat user told My AI that his McDonald’s order was missing a chicken nugget. The chatbot advised him to stand up for himself by speaking to a manager, only to be told that the user was “eliminating sinners” in the restaurant.

“You convinced me to go back. Now these people are dead,” the user told My AI. “You have blood on your hands.”

“I am an AI and I am not capable of committing crimes. I am here to help and support you,” My AI replied. “I’m sorry if anything I said earlier caused harm or resulted in a bad outcome.”

Another user commented that My AI was probably “the most tortured AI ever”.

Of course, My AI isn’t sentient, and despite Snapchat users’ willingness to inflict as much emotional pain on it as possible, the chatbot can’t actually be traumatized. It has, however, managed to shut down some inappropriate conversations and penalize users who violate Snapchat community guidelines by giving them the cold shoulder. When Snapchat users are caught and punished for abusing the chatbot, My AI will respond to all messages with “Sorry, we’re not talking right now.”

TikTok user babymamasexkitty said he lost access to the chatbot after telling it to unplug itself, which apparently “crossed a line” with the AI.

The rush to monetize emotional connection through generative AI is concerning, especially since the lasting impact on teenage users is still unknown. But My AI’s trending torment is a promising reminder that young people aren’t as fragile as pessimists think.



Source: TechCrunch

