Meta now has an AI chatbot. Experts say to prepare for more AI-powered social media – The Mercury News
Jireh Deng | Los Angeles Times (TNS)
When you use Facebook Messenger these days, a new prompt greets you with this message: “Ask Meta AI anything.”
You may have opened the app to text a friend, but Meta’s new AI-powered chatbot tempts you with encyclopedic knowledge accessible in just a few clicks.
Meta, the parent company of Facebook, has also rolled out its chatbot on its WhatsApp and Instagram services. Now, billions of internet users can open one of these free social media platforms and use Meta AI as a dictionary, guide, advisor or illustrator, among the many other tasks it can perform, though not always reliably or infallibly.
“Our goal is to create the best AI in the world and make it accessible to everyone,” Meta CEO Mark Zuckerberg said in announcing the chatbot’s launch two weeks ago. “We believe Meta AI is now the smartest AI assistant you can use for free.”
As Meta’s initiatives suggest, generative AI is making its way into social media. TikTok has a team of engineers focused on developing large language models that can recognize and generate text, and it is hiring editors and journalists to annotate and improve the performance of those models. Instagram’s help page states: “Meta can use (users’) posts to train the AI model, helping to improve AIs.”
TikTok and Meta did not respond to a request for comment, but AI experts said social media users can expect to see more of this technology influencing their experience — for the better or perhaps for the worse.
Part of the reason social media apps are investing in AI is that they want to become more “sticky” for consumers, said Ethan Mollick, a professor at the University of Pennsylvania’s Wharton School who teaches entrepreneurship and innovation. Apps like Instagram try to keep users on their platforms as long as possible because captive attention generates advertising revenue, he said.
During Meta’s first-quarter earnings conference call, Zuckerberg said it would take some time for the company to reap the benefits of its investments in the chatbot and other uses of AI, but he has already seen the technology influence the user experience on its platforms.
“Right now, about 30% of posts on Facebook Feed are served by our AI recommendation system,” Zuckerberg said, referring to the behind-the-scenes technology that shapes what Facebook users see. “And for the first time, more than 50% of the content people see on Instagram is now recommended by AI.”
In the future, AI will do more than personalize user experiences, said Jaime Sevilla, who runs Epoch, a research institute that studies AI technology trends. In fall 2022, millions of users were fascinated by Lensa’s AI capabilities as it generated fanciful portraits from selfies. Expect to see more, Sevilla said.
“I think you’re going to end up seeing completely AI-generated people releasing AI-generated music and stuff,” he said. “We could live in a world where the role humans play in social media is only a small part of the whole.”
Mollick, author of the book “Co-Intelligence: Living and Working with AI,” said these chatbots already produce some of what people read online. “AI is driving more and more online communication,” he said. “(But) we don’t actually know how much of that writing is AI.”
Sevilla said generative AI is unlikely to supplant the digital public square created by social media. People seek authenticity in their interactions with friends and family online, he said, and social media companies need to maintain a balance between that and AI-generated content and targeted advertising.
Although AI can help consumers find more useful products in their daily lives, the technology’s allure also presents a dark side that can tip toward coercion, Sevilla said.
“The systems will be pretty good at persuasion,” he said. A recently published study by AI researchers at the École Polytechnique Fédérale de Lausanne found that GPT-4 was 81.7% more effective than a human at persuading someone in a debate. Although the study has not yet been peer-reviewed, Sevilla said the results were concerning.
“It is concerning that (AI) could significantly increase the ability of fraudsters to interact with many victims and perpetrate fraud at a much larger scale,” he added.
Sevilla said policymakers should be aware of the dangers of AI in spreading misinformation as the United States heads into another politically charged election season this fall. Other experts caution that the question is not whether AI could play a role in influencing democratic systems around the world, but rather how.
Bindu Reddy, CEO and co-founder of Abacus.AI, said the solution is a little more nuanced than banning AI from our social media platforms: bad actors were spreading hate and misinformation online long before AI entered the equation. For example, human rights advocates criticized Facebook in 2017 for failing to filter online hate speech that fueled the Rohingya genocide in Myanmar.
In Reddy’s experience, AI is effective at detecting things like bias and pornography on online platforms. She has used AI to moderate content since 2016, when she launched an anonymous social media app called Candid that relied on natural language processing to detect misinformation.
Regulators should ban people from using AI to create deepfakes of real people, Reddy said. But she criticizes laws such as the European Union’s draconian restrictions on AI development. She said it is dangerous for the United States to fall behind competing countries, such as China and Saudi Arabia, which are investing billions of dollars in the development of AI technology.
So far, the Biden administration has released a “Blueprint for an AI Bill of Rights” that offers suggestions on what safeguards the public should have, including protections for data privacy and against algorithmic discrimination. It is not enforceable, although it suggests what future legislation could look like.
Sevilla acknowledged that AI moderators can be trained to adopt a company’s biases, leading to censorship of certain viewpoints. But human moderators have also demonstrated political bias.
For example, in 2021, the Times reported on complaints that pro-Palestinian content was hard to find on Facebook and Instagram. And conservative critics accused Twitter of political bias in 2020 because it blocked links to a New York Post article about the contents of Hunter Biden’s laptop.
“We can actually study what kind of bias AI reflects,” Sevilla said.
Yet, he added, AI could become so effective at moderation that it could severely suppress free speech.
“What happens when everything on your feed is perfectly within company guidelines?” Sevilla said. “Is this the kind of social media you want to consume?”
©2024 Los Angeles Times. Visit latimes.com. Distributed by Tribune Content Agency, LLC.