Slack users horrified to discover messages used for AI training

After launching Slack AI in February, Slack appears to be digging in its heels, defending a vague policy that by default vacuums up customer data, including messages, content, and files, to train Slack’s global AI models.

According to Slack engineer Aaron Maurer, Slack explained in a blog post that the Salesforce-owned chat service does not train its large language models (LLMs) on customer data. But Slack’s policy may need to be updated “to more precisely explain how these privacy principles play with Slack AI,” Maurer wrote on Threads, in part because the policy “was originally written about the search/recommendation work we’ve been doing for years prior to Slack AI.”

Maurer was responding to a Threads post by engineer and writer Gergely Orosz, who called on companies to opt out of data sharing until the policy is clarified, not in a blog post, but in the policy language itself.

“An ML engineer at Slack says they don’t use messages to train LLM models,” Orosz wrote. “My response is that the current terms allow them to do so. I’ll believe it’s the policy when it’s in the policy. A blog post is not the privacy policy: every serious company knows that.”

The tension for users becomes clearer when comparing Slack’s privacy principles with the way the company touts Slack AI.

Slack’s privacy principles specifically state that “machine learning (ML) and artificial intelligence (AI) are useful tools that we use in limited ways to enhance our product mission. To develop AI/ML models, our systems analyze customer data (e.g. messages, content, and files) submitted to Slack as well as other information (including usage information) as defined in our privacy policy and in your customer agreement.”

Meanwhile, the Slack AI page says: “Work worry-free. Your data is your data. We don’t use it to train Slack AI.”

Because of this incongruity, users have asked Slack to update the privacy principles to clarify how data is used for Slack AI or any future AI updates. According to a Salesforce spokesperson, the company agreed that an update was necessary.

“Yesterday, some members of the Slack community requested more clarity regarding our privacy principles,” the Salesforce spokesperson told Ars. “We will update these principles today to better explain the relationship between customer data and generative AI in Slack.”

The spokesperson told Ars that the policy updates will clarify that Slack “does not develop LLMs or other generative models using customer data,” “does not use customer data to train third-party LLMs,” and “does not build or train these models in such a way that they could learn, remember, or be able to reproduce customer data.” The update will also clarify that “Slack AI uses commercially available LLMs where the models do not retain customer data,” ensuring that “customer data never leaves Slack’s trust boundary and the providers of the LLMs never have access to customer data.”

These changes, however, do not appear to address a major concern of users who have never explicitly consented to sharing chats and other Slack content for use in AI training.

Users refuse to share chats with Slack

This controversial policy is not new. Wired warned about it in April and TechCrunch reported that the policy had been in place since at least September 2023.

But widespread backlash began to intensify last night on Hacker News, where Slack users criticized the chat service for apparently failing to notify users of the policy change, instead quietly opting them in by default. To critics, it seemed the default opt-in benefited no one but Slack.

From there, the backlash spread to social media, where SlackHQ rushed to clarify Slack’s terms with explanations that didn’t seem to address all of the criticism.

“I’m sorry Slack, WHAT are you doing with users’ DMs, messages, files, etc?” Corey Quinn, chief cloud economist for a cost management company called Duckbill Group, posted on X. “I’m sure I’m not reading this correctly.”

SlackHQ responded to Quinn after the economist declared, “I hate this so much,” and confirmed that he had opted out of data sharing in his paid workspace.

“To clarify, Slack has platform-level machine-learning models for things like channel and emoji recommendations and search results,” SlackHQ posted. “And yes, customers can exclude their data from training these (non-generative) ML models. Slack AI, which is our generative AI experience built natively into Slack, is a separately purchased add-on that uses large language models (LLMs) but does not train those LLMs on customer data.”

News source: arstechnica.com