
AI subject to consumer protection and anti-bias laws


FILE – Massachusetts Attorney General Andrea Campbell answers a question during an interview at the State Attorneys General Association meetings, Nov. 14, 2023, in Boston. AP Photo/Charles Krupa, file

BOSTON (AP) — Developers, providers and users of artificial intelligence must comply with existing state laws on consumer protection, anti-discrimination and data privacy, the attorney general of Massachusetts warned Tuesday.

In an opinion, Attorney General Andrea Campbell highlighted what she described as a widespread increase in the use of AI and algorithmic decision-making systems by businesses, including consumer-focused technologies.

The opinion aims in part to emphasize that existing state laws on consumer protection, anti-discrimination and data security still apply to emerging technologies, including AI systems – despite the complexity of these systems – just as they would in any other context.

“There is no doubt that AI holds enormous and exciting potential that will benefit society and our Commonwealth in many ways, including driving innovation and increasing efficiencies and cost savings in the marketplace,” Campbell said in a statement.

“Yet these benefits do not outweigh the real risk of harm that, for example, bias and lack of transparency within AI systems can cause to our residents,” she added.

Falsely advertising the usability of AI systems, providing a faulty AI system, and misrepresenting the reliability or security of an AI system are just some of the actions that could be considered unfair and misleading under state consumer protection laws, Campbell said.

Distorting a person’s audio or video content in an attempt to deceive another person into engaging in a commercial transaction or providing personal information as if to a trusted business partner – such as in the case of deepfakes, voice cloning or chatbots used to commit fraud – could also violate state law, she added.

The goal, in part, is to encourage companies to ensure their AI products and services are bias-free before entering the commercial mainstream – rather than suffering the consequences afterwards.

Regulators also say companies should disclose to consumers when they interact with algorithms. A lack of transparency could run afoul of consumer protection laws.

Elizabeth Mahoney of the Massachusetts High Technology Council, which advocates for the state’s tech economy, said that because there might be some confusion about how state and federal rules apply to the use of AI, it is essential to spell out state law clearly.

“We think it is important to have ground rules and that consumer and data protection is a key part of that,” she said.

Campbell acknowledges in her opinion that AI has the potential to bring great benefits to society, although it has also been shown to pose serious risks to consumers, including bias and lack of transparency.

Developers and vendors promise that their AI systems and technologies are accurate, fair and effective, even as they also claim that AI is a “black box,” meaning they do not know exactly how AI works or generates results, she said in her opinion.

The opinion also notes that state anti-discrimination laws prohibit AI developers, providers and users from using technology that discriminates against individuals based on a legally protected characteristic – such as technology that relies on discriminatory inputs or produces discriminatory results that would violate the state’s civil rights laws, Campbell said.

AI developers, providers and users must also take steps to protect personal data used by AI systems and comply with state data breach notification requirements, she added.
