In the rush to implement AI, ethics takes a back seat in many companies

Companies have been rushing to deploy generative AI technology in their work since the launch of ChatGPT in 2022.

Executives say they are excited about how AI increases productivity, analyzes data and reduces tedious work.

According to Microsoft and LinkedIn’s 2024 Work Trends report, which surveyed 31,000 full-time workers between February and March, nearly four in five business leaders believe their companies must embrace technology to stay competitive.

But adopting AI in the workplace also poses risks, including reputational, financial and legal damage. The challenge in combating these risks is that they are hard to define, and many companies are still trying to figure out how to identify and measure them.

Responsibly managed AI programs should include governance, data privacy, ethics, trust and security strategies, but experts who study the risks say the programs have not kept pace with innovation.

Efforts to use AI responsibly in the workplace are moving “nowhere near as fast as they should be,” Tad Roselund, managing director and senior partner at Boston Consulting Group, told Business Insider. These programs often require a considerable investment and a minimum of two years to implement, according to BCG.

This is a significant investment and time commitment, and business leaders seem instead to be focused on allocating resources to rapidly develop AI in ways that increase productivity.

“Building good risk management capabilities requires considerable resources and expertise, which not all companies can afford or have today,” Nanjira Sam, a researcher and policy analyst, told the MIT Sloan Management Review. She added that “demand for AI governance and risk experts exceeds supply.”

Investors must play a more crucial role in funding the tools and resources needed for these programs, according to Navrina Singh, the founder of Credo AI, a governance platform that helps companies comply with AI regulations. Funding for generative AI startups reached $25.2 billion in 2023, according to a report from the Stanford Institute for Human-Centered Artificial Intelligence, but it’s unclear how much of that went to companies focused on responsible AI.

“The venture capital environment also reflects a disproportionate focus on AI innovation versus AI governance,” Singh told Business Insider via email. “To adopt AI at scale and quickly in a responsible manner, equal emphasis must be placed on ethical frameworks, infrastructure and tools to ensure sustainable and responsible integration of AI across all sectors.”

Legislative efforts have been underway to address this gap. In March, the EU approved the Artificial Intelligence Act, which divides AI applications into risk categories and bans those presenting unacceptable risks. Meanwhile, the Biden administration signed an ambitious executive order in October requiring greater transparency from big tech companies developing artificial intelligence models.

But with the pace of innovation in AI, government regulations may not be enough at this time to ensure businesses protect themselves.

“We risk a substantial accountability gap that could halt AI initiatives before they reach production, or worse, lead to failures that result in unintended societal risks, reputational damage and regulatory complications if they were being put into production,” Singh said.
