
Microsoft bans US police departments from using corporate AI tool for facial recognition

Microsoft has changed its policy to prohibit US police departments from using generative AI for facial recognition through the Azure OpenAI service, the company’s fully managed, enterprise-focused package around OpenAI technologies.

Language added Wednesday to the Azure OpenAI Service terms of service prohibits the use of integrations with Azure OpenAI Service "by or for" law enforcement agencies for facial recognition in the United States, including integrations with OpenAI's text and speech analysis models.

A separate new item covers "all law enforcement agencies worldwide" and explicitly prohibits the use of "real-time facial recognition technology" on mobile cameras, such as body cameras and dashcams, to attempt to identify a person in uncontrolled, "in the wild" environments.

The term changes come a week after Axon, a maker of technology and weapons products for the military and law enforcement, announced a new product that leverages OpenAI's GPT-4 generative text model to summarize audio from body cameras. Critics have been quick to point out potential pitfalls, like hallucinations (even today's best generative AI models make up facts) and racial biases introduced from training data (which is particularly concerning given that people of color are far more likely to be stopped by police than their white peers).

It is unclear whether Axon was using GPT-4 through Azure OpenAI Service and, if so, whether the updated policy was a response to Axon’s product launch. OpenAI had previously restricted the use of its models for facial recognition through its APIs. We have contacted Axon, Microsoft and OpenAI and will update this article if we receive a response.

The new conditions give Microsoft room to maneuver.

The complete ban on the use of Azure OpenAI Service applies only to US, not international, police. And it doesn't cover facial recognition performed with stationary cameras in controlled environments, such as a back office (although the terms prohibit any use of facial recognition by US police).

This is consistent with Microsoft and close partner OpenAI’s recent approach to AI-related law enforcement and defense contracts.

In January, a Bloomberg article revealed that OpenAI was working with the Pentagon on a number of projects, including cybersecurity capabilities, a departure from the startup’s previous ban on providing its AI to the military. Elsewhere, Microsoft has proposed using OpenAI’s image generation tool, DALL-E, to help the Department of Defense (DoD) create software to run military operations, according to The Intercept.

Azure OpenAI Service became available in Microsoft's Azure Government product in February, adding compliance and management features aimed at government agencies, including law enforcement. In a blog post, Candice Ling, senior vice president of Microsoft's government-focused division Microsoft Federal, promised that Azure OpenAI Service would be "submitted for further authorization" to the DoD for workloads supporting DoD missions.

Update: After publication, Microsoft said that its initial change to the terms of service contained an error and that the ban in fact applies only to facial recognition in the United States. It is not a blanket ban on police departments using the service.

Source: TechCrunch
