Microsoft has updated its terms of service to prohibit US police departments from using generative AI for facial recognition through the Azure OpenAI Service. The added language prevents integrations with the service from being used “by or for” police departments in the US, including integrations with OpenAI’s text- and speech-analyzing models.
A separate provision covers “any law enforcement globally,” explicitly barring the use of “real-time facial recognition technology” on mobile cameras, such as body cameras and dashcams, to attempt to identify a person in “uncontrolled, in-the-wild” environments.
This policy change comes a week after Axon, a maker of tech and weapons products for military and law enforcement, announced a new product that leverages OpenAI’s GPT-4 generative text model to summarize audio from body cameras. Critics were quick to point out the potential pitfalls, like hallucinations and racial biases introduced from the training data.
The new terms leave wiggle room for Microsoft. The complete ban on Azure OpenAI Service usage pertains only to US, not international, police. And it doesn’t cover facial recognition performed with stationary cameras in controlled environments, like a back office (although the terms prohibit any use of facial recognition by US police). That tracks with Microsoft’s and close partner OpenAI’s recent approach to AI-related law enforcement and defense contracts.
Read more at: techcrunch.com