US police have run nearly 1M Clearview AI searches, says founder

Crimes Clearview has helped solve include murder and the devastating scourge of ... shoplifting

US police have used Clearview AI facial recognition tech to conduct nearly one million searches since the company launched in 2017 – but its founder and CEO said he's still unwilling to testify to its accuracy.

Those numbers were provided by Clearview CEO Hoan Ton-That to the BBC, along with another startling figure: the US-based facial recognition company has scraped approximately 30 billion images from various social media platforms, many of which sent cease-and-desist letters to Clearview when its existence was made public by a 2020 New York Times investigation.

Images collected by Clearview AI are pulled into its systems without the knowledge or consent of their subjects.  

Clearview enables its users to search its 30 billion-strong archive of photos scraped from the internet for faces that match a submitted snapshot. Thanks to a settlement reached last year in a case brought by the American Civil Liberties Union, Clearview is off limits to private companies. Public entities – like police departments – are free to use the technology, and by all accounts have done so with gusto.

The Miami Police Department told the BBC that it performs around 450 Clearview searches per year, and that the platform has helped it solve several murders. Miami police also told the BBC that they use Clearview for nonviolent crimes – including offenses as disruptive to good social order as shoplifting.

The Miami Police Department confirmed receipt of The Reg's questions about its use of Clearview, but didn't send any answers to us as of publication.

Miami PD Assistant Chief of Police Armando Aguilar told the news outlet that the department doesn't use Clearview facial recognition to make arrests, instead treating a match like a tip that can provide potential faces for a photographic lineup or drive additional investigation.

But don't take my word for it

Despite the apparently widespread use of Clearview's tech by US police, Ton-That told the BBC that he's hesitant to testify in court to the company's claimed 99.6 percent accuracy, which was called into question after the 2020 NYT investigation.

Ton-That told the BBC he doesn't want to testify to the accuracy of his product because investigators like the Miami PD are "using other methods to verify it."

Along with the usage restrictions imposed last year when it settled the aforementioned case brought by the American Civil Liberties Union, Clearview AI was fined £7.5 million ($9.43 million) by the UK's Information Commissioner's Office for illegally scraping photos of UK residents. The regulator also ordered Clearview to stop collecting data on UK residents and to delete all such information it had already gathered.

Data protection agencies in Italy, Greece and France have also either fined or banned Clearview from operating within their borders. 

While government agencies and police in the US are free to use Clearview, some localities have banned its use by law enforcement, including the states of Illinois and New Jersey as well as the cities of Portland, San Francisco and Seattle. 

Those states and cities have been entirely justified in that move, Nathan Wessler, deputy director of the ACLU's Speech, Privacy, and Technology Project, told The Register.

Wessler described Clearview's tech as "a dangerous privacy disaster" because of a lack of judicial or legislative oversight imposed on law enforcement in the US. 

"Clearview is out of the business of selling to most private entities. It should be out of the business of selling to police too," Wessler said.

A spokesperson at Privacy International told us that Clearview had built its facial recognition database through "violations of various countries' privacy laws".

"They owe millions of dollars to regulators in the EU and elsewhere, and yet keep developing their rights-violating tool. That should be sufficient to outright ban its use - but it's even more maddening to hear that police forces are using Clearview for 'every type of crime'. Only a really high threshold of crime severity, and a clear necessity of using the tool, could ever justify the use of a tool with potentially severe consequences for the human rights and freedoms of individuals - if ever."

Not-for-profit digital rights advocacy group Fight for the Future sent us a statement: "Clearview is a total affront to peoples' rights, full stop, and police should not be able to use this tool. Police use of facial recognition has led to wrongful arrests, allows persistent surveillance (disproportionately targeting Black and brown communities), ignores our right to privacy, and throws due process out the window. Lawmakers need to stop watching this tech spread and need to ban it immediately." ®
