Clearview CEO doubles down, claims biz has now scraped over ten billion social media selfies for surveillance

Plus: DeepMind makes its first profit... by selling to its stablemates

In brief Clearview AI says it has scraped more than 10 billion photographs from people’s public social media accounts for its controversial facial-recognition tool.

The startup’s CEO Hoan Ton-That also told Wired his engineers were working on new features to make blurry images sharper and to make it possible to recognize people even if they were wearing masks. Its software, often peddled to law enforcement agencies, provides face matching – you show it a still from CCTV, it finds the online profiles of that person – and the larger its database, the more faces it can identify.

The latest steps show Clearview has ignored pressure from Facebook, Google, YouTube, and Twitter, which last year urged the upstart to stop downloading people’s selfies. Clearview also said it only operates in the US.

“We're focusing on the United States, because we want to get it right here. We never want this to be abused in any way,” the chief exec said.

Police in Canada were banned from using the technology by the country’s privacy watchdog, and the startup previously told us it doesn’t have any customers in the European Union. The software has, however, been tested by law enforcement in the UK.

Tesla was due to roll out a much-awaited self-driving software update to about 1,000 beta testers on Friday night, though it stalled the upgrade process that evening. "A few last minute concerns about this build. Release likely on Sunday or Monday. Sorry for the delay," the automaker's CEO Elon Musk tweeted.

Unless he's talking about minor back-end issues with the deployment of the code, we're not convinced this is how safety-critical applications should be issued to those out on the road. Not good enough on a Friday, but fixed within a couple of days? Yeesh.

DeepMind says it's profitable for first time

Alphabet-owned AI research lab DeepMind netted £43.9m ($59.8m) in profit in 2020, according to its most recent financial statement [PDF] filed this week. By contrast, it lost £477m ($650m) the year before; 2020 is the first time the outfit has ever reported a profit.

Previously, DeepMind was known for losing hundreds of millions of pounds every year. What’s changed? The category "staff and other costs" remains the biggest money sink, though it didn’t jump up much compared to previous years, suggesting the company didn’t hire as heavily.

So how does it make money? DeepMind told CNBC it sells its software to its Alphabet stablemates, like Google and YouTube. It is “powering products and infrastructure that enrich the lives of billions through the many collaborations we have worked on across Alphabet over the years,” a spokesperson said.

What those exact products and capabilities are, however, is not well-known.

US Bill of Rights for humans against AI

The Biden administration reckons us mere mortals need to be protected against the uprising of machines as AI algorithms increasingly encroach on our lives.

The director and deputy director of the White House's Office of Science and Technology Policy (OSTP), Eric Lander and Alondra Nelson, respectively, argued for legal protection against the harmful effects of machine-learning systems. It now seems clear that these technologies, which require vast amounts of training data, can end up with biases baked into the software.

Deploying them in the real world can affect people’s lives, the pair argued. Take automated hiring algorithms, for example: they gather personal information from candidates to help decide whether applicants are worth hiring. People can be unfairly shut out of opportunities that affect their livelihoods.

“Soon after ratifying our Constitution, Americans adopted a Bill of Rights to guard against the powerful government we had just created—enumerating guarantees such as freedom of expression and assembly, rights to due process and fair trials, and protection against unreasonable search and seizure,” Lander and Nelson wrote in an op-ed.

“Throughout our history we have had to reinterpret, reaffirm, and periodically expand these rights. In the 21st century, we need a “bill of rights” to guard against the powerful technologies we have created.”

The OSTP is working on the bill and is looking to engage with people on how to protect our rights in the digital age. ®
