Surprise, surprise: AI cameras sold to schools in New York struggle with people of color and are full of false positives

Plus: the US President signs a new executive order on AI, and JAX becomes more popular at DeepMind

In brief A Canadian security company apparently lied to officials at New York’s Lockport City School District about the accuracy of its facial recognition cameras when the technology was installed across the district’s schools last year.

Documents obtained by Vice show that SN Technologies’ CEO KC Flynn claimed the algorithm running on its cameras, id3, had been vetted by the National Institute of Standards and Technology, and that it ranked 49th out of 139 in tests for racial bias. Although NIST has tested id3 algorithms, one of its scientists denied the agency had tested anything matching Flynn’s description.

Schools believe computer vision systems can detect weapons and prevent shootings. But experts have repeatedly warned that such systems are more likely to generate false positives for black students, painting them as suspected criminals when they’re not.

A report also showed that SN Technologies' software was worse at identifying black people than the company let on. It also mistook objects like broom handles for guns. Parents have sued the New York State Education Department (NYSED) for approving the use of facial recognition at Lockport City Schools.

The US President urged the government to build trustworthy AI systems

Donald Trump signed an executive order this week, outlining nine principles that the US government will adhere to when designing and implementing AI technology.

It promised to uphold constitutional rights and laws protecting privacy and civil liberties, and to make sure the systems in place are accurate, transparent, understandable, and regularly monitored. Agencies deploying the software will be held accountable for ensuring the principles are enforced.

“Artificial intelligence (AI) promises to drive the growth of the United States economy and improve the quality of life of all Americans,” the order said. “Given the broad applicability of AI, nearly every agency and those served by those agencies can benefit from the appropriate use of AI…Agencies are encouraged to continue to use AI, when appropriate, to benefit the American people. The ongoing adoption and acceptance of AI will depend significantly on public trust.”

You can read the full document here.

DeepMind is turning to Python-based JAX

PyTorch is the favoured framework in the AI community. It has overtaken Google’s clunky, difficult-to-use TensorFlow, so the search giant decided to come up with something simpler: JAX.

Like PyTorch, JAX is based on Python. And this week, DeepMind described how its researchers have been increasingly using it in their work. “We have found that JAX has enabled rapid experimentation with novel algorithms and architectures and it now underpins many of our recent publications,” it said.

It allows researchers to build and test their software more quickly, and has helped them develop all sorts of tools for training models, inspecting code, and creating AI agents in reinforcement learning experiments.
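For a flavour of why, here is a minimal, illustrative sketch of JAX’s core idea: composable function transformations, such as grad for automatic differentiation and jit for compiling functions with XLA. The toy linear model and data below are our own, not taken from DeepMind’s post.

```python
# A minimal sketch of JAX's composable transformations (illustrative only).
import jax
import jax.numpy as jnp

def loss(w, x, y):
    # Squared-error loss for a toy linear model.
    pred = jnp.dot(x, w)
    return jnp.mean((pred - y) ** 2)

# grad builds a new function computing dloss/dw;
# jit compiles it with XLA so repeated calls run fast.
grad_fn = jax.jit(jax.grad(loss))

key = jax.random.PRNGKey(0)           # JAX uses explicit PRNG state
w = jax.random.normal(key, (3,))      # random weights
x = jnp.ones((8, 3))                  # dummy inputs
y = jnp.zeros(8)                      # dummy targets

print(grad_fn(w, x, y))               # gradient with respect to w
```

Because the transformations compose, the same function can also be auto-vectorised with vmap or spread across accelerators with pmap without being rewritten, which is much of what makes that rapid experimentation possible.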

You can read about it in more detail here.

MLCommons, a new benchmarking organisation for AI infrastructure

The team behind MLPerf, an industry effort that provides standard testing to benchmark machine learning hardware, have launched a new project known as MLCommons.

“Machine Learning is a young field that needs industry-wide shared infrastructure and understanding,” David Kanter, executive director of MLCommons, said in a statement. “With our members, MLCommons is the first organization that focuses on collective engineering to build that infrastructure.”

“We are thrilled to launch the organization today to establish measurements, datasets, and development practices that will be essential for fairness and transparency across the community.”

It published People’s Speech, a giant public dataset containing more than 80,000 hours of speech samples, to test a machine’s ability to accurately transcribe speech to text. Companies selling such a tool over the cloud, for example, can enter the competition to find out which model is most accurate.
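The dataset doesn’t dictate how accuracy is scored, but speech-to-text systems are conventionally measured by word error rate (WER): the word-level edit distance between a model’s transcript and a human reference, divided by the length of the reference. Below is a minimal sketch of that standard calculation, with made-up example strings.

```python
# Minimal word error rate (WER) sketch: Levenshtein distance over words
# (substitutions + insertions + deletions) divided by reference length.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # one dropped word: ~0.167
```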

Whilst these benchmarking efforts are laudable, they’re only useful and impactful if as many companies as possible take part.

Machine learning software has gotten better at identifying faces covered by masks

Face masks are a common sight during the coronavirus pandemic. Covering up the bottom half of your mug, however, makes it difficult for facial recognition software to identify faces.

NIST examined the effects of mask wearing on the technology in July, and found that many vendors struggled with the same problems. The same tests have now been performed again, and this time round things have improved.

“Some newer algorithms from developers performed significantly better than their predecessors. In some cases, error rates decreased by as much as a factor of 10 between their pre- and post-COVID algorithms,” said Mei Ngan, a NIST scientist. “In the best cases, software algorithms are making errors between 2.4 and 5 [per cent] of the time on masked faces, comparable to where the technology was in 2017 on nonmasked photos.”

NIST tested 152 different algorithms, and published the results in a report. Take them with a pinch of salt, however, since the test images used photographs of people with so-called “digital masks” pasted onto their faces rather than them wearing real cloth masks. ®
