
More facial-recognition bans, new creeper tool links girlfriends to past porno, Microsoft's AI school, and more

Plus machine systems can trounce humans at Quake III flag captures

Roundup Let's get right to it: here's your latest roundup of recent machine-learning-related news beyond what we've already reported.

Cheap human labor is remote-controlling Kiwibots: Food delivery machines known as Kiwibots may look dinky and sweet, trundling slowly across the campus of the University of California, Berkeley, to bring students food.

Their screens light up with a pair of eyes that can blink and wink, but they aren't as smart as they make out. Behind the cuteness is a team of humans working to keep the bots on track, since the machines have no idea where they're going.

The work is outsourced to operators in Colombia, who have to figure out “waypoints” to trace a path for the Kiwibots to follow. They update the delivery robots with directions every five to ten seconds, and are paid less than $2 per hour, according to the San Francisco Chronicle. During that cheap hour, operators can aid the robots in up to 15 trips, which, let's face it, is cheaper than developing your own AI system.

The robots are decked out with GPS, so the operators can see where they are on a street map, and a camera feed shows their local surroundings. It looks like the Kiwibots use some sort of AI software to avoid collisions, but nothing that would help them navigate autonomously.
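To picture how human-in-the-loop driving like this might work, here's a minimal Python sketch of a waypoint cycle. The `robot` and `operator` interfaces, and the seven-second interval, are assumptions invented for illustration; none of this is based on Kiwi's actual software.

```python
import time

# The article reports operators send fresh directions every five to ten seconds.
WAYPOINT_INTERVAL_S = 7

def teleop_loop(robot, operator):
    """Human-in-the-loop waypoint cycle (hypothetical interfaces throughout)."""
    while not robot.delivery_complete():
        # The operator sees the bot's GPS position on a street map plus a
        # camera feed of its surroundings...
        snapshot = {
            "gps": robot.gps_position(),
            "camera": robot.camera_frame(),
        }
        # ...and picks the next short-range waypoint for it to follow.
        waypoint = operator.choose_waypoint(snapshot)

        # The bot drives toward the waypoint; onboard collision avoidance is
        # the only part of the system that looks autonomous.
        robot.drive_to(waypoint, avoid_collisions=True)
        time.sleep(WAYPOINT_INTERVAL_S)
```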

Kiwi is a startup founded in 2017 and based in Berkeley. It currently only delivers food around UC Berkeley and parts of the surrounding neighbourhood, but has aspirations to spread its Kiwibots to other college campuses across America.

Check if your girlfriend has been in a porno with AI. Just eeewww: A Chinese techie living in Germany has set tongues wagging after he claimed to have developed a facial recognition system that can link XXX-movie actresses' faces to social media profile selfies.

He posted the announcement anonymously on the Chinese social media platform Weibo, where it was spotted by a Stanford PhD student.

The tool is supposedly meant to check whether your girlfriend has featured in an adult flick. Unsurprisingly, the reactions have been mixed: some are excited, others disgusted. Besides all the serious ethical and legal questions, there are technical ones too.

How would such a system work? The developer claims to have scraped over 100TB of data from various porn sites, which a machine-learning model can use to cross-reference profile pictures from social media platforms like Facebook, Instagram, or Weibo. It sounds as though, for something like this to be effective, you'd have to build a massive database of images from porn videos.
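Stripped down, a system like that would amount to nearest-neighbour search over face embeddings: turn every scraped face into a vector, do the same to the selfie, and return the closest match. Here's a rough Python sketch of that idea; the `embed_face` function, database arrays, and scoring are assumptions for illustration, not anything the developer has published.

```python
import numpy as np

def embed_face(image) -> np.ndarray:
    """Hypothetical stand-in for a face-embedding model (a network that maps
    a cropped face to a fixed-length vector). Not the developer's code."""
    raise NotImplementedError

def best_match(selfie, db_embeddings: np.ndarray, names: list) -> tuple:
    """Compare one social-media selfie against a pre-built database of face
    embeddings scraped from adult videos; return the closest name and score."""
    query = embed_face(selfie)
    # Cosine similarity between the query vector and every stored embedding.
    db = db_embeddings / np.linalg.norm(db_embeddings, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    scores = db @ q
    idx = int(np.argmax(scores))
    return names[idx], float(scores[idx])
```

In practice the hard part isn't the matching maths, it's the quality of the scraped images and the embeddings, which is exactly where the scepticism below comes in.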

There’s reason to be skeptical, even though the developer claims to have identified over 100,000 porn actresses. What happens if the image quality is poor? Or if there isn’t much data to go on in the videos or from social media profiles? Can something like this really be scaled up across all adult clips? Facial recognition already suffers from plenty of technical issues as it is.

Initially, he didn’t seem to feel bad about creating such an abhorrent tool. He said he hadn't shared any data or the database for people to use, and that sex work is legal in Germany, where he lives. He also said he’d be up for building a tool to scrutinize male porn stars too, although after the backlash he appears to have given up on the scheme.

But the anonymous coder has since apologized for the tool, and has, apparently, deleted all the data and discontinued the project, according to MIT Tech Review.

AI bots can play Capture the Flag mode in Quake III: Researchers at DeepMind have trained a team of machine learning agents to play cooperatively with each other in a game of Capture The Flag in Quake III Arena.

Capture The Flag is a popular mode in first-person shooters. The goal is to capture the opposing team’s flag and get it back to your base, while protecting your own. It requires teamwork, something the bots learn over time. They pick up certain behaviors, such as defending their home base, camping near their opponent’s base, and following their teammates around the map.

The researchers don’t explicitly code in any hard rules, and the only reward signal is whether the team has won or lost. The bots were trained using reinforcement learning over 450,000 games. After this, they were pitted against 25 humans in 100 games. The humans only won about 30 per cent of the time, according to the results published in a Science paper this week.
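The core idea, stripped of everything that makes it work at DeepMind's scale, is policy-gradient reinforcement learning on a sparse win/loss reward. Below is a minimal, hypothetical Python sketch of that idea; the `env` interface, network sizes, and plain REINFORCE update are assumptions for illustration, not the For The Win agent described in the Science paper, which used recurrent policies and population-based training.

```python
import torch
import torch.nn as nn

# Tiny policy network: observation vector in, action logits out.
policy = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 8))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

def play_episode(env):
    """Roll out one Capture The Flag match. `env` is a hypothetical
    interface standing in for the game, not DeepMind's setup."""
    log_probs, obs, done = [], env.reset(), False
    while not done:
        logits = policy(torch.as_tensor(obs, dtype=torch.float32))
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        obs, done = env.step(action.item())
    # The only reward signal: +1 if the agent's team won the match, -1 if it lost.
    return log_probs, (1.0 if env.team_won() else -1.0)

def update(log_probs, outcome):
    # Push up the probability of every action taken in winning games,
    # push it down for losing ones.
    loss = -outcome * torch.stack(log_probs).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```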

Multi-agent training is pretty cool and all, but when is it going to extend to something actually useful in the real world, eh? Still, DeepMind is adamant that it will have potential one day.

“In general, this work highlights the potential of multi-agent training to advance the development of artificial intelligence: exploiting the natural curriculum provided by multi-agent training, and forcing the development of robust agents that can even team up with humans,” it said.

Microsoft’s AI Business School is open to, erm, government agencies now!: Remember Microsoft’s free course AI Business School? No? Well, okay, it’s an online series that teaches leadership skills for AI and the big bad world of business. Microsoft has just added some lessons for people working in government.

These include a lecture on how officials can identify opportunities to use AI, and two case studies on how the technology is being used to develop smart cities in Finland and chatbots for government websites.

“Leaders in the public sector are often faced with unique challenges when considering how to apply AI to improve the speed and quality of the government services they offer their citizens,” said Mitra Azizirad, corporate vice president for Microsoft AI marketing.

“The opportunities and scenarios for AI in the public sector are ever increasing, which can make deciding where and how to apply it quite daunting. This is precisely why we expanded Microsoft’s AI Business School to now include a specifically tailored and targeted public sector curriculum to help these leaders address their citizens’ unique needs.”

If any of that sounds remotely interesting to you at all, then here’s a link to all the classes.

Michigan may be the next state to ban facial recognition for law enforcement: San Francisco was the first city to do it, and now Michigan may be the first US state to ban the technology.

State Senator Peter Lucido, a Republican serving in Lansing, Michigan, drafted a bill that would prevent law enforcement from using facial-recognition technology.

“A law enforcement official shall not obtain, access, or use any face recognition technology or any information obtained from the use of face recognition technology to enforce the laws of this state or a political subdivision of this state,” according to the bill introduced this week.

It also says that any evidence or search and arrest warrants obtained through the use of the technology would be unconstitutional, violating the Fourth Amendment.

It’s very early days yet, and the bill will have to pass through various committees before it can be considered by the Senate and House of Representatives. Keep your eyes peeled. ®
