
Reinforcement learning woes, robot doggos, Amazon's homegrown AI chips, and more

Why machines aren't really superhuman at all

Roundup Hello! Here's a brief roundup of some interesting news from the AI world over the past two weeks, beyond what we've already reported.

Behold a fascinating, honest explanation of why reinforcement learning isn't all that, Amazon developing its own chips, and an AI that colors in comic books. Also, there's a new Boston Dynamics robot video.

TL;DR: Deep RL sucks – A Google engineer has published a long, detailed blog post explaining the current frustrations in deep reinforcement learning, and why it doesn’t live up to the hype.

Reinforcement learning makes good headlines. Teaching agents to play games like Go well enough to beat human experts like Ke Jie fuels the man-versus-machine narrative. But a closer look at deep reinforcement learning, a machine-learning method in which an agent learns a task by trial and error, guided by a reward signal, shows the practice is riddled with problems.
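
For the uninitiated, here's a toy sketch of that reward-driven loop in Python. It's a simple bandit problem, nowhere near the deep RL systems discussed below, and purely illustrative: the agent tries actions, observes rewards, and drifts towards whatever pays off.

import random

# Toy "environment": three actions with hidden payout probabilities.
true_payouts = [0.2, 0.5, 0.8]
estimates = [0.0, 0.0, 0.0]   # the agent's learned value of each action
counts = [0, 0, 0]

for step in range(10_000):
    if random.random() < 0.1:                              # explore occasionally
        action = random.randrange(3)
    else:                                                   # otherwise exploit the best estimate
        action = max(range(3), key=lambda a: estimates[a])
    reward = 1.0 if random.random() < true_payouts[action] else 0.0
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)   # converges towards the true payouts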

All the impressive RL results that reach human or superhuman level require a massive amount of training and experience to get the machine to do one narrow thing well. For example, it took DeepMind’s AlphaZero some 68 million games of self-play to master chess and Go – even at one game a minute, around the clock, that works out to roughly 130 years of play, far more than any human could manage in a lifetime.

Alex Irpan, a researcher using deep reinforcement learning for robotics, calls this “sample inefficiency”.

“There’s an obvious counterpoint here: What if we just ignore sample efficiency? There are several settings where it’s easy to generate experience. Games are a big example. But, for any setting where this isn’t true, RL faces an uphill battle, and unfortunately, most real-world settings fall under this category,” he wrote.

It’s difficult to coax an agent into learning a specific behavior, and in many cases hard-coded rules simply work better. Often, in trying to maximize its reward, the model learns to game the system, finding tricks to get around a problem rather than solving it.

The post lists a few anecdotes where this popped up in research. Here is a good one: “A researcher gives a talk about using RL to train a simulated robot hand to pick up a hammer and hammer in a nail. Initially, the reward was defined by how far the nail was pushed into the hole. Instead of picking up the hammer, the robot used its own limbs to punch the nail in. So, they added a reward term to encourage picking up the hammer, and retrained the policy. They got the policy to pick up the hammer…but then it threw the hammer at the nail instead of actually using it.”
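
To see how this kind of reward hacking creeps in, here's a hypothetical shaped reward written out in Python. It isn't the researchers' actual code, just an illustration of how each patch to the reward function can leave a fresh loophole.

def shaped_reward(nail_depth, holding_hammer):
    # Original objective: how far the nail has been driven in. A policy that
    # punches the nail in with its own limbs maximizes this without ever
    # touching the hammer.
    reward = nail_depth
    # Patch: a bonus for holding the hammer. A policy that grabs the hammer
    # and then hurls it at the nail still collects the bonus early on and the
    # depth reward when the hammer lands, so yet more terms (and loopholes)
    # follow.
    if holding_hammer:
        reward += 0.5
    return reward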

The random nature of RL makes it difficult to reproduce results, another major problem for research.

Irpan is, however, still optimistic about RL and thinks it can improve in the future. “Deep RL is a bit messy right now, but I still believe in where it could be. That being said, the next time someone asks me whether reinforcement learning can solve their problem, I’m still going to tell them that no, it can’t. But I’ll also tell them to ask me again in a few years. By then, maybe it can.”

New Arm mobile chips – Arm announced the launch of Project Trillium, an effort to bring chips capable of processing machine learning workloads to IoT devices, wearables, and mobiles.

Rene Haas, president, IP Products Group at Arm, said: “The rapid acceleration of artificial intelligence into edge devices is placing increased requirements for innovation to address compute while maintaining a power efficient footprint. To meet this demand, Arm is announcing its new ML platform, Project Trillium.”

Two products will be available for early preview in April, with general availability expected in mid-2018. The first is the Arm ML processor, which can apparently deliver more than 4.6 trillion operations per second. It has a “programmable layer engine”, some local memory, and access to external memory for running machine learning algorithms.

The second, the Arm OD processor, is geared towards object detection, so it could be useful for identifying people in security camera footage.

It analyzes video frames at 60 frames per second. The detection algorithms were developed in-house at Arm; they work on “whole human forms”, including faces, heads and shoulders, and can determine the direction each person is facing. The resulting data streams add up to only a few kilobytes, so more of them can be sent to the cloud.

Arm said the initial launch focuses on mobile processors, but future products will also target sensors, smart speakers, home entertainment, etc.

Both products will be supported by Arm NN software, to be used alongside the Arm Compute Library and CMSIS-NN. The code is optimised for neural networks and works with frameworks such as TensorFlow, Caffe, and Android NN on Arm’s CPUs and GPUs, as well as its ML processors.

Chips at Amazon – Amazon is reportedly developing its own AI chips to power the Echo, its smart speaker fronted by Alexa, the digital assistant, and it supposedly gobbled up a security camera startup in secret last year.

We don’t know much about the chips; details were scant and Amazon did not comment, according to The Information. The idea is that a specialized accelerator chip would let the Echo respond faster and more efficiently, since it could carry out more of the processing on the device rather than relying on the cloud.

Amazon acquired Annapurna Labs, a chipmaker based in Israel, in 2015 for $350 million. And now Reuters reports that it also secretly bought Blink, another startup specializing in chips for security cameras, for $90 million.

We pressed Amazon for comment on its Echo chips, and on whether it has any plans to develop chips for its cloud business, but a spokesperson told us: "We are not commenting on this topic." If all the cloud giants start designing their own AI and machine learning chips for their clouds, it could put Nvidia in a bit of a pickle.

Google has already announced plans to do just that with its Cloud TPU chips.

A robot dog opens a door – Boston Dynamics released another teaser video showing off robot dogs.

What looks like a headless robot dog approaches a door and waits for another robo-dog to emerge from the shadows. Its buddy then extends a long, gripper-tipped arm from its head and opens the door for it.

Boston Dynamics videos always send the internet into a brief frenzy. The careful mechanical control of these robots is impressive (remember the backflipping Atlas?), but it’s unclear how they work and how autonomous they really are.

The company is notoriously tight-lipped.

Here is the video below.

Youtube Video

AI tracking human movement – Researchers at Facebook have trained convolutional neural networks to pick out human bodies in videos and then map different textures onto them.

An example below shows crowded scenes with people walking or skateboarding. After the pixels associated with the bodies have been mapped, various skins and outfits are superimposed onto them.

At first, it might seem a bit silly and pointless. But in a paper published on arXiv, the researchers said, “This work aims at pushing further the envelope of human understanding in images by establishing dense correspondences from a 2D image to a 3D, surface-based representation of the human body.”

It might aid graphics, augmented reality, or human-computer interaction, and even lead to a general understanding of objects in three dimensions, apparently.

To train the system, known as DensePose, the researchers took 50,000 pictures from COCO, a popular object detection dataset. The relevant body parts, such as the face, arms, torso and legs, were annotated and segmented, and used to train a convolutional neural network to highlight bodies in images and videos it hasn’t seen before. Different textures can then be mapped onto the pixels highlighted by the network.
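
For a rough feel of that last texture-mapping step, here's a minimal sketch in Python. It is not Facebook's code and is far simpler than DensePose, which regresses dense coordinates on a 3D body surface rather than a flat mask; this version just uses an off-the-shelf segmentation network to find person pixels in a frame and pastes a texture over them. The file names and model choice are placeholders.

import numpy as np
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

# Off-the-shelf semantic segmentation model (class 15 = person in its label set).
model = deeplabv3_resnet50(pretrained=True).eval()
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("frame.jpg").convert("RGB")           # one video frame (placeholder path)
texture = Image.open("texture.jpg").resize(img.size)   # the "skin" to map on (placeholder path)

with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))["out"][0]
person_mask = logits.argmax(0).numpy() == 15           # boolean mask of person pixels

frame = np.array(img)
frame[person_mask] = np.array(texture)[person_mask]    # overlay the texture on those pixels
Image.fromarray(frame).save("textured.jpg")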

Here's a video that shows the social networkers' work.

Youtube Video

Coloring in comics – Preferred Networks, a Japanese AI startup with an interest in IoT, has collaborated with publishing companies to distribute manga comics that have been automatically colored using deep learning.

The model, dubbed PaintsChainer, is trained on pairs of sketches and their colored versions to learn which colors should be used where; for example, skin tones for faces.
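
The company's pipeline isn't public, but the underlying idea of learning from paired examples can be sketched in a few lines of Python. The toy network and random tensors below are placeholders (PaintsChainer itself is built on the Chainer framework and is far more sophisticated); the point is simply that the model is penalized whenever its colors differ from the artist's.

import torch
import torch.nn as nn

# Toy sketch-to-color network: 1-channel line art in, 3-channel color image out.
model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

# In practice these pairs come from published line art and its colored pages;
# here they are random placeholders.
line_art = torch.rand(8, 1, 256, 256)
colored = torch.rand(8, 3, 256, 256)

prediction = model(line_art)
loss = loss_fn(prediction, colored)    # penalize colors that differ from the artist's
optimizer.zero_grad()
loss.backward()
optimizer.step()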

Hakusensha and Hakuhodo DY digital, both Japanese publishers of internet manga comics, have released titles that have been automatically colored by PaintsChainer. There is also an option for those who want to hold onto some artistic freedom: you can broadly choose the colors of the clothes or hair in your drawings, and PaintsChainer fills in the rest.

You can upload your own drawings and play with PaintsChainer here. ®
