IBM hastens END OF HUMANITY with teachable AI 'brain'

Could also bust annoying CAPTCHAs, if that's your thing

IBM researchers in Almaden have built a brain-like system capable of recognising hand-written numbers, essentially by using phase-change memory chips instead of flash.

Neuromorphic computing is a strand of artificial intelligence research in which a computer’s memory is configured in a rudimentary brain-like way, as quasi-neurons and synapses, forming what is termed a neural network.

Statistical learning algorithms are used to “teach” such a system to behave in particular ways by strengthening or weakening individual synaptic connections, which in turn affects each neuron’s binary value.

Typically, researchers train the system on some human ability, like detecting objects in images or reading handwriting, by arranging for a sum of certain neuron values to indicate a desired outcome, such as “it is a letter A” when the system is shown a particular image (of the letter A).

Neurons have multiple synaptic connections and these are given different values, or weights, as the system learns. The neurons and synapses can be organised in layers (a multi-layer perceptron) which sequentially process the information coming from an image and, hopefully, recognise it.
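To make the weighted-sum idea concrete, here is a minimal software sketch - not IBM's PCM implementation, and with illustrative layer sizes and random weights - of how a multi-layer perceptron chains weighted sums to turn an image into a guess:

```python
import numpy as np

def forward(pixels, w_hidden, w_output):
    """Toy multi-layer perceptron forward pass.

    pixels:    flattened input image (e.g. 28 x 28 = 784 greyscale values)
    w_hidden:  synaptic weights from the input layer to the hidden layer
    w_output:  synaptic weights from the hidden layer to the output layer
    Returns one score per output neuron (here, ten digit classes).
    """
    hidden = np.tanh(pixels @ w_hidden)   # weighted sums, squashed by an activation
    return hidden @ w_output              # weighted sums at the output layer

# Illustrative sizes: 784 input pixels, 100 hidden neurons, 10 digit classes.
rng = np.random.default_rng(0)
w_hidden = rng.normal(scale=0.01, size=(784, 100))
w_output = rng.normal(scale=0.01, size=(100, 10))

image = rng.random(784)                   # stand-in for a flattened handwritten digit
print(forward(image, w_hidden, w_output).argmax())  # index of the strongest output neuron
```

Training is then the business of nudging w_hidden and w_output until the right output neuron wins for each image - the synaptic-weight tuning described above.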

IBM Research staff at Almaden built a system with 913 neurons and 165,000 synaptic connections, using pairs of phase-change memory cells to represent each synaptic weight on a 500 x 661 array of mushroom-cell 1T1R PCM devices.

The research team presented a paper on their work, Experimental Demonstration and Tolerancing of a Large-scale Neural Network (165,000 synapses), Using Phase-change Memory as the Synaptic Weight Element, at the International Electron Devices Meeting in San Francisco in December 2014.

Researcher Dr Geoff Burr, who works in the storage-class memory area, said Phase-Change Memory (PCM) is better suited than NAND to neuromorphic computing.

This is because PCM is faster and denser, meaning more quasi-neurons and synapses can be packed into less space, enabling larger systems to be built. It is also simpler to change a cell’s value, with no need for NAND’s block erase/rewrite process.

This matters because neuromorphic computing relies on the system learning: neuron behaviour changes as the synaptic connection values vary, and the system’s overall performance improves as a result.

This system is quite different from IBM’s TrueNorth neuromorphic processor, which used 4,096 cores to mimic 1,000,000 neurons and had much input from Cornell Tech and iniLabs. The Almaden researchers instead had the synapses - (plastic) bipolar synapses - emulated by PCM chips.

The paper’s abstract said:

Using two PCM devices per synapse, a three-layer perceptron network with 164,885 synapses is trained on a subset (5,000 examples) of the MNIST* database of handwritten digits using a back-propagation variant suitable for NVM+selector crossbar arrays, obtaining a training (generalization) accuracy of 82.2 per cent (82.9 per cent).

Using a neural network (NN) simulator matched to the experimental demonstrator, extensive tolerancing is performed with respect to NVM variability, yield, and the stochasticity, linearity and asymmetry of NVM conductance response.
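For a sense of what training a three-layer perceptron with back-propagation involves in conventional software, here is a hedged NumPy sketch: random filler data stands in for the 5,000 MNIST examples, the layer sizes are illustrative rather than the paper's, and plain gradient descent stands in for the NVM+selector crossbar-friendly variant the researchers actually used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the 5,000-example MNIST subset: random "images" and random digit labels.
images = rng.random((5000, 784))
labels = rng.integers(0, 10, size=5000)
targets = np.eye(10)[labels]              # one-hot target vectors, one per image

# Illustrative weights for a 784 -> 100 -> 10 perceptron (not the paper's geometry).
w1 = rng.normal(scale=0.01, size=(784, 100))
w2 = rng.normal(scale=0.01, size=(100, 10))
learning_rate = 0.1

for epoch in range(3):
    # Forward pass: weighted sums through both layers.
    hidden = np.tanh(images @ w1)
    output = hidden @ w2

    # Back-propagation: push the output error back through the network and nudge
    # every synaptic weight up or down - the software analogue of strengthening
    # or weakening a synaptic connection.
    err_out = (output - targets) / len(images)
    err_hid = (err_out @ w2.T) * (1.0 - hidden ** 2)   # derivative of tanh
    w2 -= learning_rate * hidden.T @ err_out
    w1 -= learning_rate * images.T @ err_hid
```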

That doesn’t sound like much, does it - 82 per cent. Put it another way: that’s an error rate of 18 per cent. An alternative multi-column deep neural network approach attained near-human performance (a 0.23 per cent error rate) on the MNIST dataset in 2012.

It will be interesting to see if the IBM researchers at Almaden build a bigger and/or better PCM-based neuromorphic system to improve on their system’s performance so far. ®

*The MNIST (Modified National Institute of Standards and Technology) database is a set of handwritten digits used to train image-processing systems such as the IBM PCM neuromorphic system described here.
