When the chips are down, thank goodness for software engineers: AI algorithms 'outpace Moore's law'

ML eggheads, devs get more bang for their buck, say OpenAI duo

Machine-learning algorithms are improving in performance at a rate faster than that of the underlying computer chips, we're told.

AI software techniques have become so efficient, in fact, that engineers can now train a neural network on ImageNet – a top dataset for image-recognition systems – to about 79.1 per cent accuracy using 44 times less compute than was needed back in 2012. That's according to a study [PDF] emitted this week by OpenAI, which estimated that, at this rate of improvement, algorithmic efficiency has doubled every 16 months over the past seven years.
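
For those who want to check the arithmetic, that doubling time falls out of the headline numbers: a 44-fold cut in compute over roughly seven years works out to a doubling of efficiency about every 16 months. Here's a minimal sketch in Python using only the figures quoted above (the function name is ours, not OpenAI's):

import math

def doubling_time_months(total_gain, period_years):
    # Number of doublings needed to reach the total gain, spread over the period
    doublings = math.log2(total_gain)      # a 44x gain is about 5.46 doublings
    return period_years * 12 / doublings

print(doubling_time_months(44, 7))         # ~15.4, ie roughly every 16 months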

"Notably, this outpaces the original Moore’s law rate of improvement in hardware efficiency (11x over this period)," the paper, by by Danny Hernandez and Tom Brown, stated.

That law stems from an observation made by Intel co-founder Gordon Moore in the 1960s that the number of transistors on a chip doubles roughly every two years, leading people to expect processor performance to double over the same period. It's also a law that's been dying since at least 1999 and has been considered dead since 2018.
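
The paper's 11x hardware figure is consistent with that two-year doubling sustained over the same seven-year window – a quick back-of-the-envelope check of our own, not a calculation taken from the paper:

# A two-year doubling compounded over the same seven-year window
print(2 ** (7 / 2))   # ~11.3x, in line with the 11x hardware improvement quoted above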

(We like to joke that Moore's second law is that no journalist can write about Intel without mentioning the first law.)

An accuracy level of 79.1 per cent may seem low at first, yet it was chosen because that was AlexNet's level of performance when it won the ImageNet challenge in 2012. AlexNet is celebrated as the model that rekindled computer and data scientists' obsession with neural networks.

The improvement isn’t just in computer-vision models: it can also be seen in other types of neural network architecture used for language translation and reinforcement learning, OpenAI said.

“Increases in algorithmic efficiency allow researchers to do more experiments of interest in a given amount of time and money,” the OpenAI duo wrote. "In addition to being a measure of overall progress, algorithmic efficiency gains speed up future AI research in a way that’s somewhat analogous to having more compute."

Although advances in algorithm performance are good news for the machine-learning community, it's worth pointing out that models are getting larger and more complex, and require significant resources and money to train. A recent paper by AI21, an Israel-based research hub focused on natural-language processing, revealed that it costs anywhere from $2,500 to $50,000 to train a language model with 110 million parameters. When that number rises to 1.5 billion parameters – a size equivalent to OpenAI’s GPT-2 – the cost jumps to anywhere between $80,000 and a whopping $1.6m. ®
