Like a celeb going bonkers with botox, Google injects 'AI' into anything it can

Ads giant flashes TPU 2 machine-learning ASIC

Google I/O On Wednesday, Google kicked off its annual developer conference and media spectacle, Google I/O, at the Shoreline Amphitheatre, a stone's throw from its Mountain View, California, headquarters.

CEO Sundar Pichai reviewed the requisite user milestones, noting that there are now two billion active Android devices. Then he revisited his long-running oratory about the wonders of artificial intelligence.

Google, he said, is rethinking all its products and services in light of AI-oriented computing, which covers machine learning, image recognition, natural language processing, and other computational processes that give software some semblance of smarts. As a sign of Google's commitment to AI, the advertising giant made its Smart Reply, an AI-flavored email auto-responder, generally available to Gmail users, after a lengthy beta testing period.

Pichai announced the introduction of a service called Google Lens, which he described as "a set of vision-based computing capabilities that can understand what you're looking at and help you take action based on that information."

As an example, he showed the Android camera app displaying the image of a flower, labelled with its name, courtesy of image recognition technology. Google Lens can help identify objects viewed through a smartphone camera, and is coming to Google Photos and Google Assistant. It can, for example, translate foreign-language text in images, much as Word Lens and Google Translate do.

"The fact that computers can understand images and videos has profound implications for our core mission," said Pichai.

Pichai said Google's AI-first approach to computing extends to its data centers. The company has developed a second-generation tensor processing unit (TPU), which it is making available through Google Compute Engine. These cloud-available TPUs are, we're told, each capable of achieving 180 teraflops, and Google's TPU boards, which mount four of them, can be stacked together into pods capable of 11.5 petaflops of computation for machine learning workloads.
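Do the sums and those figures imply roughly 64 chips, or 16 four-chip boards, per pod. Google didn't spell out a pod's composition onstage, so treat the following back-of-the-envelope arithmetic as our inference from the quoted numbers, not an official spec:

```python
# Sanity-check Google's quoted TPU2 figures. The 64-chip pod size is
# inferred from the arithmetic, not a number given in the keynote.
tflops_per_chip = 180            # Google's claimed per-chip throughput
chips_per_board = 4              # four TPU2s per board, per Google
pod_petaflops = 11.5             # Google's claimed per-pod throughput

chips_per_pod = pod_petaflops * 1000 / tflops_per_chip   # ~64 chips
boards_per_pod = chips_per_pod / chips_per_board         # ~16 boards
print(round(chips_per_pod), round(boards_per_pod))       # 64 16
```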

"We want Google Cloud to be the best cloud for machine learning," said Pichai, who also announced the launch of Google.ai, a web destination for developers to learn more about AI software. Pichai characterized it as an effort to make AI more accessible to non-specialists.

Google's TPU 2 chips, four on a board

The TPU is an ASIC: an application-specific integrated circuit designed in-house by Google. As mentioned above, the web giant claims the chip can do 180 trillion floating-point operations a second, but did not define what those operations are: they could be 32-bit or 16-bit floating-point calculations, a mix of the two, and so on. Google's first-generation TPUs are designed to perform AI inference using 8-bit integers; it's not clear what math precision the second-generation units use.
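For a sense of why the precision matters, here's a minimal sketch of 8-bit integer inference of the kind the first-generation TPU targets. The per-tensor scaling scheme below is illustrative only, not Google's actual quantization recipe:

```python
import numpy as np

def quantize(x, num_bits=8):
    """Map float32 values onto signed 8-bit integers with a per-tensor scale."""
    qmax = 2 ** (num_bits - 1) - 1            # 127 for int8
    scale = np.abs(x).max() / qmax            # one scale for the whole tensor
    return np.round(x / scale).astype(np.int8), scale

# Float32 reference: a toy layer, y = W @ x
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8)).astype(np.float32)
x = rng.standard_normal(8).astype(np.float32)
y_fp32 = W @ x

# Int8 path: quantize operands, accumulate in int32, dequantize the result
qW, sW = quantize(W)
qx, sx = quantize(x)
acc = qW.astype(np.int32) @ qx.astype(np.int32)   # wide accumulator, as in MAC arrays
y_int8 = acc * (sW * sx)                          # back to float via the combined scale

print(np.max(np.abs(y_fp32 - y_int8)))  # small quantization error
```

The point: an 8-bit "operation" is far cheaper in silicon than a 32-bit floating-point one, so a raw ops-per-second figure means little without the precision attached.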

Nvidia's Volta GPUs can, we're told, achieve 120 teraflops, albeit when doing mixed-precision 16- and 32-bit multiply-and-accumulate operations. They drop to 15 TFLOPS when doing 32-bit floating-point calculations, according to Nvidia.
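That mixed-precision pattern, multiplying 16-bit values and accumulating the products in 32 bits, is easy to mimic in software. The numpy sketch below illustrates the numerics only, not the tensor-core hardware; the vector length and data are arbitrary:

```python
import numpy as np

# Mixed-precision multiply-accumulate: fp16 inputs, fp32 accumulator,
# versus accumulating in fp16 the whole way.
rng = np.random.default_rng(0)
a = rng.standard_normal(10_000).astype(np.float16)
b = rng.standard_normal(10_000).astype(np.float16)

acc32 = np.float32(0.0)   # 32-bit accumulator (the Volta-style path)
acc16 = np.float16(0.0)   # 16-bit accumulator, for contrast
for x, y in zip(a, b):
    acc32 += np.float32(x) * np.float32(y)
    acc16 += x * y         # rounds to fp16 at every step

ref = float(a.astype(np.float64) @ b.astype(np.float64))
print(abs(acc32 - ref), abs(acc16 - ref))  # fp32 accumulation tracks far closer
```

Keeping the accumulator wide is what stops long dot products from drowning in rounding error, which is why the headline 120-teraflop figure comes with that mixed-precision caveat.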

Google didn't offer anything in the way of benchmark comparisons, except to say that the TPU2 smokes IBM's Deep Blue, a computer that's about 20 years old – a comparison that is worrying and odd, given Deep Blue was a chess-playing machine, not a machine-learning accelerator.

As expected, Google introduced Google Assistant for iOS. The new Google Assistant SDK lets third parties incorporate Google Assistant into their products and apps. And this summer, Google Assistant will understand French, German, Portuguese, and Japanese, with more languages to follow.
