Why Nvidia thinks it can power the AI revolution

Mar. 31, 2014 - 12:07 PM PDT

Smarter robots and devices are coming to a home near you, and chipmaker Nvidia wants to help make it happen. It won't develop the algorithms that dictate their behavior or build the sensors that let them take in our world, but its graphics processing units, or GPUs, might be a great way to handle the heavy computing necessary to make many forms of artificial intelligence a reality.

Most applications don't use GPUs exclusively, but rather offload the most computationally intensive tasks onto them from standard microprocessors. Called GPU acceleration, the practice is very common in supercomputing workloads, and it's becoming ubiquitous in the area of computer vision and object recognition, too. In 2013, more than 80 percent of the teams participating in the ImageNet image-recognition competition utilized GPUs, said Sumit Gupta, general manager of the Advanced Computing Group at Nvidia.
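For readers unfamiliar with the pattern, here is a minimal sketch of what that offload looks like in practice; it is an illustration written in CUDA (Nvidia's language for programming its GPUs), not code from Nvidia or any of the teams mentioned here. The CPU prepares the data and drives the program as usual, and a small kernel runs the data-parallel inner loop across thousands of GPU threads:

    #include <cuda_runtime.h>
    #include <cstdio>
    #include <cstdlib>

    // Each GPU thread handles one element: y[i] = a * x[i] + y[i].
    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;                 // one million elements
        const size_t bytes = n * sizeof(float);

        // Host (CPU) side: prepare the input data as usual.
        float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

        // Offload step 1: copy the arrays into GPU memory.
        float *dx, *dy;
        cudaMalloc(&dx, bytes); cudaMalloc(&dy, bytes);
        cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

        // Offload step 2: run the heavy loop across thousands of GPU threads.
        saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);

        // Offload step 3: copy the result back; the CPU keeps the rest of the program.
        cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
        printf("y[0] = %f\n", hy[0]);          // expect 5.0

        cudaFree(dx); cudaFree(dy); free(hx); free(hy);
        return 0;
    }

The same division of labor applies to the image-recognition work described above, just with far heavier math inside the kernels.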

In March 2013, Google acquired DNNresearch, a deep learning startup co-founded by University of Toronto professor Geoff Hinton. Part of the rationale behind that acquisition was the performance of Hinton's team in the 2012 ImageNet competition, where the group's GPU-powered deep learning models easily bested previous approaches.

"It turns out that the deep neural network problem is just a slam dunk for the GPU," Gupta said. That's because deep learning algorithms often require a lot of computing power to process their data (e.g., images or text) and extract the defining features of the things included in that data. Especially during the training phase, when the models and algorithms are being tuned for accuracy, they need to process a lot of data.
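Much of that training-time arithmetic reduces to large dense matrix multiplications, which is precisely the kind of regular, parallel work GPUs are built for. As a hedged illustration (the function name dense_forward and the matrix shapes are invented for this sketch, not taken from any system named in this story), the forward pass of a single fully connected layer over a mini-batch of examples can be expressed as one call to cuBLAS, Nvidia's GPU linear-algebra library:

    #include <cublas_v2.h>

    // One dense layer's forward pass for a mini-batch, written as a single
    // matrix multiply: Y (outputs x batch) = W (outputs x inputs) * X (inputs x batch).
    // d_W, d_X and d_Y are assumed to already live in GPU memory, column-major.
    void dense_forward(cublasHandle_t handle,
                       const float *d_W, const float *d_X, float *d_Y,
                       int outputs, int inputs, int batch) {
        const float alpha = 1.0f, beta = 0.0f;
        cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                    outputs, batch, inputs,    // result is outputs x batch
                    &alpha,
                    d_W, outputs,              // leading dimension of W
                    d_X, inputs,               // leading dimension of X
                    &beta,
                    d_Y, outputs);             // leading dimension of Y
    }

During training, this multiply (along with the matching backward passes) is repeated over the full data set, often millions of times, which is why raw GPU throughput matters so much.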

Numerous customers are using Nvidia's Tesla GPUs for image and speech recognition, including Adobe and Chinese search giant Baidu. GPUs are being used for other aspects of machine learning as well, Gupta noted: Netflix uses them (in the Amazon Web Services cloud) to power its recommendation engine, Russian search company Yandex uses them to power its search engine, and IBM uses them to run clustering algorithms in Hadoop.

Nvidia might be so excited about machine learning in part because it has been pushing GPUs as a general-purpose computing platform, not just graphics and gaming chips, for years, with mixed results. The company has tried to simplify programming its processors via CUDA, the parallel-programming language it has developed, but Gupta acknowledged there's still an overall lack of knowledge about how to use GPUs effectively. That's why so much real innovation still remains with large users that have the parallel-programming skills necessary to take advantage of 2,500 or more cores at a time (and even more in multi-GPU systems).

However, Nvidia is looking beyond servers and into robotics to fuel some of its machine learning ambitions over the next decade. Last week, the company announced its Jetson TK1 development kit, which Gupta called a supercomputing version of the Raspberry Pi. At $192, the kit is programmable using CUDA and includes all the ports one might expect to see, as well as a Tegra K1 system-on-a-chip (the latest version of Nvidia's mobile processor) that pairs a 192-core Kepler GPU with an ARM Cortex-A15 CPU and delivers roughly 300 gigaflops of performance.
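As a rough, back-of-the-envelope check on that last figure (the per-core arithmetic here is a general property of the Kepler architecture, not a detail from Nvidia's announcement): each Kepler CUDA core can complete one fused multiply-add, i.e. two floating-point operations, per clock cycle, so reaching roughly 300 gigaflops with 192 cores implies a GPU clock somewhere around 300 ÷ (192 × 2) ≈ 0.78 GHz, a plausible speed for a mobile system-on-a-chip.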
