Intel, Qualcomm, Google, and NVIDIA Race to Develop AI Chips and Platforms – All About Circuits

Artificial intelligence labs race to develop processors that are bigger, faster, stronger.

With major companies rolling out AI chips and smaller startups nipping at their heels, there's no denying that the future of artificial intelligence is already upon us. While each boasts slightly different features, they're all striving to provide ease of use, speed, and versatility. Manufacturers are demonstrating more adaptability than ever before, and are rapidly developing new versions to meet growing demand.

In a marketplace that promises to do nothing but grow, these four are braced for impact.

The Verge reports that Qualcomm's processors account for approximately 40% of the mobile market, so their entry into the AI game is no surprise. They're taking a slightly different approach, though: adapting existing technology that plays to Qualcomm's strengths. They've developed a Neural Processing Engine, an SDK that allows developers to optimize apps to run different AI applications on Snapdragon 600 and 800 processors. Ultimately, this integration means greater efficiency.

Facebook has already begun using the SDK to speed up augmented reality filters within its mobile app. Qualcomm's website says that it may also be used to help a device's camera recognize objects and detect objects for better shot composition, as well as make on-device post-processing beautification possible. They also promise more capabilities via the virtual voice assistant, and assure users of broad market applications: "from healthcare to security, on myriad mobile and embedded devices," they write. They also boast superior malware protection.

"It allows you to choose your core of choice relative to the power performance profile you want for your user," said Gary Brotman, Qualcomm's head of AI and machine learning.

Qualcomm's SDK works with popular AI frameworks, including TensorFlow, Caffe, and Caffe2.

Google's AI chip showed up relatively early to the AI game, disrupting what had been a fairly singular marketplace. Google has no plans to sell the processor; instead, it is distributing the chip via a new cloud service "from which anyone can build and operate software via the internet that utilizes hundreds of processors packed into Google data centers," reports Wired.

The chip, called TPU 2.0 or Cloud TPU, is a follow-up to the initial processor that brought Google's AI services to fruition, though it can be used to train neural networks and not just run them like its predecessor. Developers need to learn a different way of building neural networks, since the chip is designed for TensorFlow, but Google expects, given the chip's affordability, that users will comply. Google has mentioned that researchers who share their research with the greater public will receive access for free.

Jeff Dean, who leads the AI lab Google Brain, says that the chip was needed to train with greater efficiency. It can handle 180 trillion floating-point operations per second. Several chips connect to form a "pod" that offers 11,500 teraflops of computing power, which means that a training job that previously took a full day on 32 CPU boards takes only six hours on a portion of a pod.
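A quick back-of-the-envelope check shows how those headline numbers relate; note that the per-pod device count below is inferred from the two quoted figures, not stated in the article:

```python
# Sanity-check the Cloud TPU figures quoted above.
chip_tflops = 180.0    # per-chip peak, from the article
pod_tflops = 11_500.0  # per-pod figure, from the article

# How many 180-teraflop chips would account for a pod's throughput?
devices_per_pod = round(pod_tflops / chip_tflops)
print(devices_per_pod)  # 64

# Quoted wall-clock improvement: a full day (24 h) down to six hours.
speedup = 24 / 6
print(speedup)  # 4.0
```

So the pod figure is consistent with roughly 64 chips ganged together, and the quoted training-time improvement works out to about a 4x speedup.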

Intel offers an AI chip via the Movidius Neural Compute Stick, a USB 3.0 device with a specialized vision processing unit (VPU). It's meant to complement the Xeon and Xeon Phi, and costs only $79.

While it is optimized for vision applications, Intel says that it can handle a variety of DNN applications. They write: "Designed for product developers, researchers and makers, the Movidius Neural Compute Stick aims to reduce barriers to developing, tuning and deploying AI applications by delivering dedicated high-performance deep-neural network processing in a small form factor."

The stick is powered by the same kind of VPU you might find in smart security cameras, AI drones, and industrial equipment. It can be used with a trained Caffe framework-based feed-forward convolutional neural network, or the user may choose another pre-trained network, Intel reports. The Movidius Neural Compute Stick supports CNN profiling, prototyping, and tuning workflows; provides power and data over a single USB Type-A port; does not require cloud connectivity; and runs multiple devices on the same platform.

From Raspberry Pi to PC, the Movidius Neural Compute Stick can be used with any USB 3.0 platform.

NVIDIA was the first to get really serious about AI, but they're even more serious now. Their new chip, the Tesla V100, is a data center GPU. Reportedly, it made enough of a stir that NVIDIA's shares jumped 17.8% on the day following the announcement.

The chip stands apart in training, which typically requires multiplying matrices of data a single number at a time. Instead, the Volta GPU architecture multiplies rows and columns at once, which speeds up the AI training process.
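The difference between the two styles can be sketched in plain Python. This is a conceptual illustration only, not how a GPU is actually programmed; real Tensor Cores operate on small matrix tiles in hardware:

```python
# Two ways to compute the same matrix product C = A x B.
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

# Scalar style: one multiply-accumulate at a time, three nested loops.
C = [[0, 0], [0, 0]]
for i in range(2):
    for j in range(2):
        for k in range(2):
            C[i][j] += A[i][k] * B[k][j]

# Tensor style: each output element consumes a whole row and a whole
# column in one step; in hardware, all such dot products run in parallel.
def dot(row, col):
    return sum(a * b for a, b in zip(row, col))

cols = list(zip(*B))  # columns of B
D = [[dot(row, col) for col in cols] for row in A]

print(C)       # [[19, 22], [43, 50]]
print(C == D)  # True: same result, but far fewer sequential steps
```

The results are identical; the win comes from replacing a long chain of sequential scalar operations with wide operations that the hardware can execute simultaneously.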

With 640 Tensor Cores, Volta is five times faster than Pascal, reducing training time from 18 hours to 7.4. It also uses next-generation high-speed interconnect technology which, according to the website, "enables more advanced model and data parallel approaches for strong scaling to achieve the absolute highest application performance."

Heard of more AI chips coming down the pipe? Let us know in the comments below!
