Microsoft Releases a New Deep Learning Acceleration Platform Called Brainwave

In Brief
Microsoft unveiled new cloud-based programmable hardware capable of handling deep learning operations faster and in real time. Dubbed Brainwave, the system runs on a massive hardware network of Intel's latest Stratix 10 chips.

Accelerating Machine Learning

At the 2017 Hot Chips symposium today, Microsoft unveiled new hardware capable of accelerating artificial intelligence (AI) programs. Called Brainwave, the system is meant to improve how machine learning models run by mapping them onto programmable silicon.

"We designed the system for real-time AI, which means the system processes requests as fast as it receives them, with ultra-low latency," Microsoft explained in a press release. "Real-time AI is becoming increasingly important as cloud infrastructures process live data streams, whether they be search queries, videos, sensor streams, or interactions with users."

Brainwave's performance stands out among AI-dedicated hardware. Microsoft demonstrated a Gated Recurrent Unit model running at 39.5 teraflops on Intel's new Stratix 10 field-programmable gate array (FPGA) chip. Plus, it doesn't use so-called batching operations, which means it provides real-time insights for machine learning systems by handling requests as they come in.
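To see why skipping batching matters for latency, here is a minimal sketch in Python of the two serving styles. The `toy_model`, queue, and timing parameters are illustrative stand-ins, not anything from Microsoft's system.

```python
import time
from collections import deque

def batched_inference(model, queue, batch_size=32, timeout_s=0.05):
    """Batched serving: wait until enough requests (or a timeout) before
    running the model. Throughput is high, but early requests sit in the
    queue, which adds latency."""
    batch, start = [], time.monotonic()
    while len(batch) < batch_size and (time.monotonic() - start) < timeout_s:
        if queue:
            batch.append(queue.popleft())
    return model(batch) if batch else []

def realtime_inference(model, request):
    """Real-time serving: each request is dispatched immediately, so the
    latency a user sees is just the model's own execution time."""
    return model([request])[0]

# Toy model that squares its inputs, standing in for a real neural network.
toy_model = lambda xs: [x * x for x in xs]

requests = deque(range(8))
print(batched_inference(toy_model, requests))  # waits to accumulate a batch
print(realtime_inference(toy_model, 3))        # answers one request right away
```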

"We call it real-time AI because the idea here is that you send in a request, you want the answer back," Microsoft Research engineer Doug Burger said at the symposium, VentureBeat reports. "If it's a video stream, if it's a conversation, if it's looking for intruders, anomaly detection, all the things where you care about interaction and quick results, you want those in real time."

Brainwave allows cloud-based deep learning models to run seamlessly across the massive FPGA infrastructure Microsoft has installed in its data centers over the past few years. According to Burger, this means AI features in applications get faster support from Microsoft services. By running on a pool of FPGAs, machine learning models that might be too big for a single FPGA chip can be served by multiple hardware boards at once.
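The idea of spreading one model across several boards can be pictured with a short sketch. This is not Microsoft's scheduler; the `Board` class and toy layers below are hypothetical stand-ins for FPGAs holding slices of a model.

```python
# A minimal sketch of layer-wise partitioning: a model too large for one
# accelerator is split across several boards, and activations flow from
# board to board like a hardware pipeline.

class Board:
    def __init__(self, name, layers):
        self.name = name
        self.layers = layers          # the slice of the model this board holds

    def run(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

# Toy "layers": each is just a function on a number.
model_layers = [lambda x, k=k: x * 2 + k for k in range(6)]

# Partition the six layers across three boards, two layers each.
boards = [Board(f"fpga{i}", model_layers[2 * i: 2 * i + 2]) for i in range(3)]

def forward(x):
    for board in boards:
        x = board.run(x)
    return x

print(forward(1.0))
```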

In addition to offering more speed and flexibility than CPUs or GPUs, Brainwave incorporates a software stack designed to support a host of popular deep learning frameworks. As Burger said, this makes it possible for Microsoft's programmable hardware to operate on par with chips dedicated to machine learning, such as Google's Tensor Processing Unit. In fact, he believes performance could rise from 39.5 teraflops to 90 teraflops in the future through further tuning of the Stratix 10 chip.
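One way to think about framework support is a compile step that lowers models from different frameworks into one shared graph format a hardware backend can consume. The sketch below is not Brainwave's actual toolchain; the `Op` class, `lower_to_graph`, and `compile_for_accelerator` are hypothetical names used only to illustrate the idea.

```python
# A minimal sketch of a framework-neutral compile path: framework-specific
# layer descriptions are lowered to a generic op graph, which a hardware
# backend could then turn into an accelerator configuration.

from dataclasses import dataclass

@dataclass
class Op:
    kind: str        # e.g. "matmul", "relu"
    inputs: list

def lower_to_graph(layer_specs):
    """Turn a framework-specific layer list into a generic op graph."""
    graph, prev = [], "input"
    for spec in layer_specs:
        op = Op(kind=spec["type"], inputs=[prev])
        graph.append(op)
        prev = op
    return graph

def compile_for_accelerator(graph):
    """Stand-in for generating a hardware configuration from the graph."""
    return [f"{op.kind}({len(op.inputs)} input)" for op in graph]

# 'layer_specs' plays the role of an export from some deep learning framework.
layer_specs = [{"type": "matmul"}, {"type": "relu"}, {"type": "matmul"}]
print(compile_for_accelerator(lower_to_graph(layer_specs)))
```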

As machine learning models and algorithms see wider use across applications, hooking those apps into Brainwave would cut the time users wait for a response. Microsoft hasn't yet made Brainwave available to customers, and no timeline has been set, but the company is working to make it accessible to third-party customers through its Azure cloud platform.
