Microsoft’s Project Brainwave, a hardware-accelerated deep-learning platform, is now available on the Azure cloud
Approximate Reading Time: 2 minutes
Microsoft’s Build 2018 was focused almost entirely on Artificial Intelligence. It was clear from the get-go that this conference was unlike the previous Windows-focused Build events.
Microsoft launched Project Brainwave, its platform for running deep-learning models in real time in the Azure cloud and on edge devices. It is now available in preview on Azure.
Project Brainwave makes use of field-programmable gate arrays (FPGAs) rather than custom chips built for particular tasks. According to Microsoft, FPGAs give deep-learning models more flexibility than custom chips, and the performance Microsoft is able to extract from the Intel Stratix FPGA (on which Project Brainwave is based) is on par with that of custom chips such as Google’s Tensor Processing Units (TPUs). Unlike custom silicon, Microsoft claims, FPGAs can also be easily reprogrammed to accelerate AI workloads as the underlying algorithms change.
The other promise of Project Brainwave is that customers will be able to run AI-related jobs on Microsoft hardware (edge devices) at their own sites, rather than having to route everything through Microsoft’s data centres.
According to a report in TechCrunch, Microsoft had showcased an early development model of Brainwave back in August. That prototype consisted of three layers: a high-performance distributed architecture, a hardware-accelerated deep-neural-network engine running in the FPGAs, and a compiler and runtime for deploying trained models.
According to Microsoft, another advantage of using FPGAs is reduced latency: the architecture allows requests to be routed directly to the FPGAs rather than passing through the CPUs of traditional servers. Microsoft claims that Brainwave offers five times lower latency than Google’s TPUs.
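The latency argument can be sketched with a toy model: if the CPU is removed from the data path, its per-request copy and dispatch overhead disappears from the total. The stage names and millisecond costs below are hypothetical placeholders for illustration, not measured Brainwave or TPU numbers.

```python
# Toy latency model. All stage costs are hypothetical illustrative values,
# not Microsoft's published figures.

def end_to_end_latency_ms(stages):
    """End-to-end latency is the sum of each pipeline stage's cost (ms)."""
    return sum(stages.values())

# Traditional path: NIC -> host CPU (copy + dispatch) -> accelerator -> CPU -> NIC
via_cpu = {
    "network": 0.05,
    "cpu_copy_and_dispatch": 0.90,  # assumed per-hop CPU overhead
    "accelerator_compute": 1.00,
    "cpu_return_path": 0.90,
}

# Brainwave-style path: NIC -> FPGA directly, no CPU in the data plane
direct_fpga = {
    "network": 0.05,
    "accelerator_compute": 1.00,
}

print(f"via CPU:     {end_to_end_latency_ms(via_cpu):.2f} ms")
print(f"direct FPGA: {end_to_end_latency_ms(direct_fpga):.2f} ms")
```

With these made-up numbers the direct path is well under half the latency of the CPU-mediated one; the point is structural, not the specific figures.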