{"id":237415,"date":"2017-08-22T23:37:31","date_gmt":"2017-08-23T03:37:31","guid":{"rendered":"http:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/uncategorized\/intel-qualcomm-google-and-nvidia-race-to-develop-ai-chips-and-platforms-all-about-circuits.php"},"modified":"2022-09-11T10:22:10","modified_gmt":"2022-09-11T14:22:10","slug":"intel-qualcomm-google-and-nvidia-race-to-develop-ai-chips-and-platforms-all-about-circuits","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-intelligence\/intel-qualcomm-google-and-nvidia-race-to-develop-ai-chips-and-platforms-all-about-circuits.php","title":{"rendered":"Intel, Qualcomm, Google, and NVIDIA Race to Develop AI Chips and Platforms &#8211; All About Circuits"},"content":{"rendered":"<p><p>    Artificial intelligence labs race to develop processors that    are bigger, faster, stronger.  <\/p>\n<p>    With major companies rolling out AI chips and smaller startups    nipping at their heels, theres no denying that the future of    artificial intelligence is indeed already upon us. While each    boasts slightly different features, theyre all striving to    provide ease of use, speed, and versatility. Manufacturers are    demonstrating more adaptability than ever before, and are    rapidly developing new versions to meet a growing demand.  <\/p>\n<p>    In a marketplace that promises to do nothing but grow, these    four are braced for impact.  <\/p>\n<\/p>\n<p>    The Verge reports that Qualcomms    processors account for approximately 40% of the mobile market,    so their entry into the AI game is no surprise. Theyre taking    a slightly different approach thoughadapt existing technology    that utilizes Qualcomms strengths. Theyve developed a Neural    Processing Engine, which is an SDK that allows develops to    optimize apps to run different AI applications on Snapdragon    600 and 800 processors. 
Ultimately, this integration means greater efficiency.<\/p>\n<p>Facebook has already begun using the SDK to speed up augmented reality filters within its mobile app. Qualcomm's website says that it may also be used to help a device's camera recognize objects and detect objects for better shot composition, as well as make on-device post-processing beautification possible. They also promise more capabilities via the virtual voice assistant, and assure users of broad market applications: \"from healthcare to security, on myriad mobile and embedded devices,\" they write. They also boast superior malware protection.<\/p>\n<p>\"It allows you to choose your core of choice relative to the power performance profile you want for your user,\" said Gary Brotman, Qualcomm's head of AI and machine learning.<\/p>\n<p>Qualcomm's SDK works with popular AI frameworks, including TensorFlow, Caffe, and Caffe2.<\/p>\n<p>Google's AI chip showed up relatively early to the AI game, disrupting what had been a fairly singular marketplace. And Google has no plans to sell the processor, instead distributing it via a new cloud service through which anyone can build and operate software over the internet that taps into hundreds of processors packed into Google data centers, reports Wired.<\/p>\n<p>The chip, called TPU 2.0 or Cloud TPU, is a follow-up to the initial processor that brought Google's AI services to fruition, though it can be used to train neural networks and not just run them like its predecessor. Developers need to learn a different way of building neural networks, since the chip is designed for TensorFlow, but Google expects, given the chip's affordability, that users will make the switch. Google has also said that researchers who share their findings with the greater public will receive access for free.  
<\/p>\n<p>Jeff Dean, who leads the Google Brain AI lab, says the chip was needed to train models with greater efficiency. It can handle 180 trillion floating-point operations per second. Several chips connect to form a \"pod\" that offers 11,500 teraflops of computing power; a training run that previously took a full day on 32 of the best GPU boards now takes only about six hours on a portion of a pod.<\/p>\n<p>Intel offers an AI chip via the Movidius Neural Compute Stick, a USB 3.0 device with a specialized vision processing unit (VPU). It's meant to complement the Xeon and Xeon Phi, and costs only $79.<\/p>\n<p>While it is optimized for vision applications, Intel says that it can handle a variety of DNN applications. They write, \"Designed for product developers, researchers and makers, the Movidius Neural Compute Stick aims to reduce barriers to developing, tuning and deploying AI applications by delivering dedicated high-performance deep-neural network processing in a small form factor.\"<\/p>\n<p>The stick is powered by a VPU like those found in smart security cameras, AI drones, and industrial equipment. It can be used with a trained Caffe-based feed-forward convolutional neural network, or the user may choose another pre-trained network, Intel reports. The Movidius Neural Compute Stick supports a CNN profiling, prototyping, and tuning workflow; provides power and data over a single USB Type-A port; does not require cloud connectivity; and can run multiple devices on the same platform.<\/p>\n<p>From Raspberry Pi to PC, the Movidius Neural Compute Stick can be used with any USB 3.0 platform.<\/p>\n<p>NVIDIA was the first to get really serious about AI, but they're even more serious now. Their new chip, the Tesla V100, is a data center GPU. 
Reportedly, it made enough of a stir that NVIDIA's shares jumped 17.8% on the day following the announcement.<\/p>\n<p>The chip stands apart in training, which typically requires multiplying matrices of data a single number at a time. Instead, the Volta GPU architecture multiplies entire rows and columns at once, which speeds up the AI training process.<\/p>\n<p>With 640 Tensor Cores, Volta is five times faster than Pascal, reducing training time from 18 hours to 7.4 hours, and it uses next-generation high-speed interconnect technology which, according to the website, \"enables more advanced model and data parallel approaches for strong scaling to achieve the absolute highest application performance.\"<\/p>\n<p>Heard of more AI chips coming down the pipe? Let us know in the comments below!<\/p>\n<p><!-- Auto Generated --><\/p>\n<p>Read the original:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/www.allaboutcircuits.com\/news\/intel-qualcomm-google-and-nvidia-race-to-develop-ai-chips-and-platforms\/\" title=\"Intel, Qualcomm, Google, and NVIDIA Race to Develop AI Chips and Platforms - All About Circuits\">Intel, Qualcomm, Google, and NVIDIA Race to Develop AI Chips and Platforms - All About Circuits<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Artificial intelligence labs race to develop processors that are bigger, faster, stronger. 
<a href=\"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-intelligence\/intel-qualcomm-google-and-nvidia-race-to-develop-ai-chips-and-platforms-all-about-circuits.php\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"limit_modified_date":"","last_modified_date":"","_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"categories":[13],"tags":[],"class_list":["post-237415","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence"],"modified_by":"Danzig","_links":{"self":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/237415"}],"collection":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/comments?post=237415"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/237415\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/media?parent=237415"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/categories?post=237415"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/tags?post=237415"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}