{"id":207700,"date":"2017-07-25T12:16:33","date_gmt":"2017-07-25T16:16:33","guid":{"rendered":"http:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/qualcomm-opens-up-its-ai-optimization-software-says-dedicated-mobile-chips-are-coming-the-verge\/"},"modified":"2017-07-25T12:16:33","modified_gmt":"2017-07-25T16:16:33","slug":"qualcomm-opens-up-its-ai-optimization-software-says-dedicated-mobile-chips-are-coming-the-verge","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai\/qualcomm-opens-up-its-ai-optimization-software-says-dedicated-mobile-chips-are-coming-the-verge\/","title":{"rendered":"Qualcomm opens up its AI optimization software, says dedicated mobile chips are coming &#8211; The Verge"},"content":{"rendered":"<p>In the race to get AI working faster on your smartphone, companies are trying all sorts of things. Some, like Microsoft and ARM, are designing new chips that are better suited to running neural networks. Others, like Facebook and Google, are working to reduce the computational demands of AI itself. But for chipmaker Qualcomm, whose processors account for 40 percent of the mobile market, the current plan is simpler: adapt the silicon that&#8217;s already in place.<\/p>\n<p>To this end, the company has developed what it calls its Neural Processing Engine. This is a software development kit (or SDK) that helps developers optimize their apps to run AI applications on Qualcomm&#8217;s Snapdragon 600 and 800 series processors. That means that if you&#8217;re building an app that uses AI for, say, image recognition, you can integrate Qualcomm&#8217;s SDK and it will run faster on phones with compatible processors.<\/p>\n<p>Qualcomm first announced the Neural Processing Engine a year ago as part of its Zeroth platform (which has since been killed off as a brand).
Since last September it&#8217;s been working with a few partners on developing the SDK, and today it&#8217;s opening it up to be used by all.<\/p>\n<p>&#8220;Any developer, big or small, that has already invested in deep learning, meaning they have access to data and trained AI models, they are the target audience,&#8221; Gary Brotman, Qualcomm&#8217;s head of AI and machine learning, told The Verge. &#8220;It&#8217;s simple to use. We abstract everything under the hood so you don&#8217;t have to get your hands dirty.&#8221;<\/p>\n<p>The company says one of the first companies to integrate its SDK is Facebook, which is currently using it to speed up the augmented reality filters in its mobile app. By using the Neural Processing Engine, says Qualcomm, Facebook&#8217;s filters load five times faster than a generic CPU implementation.<\/p>\n<p>How exactly developers will use the SDK will vary from job to job, but the basic task of the software is to allocate tasks to different parts of Qualcomm&#8217;s Snapdragon chipset. Depending on whether developers want to optimize for battery life or processing speed, for example, they can draw on compute resources from different parts of the chip: the CPU, GPU, or DSP. &#8220;It allows you to choose your core of choice relative to the power performance profile you want for your user,&#8221; explains Brotman.<\/p>\n<p>The SDK works with some of the most popular frameworks for developing AI systems, including Caffe, Caffe2, and Google&#8217;s TensorFlow. Qualcomm says it&#8217;s designed not just to optimize AI on mobile devices, but also in cars, drones, VR headsets, and smart home products.<\/p>\n<p>&#8220;What we&#8217;re seeing is a tidal wave of AI workloads.&#8221;<\/p>\n<p>But deploying frameworks that adapt existing silicon is only the beginning. &#8220;What we&#8217;re seeing is a tidal wave of AI workloads that are creating more demand for compute,&#8221; says Brotman.
To meet this demand, companies are working on entirely new architectural designs for AI-optimized chips. Microsoft, for example, is building a custom machine learning processor for the HoloLens 2, while British chipmaker Graphcore recently raised $30 million to build its own Intelligence Processing Units for mobile devices.<\/p>\n<p>For Qualcomm, this switch is further down the line, but it&#8217;s definitely coming. &#8220;When we&#8217;re baking something into silicon, that&#8217;s a very deliberate bet for us, and it doesn&#8217;t come easy,&#8221; says Brotman. &#8220;Compute&#8217;s compute, and if we can optimize now what we&#8217;ve already got in our portfolio, then we&#8217;re doing our job well. Longer term, though, is there going to be a need for dedicated neural computing? I think that&#8217;s going to be the case, and the question is just, when do we place that bet.&#8221;<\/p>\n<p>Here is the original post:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow\" href=\"https:\/\/www.theverge.com\/2017\/7\/25\/16024540\/ai-mobile-chips-qualcomm-neural-processing-engine-sdk\" title=\"Qualcomm opens up its AI optimization software, says dedicated mobile chips are coming - The Verge\">Qualcomm opens up its AI optimization software, says dedicated mobile chips are coming - The Verge<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>In the race to get AI working faster on your smartphone, companies are trying all sorts of things. 
<a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai\/qualcomm-opens-up-its-ai-optimization-software-says-dedicated-mobile-chips-are-coming-the-verge\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":6,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187743],"tags":[],"class_list":["post-207700","post","type-post","status-publish","format-standard","hentry","category-ai"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/207700"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=207700"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/207700\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=207700"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=207700"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=207700"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}