{"id":193260,"date":"2017-05-17T01:44:15","date_gmt":"2017-05-17T05:44:15","guid":{"rendered":"http:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/as-technology-lends-you-its-ear-these-technologies-will-determine-what-it-hears-techcrunch\/"},"modified":"2017-05-17T01:44:15","modified_gmt":"2017-05-17T05:44:15","slug":"as-technology-lends-you-its-ear-these-technologies-will-determine-what-it-hears-techcrunch","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/technology\/as-technology-lends-you-its-ear-these-technologies-will-determine-what-it-hears-techcrunch\/","title":{"rendered":"As technology lends you its ear, these technologies will determine what it hears &#8211; TechCrunch"},"content":{"rendered":"<p>In 1999, while traveling through Eastern Europe, I was robbed in a relatively remote area of the Czech Republic.<\/p>\n<p>Someone called the police, but we quickly realized we couldn't communicate: They didn't speak English and I could offer no Czech. Even the local high school English teacher, who offered her assistance, didn't speak English well enough for me to communicate effectively with the police.<\/p>\n<p>At a time well before smartphones, I didn't realize that technologists were already hard at work on innovations that could one day play a vital role in situations like the one I faced.<\/p>\n<p>In 1994, several influential computer scientists at Microsoft, led by Xuedong Huang, began laying the groundwork to tackle our global language barrier through technology. Microsoft was assembling a new voice recognition team, one of the first in the world.<\/p>\n<p>Image: Getty Images\/dane_mark\/DigitalVision<\/p>\n<p>In the early days of the technology, voice recognition was imperfect. We measure the accuracy of voice recognition with something called the Word Error Rate (WER). 
The WER measures what fraction of words are interpreted incorrectly. If I say five words and four of them are understood correctly while one is not, we have a WER of 20 percent. Back in the 1990s, the WER was nearly 100 percent: almost every word spoken was misheard by these computer systems.<\/p>\n<p>But computer scientists such as Huang and his team kept working. Slowly but surely, the technology improved. By 2013, the WER had dropped to roughly 25 percent, an improvement to be sure, but still not good enough to be truly helpful.<\/p>\n<p>While a WER of 25 percent might seem adequate, imagine the frustration a user might feel in a home automation environment when they say \"turn on the BEDroom lights\" and the LIVINGroom lights go on. Or imagine dictating a document and having to correct a quarter of your work after the fact. The long-promised productivity gains simply hadn't materialized after decades of effort.<\/p>\n<p>And then the magic of innovation and technology began to kick in.<\/p>\n<p>Over the last three years, the WER has dropped from roughly 25 percent to around five percent.<\/p>\n<p>The team at Microsoft recently declared they had achieved human parity with the technology: it was now as good at interpreting human speech as humans are. We have seen more progress in the last 30 months than we saw in the first 30 years.<\/p>\n<p>Image: Mina De La O\/Getty Images<\/p>\n<p>Many of us have experienced the seeming magic that voice recognition has become. In using voice recognition platforms in recent years, you've also likely watched as transcribed words are updated and corrected after additional words are spoken.<\/p>\n<p>Speech recognition is going beyond individual word recognition to account for context and grammar as well. 
Network effects are kicking in, and the application of big data is enabling the technology to advance at a pace unseen in its history.<\/p>\n<p>Today, we talk to computers on an increasingly regular basis. While packing for a trip to Singapore, I went back and forth with Google Home's voice-activated digital assistant on everything from the weather and history to the religious breakdown of the city-state.<\/p>\n<p>Similarly, Amazon's Alexa will order you an Uber or a pizza, read off your Fitbit stats or update you on the balance in your bank account. Alexa can help around the house, too, if you ask, dishing up the daily news while you're in the kitchen or reading you an audiobook before bed. And paired with the right hardware, she'll lock your front door, turn off your lights, or adjust the temperature in your home.<\/p>\n<p>To be sure, the technology has a long way to go before it is omnipresent, but it is beginning to be deployed in new and interesting ways. At CES 2017, voice recognition was one of the clear winners, permeating every corner of the show floor. From Ford and Volkswagen to Martian Watches and LG refrigerators, voice integration transcended every category. Voice is becoming the common OS stitching together diverse systems across a myriad of user applications.<\/p>\n<p>As we have made these astronomical improvements in voice accuracy, I foresee two important directions voice will go from here.<\/p>\n<p>First, digitization and connectivity will beget personalization. In the future, it won't be enough that we can talk to the connected objects around us. Each member of a household or office can and will have a unique relationship with voice-enabled objects. Google has started to push Google Home in this direction. 
<\/p>\n<p>Second, remember that voice is the user-interface layer to a much richer computing environment. Siri, Cortana, Alexa, Google Home and others are bringing individuals face to face with an AI-infused computing experience. For many daily tasks where we might use voice today, doing them on our phones or other devices might currently be more efficient because we can see extra information. But the role of AI in these voice systems will begin to transform the user experience.<\/p>\n<p>Context is the next dimension for voice-optimized platforms. For example, when I can open my refrigerator, read off a series of ingredients I have on hand, and get back recipe suggestions, I'll have accomplished something with my voice that would be cumbersome in other computing environments. Context is king, and voice will make it more apparent and readily accessible than ever.<\/p>\n<p>I sometimes think back to that incident in Eastern Europe, when even the local English teacher couldn't communicate with me. Today, I could speak to her mobile phone and get a relevant reply in return. The technology now available to us would have changed my experience. And likewise, this technology will forever change how we interact with computing, and with each other.<\/p>\n<p>See the original post:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow\" href=\"https:\/\/techcrunch.com\/2017\/05\/16\/1487505\/\" title=\"As technology lends you its ear, these technologies will determine what it hears - TechCrunch\">As technology lends you its ear, these technologies will determine what it hears - TechCrunch<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>In 1999, while traveling through Eastern Europe, I was robbed in a relatively remote area of the Czech Republic. 
Someone called the police, but we quickly realized we couldn't communicate: They didn't speak English and I could offer no Czech. Even the local high school English teacher, who offered her assistance, didn't speak English well enough for me to communicate effectively with the police. <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/technology\/as-technology-lends-you-its-ear-these-technologies-will-determine-what-it-hears-techcrunch\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":4,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187726],"tags":[],"class_list":["post-193260","post","type-post","status-publish","format-standard","hentry","category-technology"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/193260"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=193260"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/193260\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=193260"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=193260"},{"taxonomy":"post_tag","em
beddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=193260"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}