{"id":176239,"date":"2017-02-09T06:13:33","date_gmt":"2017-02-09T11:13:33","guid":{"rendered":"http:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai-systems-are-learning-to-communicate-with-humans-futurism\/"},"modified":"2017-02-09T06:13:33","modified_gmt":"2017-02-09T11:13:33","slug":"ai-systems-are-learning-to-communicate-with-humans-futurism","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai\/ai-systems-are-learning-to-communicate-with-humans-futurism\/","title":{"rendered":"AI Systems Are Learning to Communicate With Humans &#8211; Futurism"},"content":{"rendered":"<p>In the future, service robots equipped with artificial intelligence (AI) are bound to be a common sight. These bots will help people navigate crowded airports, serve meals, or even schedule meetings.<\/p>\n<p>As these AI systems become more integrated into daily life, it is vital to find an efficient way to communicate with them. It is obviously more natural for a human to speak in plain language rather than a string of code. Furthermore, as the relationship between humans and robots grows, it will be necessary to engage in conversations rather than simply give orders.<\/p>\n<p>This human-robot interaction is the focus of Manuela M. Veloso's research. Veloso, a professor at Carnegie Mellon University, has centered her research on CoBots: autonomous indoor mobile service robots that transport items, guide visitors to locations in a building, and traverse its halls and elevators. The CoBots have been navigating autonomously and successfully for several years now, and have traveled more than 1,000 km. These accomplishments have enabled the research team to pursue a new direction, focusing now on novel human-robot interaction.  
<\/p>\n<p>&#8220;If you really want these autonomous robots to be in the presence of humans and interacting with humans, and being capable of benefiting humans, they need to be able to talk with humans,&#8221; Veloso says.<\/p>\n<p>Veloso's CoBots are capable of autonomous localization and navigation in the Gates-Hillman Center using WiFi, LIDAR, and\/or a Kinect sensor (yes, the same type used for video games).<\/p>\n<p>The robots navigate by detecting walls as planes, which they match against the known maps of the building. Other objects, including people, are detected as obstacles, so navigation is safe and robust. Overall, the CoBots are good navigators and quite consistent in their motion. In fact, the team noticed the robots could wear down the carpet as they traveled the same path numerous times.<\/p>\n<p>Because the robots are autonomous, and therefore capable of making their own decisions, they are out of sight for long stretches of time while they navigate the multi-floor buildings.<\/p>\n<p>The research team began to wonder about this unaccounted-for time. How were the robots perceiving the environment and reaching their goals? How was the trip? What did they plan to do next?<\/p>\n<p>&#8220;In the future, I think that incrementally we may want to query these systems on why they made some choices or why they are making some recommendations,&#8221; explains Veloso.<\/p>\n<p>The research team is currently working on the question of why the CoBots took the route they did while autonomous. The team wanted to give the robots the ability to record their experiences and then transform the data about their routes into natural language. In this way, the bots could communicate with humans, revealing their choices and, hopefully, the rationale behind their decisions.  
<\/p>\n<p>The internals underlying the functions of any autonomous robot are completely based on numerical computations, not natural language. For example, the CoBot robots compute distances to walls and assign velocities to their motors to drive the motion to specific map coordinates.<\/p>\n<p>&#8220;Asking an autonomous robot for a non-numerical explanation is complex,&#8221; says Veloso. Furthermore, the answer can be provided at many potential levels of detail.<\/p>\n<p>&#8220;We define what we call the verbalization space in which this translation into language can happen with different levels of detail, with different levels of locality, with different levels of specificity.&#8221;<\/p>\n<p>For example, a developer asking a robot to detail its journey might expect a lengthy retelling, with details that include battery levels. But a random visitor might just want to know how long it takes to get from one office to another.<\/p>\n<p>Therefore, the research is not just about the translation from data to language, but also the acknowledgment that the robots need to explain things with more or less detail. If a human asks for more detail, the request triggers the CoBot to move to a more detailed point in the verbalization space.<\/p>\n<p>&#8220;We are trying to understand how to empower the robots to be more trustable through these explanations, as they attend to what the humans want to know,&#8221; says Veloso. The ability to generate explanations, particularly at multiple levels of detail, will be especially important in the future, as AI systems will work with more complex decisions. Humans could have a more difficult time inferring the AI's reasoning, so the bots will need to be more transparent.  
<\/p>\n<p>For example, if you go to a doctor's office and the AI there makes a recommendation about your health, you may want to know why it came to this decision, or why it recommended one medication over another.<\/p>\n<p>Currently, Veloso's research focuses on getting the robots to generate these explanations in plain language. The next step will be to have the robots incorporate natural language when humans provide them with feedback. &#8220;[The CoBot] could say, &#8216;I came from that way,&#8217; and you could say, &#8216;Well, next time, please come through the other way,&#8217;&#8221; explains Veloso.<\/p>\n<p>These sorts of corrections could be programmed into the code, but Veloso believes that trust in AI systems will benefit from our ability to dialogue with, query, and correct their autonomy. She and her team aim to contribute to a multi-robot, multi-human symbiotic relationship, in which robots and humans coordinate and cooperate as a function of their limitations and strengths.<\/p>\n<p>&#8220;What we're working on is to really empower people &#8211; a random person who meets a robot &#8211; to still be able to ask things about the robot in natural language,&#8221; she says.<\/p>\n<p>In the future, when we have more and more AI systems that are able to perceive the world, make decisions, and support human decision-making, the ability to engage in these types of conversations will be essential.<\/p>\n<p>Read the original here: <\/p>\n<p><a target=\"_blank\" rel=\"nofollow\" href=\"https:\/\/futurism.com\/70545-2\/\" title=\"AI Systems Are Learning to Communicate With Humans - Futurism\">AI Systems Are Learning to Communicate With Humans - Futurism<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>In the future, service robots equipped with artificial intelligence (AI) are bound to be a common sight. 
<a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai\/ai-systems-are-learning-to-communicate-with-humans-futurism\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":9,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187743],"tags":[],"class_list":["post-176239","post","type-post","status-publish","format-standard","hentry","category-ai"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/176239"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/9"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=176239"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/176239\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=176239"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=176239"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=176239"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}