{"id":208127,"date":"2017-07-26T16:32:28","date_gmt":"2017-07-26T20:32:28","guid":{"rendered":"http:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/how-robots-are-getting-better-at-making-sense-of-the-world-singularity-hub\/"},"modified":"2017-07-26T16:32:28","modified_gmt":"2017-07-26T20:32:28","slug":"how-robots-are-getting-better-at-making-sense-of-the-world-singularity-hub","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/singularity\/how-robots-are-getting-better-at-making-sense-of-the-world-singularity-hub\/","title":{"rendered":"How Robots Are Getting Better at Making Sense of the World &#8211; Singularity Hub"},"content":{"rendered":"<p><p>    The multiverse of science fiction is populated by robots that    are indistinguishable from humans. They are usually smarter,    faster, and stronger than us. They seem capable of doing any    job imaginable, from piloting a starship and battling alien    invaders to taking out the trash and cooking a gourmet meal.  <\/p>\n<p>    The reality, of course, is far from fantasy. Aside from    industrial settings, robots have yet to meet The Jetsons. The    robots the public are exposed to seem little more than    over-sized plastic toys, pre-programmed to perform a set of    tasks without the ability to interact meaningfully with their    environment or their creators.  <\/p>\n<p>    To paraphrase PayPal co-founder and tech entrepreneur Peter    Thiel, we wanted cool robots, instead we got 140 characters and        Flippy the burger bot. But scientists are making progress    to empower robots with the ability to see and respond to their    surroundings just like humans.  <\/p>\n<p>    Some of the latest developments in that arena were presented    this month at the annual Robotics: Science and    Systems Conference in Cambridge, Massachusetts. The papers    drilled down into topics that ranged from how to make robots    more conversational and help them understand language    ambiguities to helping them see and navigate through complex    spaces.  <\/p>\n<p>    Ben Burchfiel, a graduate student at Duke University, and his    thesis advisor George Konidaris, an assistant professor of    computer science at Brown University, developed an algorithm to    enable machines to see the world more like humans.  <\/p>\n<p>    In the paper,    Burchfiel and Konidaris demonstrate how they can teach robots    to identify and possibly manipulate three-dimensional objects    even when they might be obscured or sitting in unfamiliar    positions, such as a teapot that has been tipped over.  <\/p>\n<p>    The researchers trained their algorithm by feeding it 3D scans    of about 4,000 common household items such as beds, chairs,    tables, and even toilets. They then tested its ability to    identify about 900 new 3D objects just from a birds eye view.    The algorithm made the right guess 75 percent of the time    versus a success rate of about 50 percent for other computer    vision techniques.  <\/p>\n<p>    In an email interview with Singularity Hub, Burchfiel notes his    research is not the first to train machines on 3D object    classification. How their approach differs is that they confine    the space in which the robot learns to classify the objects.  <\/p>\n<p>    Imagine the space of all possible objects, Burchfiel    explains. That is to say, imagine you had tiny Legos, and I    told you [that] you could stick them together any way you    wanted, just build me an object. You have a huge number of    objects you could make!  <\/p>\n<p>    The infinite possibilities could result in an object no human    or machine might recognize.  <\/p>\n<p>    To address that problem, the researchers had their algorithm    find a more restricted space that would host the objects it    wants to classify. By working in this restricted    spacemathematically we call it a subspacewe greatly simplify    our task of classification. It is the finding of this space    that sets us apart from previous approaches.  <\/p>\n<p>    Meanwhile, a pair of undergraduate students at Brown University    figured out a way to teach robots to understand directions    better, even at varying degrees of abstraction.  <\/p>\n<p>    The research, led by Dilip Arumugam and Siddharth Karamcheti,    addressed how to train a robot to understand nuances of natural    language and then follow instructions correctly and    efficiently.  <\/p>\n<p>    The problem is that commands can have different levels of    abstraction, and that can cause a robot to plan its actions    inefficiently or fail to complete the task at all, says    Arumugam in a press    release.  <\/p>\n<p>    In this project, the young researchers crowdsourced    instructions for moving a virtual robot through an online    domain. The space consisted of several rooms and a chair, which    the robot was told to manipulate from one place to another. The    volunteers gave various commands to the robot, ranging from    general (take the chair to the blue room) to step-by-step    instructions.  <\/p>\n<p>    The researchers then used the database of spoken instructions    to teach their system to understand the kinds of words used in    different levels of language. The machine learned to not only    follow instructions but to recognize the level of abstraction.    That was key to kickstart its problem-solving abilities to    tackle the job in the most appropriate way.  <\/p>\n<p>    The research eventually moved from virtual pixels to a real    place, using a Roomba-like robot that was able to respond to    instructions within one second 90 percent of the time.    Conversely, when unable to identify the specificity of the    task, it took the robot 20 or more seconds to plan a task about    50 percent of the time.  <\/p>\n<p>    One application of this new machine-learning technique    referenced in the paper    is a robot worker in a warehouse setting, but there are many    fields that could benefit from a more versatile machine capable    of moving seamlessly between small-scale operations and    generalized tasks.  <\/p>\n<p>    Other areas that could possibly benefit from such a system    include things from autonomous vehicles to assistive robotics,    all the way to medical robotics, says Karamcheti, responding    to a question by email from Singularity Hub.  <\/p>\n<p>    These achievements are yet another step toward creating robots    that see, listen, and act more like humans. But dont expect    Disney to build a real-life Westworld next to Toon Town anytime    soon.  <\/p>\n<p>    I think were a long way off from human-level communication,    Karamcheti says. There are so many problems preventing our    learning models from getting to that point, from seemingly    simple questions like how to deal with words never seen before,    to harder, more complicated questions like how to resolve the    ambiguities inherent in language, including idiomatic or    metaphorical speech.  <\/p>\n<p>    Even relatively verbose chatbots can run out of things to say,    Karamcheti notes, as the conversation becomes more complex.  <\/p>\n<p>    The same goes for human vision, according to Burchfiel.  <\/p>\n<p>    While deep learning techniques have dramatically improved    pattern matchingGoogle can find just about any picture of a    cattheres more to human eyesight than, well, meets the eye.  <\/p>\n<p>    There are two big areas where I think perception has a long    way to go: inductive bias and formal reasoning, Burchfiel    says.  <\/p>\n<p>    The former is essentially all of the contextual knowledge    people use to help them reason, he explains. Burchfiel uses the    example of a puddle in the street. People are conditioned or    biased to assume its a puddle of water rather than a patch of    glass, for instance.  <\/p>\n<p>    This sort of bias is why we see faces in clouds; we have    strong inductive bias helping us identify faces, he says.    While it sounds simple at first, it powers much of what we do.    Humans have a very intuitive understanding of what they expect    to see, [and] it makes perception much easier.  <\/p>\n<p>    Formal reasoning is equally important. A machine can use deep    learning, in Burchfiels example, to figure out the direction    any river flows once it understands that water runs downhill.    But its not yet capable of applying the sort of human    reasoning that would allow us to transfer that knowledge to an    alien setting, such as figuring out how water moves through a    plumbing system on Mars.  <\/p>\n<p>    Much work was done in decades past on this sort of formal    reasoning but we have yet to figure out how to merge it with    standard machine-learning methods to create a seamless system    that is useful in the actual physical world.  <\/p>\n<p>    Robots still have a lot to learn about being human, which    should make us feel good that were still by far the most    complex machines on the planet.  <\/p>\n<p>    Image Credit:     Alex Knightvia Unsplash  <\/p>\n<p><!-- Auto Generated --><\/p>\n<p>Read the original post:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow\" href=\"https:\/\/singularityhub.com\/2017\/07\/26\/how-robots-are-getting-better-at-making-sense-of-the-world\/\" title=\"How Robots Are Getting Better at Making Sense of the World - Singularity Hub\">How Robots Are Getting Better at Making Sense of the World - Singularity Hub<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> The multiverse of science fiction is populated by robots that are indistinguishable from humans.  <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/singularity\/how-robots-are-getting-better-at-making-sense-of-the-world-singularity-hub\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":5,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187807],"tags":[],"class_list":["post-208127","post","type-post","status-publish","format-standard","hentry","category-singularity"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/208127"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/5"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=208127"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/208127\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=208127"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=208127"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=208127"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}