{"id":230274,"date":"2017-07-26T14:41:47","date_gmt":"2017-07-26T18:41:47","guid":{"rendered":"http:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/uncategorized\/ai2-lists-top-artificial-intelligence-systems-in-its-visual-understanding-challenge-geekwire.php"},"modified":"2017-07-26T14:41:47","modified_gmt":"2017-07-26T18:41:47","slug":"ai2-lists-top-artificial-intelligence-systems-in-its-visual-understanding-challenge-geekwire","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-intelligence\/ai2-lists-top-artificial-intelligence-systems-in-its-visual-understanding-challenge-geekwire.php","title":{"rendered":"AI2 lists top artificial intelligence systems in its Visual Understanding Challenge &#8211; GeekWire"},"content":{"rendered":"<p>For AI2's Charades Challenge, visual systems had to recognize and classify a wide variety of daily activities in realistic videos. This is just a sampling of the videos. (AI2 Photos)<\/p>\n<p>Some of the world's top researchers in AI have proved their mettle by taking top honors in three challenges posed by the Seattle-based Allen Institute for Artificial Intelligence.<\/p>\n<p>The institute, also known as AI2, was created by Microsoft co-founder Paul Allen in 2014 to blaze new trails in the field of artificial intelligence. One of AI2's previous challenges tested the ability of AI platforms to answer eighth-grade-level science questions.<\/p>\n<p>The three latest challenges focused on visual understanding, that is, the ability of a computer program to navigate real-world environments and situations using synthetic vision and machine learning.<\/p>\n<p>These aren't merely academic exercises: Visual understanding is a must-have for AI applications ranging from self-driving cars to automated security monitoring to sociable robots.
<\/p>\n<p>More than a dozen teams signed up for the competitions, and the algorithms were judged based on their accuracy. Here are the three challenges and the results:<\/p>\n<p>Charades Activity Challenge: Computer vision algorithms looked at videos of people performing everyday activities, for example, drinking coffee, putting on shoes while sitting in a chair, or snuggling with a blanket on a couch while watching something on a laptop. One of the algorithms' objectives was to classify all activity categories for a given video, even if two activities were happening at the same time. Another objective was to identify the time frames for all activities in a video.<\/p>\n<p>Team Kinetics from Google DeepMind won the challenge on both counts. In a statement, AI2 said the challenge significantly raised state-of-the-art accuracy for human activity recognition.<\/p>\n<p>THOR Challenge: The teams' computer vision systems had to navigate through 30 nearly photorealistic virtual scenes of living rooms and kitchens to find a specified target object, such as a fork or an apple, based solely on visual input.<\/p>\n<p>THOR's top finisher was a team from National Tsing Hua University in Taiwan.<\/p>\n<p>Textbook Question Answering Challenge: Computer algorithms were given a data set of textual and graphic information from a middle-school science curriculum, and then were asked to answer more than 26,000 questions about the content.<\/p>\n<p>AI2 said the competition was exceptionally close, but the algorithm created by Monica Haurilet and Ziad Al-Halah from Germany's Karlsruhe Institute of Technology came out on top for text questions. Yi Tay and Anthony Luu from Nanyang Technological University in Singapore won the diagram-question challenge.
<\/p>\n<p>The challenge participants significantly improved state-of-the-art performance on TQA's text questions, while at the same time confirming the difficulty machine learning methods have answering questions posed with a diagram, AI2 said.<\/p>\n<p>The top test scores are pretty good for an AI. But they'd be failing grades for a flesh-and-blood middle-schooler: 42 percent accuracy on the text-question exam, and 32 percent on the diagram-question test.<\/p>\n<p>Representatives from the winning teams will join other AI researchers at a workshop planned for Wednesday during the 2017 Conference on Computer Vision and Pattern Recognition in Honolulu.<\/p>\n<p>Read this article: <\/p>\n<p><a target=\"_blank\" href=\"https:\/\/www.geekwire.com\/2017\/ai2-lists-top-artificial-intelligence-systems-visual-understanding-challenge\/\" title=\"AI2 lists top artificial intelligence systems in its Visual Understanding Challenge - GeekWire\">AI2 lists top artificial intelligence systems in its Visual Understanding Challenge - GeekWire<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>For AI2's Charades Challenge, visual systems had to recognize and classify a wide variety of daily activities in realistic videos. 
<a href=\"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-intelligence\/ai2-lists-top-artificial-intelligence-systems-in-its-visual-understanding-challenge-geekwire.php\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"limit_modified_date":"","last_modified_date":"","_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"categories":[13],"tags":[],"class_list":["post-230274","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence"],"modified_by":null,"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/230274"}],"collection":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/comments?post=230274"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/230274\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/media?parent=230274"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/categories?post=230274"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/tags?post=230274"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}