AI2 lists top artificial intelligence systems in its Visual Understanding Challenge

For AI2's Charades Challenge, visual systems had to recognize and classify a wide variety of daily activities in realistic videos. This is just a sampling of the videos. (AI2 Photos)

Some of the world's top researchers in AI have proved their mettle by taking top honors in three challenges posed by the Seattle-based Allen Institute for Artificial Intelligence.

The institute, also known as AI2, was created by Microsoft co-founder Paul Allen in 2014 to blaze new trails in the field of artificial intelligence. One of AI2's previous challenges tested the ability of AI platforms to answer eighth-grade-level science questions.

The three latest challenges focused on visual understanding: that is, the ability of a computer program to navigate real-world environments and situations using synthetic vision and machine learning.

These aren't merely academic exercises: Visual understanding is a must-have for AI applications ranging from self-driving cars to automated security monitoring to sociable robots.

More than a dozen teams signed up for the competitions, and the algorithms were judged based on their accuracy. Here are the three challenges and the results:

Charades Activity Challenge: Computer vision algorithms looked at videos of people performing everyday activities, for example drinking coffee, putting on shoes while sitting in a chair, or snuggling with a blanket on a couch while watching something on a laptop. One of the algorithms' objectives was to classify all activity categories for a given video, even if two activities were happening at the same time. Another objective was to identify the time frames for all activities in a video.
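
In machine-learning terms, the first objective is multi-label video classification: a clip can carry several activity labels at once, so each class is scored independently instead of competing in a single softmax. Here is a minimal Python sketch of that idea using toy stand-ins; the feature extractor, weights, and threshold are hypothetical, not any winning team's model.

    import numpy as np

    NUM_CLASSES = 157  # the Charades dataset defines 157 activity categories
    rng = np.random.default_rng(0)

    def extract_features(video_frames):
        """Stand-in for a learned video feature extractor (hypothetical)."""
        return np.mean(video_frames, axis=0)  # pool per-frame descriptors

    def classify_clip(features, weights, bias, threshold=0.5):
        """Score every class with its own sigmoid, so two simultaneous
        activities (e.g. sitting and drinking coffee) can both fire."""
        logits = weights @ features + bias
        probs = 1.0 / (1.0 + np.exp(-logits))
        return np.flatnonzero(probs >= threshold)

    # Toy inputs: 32 frames of 512-dim descriptors, random untrained weights.
    frames = rng.normal(size=(32, 512))
    W, b = 0.05 * rng.normal(size=(NUM_CLASSES, 512)), np.zeros(NUM_CLASSES)
    print("predicted activity ids:", classify_clip(extract_features(frames), W, b))

The second objective, pinning down when each activity happens, would apply the same per-class scoring over sliding temporal windows rather than one clip-level vector.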

Team Kinetics from Google DeepMind won the challenge on both counts. In a statement, AI2 said the challenge significantly raised state-of-the-art accuracy for human activity recognition.

THOR Challenge: The teams' computer vision systems had to navigate through 30 nearly photorealistic virtual scenes of living rooms and kitchens to find a specified target object, such as a fork or an apple, based solely on visual input.
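
Under the hood, such a system runs a perceive-act loop: take a camera frame, choose an action, repeat until the target object is in view. The Python sketch below shows that loop with a toy scene standing in for the simulator; the observation, policy, and environment names here are placeholders, not the real AI2-THOR interface.

    import random
    from dataclasses import dataclass

    @dataclass
    class Observation:
        frame: object        # RGB image from the agent's camera (placeholder)
        target_visible: bool # did a detector spot the target object?

    class ToyScene:
        """Toy stand-in for one virtual room; not the actual simulator API."""
        def __init__(self, steps_to_target=5):
            self.remaining = steps_to_target
        def reset(self, target):
            return Observation(frame=None, target_visible=False)
        def step(self, action):
            self.remaining -= 1
            return Observation(frame=None, target_visible=self.remaining <= 0)

    ACTIONS = ["MoveAhead", "RotateLeft", "RotateRight"]

    def policy(obs):
        """A trained model would map the image to an action; this
        placeholder wanders randomly until the target comes into view."""
        return None if obs.target_visible else random.choice(ACTIONS)

    def navigate(env, target, max_steps=200):
        """Search using visual input only -- the agent has no map."""
        obs = env.reset(target)
        for _ in range(max_steps):
            action = policy(obs)
            if action is None:
                return True  # target spotted
            obs = env.step(action)
        return False

    print(navigate(ToyScene(), "apple"))  # -> True after a few steps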

THORs top finisher was a team from National Tsing Hua University in Taiwan.

Textbook Question Answering Challenge: Computer algorithms were given a data set of textual and graphic information from a middle-school science curriculum, and then were asked to answer more than 26,000 questions about the content.
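
For the text questions, even a crude baseline makes the task's shape clear: retrieve the relevant lesson passage, then score each multiple-choice answer against it. The Python sketch below uses plain word overlap as that score; it is an illustration of the format, not either winning team's method, and the example lesson and question are invented.

    def score_choice(lesson_text, question, choice):
        """Toy lexical-overlap score: count words that the question plus
        an answer choice share with the lesson text."""
        lesson_words = set(lesson_text.lower().split())
        candidate = set((question + " " + choice).lower().split())
        return len(lesson_words & candidate)

    def answer(lesson_text, question, choices):
        """Pick the choice with the highest overlap score."""
        return max(choices, key=lambda c: score_choice(lesson_text, question, c))

    lesson = "Photosynthesis converts light energy into chemical energy in plants."
    q = "What do plants convert light energy into?"
    print(answer(lesson, q, ["chemical energy", "sound energy", "mass"]))

Diagram questions resist this kind of lexical shortcut, which is consistent with the gap AI2 reported between text and diagram accuracy.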

AI2 said the competition was exceptionally close, but the algorithm created by Monica Haurilet and Ziad Al-Halah from Germany's Karlsruhe Institute of Technology came out on top for text questions. Yi Tay and Anthony Luu from Nanyang Technological University in Singapore won the diagram-question challenge.

"The challenge participants significantly improved state-of-the-art performance on TQA's text questions, while at the same time confirming the difficulty machine learning methods have answering questions posed with a diagram," AI2 said.

The top test scores are pretty good for an AI. But they'd be failing grades for a flesh-and-blood middle-schooler: 42 percent accuracy on the text-question exam, and 32 percent on the diagram-question test.

Representatives from the winning teams will join other AI researchers at a workshop planned for Wednesday during the 2017 Conference on Computer Vision and Pattern Recognition in Honolulu.
