MIT Researchers Develop AI That Better Understands Object Relationships – Datanami

Posted: December 10, 2021 at 6:50 pm

AI is increasingly competent at identifying objects in a scene: the built-in AI in an app like Google Photos, for instance, might recognize a bench, a bird, or a tree. But that same AI might be left clueless if you ask it to identify the bird flying between two trees, the bench beneath the bird, or the tree to the left of a bench. Now, MIT researchers are working to change that with a new machine learning model aimed at understanding the relationships between objects.

"When I look at a table, I can't say that there is an object at XYZ location," explained Yilun Du, a PhD student in MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and co-lead author of the paper, in an interview with MIT's Adam Zewe. "Our minds don't work like that. In our minds, when we understand a scene, we really understand it based on the relationships between the objects. We think that by building a system that can understand the relationships between objects, we could use that system to more effectively manipulate and change our environments."

The model incorporates object relationships by first identifying each object in a scene, then identifying relationships one at a time (e.g., "the tree is to the left of the bird"), and finally combining all identified relationships. It can then run that understanding in reverse, generating more accurate images from text descriptions even when the relationships between objects have changed. The reverse process works much like the forward one: generate each object relationship one at a time, then combine.
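To make the composition idea concrete, here is a minimal, purely illustrative sketch of scoring each stated relation with its own small model and summing the scores, then nudging an image toward the combined score. This is not the paper's architecture: the names RelationEnergy and generate, the network shapes, and the embedding dimensions are all hypothetical stand-ins.

```python
import torch
import torch.nn as nn

class RelationEnergy(nn.Module):
    """Hypothetical per-relation scorer: outputs a low 'energy' when an
    image embedding matches one relation embedding (e.g. 'tree left of bird')."""
    def __init__(self, image_dim: int, relation_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(image_dim + relation_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, image: torch.Tensor, relation: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([image, relation], dim=-1))

def generate(relations, model, image_dim=64, steps=50, step_size=0.1):
    """Toy gradient-based sampler: sum the scores of every stated relation
    and descend the combined score, so each extra relation adds one term."""
    image = torch.randn(1, image_dim, requires_grad=True)
    for _ in range(steps):
        energy = sum(model(image, r) for r in relations)  # the composition step
        (grad,) = torch.autograd.grad(energy.sum(), image)
        with torch.no_grad():
            image -= step_size * grad
            image += 0.01 * torch.randn_like(image)  # small exploration noise
    return image.detach()
```

The key property is the additive combination: each relation contributes its own independent term, so nothing in the sampler assumes a fixed number of relations.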

"Other systems would take all the relations holistically and generate the image one-shot from the description," Du said. "However, such approaches fail when we have out-of-distribution descriptions, such as descriptions with more relations, since these [models] can't really adapt one shot to generate images containing more relationships. However, as we are composing these separate, smaller models together, we can model a larger number of relationships and adapt to novel combinations."
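Under the same hypothetical sketch above, handling an out-of-distribution description with four relations would just mean passing a longer list; the summed-score sampler is unchanged:

```python
# Hypothetical usage: four relation embeddings instead of the one or two
# a one-shot model might have seen in training.
model = RelationEnergy(image_dim=64, relation_dim=16)
relations = [torch.randn(1, 16) for _ in range(4)]  # stand-ins for encoded relations
image = generate(relations, model)
```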

When the researchers tested the results with human participants, 91% concluded that the new model outperformed prior models. The researchers underscored that this work is important because it could, for instance, help AI-powered robots better navigate complex situations. "One interesting thing we found is that for our model, we can increase our sentence from having one relation description to having two, or three, or even four descriptions, and our approach continues to be able to generate images that are correctly described by those descriptions, while other methods fail," Du said.

Next, the researchers are working to assess how the model performs on more complex, real-world images before moving to real-world testing with object manipulation.

To learn more about this research, read the article from MIT's Adam Zewe here. You can read the paper describing the research here.

