Facebook And Artificial Intelligence: Company's AI Chief Explains How He Tags Your Photos

Google Inc. announced this month that it had developed the most accurate facial-recognition technology to date, called FaceNet, which the company said trumped Facebook Inc.'s rival software, DeepFace, by almost three percentage points in a test of accuracy. That was a tough truth for Facebook to swallow, both because the two companies have invested heavily in artificial-intelligence and computer-logic research to improve the accuracy and speed of their respective systems, and because a billion monthly users already rely on a form of Facebook's software to tag photographs when they log into the site. It appeared Facebook was being beaten at its own game.

Yann LeCun, head of Facebook's Artificial Intelligence Research lab, spoke Tuesday about how Facebook built the tools that currently handle the site's many photos, and how his team plans to expand on that work to build the next generation of artificial-intelligence software. He spoke at an event co-sponsored by Facebook, Medidata and New York University's Center for Data Science, held at Facebook's offices in New York. "It's complicated, but it's simpler than you might think," LeCun said. He leads a 40-member group of artificial-intelligence experts that is only a year old and is split among Facebook's offices in New York, the company's headquarters in Menlo Park, California, and the firm's new branch in Paris.

That team and Facebook's developers are in a race against other major technology companies, including Google, to create the fastest and most sophisticated systems, not only for facial recognition but for a whole suite of products built on the tenets of artificial intelligence. Along with Facebook and Google, Alibaba Group Holding Ltd. and Amazon.com Inc. have also stated interests in the area, as Bloomberg Business reported. Sixteen artificial-intelligence startups were funded last year, compared with only two in 2010.

Facebook and its competitors believe people will increasingly rely on artificial intelligence to communicate with each other and to interact with the digital world. To stay ahead in this stiff competition, LeCun said, his team needs to make breakthroughs in deep learning, the process by which machines can help humans with tasks at which people have always proven best, such as making decisions or reasoning.

A computer capable of the advanced machine logic known as deep learning would require more inputs, outputs, levels and layers than Facebook's facial recognition and photo-tagging software, but LeCun said both projects would rely on many of the same fundamental methods that computers and programmers currently use to organize and prioritize information.

At any given moment, Facebook software is busy tagging and categorizing the 500 million photos that users upload to the site each day, all within two seconds of when the images first appear. At nearly the same time, the system's logic decides which photos to display to which users based not only on permissions but also on their preferences. Although the volume of data that this program processes would be mind-boggling for any human, the methods by which it sorts through those images are crafted by LeCun's team.

Most Facebook users have seen friends' names pop up in suggested tags when they upload photos to the site, but the company also uses tags to categorize the objects within images and help its software decide which photos to display on the site. Although the system could display as many as 1,500 photos a day in a user's stream, the average Facebook user will spend only enough time on the site to see between 100 and 150 images a day. A form of artificial intelligence helps Facebook ensure users are seeing the most important ones.

To create a similar system that would fuel the company's foray into deep learning, developers and experts began with a large database of images and tags, such as ImageNet, and they built programs that learned to associate the characteristics of each tag with specific types of images. For example, differentiating between colors and shapes helps the software pick out a black road versus a gray sidewalk in an image of a city street. "The network is able to take advantage of the fact that the world is compositional," LeCun said.
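Facebook has not published the code behind that training process, but the general recipe is standard in the field. The sketch below, written with the open-source PyTorch library, shows how a small network can learn to associate tags with image features from a hypothetical folder of labeled pictures; the directory layout, network size and training settings are illustrative assumptions, not details of Facebook's system.

```python
# Minimal sketch: training a small convolutional classifier on labeled images.
# This illustrates the general approach, not Facebook's actual code.
# Assumes a hypothetical directory "labeled_images/" with one folder per tag,
# e.g. labeled_images/road/, labeled_images/sidewalk/, labeled_images/car/.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((64, 64)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("labeled_images", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Stacked convolutional layers learn simple features (edges, colors) first,
# then compose them into larger shapes -- the compositional structure LeCun describes.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, len(dataset.classes)),  # one output score per tag
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)  # compare predicted tags with true tags
        loss.backward()                        # adjust the features that led to mistakes
        optimizer.step()
```

Each stacked layer builds on the one before it, so edges combine into shapes and shapes into objects such as roads and sidewalks, which is the compositional property LeCun was referring to.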

Once the program recognizes features such as streets or sidewalks in a photo, it can draw a box around each object and identify it as separate from the others, or highlight examples of only one or the other. LeCun demonstrated the latter concept with a shaky video taken on a walk through Washington Square Park in New York: the software picked out pedestrians as they moved past, drawing a rectangular box around each one on the screen.
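LeCun's demo ran on Facebook's own research software, which is not what is shown here, but a comparable effect can be sketched with off-the-shelf open-source tools. In the sketch below, a pretrained detector from the torchvision library draws boxes around people in each frame of a video; the video filename and the confidence threshold are assumptions made for illustration.

```python
# Sketch: drawing boxes around detected people in a video, frame by frame.
# Uses an off-the-shelf pretrained detector, not Facebook's research software.
# "park_walk.mp4" is a hypothetical input file.
import cv2
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

model = fasterrcnn_resnet50_fpn(pretrained=True).eval()
PERSON = 1  # COCO class index for "person"

capture = cv2.VideoCapture("park_walk.mp4")
while True:
    ok, frame = capture.read()
    if not ok:
        break
    # Convert the BGR video frame to the RGB tensor the model expects.
    tensor = to_tensor(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    with torch.no_grad():
        detections = model([tensor])[0]
    for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
        if label == PERSON and score > 0.8:
            x1, y1, x2, y2 = box.int().tolist()
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)  # draw the box
    cv2.imshow("pedestrians", frame)
    if cv2.waitKey(1) == ord("q"):
        break
capture.release()
cv2.destroyAllWindows()
```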

A sophisticated tagging program should also be able to first distinguish between a black road and a black car, and then assign names and categories to these objects. To do this, experts teach the system to grab contextual clues from the pixels surrounding an unidentified object to determine its most likely identity. So in that photo of a city street, the software may identify and tag a road based on its shape, its color and the presence of a nearby sidewalk. Then, it could surmise that the bulky shape in the center of that road is probably a black car.
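Facebook has not said exactly how its software weighs those contextual clues, but one simple way to illustrate the idea is to classify both a tight crop of the unidentified object and a wider crop that takes in its surroundings, then combine the two predictions. In the sketch below, the doubled context window, the even weighting and the file name are illustrative assumptions, not a description of Facebook's method.

```python
# Sketch: using surrounding pixels as context when labeling an object.
# The 2x context window and the 50/50 weighting are illustrative choices.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(pretrained=True).eval()
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def classify_with_context(image, box, context_scale=2.0):
    """Score an object from its tight crop plus a wider crop of its surroundings."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    w, h = (x2 - x1) * context_scale, (y2 - y1) * context_scale
    wide_box = (
        max(0, cx - w / 2), max(0, cy - h / 2),
        min(image.width, cx + w / 2), min(image.height, cy + h / 2),
    )
    scores = []
    for crop_box in (box, wide_box):
        crop = image.crop(tuple(int(v) for v in crop_box))
        with torch.no_grad():
            scores.append(torch.softmax(model(preprocess(crop).unsqueeze(0)), dim=1))
    # Blend the object's own appearance with cues from its surroundings.
    return (scores[0] + scores[1]) / 2

image = Image.open("city_street.jpg")                     # hypothetical street photo
combined = classify_with_context(image, (120, 200, 260, 300))
print(combined.argmax().item())                           # index of the most likely label
```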
