Wrangler Supercomputer at TACC Supports Information Retrieval Projects – HPCwire

Feb. 7 – Much of the data of the World Wide Web hides like an iceberg below the surface. The so-called deep web has been estimated to be 500 times bigger than the surface web seen through search engines like Google. For scientists and others, the deep web holds important computer code and its licensing agreements. Nestled further inside the deep web, one finds the dark web, a place where images and video are used by traders in illicit drugs, weapons, and human trafficking. A new data-intensive supercomputer called Wrangler is helping researchers obtain meaningful answers from the hidden data of the public web.

The Wrangler supercomputer got its start in response to a question: can a computer be built to handle massive amounts of I/O (input and output)? The National Science Foundation (NSF) got behind this effort in 2013 and awarded the Texas Advanced Computing Center (TACC), Indiana University, and the University of Chicago $11.2 million to build a first-of-its-kind data-intensive supercomputer. Wrangler's 600 terabytes of lightning-fast flash storage enabled the speedy reads and writes of files needed to fly past big data bottlenecks that can slow down even the fastest computers. It was built to work in tandem with number crunchers such as TACC's Stampede, which in 2013 was the sixth-fastest computer in the world.

While Wrangler was being built, a separate project came together, headed by the Defense Advanced Research Projects Agency (DARPA) of the U.S. Department of Defense. Back in 1969, DARPA had built the ARPANET, which eventually grew to become the Internet, as a way to exchange files and share information. In 2014, DARPA wanted something new: a search engine for the deep web. The agency was motivated to uncover the deep web's hidden and illegal activity, according to Chris Mattmann, chief architect in the Instrument and Science Data Systems Section of the NASA Jet Propulsion Laboratory (JPL) at the California Institute of Technology.

"Behind forms and logins, there are bad things. Behind the dynamic portions of the web like AJAX and JavaScript, people are doing nefarious things," said Mattmann. They're not indexed because the web crawlers of Google and others ignore most images, video, and audio files. "People are going on a forum site and they're posting a picture of a woman that they're trafficking. And they're asking for payment for that. People are going to a different site and they're posting illicit drugs, or weapons, guns, or things like that to sell," he said.

Mattmann added that an even more inaccessible portion of the deep web, called the dark web, can only be reached through a special browser client and protocol called Tor, The Onion Router. On the dark web, said Mattmann, "they're doing even more nefarious things. They traffic in guns and human organs," he explained. "They're basically doing these activities and then they're tying them back to terrorism."

In response, DARPA started a program called Memex. Its name blends memory with index and has roots in an influential 1945 Atlantic magazine article penned by U.S. engineer and Raytheon founder Vannevar Bush. His futuristic essay imagined putting all of a person's communications, books, records, and even all spoken and written words within fingertip reach. The DARPA Memex program sought to make the deep web accessible. "The goal of Memex was to provide search engines the information retrieval capacity to deal with those situations and to help defense and law enforcement go after the bad guys there," Mattmann said.

Karanjeet Singh is a University of Southern California graduate student who works with Chris Mattmann on Memex and other projects. "The objective is to get more and more domain-specific (specialized) information from the Internet and try to make facts from that information," said Singh. He added that agencies such as law enforcement continue to tailor their questions to the limitations of search engines; in some ways the cart leads the horse in deep web search. "Although we have a lot of search-based queries through different search engines like Google," Singh said, "it's still a challenge to query the system in a way that answers your questions directly."

Once Memex users extract the information they need, they can apply tools such as named entity recognition, sentiment analysis, and topic summarization. This can help law enforcement agencies like the U.S. Federal Bureau of Investigation find links between different activities, such as illegal weapon sales and human trafficking, Singh explained.
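To make those analysis steps concrete, here is a minimal sketch of running named entity recognition and sentiment analysis over a snippet of extracted page text. The article does not name the libraries the Memex team used; spaCy and NLTK's VADER analyzer are assumed here purely as stand-ins, and the sample text is invented.

```python
# Illustrative only: spaCy and NLTK's VADER stand in for the "named entity
# recognition" and "sentiment analysis" tools mentioned in the article.
import spacy
from nltk.sentiment import SentimentIntensityAnalyzer  # needs nltk.download("vader_lexicon")

nlp = spacy.load("en_core_web_sm")   # small English model, installed separately
sia = SentimentIntensityAnalyzer()

text = ("Listing posted by user_8821: contact via the usual forum, "
        "payment up front, shipping from Houston next week.")

# Named entity recognition: pull out people, places, dates, and so on.
doc = nlp(text)
entities = [(ent.text, ent.label_) for ent in doc.ents]

# Sentiment analysis: a single compound score in [-1, 1] for the snippet.
sentiment = sia.polarity_scores(text)["compound"]

print(entities)    # e.g. [('Houston', 'GPE'), ('next week', 'DATE')]
print(sentiment)
```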

"Let's say that we have one system directly in front of us, and there is some crime going on," Singh said. "The FBI comes in and they have some set of questions or some specific information, such as a person with a certain hair color and age. Probably the best thing would be to mention a user ID on the Internet that the person is using. So with all three pieces of information, if you feed it into the Memex system, Memex would search in the database it has collected and would yield the web pages that match that information. It would yield the statistics, like where this person has been or where they have been sighted in geolocation, and also in the form of graphs and others."
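The attribute-matching step Singh describes can be pictured with a toy example. The record fields, the in-memory "database", and the sample values below are all hypothetical; they are not Memex's actual schema or query interface, only an illustration of feeding several attributes in and getting matching pages and locations back.

```python
# Hypothetical crawled records; real Memex data and fields would differ.
records = [
    {"user_id": "user_8821", "hair_color": "brown", "age": 27,
     "url": "http://example.org/forum/post/991", "geo": (29.76, -95.36)},
    {"user_id": "trader_44", "hair_color": "black", "age": 35,
     "url": "http://example.org/market/item/17", "geo": (34.05, -118.24)},
]

def match(record, **criteria):
    """Return True if every supplied attribute matches the record."""
    return all(record.get(key) == value for key, value in criteria.items())

# Feed in the three pieces of information from Singh's example.
hits = [r for r in records if match(r, hair_color="brown", age=27, user_id="user_8821")]
for hit in hits:
    print(hit["url"], hit["geo"])   # matching pages plus their geolocations
```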

"What JPL is trying to do is automate all of these processes into a system where you can just feed in the questions and we get the answers," Singh said. For that he worked with an open source web crawler called Apache Nutch. It retrieves and collects web page and domain information from the deep web. The MapReduce framework powers those crawls with a divide-and-conquer approach to big data that breaks it up into small pieces that run simultaneously. The problem is that even the fastest computers like Stampede weren't designed to handle the input and output of millions of files needed for the Memex project.
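The divide-and-conquer idea behind MapReduce can be shown with a short sketch in the style of a Hadoop Streaming job: the input, one crawled URL per line, is split across many map tasks, and the reduce step sums per-domain counts. The per-domain counting task and file handling here are assumptions for illustration, not the Memex pipeline itself.

```python
# Minimal MapReduce-style sketch: map emits (domain, 1) per URL, reduce sums.
import sys
from urllib.parse import urlparse

def mapper(lines):
    """Map step: emit (domain, 1) for every crawled URL."""
    for line in lines:
        url = line.strip()
        if url:
            yield urlparse(url).netloc, 1

def reducer(pairs):
    """Reduce step: sum the counts for each domain."""
    counts = {}
    for domain, n in pairs:
        counts[domain] = counts.get(domain, 0) + n
    return counts

if __name__ == "__main__":
    # Run locally by piping URLs to stdin; under Hadoop Streaming the framework
    # splits the input and runs many copies of the map step in parallel.
    for domain, count in reducer(mapper(sys.stdin)).items():
        print(f"{domain}\t{count}")
```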

The Wrangler data-intensive supercomputer avoids data overload by virtue of its 600 terabytes of speedy flash storage. What's more, Wrangler supports the Hadoop framework, which runs using MapReduce. "Wrangler, as a platform, can run very large Hadoop-based and Spark-based crawling jobs," Mattmann said. "It's a fantastic resource that we didn't have before as a mechanism to do research; to go out and test our algorithms and our new search engines and our crawlers on these sites; and to evaluate the extractions and analytics and things like that afterwards. Wrangler has been an amazing resource to help us do that, to run these large-scale crawls, to do these types of evaluations, to help develop techniques that are helping save people, stop crime, and stop terrorism around the world."
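For a sense of what a Spark-based analytics pass over crawl output might look like, here is a hedged PySpark sketch. The input path, the record schema (url, domain, text fields), and the keyword filter are assumptions made for illustration; they are not the actual Memex data layout or the team's production jobs.

```python
# Sketch of a Spark job over crawl output; paths and schema are assumed.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("crawl-analytics-sketch").getOrCreate()

# Assume the crawler wrote one JSON record per fetched page.
pages = spark.read.json("/scratch/crawl-output/*.json")

# Count pages per domain and flag pages whose text mentions a watch term.
per_domain = pages.groupBy("domain").count().orderBy(F.desc("count"))
flagged = pages.filter(F.col("text").contains("for sale")).select("url", "domain")

per_domain.show(20, truncate=False)
print(flagged.count(), "pages matched the watch term")

spark.stop()
```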


Source: Jorge Salazar, TACC
