{"id":205833,"date":"2017-02-07T17:05:10","date_gmt":"2017-02-07T22:05:10","guid":{"rendered":"http:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/uncategorized\/wrangler-supercomputer-at-tacc-supports-information-retrieval-projects-hpcwire.php"},"modified":"2017-02-07T17:05:10","modified_gmt":"2017-02-07T22:05:10","slug":"wrangler-supercomputer-at-tacc-supports-information-retrieval-projects-hpcwire","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/super-computer\/wrangler-supercomputer-at-tacc-supports-information-retrieval-projects-hpcwire.php","title":{"rendered":"Wrangler Supercomputer at TACC Supports Information Retrieval Projects &#8211; HPCwire"},"content":{"rendered":"<p><p>    Feb. 7  Much of the data of the World Wide Web hides like an    iceberg below the surface. The so-called deep web has    beenestimatedto be 500 times bigger than    the surface web seen through search engines like Google. For    scientists and others, the deep web holds important computer    code and its licensing agreements. Nestled further inside the    deep web, one finds the dark web, a place where images and    video are used by traders in illicit drugs, weapons, and human    trafficking. A new data-intensive supercomputer called Wrangler    is helping researchers obtain meaningful answers from the    hidden data of the public web.  <\/p>\n<p>    TheWranglersupercomputer got its start    in response to the question, can a computer be built to handle    massive amounts of I\/O (input and output)? TheNational Science Foundation(NSF) in    2013 got behind this effort andawardedthe Texas Advanced Computing    Center (TACC), Indiana University, and the University of    Chicago $11.2 million to build a first-of-its-kind    data-intensive supercomputer. 
Wrangler's 600 terabytes of lightning-fast flash storage enabled the speedy reads and writes of files needed to fly past big data bottlenecks that can slow down even the fastest computers. It was built to work in tandem with number crunchers such as TACC's Stampede, which in 2013 was the sixth fastest computer in the world.<\/p>\n<p>While Wrangler was being built, a separate project came together, headed by the Defense Advanced Research Projects Agency (DARPA) of the U.S. Department of Defense. Back in 1969, DARPA had built the ARPANET, which eventually grew to become the Internet, as a way to exchange files and share information. In 2014, DARPA wanted something new: a search engine for the deep web. They were motivated to uncover the deep web's hidden and illegal activity, according to Chris Mattmann, chief architect in the Instrument and Science Data Systems Section of the NASA Jet Propulsion Laboratory (JPL) at the California Institute of Technology.<\/p>\n<p>\"Behind forms and logins, there are bad things. Behind the dynamic portions of the web like AJAX and JavaScript, people are doing nefarious things,\" said Mattmann. They're not indexed because the web crawlers of Google and others ignore most images, video, and audio files. \"People are going on a forum site and they're posting a picture of a woman that they're trafficking. And they're asking for payment for that. People are going to a different site and they're posting illicit drugs, or weapons, guns, or things like that to sell,\" he said.<\/p>\n<p>Mattmann added that an even more inaccessible portion of the deep web, called the dark web, can only be reached through a special browser client and protocol called TOR, The Onion Router. On the dark web, said Mattmann, they're doing even more nefarious things. \"They traffic in guns and human organs,\" he explained.
\"They're basically doing these activities and then they're tying them back to terrorism.\"<\/p>\n<p>In response, DARPA started a program called Memex. Its name blends memory with index and has roots in an influential 1945 Atlantic magazine article penned by U.S. engineer and Raytheon founder Vannevar Bush. His futuristic essay imagined putting all of a person's communications (books, records, and even all spoken and written words) within fingertip reach. The DARPA Memex program sought to make the deep web accessible. \"The goal of Memex was to provide search engines the information retrieval capacity to deal with those situations and to help defense and law enforcement go after the bad guys there,\" Mattmann said.<\/p>\n<p>Karanjeet Singh is a University of Southern California graduate student who works with Chris Mattmann on Memex and other projects. \"The objective is to get more and more domain-specific (specialized) information from the Internet and try to make facts from that information,\" Singh said. He added that agencies such as law enforcement continue to tailor their questions to the limitations of search engines. In some ways the cart leads the horse in deep web search. \"Although we have a lot of search-based queries through different search engines like Google,\" Singh said, \"it's still a challenge to query the system in a way that answers your questions directly.\"<\/p>\n<p>Once Memex users extract the information they need, they can apply tools such as named entity recognition, sentiment analysis, and topic summarization. This can help law enforcement agencies like the U.S. Federal Bureau of Investigation find links between different activities, such as illegal weapon sales and human trafficking, Singh explained.<\/p>\n<p>\"Let's say that we have one system directly in front of us, and there is some crime going on,\" Singh said.
\"The FBI comes in and they have some set of questions or some specific information, such as a person with such hair color, this much age. Probably the best thing would be to mention a user ID on the Internet that the person is using. So with all three pieces of information, if you feed it into the Memex system, Memex would search in the database it has collected and would yield the web pages that match that information. It would yield the statistics, like where this person has been or where it has been sighted in geolocation, and also in the form of graphs and others.\"<\/p>\n<p>\"What JPL is trying to do is automate all of these processes into a system where you can just feed in the questions and get the answers,\" Singh said. For that he worked with an open source web crawler called Apache Nutch. It retrieves and collects web page and domain information from the deep web. The MapReduce framework powers those crawls with a divide-and-conquer approach to big data that breaks it into small pieces that run simultaneously. The problem is that even the fastest computers like Stampede weren't designed to handle the input and output of millions of files needed for the Memex project.<\/p>\n<p>The Wrangler data-intensive supercomputer avoids data overload by virtue of its 600 terabytes of speedy flash storage. What's more, Wrangler supports the Hadoop framework, which runs MapReduce. \"Wrangler, as a platform, can run very large Hadoop-based and Spark-based crawling jobs,\" Mattmann said. \"It's a fantastic resource that we didn't have before as a mechanism to do research; to go out and test our algorithms and our new search engines and our crawlers on these sites; and to evaluate the extractions and analytics and things like that afterwards. Wrangler has been an amazing resource to help us do that, to run these large-scale crawls, to do these type of evaluations, to help develop techniques that are helping save people, stop crime, and stop terrorism around the world.\"<\/p>\n<p>Click here to view the entire article.<\/p>\n<p>Source: Jorge Salazar, TACC<\/p>\n<p>Excerpt from:<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/www.hpcwire.com\/off-the-wire\/wrangler-supercomputer-tacc-supports-information-retrieval-projects\/\" title=\"Wrangler Supercomputer at TACC Supports Information Retrieval Projects - HPCwire\">Wrangler Supercomputer at TACC Supports Information Retrieval Projects - HPCwire<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Feb. 7: Much of the data of the World Wide Web hides like an iceberg below the surface.  <a href=\"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/super-computer\/wrangler-supercomputer-at-tacc-supports-information-retrieval-projects-hpcwire.php\">Continue reading <span 
class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"limit_modified_date":"","last_modified_date":"","_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"categories":[41],"tags":[],"class_list":["post-205833","post","type-post","status-publish","format-standard","hentry","category-super-computer"],"modified_by":null,"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/205833"}],"collection":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/comments?post=205833"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/205833\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/media?parent=205833"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/categories?post=205833"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/tags?post=205833"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}