The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard De Charden
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Daily Archives: July 26, 2017
Official: SSC releases admit cards for 2017 CGL Exam I – Business Standard
Posted: July 26, 2017 at 4:20 pm
Admit Cards For 2017 CGL Exam (I) are now available for download on the official website of SSC
Admit Cards for the Combined Graduate Level Tier-I exam are now available for download on the official website of the Staff Selection Commission (SSC). The Commission has released the Admit Card separately for different regions. The exam was originally scheduled to run from August 1, 2017 to August 20, 2017; as per a later official notification, the SSC will now conduct it from August 5, 2017 to August 24, 2017.
No exams will be conducted on the 7th, 13th, 14th and 15th within that period. Candidates have to log in to the Commission's official site and provide their Registration Number/Roll Number and Password/Date of Birth while logging in.
The SSC is set to release the SSC CGL Admit Card 2017 separately for each level of the examination. The Admit Card for Tier-I has been released first; the Commission will then release the Admit Card for the Tier-II exam, and so on. The Admit Card for Tier-II will be given to candidates who qualify in the Tier-I exam, and likewise the Admit Card for Tier-III will be issued to candidates who qualify in the Tier-II exam.
Keep in mind that, due to heavy traffic, the sites may be slow or may crash; candidates are advised to keep refreshing their regional sites. Some of the sites may not have uploaded the cards yet, but the process of uploading the Admit Cards is underway and all of the sites will soon have them available for download. A total of 1,80,365 candidates have been considered eligible for the SSC CGL Tier-I examination from the Northern region.
Following are the details and URLs of the regional sites. Candidates can visit their respective region's site and download the Admit Card from there.
Formed in 1975, the Staff Selection Commission (SSC), under the Government of India, recruits staff for various posts in Government Ministries, Departments and Subordinate Offices. Every year the Commission conducts the SSC Combined Graduate Level exams to hire non-gazetted officers for various government jobs.
Follow this link:
Official: SSC releases admit cards for 2017 CGL Exam I - Business Standard
Posted in Mind Uploading
Virtually unknown: How to put a price tag on the most progressive form of art – CNN
Posted: at 4:18 pm
Towering above you, his sinewy arms will stretch out for crucifixion. His glowing body will convulse sporadically, shooting off showers of golden embers.
But this isn't the second coming -- it's a piece of virtual reality art by the German-Danish artist Christian Lemmerz.
Titled "La Apparizione" (The Apparition), the artwork will be presented in an empty three-by-three-meter room. Viewers step inside, slip on a VR headset and are transported into outer space, where they can circle the levitating, golden Jesus.
It is one of two virtual reality works being exhibited by the Faurschou Foundation in Venice this summer. The gallery joins a growing list of institutions that have exhibited VR art, including New York's Museum of Modern Art and the Whitney Museum of American Art.
And as the world's top curators embrace this new medium, collectors are starting to circle.
You might not be able to hang "La Apparizione" on your wall like a painting, but it is most definitely for sale. Lemmerz has released five editions, each costing around $100,000.
But valuing virtual art poses a new challenge for buyers and sellers alike. Galleries normally look to artists' previous work when setting prices, but with only a small number of VR artworks on the market, there are only a few precedents to refer to.
Comparisons with other types of art do not always prove useful. Lemmerz is primarily a sculptor, but can "La Apparizione" be compared to his 2013 bronze sculpture of Jesus?
The latter work is an object, the former an experience. And while artists have been making bronze sculptures for millennia, virtual reality is a brand new technology more familiar to gamers than art collectors.
The market is still adjusting, according to Sandra Nedvetskaia of Khora Contemporary, the production company that helped Lemmerz create his latest VR art.
"At the moment, video art works are the only comparison," she said over the phone. "But (some collectors) have likened (virtual reality artworks) to sculptures because, of course, you find yourself in the middle of that particular artist's moving sculpture."
Hardware is another new consideration for galleries, and those selling VR art often include a headset in the price. Nedvetskaia said that all works produced by Khora Contemporary come with HTC Vive headsets -- and a lifetime service.
"That includes updates," Nedvetskaia added, "so that this artwork doesn't become (like) a video tape that you can no longer experience."
But the speed at which VR technology is changing can be a problem for artists, according to Edward Winkleman, who co-founded the video-oriented art fair Moving Image in 2011.
"Whether they should wait for the hot, new head-mounted display is a constant question in their practice," Winkleman says. "If they wait, they can take advantage of the new upgrades. But they may (also) miss an opportunity to present their work."
Young artists are experimenting with virtual reality -- and not all of their works carry the six-figure price tag of "La Apparizione," according to Murat Orozobekov, the other co-founder of Moving Image.
Notable VR works at this year's fair included a swirling, psychedelic piece by up-and-coming digital artist Brenna Murphy, and "Primal Tourism: Island," which took viewers inside Jakob Kudsk Steensen's dystopian vision of a Polynesian island.
"Prices range from about $2,500 to $6,500 for an emerging artist's work," Orozobekov said over the phone.
At the other end of the market, a disturbing VR piece by American artist Paul McCarthy is currently available at two major European galleries -- Hauser & Wirth and Xavier Hufkens -- for approximately $300,000. Set in a lurid room, the work features a group of female characters who taunt each other, and, occasionally, the viewer.
The difference in asking prices is not simply a matter of reputation, according to Elizabeth Neilson, director of The Zabludowicz Collection in London.
"(There's also) the development costs of the technology they have used. Someone like Rachel Rossin does a lot of the development herself, but someone like Jordan Wolfson does none of the technological work himself, and outsources to Hollywood professionals," Neilson said, referencing two up-and-coming artists who have been working in virtual reality. "As you can imagine, this is expensive."
The price of virtual artworks can be kept high by limiting the number of copies made. McCarthy's VR piece was only released in an edition of three, and Lemmerz's in an edition of five.
By deliberately restricting supply, galleries create a market for virtual reality art that is based on scarcity -- as with paintings and sculptures. But unlike other art, virtual reality pieces are infinitely replicable. In their most basic form, they are simply digital files that can be experienced by anyone with a VR headset.
While an artist can easily limit the editions of a sculpture, it is much harder to curb the spread of a digital file -- something that the music and movie industries discovered the hard way. But this presents opportunities as well as threats, according to Nedvetskaia.
"In five years' time every single one of us might have a set of virtual reality goggles in addition to our iPhone," she said. "So don't rule out the possibility that editions of virtual reality artworks might be made at an affordable price so the public can view them. We're really on the cusp of this market being born right now --the possibilities are limitless."
The art world establishment is yet to fully embrace digital art. Neither Christie's nor Sotheby's has sold a VR work. But both have expressed cautious interest in the medium.
In March this year, Sotheby's became the first major auction house to exhibit virtual reality art. Hosted at its New York headquarters, the technology-focused exhibition "Bunker" featured "La Apparizione" and a VR work by Sarah Rothberg called "Memory/Place: My House."
Christie's chief marketing officer, Marc Sands, believes that it is only a matter of time before VR starts appearing at major auctions.
"Response to (virtual reality art) from both consignors and buyers is largely positive but to date we have not discovered the 'killer' version of VR," Sands said. "However, as with many things digital, it will come sometime soon."
Read more here:
Virtually unknown: How to put a price tag on the most progressive form of art - CNN
Posted in Virtual Reality
The 10 best virtual reality apps – The Telegraph – Telegraph.co.uk
Posted: at 4:18 pm
Virtual reality apps can immerse you in incredible 360-degree visuals. You don't need to splash hundreds of pounds on headsets such as the Oculus Rift or HTC Vive; you can enjoy a great VR experience using your smartphone and some brilliant apps.
While early VR experiences were limited to advanced headsets, there is now a range of options for turning your smartphone into an immersive device.
To use VR on your smartphone you can pick up a budget headset such as Google's Daydream View, the Samsung Gear VR, or even the simple Google Cardboard. There are also plenty of budget virtual reality headsets that work well with iPhones, such as the Homido V2.
There are plenty of VR apps available on Google's Play store and iTunes, some of which are dedicated to the medium and some, like YouTube, that have additional VR capabilities and work with headsets.
Google also has a dedicated app store for its Daydream VR headset, which works with Android phones including the Google Pixel and Huawei Mate 9 Pro. It will soon be available for the Samsung Galaxy S8. Apps for the Samsung Gear VR are available on the Oculus Store.
Read more:
The 10 best virtual reality apps - The Telegraph - Telegraph.co.uk
Posted in Virtual Reality
Laurene Powell Jobs Leads Funding Round in Virtual-Reality Firm – Bloomberg
Posted: at 4:18 pm
Laurene Powell Jobs's Emerson Collective LLC and Singapore's sovereign wealth fund Temasek Holdings Pte led a $40 million fundraising round in the virtual-reality company Within, joining investors that include Andreessen Horowitz, 21st Century Fox Inc. and Raine Ventures.
The funding brings the total for the Los Angeles-based software startup to $56.6 million, according to a statement Wednesday. WPP Plc, the world's largest advertising agency, and Macro Ventures also participated in the latest round. Within is already working with companies including Fox, NBCUniversal and Vice Media.
Founded by Aaron Koblin and Chris Milk, Within will use the money for development of augmented reality, or AR, experiences -- an emerging form of digital entertainment. The funding reflects growing investor confidence in potential commercial applications for AR and virtual reality, a market that Goldman Sachs Group Inc. says may reach $182 billion by 2025.
"In terms of VR/AR content creation, they are our bet and we are extremely excited about teaming with them," Lars Dalgaard, general partner at Andreessen Horowitz, said in an interview. "These two guys are the pied pipers that people are following. They have built a real brand."
Technology and entertainment companies are beginning to develop commercial uses for AR and VR. Apple Inc. is preparing to put augmented reality software in as many as a billion mobile devices this autumn. At the same time, Hollywood studios have been looking for ways to deploy the technology for moviegoers. The funding gives Within the ability to expand and develop new technology and content.
The widow of Apple co-founder Steve Jobs, Powell Jobs founded Emerson Collective to focus on education, immigration, the environment and other social justice initiatives. She's worth about $18 billion, according to the Bloomberg Billionaires Index.
With Temasek as a partner, international expansion is on the horizon, and the company will be working more closely with brands to reach consumers, Milk said.
"Our goal is to connect the world through immersive stories," Milk, Within's chief executive officer, said in an interview. "That includes both developing the new storytelling language in both of those mediums, as well as the technology to support them on a lot of different current hardware, headsets and devices."
Children using Within's AR Goldilocks storybook can read a story aloud and see the characters they describe appear on an iPad or mobile-phone screen, superimposed in reality. The technology was featured last month at Apple's Worldwide Developers Conference.
Walt Disney Co. has revealed plans for an AR headset that will feature Star Wars games, like Holo Chess, the board game with battling holographic pieces seen in A New Hope.
Formerly called Vrse, Within adopted its current name in June 2016 when the company completed a $12.6 million funding round.
One of Within's recent projects, Life of Us, is a multi-user experience based on evolution, with viewers developing from amoeba to apes to humans. It can be seen at Imax VR centers, a virtual-reality pilot project by Imax Corp., the operator of large-format movie theaters.
With assistance by Lizette Chapman, and Alex Webb
Read the original post:
Laurene Powell Jobs Leads Funding Round in Virtual-Reality Firm - Bloomberg
Posted in Virtual Reality
Passchendaele virtual reality view could keep the horrors of the First … – Telegraph.co.uk
Posted: at 4:18 pm
With the First World War, we have got the chance of doing something different. We can keep it fresh.
It will be interesting to see whether my kids find the First World War as distant as I find the Battle of Agincourt, or will the immediacy of it be maintained because we have got this technology.
It's about taking the images and giving them the full treatment of 2017, which is immersive, which is a whole step above sitting on your bottom watching television.
These sets have primarily been used by gamers so far, but I think history has the most to gain from these.
Bill Hunt, a Chelsea Pensioner who spent 25 years in the Royal Horse Guards, on Tuesday became one of the first members of the public to try out the new films.
Go here to read the rest:
Passchendaele virtual reality view could keep the horrors of the First ... - Telegraph.co.uk
Posted in Virtual Reality
AI winter – Wikipedia
Posted: at 4:18 pm
In the history of artificial intelligence, an AI winter is a period of reduced funding and interest in artificial intelligence research.[1] The term was coined by analogy to the idea of a nuclear winter. The field has experienced several hype cycles, followed by disappointment and criticism, followed by funding cuts, followed by renewed interest years or decades later.
The term first appeared in 1984 as the topic of a public debate at the annual meeting of AAAI (then called the "American Association of Artificial Intelligence"). It is a chain reaction that begins with pessimism in the AI community, followed by pessimism in the press, followed by a severe cutback in funding, followed by the end of serious research. At the meeting, Roger Schank and Marvin Minsky, two leading AI researchers who had survived the "winter" of the 1970s, warned the business community that enthusiasm for AI had spiraled out of control in the '80s and that disappointment would certainly follow. Three years later, the billion-dollar AI industry began to collapse.
Hype is common in many emerging technologies, as seen in the railway mania or the dot-com bubble. The AI winter is primarily a collapse in the perception of AI by government bureaucrats and venture capitalists. Despite the rise and fall of AI's reputation, the field has continued to develop new and successful technologies. AI researcher Rodney Brooks would complain in 2002 that "there's this stupid myth out there that AI has failed, but AI is around you every second of the day." In 2005, Ray Kurzweil agreed: "Many observers still think that the AI winter was the end of the story and that nothing since has come of the AI field. Yet today many thousands of AI applications are deeply embedded in the infrastructure of every industry."
Enthusiasm and optimism about AI have gradually increased since the low point in 1990, and by the 2010s artificial intelligence (and especially the sub-field of machine learning) had become widely used and well funded; many in the technology industry predict that it will soon succeed in creating machines with artificial general intelligence. As Ray Kurzweil writes: "the AI winter is long since over."
There were two major winters, in 1974–80 and 1987–93,[6] as well as several smaller episodes, which are described below.
During the Cold War, the US government was particularly interested in the automatic, instant translation of Russian documents and scientific reports. The government aggressively supported efforts at machine translation starting in 1954. At the outset, the researchers were optimistic. Noam Chomsky's new work in grammar was streamlining the translation process and there were "many predictions of imminent 'breakthroughs'".[7]
However, researchers had underestimated the profound difficulty of word-sense disambiguation. In order to translate a sentence, a machine needed to have some idea what the sentence was about, otherwise it made mistakes. An anecdotal example was "the spirit is willing but the flesh is weak." Translated back and forth with Russian, it became "the vodka is good but the meat is rotten." Similarly, "out of sight, out of mind" became "blind idiot". Later researchers would call this the commonsense knowledge problem.
By 1964, the National Research Council had become concerned about the lack of progress and formed the Automatic Language Processing Advisory Committee (ALPAC) to look into the problem. They concluded, in a famous 1966 report, that machine translation was more expensive, less accurate and slower than human translation. After spending some 20 million dollars, the NRC ended all support. Careers were destroyed and research ended.[7]
Machine translation is still an open research problem in the 21st century, which has been met with some success (Google Translate, Yahoo Babel Fish).
Some of the earliest work in AI used networks or circuits of connected units to simulate intelligent behavior. Examples of this kind of work, called "connectionism", include Walter Pitts and Warren McCulloch's first description of a neural network for logic and Marvin Minsky's work on the SNARC system. In the late '50s, most of these approaches were abandoned when researchers began to explore symbolic reasoning as the essence of intelligence, following the success of programs like the Logic Theorist and the General Problem Solver.[9]
However, one type of connectionist work continued: the study of perceptrons, invented by Frank Rosenblatt, who kept the field alive with his salesmanship and the sheer force of his personality.[10] He optimistically predicted that the perceptron "may eventually be able to learn, make decisions, and translate languages".[11] Mainstream research into perceptrons came to an abrupt end in 1969, when Marvin Minsky and Seymour Papert published the book Perceptrons, which was perceived as outlining the limits of what perceptrons could do.
Connectionist approaches were abandoned for the next decade or so. While important work, such as Paul Werbos' discovery of backpropagation, continued in a limited way, major funding for connectionist projects was difficult to find in the 1970s and early '80s.[12] The "winter" of connectionist research came to an end in the middle '80s, when the work of John Hopfield, David Rumelhart and others revived large scale interest in neural networks.[13] Rosenblatt did not live to see this, however, as he died in a boating accident shortly after Perceptrons was published.[11]
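For readers unfamiliar with the model at the center of this episode, the perceptron is simple enough to sketch in a few lines. The following Python snippet shows Rosenblatt-style training on an invented, linearly separable toy problem; the data and parameter choices are illustrative, not drawn from any historical system mentioned above.

```python
import numpy as np

# Toy linearly separable data: each input has a constant bias feature
# appended, and labels are in {-1, +1} (this behaves like a logical AND).
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])

w = np.zeros(3)          # weights, including the bias weight
learning_rate = 1.0

for epoch in range(20):
    errors = 0
    for xi, target in zip(X, y):
        prediction = 1 if np.dot(w, xi) > 0 else -1
        if prediction != target:
            # Rosenblatt's update: move the weights toward misclassified examples.
            w += learning_rate * target * xi
            errors += 1
    if errors == 0:      # converged; guaranteed here because the data are separable
        break

print("learned weights:", w)
```

A single layer of this kind cannot represent functions such as XOR, which is essentially the limitation Minsky and Papert formalized in Perceptrons.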
In 1973, professor Sir James Lighthill was asked by the UK Parliament to evaluate the state of AI research in the United Kingdom. His report, now called the Lighthill report, criticized the utter failure of AI to achieve its "grandiose objectives." He concluded that nothing being done in AI couldn't be done in other sciences. He specifically mentioned the problem of "combinatorial explosion" or "intractability", which implied that many of AI's most successful algorithms would grind to a halt on real world problems and were only suitable for solving "toy" versions.[14]
The report was contested in a debate broadcast in the BBC "Controversy" series in 1973. The debate "The general purpose robot is a mirage" from the Royal Institution was Lighthill versus the team of Donald Michie, John McCarthy and Richard Gregory.[15] McCarthy later wrote that "the combinatorial explosion problem has been recognized in AI from the beginning."[16]
The report led to the complete dismantling of AI research in England.[14] AI research continued in only a few top universities (Edinburgh, Essex and Sussex). This "created a bow-wave effect that led to funding cuts across Europe," writes James Hendler.[17] Research would not revive on a large scale until 1983, when Alvey (a research project of the British Government) began to fund AI again from a war chest of £350 million in response to the Japanese Fifth Generation Project (see below). Alvey had a number of UK-only requirements which did not sit well internationally, especially with US partners, and lost Phase 2 funding.
During the 1960s, the Defense Advanced Research Projects Agency (then known as "ARPA", now known as "DARPA") provided millions of dollars for AI research with almost no strings attached. DARPA's director in those years, J. C. R. Licklider, believed in "funding people, not projects"[18] and allowed AI's leaders (such as Marvin Minsky, John McCarthy, Herbert A. Simon or Allen Newell) to spend it almost any way they liked.
This attitude changed after the passage of the Mansfield Amendment in 1969, which required DARPA to fund "mission-oriented direct research, rather than basic undirected research."[19] Pure undirected research of the kind that had gone on in the '60s would no longer be funded by DARPA. Researchers now had to show that their work would soon produce some useful military technology. AI research proposals were held to a very high standard. The situation was not helped when the Lighthill report and DARPA's own study (the American Study Group) suggested that most AI research was unlikely to produce anything truly useful in the foreseeable future. DARPA's money was directed at specific projects with identifiable goals, such as autonomous tanks and battle management systems. By 1974, funding for AI projects was hard to find.[19]
AI researcher Hans Moravec blamed the crisis on the unrealistic predictions of his colleagues: "Many researchers were caught up in a web of increasing exaggeration. Their initial promises to DARPA had been much too optimistic. Of course, what they delivered stopped considerably short of that. But they felt they couldn't in their next proposal promise less than in the first one, so they promised more."[20] The result, Moravec claims, is that some of the staff at DARPA had lost patience with AI research. "It was literally phrased at DARPA that 'some of these people were going to be taught a lesson [by] having their two-million-dollar-a-year contracts cut to almost nothing!'" Moravec told Daniel Crevier.[21]
While the autonomous tank project was a failure, the battle management system (the Dynamic Analysis and Replanning Tool) proved to be enormously successful, saving billions in the first Gulf War, repaying all of DARPA's investment in AI[22] and justifying DARPA's pragmatic policy.[23]
DARPA was deeply disappointed with researchers working on the Speech Understanding Research program at Carnegie Mellon University. DARPA had hoped for, and felt it had been promised, a system that could respond to voice commands from a pilot. The SUR team had developed a system which could recognize spoken English, but only if the words were spoken in a particular order. DARPA felt it had been duped and, in 1974, they cancelled a three million dollar a year grant.[24]
Many years later, successful commercial speech recognition systems would use the technology developed by the Carnegie Mellon team (such as hidden Markov models) and the market for speech recognition systems would reach $4 billion by 2001.[25]
In the 1980s, a form of AI program called an "expert system" was adopted by corporations around the world. The first commercial expert system was XCON, developed at Carnegie Mellon for Digital Equipment Corporation, and it was an enormous success: it was estimated to have saved the company 40 million dollars over just six years of operation. Corporations around the world began to develop and deploy expert systems, and by 1985 they were spending over a billion dollars on AI, most of it on in-house AI departments. An industry grew up to support them, including software companies like Teknowledge and Intellicorp (KEE), and hardware companies like Symbolics and Lisp Machines Inc. who built specialized computers, called Lisp machines, that were optimized to process the programming language Lisp, the preferred language for AI.[26]
In 1987, three years after Minsky and Schank's prediction, the market for specialized AI hardware collapsed. Workstations by companies like Sun Microsystems offered a powerful alternative to LISP machines and companies like Lucid offered a LISP environment for this new class of workstations. The performance of these general workstations became an increasingly difficult challenge for LISP Machines. Companies like Lucid and Franz Lisp offered increasingly more powerful versions of LISP. For example, benchmarks were published showing workstations maintaining a performance advantage over LISP machines.[27] Later desktop computers built by Apple and IBM would also offer a simpler and more popular architecture to run LISP applications on. By 1987 they had become more powerful than the more expensive Lisp machines. The desktop computers had rule-based engines such as CLIPS available.[28] These alternatives left consumers with no reason to buy an expensive machine specialized for running LISP. An entire industry worth half a billion dollars was replaced in a single year.[29]
Commercially, many Lisp companies failed, like Symbolics, Lisp Machines Inc., Lucid Inc., etc. Other companies, like Texas Instruments and Xerox abandoned the field. However, a number of customer companies (that is, companies using systems written in Lisp and developed on Lisp machine platforms) continued to maintain systems. In some cases, this maintenance involved the assumption of the resulting support work.
By the early 90s, the earliest successful expert systems, such as XCON, proved too expensive to maintain. They were difficult to update, they could not learn, they were "brittle" (i.e., they could make grotesque mistakes when given unusual inputs), and they fell prey to problems (such as the qualification problem) that had been identified years earlier in research in nonmonotonic logic. Expert systems proved useful, but only in a few special contexts.[1][30] Another problem dealt with the computational hardness of truth maintenance efforts for general knowledge. KEE used an assumption-based approach (see NASA, TEXSYS) supporting multiple-world scenarios that was difficult to understand and apply.
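To make the "brittleness" complaint concrete, the sketch below is a deliberately minimal forward-chaining rule engine in Python. The rules and facts are invented for illustration and are orders of magnitude simpler than anything in XCON or KEE, but they show how a rule-based system produces nothing useful the moment its input drifts outside what the rule authors anticipated.

```python
# Minimal forward-chaining rule engine: each rule asserts a new fact once
# all of its required facts are present in working memory.
RULES = [
    ({"customer_wants_server", "needs_high_memory"}, "add_memory_board"),
    ({"add_memory_board"}, "add_larger_power_supply"),
]

def infer(initial_facts):
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Works when the input matches what the rule author anticipated...
print(infer({"customer_wants_server", "needs_high_memory"}))
# ...but fails silently on an unanticipated but equivalent phrasing: the
# system has no way to know that "needs_lots_of_ram" means the same thing.
print(infer({"customer_wants_server", "needs_lots_of_ram"}))
```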
The few remaining expert system shell companies were eventually forced to downsize and search for new markets and software paradigms, like case based reasoning or universal database access. The maturation of Common Lisp saved many systems such as ICAD which found application in knowledge-based engineering. Other systems, such as Intellicorp's KEE, moved from Lisp to a C++ (variant) on the PC and helped establish object-oriented technology (including providing major support for the development of UML).
In 1981, the Japanese Ministry of International Trade and Industry set aside $850 million for the Fifth generation computer project. Their objectives were to write programs and build machines that could carry on conversations, translate languages, interpret pictures, and reason like human beings. By 1991, the impressive list of goals penned in 1981 had not been met. Indeed, some of them had not been met in 2001, or 2011. As with other AI projects, expectations had run much higher than what was actually possible.[31]
In 1983, in response to the fifth generation project, DARPA again began to fund AI research through the Strategic Computing Initiative. As originally proposed, the project would begin with practical, achievable goals, which even included artificial general intelligence as a long-term objective. The program was under the direction of the Information Processing Technology Office (IPTO) and was also directed at supercomputing and microelectronics. By 1985 it had spent $100 million and 92 projects were underway at 60 institutions, half in industry, half in universities and government labs. AI research was generously funded by the SCI.[32]
Jack Schwartz, who ascended to the leadership of IPTO in 1987, dismissed expert systems as "clever programming" and cut funding to AI "deeply and brutally," "eviscerating" SCI. Schwartz felt that DARPA should focus its funding only on those technologies which showed the most promise; in his words, DARPA should "surf", rather than "dog paddle", and he felt strongly AI was not "the next wave". Insiders in the program cited problems in communication, organization and integration. A few projects survived the funding cuts, including the pilot's assistant and an autonomous land vehicle (which were never delivered) and the DART battle management system, which (as noted above) was successful.[33]
A survey of reports from the mid-2000s suggests that AI's reputation was still less than stellar.
Many researchers in AI in the mid 2000s deliberately called their work by other names, such as informatics, machine learning, analytics, knowledge-based systems, business rules management, cognitive systems, intelligent systems, intelligent agents or computational intelligence, to indicate that their work emphasizes particular tools or is directed at a particular sub-problem. Although this may be partly because they consider their field to be fundamentally different from AI, it is also true that the new names help to procure funding by avoiding the stigma of false promises attached to the name "artificial intelligence."[36]
"Many observers still think that the AI winter was the end of the story and that nothing since come of the AI field," wrote Ray Kurzweil in 2005, "yet today many thousands of AI applications are deeply embedded in the infrastructure of every industry." In the late '90s and early 21st century, AI technology became widely used as elements of larger systems,[37] but the field is rarely credited for these successes. In 2006, Nick Bostrom explained that "a lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore."[38] Rodney Brooks stated around the same time that "there's this stupid myth out there that AI has failed, but AI is around you every second of the day."
Technologies developed by AI researchers have achieved commercial success in a number of domains, such as machine translation, data mining, industrial robotics, logistics,[39] speech recognition,[40] banking software,[41] medical diagnosis[41] and Google's search engine.[42]
Fuzzy logic controllers have been developed for automatic gearboxes in automobiles: the 2006 Audi TT, VW Touareg[43] and VW Caravell feature the DSP transmission, which utilizes fuzzy logic, and a number of Škoda variants (Škoda Fabia) also currently include a fuzzy logic-based controller. Camera sensors widely utilize fuzzy logic to enable focus.
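The text above names fuzzy logic controllers without showing what one does. The following Python sketch of a toy temperature-to-fan-speed controller, with triangular membership functions and weighted-average defuzzification, is an invented illustration of the general technique, not the actual gearbox or camera firmware.

```python
def triangular(x, a, b, c):
    """Triangular membership function peaking at b and zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fan_speed(temp_c):
    # Fuzzify: to what degree is this temperature "cold", "warm" or "hot"?
    cold = triangular(temp_c, -10, 0, 18)
    warm = triangular(temp_c, 10, 21, 30)
    hot = triangular(temp_c, 24, 35, 50)
    # Each fuzzy set votes for a crisp fan speed; defuzzify by weighted average.
    weights = [cold, warm, hot]
    speeds = [0.0, 40.0, 100.0]          # percent
    total = sum(weights)
    return sum(w * s for w, s in zip(weights, speeds)) / total if total else 0.0

for t in (5, 20, 28, 40):
    print(t, "degC ->", round(fan_speed(t), 1), "% fan")
```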
Heuristic search and data analytics are both technologies that have developed from the evolutionary computing and machine learning subdivision of the AI research community. Again, these techniques have been applied to a wide range of real world problems with considerable commercial success.
In the case of Heuristic Search, ILOG has developed a large number of applications including deriving job shop schedules for many manufacturing installations.[44] Many telecommunications companies also make use of this technology in the management of their workforces, for example BT Group has deployed heuristic search[45] in a scheduling application that provides the work schedules of 20,000 engineers.
Data analytics technology utilizing algorithms for the automated formation of classifiers, developed in the supervised machine learning community in the 1990s (for example, TDIDT, Support Vector Machines, Neural Nets, IBL), is now used pervasively by companies for marketing survey targeting and discovery of trends and features in data sets.
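As a sense of scale for how routine these classifier families have become, the snippet below trains a small decision tree (the TDIDT family mentioned above) with scikit-learn on one of its bundled datasets. It is a generic sketch for illustration, not one of the commercial deployments described in the text.

```python
# A minimal supervised-learning example in the spirit of the classifier
# families listed above. Requires scikit-learn (pip install scikit-learn).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```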
Primarily the way researchers and economists judge the status of an AI winter is by reviewing which AI projects are being funded, how much and by whom. Trends in funding are often set by major funding agencies in the developed world. Currently, DARPA and a civilian funding program called EU-FP7 provide much of the funding for AI research in the US and European Union.
As of 2007, DARPA was soliciting AI research proposals under a number of programs, including the Grand Challenge Program, the Cognitive Technology Threat Warning System (CT2WS), "Human Assisted Neural Devices (SN07-43)", "Autonomous Real-Time Ground Ubiquitous Surveillance-Imaging System (ARGUS-IS)" and "Urban Reasoning and Geospatial Exploitation Technology (URGENT)".
Perhaps best known is DARPA's Grand Challenge Program[46] which has developed fully automated road vehicles that can successfully navigate real world terrain[47] in a fully autonomous fashion.
DARPA has also supported programs on the Semantic Web with a great deal of emphasis on intelligent management of content and automated understanding. However James Hendler, the manager of the DARPA program at the time, expressed some disappointment with the government's ability to create rapid change, and moved to working with the World Wide Web Consortium to transition the technologies to the private sector.
The EU-FP7 funding program provides financial support to researchers within the European Union. In 2007/2008, it was funding AI research under the Cognitive Systems: Interaction and Robotics Programme (€193m), the Digital Libraries and Content Programme (€203m) and the FET programme (€185m).[48]
Concerns are sometimes raised that a new AI winter could be triggered by any overly ambitious or unrealistic promise by prominent AI scientists. For example, some researchers feared that the widely publicized promises in the early 1990s that Cog would show the intelligence of a human two-year-old might lead to an AI winter.
James Hendler observed in 2008 that AI funding in both the EU and the US was being channeled more into applications and cross-breeding with traditional sciences, such as bioinformatics.[28] This shift away from basic research is happening at the same time as a drive towards applications of, for example, the semantic web. Invoking the pipeline argument (see underlying causes), Hendler saw a parallel with the '80s winter and warned of a coming AI winter in the '10s.
There are also constant reports that another AI spring is imminent or has already occurred.
Several explanations have been put forth for the cause of AI winters in general. As AI progressed from government funded applications to commercial ones, new dynamics came into play. While hype is the most commonly cited cause, the explanations are not necessarily mutually exclusive.
The AI winters can[citation needed] be partly understood as a sequence of over-inflated expectations and subsequent crash seen in stock-markets and exemplified[citation needed] by the railway mania and dotcom bubble. In a common pattern in development of new technology (known as hype cycle), an event, typically a technological breakthrough, creates publicity which feeds on itself to create a "peak of inflated expectations" followed by a "trough of disillusionment". Since scientific and technological progress can't keep pace with the publicity-fueled increase in expectations among investors and other stakeholders, a crash must follow. AI technology seems to be no exception to this rule.[citation needed]
Another factor is AI's place in the organisation of universities. Research on AI often takes the form of interdisciplinary research. One example is the Master of Artificial Intelligence[53] program at K.U. Leuven, which involves lecturers from Philosophy to Mechanical Engineering. AI is therefore prone to the same problems other types of interdisciplinary research face. Funding is channeled through the established departments and, during budget cuts, there will be a tendency to shield the "core contents" of each department at the expense of interdisciplinary and less traditional research projects.
Downturns in a country's national economy cause budget cuts in universities. The "core contents" tendency worsens the effect on AI research, and investors in the market are likely to put their money into less risky ventures during a crisis. Together this may amplify an economic downturn into an AI winter. It is worth noting that the Lighthill report came at a time of economic crisis in the UK,[54] when universities had to make cuts and the question was only which programs should go.
Early in computing history, the potential of neural networks was understood but it could not be realized: even fairly simple networks require significant computing capacity by today's standards.
It is common to see the relationship between basic research and technology as a pipeline. Advances in basic research give birth to advances in applied research, which in turn lead to new commercial applications. From this it is often argued that a lack of basic research will lead to a drop in marketable technology some years down the line. This view was advanced by James Hendler in 2008,[28] who claimed that the fall of expert systems in the late '80s was not due to an inherent and unavoidable brittleness of expert systems, but to funding cuts in basic research in the '70s. These expert systems advanced in the '80s through applied research and product development, but, by the end of the decade, the pipeline had run dry and expert systems were unable to produce improvements that could have overcome the brittleness and secured further funding.
The fall of the Lisp machine market and the failure of the fifth generation computers were cases of expensive advanced products being overtaken by simpler and cheaper alternatives. This fits the definition of a low-end disruptive technology, with the Lisp machine makers being marginalized. Expert systems were carried over to the new desktop computers by, for instance, CLIPS, so the fall of the Lisp machine market and the fall of expert systems are strictly speaking two separate events. Still, the failure to adapt to such a change in the outside computing milieu is cited as one reason for the 1980s AI winter.[28]
Several philosophers, cognitive scientists and computer scientists have speculated on where AI might have failed and what lies in its future. Hubert Dreyfus highlighted flawed assumptions of AI research in the past and, as early as 1966, correctly predicted that the first wave of AI research would fail to fulfill the very public promises it was making. Other critics, like Noam Chomsky, have argued that AI is headed in the wrong direction, in part because of its heavy reliance on statistical techniques.[55] Chomsky's comments fit into a larger debate with Peter Norvig, centered around the role of statistical methods in AI. The exchange between the two started with comments made by Chomsky at a symposium at MIT,[56] to which Norvig wrote a response.[57]
Visit link:
AI winter - Wikipedia
Posted in Ai
Google launches its own AI Studio to foster machine intelligence startups – TechCrunch
Posted: at 4:18 pm
A new week brings a fresh Google initiative targeting AI startups. We started the month with the announcement of Gradient Ventures, Google's on-balance sheet AI investment vehicle. Two days later we watched the finalists of Google Cloud's machine learning competition pitch to a panel of top AI investors. And today, Google's Launchpad is announcing a new hands-on Studio program to feed hungry AI startups the resources they need to get off the ground and scale.
The thesis is simple: not all startups are created the same. AI startups love data and struggle to get enough of it. They often have to go to market in phases, iterating as new data becomes available. And they typically have highly technical teams and a dearth of product talent. You get the picture.
The Launchpad Studio aims to address these needs head-on with specialized data sets, simulation tools and prototyping assistance. Another selling point of the Launchpad Studio is that startups accepted will have access to Google talent, including engineers, IP experts and product specialists.
"Launchpad, to date, operates in 40 countries around the world," explains Roy Geva Glasberg, Google's Global Lead for Accelerator efforts. "We have worked with over 10,000 startups and trained over 2,000 mentors globally."
This core mentor base will serve as a recruiting pool for mentors that will assist the Studio. Barak Hachamov, board member for Launchpad, has been traveling around the world with Glasberg to identify new mentors for the program.
The idea of a startup studio isn't new. It has been attempted a handful of times in recent years, but seems to have finally caught on with Andy Rubin's Playground Global. Playground offers startups extensive services and access to top talent to dial-in products and compete with the largest of tech companies.
On the AI Studio front, Yoshua Bengio's Element AI raised a $102 million Series A to create a similar program. Bengio, one of, if not the, most famous AI researchers, can help attract top machine learning talent to enable recruiting parity with top AI groups like Google's DeepMind and Facebook's FAIR. Launchpad Studio won't have Bengio, but it will bring Peter Norvig, Dan Ariely, Yossi Matias and Chris DiBona to the table.
But unlike Playground's $300 million accompanying venture capital arm and Element's own coffers, Launchpad Studio doesn't actually have any capital to deploy. On one hand, capital completes the package. On the other, I've never heard a good AI startup complain about not being able to raise funding.
Launchpad Studio sits on top of the Google Developer Launchpad network. The group has been operating an accelerator with global scale for some time now. Now on its fourth class of startups, the team has had time to flesh out its vision and build relationships with experts within Google to ease startup woes.
"Launchpad has positioned itself as the Google global program for startups," asserts Glasberg. "It is the most scalable tool Google has today to reach, empower, train and support startups globally."
With all the resources in the world, Google's biggest challenge with its Studio won't be vision or execution, but this doesn't guarantee everything will be smooth sailing. Between GV, Capital G, Gradient Ventures, GCP and Studio, entrepreneurs are going to have a lot of potential touch-points with the company.
On paper, Launchpad Studio is the Switzerland of Google's programs. It doesn't aim to make money or strengthen Google Cloud's positioning. But from the perspective of founders, there's bound to be some confusion. In an ideal world we will see a meeting of the minds between Launchpad's Glasberg, Gradient's Anna Patterson and GCP's Sam O'Keefe.
The Launchpad Studio will be based in San Francisco, with additional operations in Tel Aviv and New York City. Eventually Toronto, London, Bangalore and Singapore will host events locally for AI founders.
Applications to the Studio are now open; if you're interested, you can apply here. The program itself is stage-agnostic, so there are no restrictions on size. Ideally, early and later-stage startups can learn from each other as they scale machine learning models to larger audiences.
See the original post here:
Google launches its own AI Studio to foster machine intelligence startups - TechCrunch
Posted in Ai
The data that transformed AI research, and possibly the world – Quartz
Posted: at 4:18 pm
In 2006, Fei-Fei Li started ruminating on an idea.
Li, a newly-minted computer science professor at University of Illinois Urbana-Champaign, saw her colleagues across academia and the AI industry hammering away at the same concept: a better algorithm would make better decisions, regardless of the data.
But she realized a limitation to this approach: the best algorithm wouldn't work well if the data it learned from didn't reflect the real world.
Her solution: build a better dataset.
"We decided we wanted to do something that was completely historically unprecedented," Li said, referring to a small team who would initially work with her. "We're going to map out the entire world of objects."
The resulting dataset was called ImageNet. Originally published in 2009 as a research poster stuck in the corner of a Miami Beach conference center, the dataset quickly evolved into an annual competition to see which algorithms could identify objects in the datasets images with the lowest error rate. Many see it as the catalyst for the AI boom the world is experiencing today.
Alumni of the ImageNet challenge can be found in every corner of the tech world. The contest's first winners in 2010 went on to take senior roles at Baidu, Google, and Huawei. Matthew Zeiler built Clarifai based off his 2013 ImageNet win, and is now backed by $40 million in VC funding. In 2014, Google split the winning title with two researchers from Oxford, who were quickly snapped up and added to its recently-acquired DeepMind lab.
Li herself is now chief scientist at Google Cloud, a professor at Stanford, and director of the university's AI lab.
Today, she'll take the stage at CVPR to talk about ImageNet's annual results for the last time: 2017 was the final year of the competition. In just seven years, the winning accuracy in classifying objects in the dataset rose from 71.8% to 97.3%, surpassing human abilities and effectively proving that bigger data leads to better decisions.
Even as the competition ends, its legacy is already taking shape. Since 2009, dozens of new AI research datasets have been introduced in subfields like computer vision, natural language processing, and voice recognition.
"The paradigm shift of the ImageNet thinking is that while a lot of people are paying attention to models, let's pay attention to data," Li said. "Data will redefine how we think about models."
In the late 1980s, Princeton psychologist George Miller started a project called WordNet, with the aim of building a hierarchical structure for the English language. It would be sort of like a dictionary, but words would be shown in relation to other words rather than in alphabetical order. For example, within WordNet, the word "dog" would be nested under "canine," which would be nested under "mammal," and so on. It was a way to organize language that relied on machine-readable logic, and it amassed more than 155,000 indexed words.
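WordNet is still distributed today, for example through the NLTK library, so the dog-under-canine-under-mammal hierarchy described above can be walked in a few lines of Python. This is a minimal sketch and assumes the `wordnet` corpus has already been downloaded via `nltk.download("wordnet")`.

```python
# Walking WordNet's hypernym ("is-a") hierarchy with NLTK.
from nltk.corpus import wordnet as wn

synset = wn.synset("dog.n.01")      # the everyday noun sense of "dog"
chain = [synset]
while chain[-1].hypernyms():        # follow the first hypernym upward
    chain.append(chain[-1].hypernyms()[0])

print(" -> ".join(s.name() for s in chain))
# prints something like: dog.n.01 -> canine.n.02 -> carnivore.n.01 -> ... -> entity.n.01
```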
Li, in her first teaching job at UIUC, had been grappling with one of the core tensions in machine learning: overfitting and generalization. When an algorithm can only work with data that's close to what it's seen before, the model is considered to be overfitting to the data; it can't understand anything more general past those examples. On the other hand, if a model doesn't pick up the right patterns between the data, it's overgeneralizing.
Finding the perfect algorithm seemed distant, Li says. She saw that previous datasets didn't capture how variable the world could be: even just identifying pictures of cats is infinitely complex. But by giving the algorithms more examples of how complex the world could be, it made mathematical sense that they could fare better. If you only saw five pictures of cats, you'd only have five camera angles, lighting conditions, and maybe varieties of cat. But if you've seen 500 pictures of cats, there are many more examples to draw commonalities from.
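The overfitting-versus-generalization tension is easy to reproduce numerically. The sketch below uses synthetic data and polynomial curve fitting, an invented stand-in for the cat-photo example, to show how a model with too few examples can fit its training set perfectly and still do poorly on everything else.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    x = rng.uniform(0, 1, n)
    return x, np.sin(2 * np.pi * x) + rng.normal(0, 0.2, n)   # noisy target

x_train, y_train = make_data(5)      # very few examples, like five cat photos
x_test, y_test = make_data(200)      # stand-in for "the rest of the world"

for degree in (1, 4):
    coeffs = np.polyfit(x_train, y_train, degree)
    test_error = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: test MSE = {test_error:.2f}")

# A degree-4 polynomial can pass through all 5 training points exactly,
# yet it often does worse on unseen data than the simpler fit: overfitting.
```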
Li started to read about how others had attempted to catalogue a fair representation of the world with data. During that search, she found WordNet.
Having read about WordNet's approach, Li met with professor Christiane Fellbaum, a researcher influential in the continued work on WordNet, during a 2006 visit to Princeton. Fellbaum had the idea that WordNet could have an image associated with each of the words, more as a reference rather than a computer vision dataset. Coming from that meeting, Li imagined something grander: a large-scale dataset with many examples of each word.
Months later Li joined the Princeton faculty, her alma mater, and started on the ImageNet project in early 2007. She started to build a team to help with the challenge, first recruiting a fellow professor, Kai Li, who then convinced Ph.D. student Jia Deng to transfer into Li's lab. Deng has helped run the ImageNet project through 2017.
"It was clear to me that this was something that was very different from what other people were doing, were focused on at the time," Deng said. "I had a clear idea that this would change how the game was played in vision research, but I didn't know how it would change."
The objects in the dataset would range from concrete objects, like pandas or churches, to abstract ideas like love.
Li's first idea was to hire undergraduate students for $10 an hour to manually find images and add them to the dataset. But back-of-the-napkin math quickly made Li realize that at the undergrads' rate of collecting images it would take 90 years to complete.
After the undergrad task force was disbanded, Li and the team went back to the drawing board. What if computer-vision algorithms could pick the photos from the internet, and humans would then just curate the images? But after a few months of tinkering with algorithms, the team came to the conclusion that this technique wasn't sustainable either: future algorithms would be constricted to only judging what algorithms were capable of recognizing at the time the dataset was compiled.
Undergrads were time-consuming, algorithms were flawed, and the team didn't have money. Li said the project failed to win any of the federal grants she applied for, receiving comments on proposals that it was shameful Princeton would research this topic, and that the only strength of the proposal was that Li was a woman.
A solution finally surfaced in a chance hallway conversation with a graduate student who asked Li whether she had heard of Amazon Mechanical Turk, a service where hordes of humans sitting at computers around the world would complete small online tasks for pennies.
"He showed me the website, and I can tell you literally that day I knew the ImageNet project was going to happen," she said. "Suddenly we found a tool that could scale, that we could not possibly dream of by hiring Princeton undergrads."
Mechanical Turk brought its own slew of hurdles, with much of the work fielded by two of Li's Ph.D. students, Jia Deng and Olga Russakovsky. For example, how many Turkers needed to look at each image? Maybe two people could determine that a cat was a cat, but an image of a miniature husky might require 10 rounds of validation. What if some Turkers tried to game or cheat the system? Li's team ended up creating a batch of statistical models for Turkers' behaviors to help ensure the dataset only included correct images.
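The article doesn't publish those statistical models, but the core idea of requiring agreement from multiple annotators before trusting a label can be sketched very simply. The thresholds and labels below are invented for illustration only.

```python
from collections import Counter

def consensus(labels, min_votes=2, min_agreement=0.8):
    """Accept an image's label only if enough annotators agree on it."""
    winner, count = Counter(labels).most_common(1)[0]
    if count >= min_votes and count / len(labels) >= min_agreement:
        return winner
    return None   # ambiguous: send the image out for more rounds of validation

print(consensus(["cat", "cat", "cat"]))                        # easy case -> "cat"
print(consensus(["husky", "wolf", "husky", "wolf", "husky"]))  # hard case -> None
```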
Even after finding Mechanical Turk, the dataset took two and a half years to complete. It consisted of 3.2 million labelled images, separated into 5,247 categories, sorted into 12 subtrees like mammal, vehicle, and furniture.
In 2009, Li and her team published the ImageNet paper with the dataset, to little fanfare. Li recalls that CVPR, a leading conference in computer vision research, only allowed a poster, instead of an oral presentation, and the team handed out ImageNet-branded pens to drum up interest. People were skeptical of the basic idea that more data would help them develop better algorithms.
"There were comments like 'If you can't even do one object well, why would you do thousands, or tens of thousands of objects?'" Deng said.
If data is the new oil, it was still dinosaur bones in 2009.
Later in 2009, at a computer vision conference in Kyoto, a researcher named Alex Berg approached Li to suggest adding an additional aspect to the contest, where algorithms would also have to locate where the pictured object was, not just that it existed. Li countered: "Come work with me."
Li, Berg, and Deng authored five papers together based on the dataset, exploring how algorithms would interpret such vast amounts of data. The first paper would become a benchmark for how an algorithm would react to thousands of classes of images, the predecessor to the ImageNet competition.
"We realized to democratize this idea we needed to reach out further," Li said, speaking about the first paper.
Li then approached a well-known image recognition competition in Europe called PASCAL VOC, which agreed to collaborate and co-brand their competition with ImageNet. The PASCAL challenge was a well-respected competition and dataset, but representative of the previous method of thinking. The competition only had 20 classes, compared to ImageNet's 1,000.
As the competition continued in 2011 and into 2012, it soon became a benchmark for how well image classification algorithms fared against the most complex visual dataset assembled at the time.
But researchers also began to notice something more going on than just a competition: their algorithms worked better when they trained using the ImageNet dataset.
"The nice surprise was that people who trained their models on ImageNet could use them to jumpstart models for other recognition tasks. You'd start with the ImageNet model and then you'd fine-tune it for another task," said Berg. "That was a breakthrough both for neural nets and just for recognition in general."
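What Berg describes, starting from an ImageNet-trained model and fine-tuning it for a new task, is now a standard recipe usually called transfer learning. A minimal sketch, assuming PyTorch's torchvision (which ships ImageNet-pretrained weights) and a hypothetical 10-class target task:

```python
import torch.nn as nn
import torchvision.models as models

# Load a network whose weights were trained on ImageNet.
model = models.resnet18(pretrained=True)

# Freeze the ImageNet-learned feature extractor...
for param in model.parameters():
    param.requires_grad = False

# ...and swap the final 1,000-class ImageNet head for the new task
# (here, a made-up 10-class recognition problem).
model.fc = nn.Linear(model.fc.in_features, 10)

# Only the new head's parameters are updated during fine-tuning.
trainable_params = [p for p in model.parameters() if p.requires_grad]
```

The jumpstart comes from reusing the features the ImageNet model already learned, so the new task needs far less data and training time.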
Two years after the first ImageNet competition, in 2012, something even bigger happened. Indeed, if the artificial intelligence boom we see today could be attributed to a single event, it would be the announcement of the 2012 ImageNet challenge results.
Geoffrey Hinton, Ilya Sutskever, and Alex Krizhevsky from the University of Toronto submitted a deep convolutional neural network architecture called AlexNet (still used in research to this day) that beat the field by a whopping 10.8 percentage points, a margin 41% better than the next best entry.
ImageNet couldn't have come at a better time for Hinton and his two students. Hinton had been working on artificial neural networks since the 1980s, and while some, like Yann LeCun, had been able to work the technology into ATM check readers through the influence of Bell Labs, Hinton's research hadn't found that kind of home. A few years earlier, research from graphics-card manufacturer Nvidia had made these networks run faster, but still not better than other techniques.
Hinton and his team had demonstrated that their networks could perform smaller tasks on smaller datasets, like handwriting detection, but they needed much more data to be useful in the real world.
"It was so clear that if you do a really good job on ImageNet, you could solve image recognition," said Sutskever.
Today, these convolutional neural networks are everywhere. Facebook, where LeCun is director of AI research, uses them to tag your photos; self-driving cars use them to detect objects; basically anything that knows what's in an image or video uses them. They can tell what's in an image by finding patterns between pixels on ascending levels of abstraction, using thousands to millions of tiny computations at each level. New images are put through the process to match their patterns to learned patterns. Hinton had been pushing his colleagues to take them seriously for decades, but now he had proof that they could beat other state-of-the-art techniques.
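As a rough illustration of "patterns between pixels on ascending levels of abstraction," a convolutional network is just a stack of layers in which each layer looks for patterns in the output of the one below it. The toy model below is not AlexNet; the layer sizes and the assumed 224x224 input are arbitrary choices made for the sketch.

```python
import torch.nn as nn

# A toy convolutional classifier for 3-channel, 224x224 images.
# Each Conv2d layer looks for patterns in the output of the previous
# one, building progressively more abstract features.
tiny_cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low level: edges, color blobs
    nn.ReLU(),
    nn.MaxPool2d(2),                              # 224x224 -> 112x112
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # mid level: textures, parts
    nn.ReLU(),
    nn.MaxPool2d(2),                              # 112x112 -> 56x56
    nn.Flatten(),
    nn.Linear(32 * 56 * 56, 1000),                # high level: one score per class
)
```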
"What's more amazing is that people were able to keep improving it with deep learning," Sutskever said, referring to the method that layers neural networks to allow more complex patterns to be processed, now the most popular flavor of artificial intelligence. "Deep learning is just the right stuff."
The 2012 ImageNet results sent computer vision researchers scrambling to replicate the process. Matthew Zeiler, an NYU Ph.D. student who had studied under Hinton, found out about the ImageNet results and, through the University of Toronto connection, got early access to the paper and code. He started working with Rob Fergus, an NYU professor who had also built a career working on neural networks. The two started to develop their submission for the 2013 challenge, and Zeiler eventually left a Google internship weeks early to focus on the submission.
Zeiler and Fergus won that year, and by 2014 all the high-scoring competitors would be deep neural networks, Li said.
"This ImageNet 2012 event was definitely what triggered the big explosion of AI today," Zeiler wrote in an email to Quartz. "There were definitely some very promising results in speech recognition shortly before this (again many of them sparked by Toronto), but they didn't take off publicly as much as that ImageNet win did in 2012 and the following years."
Today, many consider ImageNet solved: the error rate is incredibly low, at around 2%. But that's for classification, or identifying which object is in an image. This doesn't mean an algorithm knows the properties of that object, where it comes from, what it's used for, who made it, or how it interacts with its surroundings. In short, it doesn't actually understand what it's seeing. This is mirrored in speech recognition, and even in much of natural language processing. While our AI today is fantastic at knowing what things are, understanding these objects in the context of the world is next. How AI researchers will get there is still unclear.
While the competition is ending, the ImageNet dataset, updated over the years and now more than 13 million images strong, will live on.
Berg says the team tried to retire one aspect of the challenge in 2014, but faced pushback from companies including Google and Facebook, who liked having the centralized benchmark. The industry could point to one number and say, "We're this good."
Since 2010 there have been a number of other high-profile datasets introduced by Google, Microsoft, and the Canadian Institute for Advanced Research, as deep learning has proven to require data as vast as what ImageNet provided.
Datasets have become haute. Startup founders and venture capitalists will write Medium posts shouting out the latest datasets, and how their algorithms fared on ImageNet. Internet companies such as Google, Facebook, and Amazon have started creating their own internal datasets, based on the millions of images, voice clips, and text snippets entered and shared on their platforms every day. Even startups are beginning to assemble their own datasets: TwentyBN, an AI company focused on video understanding, used Amazon Mechanical Turk to collect videos of Turkers performing simple hand gestures and actions on video. The company has released two datasets free for academic use, each with more than 100,000 videos.
"There is a lot of mushrooming and blossoming of all kinds of datasets, from videos to speech to games to everything," Li said.
It's sometimes taken for granted that these datasets, which are intensive to collect, assemble, and vet, are free. Being open and free to use is an original tenet of ImageNet that will outlive the challenge and likely even the dataset.
In 2016, Google released the Open Images database, containing 9 million images in 6,000 categories. Google recently updated the dataset to include labels for where specific objects were located in each image, a staple of the ImageNet challenge after 2014. London-based DeepMind, bought by Google and spun into its own Alphabet company, recently released its own video dataset of humans performing a variety of actions.
"One thing ImageNet changed in the field of AI is suddenly people realized the thankless work of making a dataset was at the core of AI research," Li said. "People really recognize the importance; the dataset is front and center in the research as much as algorithms."
Correction (July 26): An earlier version of this article misspelled the name of Olga Russakovsky.
Visit link:
The data that transformed AI research, and possibly the world - Quartz
Posted in Ai
How AI Will Change the Way We Make Decisions – Harvard Business Review
Posted: at 4:18 pm
Executive Summary
Recent advances in AI are best thought of as a drop in the cost of prediction. Prediction is useful because it helps improve decisions. But it isn't the only input into decision-making; the other key input is judgment. Judgment is the process of determining what the reward to a particular action is in a particular environment. In many cases, especially in the near term, humans will be required to exercise this sort of judgment. They'll specialize in weighing the costs and benefits of different decisions, and then that judgment will be combined with machine-generated predictions to make decisions. But couldn't AI calculate costs and benefits itself? Yes, but someone would have had to program the AI as to what the appropriate profit measure is. This highlights a particular form of human judgment that we believe will become both more common and more valuable.
With the recent explosion in AI, there has been the understandable concern about its potential impact on human work. Plenty of people have tried to predict which industries and jobs will be most affected, and which skills will be most in demand. (Should you learn to code? Or will AI replace coders too?)
Rather than trying to predict specifics, we suggest an alternative approach. Economic theory suggests that AI will substantially raise the value of human judgment. People who display good judgment will become more valuable, not less. But to understand what good judgment entails and why it will become more valuable, we have to be precise about what we mean.
Recent advances in AI are best thought of as a drop in the cost of prediction. By prediction, we don't just mean the future: prediction is about using data that you have to generate data that you don't have, often by translating large amounts of data into small, manageable amounts. For example, using images divided into parts to detect whether or not the image contains a human face is a classic prediction problem. Economic theory tells us that as the cost of machine prediction falls, machines will do more and more prediction.
Prediction is useful because it helps improve decisions. But it isn't the only input into decision-making; the other key input is judgment. Consider the example of a credit card network deciding whether or not to approve each attempted transaction. They want to allow legitimate transactions and decline fraud. They use AI to predict whether each attempted transaction is fraudulent. If such predictions were perfect, the network's decision process would be easy: decline if and only if fraud exists.
However, even the best AIs make mistakes, and that is unlikely to change anytime soon. The people who have run the credit card networks know from experience that there is a trade-off between detecting every case of fraud and inconveniencing the user. (Have you ever had a card declined when you tried to use it while traveling?) And since convenience is the whole credit card business, that trade-off is not something to ignore.
This means that to decide whether to approve a transaction, the credit card network has to know the cost of mistakes. How bad would it be to decline a legitimate transaction? How bad would it be to allow a fraudulent transaction?
Someone at the credit card association needs to assess how the entire organization is affected when a legitimate transaction is denied. They need to trade that off against the effects of allowing a transaction that is fraudulent. And that trade-off may be different for high net worth individuals than for casual card users. No AI can make that call. Humans need to do so. This decision is what we call judgment.
Judgment is the process of determining what the reward to a particular action is in a particular environment. Judgment is how we work out the benefits and costs of different decisions in different situations.
Credit card fraud is an easy decision to explain in this regard. Judgment involves determining how much money is lost in a fraudulent transaction, how unhappy a legitimate customer will be when a transaction is declined, as well as the reward for doing the right thing and allowing good transactions and declining bad ones. In many other situations, the trade-offs are more complex, and the payoffs are not straightforward. Humans learn the payoffs to different outcomes by experience, making choices and observing their mistakes.
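The authors don't give a formula, but their split between prediction and judgment can be sketched as an expected-cost rule: the machine supplies a fraud probability, and humans supply the judged dollar costs of each kind of mistake. The numbers below are invented for illustration.

```python
def approve_transaction(p_fraud, cost_fraud=500.0, cost_false_decline=25.0):
    """Combine a machine prediction with human-supplied judgment.

    p_fraud:            model's predicted probability the transaction is fraud
    cost_fraud:         judged cost of letting a fraudulent transaction through
    cost_false_decline: judged cost of annoying a legitimate customer

    Approve when the expected cost of approving is lower than declining.
    """
    expected_cost_approve = p_fraud * cost_fraud
    expected_cost_decline = (1 - p_fraud) * cost_false_decline
    return expected_cost_approve < expected_cost_decline

# With these payoffs, transactions are declined above roughly a 4.8% fraud score.
print(approve_transaction(0.03))  # True  (approve)
print(approve_transaction(0.10))  # False (decline)
```

Changing the two cost parameters, not the model, is where the judgment lives, which is the division of labor the authors describe.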
Getting the payoffs right is hard. It requires an understanding of what your organization cares about most, what it benefits from, and what could go wrong.
In many cases, especially in the near term, humans will be required to exercise this sort of judgment. They'll specialize in weighing the costs and benefits of different decisions, and then that judgment will be combined with machine-generated predictions to make decisions.
But couldn't AI calculate costs and benefits itself? In the credit card example, couldn't AI use customer data to consider the trade-off and optimize for profit? Yes, but someone would have had to program the AI as to what the appropriate profit measure is. This highlights a particular form of human judgment that we believe will become both more common and more valuable.
Like people, AIs can also learn from experience. One important technique in AI is reinforcement learning, whereby a computer is trained to take actions that maximize a certain reward function. For instance, DeepMind's AlphaGo was trained this way to maximize its chances of winning the game of Go. Games are well suited to this method of learning because the reward can be easily described and programmed, shutting the human out of the loop.
But games can be cheated. As Wired reports, when AI researchers trained an AI to play the boat racing game CoastRunners, the AI figured out how to maximize its score by going around in circles rather than completing the course as intended. One might consider this a kind of ingenuity, but when it comes to applications beyond games, this sort of ingenuity can lead to perverse outcomes.
The key point from the CoastRunners example is that in most applications, the goal given to the AI differs from the true and difficult-to-measure objective of the organization. As long as that is the case, humans will play a central role in judgment, and therefore in organizational decision-making.
In fact, even if an organization is enabling AI to make certain decisions, getting the payoffs right for the organization as a whole requires an understanding of how the machines make those decisions. What types of prediction mistakes are likely? How might a machine learn the wrong message?
Enter Reward Function Engineering. As AIs serve up better and cheaper predictions, there is a need to think clearly and work out how to best use those predictions. Reward Function Engineering is the job of determining the rewards to various actions, given the predictions made by the AI. Being great at it requires having an understanding of the needs of the organization and the capabilities of the machine. (And it is not the same as putting a human in the loop to help train the AI.)
Sometimes Reward Function Engineering involves programming the rewards in advance of the predictions so that actions can be automated. Self-driving vehicles are an example of such hard-coded rewards. Once the prediction is made, the action is instant. But as the CoastRunners example illustrates, getting the reward right isn't trivial. Reward Function Engineering has to consider the possibility that the AI will over-optimize on one metric of success, and in doing so act in a way that's inconsistent with the organization's broader goals.
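To make the CoastRunners lesson concrete, here is a hypothetical sketch of the difference between a naive hard-coded reward and one a Reward Function Engineer might write, where the proxy metric (points) is kept but subordinated to the true objective (finishing the course). Both functions and the state fields are invented for illustration.

```python
def naive_reward(state):
    # Optimizes the proxy metric only: an agent can farm points
    # by circling forever without ever finishing the race.
    return state["points_collected"]

def engineered_reward(state):
    # Encodes the organization's actual objective: progress and
    # completion matter more than the raw point count.
    reward = 0.1 * state["points_collected"]
    reward += 1.0 * state["course_progress"]   # fraction of track completed
    if state["finished"]:
        reward += 100.0                        # large bonus for the true goal
    return reward

lap_farmer = {"points_collected": 900, "course_progress": 0.2, "finished": False}
finisher   = {"points_collected": 300, "course_progress": 1.0, "finished": True}

print(naive_reward(lap_farmer) > naive_reward(finisher))            # True: the exploit wins
print(engineered_reward(finisher) > engineered_reward(lap_farmer))  # True: finishing wins
```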
At other times, such hard-coding of the rewards is too difficult. There may be so many possible predictions that it is too costly for anyone to judge all the possible payoffs in advance. Instead, some human needs to wait for the prediction to arrive and then assess the payoff. This is closer to how most decision-making works today, whether or not it includes machine-generated predictions. Most of us already do some Reward Function Engineering, but for humans, not machines. Parents teach their children values. Mentors teach new workers how the system operates. Managers give objectives to their staff, and then tweak them to get better performance. Every day, we make decisions and judge the rewards. But when we do this for humans, prediction and judgment are grouped together, and the distinct role of Reward Function Engineering has not needed to be explicitly separate.
As machines get better at prediction, the distinct value of Reward Function Engineering will increase as the application of human judgment becomes central.
Overall, will machine prediction decrease or increase the amount of work available for humans in decision-making? It is too early to tell. On the one hand, machine prediction will substitute for human prediction in decision-making. On the other hand, machine prediction is a complement to human judgment. And cheaper prediction will generate more demand for decision-making, so there will be more opportunities to exercise human judgment. So, although it is too early to speculate on the overall impact on jobs, there is little doubt that we will soon be witness to a great flourishing of demand for human judgment in the form of Reward Function Engineering.
Read the original post:
How AI Will Change the Way We Make Decisions - Harvard Business Review
Posted in Ai
AI Grant aims to fund the unfundable to advance AI and solve hard … – TechCrunch
Posted: at 4:18 pm
Artificial intelligence-focused investment funds are a dime a dozen these days. Everyone knows there's money to be made from AI, but to capture value, good VCs know they need to back products and not technologies. This has left a bit of a void in the space: research occurs within research institutions and large tech companies, commercialization occurs within verticalized startups, and there isn't much left for the DIY AI enthusiast. AI Grant, created by Nat Friedman and Daniel Gross, aims to bankroll science projects for the heck of it, to give untraditional candidates a shot at solving big problems.
Gross, a partner at Y Combinator, and Friedman, a founder who grew Xamarin to acquisition by Microsoft, started working on AI Grant back in April. AI Grant issues no-strings-attached grants to people passionate about interesting AI problems. The more formalized version launching today brings a slate of corporate partners and a more structured application review process.
Anyone, regardless of background, can submit an application for a grant. The application is online and consists of questions about background and prior projects in addition to basic information about what the money will be used for and what the initial steps will be for the project. Applicants are asked to connect their GitHub, LinkedIn, Facebook and Twitter accounts.
Gross told me in an interview that the goal is to build profiles of non-traditional machine learning engineers. Eventually, the data collected from the grant program could allow the two to play a bit of machine learning moneyball: valuing machine learning engineers without traditional metrics (like having a PhD from Stanford). You can imagine how all the social data could even help build a model for ideal grant recipients in the future.
The long-term goal is to create a decentralized AI research lab: think DeepMind, but run through Slack and full of engineers that don't cost $300,000 a pop. One day, the MacArthur genius grant-inspired program could serve other industries outside of AI, offering a playground of sorts for the obsessed to build, uninhibited.
The entire AI Grant project reminds me of a cross between a Thiel Fellowship and a Kaggle competition. The former is a program to give smart college dropouts money and freedom to tinker; the latter, an innovative platform for evaluating data scientists through competition. Neither strives to advance the field in the way the AI Grant program does, but you can see the ideological similarity around democratizing innovation.
Some of the early proposals to receive the AI Grant include:
Charles River Ventures (CRV) is providing the $2,500 grants that will be handed out to the next 20 fellows. In addition, Google has signed on to provide $20,000 in cloud computing credits to each winner, CrowdFlower is offering $18,000 in platform credit with $5,000 in human labeling credits, Scale is giving $1,000 in human labeling credit per winner and Floyd will give 250 Tesla K80 GPU hours to each winner.
During the first selection of grant winners, Floodgate awarded $5,000 checks. The program launching today will award $2,500 checks. Gross told me that this change was intentional: the initial check size was too big. The plan is to add additional flexibility in the future to allow applicants to make a case for how much money they actually need.
You can check out the application here and give it a go. Applications will be taken until August 25th. Final selection of fellows will occur on September 24th.
Read this article:
AI Grant aims to fund the unfundable to advance AI and solve hard ... - TechCrunch
Posted in Ai