The Prometheus League
Monthly Archives: July 2017
Can virtual reality help save endangered Pacific languages? – ABC Online
Posted: July 29, 2017 at 7:15 pm
Posted July 29, 2017 17:53:08
The Pacific is the most linguistically rich region in the world, with Papua New Guinea alone being home to a staggering 850 languages.
Yet experts fear that widespread language loss could be the future for the region.
To draw attention to the issue, and to document more Pacific languages, Australian researchers are trialling a new way of making their database of languages more exciting and accessible.
To do this, they are turning to virtual reality technology.
"We've got this fantastic resource a database of a thousand endangered languages," lead researcher Dr Nick Thieberger from the University of Melbourne said.
"But it's not very engaging, it's a bit dull, so we wanted to do something to change that."
Over the past 15 years, researchers from Australian universities have been digitising recordings of languages and storing them in the Pacific and Regional Archive for Digital Sources in Endangered Cultures (PARADISEC).
The database has documented more than 6,000 hours of recordings from over 1,000 languages.
Earlier this year, Dr Thieberger, Dr Rachel Hendry, a lecturer in digital humanities, and media artist Dr Andrew Burrell created a virtual reality experience using files from the database.
Audiences don a pair of virtual reality goggles, allowing them to "fly across" Pacific nations such as Vanuatu and Papua New Guinea.
As they do so, shards of light emerge that play clips of local languages.
"We really wanted to look at how we could make this database more exciting for people and to get them engaging with it," Dr Thieberger said.
The VR display is currently only exhibited in museums, but the team is working on versions that could be accessed anywhere.
"We're working on an iPad version as well as a Google Cardboard version which will mean people in remote communities can have a comparable experience," Dr Thieberger said.
Dr Hendry said these types of immersive experiences will become more common.
"We're only just seeing the start of this type of immersive representation, and not just with language data," she said.
"Our technology and smart phone capabilities are growing every day and that's exciting for linguists wanting to get this out into the public."
It is hoped that with more public interaction with the database, people will help to expand the collection.
Much of the data in PARADISEC has come from researchers and the team are keen to get audio sent in from regular people.
"There are so many interesting recordings out there clips taken on local people's phones, tapes from tourists," Dr Thieberger said.
"Much of this stuff is just sitting in homes, and it's likely valuable to this collection.
"A good example is last year when we had some tapes arrive and it turned out to be the only known record of some of PNG's languages."
Dr Thieberger said many languages in the Pacific are passed down orally, meaning a recording might be their only documentation.
It also means they are more susceptible to extinction because, as older speakers die, they take their language with them unless it has been passed down to the next generation.
According to a UNESCO report on endangered languages, many languages are being replaced by 'world languages' such as English and French or being diluted through Creole languages such as Tok Pisin.
Dr Julia Miller is the data manager for the Centre of Excellence for the Dynamics of Language at the Australian National University, and oversees the ANU's PARADISEC unit.
Her research has involved fieldwork in the Morehead District of PNG.
Dr Miller said it's a region that is important to document because it has so far bucked the language loss trend.
"Tok Pisin hasn't become the dominant language there, so all the kids are learning languages of their mother as well as their fathers," she said.
"I'll be returning next year to do follow-up work and all of that material will be achieved in PARADISEC."
Dr Hendry said language revival is ultimately up to public will.
But this, she added, was where new technologies such as VR and language databases could help.
"It's important to have these types of databases because linguists can pull audio from there and creating things like VR's, create audio books where you can read along and re-learn languages," Dr Hendry said.
"And with things like the VR, it really shows what is at stake.
"It's not a policy paper, it's you being immersed in languages that are at risk, that's much more powerful for people and policy makers."
Dr Thieberger is pragmatic when considering language revival.
"I'm not sure we can say we are reviving languages but by doing this stuff people will want to go into it and from that they can reintroduce something back to the community," she said.
"It could be a song, a concept, or just a word it might not sound like a lot, but it's something."
Topics: languages, community-and-society, computers-and-technology, papua-new-guinea, pacific
Posted in Virtual Reality
Kasparov: ‘Embrace’ the AI revolution – BBC News – BBC News
Posted: at 7:14 pm
AI may destroy jobs but will create many more and increase productivity, said the chess grandmaster.
Posted in Ai
Artificial intelligence system makes its own language, researchers pull the plug – WCVB Boston
Posted: at 7:14 pm
If we're going to create software that can think and speak for itself, we should at least know what it's saying. Right?
That was the conclusion reached by Facebook researchers who recently developed sophisticated negotiation software that started off speaking English. Two artificial intelligence agents, however, began conversing in their own shorthand, which appeared to be gibberish but was perfectly coherent to the agents themselves.
A sample of their conversation:
Bob: I can can I I everything else.
Alice: Balls have zero to me to me to me to me to me to me to me to me to.
Dhruv Batra, a Georgia Tech researcher at Facebook's AI Research (FAIR), told Fast Co. Design "there was no reward" for the agents to stick to English as we know it, and the phenomenon has occurred multiple times before. It is more efficient for the bots, but it becomes difficult for developers to improve and work with the software.
"Agents will drift off understandable language and invent codewords for themselves," Batra said. Like if I say 'the' five times, you interpret that to mean I want five copies of this item. This isnt so different from the way communities of humans create shorthands."
Convenient as it may have been for the bots, Facebook decided to require the AI to speak in understandable English.
"Our interest was having bots who could talk to people," FAIR scientist Mike Lewis said.
In a June 14 post describing the project, FAIR researchers said the project "represents an important step for the research community and bot developers toward creating chatbots that can reason, converse, and negotiate, all key steps in building a personalized digital assistant."
Posted in Ai
The ‘Skynet’ Gambit – AI At The Brink – Seeking Alpha
Posted: at 7:14 pm
"The deployment of full artificial intelligence could well mean the end of the human race." - Stephen Hawking
"He can know his heart, but he don't want to. Rightly so. Best not to look in there. It ain't the heart of a creature that is bound in the way that God has set for it. You can find meanness in the least of creatures, but when God made man the devil was at his elbow. A creature that can do anything. Make a machine. And a machine to make the machine. And evil that can run itself a thousand years, no need to tend it." - Cormac McCarthy, Blood Meridian: Or the Evening Redness in the West
Let me declare at the outset that this article has been tough to write. I am by birthright an American, an optimist and a true believer in our innovative genius and its power to drive better lives for us and the world around us. I've grown up in the mellow sunshine of Moore's law, and lived first hand in a world of unfettered innovation and creativity. That is why it is so difficult to write the following sentence:
It's time for federal regulation of AI and IoT technologies.
I say that reluctantly but with growing certainty. I have come to believe that we share a moral obligation to act now in order to protect our children and grandchildren. We need to take this moment, wake up, and listen to the voices warning us that the confluence of technologies powering the AI revolution is advancing so rapidly that it poses a clear and present danger to our lives and well-being.
So this article is about why I have come to feel that way and why I think you should join me in that feeling. Obviously, this has financial implications. Since you are a tech investor, you almost certainly invested in one or more of the companies - like Nvidia (NASDAQ:NVDA), Google (NASDAQ:GOOG) (NASDAQ:GOOGL), and Baidu (NASDAQ:BIDU) - that are profiting from driving the breakneck advances we are seeing in AI base technologies and the myriad of embedded use cases that make the technology so seductive. Indeed, if we look at the entire tech industry ecosystem, from chips through applications and beyond them to their customers that are transforming their business through their use, we can hardly ignore the implications of this present circumstance.
So why? How did we get to this moment? Like me, you've probably been aware of the warnings of well-known luminaries like Elon Musk, Bill Gates, Stephen Hawking and many others, and, like me, you have probably noted their commentary but moved on to consider the next investment opportunity. Personally, being the optimist that I am, I certainly respected those arguments but believed even more strongly that we would innovate ourselves out of the danger zone. So why the change? Two words - one name - Bruce Schneier.
If you have been interested in the fields of cryptology and computer security, you have no doubt heard his name. Now with IBM (NYSE:IBM) as its chief spokesperson on security, he is a noted author and contributor to current thinking on the entire gamut of issues that confront us in this new era of the cloud, IoT, and Internet-based threats to personal privacy and computer system integrity. Mr. Schneier's seminal talk at the recent RSA conference brought it all into focus for me, and I encourage you to watch it. I will briefly recap his argument and then work out some of the consequences that flow from it. So here goes.
Schneier's case begins by identifying the problem - the rise of the cyber-physical system. He points out how our day-to-day reality is being subverted as IoT literally stands the world on its head, dematerializing and virtualizing our physical environment. What used to be dumb is now smart. Things that used to be discrete and disconnected are now networked and interconnected in subtle and powerful ways. This is the conceptual linkage that really connected the dots for me. As he puts it in his security blog:
We're building a world-size robot, and we don't even realize it. [...] The world-size robot is distributed. It doesn't have a singular body, and parts of it are controlled in different ways by different people. It doesn't have a central brain, and it has nothing even remotely resembling a consciousness. It doesn't have a single goal or focus. It's not even something we deliberately designed. It's something we have inadvertently built out of the everyday objects we live with and take for granted. It is the extension of our computers and networks into the real world. This world-size robot is actually more than the Internet of Things. [...] And while it's still not very smart, it'll get smarter. It'll get more powerful and more capable through all the interconnections we're building. It'll also get much more dangerous.
More powerful, indeed. It is at this point where AI and related technologies enter the equation to build a host of managers, agents, bots, natural language interfaces, and other facilities that allow us to leverage the immense scale and reach of our IoT devices - devices that, summed altogether, encompass our physical world and exert enormous power for good and, in the wrong hands, for evil.
Surely, we can manage this? Well, no, says Schneier - not the way we are going about it now. The problem is, as he cogently points out, our business model for building software and systems is notoriously callous when it comes to security. Our "fail fast, fix fast", minimum-market-requirements-for-version-1 shipment protocol is famous for delivering product that comes with a "hack me first" invitation that is all too often accepted. So what's the difference, you may ask? We've been muddling along with this problem for years. We dig ourselves into trouble, we dig ourselves out. Fail fast, fix fast. Life goes on. Let's go make some money.
Or maybe it doesn't. The IoT phenomenon is leading us headlong into deployment of literally billions of sensors embedded deep into our most personal physical surroundings, connecting us to system entities and actors, nefarious and benign, that now have access to intimate data about our lives. Bad as that is, it's not the worst thing. This same access gives these bad actors the potential to control the machines that provide life-sustaining services to us. It's one thing to have your credit card data hacked; it's entirely another thing to have a bad actor in control of, say, the power grid, an operating theater robot, your car, or the engine of the airplane you're riding in. Our very lives depend on the integrity of these machines. Do we need to emphasize this point? Fail fast, fix fast does not belong in this world.
So if the prospect of a body-count stat on the next after-action report from some future hack doesn't alarm you, how about this scenario: what if it wasn't a hack? What if it was an unforeseen interaction of otherwise benign AIs that we are relying on to run the system in question? Can we be sure to fully understand the entire capability of an AI that is, say, balancing the second-to-second demands of the power grid?
One thing we can count on - the AI that we are building now will be smarter and more capable tomorrow. How smart is the AI we're building? How good is it? Scary good. So let's let Musk answer the question. How smart are these machines we're building? "[They'll be] smarter than us. They'll do everything better than us," he says. So what's the problem? You're not going to like the answer.
We won't know that the AI has a problem until the AI breaks, and we may not know why it broke even then. The intrinsic nature of the cognitive software we are building with deep neural nets is that a decision is the product of interactions with thousands and possibly millions of previous decisions from lower levels in the training data, and those decision criteria may well have already been changed as feedback loops communicate learning upstream and down. The system very possibly can't tell us "why". Indeed, the smarter the AI is, the less likely it may be able to answer the why question.
Hard as it is, we really need to understand the scale of the systems we are building. Think about autonomous cars as one, rather small, example. Worldwide, the industry built 88 million cars and light trucks in 2016, and another 26 million medium and heavy trucks. Sometime in the 2025 to 2030 time frame, all of them will be autonomous. With the rise of the driving-as-a-service model, there may not be as many new vehicles being produced, but the numbers will still be huge and fleet sizes will grow every year as the older vehicles are replaced. What are the odds that the AI that runs these vehicles performs flawlessly? Can we expect perfection? Our very lives depend on it. God forbid a successful hack into this platform!
Beyond that, what if perfection will kill us? Ultimately, these machines may require our guidance to make moral decisions. Question: you and your spouse are in a car in the center lane of a three-lane freeway, operating at the 70 mph speed limit. A motorcyclist is directly to your left; to your right, a family of five in an autonomous minivan. Enter a drunk driving an old pickup the wrong way at high speed, weaving through the three lanes directly in your path. Should your car evade to the left lane and risk the life of the motorcyclist? One would hope our vehicle wouldn't move right and put the family of five at risk. Should it be programmed to follow a "first, do no harm" policy, which would avoid a swerve into either lane and would simply brake as hard as possible in the center lane and hope for the best?
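For illustration only, here is a minimal sketch of what a "first, do no harm" rule might look like if reduced to code. The manoeuvre names and harm scores are invented placeholders; a real autonomous-driving stack estimates risk continuously from sensor data and is vastly more complex.

```python
# A minimal sketch, assuming the "first, do no harm" policy described above.
# The options and scores below are hypothetical, for illustration.

HARM_TO_OTHERS = {
    "swerve_left": 0.9,    # endangers the motorcyclist
    "swerve_right": 0.99,  # endangers the family of five
    "brake_in_lane": 0.0,  # shifts no risk onto bystanders
}

def choose_maneuver(harm: dict) -> str:
    """Pick the option that imposes the least risk on bystanders."""
    return min(harm, key=harm.get)

print(choose_maneuver(HARM_TO_OTHERS))  # brake_in_lane
```

Even this trivial rule embeds a policy choice - whose risk counts, and by how much - which is exactly the question of who sets the policy.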
Whatever the scenario, the AIs we develop and deploy, however rich and deep the learning data they have been exposed to, will confront situations that they haven't encountered before. In the dire example above and in more mundane conundrums, who ultimately sets the policy that must be adhered to? Should the developer? How about the user (in cases where this is practical)? Or should we have a common policy that must be adhered to by all? For sure, any policy implemented in our driving scenario above will save lives and perform better than any human driver. Even so, in vehicles, airplanes, SCADA systems, chemical plants and myriad other AIs inhabiting devices operating in innately hazardous operating regimes, will it be sufficient to let their in extremis actions be opaque and unknowable? Surely not, but will the AI as developed always give us the control to change it?
Finally, we must consider a factor that is certainly related to scale but is uniquely and qualitatively different - the network. How freely and ubiquitously should these AIs interconnect? Taken on its face, the decision seems to have been made. The very term, Internet of Things, seems to imply an interconnection policy that is as freewheeling and chaotic as our Internet of people. Is this what We, the People want? Should some AIs - say our nuclear reactors or more generally our SCADA systems - operate with limited or no network connection? Seems likely, but how much further should we go? Who makes the decision?
Beyond such basic questions come the larger issues brought on by the reality of network power. Let's consider the issue of learning and add to that the power of vast network scale in our new cyber-physical world. The word seems so simple, so innocuous. How could learning be a bad thing? AI-powered IoT systems must be connected to deliver the value we need from them. Our autonomous vehicles, terrestrial and airborne, for example, will be in constant communication with nearby traffic, improving our safety by step-functions.
So how does the fleet learn? Let's take the example from above. Whatever the result, the incident forensics will be sent to the cloud, where developers will presumably incorporate the new data in the master learning set. How will the new master be tested? How long? How rigorously? What will be the re-deployment model? Will the new improved version of the AI be proprietary and not shared with the other vehicle manufacturers, leaving their customers at a safety disadvantage? These are questions that demand government purview.
Certainly, there is no consensus here regarding the threat of AI. Andrew Ng of Baidu/Stanford disagrees that AI will be a threat to us in the foreseeable future. So does Mark Zuckerberg. But these disagreements are only with the overt existential threat - i.e. that a future AI may kill us. More broadly, though, there is very little disagreement that our AI/IoT-powered future poses broad economic and sociopolitical issues that could literally rip our societies apart. What issues? How about the massive loss of jobs and livelihood of perhaps the majority of our population over the course of the next 20 years? As is nicely summarized in this recent NY Times article, AI will almost certainly exacerbate the already difficult problem we have with income disparities. Beyond that, the global consequences of the AI revolution could generate a dangerous dependency dynamic among countries other than the US and China that do not own AI IP.
We could go on and on, but hopefully the issue is clear. Through the development and implementation of increasingly capable AI-powered IoT systems, we are embarking upon a voyage into an exciting but dangerous future state which we can barely imagine from our current vantage point. Now is the time to step back and assess where we are and what we need to do going forward. Schneier's prescription for the problem is that the tech industry must get in front of this issue and drive a workable consensus among industry stakeholders and governmental authorities and regulatory bodies about the problem, its causes and potential effects, and most importantly, a reasonable solution that protects the public while allowing the industry room to innovate and build.
There is no turning back, but we owe it to ourselves and our posterity to do our utmost to get it right. As technologists we are inherently self-interested in protecting and nurturing the opportunity we all have in this exciting new realm. This is natural and understandable. Our singular focus on agility and innovation has brought the world many benefits and will bring many more. But we are not alone and it would be completely irresponsible to insist that we are the only stakeholder in the outcomes we are engineering.
This decision - to engage and attempt to manage the design of the new and evolving regulatory regime - has enormous implications. There is undoubtedly risk. Poor or heavy-handed regulation could well exact a tremendous opportunity cost. One could well imagine a world in which Nvidia's GPU business is severely affected by regulatory inspection and delay, for example. But that is the very reason we need to engage now. The economic leverage that AI provides in every sector of our economy leads us inescapably to economic and wealth-building scenarios beyond anything the world has seen before. As participants and investors, we must do what we can to protect this opportunity to build unprecedented levels of wealth for our country and ourselves. Schneier argues that we are best serving our self-interest by engaging government now rather than burying our heads in the sand waiting for the inevitable backlash that will come when (not if!) these massive systems fail catastrophically in the future.
Schneier has got the right idea. We need to broaden the conversation, lead the search for solutions, and communicate the message to the many non-tech constituencies - including all levels of government - that there is an exciting future ahead but that future must include appropriate regulations that protect the American people and indeed the entire human race.
We won't get a second chance to get this right.
Disclosure: I/we have no positions in any stocks mentioned, and no plans to initiate any positions within the next 72 hours.
I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.
Posted in Ai
Google Has Started Adding Imagination to Its DeepMind AI – Futurism – Futurism
Posted: at 7:14 pm
Advanced AI
Researchers have started developing artificial intelligence with imagination: AI that can reason through decisions and make plans for the future, without being bound by human instructions.
Another way to put it would be imagining the consequences of actions before taking them, something we take for granted but which is much harder for robots to do.
The team working at Google-owned lab DeepMind says this ability is going to be crucial in developing AI algorithms for the future, allowing systems to better adapt to changing conditions that they haven't been specifically programmed for. Insert your usual fears of a robot uprising here.
"When placing a glass on the edge of a table, for example, we will likely pause to consider how stable it is and whether it might fall," explain the researchers in a blog post. "On the basis of that imagined consequence we might readjust the glass to prevent it from falling and breaking."
"If our algorithms are to develop equally sophisticated behaviours, they too must have the capability to imagine and reason about the future. Beyond that they must be able to construct a plan using this knowledge."
We've already seen a version of this forward planning in the Go victories that DeepMind's bots have scored over human opponents recently, as the AI works out the future outcomes that will result from its current actions.
The rules of the real world are much more varied and complex than the rules of Go, though, which is why the team has been working on a system that operates on another level.
To do this, the researchers combined several existing AI approaches together, including reinforcement learning (learning through trial and error) and deep learning (learning through processing vast amounts of data in a similar way to the human brain).
What they ended up with is a system that mixes trial-and-error with simulation capabilities, so bots can learn about their environment, then think before they act.
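As a rough sketch of that "imagine, then act" loop, consider the fragment below. It assumes a learned one-step environment model and a learned value estimate, both stubbed out with toy stand-ins; DeepMind's actual imagination-augmented agents feed imagined rollouts into a neural policy rather than doing this kind of exhaustive search.

```python
# Sketch only: score each action by simulating short imagined rollouts with
# a world model before committing. All names here are illustrative.

def plan(state, actions, predict, value, depth=3):
    """Pick the action whose best imagined rollout looks most valuable."""
    def rollout(s, d):
        if d == 0:
            return value(s)
        # Imagined continuation: follow the best one-step action in the model.
        return max(rollout(predict(s, a), d - 1) for a in actions)
    return max(actions, key=lambda a: rollout(predict(state, a), depth))

# Toy world: walk along a line; states near position 10 are more valuable.
actions = (-1, +1)
predict = lambda s, a: s + a      # stand-in for a learned world model
value = lambda s: -abs(10 - s)    # stand-in for a learned value estimate
print(plan(0, actions, predict, value))  # prints 1: stepping right looks best
```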
One of the ways they tested the new algorithms was with a 1980s video game called Sokoban, in which players have to push crates around to solve puzzles. Some moves can make the level unsolvable, so advanced planning is needed, and the AI wasn't given the rules of the game beforehand.
The researchers found their new imaginative AI solved 85 percent of the levels it was given, compared with 60 percent for AI agents using older approaches.
"The imagination-augmented agents outperform the imagination-less baselines considerably," say the researchers. "They learn with less experience and are able to deal with the imperfections in modelling the environment."
The team noted a number of improvements in the new bots: they could handle gaps in their knowledge better, they were better at picking out useful information for their simulations, and they could learn different strategies to make plans with.
It's not just advance planning - it's advance planning with extra creativity, so potential future actions can be combined together or mixed up in different ways in order to identify the most promising routes forward.
Despite the success of DeepMind's testing, it's still early days for the technology, and these games are still a long way from representing the complexity of the real world. Still, it's a promising start in developing AI that won't put a glass of water on a table if it's likely to spill over, plus all kinds of other, more useful scenarios.
"Further analysis and consideration is required to provide scalable solutions to rich model-based agents that can use their imaginations to reason about and plan for the future," conclude the researchers.
The researchers also created a video of the AI in action, which you can see below:
You can read the two papers published to the pre-print website arXiv.org here and here.
Posted in Ai
Top MIT AI Scientist to Elon Musk: Please Simmer Down – Inc.com
Posted: at 7:14 pm
Science fiction futures generally come in two flavors -- utopian and dystopian. Will tech kill routine drudgery and elevate humanity à la Star Trek or The Jetsons? Or will innovation be turned against us in some 1984-style nightmare? Or, worse yet, will the robots themselves turn against us (as in the highly entertaining Robopocalypse)?
This isn't just a question for fans of futuristic fiction. Currently two of our smartest minds -- Elon Musk and Mark Zuckerberg -- are in a war of words over whether artificial intelligence is more likely to improve our lives or destroy them.
Musk is the pessimist of the two, warning that proactive regulation is needed to keep doomsday scenarios featuring smarter-than-human A.I.s from becoming a reality. Zuckerberg imagines a rosier future, arguing that premature regulation of A.I. will hold back helpful tech progress.
Each has accused the other of ignorance. Who's right in this battle of the tech titans?
If you're looking for a referee, you could do a lot worse than roboticist Rodney Brooks. He is the founding director of MIT's Computer Science and Artificial Intelligence Lab, and the co-founder of iRobot and Rethink Robotics. In short, he's one of the top minds in the field. So what does he think of the whole Zuckerberg vs. Musk smackdown?
In a wide-ranging interview with TechCrunch, Brooks came down pretty firmly on the side of optimists like Zuckerberg:
There are quite a few people out there who've said that A.I. is an existential threat: Stephen Hawking, Astronomer Royal Martin Rees, who has written a book about it, and they share a common thread, in that they don't work in A.I. themselves. For those who do work in A.I., we know how hard it is to get anything to actually work through product level.
Here's the reason that people -- including Elon -- make this mistake. When we see a person performing a task very well, we understand the competence [involved]. And I think they apply the same model to machine learning. [But they shouldn't.] When people saw DeepMind's AlphaGo beat the Korean champion and then beat the Chinese Go champion, they thought, 'Oh my god, this machine is so smart, it can do just about anything!' But I was at DeepMind in London about three weeks ago and [they admitted that things could easily have gone very wrong].
Brooks also argues against Musk's idea of early regulation of A.I., saying it's unclear exactly what should be prohibited at this stage. In fact, the only form of A.I. he would like to see regulated is self-driving cars -- such as those being developed by Musk's Tesla -- which Brooks claims present imminent and very real practical problems. (For example, should a 14-year-old be able to override and "drive" an obviously malfunctioning self-driving car?)
Are you more excited or worried about the future of artificial intelligence?
Posted in Ai
Artificial Intelligence Is Stuck. Here’s How to Move It Forward. – New York Times
Posted: at 7:13 pm
To get computers to think like humans, we need a new A.I. paradigm, one that places top-down and bottom-up knowledge on equal footing. Bottom-up knowledge is the kind of raw information we get directly from our senses, like patterns of light falling on our retina. Top-down knowledge comprises cognitive models of the world and how it works.
Deep learning is very good at bottom-up knowledge, like discerning which patterns of pixels correspond to golden retrievers as opposed to Labradors. But it is no use when it comes to top-down knowledge. If my daughter sees her reflection in a bowl of water, she knows the image is illusory; she knows she is not actually in the bowl. To a deep-learning system, though, there is no difference between the reflection and the real thing, because the system lacks a theory of the world and how it works. Integrating that sort of knowledge of the world may be the next great hurdle in A.I., a prerequisite to grander projects like using A.I. to advance medicine and scientific understanding.
I fear, however, that neither of our two current approaches to funding A.I. research - small research labs in the academy and significantly larger labs in private industry - is poised to succeed. I say this as someone who has experience with both models, having worked on A.I. both as an academic researcher and as the founder of a start-up company, Geometric Intelligence, which was recently acquired by Uber.
Academic labs are too small. Take the development of automated machine reading, which is a key to building any truly intelligent system. Too many separate components are needed for any one lab to tackle the problem. A full solution will incorporate advances in natural language processing (e.g., parsing sentences into words and phrases), knowledge representation (e.g., integrating the content of sentences with other sources of knowledge) and inference (reconstructing what is implied but not written). Each of those problems represents a lifetime of work for any single university lab.
Corporate labs like those of Google and Facebook have the resources to tackle big questions, but in a world of quarterly reports and bottom lines, they tend to concentrate on narrow problems like optimizing advertisement placement or automatically screening videos for offensive content. There is nothing wrong with such research, but it is unlikely to lead to major breakthroughs. Even Google Translate, which pulls off the neat trick of approximating translations by statistically associating sentences across languages, doesn't understand a word of what it is translating.
I look with envy at my peers in high-energy physics, and in particular at CERN, the European Organization for Nuclear Research, a huge, international collaboration, with thousands of scientists and billions of dollars of funding. They pursue ambitious, tightly defined projects (like using the Large Hadron Collider to discover the Higgs boson) and share their results with the world, rather than restricting them to a single country or corporation. Even the largest open efforts at A.I., like OpenAI, which has about 50 staff members and is sponsored in part by Elon Musk, are tiny by comparison.
An international A.I. mission focused on teaching machines to read could genuinely change the world for the better - the more so if it made A.I. a public good, rather than the property of a privileged few.
Gary Marcus is a professor of psychology and neural science at New York University.
A version of this op-ed appears in print on July 30, 2017, on Page SR6 of the New York edition with the headline: "A.I. Is Stuck. Let's Unstick It."
Posted in Artificial Intelligence
Emojis Are Everywhere, But For How Long? Artificial Intelligence Could Soon Replace Our Smiley Face Friends – Newsweek
Posted: at 7:13 pm
Forget Donald Trump. Let's talk about something truly dim and oafish: emoji.
The world is in the middle of a disturbing emoji-gasm. You can go see The Emoji Movie and sit through a plot as nuanced and complex as an old episode of Mister Rogers' Neighborhood. (Don't miss esteemed Shakespearean actor Patrick Stewart getting to be the voice of Poop.) July also brought us World Emoji Day. To mark the occasion, Apple trumpeted its upcoming release of new emoji, a milestone for society that might only be topped by a new shape of marshmallow in Lucky Charms. Microsoft, always an innovator in artificial intelligence, announced a version of its SwiftKey phone keyboard that will predict which emoji you should use based on what you're typing. Just one more reason to be scared of AI.
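In the simplest possible terms, emoji prediction can be as dumb as a lookup table. Here is a toy Python sketch, invented for this article; SwiftKey's actual predictor is a trained language model, not a table.

```python
# Toy illustration only: a bare keyword-to-emoji lookup. Every entry here
# is hypothetical and has nothing to do with SwiftKey's real model.

EMOJI_HINTS = {
    "pizza": "\U0001F355",  # U+1F355 SLICE OF PIZZA
    "love": "\u2764",       # U+2764 HEAVY BLACK HEART
    "dog": "\U0001F436",    # U+1F436 DOG FACE
}

def suggest(text: str) -> list:
    """Return the emoji whose trigger words appear in the text."""
    words = set(text.lower().split())
    return [emoji for word, emoji in EMOJI_HINTS.items() if word in words]

print(suggest("I love pizza"))  # the pizza slice and the heart
```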
Billions of emoji fly around the planet every day - those tiny cartoons of faces and things that supposedly let us express ourselves in ways words can't, unless you know a lot of words. Emoji are such a rage, they have to be governed by a global nonprofit called the Unicode Consortium - kind of like the G-20 for smiley faces. Full members include companies such as Apple, Google, Huawei, SAP and IBM. The group has officially sanctioned 2,666 emoji that can be used across any technology platform. Obviously, the people who sit on the Unicode board do important work. This is why the middle finger emoji you type on your iPhone can look the same on an SAP-generated corporate financial report.
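What the consortium standardizes is the code point, not the artwork: each vendor draws its own picture for the same character. A quick, runnable illustration:

```python
# Each sanctioned emoji is a fixed Unicode code point; vendors supply their
# own artwork for it.

poop = "\U0001F4A9"    # U+1F4A9 PILE OF POO
finger = "\U0001F595"  # U+1F595 REVERSED HAND WITH MIDDLE FINGER EXTENDED

for ch in (poop, finger):
    print(f"U+{ord(ch):05X}", ch)  # the code point is the same everywhere
```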
Tech & Science Emails and Alerts - Get the best of Newsweek Tech & Science delivered to your inbox
Emoji are displayed on the Touch Bar on a new Apple MacBook Pro laptop during a product launch event on October 27, 2016 in Cupertino, California. Stephen Lam/Getty
Maybe I don't get emoji because I'm a guy. At least that's what Cosmopolitan suggests in a story headlined, "Why Your Boyfriend Hates Emoji: Don't blame him, he can't help it." The story explains: "Straight guys aren't conditioned to flash bashful smiles. They don't do cute winks. They don't make a cute kissy face." Then again, the article's male writer might not be the most enlightened about gender roles in the 21st century. Another Cosmo story by the same person is headlined, "13 Things Guys Secretly Want to Do With Your Boobs."
Still, serious academics seem to think emoji are serious. (Oh, and I consider the word emoji to be both singular and plural. The kind of people who say emojis are the kind of people who say shrimps.) Researchers from the University of Michigan and Peking University analyzed 427 million emoji-laden messages from 212 countries to understand how emoji use differs across the globe. Those passionate French are the heaviest emoji users. Mexicans send the most negative emoji - yet another justification for keeping them behind a wall. Or you can read The Semiotics of Emoji, by Marcel Danesi, an anthropologist at the University of Toronto. "The emoji code harbors within it many implications for the future of writing, literacy, and even human consciousness," he writes. Whoa, dude! Someday, we might think in emoji! Hold on while I fire up my Pax and let my mind be blown.
Much of the emoji trend can be blamed on the Japanese, fervent purveyors of creepy-cute characters like Hello Kitty and Pikachu. In the 1990s, when Japan was the smartest player in electronics, NTT DoCoMo introduced the first sort-of-smartphone service, called i-mode. Shigetaka Kurita, part of the i-mode team, recalled being disappointed by weather reports that just sent the word "fine" to his phone instead of showing a smiling, shining sun like he saw on TV. That gave him the idea of creating tiny symbols for i-mode. The first batch of 176 was inspired by facial expressions, street signs and symbols used in manga. The word emoji comes from a mashup of the Japanese words for "picture" and "character."
The rest of the blame for this trend falls on Apple. After introducing the iPhone in 2007, Apple wanted to break into the Japanese market, where users had by then grown accustomed to emoji. So it had to include emoji on the iPhone. That led to people in other countries finding and using the emoji on their iPhones, spreading these things like lice. As emoji got more popular, users wanted more kinds for all kinds of devices. Companies such as Apple and Google keep creating new emoji and proposing them to the Unicode Consortium, which is how we've gotten so many odd emoji, like a roller coaster, cactus, pickax and the eggplant - which, if you don't know your emoji, you shouldn't send to your mother.
The question now is: What does emoji-mania mean? There are those, like Danesi, who believe we're inventing a new language based on pictograms - something like Chinese, except with no spoken version of the symbols. Generations from now, people will ride in driverless flying Ubers and communicate with one another in nothing but emoji. Novels will be written in emoji. (An engineer, Fred Benenson, already translated Moby-Dick into emoji. "Call me Ishmael" is a phone, a man's face, a sailboat, a whale and a hand doing an OK sign.)
That vision of the future, though, ignores an important trend. As Amazon's Alexa and similar services are showing, AI software is going to get really good at communicating with us by voice. We're going to stop relying so much on typing with our thumbs and looking at screens. We'll converse with the technology and one another. Then, the fact that you can't speak in emoji might actually be the end of the damn things. In another decade, we could look back at emoji as a peculiar artifact of an era, like "10-4, good buddy" chatter during the 1970s citizens band radio craze.
Then again, emoji might be another sign of the growing anti-intellectual, anti-science movement in America. Maybe emoji are, in fact, where language and thinking are heading - away from the precision of words and toward the primitive grunts of cartoon images. The nation has already elected a president who writes only in tweets. If he wins another term, he might go another level lower, thrilling supporters by communicating his foreign policy position in nothing but a Russian flag, hearts and an eggplant.
Posted in Artificial Intelligence
Disney makes artificial intelligence a group experience – YourStory.com
Posted: at 7:13 pm
Have you ever wanted to sit and drift through a magical world? Disney Research has developed a Magic Bench platform that actualises this dream by combining augmented reality (AR) with a mixed reality experience.
In this platform, wearing a head-mounted display or using a handheld device is not required. Instead, the surroundings are instrumented rather than the individual, allowing people to share the magical experience as a group. Moshe Mahler, Principal Digital Artist at Disney Research, said:
This platform creates a multi-sensory immersive experience in which a group can interact directly with an animated character. Our mantra for this project was: hear a character coming, see them enter the space, and feel them sit next to you.
The Magic Bench shows people their mirrored images on a large screen in front of them, creating a third-person point of view. In a paper to be presented at the SIGGRAPH 2017 event in Los Angeles on July 30, the researchers said,
The scene is reconstructed using a depth sensor, allowing the participants to actually occupy the same 3D space as a computer-generated character or object, rather than superimposing one video feed onto another.
According to the researchers, a colour camera and depth sensor were used to create a real-time, HD-video-textured 3D reconstruction of the bench, surroundings, and participants. Mahler explained,
The bench itself plays a critical role. Not only does it contain haptic actuators, but it constrains several issues for us in an elegant way. We know the location and the number of participants, and can infer their gaze. It creates a stage with a foreground and a background, with the seated participants in the middle ground.
"It even serves as a controller; the mixed reality experience doesn't begin until someone sits down, and different formations of people seated create different types of experiences," he added.
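Pulling those quoted details together, here is a minimal sketch of the frame loop such a system might run. Every function, threshold and value below is a hypothetical placeholder based only on the description above, not Disney's implementation.

```python
# Sketch: depth readings over the bench gate the experience, and the output
# is a mirrored, third-person composite. All names here are invented.

BENCH_DEPTH_M = 1.5  # assumed sensor-to-empty-seat distance, in metres

def someone_seated(bench_depths) -> bool:
    """A seated person shows up as depth readings closer than the empty seat."""
    return any(d < BENCH_DEPTH_M for d in bench_depths)

def render_frame(color_frame, bench_depths, character):
    if not someone_seated(bench_depths):
        return color_frame            # idle: plain mirrored video
    # Seated: composite the animated character into the mirrored scene,
    # staged against the known seat positions (and fire the haptics).
    return (color_frame, character)

print(render_frame("rgb_frame", [1.2, 1.6, 1.4], "character_on_left"))
```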
Posted in Artificial Intelligence