
Category Archives: Superintelligence

Multiple Intelligences, and Superintelligence – Freedom to Tinker

Posted: May 6, 2017 at 3:48 am

Superintelligent machines have long been a trope in science fiction. Recent advances in AI have made them a topic for nonfiction debate, and even planning. And that makes sense. Although the Singularity is not imminent (you can go ahead and buy that economy-size container of yogurt), it seems to me almost certain that machine intelligence will surpass ours eventually, and quite possibly within our lifetimes.

Arguments to the contrary don't seem convincing. Kevin Kelly's recent essay in Backchannel is a good example. His subtitle, "The AI Cargo Cult: The Myth of a Superhuman AI," implies that AI of superhuman intelligence will not occur. His argument centers on five myths:

He rebuts these myths with five heresies:

This is all fine, but notice that even if all five myths are false, and all five heresies are true, superintelligence could still exist. For example, superintelligence need not be like our own, or human, or without limit; it only needs to outperform us.

The most interesting item on Kelly's lists is heresy #1: that intelligence is not a single dimension, so "smarter than humans" is a meaningless concept. This is really two claims, so let's consider them one at a time.

First, is intelligence a single dimension, or are there different aspects or skills involved in intelligence? This is an old debate in human psychology, on which I don't have an informed opinion. But whatever the nature and mechanisms of human intelligence might be, we shouldn't assume that machine intelligence will be the same.

So far, AI practice has mostly treated intelligence as multi-dimensional, building distinct solutions to different cognitive challenges. Perhaps this is fundamental, and machine intelligence will always be a bundle of different capabilities. Or perhaps there will be a future unification of some sort, to create a single capability that can outperform people on all or nearly all cognitive tasks. At this point it seems like an open question whether machine intelligence is inherently multi-dimensional.

The second part of Kelly's claim is that, assuming intelligence is multi-dimensional, "smarter than humans" is a meaningless concept. This, to put it bluntly, is not correct.

To see why, consider that playing center field in baseball requires multi-dimensional skills: running, throwing, distinguishing balls from strikes, hitting accurately, hitting with power, and so on. Yet every single major league center fielder is vastly better than I am at playing center field, because they dominate me by far in every one of the component skills.

Like playing center field, intelligence may be multi-dimensional, and yet one entity can be more intelligent than another by being superior in every dimension.
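To make that dominance argument concrete, here is a minimal sketch in Python (the skill names and scores are invented for illustration, not taken from the essay): one profile is "better at center field" than another exactly when it is at least as good in every dimension and strictly better in at least one.

```python
# A minimal sketch of multi-dimensional comparison. Profile `a` dominates
# profile `b` if it is at least as good in every dimension and strictly
# better in at least one -- the sense in which a pro center fielder is
# simply "better," even though the skill is multi-dimensional.

def dominates(a: dict, b: dict) -> bool:
    assert a.keys() == b.keys(), "profiles must share the same dimensions"
    return all(a[k] >= b[k] for k in a) and any(a[k] > b[k] for k in a)

amateur = {"running": 4, "throwing": 3, "batting_eye": 3, "contact": 2, "power": 2}
pro     = {"running": 9, "throwing": 9, "batting_eye": 8, "contact": 8, "power": 9}

print(dominates(pro, amateur))  # True: superior in every dimension
print(dominates(amateur, pro))  # False
```

By the same test, a machine whose profile dominated ours on every cognitive dimension would be unambiguously "smarter than humans," multi-dimensionality notwithstanding.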

What this suggests about the future of machine intelligence is that we may live for quite a while in a state where machines are better than us at some aspects of intelligence and we are better than them at others. Indeed, that is the case now, and has been for years.

If machine intelligence remains multi-dimensional, then machines will surpass our intelligence not at a single point in time, but gradually, and in more and more dimensions of intelligence.


‘Artificial Superintelligence’ is the First Game from the Makers of the Hilarious ‘CARROT’ Apps, Coming May 11th – Touch Arcade

Posted: May 2, 2017 at 11:15 pm

You may be familiar with Grailr from their CARROT series of apps, which offer various functionality like to-do lists, calorie counting, and alarm clocks, with the added bonus of having them be run by the hilariously insulting CARROT artificial intelligence. CARROT's apps have garnered worldwide recognition and acclaim, and now Grailr is prepping the release of their first mobile game, which is very appropriately titled Artificial Superintelligence. The game has you building a sentient supercomputer, and based on the trailer it appears to be a bit like Reigns [$2.99] in that you'll be making decisions that will shape how the artificial intelligence turns out.

The official description of Artificial Superintelligence says "You'll train your AI to speak, recognize objects, and play games - all while trying to handle the bizarre requests of Silicon Valley residents. It has a hilarious branching story overflowing with sci-fi weirdness and a ridiculous amount of replayability, with 50+ endings and this crazy framing device where each playthrough takes place in a parallel universe." It appears the same type of humor found in the CARROT apps is alive and well in Artificial Superintelligence, and I think the premise sounds like it will make for a very interesting mobile game. The plan is to launch Artificial Superintelligence on May 11th for $2.99, and in the meantime, if you aren't yet familiar with the CARROT apps, you really should check out their App Store page.


BRAVO 25: YOUR A.I. THERAPIST WILL SEE YOU NOW Comes to the Actors Company – Broadway World

Posted: at 11:15 pm

BRAVO 25: Your A.I. Therapist Will See You Now, winner of the City of West Hollywood's One City One Pride Festival scholarship, makes its Los Angeles debut as part of the Hollywood Fringe Festival. This new one-woman show will be performed at The Actors Company, 916 N. Formosa Avenue, Los Angeles, CA 90046, June 3-25.

Writer & solo performer Eliza Gibson plays an A.I. (Artificial Intelligence) avatar therapist and six humans in a support group unlike any you've ever seen. Broken hearts, addicts in recovery, a polyamorous lesbian awaits the arrival of Superintelligence, someone needs a piece of somebody's liver? Wait. Amber, the A.I. therapist, likes donuts?

BRAVO 25 premiered in Fresno at The Rogue Festival in March 2017 to rave audience reviews: "I was absolutely blown away!" "Each character was superbly defined and imaginable." "I laughed like crazy!!!" An early excerpt was selected to compete in PianoFight's ShortLived competition in San Francisco in February 2016. The show has been in development over the last year, with excerpts performed at various Bay Area venues, including The Marsh, the Red Poppy Art House, and Solo Sundays. The show was produced by Beyond Words at Stage Werx Theatre in San Francisco in April 2017.

Directed by: David Ford. Written & Performed by: Eliza Gibson. BRAVO 25: Your A.I. Therapist Will See You Now will be performed in the Hollywood Fringe Festival at The Actors Company, 916 N. Formosa Avenue, Los Angeles, CA 90046.

Running time: 60 minutes. Admission: 16 and older. For tickets go to: hff17.com/4323


Informatica Journal – Call for Special Issue on Superintelligence – Institute for Ethics and Emerging Technologies

Posted: April 28, 2017 at 3:21 pm

Since the inception of the field of artificial intelligence, a prominent goal has been to create computer systems that reason as capably as humans across a wide range of fields. Over the last decade, this goal has been brought closer to reality. Machine learning systems have come to excel in many signal processing tasks and have achieved superhuman performance in learning tasks including the games of Go and Heads-up Poker. More broadly, we have seen large changes in every pore of our society. This remarkable progress raises the question of how the world may look if the field of artificial intelligence eventually succeeds in creating highly capable general-purpose reasoning systems. In particular, it has been hypothesized that such advances may lead to the development of a superintelligent agent: one whose capabilities greatly exceed those of humans across virtually all domains of interest.


Superintelligence and Public Opinion – NewCo Shift

Posted: April 27, 2017 at 2:24 am

Throughout 2017, I have been running polls on the public's appetite for risk regarding the pursuit of superintelligence. I've been running these on Surveymonkey, paying for audiences so as to minimize distortions in the data. I've spent nearly $10,000 on this project. I did this in about the most scientific way I could. It is not a passed-around survey, but rather paid polling across the entire American spectrum.

All in all, America can perhaps be best characterized as excited about the prospect of a superintelligence explosion, but also deeply afraid, skeptical, and adamantly opposed to the idea that we should plow forth without any regulation or plan. This is, it seems to me, exactly what is happening right now.

You can view the entire dataset here. I welcome any comments. I'm not a statistician, don't have a research assistant, and have a full-time job, so my ability to proofread and double-check things is limited (though I have tried). If you have comments, you can tweet at me @rickwebb.

This is not an essay debating the likely outcome of humanity's pursuit of superintelligence. This is not an essay trying to convince you that it's going to turn out one way or another. This is an article about democracy, risk, and the appetite for it.

Furthermore, this is not an essay about weak artificial intelligence: your Alexa, or Siri, or the algorithms that guide you when using Waze. Artificial intelligence comes in three flavors: weak AI, human-level AI, and strong AI or superintelligence (SAI).

Virtually all of the public policy discussions, news, and polling have centered around the first type of AI: weak AI. This is the one that will make the robots that will take your jobs. The Obama administration's report on artificial intelligence, for example, dedicated perhaps three paragraphs across its 45 pages to SAI. This was part of a larger push by the Obama administration, which also hosted several events. The primary focus there, too, was on weak AI. What little polling has been done on AI has dealt primarily with weak AI.

But it is superintelligence that arguably poses the much larger risks for mankind. And we are further along than most people realize.

Let me ask you a question: if you were in the ballot booth, and you saw the following question on a ballot, how would you answer?

The situation is this: in the next 100 years or so, there's a chance (no one is sure how good of a chance) that humanity will develop machines that achieve, and then surpass, human levels of intelligence. When we do that, most experts agree, there are two potential paths for humanity:

There's a lot of hyperbole and terminology around the debate about pursuing human-level artificial intelligence. It can be confusing. To get up to speed, I strongly recommend you read the two-part primer on the AI dilemma by the wonderful blog Wait But Why (part 1, part 2). Please consider taking a moment to read some of the articles linked above (or bookmark them for later). However you feel about the topic, it's probably worth it as a citizen to get up to speed on both sides of the debate, since arguably it will affect us all (or our children).

Now, if you've read all that, I suspect you have one of two responses, much like those outlined in the article. You'll read all the good stuff, get really into it, and think: that sounds great! I think that will happen!

Or you'll read all the bad stuff and think: that sounds terrible, and plausible! I don't want that to happen!

And guess what! Good for you, because whichever side you've taken, there is some super genius out there agreeing with you.

I've discussed these articles with lots of people. Here's what I've found: by and large, enthusiasm in favor of AI depends on an individual's belief in the worst-case scenario. We, as humans, have a strange belief that we can predict the future, and if we, personally, predict a positive future, we assume that's the one that's going to happen. And if we predict a negative future, we assume that'll happen.

But if we stop and take a moment, we realize that this is hogwash. We know, intellectually, we can't predict the future, and we could be wrong.

So let's take a moment and acknowledge what's really going on in this scenario: experts pretty much see two potential new paths for humanity when it comes to AI, good and bad.

And the reality is there is some probability that each one of them may come true.

It might be 100% likely that only the good could ever happen. It might be 100% likely that only the bad could ever happen. In reality, the odds are probably something other than 100-0 or 0-100. The odds might be, for example, 50-50. We don't really know.

(There is, of course, the likelihood that neither will happen, in which case, cool. Humanity goes on as it was, and this article becomes moot. So we are ignoring that for now).

Furthermore, because of the confusion around weak AI, human-level AI, strong AI/superintelligence, and what have you, I decided I would boil the central debate down to its core for the public: hey, there's a tech out there, it might make us immortal, but it might kill us. What do you think? This is, after all, the core dilemma. The nut. The part of the problem that most calls for the public's input.

So, in the end, we're right back where we started:

Now, in the question above, I'm making up the one-in-five probability numbers. It might be one in 100. It might be one in two. We just don't know. NO ONE KNOWS. Remember this. Many, many people will try to convince you that they know. All they are doing is arguing their viewpoint. They don't really know. No one can predict the future. Again, remember this.

In this essay, we are not arguing over whether or not this will happen. We are accepting the consensus of experts that it could happen. And we urge consideration of the fact that the actual likelihood it will happen is currently unknown.

This is also not the forum to discuss how we could ever even know the likelihood of a future event. Forecasting the future is, of course, an inexact science. We'll never really know, for sure, the likelihood of a future event. There are numerous forecasting methodologies out there that scientists and decision-makers use. I offer no opinion here. (With regard to superintelligence, the Wait But Why essay does a good job going over some of the methods we've utilized in the past, such as polling scientists at conferences.)

I've been aware of the potential of this issue for decades. But like you, I thought it was far off. Not my generation's problem. AI research, like many areas of research my sci-fi inner child loved, had been stalled for the last 30-50 years. We had little progress in space exploration, self-driving cars, solar power, virtual reality, electric cars, flying cars, etc. Like these other areas, AI research seemed on pause. I suspect that was partially because of the brain drain caused by building the Internet, and partially because some problems proved more difficult than expected.

Yet, much like each of these fields, AI research has exploded in the last five to ten years. The field is back, and back with a vengeance.

Up to now, AI policy has been defined almost exclusively by AI researchers, policy wonks, and tech company executives. Even our own government has been, by and large, absent from the conversation. I asked one friend knowledgeable about the executive branch's handle on the situation, and he said, in effect, that they're not unaware, but they have more pressing matters.

A massive amount of AI research is being done, and most of humanity has no idea how far along we are on the journey. To be fair, the researchers involved often have some good reasons for not shouting their research from the rooftops. They don't want to cause unnecessary alarm. They worry about a clampdown on their ability to publish what they do publish. The fact remains that the public is, by and large, being left in the dark.

I believe that when facing a decision that affects the entirety of humanity at a fundamental level (not just life or death but the very notion of existence), we all should be involved in the decision.

This is, admittedly, democratic. Many people believe in democracy in only a limited manner. They fret over the will of the masses, direct democracy, making decisions in the heat of the moment. This is all valid. Reasonable people can have a debate about these nuances. I do not seek to hash them all out here. I'm not saying we need a worldwide vote.

I am saying, however, that all of humanity should have a say in the pursuit of breakthroughs that put its very existence at risk. The will of the people should be our guide. And the better informed they are, the better decisions they will make.

There is a distinction between votes and polling. Polling guides policy, and voting, in its ideal form, affects behavior. A congresswoman may be in office because, say, 22% of all non-felon adults in her district put her there. She may then govern by listening to the will of the people as a whole through polls. Something similar should be applied here.

If this were classical economics, and humans were what John Stuart Mill dubbed homo economicus (perfectly rational beings, with all the relevant knowledge at hand), humanity could simply calculate the risk potential and likelihood and measure that against the likelihood of potential benefits. We would then come up with a decision. Reality is more complex. First, the potential downside and upside are both, essentially, infinite in economic terms, thus throwing this equation out of whack. And secondly, of course, we do not actually know the likelihood that SAI will lead to humanity's destruction. It's a safe guess that that number exists, but we don't know it.
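To see concretely why the textbook calculation breaks down, write out the naive expected-utility formula (a sketch; the probability $p$ and the payoff terms are placeholders, not figures from the polling):

\[ \mathbb{E}[U] = p \cdot U_{\text{good}} + (1 - p) \cdot U_{\text{bad}} \]

If the upside is effectively unbounded ($U_{\text{good}} \to +\infty$: immortality, abundance) and the downside is too ($U_{\text{bad}} \to -\infty$: extinction), then for any $0 < p < 1$ the expression takes the form $\infty - \infty$ and is undefined. Expected-value arithmetic alone cannot settle the question, no matter how precisely $p$ were known.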

Luckily, our very faults (that we are not homo economicus) also lead to our strength in this situation: we can deal with fuzzy numbers and the notion of infinity. Our brains contain multitudes, to borrow from Walt Whitman.

What, then, is the level of acceptable risk that will cause humanity to, at least by consensus, accept our pursuit of superintelligence?

It came as a shock to me, then, that the population at large hasn't really been polled about its views on the potential of a superintelligence apocalypse. There are several polls about artificial intelligence (this one by the British Science Association is a good example), but not so many about the existential risk potentially inherent in pursuing superintelligence. Those that exist are generally in the same mold as this one by 60 Minutes, inquiring about its audience's favorite AI movies, and where one would hide from the robot insurrection. It also helpfully asks if one should fear the robots killing us more than ourselves. One could argue that this is a leading question, and in any case, it's hardly useful for the development of public policy. Searching Google for "superintelligence polling" yields little other than polling of experts, and searching for "superintelligence public opinion" yields virtually nothing.

On the academic front, this December 2016 paper by Stanford's Ethan Fast and Microsoft's Eric Horvitz does a superb job surveying the landscape, relying primarily on press mentions and press tone, while acknowledging that the polling is light, and not specifically focused on superintelligence. Nonetheless, it is a fascinating read.

All in all, though, data around the existential risk mankind may face with the onset of superintelligence, and Americans' views on it, is sparse indeed.

So I set out to do it myself.

You can view my entire dataset here.

First, I set out to ask some top-level questions about superintelligence research. Now, I confess, I am not a pollster. I know these questions are sort of leading. I did my best to keep them neutral, but I've got my own biases. Nonetheless, it seemed worthwhile to just go ahead and ask a bunch of Americans what they think about the risks and potentials of superintelligence.

We asked 400 individuals four top-level questions regarding superintelligence research:

At a top level, Americans seem to find the prospect of superintelligence and its benefits exciting, though it is not a ringing endorsement. Some 47% of Americans characterized themselves as excited on some level.

Again, I caution that this data is limited. Furthermore, I am not a statistics expert, so I can't say (for example) what the margin of error is when you poll a lot of people across many income levels and then analyze the subsets by income, but I suspect the precision is not as good as in the base poll.
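For reference, the standard back-of-the-envelope formula makes that subgroup caveat concrete (a textbook approximation, not a calculation from the author's dataset). At 95% confidence, the margin of error for a polled proportion $p$ with sample size $n$ is roughly

\[ \text{MOE} \approx 1.96 \sqrt{\frac{p(1-p)}{n}} \]

In the worst case ($p = 0.5$), the full sample of $n = 400$ gives about $\pm 4.9$ percentage points, while a 100-person subgroup gives about $\pm 9.8$: slicing the poll by income or race roughly doubles the uncertainty.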

It would be awesome if someone started polling about this stuff. This is just one snapshot. Polls are more accurate over time.

And it would be amazing if people started polling other countries. Originally, when I planned this research, I wanted to poll across countries, but Surveymonkey didn't have such functionality. Since I started in January, they've begun offering some international polling. I hope someone gets on that. I am tapped out.

It would be great if people ran these polls at larger numbers, with better margins of error. Especially the poll of black Americans. Other subgroups, too: Surveymonkey doesn't offer much when it comes to Asian Americans, Hispanics, and other minority groups.

So. What does all this mean? After all, it's not like God will come down from on high and say, "Hey Americans! Right now you have an 80% likelihood of not dying if you give this superintelligence thing a go!" We will never really know the likelihood. But what this does tell us is that Americans are relatively risk-averse in this regard (though the math is a bit wonky when we are dealing with infinite risk and infinite reward). This is not surprising. Modern behavioral economic research has shown that humans value what they have over what they might gain in the future.

We also see from the dataset that Americans are more skeptical of institutions pursuing superintelligence research on their own. I suspect that if Americans knew the true extent of what's being done on this front, these trust numbers would continue to decline, but that's just a hunch. In any case, this data could be useful to institutions debating how and when to disclose their superintelligence research to the public; there may be some ticking time bombs surrounding the goodwill line item on some of these companies' balance sheets.


Whatever your interpretation, it's my hope that this can help spawn some efforts by policymakers, researchers, corporations, and academic institutions to gauge the will of the people regarding the research they are supporting or undertaking. I conclude with a quote from Robert Oppenheimer, one of the inventors of the atomic bomb: "When you see something that is technically sweet, you go ahead and do it, and you argue about what to do about it only after you have had your technical success. That is the way it was with the atomic bomb."

I pulled the Oppenheimer quote from a recent New Yorker article about CRISPR DNA editing and the scientist Kevin Esvelt's efforts to bring the research into the open. "We really need to think about the world we are entering," he says. And elsewhere: "To an appalling degree, not that much has changed. Scientists still really don't care very much about what others think of their work."

I'll save my personal interpretation of the data for another essay. I've tried to keep editorializing to a minimum. This is not to say that I haven't formed opinions when looking at this data. I hope you do too.


Apple’s Tom Gruber, Co-Founder of Siri, Spoke at TED2017 Today about Augmented Memories and more – Patently Apple

Posted: at 2:24 am

Thomas Robert "Tom" Gruber is a computer scientist, inventor, and entrepreneur with a focus on systems for knowledge sharing and collective intelligence. He did foundational work in ontology engineering and is well known for his definition of ontologies in the context of artificial intelligence. He's better known to Apple fans as the co-founder of Siri Inc that Apple acquired in 2010. One of the earliest patents from Gruber for Apple regarding Siri and Active Ontology dates back to 2010. One of the key patent figures is presented below. Gruber spoke today at TED2017 being held in Vancouver, Canada.

Tom Gruber asked the audience: "How smart can our machines make us? What's the purpose of artificial intelligence? Is it to make machines intelligent, so they can do automated tasks we don't want to do, beat us at complex games like chess and Go and, perhaps, develop superintelligence and become our overlords?" No, says Gruber: instead of competing with us, AI should augment and collaborate with us. Gruber added that "superintelligence should give us superhuman abilities."

Taking us from the first intelligent assistant he created 30 years ago, which helped a cerebral palsy patient communicate, to Siri, which helps us do everything from navigating cities to answering complex questions, Gruber explained his vision for "humanistic AI": machines designed to meet human needs by collaborating with and augmenting us. Gruber invites us into a future where superintelligent AI can augment our memories and help us remember the name of everyone we've ever met, every song we've ever heard, and everything we've ever read.

Gruber further noted that "We have a choice in how we use this powerful tech. We can use it to compete with us or to collaborate with us to overcome our limitations and help us do what we want to do, only better. Every time a machine gets smarter, we get smarter."

Gruber believes that AI could one day play a role in helping those with dementia and Alzheimer's retain more memories, so that those afflicted could have a life of dignity and connection instead of a life of isolation.

When Gruber's full TED talk is made publicly available, we'll post a follow-up report in the coming months.


David Hasselhoff Stars in a New Short Film, and All His Lines Were Written by AI – Singularity Hub

Posted: at 2:23 am

Last year, an AI named Benjamin wrote a weird and entertaining science fiction short film called Sunspring. Now, Benjamin's back in a new film titled It's No Game. Like its predecessor, the short is a surprisingly effective blend of human and machine talent, plus a healthy dose of the surreal.

Watch the film below to see David Hasselhoff, compelled by nanobots, reel off algorithmically mashed up lines from Knight Rider and Baywatch scripts.

Artificial intelligence is perhaps a bit overhyped currently. Rapid progress in a variety of difficult AI problems has us, at times, confusing future possibilities with present capability. The AI timeline tends to go missing, and human obsolescence in the face of superintelligence appears imminent.

Because AI competes with our proudest assets, such as intelligence and creativity, the response is fearful. What exactly is our value in a world where we're outperformed by algorithms at basically everything? It's a fascinating and important question. There is no answer. But we have time to figure it out.

AI is still the narrow type. Most algorithms are excellent, even superhuman, at the task for which they're designed, but ill-suited for anything else. And there are yet some tasks just beyond AI's reach. Writing is one of them. It's No Game is self-aware enough to call out the worry (writers replaced by robots) right next to the still-glitchy (but awesome) output of an artificial neural network.

Benjamin's writing relies on whatever content is fed into it. In this case, instead of X-Files scripts (as in Sunspring), we're treated to multiple segments inspired by Shakespeare, Golden Age Hollywood, Aaron Sorkin's fast-paced politics, and of course, Baywatch and Knight Rider.

The output tends toward the nonsensical, but mostly, that's okay. Quick, dense lines from Aaron Sorkin's work, for example, can be as much about the emotional sense communicated by the actors as they are about content.

"People will watch a Sorkin movie and not take in whats being said, [but] understand the thrust of the scene and know whats going on," says the Walking Dead'sThomas Payne, who plays one of the screenwriters in the film.

Which is basically why Benjamin's stuff works here. It's up to the cast and crew's sense of timing and delivery to make the lines meaningful. David Hasselhoff time-travels to make those disembodied '80s snippets into an unmistakable resurrection of the Hoff himself. And the freaked-out, confused scene at the end ironically echoes our larger existential worries.

We may be headed into a world of artificial general intelligence, and that world may arrive faster than conservative guesses suggest. But make no mistake, even narrow AI is very powerful. And artists, entrepreneurs, and researchers will no doubt continue to work with such algorithms to make surprising new creations, from the purely useful to the bizarre and fascinating.

(Check out Annalee Newitz's article in Ars Technica for an excellent and comprehensive behind-the-scenes look at the making of It's No Game.)

Image Credit: It's No Game/Ars Technica Videos/YouTube


The Guardian view on protein modelling: the answer to life, the universe and everything – The Guardian

Posted: April 21, 2017 at 2:39 am

Designing medicines to target diseases requires knowing what proteins are involved and their form. Scientists have identified a protein which is a key driver for the growth and spread of breast cancer. Photograph: Rui Vieira/PA

When Eliezer Yudkowsky, one of the world's top artificial intelligence theorists, mused about how superintelligent robots might wipe out humans, he speculated that perhaps they would solve one of science's holy grails: predicting protein structure from DNA information. In Mr Yudkowsky's words, these robots would then "synthesise customised proteins ... building even more sophisticated molecular machines. Imagine tiny invisible synthetic bacteria, with tiny onboard computers, hiding inside your bloodstream and everyone else's. And then, simultaneously, they release one microgram of botulinum toxin. Everyone just falls over dead." Mr Yudkowsky's apocalyptic scenario rests on something science has pondered, with no answer, for decades: why can't we say what determines a protein's shape?

This is not some idle speculation. Proteins are the bedrock of living systems, intimately involved in every physiological process from triggering an immune response to thinking. Good health requires a fine balance of proteins. An imbalance, and disease often strikes. Cancer is traced to an overproduction of proteins. Misfolding proteins have been linked to type 2 diabetes, while the strange bundling of them is thought to be behind the death of brain cells in Parkinson's disease. Proteins' function is dependent on their form, which is the result of a folding up of hundreds of amino acids (their constituent parts) into a specific and complex 3D structure. That configuration determines what the protein does: whether it becomes an enzyme to accelerate a chemical reaction, or a receptor passing signals to a cell's molecular machinery. Crucially, a drug can alter a protein's function by binding to it in a particular spot. Designing medicines to target diseases requires knowing what proteins are involved and their form. After a half century we can identify 100,000 protein shapes. But we have a database of 100m proteins. That is why we have few molecular keys capable of picking the lock to understanding disease-causing proteins.

Why has protein structure proved so hard to crack? Proteins can be probed with X-rays, but that means first purifying them and then growing them as crystals in a laboratory. It's a lengthy process. Some do not seem to crystallise at all. There are glimmers of hope. David Jones at Britain's Francis Crick Institute, which has just been awarded a €2m European grant, uses new computational techniques to predict novel protein structures. But the real prize is the one Mr Yudkowsky identified: by looking at DNA, could one predict the shape of the proteins it produces? Since DNA encodes the amino-acid building blocks of an organism's proteins, we know their composition. This is not much help with their structure. Human proteins can fold up in an astonishing number of ways: about a googol cubed, or 10 to the power of 300. There's not enough computing power to work out all these possibilities and thus find the optimum. Less than 10% of human DNA codes for and regulates proteins. But we have no idea how altering the gene sequences changes proteins' forms and functions. If we did understand this well enough to tamper with it to our advantage, it would likely lead to all sorts of ethical dilemmas, such as growing older without ageing. Maybe that is why Mr Yudkowsky considered it a task only solvable by a superintelligence so clever that its very existence might spell our end.
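The 10-to-the-power-of-300 figure is what simple combinatorics gives (a back-of-the-envelope illustration in the spirit of Levinthal's famous paradox; the residue and conformation counts are illustrative assumptions, not the Guardian's). If a protein chain has 300 amino-acid residues and each residue can adopt roughly 10 distinct local conformations, the number of possible overall shapes is about

\[ 10^{300} = (10^{100})^3 \]

a googol cubed. Even checking a billion ($10^9$) conformations per second would take about $10^{291}$ seconds, against a universe roughly $4 \times 10^{17}$ seconds old, which is why brute-force enumeration can never find the optimum fold.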


Limits to the Nonparametric Intuition: Superintelligence and Ecology – Lifeboat Foundation (blog)

Posted: April 12, 2017 at 8:53 am

In a previous essay, I suggested how we might do better with the unintended consequences of superintelligence if, instead of attempting to pre-formulate satisfactory goals or providing a capacity to learn some set of goals, we gave it the intuition that knowing all goals is not a practical possibility. Instead, we can act with a modest confidence, having worked to discover goals, developing an understanding of our discovery processes that allows asserting an equilibrium between the risk of doing something wrong and the cost of work to uncover more stakeholders and their goals. This approach promotes moderation, given that undiscovered goals may contradict any particular action. In short, we'd like a superintelligence that applies the non-parametric intuition: the intuition that we can't know all the factors but can partially discover them with well-motivated trade-offs.

However, I've come to the perspective that the non-parametric intuition, while correct, on its own can be cripplingly misguided. Unfortunately, going through a discovery-rich design process doesn't promise an appropriate outcome. It is possible for all of the apparently relevant sources not to reflect significant consequences.

How could one possibly do better than accepting this limitation, that relevant information is sometimes not present in all apparently relevant information sources? The answer is that, while in some cases it is impossible, there is always the background knowledge that all flourishing is grounded in material conditions, and that staying grounded in these conditions is one way to know that important design information is missing and seek it out. The Onion article "Man's Garbage To Have Much More Significant Effect On Planet Than He Will" is one example of a common failure at living in a grounded way.

In other words, staying grounded means recognizing that just because we do not know all of the goals informing our actions does not mean that we do not know any of them. There are some goals that are given to us by the nature of how we are embedded in the world and cannot be responsibly ignored. Our continual flourishing as sentient creatures means coming to know and care for those systems that sustain us and creatures like us. A functioning participation in these systems at a basic level means we should aim to see that our inputs are securely supplied, our wastes properly processed, and the supporting conditions of our environment maintained.

Suppose that there were a superintelligence whose individual agents have a capacity, compared to us, such that we are to them as mice are to us. What might we reasonably hope for from the agents of such an intelligence? My hope is that these agents are ecologists who wish for us to flourish in our natural lifeways. This does not mean that they leave us all to our own preserves, though hopefully they will see the advantage to having some unaltered wilderness in which to observe how we choose to live when left to our own devices. Instead, we can be participants in patterned arrangements aimed to satisfy our needs in return for our engaged participation in larger systems of resource management. By this standard, our human systems might be found wanting by many living creatures today.

Given this, a productive approach to developing superintelligence would not only be concerned with its technical creation, but also with being in a position to demonstrate how all can flourish through good stewardship, setting a proper example for when these systems emerge and are trying to understand what goals should be like. We would also want the facts of its and our material conditions to be readily apparent, so that it doesn't start from a disconnected and disembodied basis.

Overall, this means that in addition to the capacity to discover more goals, it would be instructive to supply this superintelligence with a schema for describing the relationships and conditions under which current participants flourish, as well as the goal to promote such flourishing whenever the means are clear and circumstances indicate such flourishing will not emerge of its own accord. This kind of information technology for ecological engineering might also be useful for our own purposes.

What will a superintelligence take as its flourishing? It is hard to say. However, hopefully it will find sustaining, extending, and promoting the flourishing of the ecology that allowed its emergence to be an inspiring, challenging, and creative goal.


The Nonparametric Intuition: Superintelligence and Design Methodology – Lifeboat Foundation (blog)

Posted: April 7, 2017 at 9:09 pm

I will admit that I have been distracted from both popular discussion and the academic work on the risks of emergent superintelligence. However, in the spirit of an essay, let me offer some uninformed thoughts on a question involving such superintelligence based on my experience thinking about a different area. Hopefully, despite my ignorance, this experience will offer something new or at least explain one approach in a new way.

The question about superintelligence I wish to address is the paperclip universe problem. Suppose that an industrial program, given the goal of maximizing the number of paperclips, is otherwise equipped with a general intelligence program so as to tackle this objective in the most creative ways, as well as internet connectivity and text information processing facilities so that it can discover other mechanisms. There is then the possibility that the program does not take its current resources as appropriate constraints, but becomes interested in manipulating people and directing devices to cause paperclips to be manufactured without regard for any other objective, leading in the worst case to widespread destruction but a large number of surviving paperclips.

This would clearly be a disaster. The common response is to conclude that when we specify goals to programs, we should be much more careful about specifying what those goals are. However, we might find it difficult to formulate a set of goals that doesn't admit some kind of loophole or paradox which, if pursued with mechanical single-mindedness, is either similarly narrowly destructive or self-defeating.

Suppose that, instead of trying to formulate a set of foolproof goals, we find a way to admit to the program that the set of goals we've described is not comprehensive. We should aim for the capacity to add new goals, with a procedural understanding that the list may never be complete. If done well, we would have a system that would couple this initial set of goals to the set of resources, operations, consequences, and stakeholders initially provided to it, with an understanding that those goals are only appropriate to the initial list, and that finding new potential means requires developing a richer understanding of potential ends.

How can this work? It's easy to imagine such an algorithmic admission leading to paralysis, either from finding contradictory objectives that apparently admit no solution, or an analysis paralysis that perpetually requires confirming there are no undiscovered goals before proceeding. Alternatively, stated incorrectly, it could backfire, with finding more goals taking the place of making more paperclips as it proceeds single-mindedly to consume resources. Clearly, a satisfactory superintelligence would need to reason appropriately about the goal discovery process.

There is a profession that has figured out a heuristic form of reasoning about goal discovery processes: designers. Designers have coined the phrase "the fuzzy front end" for the very early stages of a project, before anyone has figured out what it is about. Designers engage in low-cost elicitation exercises with a variety of stakeholders. They quickly discover who the relevant stakeholders are and what impacts their interventions might have. Adept designers switch back and forth rapidly from candidate solutions to analyzing the potential impacts of those designs, making new associations about the area under study that allow for further goal discovery. As designers undertake these explorations, they advise going slightly past the apparent wall of diminishing returns, often using an initial brainstorming session to reveal all of the obvious ideas before undertaking a deeper analysis. Seasoned designers develop an understanding of when stakeholders are holding back and need to be prompted, or when equivocating stakeholders should be encouraged to move on. Designers will interleave a series of prototypes, experiential exercises, and pilot runs into their work, to make sure that interventions really behave the way their analysis seems to indicate.

These heuristics correspond well to an area of statistics and machine learning called nonparametric Bayesian inference. Nonparametric does not mean that there are no parameters, but instead that the parameters are not given, and that inferring that there are further parameters is part of the task. Suppose that you were to move to a new town, and ask around about the best restaurant. The first answer would definitely be new, but as one asked more, eventually you would start getting new answers more rarely. The likelihood of a given answer would also begin to converge. In some cases the answers will be more concentrated on a few answers, and in some cases the answers will be more dispersed. In either case, once we have an idea of how concentrated the answers are, we might see that a particular period of not discovering new answers might just be unlucky and that we should pursue further inquiry.
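The restaurant story maps directly onto a standard nonparametric construction, often called the Chinese restaurant process, in which the chance of hearing a brand-new answer shrinks as answers accumulate. Here is a minimal simulation sketch in Python (the concentration parameter `alpha`, which controls how dispersed the answers are, is an illustrative choice, not something from the essay):

```python
import random

def ask_around(num_people: int, alpha: float = 2.0) -> list:
    """Simulate answers to "what's the best restaurant?" as a Chinese
    restaurant process: after n answers, the next person repeats existing
    answer i with probability count[i] / (n + alpha), and gives a
    never-before-heard answer with probability alpha / (n + alpha)."""
    counts = []   # counts[i] = how many people have named restaurant i
    answers = []
    for n in range(num_people):
        if random.random() < alpha / (n + alpha):
            counts.append(1)                  # a brand-new answer
            answers.append(len(counts) - 1)
        else:
            # repeat an existing answer, in proportion to its popularity
            i = random.choices(range(len(counts)), weights=counts)[0]
            counts[i] += 1
            answers.append(i)
    return answers

random.seed(0)
history = ask_around(200)
print("distinct answers after  10 asks:", len(set(history[:10])))
print("distinct answers after 200 asks:", len(set(history)))
# New answers arrive quickly at first, then more and more rarely.
```

Inferring how concentrated the answers are (the role `alpha` plays here) from the pattern observed so far is exactly the judgment described above: deciding whether a lull in new answers means inquiry is done, or merely unlucky.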

Asking why provides a list of critical features that can be used to direct different inquiries that fill out the picture. What's the best restaurant in town for Mexican food? Which is best at maintaining relationships with local food providers/has the best value for money/is the tastiest/has the most friendly service? Designers discover aspects about their goals in an open-ended way that allows discovery to act in quick cycles of learning through taking on different aspects of the problem. This behavior would work very well for an active learning formulation of relational nonparametric inference.

There is a point at which information gathering activities are less helpful at gathering information than attending to the feedback to activities that more directly act on existing goals. This happens when there is a cost/risk equilibrium between the cost of more discovery activities and the risk of making an intervention on incomplete information. In many circumstances, the line between information gathering and direct intervention will be fuzzier, as exploration proceeds through reversible or inconsequential experiments, prototypes, trials, pilots, and extensions that gather information while still pursuing the goals found so far.
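One hedged way to write down that equilibrium (a sketch with illustrative symbols, not notation from the essay): let $c$ be the marginal cost of one more discovery activity, $q$ the estimated probability that it surfaces a previously unknown stakeholder or goal (the quantity a nonparametric model, like the restaurant example above, can estimate), and $L$ the expected loss from intervening while such a goal remains undiscovered. Discovery should continue while

\[ q \cdot L > c \]

and direct intervention becomes defensible once the inequality reverses; reversible prototypes and pilots occupy the fuzzy middle ground where the two sides are comparable.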

From this perspective, many frameworks for assessing engineering discovery processes make a kind of epistemological error: they assess the quality of the solution from the perspective of the information that has been gathered, paying no attention to the rates and costs at which that information was discovered, or whether the discovery process is at equilibrium. This mistake comes from seeing the problem as finding a particular point in a given search space of solutions, rather than taking the search space itself as a variable requiring iterative development. A superintelligence equipped to see past this fallacy would be unlikely to deliver us a universe of paperclips.

Having said all this, I think the nonparametric intuition, while right, can be cripplingly misguided without being supplemented with other ideas. To consider discovery analytically is not to discount the power of knowing about the unknown, but it doesn't intrinsically value non-contingent truths. In my next essay, I will take on this topic.

For a more detailed explanation, and an example of how to extend engineering design assessment to include nonparametric criteria, see "The Methodological Unboundedness of Limited Discovery Processes," FormAkademisk 7:4.
