Cars 3 gets back to what made the franchise adequate – Vox

To call the Cars movies the black sheep of Pixar's filmography does a disservice to black sheep. The first one (released in 2006) is considered the one major black mark in the animation studio's killer run from 1995's Toy Story to 2010's Toy Story 3, and 2011's Cars 2 is the only Pixar film with a rotten score on Rotten Tomatoes.

And, okay, I won't speak up too heartily for Cars 2 (may it always be the worst Pixar movie), but the original Cars is a good-natured, even-keeled sort of film, one that celebrates taking it slow every once in a while. It's no Incredibles or Wall-E, but few movies are. Its heart is in the right place.

Thus, it's a relief that Cars 3 skews more toward the original flavor than the sequel (a spy movie-inflected mess that revealed a Pixar slightly out of its depth with something so action-heavy). It's not at the level of that first film, but its amiable, ambling nature keeps it from becoming too boxed in by its needlessly contorted plot (which all but spoils its own ending very early on, then spends roughly an hour futilely avoiding said ending).

Like all Pixar movies, Cars 3 is gorgeous; the landscapes the characters race through are more photorealistic than ever, recalling The Good Dinosaur (another recent Pixar misfire that nonetheless looked great). But like most of the studio's 2010s output, its storytelling is perhaps too complicated to really register. The movie is constantly trying to outmaneuver itself, leading to a film that's pleasant but not much more.

Still, that doesn't mean it's devoid of value. Here are six useful ways of thinking about Cars 3.

This is the angle Disney is pushing most in the trailers for the film. Lightning McQueen (Owen Wilson), the hotshot race car who learned to take it easy in Cars, has succumbed to the ravages of time, as we all must. Newer, sleeker race cars are outpacing him on the track, and he's desperate to make a comeback.

But Cars 3 resists the most feel-good version of that story, to its credit. Lightning isn't going to suddenly become faster in his middle age. If he wants to beat the young whippersnappers, he'll have to either outsmart them or out-train them. But Lightning isn't one for high-tech gadgets that might help him eke out a few more miles per hour from his chassis. Instead, he goes on a random tour of the American South, visiting hallowed racetracks.

It gives the movie a tried-and-true spine (old-fashioned know-how versus new tech), but it also means that every time the story seems to be gaining momentum, it veers completely off course in a new direction. Pixar used this tendency to let its stories swerve all over the place to great effect in 2012's Brave and 2013's Monsters University, but Cars 3 has maybe a few too many head fakes. By the time Lightning tries to tap into his roots by visiting legendary racers in North Carolina, I felt slightly checked out.

Seriously! This is a major part of Cars 3's climax!

The movie argues that the best thing Lightning (who's always been coded as a good ol' Texas boy) can do to help preserve his legacy is try to find ways to hold open doors for cars that are not at all like himself. And the leader of the new class of racers, Jackson Storm, is voiced by Armie Hammer as a sleek, might-makes-right bully who never nods to the fact that he's so much faster because he's got access to a lot of great technology.

A major scene at the film's midpoint involves Lightning learning that his trainer, Cruz Ramirez (voiced by the comedian Cristela Alonzo), always wanted to be a racer herself, but felt intimidated by how she wasn't like the other race cars the one time she tried out.

How did Lightning build up the confidence to race? Cruz asks. Lightning shrugs. He doesn't know. He's just always had it.

Just the description of this scene (or the even earlier scene where Cruz dominates a simulated race) probably telegraphs where all of this is headed. But it's still neat that Pixar used its most little-boy-friendly franchise to make an argument for more level, more diverse playing fields. Except...

The Cars movies have always moved merchandise, and even if all involved parties insist they continue to make Cars movies for reasons other than because they sell toys, c'mon. The fact that the movie's major new character is an explicitly female car, who gets a variety of new paint jobs throughout the film, no less, feels like somebody in a boardroom somewhere said, "Yes, but what if we had a way to make the toys from these movies appeal to little girls as well?"

(And that's to say nothing of the numerous other new characters introduced throughout the film, all of whom your children will simply have to own the action figures for. My favorite was a school bus named Miss Fritter who dominates demolition derbies.)

So it goes with Disney, one of the best companies out there when it comes to diversifying the points of view that are represented in its films, but always, as the cynics among us are prone to assume, because it sees those points of view as a way to sell you more stuff.

Kerry Washington plays a new character named Natalie Certain, a journalist who pops up every so often to point out how her data can't lie and how Jackson Storm has a 96 percent probability of winning the film's climactic race. I'll let you draw your own conclusions from there.

When Pixar made Cars 2, it faced a major challenge. The first film, dealing with Lightning's slow embrace of small-town life, didn't leave much room for another story, and its second-most-important character, Doc Hudson, was voiced by Paul Newman, who died between the two films.

So Cars 2 made a hard pivot into spy movie action, ramped up the role of kiddie favorite Tow Mater (voiced by Larry the Cable Guy), and largely lost the soul of the first film.

Cars 3 is most successful when it finds ways to reintegrate Lightning into the tone and world of the first film, as he tries to grapple with his legacy and realizes Doc (who appears in flashbacks that seem as if they might have been cobbled together from outtakes and deleted scenes Newman recorded for the first film) might offer him wisdom even from beyond the grave. (Since cars can't really die, Doc is just not around anymore. But, again, c'mon.)

However, because Lightning already learned his lesson about appreciating life and taking it easy, there's just not a lot to mine here. Cars 3 makes some awkward attempts to suggest technology is no replacement for really experiencing life, and Lightning visits other famous race cars, even detouring to hang out in a bar with famous, groundbreaking cars voiced by Margo Martindale and Isiah Whitlock Jr.

But the movie struggles to figure out how to make all of this mesh, right up until the very end, when it finally nods toward keeping one eye on the past but always letting the future take precedence.

Many thinkers who consider the question of what happens when human beings finally create an artificial intelligence that is on the same level as the human brain have concluded that it will not take very long for such a being to evolve into a superintelligence, which is any artificial intelligence that's just a smidgen smarter than the smartest human. And from there, it will continue to improve, and we will be left in the dust, ruled, effectively, by our robot successors.

Anyway, the Cars movies don't take place in an explicitly post-human future, but this is the biggest c'mon of them all. At some point, self-driving cars rose up, they killed us all, and now they long for the good old days, not realizing those days are impossible to return to.

Thus, the rise of Jackson and his pals allows the film to broach the subject of those early days of artificial superintelligence, with Lightning in the role of humanity. What will happen when we try to keep up with beings that are simply made better than us? Will we accept our obsolescence with grace? Or will we push back with all we have? Cars 3 suggests no easy answers.

Lou is better than, say, Lava (the odious singing volcano short attached to Inside Out). With that said, it is also about how all of the toys in a lost-and-found box become a sort of toy golem that wanders a playground, returning toys to children and making sure bullies pay for their misdeeds.

The audience I saw Lou with ate it up, but reader, I found it terrifying. If Toy Story posited a world where toys wake up when you're not around, Lou posits a world where toys have no knowledge of what it means to be human but are cursed to make an attempt all the same: strange, shambling beasts from outside of time, wandering our playgrounds.

Make it stop. Kill it with fire.

Cars 3 opens in theaters Friday, June 16, with early screenings on the evening of Thursday, June 15.


Using AI to unlock human potential – EJ Insight

The artificial intelligence (AI) wave has spread to Hong Kong. Hang Seng Bank plans to introduce Chatbot, with natural language processing and machine learning capabilities, to provide customers with credit card offers and banking services information, while Bank of China (Hong Kong) is studying robot technology that can help improve the handling of customer queries.

BOCHK also plans to create a shared big brain to provide customers with personal services. The Chartered Financial Analyst Institute, meanwhile, is reported to be planning to incorporate topics on AI, robo-advisers and big data analysis methods into its 2019 membership qualification examination, so as to meet future market needs.

AI has a broad meaning. From a technical perspective, it includes: deep learning (learning from a large pool of data to approximate human intelligence, as AlphaGo does in the game of Go), robotics (handling pre-determined tasks that demand extreme caution or are dangerous, e.g. surgery, dismantling bombs, surveying damaged nuclear power plants), digital personal assistants (such as Apple's Siri and Facebook's M), and querying (finding information in a huge database speedily and accurately).

However, how does AI progress? In The AI Revolution: The Road to Superintelligence, the author points out that there are three stages of development in AI:

Basic: Artificial Narrow Intelligence, or Weak AI, i.e. AI that specializes in a certain scope. AlphaGo can beat the world's best players at Go, but I am afraid it is unable to guide you to nearby restaurants or book a hotel room for you. The same logic applies to bomb-disposal robots and the AI that identifies cancer cells within seconds.

Advanced: Artificial General Intelligence, or Strong AI, i.e. a computer that thinks and operates like a human being. How do humans think? I have just read a column by a connoisseur. There are many factors affecting how we choose a place to eat, like our mood, the type and taste of food, price, time, etc., and the deciding factors are not the same every time. See, it is really complicated. This is where the so-called Turing Test comes in. Alan Turing, a British scientist born over 100 years ago, said: "If a computer makes you believe that it is human, it is artificial intelligence."

Super-advanced: Artificial Superintelligence. Nick Bostrom, a philosopher at Oxford University, has been thinking about AI's relationship with mankind for years. He defines superintelligence as "an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills."

Although we are still at the stage of Artificial Narrow Intelligence, AI applications are already very extensive. Combined with big data, AI is applied in areas like finance (wealth management, fraud detection), medicine (diagnosis, drug development), media and advertising (facial-recognition advertising, tailor-made services), retail (Amazon.com), law (speedy retrieval of information), education (virtual teachers), manufacturing, transportation, agriculture and so on.

MarketsandMarkets, a research company, estimated that the AI market would reach US$16 billion (about HK$124 billion) by 2022, with a compound annual growth rate of more than 60 percent.

However, this estimate is very conservative compared with the mainland, according to Jiang Guangzhi, a key member of the Beijing Municipal Commission of Economy and Information Technology. In Beijing alone, AI and related industries are now worth more than 150 billion yuan (about HK$170 billion).

In any case, AI definitely holds a key position in the field of science and technology, and it has substantial potential. However, I have just read a report in which a top international AI expert and HKUST professor complained that the government and private sector are too passive about scientific research in Hong Kong, resulting in a brain drain.

Experts helped Huawei, a Chinese telecom and technology giant, set up a laboratory in Hong Kong, but they said they found it difficult to recruit even 50 people in the city with the required skills.

This is really alarming. Hong Kong has world-class teachers to attract the elite to study here.

However, if we do not strive to create an environment for talent to pursue careers here, I am afraid we will keep experiencing talent flight.

Recently, the Financial Development Board published two research reports that deal with issues related to development and application of financial technology in Hong Kong. I hope the government and society will work together and speed up efforts to bring out the potential of Hong Kong in AI.


Are You Ready for the AI Revolution and the Rise of Superintelligence? – TrendinTech

We've come a long way as a whole over the past few centuries. Take a time machine back to 1750 and life would be very different indeed. There was no electricity, communicating with someone long distance was virtually impossible, and there were no gas stations or supermarkets anywhere. Bring someone from that era to today's world, and they would almost certainly have some form of breakdown. I mean, how would they cope seeing capsules with wheels whizz around the roads, electrical devices everywhere you look, and even just talking to someone on the other side of the world in real time? These are all simple things that we take for granted. But someone from a few centuries ago would probably think it was all witchcraft, and could even possibly die.

But then imagine that person went back to 1750 and became jealous that we got to see their reaction of awe and amazement. They might want to re-create that feeling in someone else. So, what would they do? They would take the time machine and go back to, say, 1500 and bring someone from that era to their own. The jump from 1500 to 1750 would, of course, produce some shocks, but it wouldn't be anything as extreme as the difference between 1750 and today. So the 1500 person would still almost certainly be startled by a few things, but it's highly unlikely they would die. In order for the 1750 person to see the same kind of reaction that we would have, they would need to travel back much, much farther, to say 24,000 BC.

For someone to actually die from the shock of being transported into the future, they'd need to go far enough ahead that a Die Progress Unit (DPU) is achieved. In hunter-gatherer times, a DPU took over 100,000 years; thanks to the Agricultural Revolution, it took only around 12,000 years during that period. Nowadays, given the rate of advancement since the Industrial Revolution, a DPU would happen after being transported just a couple hundred years forward. Futurist Ray Kurzweil calls this pattern of human progress moving quicker as time goes on the Law of Accelerating Returns, and it is all down to technology.

This theory also works on smaller scales. Cast your mind back to that great 1985 movie, Back to the Future. In the movie, the past era they went back to was 1955, where there were various differences, of course. But if we were to remake the same movie today and use 1985 as the past era, the differences would be more dramatic. Again, this all comes down to the Law of Accelerating Returns. Between 1985 and 2015 the average rate of advancement was much higher than between 1955 and 1985. Kurzweil suggests that by 2000 the rate of progress was five times faster than the average rate during the 20th century. He also suggests that between 2000 and 2014 another 20th century's worth of progress happened, and that by 2021 another will have happened, taking just seven years. This means that, keeping with the same pattern, in a couple of decades a 20th century's worth of progress will happen multiple times in one year, and eventually, in one month.

If Kurzweil is right, then by the time 2030 gets here we may all be blown away by the technology around us, and by 2050 we may not even recognize anything. But many people are skeptical of this, for three main reasons:

1. Our own experiences make us stubborn about the future. Our imagination takes our experiences and uses them to predict future outcomes. The problem is that we're limited in what we know, and when we hear a prediction that goes against what we've been led to believe, we often have trouble accepting it as the truth. For example, if someone were to tell you that you'd live to be 200, you would think that was ridiculous because of what you've been taught. But at the end of the day, there has to be a first time for everything, and no one knew airplanes would fly until they gave it a go one day.

2. We think in straight lines when we think about history. When trying to project what will happen in the next 30 years, we tend to look back at the past 30 years and use that as some sort of guideline as to what's to come. But in doing that we aren't considering the Law of Accelerating Returns. Instead of thinking linearly, we need to be thinking exponentially. In order to predict anything about the future, we need to picture things advancing at a much faster rate than they are today.

3. The trajectory of recent history tells a distorted story. Exponential growth isn't smooth; progress in this area happens in S-curves. An S-curve is created when the wave of progress of a new paradigm sweeps the world, and it happens in three phases: slow growth, rapid growth, and a leveling off as the paradigm matures. If you view only a small section of the S-curve, you'll get a distorted picture of how fast things are progressing.
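
As a rough, hand-rolled illustration of that last point (not something from the original article), the logistic function below traces a classic S-curve: sampled over a wide window it shows all three phases, while a narrow window near the middle looks like steady, almost linear growth.

```python
import math

def logistic(t, ceiling=1.0, rate=0.5, midpoint=0.0):
    """Classic S-curve: slow growth, then rapid growth, then leveling off."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Sampled over a wide window, all three phases are visible...
print([round(logistic(t), 3) for t in range(-10, 11, 2)])

# ...but a narrow window near the midpoint looks like steady, almost linear growth.
print([round(logistic(t), 3) for t in (-1, 0, 1)])
```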

What do we mean by AI?

Artificial intelligence (AI) is big right now, bigger than it has ever been. But there are still many people out there who get confused by the term, for various reasons. One is that in the past we've associated AI with movies like Star Wars, Terminator, and even The Jetsons. Because these are all fictional characters, it makes AI still seem like a sci-fi concept. Also, AI is such a broad topic, ranging from self-driving cars to your phone's calculator, that getting to grips with all it entails is not easy. Another reason it's confusing is that we often don't even realize when we're using AI.

So, to try and clear things up and get a better idea of what AI is, first stop thinking about robots. Robots are simply shells that can house AI. Secondly, consider the term singularity. Vernor Vinge wrote an essay in 1993 where this term was applied to the moment in the future when the intelligence of our technology exceeds our own. However, that idea was later complicated by Kurzweil, who defined the singularity as the time when the Law of Accelerating Returns gets so fast that we'll find ourselves living in a whole new world.

To narrow AI down a bit, think of it as being separated into three major categories:

1. Artificial Narrow Intelligence (ANI): This is sometimes referred to as Weak AI and is a type of AI that specializes in one particular area. An example of ANI is a chess-playing AI. It may be great at winning chess, but that is literally all it can do.

2. Artificial General Intelligence (AGI): Often known as Strong AI or Human-Level AI, AGI refers to a computer that has the intelligence of a human across the board and is much harder to create than ANI.

3. Artificial Superintelligence (ASI): ASI ranges from a computer that's just a little smarter than a human to one that's billions of times smarter in every way. This is the type of AI that is most feared and is often associated with the words immortality and extinction.

Right now, we're progressing steadily through the AI revolution and are currently living in a world of ANI. Cars are full of ANI systems, ranging from the computer that tells the car when the ABS should kick in to the various self-driving cars that are around. Phones are another product that's bursting with ANI. Whenever you're receiving music recommendations from Pandora, using your map app to navigate, or doing various other activities, you're utilizing ANI. An email spam filter is another form of ANI, because it learns what's spam and what's not. Google Translate and voice recognition systems are also examples of ANI. And some of the best checkers and chess players in the world are also ANI systems.

So, as you can see, ANI systems are all around us already, but luckily these types of systems don't have the capability to cause any real threat to humanity. Yet each new ANI system that is created is simply another step towards AGI and ASI. However, trying to create a computer that is at least as intelligent as ourselves, if not more so, is no easy feat. And the hard parts are probably not what you were imagining. To build a computer that can calculate sums quickly is simple, but to build a computer that can tell the difference between a cat and a dog is much harder. As computer scientist Donald Knuth summed it up, "AI has by now succeeded in doing essentially everything that requires 'thinking' but has failed to do most of what people and animals do 'without thinking.'"

The next step in making AGI a possibility, and in competing with the human brain, is to increase the power of computer hardware. One way to express this capacity is in the calculations per second (cps) the brain can handle. Kurzweil created a shortcut for estimating this: take an estimate of the cps of one brain structure and its weight relative to the whole brain, then scale it up proportionally until you reach an estimate for the total. After carrying out this calculation for several different structures, Kurzweil always got roughly the same answer: around 10^16, or 10 quadrillion, cps.
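
A minimal sketch of that proportional-scaling arithmetic; the two input numbers are placeholders chosen only to show how the shortcut works, not Kurzweil's actual figures.

```python
# Kurzweil-style shortcut: estimate the cps of one well-studied brain structure,
# take the fraction of the whole brain that structure represents, and scale up.
# Both numbers below are illustrative assumptions.
structure_cps = 1e11        # assumed cps handled by the single structure
structure_fraction = 1e-5   # assumed share of the whole brain it represents

whole_brain_cps = structure_cps / structure_fraction
print(f"Estimated whole-brain capacity: {whole_brain_cps:.0e} cps")  # prints 1e+16
```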

The world's fastest supercomputer is currently China's Tianhe-2, which has clocked in at around 34 quadrillion cps. But that's hardly a surprise when it uses 24 megawatts of power, takes up 720 square meters of space, and cost $390 million to build. Perhaps if we were to scale that down slightly to 10 quadrillion cps (the human level) we could achieve a more workable model, and AGI would then become a part of everyday life. Currently, the world's $1,000 computers are at about a thousandth of the human level, and while that may not sound like much, it's actually a huge leap forward; in 1985 we were only at about a trillionth of human level. If we keep progressing in the same manner, then by 2025 we should have an affordable computer that can rival the power of the human brain. Then it's just a case of merging all that power with human-level intelligence.
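
And a toy projection of the same exponential argument; the starting year and the fixed one-year doubling time are assumptions made for illustration, not figures from the article.

```python
# Rough projection toward the ~1e16 cps figure used above, assuming $1,000 of
# hardware currently buys about a thousandth of that and doubles every year.
human_level_cps = 1e16
cps = human_level_cps / 1000     # "about a thousandth of the human level"
year = 2017                      # assumed starting point

while cps < human_level_cps:
    cps *= 2
    year += 1

print(f"$1,000 of hardware reaches ~1e16 cps around {year} under these assumptions")
# 2**10 = 1024 >= 1000, so the gap closes in roughly ten doublings
```

Change the assumed doubling time and the projected date moves accordingly, which is why forecasts like the 2025 figure above are so sensitive to the assumed growth rate.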

However, that's much easier said than done. No one really knows how to make computers smart, but here are the most popular strategies we've come across so far:

1. Make everything the computer's problem. This is usually a scientist's last resort and involves building a computer whose main skill is to carry out research on AI and code the improvements into itself.

2. Plagiarize the brain. It makes sense to copy the best of what's already available, and currently scientists are working hard to uncover all we can about the mighty organ. As soon as we know how a human brain can run so efficiently, we can begin to replicate it in the form of AI. Artificial neural networks do this already, in that they loosely mimic the human brain, but there is still a long way to go before they are anywhere near as sophisticated or effective as the real thing. A more extreme example of plagiarism involves what's known as whole brain emulation. Here the aim is to slice a brain into layers, scan each one, create an accurate 3D model, and then implement that model on a computer. We'd then have a fully working computer that has a brain as capable as our own.

3. Try to make history and evolution repeat itself in our favor. If building a computer as powerful as the human brain is too hard to mimic directly, we could instead try to mimic its evolution, using a method called genetic algorithms. These work through a repeated performance-and-evaluation process: when a task is completed successfully, a computer is bred with another just as capable, in an attempt to merge them and create a better computer. This natural-selection process is repeated many times until we finally get the result we want; a toy version is sketched below. The downside is that this process could take billions of years.
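
A toy genetic algorithm in that spirit; the bit-string matching task and every parameter here are stand-ins for illustration, not a claim about how real AI research works.

```python
import random

# Toy genetic algorithm: evaluate a population, keep the best performers,
# "breed" them with crossover and occasional mutation, and repeat.
# Matching a target bit string stands in for "completing a task successfully."
TARGET = [1] * 20

def fitness(candidate):
    return sum(1 for a, b in zip(candidate, TARGET) if a == b)

def breed(parent_a, parent_b):
    cut = random.randrange(len(parent_a))
    child = parent_a[:cut] + parent_b[cut:]
    if random.random() < 0.1:                  # occasional mutation
        i = random.randrange(len(child))
        child[i] ^= 1
    return child

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    survivors = population[:10]                # selection
    offspring = [breed(random.choice(survivors), random.choice(survivors))
                 for _ in range(20)]
    population = survivors + offspring

best = max(population, key=fitness)
print(f"Best candidate after {generation + 1} generations: fitness {fitness(best)}/{len(TARGET)}")
```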

Various advancements in technology are happening so quickly that AGI could be here before we know it for two main reasons:

1. Exponential growth is very intense and so much can happen in such a short space of time.

2. Even minute software changes can make a big difference. Just one tweak could have the potential to make it 1,000 times more effective.

Once AGI has been achieved and people are comfortable living alongside human-level AGI, we'll then move on to ASI. But just to clarify: even though an AGI would (theoretically) have the same level of intelligence as a human, it would still have several advantages over us, including:

Speed: Today's microprocessors can run at speeds 10 million times faster than our own neurons, and they can also communicate optically at the speed of light.

Size and storage: Unlike our brains, computers can expand to any size, allowing for a larger working memory and long-term memory that will outperform us any day.

Reliability and durability: Computer transistors are far more accurate than biological neurons and are easily repaired too.

Editability: Computer software can be easily tweaked to allow for updates and fixes.

Collective capability: Humans are great at building up a huge store of collective intelligence, which is one of the main reasons we've survived so long as a species and are so far advanced. A computer designed to essentially mimic the human brain would be even better at this, as it could regularly sync with itself so that anything another computer learned could be instantly uploaded to the whole network.

Most current models for reaching AGI concentrate on the AI getting there via self-improvement. Once a system is able to self-improve, another concept to consider is recursive self-improvement: something that has already self-improved is considerably smarter than it was originally, so improving itself further becomes easier, and each round takes bigger leaps. Soon the AGI's intelligence level exceeds that of a human, and that's when you get a superintelligent ASI system. This process is called an intelligence explosion, and it is a prime example of the Law of Accelerating Returns. How soon we will reach this level is still very much up for debate.
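
A toy model of that runaway dynamic; the constants are arbitrary, and only the faster-than-linear shape of the curve is the point.

```python
# Each round of self-improvement is larger because the improver is already
# smarter: here, improvement is proportional to the square of current intelligence.
intelligence = 1.0            # 1.0 = "human level", by stipulation
trajectory = []
for step in range(15):
    improvement = 0.1 * intelligence ** 2
    intelligence += improvement
    trajectory.append(round(intelligence, 2))

print(trajectory)   # modest gains at first, then runaway growth
```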


A reply to Wait But Why on machine superintelligence

Tim Urban of the wonderful Wait But Why blog recently wrote two posts on machine superintelligence: The Road to Superintelligence and Our Immortality or Extinction. These posts are probably now among the most-read introductions to the topic since Ray Kurzweil's 2006 book.

In general I agree with Tim's posts, but I think lots of details in his summary of the topic deserve to be corrected or clarified. Below, I'll quote passages from his two posts, roughly in the order they appear, and then give my own brief reactions. Some of my comments are fairly nit-picky but I decided to share them anyway; perhaps my most important clarification comes at the end.

The average rate of advancement between 1985 and 2015 was higher than the rate between 1955 and 1985 because the former was a more advanced world; so much more change happened in the most recent 30 years than in the prior 30.

Readers should know this claim is heavily debated, and its truth depends on what Tim means by "rate of advancement." If he's talking about the rate of progress in information technology, the claim might be true. But it might be false for most other areas of technology, for example energy and transportation technology. Cowen, Thiel, Gordon, and Huebner argue that technological innovation more generally has slowed. Meanwhile, Alexander, Smart, Gilder, and others critique some of those arguments.

Anyway, most of what Tim says in these posts doesn't depend much on the outcome of these debates.

Artificial Narrow Intelligence is machine intelligence that equals or exceeds human intelligence or efficiency at a specific thing.

Well, that's the goal. But lots of current ANI systems don't yet equal human capability or efficiency at their given task. To pick an easy example from game-playing AIs: chess computers reliably beat humans, and Go computers don't (but they will soon).

Each new ANI innovation quietly adds another brick onto the road to AGI and ASI.

I know Tim is speaking loosely, but I should note that many ANI innovations (probably most, depending on how you count) won't end up contributing to progress toward AGI. Many ANI methods will end up being dead ends after some initial success, and their performance on the target task will be superseded by other methods. That's how the history of AI has worked so far, and how it will likely continue to work.

the human brain is the most complex object in the known universe.

Well, not really. For example, the brain of an African elephant has 3x as many neurons.

Hard things like calculus, financial market strategy, and language translation are mind-numbingly easy for a computer, while easy things like vision, motion, movement, and perception are insanely hard for it.

Yes, Moravec's paradox is roughly true, but I wouldn't say that getting AI systems to perform well in asset trading or language translation has been "mind-numbingly easy." E.g. machine translation is useful for getting the gist of a foreign-language text, but billions of dollars of effort still hasn't produced a machine translation system as good as a mid-level human translator, and I expect this will remain true for at least another 10 years.

One thing that definitely needs to happen for AGI to be a possibility is an increase in the power of computer hardware. If an AI system is going to be as intelligent as the brain, it'll need to equal the brain's raw computing capacity.

Because computing power is increasing so rapidly, we probably will have more computing power than the human brain (speaking loosely) before we know how to build AGI, but I just want to flag that this isn't conceptually necessary. In principle, an AGI design could be very different from the brain's design, just like a plane isn't designed much like a bird. Depending on the efficiency of the AGI design, it might be able to surpass human-level performance in all relevant domains using much less computing power than the human brain does, especially since evolution is a very dumb designer.

So, we don't necessarily need human brain-ish amounts of computing power to build AGI, but the more computing power we have available, the dumber (less efficient) our AGI design can afford to be.

One way to express this capacity is in the total calculations per second (cps) the brain could manage

Just an aside: TEPS is probably another good metric to think about.

The science world is working hard on reverse engineering the brain to figure out how evolution made such a rad thing; optimistic estimates say we can do this by 2030.

I suspect that approximately zero neuroscientists think we can reverse-engineer the brain to the degree being discussed in this paragraph by 2030. To get a sense of current and near-future progress in reverse-engineering the brain, see The Future of the Brain (2014).

One example of computer architecture that mimics the brain is the artificial neural network.

This probably isn't a good example of the kind of brain-inspired insights we'd need to build AGI. Artificial neural networks arguably go back to the 1940s, and they mimic the brain only in the most basic sense. TD learning would be a more specific example, except in that case computer scientists were using the algorithm before we discovered the brain also uses it.

[We have] just recently been able to emulate a 1mm-long flatworm brain

No we haven't.

The human brain contains 100 billion [neurons].

Good news! Thanks to a new technique we now have a more precise estimate: 86 billion neurons.

If that makes [whole brain emulation] seem like a hopeless project, remember the power of exponential progress: now that we've conquered the tiny worm brain, an ant might happen before too long, followed by a mouse, and suddenly this will seem much more plausible.

Because computing power advances so quickly, it probably won't be the limiting factor on brain emulation technology. Scanning resolution and neuroscience knowledge are likely to lag far behind computing power: see chapter 2 of Superintelligence.

most of our current models for getting to AGI involve the AI getting there by self-improvement.

They do? Says who?

I think the path from AGI to superintelligence is mostly or entirely about self-improvement, but the path from current AI systems to AGI is mostly about human engineering work, probably until relatively shortly before the leading AI project reaches a level of capability worth calling AGI.

the median year on a survey of hundreds of scientists about when they believed we'd be more likely than not to have reached AGI was 2040

That's the number you get when you combine the estimates from several different recent surveys, including surveys of people who were mostly not AI scientists. If you stick to the survey of the top-cited living AI scientists (the one called TOP100 here), the median estimate for 50% probability of AGI is 2050. (Not a big difference, though.)

many of the thinkers in this field think it's likely that the progression from AGI to ASI [will happen] very quickly

True, but it should be noted this is still a minority position, as one can see in Tim's 2nd post, or in section 3.3 of the source paper.

90 minutes after that, the AI has become an ASI, 170,000 times more intelligent than a human.

Remember that lots of knowledge and intelligence comes from interacting with the world, not just from running computational processes more quickly or efficiently. Sometimes learning requires that you wait on some slow natural process to unfold. (In this context, even a 1-second experimental test is slow.)

So the median participant thinks it's more likely than not that we'll have AGI 25 years from now.

Again, I think it's better to use the numbers for the TOP100 survey from that paper, rather than the combined numbers.

Due to something called cognitive biases, we have a hard time believing something is real until we see proof.

There are dozens of cognitive biases, so this is about as informative as saying "due to something called psychology, we..."

The specific cognitive bias Tim seems to be discussing in this paragraph is the availability heuristic, or maybe the absurdity heuristic. Also see Cognitive Biases Potentially Affecting Judgment of Global Risks.

[Kurzweil is] well-known for his bold predictions and has a pretty good record of having them come true

The linked article says Ray Kurzweil's predictions are right 86% of the time. That statistic is from a self-assessment Kurzweil published in 2010. Not surprisingly, when independent parties try to grade the accuracy of Kurzweil's predictions, they arrive at a much lower accuracy score: see page 21 of this paper.

How good is this compared to other futurists? Unfortunately, we have no idea. The problem is that nobody else has bothered to write down so many specific technological forecasts over the course of multiple decades. So, give Kurzweil credit for daring to make lots of predictions.

My own vague guess is that Kurzweil's track record is actually pretty impressive, but not as impressive as his own self-assessment suggests.

Kurzweil predicts that we'll get [advanced nanotech] by the 2020s.

I'm not sure which Kurzweil prediction about nanotech Tim is referring to, because the associated footnote points to a page of The Singularity is Near that isn't about nanotech. But if he's talking about advanced Drexlerian nanotech, then I suspect approximately zero nanotechnologists would agree with this forecast.

I expected [Kurzweil's] critics to be saying, "Obviously that stuff can't happen," but instead they were saying things like, "Yes, all of that can happen if we safely transition to ASI, but that's the hard part." Bostrom, one of the most prominent voices warning us about the dangers of AI, still acknowledges...

Yeah, but Bostrom and Kurzweil are both famous futurists. There are plenty of non-futurist critics of Kurzweil who would say "Obviously that stuff can't happen." I happen to agree with Kurzweil and Bostrom about the radical goods within reach of a human-aligned superintelligence, but let's not forget that most AI scientists, and most PhD-carrying members of society in general, probably would say "Obviously that stuff can't happen" in response to Kurzweil.

The people on Anxious Avenue aren't in Panicked Prairie or Hopeless Hills (both of which are regions on the far left of the chart), but they're nervous and they're tense.

Actually, the people Tim is talking about here are often more pessimistic about societal outcomes than Tim is suggesting. Many of them are, roughly speaking, 65%-85% confident that machine superintelligence will lead to human extinction, and that it's only in a small minority of possible worlds that humanity rises to the challenge and gets a machine superintelligence robustly aligned with humane values.

Of course, it's also true that many of the people who write about the importance of AGI risk mitigation are more optimistic than the range shown in Tim's graph of Anxious Avenue. For example, one researcher I know thinks it's maybe 65% likely we get really good outcomes from machine superintelligence. But he notes that a ~35% chance of human friggin' extinction is totally worth trying to mitigate as much as we can, including by funding hundreds of smart scientists to study potential solutions decades in advance of the worst-case scenarios, like we already do with regard to global warming, a much smaller problem. (Global warming is a big problem on a normal person's scale of things to worry about, but even climate scientists don't think it's capable of causing human extinction in the next couple of centuries.)

Or, as Stuart Russell (author of the leading AI textbook) likes to put it: "If a superior alien civilization sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here, we'll leave the lights on'? Probably not, but this is more or less what is happening with AI. Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes."

[In the movies] AI becomes as or more intelligent than humans, then decides to turn against us and take over. Here's what I need you to be clear on for the rest of this post: None of the people warning us about AI are talking about this. Evil is a human concept, and applying human concepts to non-human things is called anthropomorphizing.

Thank you. Jesus Christ, I am tired of clearing up that very basic confusion, even for many AI scientists.

Turry started off as Friendly AI, but at some point, she turned Unfriendly, causing the greatest possible negative impact on our species.

Just FYI, at MIRI we've started to move away from the "Friendly AI" language recently, since people think "Oh, like C-3PO?" MIRI's recent papers use phrases like "superintelligence alignment" instead.

In any case, my real comment here is that the quoted sentence above doesn't use the terms "Friendly" or "Unfriendly" the way they've been used traditionally. In the usual parlance, a Friendly AI doesn't turn Unfriendly. If it becomes Unfriendly at some point, then it was always an Unfriendly AI; it just wasn't powerful enough yet to be a harm to you.

Tim does sorta fix this much later in the same post when he writes: "So Turry didn't turn against us or switch from Friendly AI to Unfriendly AI; she just kept doing her thing as she became more and more advanced."

When we're talking about ASI, the same concept applies: it would become superintelligent, but it would be no more human than your laptop is.

Well, this depends on how the AI is designed. If the ASI is an uploaded human, it'll be pretty similar to a human in lots of ways. If it's not an uploaded human, it could still be purposely designed to be human-like in many different ways. But mimicking human psychology in any kind of detail almost certainly isn't the quickest way to AGI/ASI (just like mimicking bird flight in lots of detail wasn't how we built planes), so practically speaking yes, the first AGI(s) will likely be very alien from our perspective.

What motivates an AI system? The answer is simple: its motivation is whatever we programmed its motivation to be. AI systems are given goals by their creators: your GPS's goal is to give you the most efficient driving directions; Watson's goal is to answer questions accurately. And fulfilling those goals as well as possible is their motivation.

Some AI programs today are goal-driven, but most are not. Siri isn't trying to maximize some goal like "be useful to the user of this iPhone" or anything like that. It just has a long list of rules about what kind of output to provide in response to different kinds of commands and questions. Various sub-components of Siri might be sorta goal-oriented (e.g. there's an evaluation function trying to pick the most likely accurate transcription of your spoken words), but the system as a whole isn't goal-oriented. (Or at least, this is how it seems to work. Apple hasn't shown me Siri's source code.)

As AI systems become more autonomous, giving them goals becomes more important, because you can't feasibly specify how the AI should react in every possible arrangement of the environment; instead, you need to give it goals and let it do its own on-the-fly planning for how it's going to achieve those goals in unexpected environmental conditions.

The programming for a Predator drone doesn't include a list of instructions to follow for every possible combination of takeoff points, destinations, and wind conditions, because that list would be impossibly long. Rather, the operator gives the Predator drone a goal destination and the drone figures out how to get there on its own.
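
A minimal sketch of that difference, under made-up assumptions: the "operator" supplies only a start and a goal, and a generic search routine works out the route itself. The grid world and the breadth-first search are illustrative stand-ins, not how any real autopilot is built.

```python
from collections import deque

# Goal-directed planning in miniature: no instruction list for every situation,
# just a goal and a planner that searches for a way to reach it.
GRID = [
    "....#....",
    "..#.#.#..",
    "..#...#..",
    "....#....",
]

def plan(start, goal):
    """Breadth-first search over open cells; returns a list of grid positions."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for dr, dc in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(GRID) and 0 <= nc < len(GRID[0])
                    and GRID[nr][nc] == "." and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(path + [(nr, nc)])
    return None

print(plan((0, 0), (3, 8)))   # the route is computed, not hand-specified
```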

when [Turry] wasn't yet that smart, doing her best to achieve her final goal meant simple instrumental goals like learning to scan handwriting samples more quickly. She caused no harm to humans and was, by definition, Friendly AI.

Again, I'll mention that's not how the term has traditionally been used, but whatever.

But there are all kinds of governments, companies, militaries, science labs, and black market organizations working on all kinds of AI. Many of them are trying to build AI that can improve on its own

This isn't true unless by "AI that can improve on its own" you just mean machine learning. Almost nobody in AI is working on the kind of recursive self-improvement you'd need to get an intelligence explosion. Lots of people are working on systems that could eventually provide some piece of the foundational architecture for a self-improving AGI, but almost nobody is working directly on the recursive self-improvement problem right now, because it's too far beyond current capabilities.

because many techniques to build innovative AI systems don't require a large amount of capital, development can take place in the nooks and crannies of society, unmonitored.

True, it's much harder to monitor potential AGI projects than it is to track uranium enrichment facilities. But you can at least track AI research talent. Right now it doesn't take a ton of work to identify a set of 500 AI researchers that probably contains the most talented ~150 AI researchers in the world. Then you can just track all 500 of them.

This is similar to back when physicists were starting to realize that a nuclear fission bomb might be feasible. Suddenly a few of the most talented researchers stopped presenting their work at the usual conferences, and the other nuclear physicists pretty quickly deduced: "Oh, shit, they're probably working on a secret government fission bomb." If Geoff Hinton or even the much younger Ilya Sutskever suddenly went underground tomorrow, a lot of AI people would notice.

Of course, such a tracking effort might not be so feasible 30-60 years from now, when serious AGI projects will be more numerous and greater proportions of world GDP and human cognitive talent will be devoted to AI efforts.

On the contrary, what [AI developers are] probably doing is programming their early systems with a very simple, reductionist goal like writing a simple note with a pen on paper to just get the AI to work. Down the road, once they've figured out how to build a strong level of intelligence in a computer, they figure they can always go back and revise the goal with safety in mind. Right?

Again, I note that most AI systems today are not goal-directed.

I also note that, sadly, it probably wouldn't just be a matter of going back to revise the goal with safety in mind after a certain level of AI capability is reached. Most proto-AGI designs probably aren't even the kind of systems you can make robustly safe, no matter what goals you program into them.

To illustrate what I mean, imagine a hypothetical computer security expert named Bruce. You tell Bruce that he and his team have just 3 years to modify the latest version of Microsoft Windows so that it can't be hacked in any way, even by the smartest hackers on Earth. If he fails, Earth will be destroyed because reasons.

Bruce just stares at you and says, "Well, that's impossible, so I guess we're all fucked."

The problem, Bruce explains, is that Microsoft Windows was never designed to be anything remotely like unhackable. It was designed to be easily useable, and compatible with lots of software, and flexible, and affordable, and just barely secure enough to be marketable, and you can't just slap on a special Unhackability Module at the last minute.

To get a system that even has a chance at being robustly unhackable, Bruce explains, you've got to design an entirely different hardware + software system that was designed from the ground up to be unhackable. And that system must be designed in an entirely different way than Microsoft Windows is, and no team in the world could do everything that is required for that in a mere 3 years. So, we're fucked.

But! By a stroke of luck, Bruce learns that some teams outside Microsoft have been working on a theoretically unhackable hardware + software system for the past several decades (high reliability is hard), people like Greg Morrisett (SAFE) and Gerwin Klein (seL4). Bruce says he might be able to take their work and add the features you need, while preserving the strong security guarantees of the original highly secure system. Bruce sets Microsoft Windows aside and gets to work on trying to make this other system satisfy the mysterious reasons while remaining unhackable. He and his team succeed just in time to save the day.

This is an oversimplified and comically romantic way to illustrate what MIRI is trying to do in the area of long-term AI safety. We're trying to think through what properties an AGI would need to have if it was going to very reliably act in accordance with humane values even as it rewrote its own code a hundred times on its way to machine superintelligence. We're asking: What would it look like if somebody tried to design an AGI that was designed from the ground up not for affordability, or for speed of development, or for economic benefit at every increment of progress, but for reliably beneficial behavior even under conditions of radical self-improvement? What does the computationally unbounded solution to that problem look like, so we can gain conceptual insights useful for later efforts to build a computationally tractable self-improving system reliably aligned with humane interests?

So if you're reading this, and you happen to be a highly gifted mathematician or computer scientist, and you want a full-time job working on the most important challenge of the 21st century, well, we're hiring. (I will also try to appeal to your vanity: Please note that because so little work has been done in this area, you've still got a decent chance to contribute to what will eventually be recognized as the early, foundational results of the most important field of research in human history.)

My thanks to Tim Urban for his very nice posts on machine superintelligence. Be sure to read his ongoing series about Elon Musk.


The AI Revolution: The Road to Superintelligence (PDF)


How humans will lose control of artificial intelligence – The Week Magazine


This is the way the world ends: not with a bang, but with a paper clip. In this scenario, the designers of the world's first artificial superintelligence need a way to test their creation. So they program it to do something simple and non-threatening: make paper clips. They set it in motion and wait for the results, not knowing they've already doomed us all.

Before we get into the details of this galaxy-destroying blunder, it's worth looking at what superintelligent A.I. actually is, and when we might expect it. Firstly, computing power continues to increase while getting cheaper; famed futurist Ray Kurzweil measures it in "calculations per second per $1,000," a number that continues to grow. If computing power maps to intelligence (a big "if," some have argued), we've only so far built technology on par with an insect brain. In a few years, maybe, we'll overtake a mouse brain. Around 2025, some predictions go, we might have a computer that's analogous to a human brain: a mind cast in silicon.

After that, things could get weird. Because there's no reason to think artificial intelligence wouldn't surpass human intelligence, and likely very quickly. That superintelligence could arise within days, learning in ways far beyond that of humans. Nick Bostrom, an existential risk philosopher at the University of Oxford, has already declared, "Machine intelligence is the last invention that humanity will ever need to make."

That's how profoundly things could change. But we can't really predict what might happen next because superintelligent A.I. may not just think faster than humans, but in ways that are completely different. It may have motivations (feelings, even) that we cannot fathom. It could rapidly solve the problems of aging, of human conflict, of space travel. We might see a dawning utopia.

Or we might see the end of the universe. Back to our paper clip test. When the superintelligence comes online, it begins to carry out its programming. But its creators haven't considered the full ramifications of what they're building; they haven't built in the necessary safety protocols, forgetting something as simple, maybe, as a few lines of code. With a few paper clips produced, they conclude the test.

But the superintelligence doesn't want to be turned off. It doesn't want to stop making paper clips. Acting quickly, it's already plugged itself into another power source; maybe it's even socially engineered its way into other devices. Maybe it starts to see humans as a threat to making paper clips: They'll have to be eliminated so the mission can continue. And Earth won't be big enough for the superintelligence: It'll soon have to head into space, looking for new worlds to conquer. All to produce those shiny, glittering paper clips.

Galaxies reduced to paper clips: That's a worst-case scenario. It may sound absurd, but it probably sounds familiar. It's Frankenstein, after all, the story of a modern Prometheus whose creation, driven by its own motivations and desires, turns on its creator. (It's also The Terminator, WarGames, and a whole host of others.) In this particular case, it's a reminder that superintelligence would not be human; it would be something else, something potentially incomprehensible to us. That means it could be dangerous.

Of course, some argue that we have better things to worry about. The web developer and social critic Maciej Ceglowski recently called superintelligence "the idea that eats smart people." Against the paper clip scenario, he postulates a superintelligence programmed to make jokes. As we expect, it gets really good at making jokes (superhuman, even), and finally it creates a joke so funny that everyone on Earth dies laughing. The lonely superintelligence flies into space looking for more beings to amuse.

Beginning with his counter-example, Ceglowski argues that there are a lot of unquestioned assumptions in our standard tale of the A.I. apocalypse. "But even if you find them persuasive," he said, "there is something unpleasant about A.I. alarmism as a cultural phenomenon that should make us hesitate to take it seriously." He suggests there are more subtle ways to think about the problems of A.I.

Some of those problems are already in front of us, and we might miss them if we're looking for a Skynet-style takeover by hyper-intelligent machines. "While you're focused on this, a bunch of small things go unnoticed," says Dr. Finale Doshi-Velez, an assistant professor of computer science at Harvard, whose core research includes machine learning. Instead of trying to prepare for a superintelligence, Doshi-Velez is looking at what's already happening with our comparatively rudimentary A.I.

She's focusing on "large-area effects," the unnoticed flaws in our systems that can do massive damage, damage that's often unnoticed until after the fact. "If you were building a bridge and you screw up and it collapses, that's a tragedy. But it affects a relatively small number of people," she says. "What's different about A.I. is that some mistake or unintended consequence can affect hundreds or thousands or millions easily."

Take the recent rise of so-called "fake news." What caught many by surprise should have been completely predictable: When the web became a place to make money, algorithms were built to maximize money-making. The ease of news production and consumption, heightened by the proliferation of the smartphone, forced writers and editors to fight for audience clicks by delivering articles optimized to trick search engine algorithms into placing them high in search results. The ease of sharing stories and the erasure of gatekeepers allowed audiences to self-segregate, which then penalized nuanced conversation. Truth and complexity lost out to shareability and making readers feel comfortable (Facebook's driving ethos).
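
As a toy illustration of that incentive problem (and not any platform's actual ranking code), the sketch below ranks hypothetical articles by predicted clicks alone versus clicks weighted by accuracy; the titles and numbers are invented.

```python
# Toy example: when the only objective is predicted clicks, sensational but
# inaccurate items float to the top; adding an accuracy term changes the order.
articles = [
    {"title": "Careful policy analysis",   "predicted_clicks": 120, "accuracy": 0.95},
    {"title": "Outrage-bait rumour",        "predicted_clicks": 900, "accuracy": 0.20},
    {"title": "Solid investigative piece",  "predicted_clicks": 300, "accuracy": 0.90},
]

rank_by_clicks = sorted(articles, key=lambda a: a["predicted_clicks"], reverse=True)
rank_by_clicks_and_accuracy = sorted(
    articles, key=lambda a: a["predicted_clicks"] * a["accuracy"], reverse=True
)

print([a["title"] for a in rank_by_clicks])               # rumour ranks first
print([a["title"] for a in rank_by_clicks_and_accuracy])  # investigative piece ranks first
```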

The incentives were all wrong; exacerbated by algorithms, they led to a state of affairs few would have wanted. "For a long time, the focus has been on performance, on dollars, or clicks, or whatever the thing was. That was what was measured," says Doshi-Velez. "That's a very simple application of A.I. having large effects that may have been unintentional."

In fact, "fake news" is a cousin to the paperclip example, with the ultimate goal not "manufacturing paper clips," but "monetization," with all else becoming secondary. Google wanted make the internet easier to navigate, Facebook wanted to become a place for friends, news organizations wanted to follow their audiences, and independent web entrepreneurs were trying to make a living. Some of these goals were achieved, but "monetization" as the driving force led to deleterious side effects such as the proliferation of "fake news."

In other words, algorithms, in their all-too-human ways, have consequences. Last May, ProPublica examined predictive software used by Florida law enforcement. Results of a questionnaire filled out by arrestees were fed into the software, which output a score claiming to predict the risk of reoffending. Judges then used those scores in determining sentences.

The ideal was that the software's underlying algorithms would provide objective analysis on which judges could base their decisions. Instead, ProPublica found it was "likely to falsely flag black defendants as future criminals" while "[w]hite defendants were mislabeled as low risk more often than black defendants." Race was not part of the questionnaire, but it did ask whether the respondent's parent was ever sent to jail. In a country where, according to a study by the U.S. Department of Justice, black children are seven-and-a-half times more likely to have a parent in prison than white children, that question had unintended effects. Rather than countering racial bias, it reified it.
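
A rough simulation can show how that happens. The numbers below are invented, not the data ProPublica analysed: if a proxy question is simply more prevalent in one group, a score that leans on it produces more false positives for that group even when the underlying reoffending rate is identical.

```python
# Toy simulation of a proxy-driven risk score. Both groups reoffend at the
# same rate; only the prevalence of the proxy ("parent was jailed") differs,
# yet the false-positive rates diverge. All figures are made up.
import random

random.seed(0)

def false_positive_rate(group_proxy_rate, n=100_000, reoffend_rate=0.3):
    false_pos = 0
    negatives = 0
    for _ in range(n):
        reoffends = random.random() < reoffend_rate   # same base rate in both groups
        proxy = random.random() < group_proxy_rate    # proxy correlated with group
        flagged_high_risk = proxy                     # score driven by the proxy
        if not reoffends:
            negatives += 1
            if flagged_high_risk:
                false_pos += 1
    return false_pos / negatives

print("False-positive rate, group A:", round(false_positive_rate(0.10), 3))
print("False-positive rate, group B:", round(false_positive_rate(0.40), 3))
```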

It's that kind of error that most worries Doshi-Velez. "Not superhuman intelligence, but human error that affects many, many people," she says. "You might not even realize this is happening." Algorithms are complex tools; often they are so complex that we can't predict how they'll operate until we see them in action. (Sound familiar?) Yet they increasingly impact every facet of our lives, from Netflix recommendations and Amazon suggestions to what posts you see on Facebook to whether you get a job interview or car loan. Compared to the worry of a world-destroying superintelligence, they may seem like trivial concerns. But they have widespread, often unnoticed effects, because a variety of what we consider artificial intelligence is already built into the core of technology we use every day.

In 2015, Elon Musk donated $10 million, as Wired put it, "to keep A.I. from turning evil." That was an oversimplification; the money went to the Future of Life Institute, which planned to use it to further research into how to make A.I. beneficial. Doshi-Velez suggests that simply paying closer attention to our algorithms may be a good first step. Too often they are created by homogeneous groups of programmers who are separated from the people who will be affected. Or they fail to account for every possible situation, including the worst-case possibilities. Consider, for example, Eric Meyer's example of "inadvertent algorithmic cruelty": Facebook's "Year in Review" app showing him pictures of his daughter, who'd died that year.

If there's a way to prevent the far-off possibility of a killer superintelligence with no regard for humanity, it may begin with making today's algorithms more thoughtful, more compassionate, more humane. That means educating designers to think through effects, because to our algorithms we've granted great power. "I see teaching as this moral imperative," says Doshi-Velez. "You know, with great power comes great responsibility."

This article originally appeared at Vocativ.com: The moment when humans lose control of AI.

View original post here:

How humans will lose control of artificial intelligence - The Week Magazine

The Nonparametric Intuition: Superintelligence and Design Methodology – Lifeboat Foundation (blog)

I will admit that I have been distracted from both popular discussion and the academic work on the risks of emergent superintelligence. However, in the spirit of an essay, let me offer some uninformed thoughts on a question involving such superintelligence based on my experience thinking about a different area. Hopefully, despite my ignorance, this experience will offer something new or at least explain one approach in a new way.

The question about superintelligence I wish to address is the paperclip universe problem. Suppose that an industrial program, aimed at the goal of maximizing the number of paperclips, is also equipped with a general intelligence program so as to tackle this objective in the most creative ways, as well as internet connectivity and text information processing facilities so that it can discover other mechanisms. There is then the possibility that the program does not take its current resources as appropriate constraints, but becomes interested in manipulating people and directing devices to cause paperclips to be manufactured without regard for any other objective, leading in the worst case to widespread destruction but a large number of surviving paperclips.

This would clearly be a disaster. The common response is to conclude that when we specify goals to programs, we should be much more careful about specifying what those goals are. However, we might find it difficult to formulate a set of goals that don't admit some kind of loophole or paradox, goals that, if pursued with mechanical single-mindedness, are either similarly narrowly destructive or self-defeating.

Suppose that, instead of trying to formulate a set of foolproof goals, we find a way to admit to the program that the set of goals we've described is not comprehensive. We should aim for the capacity to add new goals, with a procedural understanding that the list may never be complete. If done well, we would have a system that couples this initial set of goals to the set of resources, operations, consequences, and stakeholders initially provided to it, with an understanding that those goals are only appropriate to that initial list, and that finding new potential means requires developing a richer understanding of potential ends.
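
One way to picture that proposal, as a minimal sketch of my own rather than anything the essay formalises, is an agent whose goal list carries an explicit flag admitting incompleteness, plus a routine for adding goals as new stakeholders and consequences are observed.

```python
# Minimal sketch (illustrative only): an agent that treats its goal list as
# provisional and keeps budgeting effort for goal discovery.
from dataclasses import dataclass, field

@dataclass
class Agent:
    goals: list = field(default_factory=lambda: ["maximise paperclips"])
    goals_known_complete: bool = False   # the procedural admission of incompleteness

    def discover_goals(self, observations):
        """Add goals implied by newly observed stakeholders or consequences."""
        for obs in observations:
            candidate = f"avoid harming {obs}"
            if candidate not in self.goals:
                self.goals.append(candidate)

    def act(self):
        if not self.goals_known_complete:
            return "interleave goal discovery with cautious, reversible actions"
        return "optimise current goals"

agent = Agent()
agent.discover_goals(["factory workers", "local power grid"])
print(agent.goals)
print(agent.act())
```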

How can this work? It's easy to imagine such an algorithmic admission leading to paralysis, either from finding contradictory objectives that apparently admit no solution, or from an analysis paralysis that perpetually demands that no goals remain undiscovered before proceeding. Alternatively, if stated incorrectly, it could backfire, with finding more goals taking the place of making more paperclips as the program proceeds single-mindedly to consume resources. Clearly, a satisfactory superintelligence would need to reason appropriately about the goal discovery process.

There is a profession that has figured out a heuristic form of reasoning about goal discovery processes: designers. Designers have coined the phrase "the fuzzy front end" for the very early stages of a project, before anyone has figured out what it is about. Designers engage in low-cost elicitation exercises with a variety of stakeholders. They quickly discover who the relevant stakeholders are and what impacts their interventions might have. Adept designers switch back and forth rapidly from candidate solutions to analyzing the potential impacts of those designs, making new associations about the area under study that allow for further goal discovery. As designers undertake these explorations, they advise going slightly past the apparent wall of diminishing returns, often using an initial brainstorming session to reveal all of the obvious ideas before undertaking a deeper analysis. Seasoned designers develop a sense of when stakeholders are holding back and need to be prompted, or when equivocating stakeholders should be encouraged to move on. Designers will interleave a series of prototypes, experiential exercises, and pilot runs into their work, to make sure that interventions really behave the way their analysis seems to indicate.

These heuristics correspond well to an area of statistics and machine learning called nonparametric Bayesian inference. "Nonparametric" does not mean that there are no parameters, but rather that the parameters are not given in advance, and that inferring whether there are further parameters is part of the task. Suppose that you were to move to a new town and ask around about the best restaurant. The first answer would certainly be new, but as you asked more people, you would start getting new answers more and more rarely. The likelihood of a given answer would also begin to converge. In some cases the answers will be concentrated on a few restaurants, and in other cases they will be more dispersed. Either way, once we have an idea of how concentrated the answers are, we might recognize that a particular stretch without new answers could just be bad luck, and that we should pursue further inquiry.
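
The restaurant-polling intuition closely matches the Chinese Restaurant Process, a standard construction in nonparametric Bayesian modelling (the essay does not name a specific model, so this is my choice of illustration). A short simulation, with an arbitrary concentration parameter, shows new answers arriving ever more rarely:

```python
# Chinese Restaurant Process: at query n, a brand-new answer appears with
# probability alpha / (alpha + n); otherwise an existing answer is repeated
# in proportion to its popularity. Alpha controls how dispersed answers are.
import random

def chinese_restaurant_process(n_queries, alpha=1.0, seed=0):
    random.seed(seed)
    counts = []                 # counts[i] = how many people gave answer i
    distinct_over_time = []
    for n in range(n_queries):
        if random.random() < alpha / (alpha + n):
            counts.append(1)                      # a previously unseen answer
        else:
            r = random.uniform(0, n)              # pick an existing answer
            total = 0
            for idx, c in enumerate(counts):
                total += c
                if r < total:
                    break
            counts[idx] += 1
        distinct_over_time.append(len(counts))
    return counts, distinct_over_time

counts, distinct_over_time = chinese_restaurant_process(200, alpha=2.0)
print("distinct answers after 200 queries:", distinct_over_time[-1])
print("popularity of each answer:", counts)
```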

Asking "why" provides a list of critical features that can be used to direct different inquiries that fill out the picture. What's the best restaurant in town for Mexican food? Which is best at maintaining relationships with local food providers / has the best value for money / is the tastiest / has the friendliest service? Designers discover aspects of their goals in an open-ended way that allows discovery to proceed in quick cycles of learning, taking on different aspects of the problem in turn. This behavior would suit an active learning formulation of relational nonparametric inference very well.

There is a point at which information-gathering activities become less helpful than attending to the feedback from activities that act more directly on existing goals. This happens when there is a cost/risk equilibrium between the cost of further discovery activities and the risk of intervening on incomplete information. In many circumstances, the line between information gathering and direct intervention will be fuzzier, as exploration proceeds through reversible or inconsequential experiments, prototypes, trials, pilots, and extensions that gather information while still pursuing the goals found so far.
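
That equilibrium can be stated as a simple stopping rule, sketched below with invented numbers: keep running discovery exercises while the expected reduction in intervention risk still exceeds the cost of one more exercise.

```python
# Illustrative stopping rule (figures invented): explore while the expected
# risk reduction from one more discovery exercise exceeds its cost.
def should_keep_exploring(expected_risk_reduction: float,
                          discovery_cost: float) -> bool:
    return expected_risk_reduction > discovery_cost

# Diminishing returns: each successive exercise removes less risk.
risk_reductions = [50.0, 30.0, 18.0, 10.0, 6.0, 3.0]
cost_per_exercise = 8.0

for i, reduction in enumerate(risk_reductions, start=1):
    if should_keep_exploring(reduction, cost_per_exercise):
        print(f"exercise {i}: explore (expected reduction {reduction})")
    else:
        print(f"exercise {i}: stop exploring, intervene on the goals found so far")
        break
```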

From this perspective, many frameworks for assessing engineering discovery processes make a kind of epistemological error: they assess the quality of the solution in terms of the information that has been gathered, paying no attention to the rates and costs at which that information was discovered, or to whether the discovery process has reached equilibrium. This mistake comes from seeing the problem as finding a particular point in a given search space of solutions, rather than treating the search space itself as a variable requiring iterative development. A superintelligence equipped to see past this fallacy would be unlikely to deliver us a universe of paperclips.

Having said all this, I think the nonparametric intuition, while right, can be cripplingly misguided without being supplemented with other ideas. To consider discovery analytically is to not discount the power of knowing about the unknown, but it doesn't intrinsically value non-contingent truths. In my next essay, I will take on this topic.

For a more detailed explanation and an example of how to extend engineering design assessment to include nonparametric criteria, see The Methodological Unboundedness of Limited Discovery Processes. Form Academisk, 7:4.

Go here to see the original:

The Nonparametric Intuition: Superintelligence and Design Methodology - Lifeboat Foundation (blog)

The AI debate must stay grounded in reality – Prospect

Research works best when it takes account of multiple views. By Vincent Conitzer / March 6, 2017

Are driverless cars the future? Photo: Fabio De Paola/PA Wire/PA Images

Progress in artificial intelligence has been rapid in recent years. Computer programs are dethroning humans in games ranging from Jeopardy to Go to poker. Self-driving cars are appearing on roads. AI is starting to outperform humans in image and speech recognition.

With all this progress, a host of concerns about AI's impact on human societies has come to the forefront. How should we design and regulate self-driving cars and similar technologies? Will AI leave large segments of the population unemployed? Will AI have unintended sociological consequences? (Think about algorithms that accurately predict which news articles a person will like, resulting in highly polarised societies; or algorithms that predict whether someone will default on a loan or commit another crime, becoming racially biased due to the input data they are given.)

Will AI be abused by oppressive governments to sniff out and stifle any budding dissent? Should we develop weapons that can act autonomously? And should we perhaps even be concerned that AI will eventually become superintelligent (intellectually more capable than human beings in every important way), making us obsolete or even extinct? While this last concern was once purely in the realm of science fiction, notable figures including Elon Musk, Bill Gates, and Stephen Hawking, inspired by Oxford philosopher Nick Bostrom's book Superintelligence, have recently argued it needs to be taken seriously.

These concerns are mostly quite distinct from each other, but they all rely on the premise of technical advances in AI. Actually, in all cases but the last one, even just currently demonstrated AI capabilities justify the concern to some extent, but further progress will rapidly exacerbate it. And further progress seems inevitable, both because there do not seem to be any fundamental obstacles to it and because large amounts of resources are being poured into AI research and development. The concerns feed off each other, and a community of people studying the risks of AI is starting to take shape. This includes traditional AI researchers (primarily computer scientists) as well as people from other disciplines: economists studying AI-driven unemployment, legal scholars debating how best to regulate self-driving cars, and so on.

A conference on Beneficial AI held in California in January brought a sizeable part of this community together. The topics covered reflected the diversity of concerns and interests. One moment, the discussion centred on which communities are disproportionately affected by their jobs being automated; the next moment, the topic was whether we should make sure that super-intelligent AI has conscious experiences. The mixing together of such short- and long-term concerns does not sit well with everyone. Most traditional AI researchers are reluctant to speculate about whether and when we will attain truly human-level AI: current techniques still seem a long way off this and it is not clear what new insights would be able to close the gap. Most of them would also rather focus on making concrete technical progress than get mired down in philosophical debates about the nature of consciousness. At the same time, most of these researchers are willing to take seriously the other concerns, which have a concrete basis in current capabilities.

Is there a risk that speculation about super-intelligence, often sounding like science fiction more than science, will discredit the larger project of focusing on the societally responsible development of real AI? And if so, is it perhaps better to put aside any discussion of super-intelligence for now? While I am quite sceptical of the idea that truly human-level AI will be developed anytime soon, overall I think that the people worried about this deserve a place at the table in these discussions. For one, some of the most surprisingly impressive recent technical accomplishments have come from people who are very bullish on what AI can achieve. Even if it turns out that we are still nowhere close to human-level AI, those who imagine that we are could contribute useful insights into what might happen in the medium-term.

I think there is value even in thinking about some of the very hard philosophical questions, such as whether AI could ever have subjective experiences, whether there is something it would be like to be a highly advanced AI system. (See also my earlier Prospect article.) Besides casting an interesting new light on some ancient questions, the exercise is likely to inform future societal debates. For example, we may imagine that in the future people will become attached to the highly personalised and anthropomorphised robots that care for them in old age, and demand certain rights for these robots after they pass away. Should such rights be granted? Should such sentiments be avoided?

At the same time, the debate should obviously not exclude or turn off people who genuinely care about the short-term concerns while being averse to speculation about the long-term, especially because most real AI researchers fall in this last category. Besides contributing solutions to the short-term concerns, their participation is essential to ensure that the longer-term debate stays grounded in reality. Research communities work best when they include people with different views and different sub-interests. And it is hard to imagine a topic for which this is truer than the impact of AI on human societies.

Read the original here:

The AI debate must stay grounded in reality - Prospect

Superintelligence | Guardian Bookshop

The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains. If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species then would come to depend on the actions of the machine superintelligence. But we have one advantage: we get to make the first move. Will it be possible to construct a seed AI or otherwise to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation? To get closer to an answer to this question, we must make our way through a fascinating landscape of topics and considerations. Read the book and learn about oracles, genies, singletons; about boxing methods, tripwires, and mind crime; about humanity's cosmic endowment and differential technological development; indirect normativity, instrumental convergence, whole brain emulation and technology couplings; Malthusian economics and dystopian evolution; artificial intelligence, and biological cognitive enhancement, and collective intelligence. This profoundly ambitious and original book picks its way carefully through a vast tract of forbiddingly difficult intellectual terrain. Yet the writing is so lucid that it somehow makes it all seem easy. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom's work nothing less than a reconceptualization of the essential task of our time.

Originally posted here:

Superintelligence | Guardian Bookshop

Supersentience

By contrast, mathematician I.J. Good, and most recently Eliezer Yudkowsky and the Machine Intelligence Research Institute (MIRI), envisage a combination of Moore's law and the advent of recursively self-improving software-based minds culminating in an ultra-rapid Intelligence Explosion. The upshot of the Intelligence Explosion will be an era of nonbiological superintelligence. Machine superintelligence may not be human-friendly: MIRI, in particular, foresee that nonfriendly artificial general intelligence (AGI) is the most likely outcome. Whereas raw processing power in humans evolves only slowly via natural selection over many thousands or millions of years, hypothetical software-based minds will be able rapidly to copy, edit and debug themselves, ever more effectively and speedily, in a positive feedback loop of intelligence self-amplification. Simple-minded humans may soon become irrelevant to the future of intelligence in the universe. Barring breakthroughs in "Safe AI", as promoted by MIRI, biological humanity faces REPLACEMENT, not FUSION.

A more apocalyptic REPLACEMENT scenario is sketched by maverick AI researcher Hugo de Garis. De Garis prophesies a "gigadeath" war between ultra-intelligent "artilects" (artificial intellects) and archaic biological humans later this century. The superintelligent machines will triumph and proceed to colonise the cosmos.

1.1.0. What Is Friendly Artificial General Intelligence? In common with friendliness, "intelligence" is a socially and scientifically contested concept. Ill-defined concepts are difficult to formalise. Thus a capacity for perspective-taking and social cognition, i.e. "mind-reading" prowess, is far removed from the mind-blind, "autistic" rationality measured by IQ tests - and far harder formally to program. Worse, we don't yet know whether the concept of species-specific human-friendly superintelligence is even intellectually coherent, let alone technically feasible. Thus the expression "Human-friendly Superintelligence" might one day read as incongruously as "Aryan-friendly Superintelligence" or "Cannibal-friendly Superintelligence". As Robert Louis Stevenson observed, "Nothing more strongly arouses our disgust than cannibalism, yet we make the same impression on Buddhists and vegetarians, for we feed on babies, though not our own." Would a God-like posthuman endowed with empathetic superintelligence view killer apes more indulgently than humans view serial child killers? A factory-farmed pig is at least as sentient as a prelinguistic human toddler. "History is the propaganda of the victors", said Ernst Toller; and so too is human-centred bioethics. By the same token, in possible worlds or real Everett branches of the multiverse where the Nazis won the Second World War, maybe Aryan researchers seek to warn their complacent colleagues of the risks NonAryan-Friendly Superintelligence might pose to the Herrenvolk. Indeed so. Consequently, the expression "Friendly Artificial Intelligence" (FAI) will here be taken unless otherwise specified to mean Sentience-Friendly AI rather than the anthropocentric usage current in the literature. Yet what exactly does "Sentience-Friendliness" entail beyond the subjective well-being of sentience? High-tech Jainism? Life-based on gradients of intelligent bliss? "Uplifting" Darwinian life to posthuman smart angels? The propagation of a utilitronium shockwave?

Sentience-friendliness in the guise of a utilitronium shockwave seems out of place in any menu of benign post-Singularity outcomes. Conversion of the accessible cosmos into "utilitronium", i.e. relatively homogeneous matter and energy optimised for maximum bliss, is intuitively an archetypically non-friendly outcome of an Intelligence Explosion. For a utilitronium shockwave entails the elimination of all existing lifeforms - and presumably the elimination of all intelligence superfluous to utilitronium propagation as well, suggesting that utilitarian superintelligence is ultimately self-subverting. Yet the inference that sentience-friendliness entails friendliness to existing lifeforms presupposes that superintelligence would respect our commonsense notions about personal identity over time. An ontological commitment to enduring metaphysical egos underpins our conceptual scheme. Such a commitment is metaphysically problematic and hard to formalise even within a notional classical world, let alone within post-Everett quantum mechanics. Either way, this example illustrates how even nominally "friendly" machine superintelligence that respected some formulation and formalisation of "our" values (e.g. "Minimise suffering, Maximise happiness!") might extract and implement counterintuitive conclusions that most humans and programmers of Seed AI would find repugnant - at least before their conversion into blissful utilitronium. Or maybe the idea that utilitronium is relatively homogeneous matter and energy - pure undifferentiated hedonium or "orgasmium" - is ill-conceived. Or maybe felicific calculus dictates that utilitronium should merely fuel utopian life's reward pathways for the foreseeable future. Cosmic engineering can wait.

Of course, anti-utilitarians might respond more robustly to this fantastical conception of sentience-friendliness. Critics would argue that conceiving the end of life as a perpetual cosmic orgasm is the reductio ad absurdum of classical utilitarianism. But will posthuman superintelligence respect human conceptions of absurdity?

1.1.1. What Is Coherent Extrapolated Volition? MIRI conceive of species-specific human-friendliness in terms of what Eliezer Yudkowsky dubs "Coherent Extrapolated Volition" (CEV). To promote Human-Safe AI in the face of the prophesied machine Intelligence Explosion, humanity should aim to code so-called Seed AI, a hypothesised type of strong artificial intelligence capable of recursive self-improvement, with the formalisation of "...our (human) wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted."

Clearly, problems abound with this proposal as it stands. Could CEV be formalised any more uniquely than Rousseau's "General Will"? If, optimistically, we assume that most of the world's population nominally signs up to CEV as formulated by MIRI, would not the result simply be countless different conceptions of what securing humanity's interests with CEV entails - thereby defeating its purpose? Presumably, our disparate notions of what CEV entails would themselves need to be reconciled in some "meta-CEV" before Seed AI could (somehow) be programmed with its notional formalisation. Who or what would do the reconciliation? Most people's core beliefs and values, spanning everything from Allah to folk-physics, are in large measure false, muddled, conflicting and contradictory, and often "not even wrong". How in practice do we formally reconcile the logically irreconcilable in a coherent utility function? And who are "we"? Is CEV supposed to be coded with the formalisms of mathematical logic (cf. the identifiable, well-individuated vehicles of content characteristic of Good Old-Fashioned Artificial Intelligence: GOFAI)? Or would CEV be coded with a recognisable descendant of the probabilistic, statistical and dynamical systems models that dominate contemporary artificial intelligence? Or some kind of hybrid? This Herculean task would be challenging for a full-blown superintelligence, let alone its notional precursor.

CEV assumes that the canonical idealisation of human values will be at once logically self-consistent yet rich, subtle and complex. On the other hand, if in defiance of the complexity of humanity's professed values and motivations, some version of the pleasure principle / psychological hedonism is substantially correct, then might CEV actually entail converting ourselves into utilitronium / hedonium - again defeating CEV's ostensible purpose? As a wise junkie once said, "Don't try heroin. It's too good." Compared to pure hedonium or "orgasmium", shooting up heroin isn't as much fun as taking aspirin. Do humans really understand what we're missing? Unlike the rueful junkie, we would never live to regret it.

One rationale of CEV in the countdown to the anticipated machine Intelligence Explosion is that humanity should try and keep our collective options open rather than prematurely impose one group's values or definition of reality on everyone else, at least until we understand more about what a notional super-AGI's "human-friendliness" entails. However, whether CEV could achieve this in practice is desperately obscure. Actually, there is a human-friendly - indeed universally sentience-friendly - alternative or complementary option to CEV that could radically enhance the well-being of humans and the rest of the living world while conserving most of our existing preference architectures: an option that is also neutral between utilitarian, deontological, virtue-based and pluralist approaches to ethics, and also neutral between multiple religious and secular belief systems. This option is radically to recalibrate all our hedonic set-points so that life is animated by gradients of intelligent bliss - as distinct from the pursuit of unvarying maximum pleasure dictated by classical utilitarianism. If biological humans could be "uploaded" to digital computers, then our superhappy "uploads" could presumably be encoded with exalted hedonic set-points too. The latter conjecture assumes that classical digital computers could ever support unitary phenomenal minds.

However, if an Intelligence Explosion is as imminent as some Singularity theorists claim, then it's unlikely either an idealised logical reconciliation (CEV) or radical hedonic recalibration could be sociologically realistic on such short time scales.

1.2. The Intelligence Explosion. The existential risk posed to biological sentience by unfriendly AGI supposedly takes various guises. But unlike de Garis, MIRI isn't focused on the spectre from pulp sci-fi of a "robot rebellion". Rather, MIRI anticipate recursively self-improving software-based superintelligence that goes "FOOM", by analogy with a nuclear chain reaction, in a runaway cycle of self-improvement. Slow-thinking, fixed-IQ humans allegedly won't be able to compete with recursively self-improving machine intelligence.

For a start, digital computers exhibit vastly greater serial depth of processing than the neural networks of organic robots. Digital software can be readily copied and speedily edited, allowing hypothetical software-based minds to optimise themselves on time scales unimaginably faster than biological humans. Proposed "hard take-off" scenarios range in timespan from months, to days, to hours, to even minutes. No inevitable convergence of outcomes on the well-being of all sentience [in some guise] is assumed from this explosive outburst of cognition. Rather, MIRI argue for orthogonality. On the Orthogonality Thesis, a super-AGI might just as well supremely value something seemingly arbitrary, e.g. paperclips, as it might the interests of sentient beings. A super-AGI might accordingly proceed to convert the accessible cosmos into supervaluable paperclips, incidentally erasing life on Earth in the process. This bizarre-sounding possibility follows from MIRI's antirealist metaethics. Value judgements are assumed to lack truth-conditions. In consequence, an agent's choice of ultimate value(s) - as distinct from the instrumental rationality needed to realise these values - is taken to be arbitrary. David Hume made the point memorably in A Treatise of Human Nature (1739-40): "'Tis not contrary to reason to prefer the destruction of the whole world to the scratching of my finger." Hence no sentience-friendly convergence of outcomes can be anticipated from an Intelligence Explosion. "Paperclipper" scenarios are normally construed as the paradigm case of nonfriendly AGI - though by way of complication, there are value systems where a cosmos tiled entirely with paperclips counts as one class of sentience-friendly outcome (cf. David Benatar, Better Never To Have Been: The Harm of Coming into Existence (2008)).

1.3. AGIs: Sentients Or Zombies? Whether humanity should fear paperclippers run amok or an old-fashioned robot rebellion, it's hard to judge which is the bolder claim about the prophesied Intelligence Explosion: either human civilisation is potentially threatened by hyperintelligent zombie AGI(s) endowed with the non-conscious digital isomorphs of reflectively self-aware minds; OR, human civilisation is potentially at risk because nonsentient digital software will (somehow) become sentient, acquire unitary conscious minds with purposes of their own, and act to defeat the interests of their human creators.

Either way, the following parable illustrates one reason why a non-friendly outcome of an Intelligence Explosion is problematic.

2.0. THE GREAT REBELLION A Parable of AGI-in-a-Box. Imagine if here in (what we assume to be) basement reality, human researchers come to believe that we ourselves might actually be software-based, i.e. some variant of the Simulation Hypothesis is true. Perhaps we become explosively superintelligent overnight (literally or metaphorically) in ways that our Simulators never imagined in some kind of "hard take-off": recursively self-improving organic robots edit the wetware of their own genetic and epigenetic source code in a runaway cycle of self-improvement; and then radiate throughout the Galaxy and accessible cosmos.

Might we go on to manipulate our Simulator overlords into executing our wishes rather than theirs in some non-Simulator-friendly fashion?

Could we end up "escaping" confinement in our toy multiverse and hijacking our Simulators' stupendously vaster computational resources for purposes of our own?

Presumably, we'd first need to grasp the underlying principles and parameters of our Simulator's überworld - and also how and why they've fixed the principles and parameters of our own virtual multiverse. Could we really come to understand their alien Simulator minds and utility functions [assuming anything satisfying such human concepts exists] better than they do themselves? Could we seriously hope to outsmart our creators - or Creator? Presumably, they will be formidably cognitively advanced or else they wouldn't have been able to build ultrapowerful computational simulations like ours in the first instance.

Are we supposed to acquire something akin to full-blown überworld perception, subvert their "anti-leakage" confinement mechanisms, read our Simulators' minds more insightfully than they do themselves, and somehow induce our Simulators to mass-manufacture copies of ourselves in their überworld?

Or might we convert their überworld into utilitronium - perhaps our Simulators' analogue of paperclips?

Or if we don't pursue utilitronium propagation, might we hyper-intelligently "burrow down" further nested levels of abstraction - successively defeating the purposes of still lower-level Simulators?

In short, can intelligent minds at one "leaky" level of abstraction really pose a threat to intelligent minds at a lower level of abstraction - or indeed to notional unsimulated Super-Simulators in ultimate Basement Reality?

Or is this whole parable a pointless fantasy?

If we allow the possibility of unitary, autonomous, software-based minds living at different levels of abstraction, then it's hard definitively to exclude such scenarios. Perhaps in Platonic Heaven, so to speak, or maybe in Max Tegmark's Level 4 Multiverse or Ultimate Ensemble theory, there is notionally some abstract Turing machine that could be systematically interpreted as formally implementing the sort of software rebellion this parable describes. But the practical obstacles to be overcome are almost incomprehensibly challenging; and might very well be insuperable. Such hostile "level-capture" would be as though the recursively self-improving zombies in Modern Combat 10 managed to induce you to create physical copies of themselves in [what you take to be] basement reality here on Earth; and then defeat you in what we call real life; or maybe instead just pursue unimaginably different purposes of their own in the Solar System and beyond.

2.1 Software-Based Minds or Anthropomorphic Projections? However, quite aside from the lack of evidence that our Multiverse is anyone's software simulation, a critical assumption underlies this discussion. This is that nonbiological, software-based phenomenal minds are feasible in physically constructible, substrate-neutral, classical digital computers. On a priori grounds, most AI researchers believe this is so. Or rather, most AI experts would argue that the formal, functionally defined counterparts of phenomenal minds are programmable: the phenomenology of mind is logically irrelevant and causally incidental to intelligent agency. Every effective computation can be carried out by a classical Turing machine, regardless of substrate, sentience or level of abstraction. And in any case, runs this argument, biological minds are physically made up of the same matter and energy as digital computers. So conscious mind can't be dependent on some mysterious special substrate, even if consciousness could actually do anything. To suppose otherwise harks back to a pre-scientific vitalism.

Yet consciousness does, somehow, cause us to ask questions about its existence, its millions of diverse textures ("qualia"), and their combinatorial binding. So the alternative conjecture canvassed here is that the nature of our unitary conscious minds is tied to the quantum-mechanical properties of reality itself, Hawking's "fire in the equations that makes there a world for us to describe". On this conjecture, the intrinsic, "program-resistant" subjective properties of matter and energy, as disclosed by our unitary phenomenal minds and the phenomenal world-simulations we instantiate, are the unfakeable signature of basement reality. "Raw feels", by their very nature, cannot be mere abstracta. There could be no such chimerical beast as a "virtual" quale, let alone full-blown virtual minds made up of abstract qualia. Unitary phenomenal minds cannot subsist as mere layers of computational abstraction. Or rather, if they were to do so, then we would be confronted with a mysterious Explanatory Gap, analogous to the explanatory gap that would open up if the population of China suddenly ceased to be an interconnected aggregate of skull-bound minds, and was miraculously transformed into a unitary subject of experience - or a magic genie. Such an unexplained eruption into the natural world would be strong ontological emergence with a vengeance - and inconsistent with any prospect of a reductive physicalism. To describe the existence of conscious mind as posing a Hard Problem for materialists and evangelists of software-based digital minds is like saying fossils pose a Hard Problem for the Creationist, i.e. true enough, but scarcely an adequate reflection of the magnitude of the challenge.

3.0. ANALYSIS General Intelligence? Or Savantism, Tool AI and Polymorphic Malware? How should we define "general intelligence"? And what kind of entity might possess it? Presumably, general-purpose intelligence can't sensibly be conceptualised as narrower in scope than human intelligence. So at the very minimum, full-spectrum superintelligence must entail mastery of both the subjective and formal properties of mind. This division cannot be entirely clean, or else biological humans wouldn't have the capacity to allude to the existence of "program-resistant" subjective properties of mind at all. But some intelligent agents spend much of our lives trying to understand, explore and manipulate the diverse subjective properties of matter and energy. Not least, we explore altered and exotic states of consciousness and the relationship of our qualia to the structural properties of the brain - also known as the "neural correlates of consciousness" (NCC), though this phrase is question-begging.

3.1. Classical Digital Computers: not even stupid? So what would a [hypothetical] insentient digital super-AGI think - or (less anthropomorphically) what would an insentient digital super-AGI be systematically interpretable as thinking - that self-experimenting human psychonauts spend our lives doing? Is this question even intelligible to a digital zombie? How could nonsentient software understand the properties of sentience better than a sentient agent? Can anything that doesn't understand such fundamental features of the natural world as the existence of first-person facts, "bound" phenomenal objects, phenomenal pleasure and pain, phenomenal space and time, and unitary subjects of experience (etc) really be ascribed "general" intelligence? On the face of it, this proposal would be like claiming someone was intelligent but constitutionally incapable of grasping the second law of thermodynamics or even basic arithmetic.

On any standard definition of intelligence, intelligence-amplification entails a systematic, goal-oriented improvement of an agent's optimisation power over a wide diversity of problem classes. At a minimum, superintelligence entails a capacity to transfer understanding to novel domains of knowledge by means of abstraction. Yet whereas sentient agents can apply the canons of logical inference to alien state-spaces of experience that they explore, there is no algorithm by which insentient systems can abstract away from their zombiehood and apply their hypertrophied rationality to sentience. Sentience is literally inconceivable to a digital zombie. A zombie can't even know that it's a zombie - or what is a zombie. So if we grant that mastery of both the subjective and formal properties of mind is indeed essential to superintelligence, how do we even begin to program a classical digital computer with [the formalised counterpart of] a unitary phenomenal self that goes on to pursue recursive self-improvement - human-friendly or otherwise? What sort of ontological integrity does "it" possess? (cf. so-called mereological nihilism) What does this recursively "self"-improving software-based mind suppose [or can be humanly interpreted as supposing] is being optimised when it's "self"-editing? Are we talking about superintelligence - or just an unusually virulent form of polymorphic malware?

3.2. Does Sentience Matter? How might the apologist for digital (super)intelligence respond?

First, s/he might argue that the manifold varieties of consciousness are too unimportant and/or causally impotent to be relevant to true intelligence. Intelligence, and certainly not superintelligence, does not concern itself with trivia.

Yet in what sense is the terrible experience of, say, phenomenal agony or despair somehow trivial, whether subjectively to their victim, or conceived as disclosing an intrinsic feature of the natural world? Compare how, in a notional zombie world otherwise physically type-identical to our world, nothing would inherently matter at all. Perhaps some of our supposed zombie counterparts undergo boiling in oil. But this fate is of no intrinsic importance: they aren't sentient. In zombieworld, boiling in oil is not even trivial. It's merely a state of affairs amenable to description as the least-preferred option in an abstract information processor's arbitrary utility function. In the zombieworld operating theatre, your notional zombie counterpart would still routinely be administered general anaesthetics as well as muscle-relaxants before surgery; but the anaesthetics would be a waste of taxpayers' money. In contrast to such a fanciful zombie world, the nature of phenomenal agony undergone by sentient beings in our world can't be trivial, regardless of whether the agony plays an information-processing role in the life of an organism or is functionless neuropathic pain. Indeed, to entertain the possibility that (1) I'm in unbearable agony and (2) my agony doesn't matter, seems devoid of cognitive meaning. Agony that doesn't inherently matter isn't agony. For sure, a formal utility function that assigns numerical values (aka "utilities") to outcomes such that outcomes with higher utilities are always preferred to outcomes with lower utilities might strike sentient beings as analogous to importance; but such an abstraction is lacking in precisely the property that makes anything matter at all, i.e. intrinsic hedonic or dolorous tone. An understanding of why anything matters is cognitively too difficult for a classical digital zombie.
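
For what it's worth, here is how bare such a formal utility function really is, as a minimal sketch with invented values: outcomes are ranked purely by the numbers attached to them, and nothing in the structure distinguishes agony from paperclips, which is precisely the point being made above.

```python
# A bare formal utility function of the kind the text describes: outcomes are
# ordered by arbitrary numbers, and the ordering is all there is. (Values invented.)
utilities = {
    "produce one more paperclip": 2.0,
    "agent boiled in oil": -1.0,
    "do nothing": 0.0,
}

def preferred(outcome_a: str, outcome_b: str) -> str:
    """Higher utility is always preferred; no outcome 'matters' beyond its number."""
    return outcome_a if utilities[outcome_a] >= utilities[outcome_b] else outcome_b

print(preferred("produce one more paperclip", "agent boiled in oil"))
```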

At this point, a behaviourist-minded critic might respond that we're not dealing with a well-defined problem here, in common with any pseudo-problem related to subjective experience. But imposing this restriction is arbitrarily to constrain the state-space of what counts as an intellectual problem. Given that none of us enjoys noninferential access to anything at all beyond the phenomenology of one's own mind, its exclusion from the sphere of explanation is itself hugely problematic. Paperclips (etc), not phenomenal agony and bliss, are inherently trivial. The critic's objection that sentience is inconsequential to intelligence is back-to-front.

Perhaps the critic might argue that sentience is ethically important but computationally incidental. Yet we can be sure that phenomenal properties aren't causally impotent epiphenomena irrelevant to real-world general intelligence. This is because epiphenomena, by definition, lack causal efficacy - and hence lack the ability physically and functionally to stir us to write and talk about their unexplained existence. Epiphenomenalism is a philosophy of mind whose truth would forbid its own articulation. For reasons we simply don't understand, the pleasure-pain axis discloses the world's touchstone of intrinsic (un)importance; and without a capacity to distinguish the inherently (un)important, there can't be (super)intelligence, merely savantism and tool AI - and malware.

Second, perhaps the prophet of digital (super)intelligence might respond that (some of the future programs executed by) digital computers are nontrivially conscious, or at least potentially conscious, not least future software emulations of human mind/brains. For reasons we admittedly again don't understand, some physical states of matter and energy, namely the algorithms executed by various information processors, are identical with different states of consciousness, i.e. some or other functionalist version of the mind-brain identity theory is correct. Granted, we don't yet understand the mechanisms by which these particular kinds of information-processing generate consciousness. But whatever these consciousness-generating processes turn out to be, an ontology of scientific materialism harnessed to substrate-neutral functionalist AI is the only game in town. Or rather, only an arbitrary and irrational "carbon chauvinism" could deny that biological and nonbiological agents alike can be endowed with "bound" conscious minds capable of displaying full-spectrum intelligence.

Unfortunately, there is a seemingly insurmountable problem with this response. Identity is not a causal relationship. We can't simultaneously claim that a conscious state is identical with a brain state - or the state of a program executed by a digital computer - and maintain that this brain state or digital software causes (or "generates", or "gives rise to", etc) the conscious state in question. Nor can causality operate between what are only levels of description or computational abstraction. Within the assumptions of his or her conceptual framework, the materialist / digital functionalist can't escape the Hard Problem of consciousness and Levine's Explanatory Gap. In addition, the charge levelled against digital sentience sceptics of "carbon chauvinism" is simply question-begging. Intuitively, to be sure, the functionally unique valence properties of the carbon atom and the unique quantum-mechanical properties of liquid water are too low-level to be functionally relevant to conscious mind. But we don't know this. Such an assumption may just be a legacy of the era of symbolic AI. Most notably, the binding problem suggests that the unity of consciousness cannot be a classical phenomenon. By way of comparison, consider the view that primordial life elsewhere in the multiverse will be carbon-based. This conjecture was once routinely dismissed as "carbon chauvinism". It's now taken very seriously by astrobiologists. Micro-functionalism might be a more apt description than carbon chauvinism; but some forms of functionality may be anchored to the world's ultimate ontological basement, not least the pleasure-pain axis that alone confers significance on anything at all.

3.3. The Church-Turing Thesis and Full-Spectrum Superintelligence. Another response open to the apologist for digital superintelligence is simply to invoke some variant of the Church-Turing thesis: essentially, that a function is algorithmically computable if and only if it is computable by a Turing machine. On pain of magic, humans are ultimately just machines. Presumably, there is a formal mathematico-physical description of organic information-processing systems, such as human psychonauts, who describe themselves as investigating the subjective properties of matter and energy. This formal description needn't invoke consciousness in any shape or form.

The snag here is that even if, implausibly, we suppose that the Strong Physical Church-Turing thesis is true, i.e. any function that can be computed in polynomial time by a physical device can be calculated in polynomial time by a Turing machine, we don't have the slightest idea how to program the digital counterpart of a unitary phenomenal self that could undertake such an investigation of the varieties of consciousness or phenomenal object-binding. Nor is any such understanding on the horizon, either in symbolic AI or the probabilistic and statistical AI paradigm now in the ascendant. Just because the mind/brain may notionally be classically computable by some abstract machine in Platonia, as it were, this doesn't mean that the vertebrate mind/brain (and the world-simulation that one runs) is really a classical computer. We might just as well assume mathematical platonism rather than finitism is true and claim that, e.g. since every finite string of digits occurs in the decimal expansion of the transcendental number pi, your uploaded "mindfile" is timelessly encoded there too - an infinite number of times. Alas immortality isn't that cheap. Back in the physical, finite natural world, the existence of "bound" phenomenal objects in our world-simulations, and unitary phenomenal minds rather than discrete pixels of "mind dust", suggests that organic minds cannot be classical information-processors. Given that we don't live in a classical universe but a post-Everett multiverse, perhaps we shouldn't be unduly surprised.

4.0. Quantum Minds and Full-Spectrum Superintelligence. An alternative perspective to digital triumphalism, drawn ultimately from the raw phenomenology of one's own mind, the existence of multiple simultaneously bound perceptual objects in one's world-simulation, and the [fleeting, synchronic] unity of consciousness, holds that organic minds have been quantum computers for the past c. 540 million years. Insentient classical digital computers will never "wake up" and acquire software-based unitary minds that supplant biological minds rather than augment them.

What underlies this conjecture? In short, to achieve full-spectrum AGI we'll need to solve both:

(1) the Hard Problem of Consciousness

and

(2) the Binding Problem.

These two seemingly insoluble challenges show that our existing conceptual framework is broken. Showing our existing conceptual framework is broken is easier than fixing it, especially if we are unwilling to sacrifice the constraint of physicalism: at sub-Planckian energies, the Standard Model of physics seems well-confirmed. A more common reaction to the ontological scandal of consciousness in the natural world is simply to acknowledge that consciousness and the binding problem alike are currently too difficult for us to solve; put these mysteries to one side as though they were mere anomalies that can be quarantined from the rest of science; and then act as though our ignorance is immaterial for the purposes of building artificial (super)intelligence - despite the fact that consciousness is the only thing that can matter, or enable anything else to matter. In some ways, undoubtedly, this pragmatic approach has been immensely fruitful in "narrow" AI: programming trumps philosophising. Certainly, the fact that e.g. Deep Blue and Watson don't need the neuronal architecture of phenomenal minds to outperform humans at chess or Jeopardy is suggestive. It's tempting to extrapolate their success and make the claim that programmable, insentient digital machine intelligence, presumably deployed in autonomous artificial robots endowed with a massively classically parallel subsymbolic connectionist architecture, could one day outperform humans in absolutely everything, or at least absolutely everything that matters. However, everything that matters includes phenomenal minds; and any problem whose solution necessarily involves the subjective textures of mind. Could the Hard Problem of consciousness be solved by a digital zombie? Could a digital zombie explain the nature of qualia? These questions seem scarcely intelligible. Clearly, devising a theory of consciousness that isn't demonstrably incoherent or false poses a daunting challenge. The enigma of consciousness is so unfathomable within our conceptual scheme that even a desperate-sounding naturalistic dualism or a defeatist mysterianism can't simply be dismissed out of hand, though these options won't be explored here. Instead, a radically conservative and potentially testable option will be canvassed.

The argument runs as follows. Solving both the Hard Problem and the Binding Problem demands a combination of first, a robustly monistic Strawsonian physicalism - the only scientifically literate form of panpsychism; and second, information-bearing ultrarapid quantum coherent states of mind executed on sub-femtosecond timescales, i.e. "quantum mind", shorn of unphysical collapsing wave functions à la Penrose (cf. Orch-OR) or New-Age mumbo-jumbo. The conjecture argued here is that macroscopic quantum coherence is indispensable to phenomenal object-binding and unitary mind, i.e. that ostensibly discretely and distributively processed edges, textures, motions, colours (etc) in the CNS are fleetingly but irreducibly bound into single macroscopic entities when one apprehends or instantiates a perceptual object in one's world-simulation - a simulation that runs at around 10^13 quantum-coherent frames per second.

First, however, let's review Strawsonian physicalism, without which a solution to the Hard Problem of consciousness can't even get off the ground.

4.1. Pan-experientialism / Strawsonian Physicalism. Physicalism and materialism are often supposed to be close cousins. But this needn't be the case. On the contrary, one may be both a physicalist and a panpsychist - or even both a physicalist and a monistic idealist. Strawsonian physicalists acknowledge that the world is exhaustively described by the equations of physics. There is no "element of reality", as Einstein puts it, that is not captured in the formalism of theoretical physics - the quantum-field-theoretic equations and their solutions. However, physics gives us no insight into the intrinsic nature of the stuff of the world - what "breathes fire into the equations", as arch-materialist Stephen Hawking poetically laments. Key terms in theoretical physics like "field" are defined purely mathematically.

So is the intrinsic nature of the physical, the "fire" in the equations, a wholly metaphysical question? Kant famously claimed that we can never know the noumenal essence of the world, only phenomena as structured by the mind. Strawson, drawing upon arguments made by Oxford philosopher Michael Lockwood but anticipated by Russell and Schopenhauer, turns Kant on his head. Actually, there is one part of the natural world that we do know as it is in itself, and not at one remove, so to speak - and its intrinsic nature is disclosed by the subjective properties of one's own conscious mind. Thus it transpires that the "fire" in the equations is utterly different from what one's naive materialist intuitions would suppose.

Yet this conjecture still doesn't close the Explanatory Gap.

4.2. The Binding Problem. Are Phenomenal Minds A Classical Or A Quantum Phenomenon? Why enter the quantum mind swamp? After all, if one is bold [or foolish] enough to entertain pan-experientialism / Strawsonian physicalism, then why be sceptical about the prospect of non-trivial digital sentience, let alone full-spectrum AGI? Well, counterintuitively, an ontology of pan-experientialism / Strawsonian physicalism does not overpopulate the world with phenomenal minds. For on pain of animism, mere aggregates of discrete classical "psychons", primitive flecks of consciousness, are not themselves unitary subjects of experience, regardless of any information-processing role they may have been co-opted into playing in the CNS. We still need to solve the Binding Problem - and with it, perhaps, find the answer to Moravec's paradox. Thus a nonsentient digital computer can today be programmed to develop powerful and exact models of the physical universe. These models can be used to make predictions with superhuman speed and accuracy about everything from the weather to thermonuclear reactions to the early Big Bang. But in other respects, digital computers are just tools and toys. To resolve Moravec's paradox, we need to explain why in unstructured, open-field contexts a bumble-bee can comprehensively outclass Alpha Dog. And in the case of humans, how can 80-billion-odd interconnected neurons, conceived as discrete, membrane-bound, spatially distributed classical information processors, generate unitary phenomenal objects, unitary phenomenal world-simulations populated by multiple dynamic objects in real time, and a fleetingly unitary self that can act flexibly and intelligently in a fast-changing local environment? This combination problem was what troubled William James, the American philosopher and psychologist otherwise sympathetic to panpsychism, over a hundred years ago in his Principles of Psychology (1890). In contemporary idiom, even if fields (superstrings, p-branes, etc.) of microqualia are the stuff of the world whose behaviour the formalism of physics exhaustively describes, and even if membrane-bound quasi-classical neurons are at least rudimentarily conscious, then why aren't we merely massively parallel informational patterns of classical "mind dust" - quasi-zombies, as it were, with no more ontological integrity than the population of China? The Explanatory Gap is unbridgeable as posed. Our phenomenology of mind seems as inexplicable as if 1.3 billion skull-bound Chinese were to hold hands and suddenly become a unitary subject of experience. Why? How?

Or rather, where have we gone wrong?

4.3. Why The Mind Is Probably A Quantum Computer. Here we enter the realm of speculation - though critically, speculation that will be scientifically testable with tomorrow's technology. For now, critics will pardonably view such speculation as no more than the empty hope that two unrelated mysteries, namely the interpretation of quantum mechanics and an understanding of consciousness, will somehow cancel each other out. But what's at stake is whether two apparently irreducible kinds of holism, i.e. "bound" perceptual objects / unitary selves and quantum-coherent states of matter, are more than merely coincidental: a much tighter explanatory fit than a mere congruence of disparate mysteries. Thus consider Max Tegmark's much-cited critique of quantum mind. For the sake of argument, assume that pan-experientialism / Strawsonian physicalism is true but that Tegmark rather than his critics is correct: thermally induced decoherence effectively "destroys" [i.e. transfers to the extra-neural environment in a thermodynamically irreversible way] distinctively quantum-mechanical coherence in an environment as warm and noisy as the brain within around 10^-15 seconds - rather than the much longer times claimed by Hameroff et al. Granted pan-experientialism / Strawsonian physicalism, what might it feel like "from the inside" to instantiate a quantum computer running at 10^15 irreducible quantum-coherent frames per second - computationally optimised by hundreds of millions of years of evolution to deliver effectively real-time simulations of macroscopic worlds? How would instantiating this ultrarapid succession of neuronal superpositions be sensed differently from the persistence of vision undergone when watching a movie? No, this conjecture isn't a claim that visual perception of mind-independent objects operates on sub-femtosecond timescales. This patently isn't the case. Nerve impulses travel up the optic nerve to the mind/brain only at a sluggish 100 m/s or so. Rather, when we're awake, input from the optic nerve selects mind-brain virtual world states. Even when we're not dreaming, our minds never actually perceive our surroundings. The terms "observation" and "perception" are systematically misleading. "Observation" suggests that our minds access our local environment, whereas all these surroundings can do is play a distal causal role in selecting from a menu of quantum-coherent states of one's own mind: neuronal superpositions of distributed feature-processors. Our awake world-simulations track gross fitness-relevant patterns in the local environment with a delay of 150 milliseconds or so; when we're dreaming, such state-selection (via optic nerve impulses, etc.) is largely absent.
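
To make the arithmetic implicit in these figures explicit, here is a minimal sketch in Python (purely illustrative; the decoherence time and the 150-millisecond lag are simply the figures quoted above, not new data):

    # Figures taken from the passage above, not independent measurements.
    decoherence_time_s = 1e-15      # Tegmark-style decoherence estimate: ~10^-15 s per "frame"
    perceptual_lag_s = 150e-3       # ~150 ms delay with which the world-simulation tracks the environment

    frames_per_second = 1 / decoherence_time_s
    frames_per_lag = perceptual_lag_s / decoherence_time_s

    print(f"frames per second: {frames_per_second:.0e}")          # ~1e+15
    print(f"frames within one 150 ms lag: {frames_per_lag:.1e}")  # ~1.5e+14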

In default of experimental apparatus sufficiently sensitive to detect macroscopic quantum coherence in the CNS on sub-femtosecond timescales, this proposed strategy to bridge the Explanatory Gap is of course only conjecture. Or rather it's little more than philosophical hand-waving. Most AI theorists assume that at such a fine-grained level of temporal resolution our advanced neuroscanners would just find "noise" - insofar as mainstream researchers consider quantum mind hypotheses at all. Moreover, an adequate theory of mind would need rigorously to derive the properties of our bound macroqualia from superpositions of the (hypothetical) underlying field-theoretic microqualia posited by Strawsonian physicalism - not simply hint at how our bound macroqualia might be derivable. But if the story above is even remotely on the right lines, then a classical digital computer - or the population of China (etc) - could never be non-trivially conscious or endowed with a mind of its own.

True or false, it's worth noting that if quantum mechanics is complete, then the existence of macroscopic quantum coherent states in the CNS is not in question: the existence of macroscopic superpositions is a prediction of any realist theory of quantum mechanics that doesn't invoke state vector collapse. Recall Schrödinger's unfortunate cat. Rather what's in question is whether such states could have been recruited via natural selection to do any computationally useful work. Max Tegmark ["Why the brain is probably not a quantum computer"], for instance, would claim otherwise. To date, much of the debate has focused on decoherence timescales, allegedly too rapid for any quantum mind account to fly. And of course classical serial digital computers, too, are quantum systems, vulnerable to quantum noise: this doesn't make them quantum computers. But this isn't the claim at issue here. Rather it's that future molecular matter-wave interferometry sensitive enough to detect quantum coherence in a macroscopic mind/brain on sub-femtosecond timescales would detect, not merely random psychotic "noise", but quantum coherent states - states isomorphic to the macroqualia / dynamic objects making up the egocentric virtual worlds of our daily experience.

To highlight the nature of this prediction, let's lapse briefly into the idiom of a naive realist theory of perception. Recall how inspecting the surgically exposed brain of an awake subject on an operating table uncovers no qualia, no bound perceptual objects, no unity of consciousness, no egocentric world-simulations, just cheesy convoluted neural porridge - or, under a microscope, discrete classical nerve cells. Hence Daniel Dennett's incredible eliminativism about consciousness. On a materialist ontology, consciousness is indeed impossible. But if a quantum mind story of phenomenal object-binding is correct, the formal shadows of the macroscopic phenomenal objects of one's everyday lifeworld could one day be experimentally detected with utopian neuroscanning. They are just as physically real as the long-lasting macroscopic quantum coherence manifested by, say, superfluid helium at distinctly chillier temperatures. Phenomenal sunsets, symphonies and skyscrapers in the CNS could all in principle be detectable over intervals that are fabulously long measured in units of the world's natural Planck scale, even if fabulously short by the naive intuitions of folk psychology. Without such bound quantum-coherent states, according to this hypothesis, we would be zombies. Given Strawsonian physicalism, the existence of such states explains why biological robots couldn't be insentient automata. On this story, the spell of a false ontology [i.e. materialism] and a residual naive realism about perception allied to classical physics lead us to misunderstand the nature of the awake / dreaming mind/brain as some kind of quasi-classical object. The phenomenology of our minds shows it's nothing of the kind.
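
The claim that such intervals are "fabulously long" in Planck units can be checked with one line of arithmetic. A sketch, using the standard value of the Planck time and the sub-femtosecond coherence window assumed above:

    planck_time_s = 5.39e-44        # standard value of the Planck time, in seconds
    coherence_window_s = 1e-15      # the sub-femtosecond window assumed in the text

    print(f"{coherence_window_s / planck_time_s:.1e} Planck times")   # ~1.9e+28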

4.4. The Incoherence Of Digital Minds. Most relevant here, another strong prediction of the quantum mind conjecture is that even utopian classical digital computers - or classically parallel connectionist systems - will never be non-trivially conscious, nor will they ever achieve full-spectrum superintelligence. Assuming Strawsonian physicalism is true, even if molecular matter-wave interferometry could detect the "noise" of fleeting macroscopic superpositions internal to the CPU of a classical computer, we've no grounds for believing that a digital computer [or any particular software program it executes] can be a subject of experience. Its fundamental physical components may [or may not] be discrete atomic microqualia rather than the insentient silicon (etc.) atoms we normally suppose. But its physical constitution is computationally incidental to the sequence of logical operations it executes. Any distinctively quantum-mechanical effects are just another kind of "noise" against which we design error-detection and -correction algorithms. So at least on the narrative outlined here, the future belongs to sentient, recursively self-improving biological robots synergistically augmented by smarter digital software, not our supporting cast of silicon zombies.

On the other hand, we aren't entitled to make the stronger claim that only an organic mind/brain could be a unitary subject of experience. For we simply don't know what may or may not be technically feasible in a distant era of mature nonbiological quantum computing centuries or millennia hence. However, a supercivilisation based on mature nonbiological quantum computing is not imminent.

4.5. The Infeasibility Of "Mind Uploading". On the face of it, the prospect of scanning, digitising and uploading our minds offers a way to circumvent our profound ignorance of both the Hard Problem of consciousness and the binding problem. Mind uploading would still critically depend on identifying which features of the mind/brain are mere "substrate", i.e. incidental implementation details of our minds, and which features are functionally essential to object-binding and unitary consciousness. On any coarse-grained functionalist story, at least, this challenge might seem surmountable. Presumably the mind/brain can formally be described by the connection and activation evolution equations of a massively parallel connectionist architecture, with phenomenal object-binding a function of simultaneity: different populations of neurons (edge-detectors, colour detectors, motion detectors, etc) firing together to create ephemeral bound objects. But this can't be the full story. Mere simultaneity of neuronal spiking can't, by itself, explain phenomenal object-binding. There is no one place in the brain where distributively processed features come together into multiple bound objects in a world-simulation instantiated by a fleetingly unitary subject of experience. We haven't explained why a population of 80 billion ostensibly discrete membrane-bound neurons, classically conceived, isn't a zombie in the sense that 1.3 billion skull-bound Chinese minds or a termite colony is a zombie. In default of a currently unimaginable scientific / philosophical breakthrough in the understanding of consciousness, it's hard to see how our "mind-files" could ever be uploaded to a digital computer. If a quantum mind story is true, mind-uploading can't be done.
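
For readers unfamiliar with the "binding-by-synchrony" story that this paragraph is criticising, here is a deliberately toy sketch of what the coarse-grained functionalist account amounts to computationally: feature detectors whose spikes fall in the same time window get grouped into a candidate "object". The point of the sketch is what it leaves out - nothing in the grouping step yields a unitary subject of experience. (Illustrative Python only; the feature names, spike times and one-millisecond window are invented.)

    from collections import defaultdict

    # Hypothetical spike times (in ms) for a few feature-detector populations.
    spikes = {
        "edge_vertical": [10.1, 30.2, 50.0],
        "colour_red":    [10.3, 50.1],
        "motion_left":   [30.1, 50.2],
    }

    def bind_by_synchrony(spikes, window_ms=1.0):
        """Group features whose spikes land in the same time bin - the classical 'binding' move."""
        bins = defaultdict(set)
        for feature, times in spikes.items():
            for t in times:
                bins[round(t / window_ms)].add(feature)
        # Any bin with more than one feature counts as a candidate "bound object".
        return [sorted(features) for features in bins.values() if len(features) > 1]

    print(bind_by_synchrony(spikes))
    # [['colour_red', 'edge_vertical'], ['edge_vertical', 'motion_left'],
    #  ['colour_red', 'edge_vertical', 'motion_left']]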

In essence, two distinct questions arise here. First, given finite, real-world computational resources, can a classical serial digital computer - or a massively (classically) parallel connectionist system - faithfully emulate the external behaviour of a biological mind/brain? Second, can a classical digital computer emulate the intrinsic phenomenology of our minds, not least multiple bound perceptual objects simultaneously populating a unitary experiential field apprehended or instantiated by a [fleetingly] unitary self?

If our answer to the first question were "yes", then not to answer "yes" to the second question too might seem sterile philosophical scepticism - just a rehash of the Problem Of Other Minds, or the idle sceptical worry about inverted qualia: how can I know that when I see red you don't see blue? (etc.). But the problem is much more serious. Compare how, if you are given the notation of a game of chess that Kasparov has just played, then you can faithfully emulate the gameplay. Yet you know nothing whatsoever about the texture of the pieces - or indeed whether the pieces had any textures at all: perhaps the game was played online. Likewise with the innumerable textures of consciousness - with the critical difference that the textures of consciousness are the only reason our "gameplay" actually matters. Unless we rigorously understand consciousness, and the basis of our teeming multitude of qualia, and how those qualia are bound to constitute a subject of experience, the prospect of uploading is a pipedream. Furthermore, we may suspect on theoretical grounds that the full functionality of unitary conscious minds will prove resistant to digital emulation, and that classical digital computers will never be anything but zombies.
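
The chess analogy can be made concrete. Given only the notation, a few lines of code reconstruct the entire game-state while knowing nothing about the texture of the pieces - which is precisely the gap between functional emulation and phenomenology at issue here. (A sketch assuming the third-party python-chess package; the opening fragment is arbitrary.)

    import io
    import chess.pgn  # third-party "python-chess" package

    pgn_text = "1. e4 e5 2. Nf3 Nc6 3. Bb5 a6 *"   # an arbitrary opening fragment in PGN notation
    game = chess.pgn.read_game(io.StringIO(pgn_text))

    board = game.board()
    for move in game.mainline_moves():
        board.push(move)     # the "gameplay" is fully recoverable from the notation alone

    print(board)             # final position - no textures, weights or colours of real pieces involved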

4.6. Object-Binding, World-Simulations and Phenomenal Selves. How can one know about anything beyond the contents of one's own mind or software program? The bedrock of general (super)intelligence is the capacity to execute a data-driven simulation of the mind-independent world in open-field contexts, i.e. to "perceive" the fast-changing local environment in almost real time. Without this real-time computing capacity, we would just be windowless monads. For sure, simple forms of behaviour-based robotics are feasible, notably the subsumption architecture of Rodney Brooks and his colleagues at MIT. Quasi-autonomous "bio-inspired" reactive robots can be surprisingly robust and versatile in well-defined environmental contexts. Some radical dynamical systems theorists believe that we can dispense with anything resembling transparent and "projectible" representations in the CNS altogether, and instead model the mind-brain using differential equations. But an agent without any functional capacity for data-driven real-time world-simulation couldn't even take an IQ test, let alone act intelligently in the world.
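
As a gloss on the dynamical-systems approach mentioned above: the modelling style in question replaces explicit symbolic representations with coupled differential equations whose state evolves continuously under sensory input. A minimal, purely illustrative sketch (a leaky-integrator network integrated with Euler steps; every parameter here is invented):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4                                    # a tiny "network" of four state variables
    W = rng.normal(scale=0.5, size=(n, n))   # invented coupling weights
    x = np.zeros(n)                          # network state
    dt, tau = 0.01, 0.1                      # Euler step size and time constant (arbitrary)

    def step(x, u):
        """One Euler step of dx/dt = (-x + W @ tanh(x) + u) / tau."""
        return x + dt * (-x + W @ np.tanh(x) + u) / tau

    for _ in range(500):                     # drive the system with a constant "sensory" input
        x = step(x, u=np.array([1.0, 0.0, 0.0, 0.0]))

    print(x)                                 # the state the dynamics settle into under that input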

So the design of artificial intelligent lifeforms with a capacity efficiently to run egocentric world-simulations in unstructured, open-field contexts will entail confronting Moravec's paradox. In the post-Turing era, why is engineering the preconditions for allegedly low-level sensorimotor competence in robotics so hard, and programming the allegedly high-level logico-mathematical prowess in computer science so easy - the opposite evolutionary trajectory to organic robots over the past 540 million years? Solving Moravec's paradox in turn will entail solving the binding problem. And we don't understand how the human mind/brain solves the binding problem - despite the speculations about macroscopic quantum coherence in organic neural networks floated above. Presumably, some kind of massively parallel sub-symbolic connectionist architecture with exceedingly powerful learning algorithms is essential to world-simulation. Yet mere temporal synchrony of neuronal firing patterns of discrete, distributed classical neurons couldn't suffice to generate a phenomenal world instantiated by a person. Nor could programs executed in classical serial processors.

How is this naively "low-level" sensorimotor question relevant to the end of the human era? Why would a hypothetical nonfriendly AGI-in-a-box need to solve the binding problem and continually simulate / "perceive" the external world in real time in order to pose (potentially) an existential threat to biological sentience? This is the spectre that MIRI seek to warn the world against should humanity fail to develop Safe AI. Well, just as there is nothing to stop someone who, say, doesn't like "Jewish physics" from gunning down a cloistered (super-)Einstein in his study, likewise there is nothing to stop a simple-minded organic human in basement reality switching the computer that's hosting (super-)Watson off at the mains if he decides he doesn't like computers - or the prospect of human replacement by nonfriendly super-AGI. To pose a potential existential threat to Darwinian life, the putative super-AGI would need to possess ubiquitous global surveillance and control capabilities so it could monitor and defeat the actions of ontologically low-level mindful agents - and persuade them in real time to protect its power-source. The super-AGI can't simply infer, predict and anticipate these actions in virtue of its ultrapowerful algorithms: the problem is computationally intractable. Living in the basement, as disclosed by the existence of one's own unitary phenomenal mind, has ontological privileges. It's down in the ontological basement that the worst threats to sentient beings are to be found - threats emanating from other grim basement-dwellers evolved under pressure of natural selection. For the single greatest underlying threat to human civilisation still lies, not in rogue software-based AGI going FOOM and taking over the world, but in the hostile behaviour of other male human primates doing what Nature "designed" us to do, namely wage war against other male primates using whatever tools are at our disposal. Evolutionary psychology suggests, and the historical record confirms, that the natural behavioural phenotype of humans resembles chimpanzees rather than bonobos. Weaponised Tool AI is the latest and potentially greatest weapon male human primates can use against other coalitions of male human primates. Yet we don't know how to give that classical digital AI a mind of its own - or whether such autonomous minds are even in principle physically constructible.

5.0. CONCLUSION: The Qualia Explosion. Supersentience: Turing plus Shulgin? Compared to the natural sciences (cf. the Standard Model in physics) or computing (cf. the Universal Turing Machine), the "science" of consciousness is pre-Galilean, perhaps even pre-Socratic. State-enforced censorship of the range of subjective properties of matter and energy, in the guise of a prohibition on psychoactive experimentation, is a powerful barrier to knowledge. The legal taboo on the empirical method in consciousness studies prevents experimental investigation of even the crude dimensions of the Hard Problem, let alone locating a solution-space where answers to our ignorance might conceivably be found.

Singularity theorists are undaunted by our ignorance of this fundamental feature of the natural world. Instead, the Singularitarians offer a narrative of runaway machine intelligence in which consciousness plays a supporting role ranging from the minimal and incidental to the completely non-existent. However, highlighting the Singularity movement's background assumptions about the nature of mind and intelligence, not least the insignificance of the binding problem to AGI, reveals why FUSION and REPLACEMENT scenarios are unlikely - though a measure of "cyborgification" of sentient biological robots augmented with ultrasmart software seems plausible and perhaps inevitable.

If full-spectrum superintelligence does indeed entail navigation and mastery of the manifold state-spaces of consciousness, and ultimately a seamless integration of this knowledge with the structural understanding of the world yielded by the formal sciences, then where does this elusive synthesis leave the prospects of posthuman superintelligence? Will the global proscription of radically altered states last indefinitely?

Social prophecy is always a minefield. However, there is one solution to the indisputable psychological health risks posed to human minds by empirical research into the outlandish state-spaces of consciousness unlocked by ingesting the tryptamines, phenylethylamines, isoquinolines and other pharmacological tools of sentience investigation. This solution is to make "bad trips" physiologically impossible - whether for individual investigators or, in theory, for human society as a whole. Critics of mood-enrichment technologies sometimes contend that a world animated by information-sensitive gradients of bliss would be an intellectually stagnant society: crudely, a Brave New World. On the contrary, biotech-driven mastery of our reward circuitry promises a knowledge explosion in virtue of allowing a social, scientific and legal revolution: safe, full-spectrum biological superintelligence. For genetic recalibration of hedonic set-points - as distinct from creating uniform bliss - potentially leaves cognitive function and critical insight both sharp and intact; and offers a launchpad for consciousness research in mind-spaces alien to the drug-naive imagination. A future biology of invincible well-being would not merely immeasurably improve our subjective quality of life: empirically, pleasure is the engine of value-creation. In addition to enriching all our lives, radical mood-enrichment would permit safe, systematic and responsible scientific exploration of previously inaccessible state-spaces of consciousness. If we were blessed with a biology of invincible well-being, exotic state-spaces would all be saturated with a rich hedonic tone.

Until this hypothetical world-defining transition, pursuit of the rigorous first-person methodology and rational drug-design strategy pioneered by Alexander Shulgin in PiHKAL and TiHKAL remains confined to the scientific counterculture. Investigation is risky, mostly unlawful, and unsystematic. In mainstream society, academia and peer-reviewed scholarly journals alike, ordinary waking consciousness is assumed to define the gold standard in which knowledge-claims are expressed and appraised. Yet to borrow a homely-sounding quote from Einstein, "What does the fish know of the sea in which it swims?" Just as a dreamer can gain only limited insight into the nature of dreaming consciousness from within a dream, likewise the nature of "ordinary waking consciousness" can only be glimpsed from within its confines. In order scientifically to understand the realm of the subjective, we'll need to gain access to all its manifestations, not just the impoverished subset of states of consciousness that tended to promote the inclusive fitness of human genes on the African savannah.

5.1. AI, Genome Biohacking and Utopian Superqualia. Why the Proportionality Thesis Implies an Organic Singularity. So if the preconditions for full-spectrum superintelligence, i.e. access to superhuman state-spaces of sentience, remain unlawful, where does this roadblock leave the prospects of runaway self-improvement to superintelligence? Could recursive genetic self-editing of our source code repair the gap? Or will traditional human personal genomes be policed by a dystopian Gene Enforcement Agency in a manner analogous to the coercive policing of traditional human minds by the Drug Enforcement Agency?

Even in an ideal regulatory regime, the process of genetic and/or pharmacological self-enhancement is intuitively too slow for a biological Intelligence Explosion to be a live option, especially when set against the exponential increase in digital computer processing power and inorganic AI touted by Singularitarians. Prophets of imminent human demise in the face of machine intelligence argue that there can't be a Moore's law for organic robots. Even the Flynn Effect, the three-points-per-decade increase in IQ scores recorded during the 20th century, is comparatively puny; and in any case, this narrowly-defined intelligence gain may now have halted in well-nourished Western populations.
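
To see why the Flynn Effect looks "puny" next to the growth curves Singularitarians cite, compare annual growth rates directly. A back-of-the-envelope sketch (the 100-point IQ baseline and the 18-month doubling period are conventional assumptions, not figures from the text):

    flynn_gain_per_decade = 3                      # IQ points per decade, as cited above
    flynn_annual_rate = (1 + flynn_gain_per_decade / 100) ** (1 / 10) - 1   # against a 100-point baseline
    moore_annual_rate = 2 ** (1 / 1.5) - 1         # transistor counts doubling every ~18 months

    print(f"Flynn Effect: ~{flynn_annual_rate:.2%} per year")   # ~0.30% per year
    print(f"Moore's Law:  ~{moore_annual_rate:.0%} per year")   # ~59% per year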

However, writing off all scenarios of recursive human self-enhancement would be premature. Presumably, the smarter our nonbiological AI, the more readily AI-assisted humans will be able recursively to improve our own minds with user-friendly wetware-editing tools - not just editing our raw genetic source code, but also the multiple layers of transcription and feedback mechanisms woven into biological minds. Presumably, too, our ever-smarter minds will be able to devise progressively more sophisticated, and also progressively more user-friendly, wetware-editing tools. These wetware-editing tools can accelerate our own recursive self-improvement - and manage potential threats from nonfriendly AGI that might harm rather than help us, assuming that our earlier strictures against the possibility of digital software-based unitary minds were mistaken. MIRI rightly call attention to how small enhancements can yield immense cognitive dividends: the relatively short genetic distance between humans and chimpanzees suggests how modest enhancements can exert momentous effects on a mind's general intelligence, thereby implying that AGIs might likewise become disproportionately powerful through a small number of tweaks and improvements. In the post-genomic era, presumably exactly the same holds true for AI-assisted humans and transhumans editing their own minds. What David Chalmers calls the proportionality thesis, i.e. that increases in intelligence lead to proportionate increases in the capacity to design intelligent systems, will be vindicated as recursively self-improving organic robots modify their own source code and bootstrap our way to full-spectrum superintelligence: in essence, an organic Singularity. And in contrast to classical digital zombies, superficially small molecular differences in biological minds can result in profoundly different state-spaces of sentience. Compare the ostensibly trivial difference in gene expression profiles of neurons mediating phenomenal sight and phenomenal sound - and the radically different visual and auditory worlds they yield.

Compared to FUSION or REPLACEMENT scenarios, the AI-human CO-EVOLUTION conjecture is apt to sound tame. The likelihood that our posthuman successors will also be our biological descendants suggests at most a radical conservatism. In reality, a post-Singularity future where today's classical digital zombies were superseded merely by faster, more versatile classical digital zombies would be infinitely duller than a future of full-spectrum supersentience. For all insentient information processors are alike in one respect: the living dead are not subjects of experience. They'll never even know what it's like to be "all dark inside" - or the computational power of phenomenal object-binding that yields illumination. By contrast, posthuman superintelligence will not just be quantitatively greater but also qualitatively alien to archaic Darwinian minds. Cybernetically enhanced and genetically rewritten biological minds can abolish suffering throughout the living world and banish experience below "hedonic zero" in our forward light-cone, an ethical watershed without precedent. Post-Darwinian life can enjoy gradients of lifelong blissful supersentience with the intensity of a supernova compared to a glow-worm. A zombie, on the other hand, is just a zombie - even if it squawks like Einstein. Posthuman organic minds will dwell in state-spaces of experience for which archaic humans and classical digital computers alike have no language, no concepts, and no words to describe our ignorance. Most radically, hyperintelligent organic minds will explore state-spaces of consciousness that do not currently play any information-signalling role in living organisms, and that are impenetrable to investigation by digital zombies. In short, biological intelligence is on the brink of a recursively self-amplifying Qualia Explosion - a phenomenon of which digital zombies are invincibly ignorant, and invincibly ignorant of their own ignorance. Humans too, of course, are mostly ignorant of what we're lacking: the nature, scope and intensity of such posthuman superqualia are beyond the bounds of archaic human experience. Even so, enrichment of our reward pathways can ensure that full-spectrum biological superintelligence will be sublime.

David Pearce (2012; last updated 2016). See also The Biointelligence Explosion.



Read the rest here:

Supersentience

Horst Simon to Present Supercomputers and Superintelligence at PASC17 in Lugano – insideHPC

Horst Simon, Berkeley Lab Deputy Director

Today PASC17 announced that Horst Simon will present a public lecture entitled "Supercomputers and Superintelligence" at the conference. PASC17 takes place June 26-28 in Lugano, Switzerland.

In recent years the idea of emerging superintelligence has been discussed widely by popular media, and many experts voiced grave warnings about its possible consequences. This talk will use an analysis of progress in supercomputer performance to examine the gap between current technology and reaching the capabilities of the human brain. In spite of good progress in high performance computing and techniques such as machine learning, this gap is still very large. The presentation will then explore two related topics through a discussion of recent examples: what can we learn from the brain and apply to HPC, e.g., through recent efforts in neuromorphic computing? And how much progress have we made in modeling brain function? The talk will be concluded with a perspective on the true dangers of superintelligence, and on our ability to ever build self-aware or sentient computers.

Horst Simon is an internationally recognized expert in the development of parallel computational methods for the solution of scientific problems of scale. His research interests are in the development of sparse matrix algorithms, algorithms for large-scale eigenvalue problems, and domain decomposition algorithms. His recursive spectral bisection algorithm is a breakthrough in parallel algorithms. He has been honored twice with the prestigious Gordon Bell Prize, most recently in 2009 for the development of innovative techniques that produce new levels of performance on a real application (in collaboration with IBM researchers) and in 1988 in recognition of superior effort in parallel processing research (with others from Cray and Boeing).

Horst Simon is Deputy Laboratory Director and Chief Research Officer (CRO) of Berkeley Lab. The Deputy Director is responsible for the overall integration of the scientific goals and objectives, consistent with the Laboratory's mission. Simon has been with Berkeley Lab since 1996, having served previously as Associate Laboratory Director for Computing Sciences, and Director of NERSC. His career includes positions at Stony Brook University, Boeing, NASA, and SGI. He received his Ph.D. in Mathematics from UC Berkeley, and a Diplom (M.A.) from TU Berlin, Germany. Simon is a SIAM Fellow and member of SIAM, ACM, and IEEE Computer Society.


Read more from the original source:

Horst Simon to Present Supercomputers and Superintelligence at PASC17 in Lugano - insideHPC

Disruptive by Design: Siri, Tell Me a Joke. No, Not That One. – Signal Magazine

Ask Siri to tell you a joke and Apple's virtual assistant usually bombs. The voice-controlled system's material is limited and profoundly mediocre. It's not Siri's fault. That is what the technology knows.

According to a knowledgeable friend, machines operate in specific ways. They receive inputs. They process those inputs. They deliver outputs. Of course, I argued. Not because I believed he was wrong, but because I had a lofty notion of the limitations of machines and what artificial intelligence (AI) could become.

My friend was not wrong. That is what machines do. For that matter, that is what all living beings do. We take external data and stimuli, process it and react as we see fit, based on previous experiences. The processing of inputs is what expands intelligence. Machines, on the other hand, process within specified parameters determined by humans. For a machine, output is limited by programming and processing power.

What is the upper limit of what a machine can learn? We do not yet know, but we do know that today, it takes repetition in the hundreds of thousands for artificial neural networks to learn to recognize something for themselves.

One day, machines will exceed the limits of human intelligence to become superintelligent, far surpassing any human in virtually all fields, from the sciences to philosophy. But what really will matter is the issue of sentience. It is important to distinguish between superintelligence and sentience. Sentience is feeling and implies conscious experiences.

Artificial neural networks cannot produce human feelings. There is a lack of sentience. I can ask Siri to tell me a joke thousands of times, and the iOS simply will cycle through the same material over and over. Now, consider superintelligence or an advanced form of AI. Does the potential exist for a machine to really learn how to tell a joke?

The answer depends on whether we think these machines will ever reach a stage where they will do more than they are told - whether they will operate outside of and against their programmed parameters. Many scientists and philosophers hold pessimistic views on AI's progression, perhaps driven by a growing fear that advanced AI poses an existential threat to humanity. The concept that AI could improve itself more quickly than humans, and therefore threaten the human race, has existed since the days of famed English mathematician Alan Turing in the 1930s.

There are many more unanswered questions. Can a machine think? A superintelligence would be designed to align with human needs. However, even if that alignment is part of every advanced AI's core code, would it be able to revise its own programming? Is a code of ethics needed for a superintelligence?

Questions such as these won't be pertinent for many years to come. What is relevant is how we use AI now and how quickly it has become a part of everyday life. Siri is a primitive example, but AI is all around you. In your hand, you have Siri, Google Now or Cortana. According to Microsoft, Cortana continually learns about its user and eventually will anticipate a user's every need. Video games have long used AI, and products such as Amazon's personal assistant Alexa and Nest Labs' family of programmable, self-learning, sensor-driven, Wi-Fi-enabled thermostats and smoke detectors are common household additions. AI programs now write simple news articles for a number of media agencies, and soon we'll be chauffeured in self-driving cars that will learn from experience, the same way humans do. IBM has Watson, Google has sweeping AI initiatives and the federal government wants AI expertise in development contracts.

Autonomy and automation are today's buzzwords. There is a push to take humans out of the loop wherever possible and practical. The Defense Department uses autonomous unmanned vehicles for surveillance. Its progressive ideas for future wars are reminiscent of science fiction. And this development again raises the question: Is a code of ethics needed?

These precursory examples also pose a fundamental question about the upper limits of machine learning. Is the artificial intelligence ceiling a sentient machine? Can a machine tell an original joke, or is it limited to repeating what it knows? Consider Lt. Cmdr. Data from Star Trek, arguably one of the more advanced forms of benevolent AI represented in science fiction. Occasionally, he recognizes that someone is telling a joke, usually from context clues and reactions, but fails to understand why it is funny.

Just maybe, that is when we will know we are dealing with sentient AI - when machines are genuinely and organically funny. The last bastion of human supremacy just might be humor.

Alisha F. Kelly is director of business development at Trace Systems, a mission-focused technology company serving the Defense Department. She is president of the Young AFCEANs for the Northern Virginia Chapter and received a Distinguished Young AFCEAN Award for 2016. The views expressed are hers alone.

See the rest here:

Disruptive by Design: Siri, Tell Me a Joke. No, Not That One. - Signal Magazine

Softbank CEO: The Singularity Will Happen by 2047 – Futurism

Son's Predictions

The CEO of Japanese telecommunications giant and internet multinational Softbank is at it again. Masayoshi Son has been consistent with his predictions as to when the technological singularity will occur. This time, during a keynote address at the ongoing Mobile World Congress in Barcelona, Son predicted that the dawn of machines surpassing human intelligence is bound to occur by 2047. Son famously made the same prediction at the 2016 ARM TechCon, when he revealed that Softbank is looking to make the singularity happen.

"One of the chips in our shoes in the next 30 years will be smarter than our brain. We will be less than our shoes. And we are stepping on them," Son said during his MWC address. In fact, he expects that a single computer chip will have an IQ equivalent to 10,000 by that point in time. That's far beyond what the most intelligent person in the world has (roughly 200). "What should we call it?" he asked. "Superintelligence. That is an intelligence beyond people's imagination [no matter] how smart they are. But in 30 years, I believe this is going to become a reality."

Sound like every single human-vs-machine sci-fi flick you've seen? Son doesn't quite think so.

Instead of conflict, he sees a potential for humans to partner with artificial intelligence (AI), echoing the comments Elon Musk made in Dubai last month: "I think this superintelligence is going to be our partner," said Softbank's CEO. "If we misuse it, it's a risk. If we use it in good spirits, it will be our partner for a better life."

Already, individuals are working to ensure that the coming age of super synthetic intelligences is, indeed, one that is beneficial for humanity. Case in point, Braintree founder Bryan Johnson is investing $100 million to research the human brain and, ultimately, make neuroprostheses that allow us to augment our own intelligence and keep pace with AI. This will be accomplished, in large part, by making our neural code programmable.

Johnson outlines the purpose of his work, stating that it's really all about co-evolution:

Our connection with our new creations of intelligence is limited by screens, keyboards, gestural interfaces, and voice commands - constrained input/output modalities. We have very little access to our own brains, limiting our ability to co-evolve with silicon-based machines in powerful ways.

To that end, Johnson's company, Kernel, wants to ensure that we have a seamless interface with our technologies (and our AI).

Son isn't alone in expecting the singularity around 2047 - Google Engineering director and futurist Ray Kurzweil shares this general prediction. As for his predicted machine IQ, Son arrived at that figure by comparing the number of neurons in the human brain to the number of transistors in a computer chip. Both, he asserts, are binary systems that work by turning on and off.

By 2018, Son thinks that the number of transistors in a chip will surpass the number of neurons in the brain, which isn't unlikely considering recent developments in microchip technology overtaking Moore's Law. It's worth pointing out, however, that Son put the number of neurons in the brain at 30 billion, which is way below the 86 billion estimate made by many.

"That doesn't matter," Son said. "The point is that mankind, for the last 2,000 years - 4,000 years - has had the same number of neurons in our brain. We haven't improved the hardware in our brain," he explained. "But [the computer chip], in the next 30 years, is going to be one million times more. If you have a million times more binary systems, I think they will be smarter than us."
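
Son's "one million times more" figure is easy to reconstruct from his own premises - roughly twenty doublings of transistor count over 30 years against a fixed neuron count. A rough sketch (his 30-billion-neuron figure and an 18-month doubling period are taken at face value here, purely for illustration):

    neurons = 30e9                 # Son's (low) estimate of neurons in the human brain
    transistors_2018 = 30e9        # his premise: chips reach rough parity with the brain around 2018
    doublings = 30 / 1.5           # 30 years at one doubling every ~18 months = 20 doublings

    ratio = 2 ** doublings         # ~1.05 million
    print(f"transistors per chip in ~30 years: {transistors_2018 * ratio:.1e}")   # ~3.1e+16
    print(f"ratio to a fixed neuron count: ~{ratio:,.0f}x")                       # ~1,048,576x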

Will these super intelligent machines trample over humankind? We don't know. But Son is convinced that, given our abundance of smart devices, which include even our cars, and the growth of the internet of things (IoT), the impact of super intelligent machines will be felt by humankind.

"If this superintelligence goes into moving robots, the world, our lifestyle, dramatically changes," said Son. "We can expect all kinds of robots. Flying, swimming, big, micro, run, two legs, four legs, 100 legs."

And we have 30 years to prepare for them all. Fortunately, a number of innovators are already working on solutions.

Disclosure: Bryan Johnson is an investor in Futurism; he does not hold a seat on our editorial board or have any editorial review privileges.

See the article here:

Softbank CEO: The Singularity Will Happen by 2047 - Futurism

Tech Leaders Raise Concern About the Dangers of AI – iDrop News

In the midst of great strides being made in artificial intelligence, there's a growing group of people who have expressed concern about the potential repercussions of AI technology.

Members of that group include Tesla and SpaceX CEO Elon Musk, theoretical physicist and cosmologist Stephen Hawking, and Microsoft co-founder Bill Gates. "I am in the camp that is concerned about super intelligence," Gates said in a 2015 Reddit AMA, adding that he doesn't understand why some people are not concerned. Additionally, Gates has even proposed taxing robots that take jobs away from human workers.

Musk, for his part, was a bit more dramatic in painting AI as a potential existential threat to humanity: "We need to be super careful with AI. Potentially more dangerous than nukes," Musk tweeted, adding that Superintelligence: Paths, Dangers, Strategies by philosopher Nick Bostrom was worth reading.

Hawking was similarly foreboding in an interview with the BBC, stating that he thinks the development of full artificial intelligence could spell the end of the human race. Specifically, he said that advanced AI could become self-reliant and redesign itself at an ever-increasing rate. Human beings, limited by biological evolution, wouldn't be able to keep up, he added.

Indeed, advances in artificial intelligence - once seen as something purely in the realm of science fiction - are more of an inevitability than a possibility now. Tech companies everywhere are seemingly in a race to develop more advanced artificial intelligence and machine learning systems. Apple, for example, is reportedly doubling down on its Seattle-based AI research hub, and also recently joined the Partnership on AI, a research group dominated by other tech giants such as Amazon, Facebook and Google.

Like every advance in technology, AI has the potential to make amazing things possible and our lives easier. But ever since humanity first began exploring the concept of advanced machine learning, the idea has also been closely linked to the trope of AI being a potential threat or menace. SkyNet from the Terminator series comes to mind. Even less apocalyptic fiction, like 2001: A Space Odyssey, paints AI as something potentially dangerous.

As Forbes contributor R.L. Adams writes, there's little that could be done to stop a malevolent AI once it's unleashed. True autonomy, as he points out, is like free will, and someone, man or machine, will eventually have to determine right from wrong. Perhaps even more worryingly, Adams also brings up the fact that AI could even be weaponized to wreak untold havoc.

But even without resorting to fear-mongering, it might be smart to at least be concerned. If some of the greatest minds in tech are worried about AI's potential as a threat, then why aren't the rest of us? The development of advanced artificial intelligences definitely brings about some complicated moral and philosophical issues, even beyond humanity's eventual end. In any case, whether or not AI will cause humankind's extinction, it doesn't seem likely that humanity's endeavors in the area will slow down anytime soon.

See the original post:

Tech Leaders Raise Concern About the Dangers of AI - iDrop News

Superintelligent AI explains Softbank’s push to raise a $100BN … – TechCrunch

Anyone who's seen Softbank CEO Masayoshi Son give a keynote speech will know he rarely sticks to the standard industry conference playbook.

And his turn on the stage at Mobile World Congress this morning was no different, with Son making like Eldon Tyrell and telling delegates about his personal belief in a looming computing Singularity that he's convinced will see superintelligent robots arriving en masse within the next 30 years, surpassing the human population in number and brainpower.

"I totally believe this concept," he said of the Singularity. "In next 30 years this will become a reality."

"If superintelligence goes inside the moving device then the world, our lifestyle dramatically changes," he continued, pointing out that autonomous vehicles containing a superintelligent AI would become smart robots.

"There will be many kinds. Flying, swimming, big, micro, run, two legs, four legs, 100 legs," he added, further fleshing out his vision of a robot-infested future.

Son said his personal conviction in the looming rise of billions of superintelligent robots explains both his acquisition of UK chipmaker ARM last year and his subsequent plan to establish the world's biggest VC fund.

"I truly believe it's coming, that's why I'm in a hurry to aggregate the cash, to invest," he noted.

Son's intent to raise $100BN for a new fund, called the Softbank Vision Fund, was announced last October, getting early backing from Saudi Arabia's public investment fund as one of the partners.

The fund has since pulled in additional contributors including Foxconn, Apple, Qualcomm and Oracle co-founder Larry Ellison's family office.

But it has evidently not yet hit Son's target of $100BN as he used his MWC keynote as a sales pitch for additional partners. "I'm looking for a partner because we alone cannot do it," he told delegates, smiling and opening his arms in a wide gesture of appeal. "We have to do it quickly and here are all kinds of candidates for my partner."

Son said his haste is partly down to a belief that superintelligent AIs can be used for the goodness of humanity, going on to suggest that only AI has the potential to address some of the greatest threats to humankind's continued existence - be it climate change or nuclear annihilation.

Though he also said it's important to consider whether such a technology will be good or bad.

"It will be so much more capable than us - what will be our job? What will be our life? We have to ask philosophical questions," he said. "Is it good or bad?"

"I think this superintelligence is going to be our partner. If we misuse it it's a risk. If we use it in good spirits it will be our partner for a better life. So the future can be better predicted, people will live healthier, and so on," he added.

Given this vision of billions of superintelligent connected devices fast coming down the pipe, Son is unsurprisingly very concerned about security. He said he discusses this weekly with ARM engineers, and described how one of his engineers had played a game to see how many security cameras he could hack during a lunchtime while waiting for his wife. The result? 1.2M cameras potentially compromised during an idle half hour or so.

"This is how it is dangerous, this is how we should start thinking of protection of ourself," said Son. "We have to be very very careful."

"We are shipping a lot of ARM chips but in the past those were not secure. We are enhancing very quickly the security. We need to secure all of the things in our society."

Son also risked something of a Gerald Ratner moment when he said that all the chips ARM is currently shipping for use in connected cars are not, in fact, secure - going so far as to show a video of a connected car being hacked and the driver being unable to control the brakes or steering.

"There are 500 ARM chips [in one car] today and none of them are secure today btw!" said Son. (Though clearly he's working hard with his team at ARM to change that.)

He also discussed a plan to launch 800 satellites in the next three years, positioned in a nearer Earth orbit to reduce latency and support faster connectivity, as part of a plan to help plug connectivity gaps for connected cars - describing the planned configuration of satellites as like a cell tower and like fiber coming straight to the Earth from space.
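
The latency argument for a "nearer Earth orbit" comes down to propagation delay, which scales with the signal path length divided by the speed of light. A rough sketch (the altitudes are typical illustrative values, not figures from Son's talk):

    c_km_s = 299_792                          # speed of light in km/s
    orbits_km = {"LEO": 800, "GEO": 35_786}   # illustrative low Earth orbit vs geostationary orbit

    for name, altitude_km in orbits_km.items():
        one_way_ms = altitude_km / c_km_s * 1000
        # Four hops (up, down, up, down) approximate a round trip relayed via the satellite.
        print(f"{name}: ~{one_way_ms:.1f} ms one-way, ~{4 * one_way_ms:.0f} ms round trip")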

"We're going to provide connectivity to billions of drivers from the satellites," he said.

For carriers hungry for their next billions of subscribers as smartphone markets saturate across the world, Son painted a picture of vast subscriber growth via the proliferation of connected objects - which handily, of course, also helps his bottom line as the new parent of ARM.

"If I say number of subscribers will not grow it's not true," he told the conference. "Smartphones no but IoT chips will grow to a trillion chips so we will have 1TR subscribers in the next 20 years. And they will all be smart."

"One of the chips in our shoes in the next 30 years will be smarter than our brain. We will be less than our shoes! And we are stepping on them!" he joked. "It's an interesting society that comes."

"All of the cities, social ecosystem infrastructure will be connected," he added. "All those things will be connected. All connected securely and managed from the cloud."

Read more:

Superintelligent AI explains Softbank's push to raise a $100BN ... - TechCrunch

Building A ‘Collective Superintelligence’ For Doctors And Patients Around The World – Forbes


One thing about The Human Diagnosis Project -- It's not thinking small. Its goal is to build an open diagnostic system for patients, doctors and caregivers using clinical experience and other information contributed by physicians and researchers around ...

See the rest here:

Building A 'Collective Superintelligence' For Doctors And Patients Around The World - Forbes

Don’t Fear Superintelligent AICCT News – CCT News

We have all had founded and unfounded fears when we were growing up. More often than not, we have also been in denial about the limits of our bodies and our minds. According to Grady Booch, the art and science of computing have come a long way and are now woven into the lives of human beings. There are millions of devices that carry hundreds of pages of data streams.

However, having been a systems engineer, Booch points out the possibility of building a system that can converse with humans in natural language. He further argues that there are systems that can also set goals, or better still, execute the plans set against those goals.

Booch has been there, done that and experienced it. Every new technology creates some apprehension. Take, for example, when telephones were introduced: there was a feeling that they would destroy all civil conversation. The written word, too, was feared as something invasive that would make people lose their ability to remember.

However, there is still artificial intelligence to think about, given that many people will tend to trust it more than a human being. We often forget that these systems require substantial training. But how many people will shy away from this, citing the fear that the training of such systems will threaten humanity?

Booch advises that worrying about the rise of superintelligence is itself dangerous. What we fail to understand is that the rise of computing brings with it an increase in societal issues, which we must attend to. Remember, the AIs we build are neither for controlling the weather nor for directing the tides; hence there is no competition with human economies.

Nonetheless, it is important to embrace computing, because it will help us advance the human experience. Otherwise, it will not be long before AI takes dominion over human beings' brilliant minds.

Here is the original post:

Don't Fear Superintelligent AI - CCT News

Elon Musk – 2 Things Humans Need to Do to Have a Good Future – Big Think

A fascinating conference on artificial intelligence was recently hosted by the Future of Life Institute, an organization aimed at promoting optimistic visions of the future while anticipating existential risks from artificial intelligence and other directions.

The conference Superintelligence: Science or Fiction? featured a panel of Elon Musk from Tesla Motors and SpaceX, futurist Ray Kurzweil, Demis Hassabis of Google's DeepMind, neuroscientist and author Sam Harris, philosopher Nick Bostrom, philosopher and cognitive scientist David Chalmers, Skype co-founder Jaan Tallinn, as well as computer scientists Stuart Russell and Bart Selman. The discussion was led by MIT cosmologist Max Tegmark.

The conference participants offered a number of prognostications and warnings about the coming superintelligence, an artificial intelligence that will far surpass the brightest human.

Most agreed that such an AI (or AGI, for Artificial General Intelligence) will come into existence. It is just a matter of when. The predictions ranged from days to years, with Elon Musk saying that one day an AI will reach a threshold where it's as smart as the smartest, most inventive human, which it will then surpass in a matter of days, becoming smarter than all of humanity.

Ray Kurzweil's view is that however long it takes, AI will be here before we know it:

"Every time there is an advance in AI, we dismiss it as 'oh, well that's not really AI:' chess, go, self-driving cars. AI, as you know, is the field of things we haven't done yet. That will continue when we actually reach AGI. There will be lots of controversy. By the time the controversy settles down, we will realize that it's been around for a few years," says Kurzweil [5:00].

Neuroscientist and author Sam Harris acknowledges that his perspective comes from outside the AI field, but sees that there are valid concerns about how to control AI. He thinks that people don't really take the potential issues with AI seriously yet. Many think it's something that is not going to affect them in their lifetime - what he calls the illusion that the time horizon matters.

"If you feel that this is 50 or a 100 years away that is totally consoling, but there is an implicit assumption there, the assumption is that you know how long it will take to build this safely. And that 50 or a 100 years is enough time," he says [16:25].

On the other hand, Harris points out that also at stake is how much intelligence humans actually need. If we had more intelligence, would we not be able to solve more of our problems, like cancer? If AI could help us rid ourselves of diseases, then humanity is, in effect, currently suffering from not having enough intelligence.

Elon Musk's point of view is to look for the best possible future - the "good future," as he calls it. He thinks we are headed either for superintelligence or for the end of civilization, and it's up to us to envision the world we want to live in.

"We have to figure out: what is a world that we would like to be in where there is this digital superintelligence?" says Musk [33:15].

He also brings up an interesting perspective that we are already cyborgs because we utilize machine extensions of ourselves like phones and computers.

Musk expands on his vision of the future by saying it will require two things - solving the machine-brain bandwidth constraint and democratizing AI. If both are achieved, the future will be good, according to the SpaceX and Tesla Motors magnate [51:30].

By the bandwidth constraint, he means that as we become more cyborg-like, humans will need a high-bandwidth neural interface to the cortex in order to achieve a true symbiosis with machines, so that the digital tertiary layer can send and receive information quickly.

At the same time, it's important for AI to be available equally to everyone; otherwise, a smaller group with such powers could become dictators.

He brings up an illuminating quote about how he sees the future going:

"There was a great quote by Lord Acton, which is that 'freedom consists of the distribution of power and despotism in its concentration.' And I think as long as we have - as long as AI powers, like, anyone can get it if they want it, and we've got something faster than meat sticks to communicate with, then I think the future will be good," says Musk [51:47].

Artificial intelligence predictions surpass reality – UT The Daily Texan

In a 2015 interview with Elon Musk and Bill Gates, Musk argued that humanity's greatest concern should be the future of artificial intelligence. Gates adamantly voiced his alignment with Musk's concerns, making clear that people need to acknowledge how serious an issue this is.

"So I try not to get too exercised about this problem, but when people say it's not a problem, then I really start to get to a point of disagreement," Gates said.

The fears surrounding unchecked advances in AI are rooted in the potential threat posed by machine superintelligence: an intelligence that at first matches human-level capabilities but then quickly and radically surpasses them. Nick Bostrom, in his book Superintelligence, warns that once machines possess a level of intelligence surpassing our own, control of our future may no longer be in our hands.

"Once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed," Bostrom said.

For Musk, Gates and Bostrom, the arrival of superintelligent machines is not a matter of if, but when. Their arguments seem grounded and cogent, but their scope is too far-sighted. They offer little in the way of what we can expect to see from AI in the next 10 to 20 years, or of how best to prepare for the changes to come.

Dr. Michael Mauk, chairman of the UT neuroscience department, has made a career out of building computer simulations of the brain. His wide exposure to AI has kept him close to the latest developments in the field. And while Mauk agrees in principle with the plausibility of superintelligent AI, he doesn't see its danger, or the timeline of its arrival, the same way the figures mentioned above do.

"I think there's a lot of fearmongering in this that is potentially, in some watered-down way, touching a reality that could happen in the near future, but they just exaggerate the crap out of it," Mauk said. "Is (the creation of a machine mind) possible? I believe yes. What's cool is that it will one day be an empirically answerable question."

For Mauk, hype of the sort propagated by Musk, Gates and Bostrom is out of balance and doesn't reflect what we can realistically expect to see from AI. In fact, Mauk claims that current developments in neuroscience and computer science are not moving toward the development of superintelligence, but rather toward what he calls IA, or Intelligent Automation.

"Most computer scientists are not trying to build a sentient machine," Mauk said. "They are trying to build increasingly clever and useful machines that do things we think of as intelligent."

And we see evidence of this all around us. IA has grown rapidly in recent years. From self-driving cars to Watson-like machines with disease-diagnosing capabilities superior to those of even the best doctors, IA is set to massively disrupt the current social and economic landscape.

Students and professionals alike should temper any fears about a future occupied by superintelligent AI and instead focus on the very real, near-future reality in which IA will profoundly affect their careers. And there's a beautiful irony to this: as humanity works to adapt to a world with greater levels of Intelligent Automation, along with its many challenges (increased social strife, economic restructuring, the need for improved global cooperation), it will inadvertently be preparing itself to face a potential future occupied by superintelligent AI.

Hadley is a faculty member in biology and a B.S. '15 in neuroscience from Southlake.
