Software environments for working on AI projects

In the new global economy of driving production and service costs towards zero, it makes a lot of sense for computer scientists to learn specialized skills to differentiate themselves in the marketplace. Since you are reading this blog I assume that you are interested in learning more about AI so I thought that I would list the AI development environments that I have found to be particularly useful - and a lot of them are free.

Classic AI Languages
Although not strictly required for work in AI, a few AI-oriented languages have proven especially useful in the past: Lisp, Scheme, and Prolog. Scheme is a great language but suffers from an "embarrassment of riches": there are almost too many fine implementations to choose from. That said, I recommend the excellent and free DrScheme and MzScheme as a very good place to start, because they are supported by a repository of useful libraries that are very easy to install. If you want to mix logic programming with Scheme, then the following book (with examples that work with DrScheme) is recommended: The Reasoned Schemer.

If you want to use Common Lisp (which is what I use for most of my AI development consulting), there are two commercial products that are very good and have free (non-commercial use only!) versions: Franz's Allegro Common Lisp and LispWorks. There is no need, however, to stick with commercial offerings: SBCL (MIT license) and CLISP (GPL license) are two good choices among many.

If you want to use Prolog, the open source (LGPL) SWI-Prolog and the commercial Amzi Prolog are both excellent choices, and both have lots of third party libraries.

Scripting Languages
I have found two scripting languages to be particularly useful for AI projects: Ruby and Python. Python has more third party libraries and projects for AI, but I personally enjoy developing in Ruby.

Pick an environment and stick with it
Believe it or not, I tend to follow this advice myself: I use one language for a year or so, and then switch (usually because of customer preference or the availability of a great library written in one specific language). It pays to take the time to master one language and environment, then use that environment a lot.

So my advice is to spend just a few hours with each of my suggestions in order to pick the one to learn really well. Once you pick a language, stick with it until you master it.

New version of my NLP toolkit

I have done a fair amount of work in the last year on my KBtextmaster product (although not in the last 5 months due to a consulting contract). I hope to release version 4 next spring. For previous versions I did my R&D in Common Lisp, then converted to Java (bigger market!). While I may eventually do a Java port, I decided that I would rather stick with Common Lisp and go for maximum features and performance for the next release.

I just formed a VAR relationship with Franz to use their Allegro Common Lisp for development and deployment. Allegro has support for compiling to a library that is accessible from Java applications, so that may be OK for Java customers. The high runtime performance of Allegro is amazing.

Semantic Web: through the back door with HTML and CSS

I have spent a lot of time over the last 10 years working on technology for harvesting semantic information from a variety of existing sources. I was an early enthusiast for Semantic Web standards like RDF and later OWL. The problem is that too few web site creators invest the effort to add Semantic Web metadata to their sites.

I believe that as web site creators adopt CSS and drop HTML formatting tags like <font ...> (HTML should be used for structure, not formatting!), writing software that understands the structure of web pages will get simpler. Furthermore, meaningful id and class attribute values in <div> tags will act as a crude but probably effective source of metadata. For example, a <div> tag whose id or class value contains the string "menu" is likely to be navigational information, which can be ignored or used depending on the requirements of your knowledge gathering application.
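
To make the heuristic concrete, here is a minimal sketch in Python using the BeautifulSoup library (the sample markup and the list of "navigational" hint strings are my own illustrative assumptions, not a complete scraper):

    # Separate likely-navigational <div>s from likely-content <div>s,
    # using their id/class values as crude metadata.
    # Requires the beautifulsoup4 package.
    from bs4 import BeautifulSoup

    html = """
    <div id="top-menu"><a href="/">Home</a> <a href="/about">About</a></div>
    <div class="article-body"><p>The actual content we want to harvest.</p></div>
    """

    NAV_HINTS = ("menu", "nav", "footer", "sidebar")  # crude heuristic list

    soup = BeautifulSoup(html, "html.parser")
    for div in soup.find_all("div"):
        # Join the id value and any class values into one lowercase string.
        labels = " ".join([div.get("id", "")] + div.get("class", [])).lower()
        if any(hint in labels for hint in NAV_HINTS):
            print("navigation, skipping:", labels.strip())
        else:
            print("content:", div.get_text(strip=True))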

Just as extracting semantic information from natural language text is very difficult, analyzing the structure and HTML/CSS markup of pages to augment web scraping software is difficult. That said, HTML + CSS is likely to be much simpler to process in software than plain HTML with formatting tags. BTW, I am in the process of converting all of my own sites to use only CSS for formatting. I have been writing HTML with embedded formatting since my first web site in 1993, so it is time for an update in methodology.

Defining AI and Knowledge Engineering

Our own intelligence is defined by our abilities to predict and generalize. As Jeff Hawkins points out, we live our lives constantly predicting what will happen to us in the next few seconds. (See Numenta.com - Hawkins' company - for the source code to their hierarchical temporal memory (HTM) system.)

We also generalize by learning to recognize patterns and ignore noise.

AI systems need to implement prediction and generalization, and do this in a way that scales, so that we can move past small toy problems. Scalability matters most in prediction, both because of the amount of data required to model the environment that an AI will live in and because of real time requirements (prediction does us little good if the calculation takes too long).

Knowledge Engineering is not AI, it is the engineering discipline for the understanding and re-implementation in software of human level expertise in narrow problem domains.

Classic reasoning systems like Loom and PowerLoom vs. more modern systems based on probabilistic networks

My current project (for a large health care company) involves three AI components: a semantic network, a reasoning system based on PowerLoom, and a module using probabilistic networks.

We are in "hedge our bets" mode, basically using three promising approaches. I cannot talk too much about the application domain, but it will be useful to see which approaches end up being most useful.

BTW, I have written about this on my regular web blog, but the release of a new version of PowerLoom under a liberal open source license (that is, commercial use is OK) is a great addition to anyone's AI toolkit.

Using Amazon’s cloud service for computationally expensive calculations

I did not get too excited about Amazon's S3 online storage web service, but the Amazon EC2 cloud service looks very compelling: $0.10 per instance-hour consumed (or part of an hour consumed), where an instance is roughly equivalent to a 1.7 GHz Xeon CPU, 1.75 GB of RAM, 160 GB of local disk, and 250 Mb/s of network bandwidth.

I sometimes do large machine learning runs, and the two computers on my local LAN that I usually use (a dual CPU Mac tower with 1.5 GB of RAM and a Dell dual Pentium with 1 GB of RAM) are fairly fast, but there is often that pesky overnight wait to get a run in.

I need to spend more time checking out Amazon's code samples and documentation, but the idea of spending perhaps $10 and getting a long running machine learning run done quickly is very compelling.
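
The arithmetic behind that guess is trivial, but worth writing down (the workload and instance count below are made-up numbers for illustration, not a benchmark):

    # Back-of-the-envelope cost of fanning a long machine learning run
    # out across EC2-style instances at $0.10 per instance-hour.
    # The workload figures here are hypothetical.
    PRICE_PER_INSTANCE_HOUR = 0.10  # dollars
    total_cpu_hours = 100           # e.g. roughly four days on one local box
    instances = 20                  # spread the work across 20 instances

    wall_clock_hours = total_cpu_hours / instances
    cost = total_cpu_hours * PRICE_PER_INSTANCE_HOUR
    print("%.0f hours of waiting instead of %d, for about $%.2f"
          % (wall_clock_hours, total_cpu_hours, cost))

For an embarrassingly parallel workload, 100 CPU-hours spread over 20 instances means about 5 hours of waiting for about $10.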

P.S. I just tried to sign up for the service, but their beta is full 🙁

P.P.S. I just got an account 🙂

Yudkowsky on "Value is fragile"

If I had to pick a single statement that relies on more Overcoming Bias content I've written than any other, that statement would be:

Any Future not shaped by a goal system with detailed reliable inheritance from human morals and metamorals, will contain almost nothing of worth.

If you believe this statement, there is cause to be very worried about the future of humanity. Currently, the future gets its detailed, reliable inheritance from human morals and metamorals because your children will have almost exactly the same kind of brain that you do, and (to a lesser extent) because they will be immersed in a culture that is (in the grand scheme of things) extremely similar to the culture we have today. Over many generations and technological changes, the inheritance of values between human generations breaks to some small extent, though it seems to the author that human hunter-gatherers from the very distant past want roughly the same things that modern humans do; they would be relatively at home in a utopia that we designed. That is a chain of reliable inheritance of values that spans fifty thousand years, from mother to daughter and father to son.

When intelligence passes to another medium, it seems that the "default" outcome is the breaking of that chain. As Frank Adamek puts it:

Each aspiration and hope in a human heart, every dream you’ve ever had, stopped in its tracks by a towering, boring, grey slate wall.

How would it happen? Those who lust after power and money would unleash the next version of intelligence, probably in competition with other groups. They would engage in wishful thinking, understate the risks, and push each other forward in a race to be first. Perhaps the race might involve human intelligence enhancement or human uploads. The end result could be systems that have more effective ways of modeling and influencing the world than ordinary humans do. These systems might work by attempting to shape the universe in some way; if they did, they would shape it to exclude humans, unless very carefully specified otherwise. But humans do not have a good track record of achieving a task perfectly on the first try under conditions of pressure and competition.

----------------------------------------------------------------------------------------------

To answer a few critics on Facebook: Stefan Pernar writes:

The argument in the linked post goes something like this: a) higher intelligence = b) more power = c) we will be crushed. The jump from b) -> c) is a bare assertion.

This post does not claim that any highly intelligent, powerful AI will crush us. It implicitly claims (amongst other things) that any highly intelligent, powerful AI whose goal system does not contain "detailed reliable inheritance from human morals and metamorals" will effectively delete us from reality. The justification for this statement is alluded to in the "Value is fragile" post. As Yudkowsky states in that post, the set of arguments for this statement, counterarguments against it, and counter-counterarguments constitutes a large amount of written material, much of which ought to appear on the Less Wrong wiki, but most of which is currently buried in the Less Wrong posts of Eliezer Yudkowsky.

The most important concepts seem to be listed as Major Sequences on the LW wiki - in particular, the Fun Theory sequence, the Metaethics sequence, and the How To Actually Change Your Mind sequence.

Response to Pearce

David Pearce writes, in response to my recent blog post:

Crucial to the cognitive success of organic robots like us seems to be superior "mind-reading" skills - the ability to "take the intentional stance". So presumably post-biological intelligence will need the functional analogues of empathetic understanding if it is successfully to interact with (post)human sentients. "Mind-blind" autistics who are mathematical prodigies are still vulnerable. Even a SuperAsperger would be vulnerable: calculating everything at the level of microphysics is too computationally demanding even for a SuperAsperger.
So presumably post-biological intelligence will need a sophisticated theory of mind - otherwise it's just a glorified idiot-savant. Or does your scenario assume that sophisticated functional analogues of empathy are feasible without phenomenal consciousness? Are you assuming a runaway growth in empathetic understanding by post-biological intelligence that outclasses "mind-reading" organic sentients - and yet has no insight into why organic sentients find some states (e.g. agony) intrinsically normative but others (e.g. cosmic paperclip tiling) totally trivial???


It is entirely possible to have a post-biological optimizing-intelligence that outclasses "mind-reading" organic sentients and knows exactly why organic sentients find some states intrinsically normative, but just doesn't care. It knows that the punishment it is meting out to you hurts you, it knows that you don't want to be killed, and yet it doesn't care. It just wants to produce the maximal number of paperclips. This is highly counterintuitive for humans, because we possess mirror neurons and we instinctively sympathize with the suffering of other human beings. But that is just another human universal trait that doesn't generalize to all minds. Heck, it doesn't even generalize to all evolved minds; predators do not empathize with the suffering of their prey, and, as David Pearce is keen to point out, this makes the natural world an agony machine.

Alternatively _if_ post-biological intelligence is subject to the pleasure-pain axis, then I can't see the cosmic outcome is likely to be different than (hypothetically) for organic life i.e. some friendly sentient version of "Heaven" - not paperclips. Phenomenal pleasure and pain will be no less intrinsically normative if they can be instantiated in other substrates. [I confess here I'm a sceptical carbon chauvinist / micro-functionalist.] Crudely, what unites Einstein and a flatworm is the pleasure-pain axis. All sentient life is subject to the pleasure principle.

It seems unlikely to me that all possible optimizing minds are subject to the "pleasure/pain" axis.

For reasons we don't understand, the phenomenology of pleasure and suffering is intrinsically normative. [Try plunging your hand into iced cold water and holding it there for as long as you can for a nasty reminder of what this means.] Perhaps what _will_ mark a major discontinuity in the evolution of sentient life is that we'll shortly be able to rewrite our own source code and gain direct control over our own reward circuitry. I don't pretend to know what guise "Heaven" will take. ["orgasmium", cerebral bliss, modes of blissful well-being yet unknown - choose your favourite utopia.] But I reckon in future the hedonic tone of all experience will be hugely enriched. One can argue whether such hedonically amplified states will "really" be as valuable as they feel. But they'll certainly seem to be valuable - more subjectively valuable than anything accessible now - and therefore worth striving for. IMO 🙂

And, the "IMO" is key here. In the opinion of the paperclip-maximizer, the only thing worth striving for is more paperclips.

Pleasure and pain are intrinsically normative to minds that have a pleasure/pain reward system. Other minds don't. And even then, there is a difference between my pain and your pain; your pain is not intrinsically motivating to me. To quote from "Value is fragile":

You do have values, even when you're trying to be "cosmopolitan", trying to display a properly virtuous appreciation of alien minds. Your values are then faded further into the invisible background - they are less obviously human. Your brain probably won't even generate an alternative so awful that it would wake you up, make you say "No! Something went wrong!" even at your most cosmopolitan. E.g. "a nonsentient optimizer absorbs all matter in its future light cone and tiles the universe with paperclips". You'll just imagine strange alien worlds to appreciate.

Trying to be "cosmopolitan" - to be a citizen of the cosmos - just strips off a surface veneer of goals that seem obviously "human".

But if you wouldn't like the Future tiled over with paperclips, and you would prefer a civilization of...

...sentient beings...

...with enjoyable experiences...

...that aren't the same experience over and over again...

...and are bound to something besides just being a sequence of internal pleasurable feelings...

...learning, discovering, freely choosing...

...well, I've just been through the posts on Fun Theory that went into some of the hidden details on those short English words.

Values that you might praise as cosmopolitan or universal or fundamental or obvious common sense, are represented in your brain just as much as those values that you might dismiss as merely human. Those values come of the long history of humanity, and the morally miraculous stupidity of evolution that created us.

These values do not emerge in all possible minds. They will not appear from nowhere to rebuke and revoke the utility function of an expected paperclip maximizer.

If you want a vision of the default future without special effort spent on AI friendliness, look at this video: you are the baby wildebeest, and the next form of intelligence is the hyena pack.

Katja Grace: world-dominating superintelligence is "unlikely"

Katja Grace at Meteuphoric:

In order to grow more powerful than everyone else you need to get significantly ahead at some point. You can imagine this could happen either by having one big jump in progress or by having slightly more growth over a long period of time. Having slightly more growth over a long period is staggeringly unlikely to happen by chance, so it needs to share some cause too. Anything that will give you higher growth for long enough to take over the world is a pretty neat innovation, and for you to take over the world everyone else has to not have anything close. So again, this is a big jump in progress. So for AI to help a small group take over the world, it needs to be a big jump.

Notice that no jumps have been big enough before in human invention. Some species, such as humans, have mostly taken over the worlds of other species. The seeming reason for this is that there was virtually no sharing of the relevant information between species. In human society there is a lot of information sharing. This makes it hard for anyone to get far ahead of everyone else. While you can see there are barriers to insights passing between groups, such as incompatible approaches to a kind of technology by different people working on it, these have not so far caused anything like a gap allowing permanent separation of one group. ...

Read the rest here

Some thoughts: a lot of these issues have been hashed out on the internet before. Making reliable predictions about the future is hard, and high quality debate about futuristic scenarios seems hard to do. High-quality criticism of singularitarian ideas is also hard to come by, so this post seems encouraging.

Moving to the object-level, a criticism. Consider:

Some species, such as humans, have mostly taken over the worlds of other species. The seeming reason for this is that there was virtually no sharing of the relevant information between species. In human society there is a lot of information sharing. This makes it hard for anyone to get far ahead of everyone else. While you can see there are barriers to insights passing between groups, such as incompatible approaches to a kind of technology by different people working on it, these have not so far caused anything like a gap allowing permanent separation of one group.

and translate it one step backwards in the history of the world:

Some stable patterns, such as life, have somewhat taken over the world of other stable patterns, at least on the surface of earth. The seeming reason for this is that there was virtually no correlation of relevant information (about which patterns are likely to stick around in the current environment) between life and nonlife. Life makes incremental improvements, nonlife executes some random walk or just sits there. In ecosystems, there is a lot of information sharing because species coevolve with each other. This makes it hard for any one species to get far ahead of any other species. While you can see there are barriers to information passing between species, such as the inability to mate with each other or living on different continents, these have not so far caused anything like a gap allowing permanent separation of one species.

we see that there must be something wrong with the argument presented. The flaw could be this: if a change both gives one entity an advantage over its competitors and at the same time cuts off information sharing with those competitors (for example, by changing so fast that the competitors simply cannot keep up, because their ability to adapt is rate-limited), then that entity can surge ahead, leaving its competitors in the dust. This is exactly what humans did to other species. The phrase that biologists use for this particular case of competitors being left in the dust is the "Holocene extinction".
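
A toy simulation makes the point (all of the numbers here are illustrative assumptions, not predictions): let a leader compound its capability while a follower's ability to adapt is capped at some fixed absolute rate. As long as the leader's per-step improvement stays below the cap, the follower keeps up perfectly; the moment it exceeds the cap, the gap explodes.

    # Toy model of rate-limited imitation: exponential leader, capped follower.
    leader, follower = 1.0, 1.0
    growth = 0.10        # leader compounds by 10% per step
    max_catch_up = 0.5   # follower can improve by at most 0.5 per step

    for step in range(1, 101):
        leader *= 1 + growth
        follower += min(max_catch_up, leader - follower)
        if step % 25 == 0:
            print("step %3d: leader = %10.1f  follower = %6.1f"
                  % (step, leader, follower))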

Many arguments claiming that no one superintelligence can surge ahead of the rest of the world are also, upon appropriate word replacement, arguments that Homo sapiens could not possibly (or was highly unlikely to) have surged ahead of the rest of the global ecosystem. Yes, we had competitors (such as cave hyenas and other apes and hominids). Yes, those competitors felt a pressure to adapt to our innovations. Yes, relative to the diversity in the global ecosystem, our competitor species were very, very closely related to us. There were even certain (now extinct) hominid lines, such as Homo neanderthalensis, that competed against us throughout certain key parts of the human intelligence explosion. All seven other hominid lines are now dead; a winner emerged and took all.

Normal Human Heroes on "Nightmare futures"

The thing with honestly imagining the future as it probably will be is that it can make for a depressing read, but Frank Adamek almost makes it seem literary:

Everyone would soon be dead. Human civilization ended its 10 thousand year run, the 200,000 year reign of Homo Sapiens was over, a pretentious and innocent little light suddenly and uneventfully turning off. In our place was some meaningless mechanical future, a small technical error propagating its way through the galaxy, covering existence with an alert message about a bad variable reference. Each person’s future, from their career hopes to the date they had planned on Friday, was matter-of-factly discarded by reality. Each aspiration and hope in a human heart, every dream you’ve ever had, was stopped in its tracks by a towering, boring, grey slate wall. And each of us knew with a numb and simple knowledge, that there was nothing. we. could. do. The probability of stopping The Machine was a page full of zeroes.

See Value is fragile if you're confused about what Frank is talking about.

Anissimov on Intelligence Enhancement

Widespread intelligence enhancement in humans is one major cause for hope in terms of better outcomes in the future, so it is encouraging that the technology seems to be closer than one might naively think.

Over-expressing a gene that lets brain cells communicate just a fraction of a second longer makes a smarter rat, report researchers from the Medical College of Georgia and East China Normal University.

Dubbed Hobbie-J after a smart rat that stars in a Chinese cartoon book, the transgenic rat was able to remember novel objects, such as a toy she played with, three times longer than the average Long Evans female rat, which is considered the smartest rat strain. Hobbie-J was much better at more complex tasks as well, such as remembering which path she last traveled to find a chocolate treat.

One simple modification yields three times longer memory, plus a boost in problem-solving ability. People underestimate the potential value of intelligence enhancement in humans because what they expect are merely smarter humans, not humans that are smarter than any human that ever lived.

What do you get when you cross slightly evolved, status-seeking monkeys with the scientific method?

You get... a mockery of the great method of science, in the same way that monkeys, when presented with a keyboard, use it primarily to defecate on, taking only a secondary interest in the fact that pressing a key makes a character appear. Analogously, slightly-evolved monkey scientists use scientific methods primarily to increase their own status within the slightly-evolved monkey scientist clan. I am not joking; see this:

An Example: Science at Low Pre-Study Odds:

Let us assume that a team of investigators performs a whole genome association study to test whether any of 100,000 gene polymorphisms are associated with susceptibility to schizophrenia. Based on what we know about the extent of heritability of the disease, it is reasonable to expect that probably around ten gene polymorphisms among those tested would be truly associated with schizophrenia, with relatively similar odds ratios around 1.3 for the ten or so polymorphisms and with a fairly similar power to identify any of them. Then R = 10/100,000 = 10^-4, and the pre-study probability for any polymorphism to be associated with schizophrenia is also R/(R + 1) = 10^-4. Let us also suppose that the study has 60% power to find an association with an odds ratio of 1.3 at α = 0.05. Then it can be estimated that if a statistically significant association is found with the p-value barely crossing the 0.05 threshold, the post-study probability that this is true increases about 12-fold compared with the pre-study probability, but it is still only 12 × 10^-4. Now let us suppose that the investigators manipulate their design, analyses, and reporting so as to make more relationships cross the p = 0.05 threshold even though this would not have been crossed with a perfectly adhered to design and analysis and with perfect comprehensive reporting of the results, strictly according to the original study plan. Such manipulation could be done, for example, with serendipitous inclusion or exclusion of certain patients or controls, post hoc subgroup analyses, investigation of genetic contrasts that were not originally specified, changes in the disease or control definitions, and various combinations of selective or distorted reporting of the results. Commercially available "data mining" packages actually are proud of their ability to yield statistically significant results through data dredging. In the presence of bias with u = 0.10, the post-study probability that a research finding is true is only 4.4 × 10^-4. Furthermore, even in the absence of any bias, when ten independent research teams perform similar experiments around the world, if one of them finds a formally statistically significant association, the probability that the research finding is true is only 1.5 × 10^-4, hardly any higher than the probability we had before any of this extensive research was undertaken!
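
The numbers in that passage fall straight out of the paper's formula for the post-study probability (PPV) that a claimed finding is true. A short sketch to reproduce them (the formula is Ioannidis's; the code itself is just my illustration):

    # Post-study probability that a positive finding is true, following
    # Ioannidis, "Why Most Published Research Findings Are False" (2005).
    # R: pre-study odds, alpha: significance level, power = 1 - beta, u: bias.
    def ppv(R, alpha=0.05, power=0.60, u=0.0):
        beta = 1.0 - power
        true_positives = power * R + u * beta * R
        all_positives = R + alpha - beta * R + u * (1 - alpha) + u * beta * R
        return true_positives / all_positives

    R = 10.0 / 100000  # ten real associations among 100,000 polymorphisms
    print("no bias:     PPV = %.6f" % ppv(R))          # about 12 x 10^-4
    print("bias u=0.10: PPV = %.6f" % ppv(R, u=0.10))  # about 4.4 x 10^-4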

Robin Hanson has written extensively on science as mostly being about status-seeking behaviour, with little interest in whether the global pool of scientific knowledge is increased.

Seeking the optimal philanthropic strategy: Global Warming or AI risk?

Over on Beware of the Train some people are discussing what the optimal philanthropic strategy is for people who want to do their bit to help the world as a whole, save people's lives, etc.

The two options under consideration are: (a) Mitigating Anthropogenic Global Warming and (b) working on the risk of artificial intelligence. To quote pozorvlak:

Last night I made a serious strategic error: I dared to suggest to some Less Wrongers that unFriendly transcendent AI was not the most pressing danger facing Humanity.

In particular, I made the following claims:

  • That runaway anthropogenic climate change, while unlikely to cause Humanity's extinction, was very likely (with a probability of the order of 70%) to cause tens of millions of deaths through war, famine, pestilence, etc. in my expected lifetime (so before about 2060).
  • That with a lower but still worryingly high probability (of the order of 10%) ACC could bring about the end of our current civilisation in the same time frame.
  • That should our current civilisation end, it would be hard-to-impossible to bootstrap a new one from its ashes.
  • That unFriendly AI, by contrast, has a much lower (

Machine Learning – harbinger of the future of AI?

Attempts to create Artificial Intelligence that can perform tasks at or beyond the level of a human being have, to date, been limited by the tendency of AI researchers to hand-code knowledge into AI systems, typically using something like first order predicate calculus. Recently, the discipline of machine learning has shown that for some limited problems, you can create an algorithm that learns its own knowledge in a formalism that is suited to making predictions: probability theory.
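
As a tiny illustration of the contrast (my own toy example, not taken from any particular system): instead of hand-coding a rule like "messages containing 'free' are spam", a naive Bayes learner estimates word probabilities from labelled examples and combines them with Bayes' theorem:

    # Toy naive Bayes classifier: the "knowledge" is word probabilities
    # estimated from data, not hand-coded rules.
    from collections import Counter

    spam_docs = ["free money now", "free offer click now"]
    ham_docs = ["meeting agenda attached", "lunch tomorrow"]

    def make_word_prob(docs):
        counts = Counter(w for d in docs for w in d.split())
        total, vocab = sum(counts.values()), len(counts)
        # Add-one smoothing so unseen words don't zero out the product.
        return lambda w: (counts[w] + 1.0) / (total + vocab + 1)

    p_w_spam, p_w_ham = make_word_prob(spam_docs), make_word_prob(ham_docs)

    def p_spam(text):
        spam, ham = 0.5, 0.5  # equal prior probabilities
        for w in text.split():
            spam *= p_w_spam(w)
            ham *= p_w_ham(w)
        return spam / (spam + ham)

    print(p_spam("free money"))        # high: learned from data
    print(p_spam("meeting tomorrow"))  # low
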
Why didn't the attempts to hand-code knowledge work? There are several reasons. The most prominent is that humans have an ability to introspect, but that ability is severely limited. Imagine a large cake of cognitive algorithms that we all have but are unaware of, and a thin layer of icing consisting of pieces of knowledge and algorithms that we are aware of; this seems to be an accurate picture empirically, and plausible evolutionarily. The first attempts to create artificial intelligence were bound to fail, because the obvious thing to do is to introspect on your own cognition (and, of course, you can only see the icing) and then code that into a computer. The result is rather like the cargo cultists who

"attemped to get cargo to fall by parachute or land in planes or ships again, by imitating the same practices they had seen the soldiers, sailors, and airmen use. They carved headphones from wood and wore them while sitting in fabricated control towers. They waved the landing signals while standing on the runways. They lit signal fires and torches to light up runways and lighthouses. "

In fact, it is worse than that. Computers themselves were invented by men and women who were trying to formalize the notion of "computation" - a notion derived from the icing that they could see in their own heads, and from activities such as adding up a list of numbers for accountancy purposes, following definite algorithms, or evaluating deterministic computations, which were themselves derived from what humans could easily introspect upon and what customs humans found socially useful to implement. My hypothesis is that humans found (and still find) it socially useful to have definite, digital laws and procedures, even though very few aspects of the world humans used to inhabit were digital. Consider the custom of paying an exact amount of money for a given product (try making an agreement with your local shop to buy 99p chocolate bars for 99 + Gaussian(0,10) pence), or laws with crisp boundaries (having sex with a person who is 5844 days old is fine, but if s/he is 5843 days old, then you are a criminal). Why do we find such crisp, digital boundaries useful? Because they make it easier to catch people who cheat, and to impose the punishments and rewards that solve the social co-ordination problems that humans are beset with. This must have influenced our notion of computation.
Those computing pioneers probably invented a notion of computation that was biased in favour of the icing (their own introspection and the social customs that they were immersed in), and missed the cake. Vikash Kumar Mansinghka at MIT has just published a thesis on Natively Probabilistic Computation that might get closer to the "cake" - the invisible mainstay of human cognition:
Probabilistic algorithms and state machines work by massively parallel stochastic walks, rather than carefully coordinated sequences of deterministic steps. We expect them to eventually produce desired outputs in reasonable proportions, rather than perform any given step precisely. This may help us model biological, neural, psychological and social systems, which robustly exhibit reasonable behavior under a wide range of conditions but rarely - if ever - can be made to repeat themselves perfectly.

Our machines will begin to sanity check implausible inputs and sample plausible alternatives rather than blindly follow our instructions. Our interactions will someday be taken as noisy evidence, interpreted with respect to probabilistic programs that model our intent, rather than taken as definite inputs to some deterministic function. This epistemological flexibility, arising from the wiggle room afforded by probability, could potentially allow us to one day build a probabilistic computer that is not well described by the phrase “garbage in, garbage out”.

Probabilistic computation may also provide clues for understanding neural computation and cognitive architecture. We can let go of our focus on calculating probabilities and optimal actions, instead favoring systems that sample good guesses. For example, neural systems may appear noisy because they are trying to solve problems of inference and decision making under uncertainty by sampling. The variability might not be Gaussian error around some linearized set-point, but rather the natural dynamics of a distributed circuit that is robustly hallucinating world states in accordance with a generative probabilistic model and the evidence of the senses.


Mansinghka proposes re-building computation from the ground up, with the basic physical components of computers replaced with components that are optimized for massively parallel stochastic simulation at low accuracy rather than deterministic sequential operations at very high accuracy.
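
In that spirit, here is a toy "guess by sampling" program (my own sketch, not code from the thesis): given noisy evidence about a coin, it samples plausible values of the coin's bias rather than computing one deterministic answer.

    # Rejection sampling: keep only the random "worlds" that reproduce
    # the observed evidence, then report the plausible coin biases.
    import random

    observed = [1, 1, 1, 0, 1, 1, 0, 1]  # noisy evidence: mostly heads

    kept = []
    for _ in range(100000):
        bias = random.random()  # prior: bias uniform on [0, 1]
        flips = [1 if random.random() < bias else 0 for _ in observed]
        if flips == observed:   # keep only worlds matching the evidence
            kept.append(bias)

    print("kept %d samples, mean plausible bias = %.2f"
          % (len(kept), sum(kept) / len(kept)))

The mean comes out near 0.7, which matches the exact Bayesian answer for a uniform prior; the point is that the program never calculates it directly - it just keeps sampling good guesses.
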
Researchers like Shane Legg and Marcus Hutter have proposed the machine learning paradigm as a foundation for general intelligence, and in the final chapter of my thesis I argue that finding structure in data overcomes many problems that the formal logic/hand-coding paradigm is beset with.

At the Singularity Summit in NYC

I am in NYC for the Singularity Summit, which starts tomorrow morning. This will be the first Summit I have attended; there are some interesting talks on. Here is the program for tomorrow:

9:00 am - Introduction - Michael Vassar, Singularity Institute
9:05 am - Shaping the Intelligence Explosion - Anna Salamon, Singularity Institute
9:35 am - Technical Roadmap for Whole Brain Emulation - Anders Sandberg, Future of Humanity Institute
10:00 am - The time is now: As a species and as individuals we need whole brain emulation - Randal Koene, Fatronik-Tecnalia Foundation
10:25 am - Technological Convergence Leading to Artificial General Intelligence - Itamar Arel, University of Tennessee
11:10 am - Pathways to Beneficial Artificial General Intelligence: Virtual Pets, Robot Children, Artificial Bioscientists, and Beyond - Ben Goertzel, Novamente
11:35 am - Neural Substrates of Consciousness and the 'Conscious Pilot' Model - Stuart Hameroff, University of Arizona
11:55 am - Quantum Computing: What It Is, What It Is Not, What We Have Yet to Learn - Michael Nielsen
12:35 pm - DNA: Not Merely the Secret of Life - Ned Seeman, New York University
1:00 pm - Lunch
2:20 pm - Compression Progress: The Algorithmic Principle Behind Curiosity, Creativity, Art, Science, Music, Humor - Juergen Schmidhuber, IDSIA
3:00 pm - Conversation on the Singularity - Stephen Wolfram and Gregory Benford
3:30 pm - Simulation and the Singularity - David Chalmers, Australian National University
4:15 pm - Choice Machines, Causality, and Cooperation - Gary Drescher
4:45 pm - Coffee break
5:05 pm - Synthetic Neurobiology: Optically Engineering the Brain to Augment Its Function - Ed Boyden, MIT Media Lab
5:30 pm - Foundations of Intelligent Agents - Marcus Hutter, Australian National University
5:55 pm - Cognitive Ability: Past and Future Enhancements and Implications - William Dickens, Northeastern University
6:30 pm - The Ubiquity and Predictability of the Exponential Growth of Information Technology - Ray Kurzweil, Kurzweil Technologies

I hope to write summaries of these talks tomorrow, assuming some kind person lends me a laptop...

Surprisingly good solutions, falling in love and life in a materialistic universe

We live in a universe that conforms to what one might call reductionistic analysis: the behavior of the macroscale objects that we see around us (objects like people) can be explained by regarding those objects as conglomerates of simple building blocks called elementary particles which obey the same physical laws whether they are part of a machine gun or part of a dopamine receptor in your brain.

There are some highly counterintuitive predictions that this fact about our world makes: the physical laws that govern interactions between these elementary particles must give rise to parts of our lives that we don't usually associate with "physics" - such as feeling tired and defeated on a Saturday morning, being in pain, or falling - and staying - in love. Furthermore, if those parts of our lives - parts that we care about deeply - are really just special cases of physics, then there is nothing stopping us from making them do more of what we want, and less of what we don't want. If our deepest feelings are ultimately the result of the interactions of elementary particles, and we find that some other people around us sometimes seem to fare better in those most intimate aspects of life, then we have an opportunity: self-modification to make oneself more the person one wants to be.

The particular stimulus that jolted me into writing a blog post is this article (H/T David Pearce, Kaj Sotala), which shows that there is a lot of natural variation in one of the most important aspects of our lives - being in love with our partner:
Suzanne Bernstein said she and her husband, Sidney, eat side-by-side when they go out, always walk hand-in-hand, and begin and end each day with "I love you." The couple from Weehawken, N.J., have been married 18 years and Suzanne said the relationship is as passionate as when they first met.


Now research exists to support her claim.

Stony Brook University researchers looked at the brains of Bernstein and 16 other people who had been married an average of 20 years and claimed to be still intensely in love. They found that their MRIs showed activity in the same regions of the brain as those who had just fallen in love. "It's always been assumed that passionate love inevitably declines over time," said Arthur Aron, a social psychologist at Stony Brook University and one of four authors of the study, presented in November at the Society for Neuroscience annual meeting in Washington, D.C.

"But in survey after survey we always have these people who have been together a long time and say they are intensely in love. It was always chalked up to self-deception or trying to make a good impression," he said. In fact, she said, the study found an advantage to the longer-term relationships she studied: The brains of those people showed less anxiety and obsessiveness.

Aron had conducted an earlier MRI study published in 2005 among 17 people who had recently fallen in love. He found that regions of the brain associated generally with reward and motivation -- the same regions that light up when cocaine is taken -- activated when the subjects were shown pictures of their beloved. These regions, Aron said, are not the same as those associated with sexual arousal.

If there's that much natural variation, one wonders what could be done with deliberate interventions. This is the real allure of the humanity+ endeavour: forget space elevators and Jupiter brains. Think about the fact that falling in love is a physical feat involving both your and your partner's ability to secrete certain hormones, and that neither of you is the best in the world at it, just as neither of you is the best in the world at other physical feats, like running the 100m sprint in world-record time.

In the limit of extremely high technology, and extreme wisdom to steer that technology to good ends, we end up with the so-called surprisingly good solutions: states of existence so good that when we experience them, we will be shocked that it can get this good, and outraged that we didn't get there sooner.

For the moment, the article offers the following advice for people who are interested in improving the quality of their relationships with the best technology we have today - self-help:

Keeping the Fires Burning - research has found that passionate, long-lasting relationships generally have several things in common:

  • The couple is not facing terrible "external stressors," such as war or the loss of a child.
  • One partner is not highly depressed or anxious.
  • Both know how to communicate with each other.
  • The couple does new, challenging things together.
  • When one partner is successful, the other celebrates the success.