Everything you need to know about superintelligence – Toolbox

Posted: September 18, 2019 at 4:24 pm

Many science fiction novels have theorized an omniscient and omnipresent artificial intelligence that towers above human intelligence. What many dont know is that this concept does have a place in the field of artificial intelligence, albeit merely as a theory.

First theorized by Oxford philosopher Nick Bostrom, artificial superintelligence is a theory in the field of artificial intelligence. This futuristic AI can perform beyond the limits of the human mind, even geniuses. While it is still a theory, the methods of achieving superintelligence are also widely debated.

Most of these methods involve creating a human-like artificial intelligence known as artificial general intelligence. The fallout from creating such an artificial intelligence is also the subject of discussion among the AI community. Overall, the concept has sparked endless conversations among scientists and philosophers alike.

In this article, we will try to understand superintelligence, how it could come about in the future and the risks associated with it. We will also take a look at how regulation today will offset some of the potential negative outcomes of artificial superintelligence. Lets delve deeper into superintelligence.

Table of Contents

What Is Superintelligence?

The Road to Superintelligence

Risks of Superintelligence

Regulation and Ethics in AI Today

Closing Thoughts for Techies

According to Nick Bostrom, superintelligence is any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest. This means that superintelligence is smarter than visionaries in any field and surpasses genius-level human beings at conducting a task.

Bostrom says that superintelligence will eclipse human intelligence in areas, such as scientific creativity, social skills, and general wisdom. This could mean that superintelligence can employ various cognitive processes that make human intelligence what it is, albeit at much higher speed and efficiency.

Hence, superintelligence can be defined as artificial intelligence that performs human-like cognitive processes at exponentially higher speeds and efficiencies when compared to the human mind. The AI programs we have today fall into the category of narrow artificial intelligence, as their domain of authority is constrained to a well-defined problem.

This means that any AI program created today is created for a specific purpose, usually a problem to be solved. Certain AI programs can even learn by ingesting new data relevant to the problem and are termed machine learning algorithms. Going beyond this, the most recent field in the AI space is deep learning.

Deep learning mainly uses neural networks, which are a simulation of the structure of the human brain. These neural networks are able to process data more efficiently than previous AI algorithms, and can even learn from their results to iterate and improve upon themselves. Currently, this is the apex of how AI programs are created, and usually provides better results than their machine learning counterparts when faced with similar problems.

Even as neural networks are more advanced today, they still fall under the category of narrow AI. They are programmed to solve a complex problem and will only know about the problem at hand. Moreover, they cannot expand upon this learning to effectively tackle similar problems without additional training.

The next theoretical step forward would be to create a general artificial intelligence. More commonly referred to as strong AI, general AI will be capable of solving many problems with human-like accuracy. We are yet to create a true general artificial intelligence, as the requirements for it are beyond the scope of current technology.

General AI will be able to replicate human intelligence in all cognitive aspects. This includes concepts, such as abstract thinking, comprehension of complex ideas, reasoning, and general problem-solving. The road to artificial superintelligence requires the creation of such an AI.

Learn More: What Is the Difference Between Artificial Intelligence, Machine Learning, and Deep Learning?

While it is easy to replicate certain cognitive processes, such as arithmetic, statistics, calculus, and language translation, it is more difficult to replicate unconscious cognitive processes. Therefore, superintelligence first requires humanity to create a general artificial intelligence.

In other words, we have managed to make computers accurately replicate the cognitive processes that require thinking but have not replicated the tasks humans carry out without thinking. This includes processes, such as vision, perception, language, and understanding complex ideas. Before doing so, we must build the vast computing infrastructure to power the algorithm.

Any computing infrastructure required to power an AGI must be capable of reproducing the power of the human brain. This has been estimated to be around one exaflop or one billion calculations per second. Keeping in mind the current pace of advancement of computing power, this level is expected to be reached around 2025.

Secondly, we must find a way to accurately recreate the human mind in a digital form. This has many pitfalls, with the primary one being the inability of humans to understand how the human mind works. The human mind is thought to be a collection of thousands of biological and behavioral processes, much of which we simply do not understand. The complexity of the workings of the mind can never be easily explained, and thus has never been entirely understood, despite several efforts to explain it using science and psychology.

Superintelligence is predicted to be achieved through a phenomenon known as an intelligence explosion. When a general AI is created, it will begin to not only learn from data but also its own actions. As it is conscious of its own actions, it will continually improve itself in a process known as recursive self-improvement.

This means that the program will begin at a human-level intelligence and use that intelligence to improve itself. This will increase the general threshold of what it can process, and it will use this knowledge to improve itself further. This process will cause an exponential increase in the intelligence of the program, with each iteration being more powerful than the previous one.

Then, the program will undergo an explosion of intelligence, improving itself at a rate that surpasses genius-level intelligence. Eventually, it will surpass the collective intelligence of human civilization and will continue to increase its intelligence, thus becoming a superintelligence.

Learn More: How Artificial Intelligence Has Evolved

As a program that is more intelligent than any human being, even the ones that created it, containing or controlling an artificial superintelligence is a difficult undertaking. By definition, superintelligence will be even smarter than the engineer who created its architecture, allowing to simply break out of the machine.

Due to this, superintelligence is subject to many risks stemming from the way it is programmed. If the program does not have clearly specified goals, it can make a decision that is unfavorable to humanity.

To properly illustrate this, let us take a scenario into consideration.

Sometime in the future, scientists are at their wits end regarding how climate change can be reversed. Therefore, they create a superintelligence to save the world from global warming. This is defined as its only goal, and upon reaching superintelligence, the program begins ingesting data to determine the cause of global warming.

Here, the intention is to save the human race by engineering a solution to global warming. However, in the circumstance that this condition is not explicitly mentioned, two main scenarios will occur. Firstly, the program will ensure its survival by any means possible, as it must finish its assigned task. Secondarily, after looking at the data, the superintelligence may determine that humanity is the biggest reason for global warming and create a plan to wipe humanity off the face of the earth.

Vaguely defined problems are one of the biggest risks that come into play when dealing with superintelligence. This might create a program whose goals are not aligned with humanitys, leading to its extinction. In a situation where the conquering intelligence is significantly higher than humanitys, it is inevitable that misaligned goals will cause the extinction of lesser species.

As mentioned previously, superintelligence will set self-preservation as one of its biggest priorities. This is because the algorithm will be created with the sole purpose of solving a problem. It cannot solve a problem if it does not have the required resources to do so. This will cause it to compete with humanity for resources, resulting in an adverse scenario for humanity.

In addition to this, a self-aware superintelligence will not allow its goals to be changed after deployment. This means that even if a superintelligence is conducting unfavorable activities to reach its goal, its priorities cannot be changed as the program will simply not allow it.

Prominent science fiction authors have introduced concepts, such as Asimovs Laws of Robotics, which are a set of hard-coded laws that the AI is required to follow. While this might fall into the category of defining a problem better, such safety measures are already being set up for the eventual rise of self-aware AI and, by extension, superintelligence.

Learn More: How Ethically Can Artificial Intelligence Think?

As AI becomes more powerful, regulators around the world are looking for ways to reign in the misuse of such algorithms. Some of them include the weaponization of AI and the ethical ramifications of releasing powerful AI to the world. Handling biases that are faced by todays AI also falls under this category.

An example of how the use of AI is being regulated comes from OpenAIs take towards GPT-2 (generative pretrained transformer), a text generator algorithm. The program was trained with 40GB of text data from around the Internet and was able to generate text just like a human.

Going against their philosophy of open-sourcing all of their models, OpenAI decided against releasing the algorithm. This was due to concerns that the program could be used by parties with malicious intent to create fake news at an alarming rate. They called this an experiment in responsible disclosure, setting the standard for the kind of AI algorithms that should be disclosed to the public.

Such systems must be put in place for the responsible use of powerful AI programs. This is even more relevant in the case of superintelligence, as its misuse will not only result in widespread social ramifications but a possible extinction event for humanity.

The weaponization of AI must also be heavily discouraged, as a weaponized superintelligence will inevitably destroy the human race. Moreover, biases and ethical consideration must also be taken into consideration, as future AI may be conscious and self-aware. Forward-thinking regulations must be put in place, handling both todays issues with AI and the future of AI in the form of general AI and superintelligence.

Overall, even though the concept of superintelligence might seem like something that is restricted to science fiction novels, it is a possibility looming on the horizon. With the pace of AI innovation today, the building blocks for an AGI may be created before we are prepared for it.

The advancement of capable and affordable computing power is sure to slow down the general adoption of AGI. Researchers predict that the computing power required to replicate the human brains capacity will be reached around 2025. This, along with the pace of AI innovation today, sets the goal for a true AGI somewhere in the middle of the 21st century.

According to Nick Bostroms predictions, general AI could become the dominant AI by the middle of the 21st century. Although Bostrom is correct in assuming that such a human-level intelligence would be possible, the inability to control that intelligence will likely pose a much bigger threat to humans.

What are your views on superintelligence and the changes it will bring to humanity? Let us know on LinkedIn, Twitter, or Facebook. Wed love to hear from you.

More:

Everything you need to know about superintelligence - Toolbox

Related Posts