We've come a long way as a whole over the past few centuries. Take a time machine back to 1750 and life would be very different indeed. There was no electricity, communicating with someone over a long distance was virtually impossible, and there were no gas stations or supermarkets anywhere. Bring someone from that era to today's world and they would almost certainly have some form of breakdown. How would they cope seeing capsules with wheels whizzing around the roads, electrical devices everywhere they look, or people talking to someone on the other side of the world in real time? These are all simple things that we take for granted, but someone from a few centuries ago would probably think it was all witchcraft, and could possibly even die from the shock.
But then imagine that person went back to 1750 and became jealous that we got to see their reaction of awe and amazement. They might want to recreate that feeling in someone else. So, what would they do? They would take the time machine back to, say, 1500 and bring someone from that era to their own. The jump from 1500 to 1750 would certainly be noticeable, but it would be nothing as extreme as the difference between 1750 and today. The 1500 person would almost certainly be shocked by a few things, but it's highly unlikely they would die. So, in order for the 1750 person to see the same kind of reaction that we saw, they would need to travel back much, much farther, to say 24,000 BC.
For someone to actually die from the shock of being transported into the future, they'd need to go far enough ahead that a Die Progress Unit (DPU) is achieved. In hunter-gatherer times, a DPU took over 100,000 years; thanks to the Agricultural Revolution, it took only around 12,000 years during that period. Nowadays, because of the rate of advancement following the Industrial Revolution, a DPU would happen after being transported just a couple hundred years forward. Futurist Ray Kurzweil calls this pattern of human progress moving quicker as time goes on the Law of Accelerating Returns, and it is all down to technology.
This theory works on smaller scales too. Cast your mind back to that great 1985 movie, Back to the Future. In the movie, the past era they went back to was 1955, which of course had various differences. But if we were to remake the same movie today, using 1985 as the past era, the differences would be more dramatic. Again, this all comes down to the Law of Accelerating Returns. Between 1985 and 2015, the average rate of advancement was much higher than between 1955 and 1985. Kurzweil suggests that by 2000 the rate of progress was five times faster than the average rate during the 20th century. He also suggests that between 2000 and 2014 another 20th century's worth of progress happened, and that by 2021 yet another will happen, taking just seven years. This means that, keeping with the same pattern, in a couple of decades a 20th century's worth of progress will happen multiple times in one year, and eventually in one month.
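Kurzweil's figures above imply a simple pattern: each "20th century's worth" of progress takes half as long as the one before (14 years from 2000 to 2014, then 7 years to 2021). A toy Python calculation, assuming that halving continues indefinitely, shows how quickly the intervals shrink:

```python
def progress_milestones(start_year=2000, first_span=14, n=6):
    """Years at which successive "20th century's worth" milestones complete,
    assuming each one takes half as long as the one before."""
    years, year, span = [], float(start_year), float(first_span)
    for _ in range(n):
        year += span
        years.append(round(year, 2))
        span /= 2  # the assumed halving of the time needed
    return years

print(progress_milestones())
# → [2014.0, 2021.0, 2024.5, 2026.25, 2027.12, 2027.56]
```

The milestones pile up against a limit: under this assumption, an entire century's worth of progress eventually fits into months.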
If Kurzweil is right, then by the time 2030 gets here we may all be blown away by the technology around us, and by 2050 we may not recognize anything at all. But many people are skeptical of this, for three main reasons:
1. Our own experiences make us stubborn about the future. Our imagination takes our experiences and uses them to predict future outcomes. The problem is that we're limited in what we know, and when we hear a prediction that goes against what we've been led to believe, we often have trouble accepting it as the truth. For example, if someone were to tell you that you'd live to be 200, you would think that was ridiculous because of what you've been taught. But at the end of the day, there has to be a first time for everything, and no one knew airplanes would fly until someone gave it a go one day.
2. We think in straight lines when we think about history. When trying to project what will happen in the next 30 years, we tend to look back at the past 30 years and use that as a guideline for what's to come. But in doing that, we aren't considering the Law of Accelerating Returns. Instead of thinking linearly, we need to be thinking exponentially. To predict anything about the future, we need to picture things advancing at a much faster rate than they are today.
3. The trajectory of recent history tells a distorted story. Exponential growth isn't smooth; progress happens in S-curves. An S-curve is created when the wave of progress from a new paradigm sweeps the world, and it happens in three phases: slow growth, rapid growth, and a leveling off as the paradigm matures. If you view only a small section of an S-curve, you'll get a distorted picture of how fast things are progressing.
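The S-curve described here is the familiar logistic function. A minimal Python sketch (the midpoint and steepness values are arbitrary, chosen only for illustration) shows the three phases, and why sampling a narrow window misleads:

```python
import math

def s_curve(t, midpoint=0.0, steepness=1.0):
    """Logistic function: slow growth, then rapid growth, then leveling off."""
    return 1.0 / (1.0 + math.exp(-steepness * (t - midpoint)))

# Early on, progress looks negligible; near the midpoint it looks explosive;
# late on it looks stalled. Each narrow view distorts the whole picture.
for t in (-6, -2, 0, 2, 6):
    print(f"t={t:+d}  progress={s_curve(t):.3f}")
```

Someone observing only the flat early segment would extrapolate stagnation, while someone observing only the middle would extrapolate runaway growth; both are wrong about the full curve.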
What do we mean by AI?
Artificial intelligence (AI) is big right now; bigger than it has ever been. But there are still many people out there who are confused by the term, for various reasons. One is that in the past we've associated AI with movies like Star Wars, Terminator, and even The Jetsons. Because these are all fiction, AI can still seem like a sci-fi concept. Also, AI is such a broad topic, ranging from self-driving cars to your phone's calculator, that getting to grips with all it entails is not easy. Another reason it's confusing is that we often don't even realize when we're using AI.
So, to try and clear things up and give yourself a better idea of what AI is, first stop thinking about robots. Robots are simply shells that can house AI. Second, consider the term singularity. Vernor Vinge wrote an essay in 1993 in which this term was applied to the moment in the future when the intelligence of our technology exceeds our own. However, that idea was later muddied by Kurzweil, who defined the singularity as the time when the Law of Accelerating Returns gets so fast that we'll find ourselves living in a whole new world.
To narrow AI down a bit, try to think of it as being separated into three major categories:
1. Artificial Narrow Intelligence (ANI): Sometimes referred to as Weak AI, this is a type of AI that specializes in one particular area. An example of ANI is a chess-playing AI. It may be great at winning chess, but that is literally all it can do.
2. Artificial General Intelligence (AGI): Often known as Strong AI or Human-Level AI, AGI refers to a computer that has the intelligence of a human across the board and is much harder to create than ANI.
3. Artificial Superintelligence (ASI): ASI ranges from a computer that's just a little smarter than a human to one that's billions of times smarter in every way. This is the type of AI that is most feared and is often associated with words like immortality and extinction.
Right now, we're progressing steadily through the AI revolution and currently live in a world of ANI. Cars are full of ANI systems, ranging from the computer that tells the car when the ABS should kick in to the various self-driving cars that are about. Phones are another product bursting with ANI: whenever you're receiving music recommendations from Pandora, using your map app to navigate, or doing various other activities, you're utilizing ANI. An email spam filter is another form of ANI, because it learns what's spam and what's not. Google Translate and voice recognition systems are also examples of ANI. And some of the world's best Checkers and Chess players are ANI systems too.
So, as you can see, ANI systems are all around us already, but luckily these types of systems don't have the capability to pose any real threat to humanity. Still, each new ANI system that is created is another step towards AGI and ASI. However, trying to create a computer that is at least as intelligent as ourselves, if not more so, is no easy feat, and the hard parts are probably not what you were imagining. Building a computer that can calculate sums quickly is simple, but building a computer that can tell the difference between a cat and a dog is much harder. As computer scientist Donald Knuth put it, "AI has by now succeeded in doing essentially everything that requires 'thinking' but has failed to do most of what people and animals do without thinking."
The next step in making AGI a possibility, and in competing with the human brain, is to increase the power of computer hardware. One way to express this capacity is in the calculations per second (cps) the brain can handle. Kurzweil created a shortcut for estimating this: take an estimate for the cps of one brain structure and its weight, compare it to that of the whole brain, and then multiply proportionally until an estimate for the total is reached. After carrying out this calculation with several different structures, Kurzweil always got roughly the same answer: around 10^16, or 10 quadrillion cps.
The world's fastest supercomputer is currently China's Tianhe-2, which has clocked in at around 34 quadrillion cps. But that's hardly a surprise when it uses 24 megawatts of power, takes up 720 square meters of space, and cost $390 million to build. If we could deliver 10 quadrillion cps (the human level) in a far more workable package, AGI could become a part of everyday life. Currently, the world's $1,000 computers are at about a thousandth of the human level, and while that may not sound like much, it's actually a huge leap forward: in 1985 we were at only about a trillionth of the human level. If we keep progressing in the same manner, then by 2025 we should have an affordable computer that can rival the power of the human brain. Then it's just a case of merging all that power with human-level intelligence.
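Those two data points (a trillionth of human level in 1985, a thousandth "currently") are enough to extrapolate the trend. A rough sketch in Python, assuming "currently" means 2015 and smooth exponential growth; the specific years are my assumption, the cps figures are the article's:

```python
import math

HUMAN_CPS = 1e16                # Kurzweil's ~10 quadrillion cps estimate
cps_1985 = HUMAN_CPS / 1e12     # a trillionth of human level in 1985
cps_2015 = HUMAN_CPS / 1e3      # a thousandth of human level "currently"

# Average yearly growth factor implied by those two data points
rate = (cps_2015 / cps_1985) ** (1 / (2015 - 1985))

# Years from 2015 until the trend line crosses human level
t = math.log(HUMAN_CPS / cps_2015) / math.log(rate)
print(f"growth ~{rate:.2f}x per year; human level around {2015 + t:.0f}")
```

The implied growth factor is roughly 2x per year, and the trend line crosses human level around 2025, which is consistent with the projection in the text.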
However, that's much easier said than done. No one really knows how to make computers smart, but here are the most popular strategies we've come across so far:
1. Make everything the computer's problem. This is usually a scientist's last resort and involves building a computer whose main skills would be carrying out research on AI and coding changes into itself.
2. Plagiarize the brain. It makes sense to copy the best of what's already available, and currently scientists are working hard to uncover all we can about the mighty organ. As soon as we know how the human brain runs so efficiently, we can begin to replicate it in the form of AI. Artificial neural networks do this already by mimicking the brain's structure, but there is still a long way to go before they are anywhere near as sophisticated or effective. A more extreme example of plagiarism involves what's known as whole brain emulation. Here the aim is to slice a brain into thin layers, scan each one, build an accurate 3D model, and then implement that model on a computer. We'd then have a working computer with a brain as capable as our own.
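To make the neural-network idea concrete, here is about the smallest possible example: a single artificial neuron trained with the classic perceptron rule to compute the logical AND of two inputs. This is a toy illustration of brain-inspired learning, not a claim about how real research systems are built:

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

# One "neuron": two weights and a bias, like a vastly simplified brain cell.
w = [random.uniform(-1, 1) for _ in range(2)]
b = 0.0

def predict(x):
    """Fire (output 1) if the weighted sum of inputs crosses the threshold."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Training data for logical AND: output 1 only when both inputs are 1.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

for _ in range(20):                 # perceptron learning rule
    for x, target in data:
        err = target - predict(x)   # nudge weights toward the right answer
        w[0] += 0.1 * err * x[0]
        w[1] += 0.1 * err * x[1]
        b += 0.1 * err

print([predict(x) for x, _ in data])  # → [0, 0, 0, 1]
```

Real artificial neural networks stack millions of such units in layers, but the core loop (predict, measure error, adjust connection strengths) is the same.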
3. Try to make history and evolution repeat itself in our favor. If the human brain is too hard to copy directly, we could instead try to mimic its evolution, using a method called genetic algorithms. These work through a performance-and-evaluation process that happens over and over: when a computer completes a task successfully, it is bred with another just as capable, in an attempt to merge them and create a better computer. This natural-selection process is repeated many times until we finally get the result we want. The downside is that this process could take billions of years.
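A minimal genetic algorithm can be sketched in a few lines. The "task" here is artificial (evolve an 8-bit string toward all ones), and the population size, mutation rate, and generation count are arbitrary choices for illustration, but the evaluate-select-breed loop is the same shape the text describes:

```python
import random

random.seed(1)  # fixed seed so the run is repeatable

def fitness(genome):
    """Performance evaluation: count the ones in the bit string."""
    return sum(genome)

def breed(a, b):
    """Merge two capable parents via crossover, with occasional mutation."""
    cut = random.randrange(1, len(a))
    child = a[:cut] + b[cut:]
    if random.random() < 0.1:
        i = random.randrange(len(child))
        child[i] ^= 1  # flip one bit
    return child

# Start with 20 random 8-bit genomes, then repeat selection and breeding.
pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]
for generation in range(50):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]  # natural selection: the fitter half breeds
    pop = survivors + [breed(random.choice(survivors), random.choice(survivors))
                       for _ in range(10)]

print(max(fitness(g) for g in pop))
```

Evolving a toy bit string takes milliseconds; the article's point is that evolving something as complex as a brain this way could take geological timescales.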
Various advancements in technology are happening so quickly that AGI could be here before we know it, for two main reasons:
1. Exponential growth is very intense and so much can happen in such a short space of time.
2. Even minute software changes can make a big difference. Just one tweak could have the potential to make a system 1,000 times more effective.
Once AGI has been achieved and people are happy living alongside human-level AGI, we'll then move on to ASI. But just to clarify: even though an AGI would (theoretically) have the same level of intelligence as a human, it would still have several advantages over us, including:
Speed: Today's microprocessors can run at speeds 10 million times faster than our own neurons, and they can also communicate optically at the speed of light.
Size and storage: Unlike our brains, computers can expand to any size, allowing for a larger working memory and long-term memory that will outperform us any day.
Reliability and durability: Computer transistors are far more accurate than biological neurons and are easily repaired too.
Editability: Computer software can be easily tweaked to allow for updates and fixes.
Collective capability: Humans are great at building a huge store of collective intelligence, which is one of the main reasons we've survived so long as a species and are so far advanced. A computer designed to essentially mimic the human brain would be even better at this, as it could regularly sync with its peers, so that anything one computer learned could be instantly uploaded to the whole network.
Most current models for reaching AGI focus on AI achieving these goals via self-improvement. Once a system can self-improve, the next concept to consider is recursive self-improvement. This is where something has already self-improved and is therefore considerably smarter than it was originally. Improving itself further is now easier, because it is smarter and has less left to learn, so it takes bigger leaps. Soon the AGI's intelligence will exceed that of a human, and that's when you get a superintelligent ASI system. This process is called an intelligence explosion and is a prime example of the Law of Accelerating Returns. How soon we will reach this level is still very much up for debate.
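The runaway feedback loop can be caricatured in a few lines. In this toy model, every number is invented purely for illustration: each improvement cycle multiplies the system's capability, so smarter versions take bigger absolute leaps and the (arbitrary) human baseline is crossed surprisingly quickly:

```python
def recursive_self_improvement(capability=1.0, gain=0.5, human_level=100.0):
    """Count improvement cycles until the toy 'capability' score
    exceeds an arbitrary human baseline."""
    steps = 0
    while capability < human_level:
        capability *= 1 + gain  # each cycle's leap scales with current ability
        steps += 1
    return steps, capability

steps, final = recursive_self_improvement()
print(f"crossed the human baseline after {steps} cycles")  # → 12 cycles
```

Notice that the early cycles barely move the score while the last few cover most of the distance, which is the intelligence-explosion shape the paragraph describes.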