Artificial Intelligence is probably one of the most complex and astounding creations of humanity yet. And that is disregarding the fact that the field remains largely unexplored, which means that every amazing AI application we see today represents merely the tip of the AI iceberg. While this may have been stated and restated numerous times, it is still hard to gain a comprehensive perspective on the potential impact of AI in the future, because AI is already having a revolutionary impact on society even at this relatively early stage in its evolution.
AI's rapid growth and powerful capabilities have made people paranoid about the inevitability and proximity of an AI takeover. The transformation AI has brought about in different industries has also led business leaders and the mainstream public to believe that we are close to reaching the peak of AI research and maxing out AI's potential. However, understanding the types of AI that are possible, and the types that exist now, gives a clearer picture of existing AI capabilities and the long road ahead for AI research.
Since AI research aims to make machines emulate human-like functioning, the degree to which an AI system can replicate human capabilities is used as the criterion for determining the types of AI. Thus, depending on how a machine compares to humans in versatility and performance, it can be classified under one of multiple types of AI. Under such a system, an AI that can perform more human-like functions with equivalent proficiency is considered a more evolved type of AI, while an AI with limited functionality and performance is considered a simpler, less evolved type.
Based on this criterion, AI is generally classified in two ways. The first classifies AI and AI-enabled machines by their likeness to the human mind and their ability to think, and perhaps even feel, like humans. According to this system of classification, there are four types of AI or AI-based systems: reactive machines, limited memory machines, theory of mind AI, and self-aware AI.
Reactive machines are the oldest form of AI system and have extremely limited capability. They emulate the human mind's ability to respond to different kinds of stimuli, but they have no memory-based functionality: they cannot use previously gained experience to inform their present actions, i.e., they do not have the ability to learn. Such machines can only respond automatically to a limited set or combination of inputs, and cannot rely on memory to improve their operations. A popular example of a reactive AI machine is IBM's Deep Blue, the machine that beat chess Grandmaster Garry Kasparov in 1997.
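The defining trait of a reactive machine, a fixed mapping from the current input to an output with no state carried between interactions, can be sketched in a few lines. The rules below are hypothetical examples invented for illustration, not taken from any real system:

```python
# A minimal sketch of a "reactive" system: a fixed stimulus-to-response
# mapping with no memory and no learning. (Hypothetical rules.)
RULES = {
    "obstacle_ahead": "turn_left",
    "path_clear": "move_forward",
    "low_battery": "return_to_base",
}

def react(stimulus: str) -> str:
    """Respond to the current input only; past inputs are never consulted."""
    return RULES.get(stimulus, "do_nothing")

print(react("obstacle_ahead"))  # same input always yields the same output
print(react("obstacle_ahead"))  # no internal state ever changes
```

However sophisticated the rule table (Deep Blue's evaluation of board positions was vastly more elaborate), the behavior never improves with experience, which is exactly what distinguishes this type from the next.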
Limited memory machines are machines that, in addition to having the capabilities of purely reactive machines, are capable of learning from historical data to make decisions. Nearly all existing AI applications that we know of come under this category. All present-day AI systems, such as those using deep learning, are trained on large volumes of training data that they store in memory to form a reference model for solving future problems. For instance, an image recognition AI is trained on thousands of pictures and their labels to teach it to name the objects it scans. When such an AI scans an image, it uses its training images as references to understand the contents of the new image, and based on this learning experience it labels new images with increasing accuracy.
Almost all present-day AI applications, from chatbots and virtual assistants to self-driving vehicles, are driven by limited memory AI.
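The idea of using stored training examples as references for new inputs can be illustrated with a toy nearest-neighbor classifier, one of the simplest learning schemes (real image recognizers use deep neural networks, but the principle of consulting stored experience is the same). The feature vectors and labels here are invented for illustration, not a real dataset:

```python
# A toy sketch of "limited memory" learning: labeled examples are stored,
# and a new input is classified by comparing it against that stored
# experience. (Invented two-feature examples; not a real dataset.)
import math

training_data = [
    ((0.9, 0.1), "cat"),
    ((0.8, 0.2), "cat"),
    ((0.1, 0.9), "dog"),
    ((0.2, 0.8), "dog"),
]

def classify(features):
    """Label a new input by its nearest stored training example (1-NN)."""
    nearest = min(training_data,
                  key=lambda item: math.dist(features, item[0]))
    return nearest[1]

print(classify((0.85, 0.15)))  # close to the stored "cat" examples
print(classify((0.15, 0.85)))  # close to the stored "dog" examples
```

Adding more labeled examples to `training_data` improves the classifier, which is precisely the memory-based improvement that reactive machines lack.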
While the previous two types of AI have been, and are, found in abundance, the next two exist, for now, either as a concept or a work in progress. Theory of mind AI is the next level of AI system, which researchers are currently engaged in developing. A theory of mind AI will be able to better understand the entities it interacts with by discerning their needs, emotions, beliefs, and thought processes. While artificial emotional intelligence is already a budding industry and an area of interest for leading AI researchers, achieving theory of mind AI will require development in other branches of AI as well. This is because, to truly understand human needs, AI machines will have to perceive humans as individuals whose minds are shaped by multiple factors, essentially understanding humans.
Self-aware AI is the final stage of AI development, and it currently exists only hypothetically: an AI that has evolved to be so akin to the human brain that it has developed self-awareness. Creating this type of AI, which is decades, if not centuries, away from materializing, is and will always be the ultimate objective of all AI research. This type of AI will not only be able to understand and evoke emotions in those it interacts with, but will also have emotions, needs, beliefs, and potentially desires of its own. And this is the type of AI that doomsayers of the technology are wary of. Although the development of self-aware AI could boost our progress as a civilization by leaps and bounds, it could also lead to catastrophe. Once self-aware, the AI would be capable of forming ideas such as self-preservation, which may directly or indirectly spell the end for humanity, as such an entity could outmaneuver the intellect of any human being and plot elaborate schemes to take over.
The alternative system of classification, more generally used in tech parlance, divides the technology into Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Superintelligence (ASI).
Artificial narrow intelligence represents all existing AI, including the most complicated and capable AI ever created to date. It refers to AI systems that can only perform a specific task autonomously using human-like capabilities. These machines can do nothing more than what they are programmed to do, and thus have a very narrow range of competencies. In the system of classification described above, ANI corresponds to all reactive and limited memory AI; even the most complex AI that uses machine learning and deep learning to teach itself falls under ANI.
Artificial general intelligence is the ability of an AI agent to learn, perceive, understand, and function completely like a human being. Such systems will be able to independently build multiple competencies and form connections and generalizations across domains, massively cutting down the time needed for training. This will make AI systems just as capable as humans by replicating our multi-functional capabilities.
The development of artificial superintelligence will probably mark the pinnacle of AI research, as ASI will become by far the most capable form of intelligence on Earth. In addition to replicating the multi-faceted intelligence of human beings, ASI will be exceedingly better at everything it does because of overwhelmingly greater memory, faster data processing and analysis, and superior decision-making capabilities. The development of AGI and ASI will lead to a scenario most popularly referred to as the singularity. And while the potential of having such powerful machines at our disposal seems appealing, these machines may also threaten our existence, or at the very least, our way of life.
At this point, it is hard to picture the state of our world when more advanced types of AI come into being. However, it is clear that there is a long way to get there, as the current state of AI development is still rudimentary compared to where it is projected to go. For those holding a negative outlook on the future of AI, this means that it is a little too soon to be worrying about the singularity, and there is still time to ensure AI safety. And for those who are optimistic about the future of AI, the fact that we've merely scratched the surface of AI development makes the future even more exciting.
See the article here:
7 Types Of Artificial Intelligence - Forbes