The world of Artificial… – The American Bazaar

Sophia. Source: https://www.hansonrobotics.com/press/

Humans are the most advanced form of Artificial Intelligence (AI) we know of, with the added ability to reproduce.

Artificial Intelligence (AI) is no longer a theory; it is part of our everyday life. Services like TikTok, Netflix, YouTube, Uber, Google Home Mini, and Amazon Echo are just a few examples of AI in daily use.

This field of knowledge has always attracted me in curious ways. I am an avid reader of non-fiction on a variety of subjects. I also love movies, not particularly science fiction, but I did enjoy Innerspace, Flubber, RoboCop, Terminator, Avatar, Ex Machina, and Chappie.

When I think of Artificial Intelligence, I see it from a lay perspective. I do not have an IT background. I am a researcher and a communicator, and I consider myself a happy person who loves to learn and solve problems through simple, creative ideas. My thoughts on AI may sound different, but I'm happy to discuss them.

Humans are the most advanced form of AI that we may know to exist. My understanding is that the only thing differentiating humans from Artificial Intelligence is the capability to reproduce. While humans can multiply through the union of male and female and pass on their abilities through tiny cells, machines lack that function. The transfer of cells to a newborn is not so different from the transfer of data to a machine. It is breathtaking how a tiny cell in a human body carries all the necessary information about not only that particular individual but also their ancestry.

Allow me to give an introduction to the recorded history of AI. Before that, I would like to take a moment to share a recent achievement I am proud of. In July, I finished a course in AI at Algebra University in Croatia. I was able to attend this course through a generous initiative and bursary from Humber College (Toronto). Such initiatives help intellectually curious minds like mine keep learning. I would also like to note that the views expressed here are my own understanding and judgment.

What is AI?

AI is a branch of computer science and, like many other fields, it is built on computer programming. What differentiates Artificial Intelligence, however, is its aim: to mimic human behavior. And this is where things become fascinating, as we set out to develop artificial beings.

Origins

I have divided the origins of AI into three phases so that I can explain them better and you do not miss the sequence of events that led to the step-by-step development of AI.

Phase 1

AI is not a recent concept. Scientists were already brainstorming about it and discussing the thinking capabilities of machines even before the term Artificial Intelligence was coined.

I would like to start in 1950 with Alan Turing, the British intellectual who helped bring WWII to an end by decoding German messages. In October 1950, Turing released a paper, "Computing Machinery and Intelligence," that can be considered among the first hints of thinking machines. Turing opens the paper thus: "I propose to consider the question, 'Can machines think?'" Turing's work was also the beginning of Natural Language Processing (NLP); 21st-century mortals can relate it to the invention of Apple's Siri. The A.M. Turing Award is considered the Nobel Prize of computing. The life and death of Turing are unusual in their own way. I will leave it at that, but if you are interested in delving deeper, here is one article by The New York Times.

Five years later, in 1955, John McCarthy, an Assistant Professor of Mathematics at Dartmouth College, and his team proposed a research project in which they used the term Artificial Intelligence for the first time.

McCarthy explained the proposal, saying, "The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." He continued, "An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves."

It started with a few simple logical thoughts that germinated into a whole new branch of computer science over the following decades. AI can also be related to the concept of Associationism, which is traced back to Aristotle around 300 BC. But discussing that in detail is outside the scope of this article.

It was in 1958 that we saw the first model replicating the brain's neuron system. This was the year psychologist Frank Rosenblatt developed a program called the Perceptron. Rosenblatt wrote in his article, "Stories about the creation of machines having human qualities have long been a fascinating province in the realm of science fiction. Yet we are now about to witness the birth of such a machine, a machine capable of perceiving, recognizing, and identifying its surroundings without any human training or control."

A New York Times article published in 1958 introduced the invention to the general public, saying, "The Navy revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence."

My reading of one of Rosenblatt's papers hints that scientists were talking about artificial neurons as early as the 1940s. Notice the References section of Rosenblatt's paper published in 1958: it lists Warren S. McCulloch and Walter H. Pitts's paper of 1943. If you are interested in more details, I would suggest an article published on Medium.
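For readers who like to see ideas in code, here is a minimal sketch of the kind of learning rule Rosenblatt described: a single artificial neuron that nudges its weights whenever it makes a mistake. The toy task (learning logical AND), the learning rate, and the number of passes are my own illustrative choices, not taken from his paper.

```python
# A minimal, illustrative sketch of Rosenblatt-style perceptron learning.
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Learn weights for a single artificial neuron with a step activation."""
    w = np.zeros(X.shape[1])   # one weight per input feature
    b = 0.0                    # bias term
    for _ in range(epochs):
        for xi, target in zip(X, y):
            prediction = 1 if np.dot(w, xi) + b > 0 else 0
            error = target - prediction          # 0 if correct, +/-1 if wrong
            w += lr * error * xi                 # nudge the weights toward the target
            b += lr * error
    return w, b

# Toy task: learn the logical AND of two inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print([1 if np.dot(w, xi) + b > 0 else 0 for xi in X])  # expected: [0, 0, 0, 1]
```

A single neuron like this can only learn simple, linearly separable patterns; that limitation is exactly what the researchers of Phase 2 set out to overcome.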

The first AI conference took place in 1959. However, by this time, the leaders in Artificial Intelligence had already exhausted the computing capabilities of the era. It is, therefore, no surprise that not much was achieved in AI over the next decade.

Thankfully, the IT industry was catching up quickly and preparing the ground for stronger computers. Gordon Moore, the co-founder of Intel, made a few predictions in a 1965 article. Moore predicted a huge growth of integrated circuits, more components per chip, and reduced costs. "Integrated circuits will lead to such wonders as home computers, or at least terminals connected to a central computer, automatic controls for automobiles, and personal portable communications equipment," Moore predicted. Although scientists had been toiling to launch the Internet, it was not until the late 1960s that the effort started showing promise. "On October 29, 1969, ARPAnet delivered its first message: a node-to-node communication from one computer to another," notes History.com.

With the Internet beginning to take shape, computer companies had a reason to accelerate their own development. In 1971, Intel introduced its first microprocessor. It was a huge breakthrough. Intel impressively compared the size and computing abilities of the new hardware, saying, "This revolutionary microprocessor, the size of a little fingernail, delivered the same computing power as the first electronic computer built in 1946, which filled an entire room."

Around the 1970s, more popular programming languages came into use, for instance, C and SQL. I mention these two because I remember that when I did my Diploma in Network-Centered Computing in 2002, advanced versions of these languages were still alive and kicking. Britannica has a list of computer programming languages if you care to read more about when the different languages came into being.

These advancements created a perfect amalgamation of resources to trigger the next phase in AI.

Phase 2

In the late 1970s, we see another AI enthusiast coming onto the scene with several research papers on AI. Geoffrey Hinton, a Canadian researcher, had confidence in Rosenblatt's work on the Perceptron. He addressed an inherent problem with Rosenblatt's model, which was built on a single-layer perceptron. "To be fair to Rosenblatt, he was well aware of the limitations of this approach: he just didn't know how to learn multiple layers of features efficiently," Hinton noted in a 2006 paper.

This multi-layer approach can be referred to as a Deep Neural Network.

Another scientist, Yann LeCun, who studied under Hinton and worked with him, was making strides in AI, especially in Deep Learning (DL, explained later in this article) and Backpropagation Learning (BL). BL can be described as machines learning from their mistakes, or learning by trial and error.
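To make the multi-layer idea concrete, here is a small sketch of a two-layer network trained with backpropagation, written in Python with NumPy. The XOR task is chosen because a single-layer perceptron cannot solve it; the layer sizes, learning rate, and number of iterations are arbitrary choices for illustration, not anyone's published setup.

```python
# A minimal sketch of a two-layer neural network trained with backpropagation
# ("learning from its mistakes"). All numbers here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR labels

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass: each layer is a matrix multiplication plus a nonlinearity.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass (backpropagation): push the error back through the layers
    # and nudge every weight in the direction that reduces the mistake.
    d_out = (output - y) * output * (1 - output)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_hid
    b1 -= 0.5 * d_hid.sum(axis=0)

print(output.round(2).ravel())   # should approach [0, 1, 1, 0] for most random starts
```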

As in Phase 1, the developments of Phase 2 stalled here due to very limited computing power and insufficient data. This was around the late 1990s. As the Internet was still fairly young, there was not much data available to feed the machines.

Phase 3

In the early 21st century, computer processing speeds reached a new level. In 2011, IBM's Watson defeated its human competitors in the game of Jeopardy!, and its performance was quite impressive. On September 30, 2012, Hinton and his team released an object recognition program called AlexNet and tested it on ImageNet. Its success rate was above 75 percent, which no such program had achieved before. This result sent ripples across the industry. By 2018, image recognition programs had become 97 percent accurate! In other words, computers were recognizing objects more accurately than humans.

In 2015, Tesla introduced its Autopilot system, a step toward the self-driving AI car. The company boasts about its Autopilot technology on its website, saying, "All new Tesla cars come standard with advanced hardware capable of providing Autopilot features today, and full self-driving capabilities in the future, through software updates designed to improve functionality over time."

Go enthusiasts will also remember 2016, when Google-owned DeepMind's AlphaGo defeated the human Go world champion Lee Se-dol. This milestone came at least a decade earlier than most experts had predicted. Go is considered one of the most complex games in human history, and a later version of the program, AlphaGo Zero, learned it from scratch in just three days of self-play, surpassing the level that had beaten a world champion who, I would assume, must have spent decades achieving that proficiency!

The next phase will be to work toward Singularity. Singularity can be understood as machines building better machines, all by themselves. In 1993, scientist Vernor Vinge published an essay in which he wrote, "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." Scientists are already working on the concept of technological singularity. If these achievements can be used in a controlled way, they could help several industries, for instance, healthcare, automobiles, and oil exploration.

I would also like to add that Canadian universities are contributing significantly to developments in Artificial Intelligence. Along with Hinton and LeCun, I would like to mention Richard Sutton. Sutton, a professor at the University of Alberta, is of the view that advancements toward the singularity can be expected around 2040. This makes me feel that when AI no longer needs human help, it will be a kind of species in and of itself.

To get to the next phase, however, we will need more computing power to achieve the goals of tomorrow.

Now that we have some background on the genesis of AI and the experts who nurtured this field over the years, it is time to understand a few key terms of AI. By the way, if you ask me, every scientist behind these developments is a topic in themselves. I have tried to include a good number of researched sources in this article to spark your interest and support your knowledge of AI.

Big Data

With the Internet of Things (IoT), we are saving tons of data every second from every corner of the world. Consider, for instance, Google. It seems to start tracking our intentions as soon as we type the first letter on our keyboard. Now think for a second how much data is generated by all the Internet users all over the world. It is already making predictions about our likes, dislikes, and actions: everything.

The concept of big data is important because it forms the memory of Artificial Intelligence. It is like a parent sharing their experience with their child. If the child can learn from that experience, they develop cognitive abilities and venture into making their own judgments and decisions. Similarly, big data is the human experience that is shared with machines, and they build on that experience. This can happen through supervised as well as unsupervised learning.
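A tiny sketch may help make the supervised versus unsupervised distinction concrete. The "viewing history" numbers below are invented for illustration, and I use scikit-learn only because it keeps the example short; real systems do the same thing at an enormously larger scale.

```python
# Illustrative only: supervised learning uses labeled experience,
# unsupervised learning finds structure in unlabeled data.
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

# Each row: [hours watched per week, fraction of comedies watched]
viewers = [[2, 0.9], [3, 0.8], [10, 0.1], [12, 0.2], [11, 0.15], [2.5, 0.85]]

# Supervised learning: the "experience" comes with labels, like a parent
# telling the child what each thing is.
labels = ["casual", "casual", "binger", "binger", "binger", "casual"]
model = DecisionTreeClassifier().fit(viewers, labels)
print(model.predict([[9, 0.3]]))        # e.g. ['binger']

# Unsupervised learning: no labels at all; the machine groups similar
# viewers on its own.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(viewers)
print(clusters)                          # e.g. [0 0 1 1 1 0]
```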

Symbolic Reasoning and Machine Learning

At the base of all these processes are mathematical patterns. I think this is because math is certain and easy to understand for all humans: 2 + 2 will always be 4, unless there is something we have not figured out in the equation.

Symbolic reasoning is the traditional method of getting work done through machines. According to Pathmind, "to build a symbolic reasoning system, first humans must learn the rules by which two phenomena relate, and then hard-code those relationships into a static program." Symbolic reasoning in AI is also known as Good Old-Fashioned AI (GOFAI).
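Here is a minimal sketch of what that hard-coding looks like in practice: a human writes the rules, and the program simply applies them. The spam-filter rules and messages are invented for illustration.

```python
# A minimal sketch of the "hard-coded rules" style of symbolic reasoning (GOFAI).
def is_spam(message: str) -> bool:
    """A human expert writes the rules; the program only applies them."""
    rules = ["free money", "click here", "winner"]
    text = message.lower()
    return any(phrase in text for phrase in rules)

print(is_spam("You are a WINNER, click here!"))   # True
print(is_spam("Meeting moved to 3 pm"))           # False
```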

Machine Learning (ML) refers to the approach where we feed big data to machines and they identify patterns and understand the data by themselves. The outcomes are not pre-programmed; the machines are not coded toward specific results. It is like a human brain, where we are free to develop our own thoughts. A video by ColdFusion explains ML thus: "ML systems analyze vast amounts of data and learn from their past mistakes. The result is an algorithm that completes its task effectively." ML works well with supervised learning.
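By contrast, here is a sketch of the machine-learning way of solving the same kind of problem: instead of writing the rules ourselves, we show the program labeled examples and let it find the patterns. The tiny dataset is invented, and the choice of a Naive Bayes classifier is mine, purely for illustration.

```python
# Illustrative only: learn "spam vs. ham" from examples instead of hand-written rules.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "free money waiting, click here",
    "you are a winner, claim your prize",
    "meeting moved to 3 pm",
    "lunch tomorrow?",
]
labels = ["spam", "spam", "ham", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)                       # learn word patterns from the data
print(model.predict(["claim your free prize"]))   # e.g. ['spam']
```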

Here I would like to take a quick tangent for all those creative individuals who need some motivation. I feel that all inventions were born out of creativity. Of course, creativity comes with some basic understanding and knowledge. Out of more than 7 billion brains, somewhere someone is thinking out of the box, verifying their thoughts, and trying to communicate their ideas. Creativity is vital for success. This may also explain why some of the most important companies were born in a garage (Google and Apple, for instance). Take a small creative tool like the pizza cutter: someone must have thought of it. Every time I use one, I marvel at how convenient and efficient it is to slice a pizza without disturbing the toppings with that rolling cutter. Always stay creative and avoid preconceived ideas and stereotypes.

Alright, back to the topic!

Deep Learning

Deep Learning (DL) is a subset of ML. "This technology attempts to mimic the activity of neurons in our brain using matrix mathematics," explains ColdFusion. I found this article, which describes DL well. With better computers and big data, it is now possible to venture into DL. Better computers provide the muscle, and big data provides the experience, for a neural network. Together, they help a machine think and execute tasks much as a human would. I would suggest reading the paper titled "Deep Learning" by LeCun, Bengio, and Hinton (2015) for a deeper perspective on DL.
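The "matrix mathematics" part is less mysterious than it sounds. In the sketch below, a deep network is nothing more than a stack of layers, each one a matrix multiplication followed by a simple nonlinearity; the layer sizes are arbitrary and no training is shown.

```python
# Illustrative only: a "deep" network is a stack of matrix multiplications
# with a nonlinearity between them.
import numpy as np

rng = np.random.default_rng(1)
layer_sizes = [8, 16, 16, 4]          # input -> two hidden layers -> output
weights = [rng.normal(size=(a, b)) for a, b in zip(layer_sizes, layer_sizes[1:])]

def forward(x):
    for W in weights:
        x = np.maximum(0.0, x @ W)    # ReLU plays the role of the "neuron firing"
    return x

print(forward(rng.normal(size=8)))    # four output activations
```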

DL's abilities make it a perfect companion for unsupervised learning. As big data is mostly unlabelled, DL processes it to identify patterns and make predictions. This not only saves a lot of time but also generates results that are completely new to the human brain. DL offers another benefit: it can work offline. A self-driving car, for instance, can make instantaneous decisions while on the road.

What next?

I think that the most important future development will be AI coding AI to perfection, all by itself.

Neural nets designing neural nets is already happening; early signs of this self-production are in sight. Google has already created programs that can produce their own code. This is called Automated Machine Learning, or AutoML. Sundar Pichai, CEO of Google and Alphabet, shared the experiment on his blog: "Today, designing neural nets is extremely time intensive, and requires an expertise that limits its use to a smaller community of scientists and engineers. That's why we've created an approach called AutoML, showing that it's possible for neural nets to design neural nets," said Pichai (2017).
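To give a feel for the idea, here is a toy sketch of a program searching over network designs and keeping the one that scores best. This is emphatically not Google's AutoML system, only an illustration of the concept, using a small synthetic dataset and scikit-learn's basic neural network.

```python
# Illustrative only: a toy random search over network architectures,
# showing the idea of a program "designing" a network.
import random
from sklearn.datasets import make_moons
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=300, noise=0.2, random_state=0)
random.seed(0)

best_design, best_score = None, 0.0
for _ in range(10):
    # "Design" a network: pick the number of layers and neurons at random.
    design = tuple(random.choice([4, 8, 16, 32]) for _ in range(random.randint(1, 3)))
    model = MLPClassifier(hidden_layer_sizes=design, max_iter=2000, random_state=0)
    score = cross_val_score(model, X, y, cv=3).mean()
    if score > best_score:
        best_design, best_score = design, score

print(best_design, round(best_score, 3))   # the architecture the search "designed"
```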

Full AI capabilities will also enable several other applications, like fully automated self-driving cars and full-service assistance in sectors like health care and hospitality.

Among the several useful applications of AI, ColdFusion has identified the five most impressive ones in terms of image outputs. These are: AI generating an image from text (Plug and Play Generative Networks: Conditional Iterative Generation of Images in Latent Space), AI reading lip movements from video with 95 percent accuracy (LipNet), AI creating new images from just a few inputs (Pix2Pix), AI improving the pixels of an image (Google Brain's Pixel Recursive Super Resolution), and AI adding color to black-and-white photos and videos (Let There Be Color). In the future, these technologies could be used for more advanced functions, such as law enforcement.

AI can already generate images of non-existent humans and add sound and body movements to videos of individuals! In the coming years, these tools could be used for gaming, or perhaps for fully capable multi-dimensional assistants like the one we see in the movie Iron Man. Of course, all these developments would require new AI laws to avoid misuse; however, that is a topic for another discussion.

Humans are advanced AI

Artificial Intelligence is getting so good at mimicking humans that it seems humans themselves are some sort of AI. The way Artificial Intelligence learns from data, retains information, and then develops analytical, problem-solving, and judgment capabilities is no different from a parent nurturing their child with their experience (data), and the child then retaining that knowledge and using their own judgment to make decisions.

We may want to remember that there are a lot of things that even humans, with all their technology, have not figured out. A lot of things are still hidden from us in plain sight. For instance, we still do not know all the living species in the Amazon rainforest. Astronomy and astrology are two other fields where, I think, very little is known for certain. Air, water, land, and even celestial bodies influence human behavior in ways we are still trying to understand. All this hints that we as humans are not in total control of ourselves. That feels similar to AI, which so far requires external intervention, such as from humans, to develop.

I think that our past holds answers to a lot of questions that may unravel our future. Take, for example, the Great Pyramid at Giza, Egypt, which we still marvel at for its mathematical accuracy and its alignment with the Earth's equator as well as with the movements of celestial bodies. By the way, we could make those comparisons only because we have already reached a level where we know the numbers relating to the equator.

Also, think of India's knowledge of astrology. It includes many diagrams of planetary movements that are believed to influence human behavior, and these sketches have survived several thousand years. One of India's languages, Vedic Sanskrit, is considered more than 4,000 years old, perhaps one of the oldest in human history; this was actually a question put to IBM Watson during the 2011 Jeopardy! competition. Understanding the literature in this language might unlock a wealth of information.

I feel that, with the kind of technology we have in AI, we should put some of it to work to unearth the wisdom of our past. If we overlook it, we may end up wasting resources by reinventing the wheel.
