Daily Archives: June 19, 2020

Understanding the Four Types of Artificial Intelligence

Posted: June 19, 2020 at 7:45 am

The common, and recurring, view of the latest breakthroughs in artificial intelligence research is that sentient and intelligent machines are just on the horizon. Machines understand verbal commands, distinguish pictures, drive cars and play games better than we do. How much longer can it be before they walk among us?

The new White House report on artificial intelligence takes an appropriately skeptical view of that dream. It says the next 20 years likely won't see machines "exhibit broadly-applicable intelligence comparable to or exceeding that of humans," though it does go on to say that in the coming years, "machines will reach and exceed human performance on more and more tasks." But its assumptions about how those capabilities will develop missed some important points.

As an AI researcher, I'll admit it was nice to have my own field highlighted at the highest level of American government, but the report focused almost exclusively on what I call "the boring kind of AI." It dismissed in half a sentence my branch of AI research, into how evolution can help develop ever-improving AI systems, and how computational models can help us understand how our human intelligence evolved.

The report focuses on what might be called mainstream AI tools: machine learning and deep learning. These are the sorts of technologies that have been able to play Jeopardy! well, and beat human Go masters at the most complicated game ever invented. These current intelligent systems are able to handle huge amounts of data and make complex calculations very quickly. But they lack an element that will be key to building the sentient machines we picture having in the future.

We need to do more than teach machines to learn. We need to overcome the boundaries that define the four different types of artificial intelligence, the barriers that separate machines from us and us from them.

There are four types of artificial intelligence: reactive machines, limited memory, theory of mind and self-awareness.

The most basic types of AI systems are purely reactive, and have the ability neither to form memories nor to use past experiences to inform current decisions. Deep Blue, IBM's chess-playing supercomputer, which beat international grandmaster Garry Kasparov in the late 1990s, is the perfect example of this type of machine.

Deep Blue can identify the pieces on a chess board and know how each moves. It can make predictions about what moves might be next for it and its opponent. And it can choose the optimal move from among the possibilities.

But it doesn't have any concept of the past, nor any memory of what has happened before. Apart from a rarely used chess-specific rule against repeating the same move three times, Deep Blue ignores everything before the present moment. All it does is look at the pieces on the chess board as it stands right now, and choose from possible next moves.

This type of intelligence involves the computer perceiving the world directly and acting on what it sees. It doesn't rely on an internal concept of the world. In a seminal paper, AI researcher Rodney Brooks argued that we should only build machines like this. His main reason was that people are not very good at programming accurate simulated worlds for computers to use, what is called in AI scholarship a "representation" of the world.

The current intelligent machines we marvel at either have no such concept of the world, or have a very limited and specialized one for their particular duties. The innovation in Deep Blue's design was not to broaden the range of possible moves the computer considered. Rather, the developers found a way to narrow its view, to stop pursuing some potential future moves, based on how it rated their outcomes. Without this ability, Deep Blue would have needed to be an even more powerful computer to actually beat Kasparov.
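To make that design idea concrete, here is a toy sketch (my illustration, not Deep Blue's actual code) of the textbook form of such narrowing, alpha-beta pruning: the search abandons a branch once the ratings seen so far show it cannot affect the final choice. The game tree and its values are invented.

```python
# A toy game-tree search that "narrows its view": branches whose rated
# outcomes cannot matter are abandoned (alpha-beta pruning). The tree is
# invented; Deep Blue's actual techniques were far more elaborate.

def search(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    if isinstance(node, (int, float)):  # leaf: a rated outcome
        return node
    best = float("-inf") if maximizing else float("inf")
    for child in node:
        value = search(child, alpha, beta, not maximizing)
        if maximizing:
            best, alpha = max(best, value), max(alpha, value)
        else:
            best, beta = min(best, value), min(beta, value)
        if beta <= alpha:  # this branch cannot change the result: stop pursuing it
            break
    return best

# Inner lists are choice points; numbers are rated outcomes. The subtree
# [8, 1] is never explored: its sibling already guarantees the minimizing
# opponent a result no better than 2, worse than the 3 available elsewhere.
print(search([[3, 5], [2, [8, 1]]]))  # -> 3
```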

Similarly, Google's AlphaGo, which has beaten top human Go experts, can't evaluate all potential future moves either. Its analysis method is more sophisticated than Deep Blue's, using a neural network to evaluate game developments.

These methods do improve the ability of AI systems to play specific games better, but they can't be easily changed or applied to other situations. These computerized imaginations have no concept of the wider world, meaning they can't function beyond the specific tasks they're assigned and are easily fooled.

They can't interactively participate in the world, the way we imagine AI systems one day might. Instead, these machines will behave exactly the same way every time they encounter the same situation. This can be very good for ensuring an AI system is trustworthy: You want your autonomous car to be a reliable driver. But it's bad if we want machines to truly engage with, and respond to, the world. These simplest AI systems won't ever be bored, or interested, or sad.

This Type II class contains machines that can look into the past. Self-driving cars do some of this already. For example, they observe other cars' speed and direction. That can't be done in just one moment, but rather requires identifying specific objects and monitoring them over time.

These observations are added to the self-driving car's preprogrammed representations of the world, which also include lane markings, traffic lights and other important elements, like curves in the road. They're included when the car decides when to change lanes, to avoid cutting off another driver or being hit by a nearby car.

But these simple pieces of information about the past are only transient. They aren't saved as part of the car's library of experience it can learn from, the way human drivers compile experience over years behind the wheel.

So how can we build AI systems that build full representations, remember their experiences and learn how to handle new situations? Brooks was right in that it is very difficult to do this. My own research into methods inspired by Darwinian evolution can start to make up for human shortcomings by letting the machines build their own representations.

We might stop here, and call this point the important divide between the machines we have and the machines we will build in the future. However, it is better to be more specific, to discuss the types of representations machines need to form, and what they need to be about.

Machines in the next, more advanced, class not only form representations about the world, but also about other agents or entities in the world. In psychology, this is called "theory of mind": the understanding that people, creatures and objects in the world can have thoughts and emotions that affect their own behavior.

This was crucial to how we humans formed societies, because it is what allowed us to have social interactions. Without understanding each other's motives and intentions, and without taking into account what somebody else knows either about me or the environment, working together is at best difficult, at worst impossible.

If AI systems are indeed ever to walk among us, they'll have to be able to understand that each of us has thoughts and feelings and expectations for how we'll be treated. And they'll have to adjust their behavior accordingly.

The final step of AI development is to build systems that can form representations about themselves. Ultimately, we AI researchers will have to not only understand consciousness, but build machines that have it.

This is, in a sense, an extension of the theory of mind possessed by Type III artificial intelligences. Consciousness is also called self-awareness for a reason. ("I want that item" is a very different statement from "I know I want that item.") Conscious beings are aware of themselves, know about their internal states, and are able to predict feelings of others. We assume someone honking behind us in traffic is angry or impatient, because that's how we feel when we honk at others. Without a theory of mind, we could not make those sorts of inferences.

While we are probably far from creating machines that are self-aware, we should focus our efforts toward understanding memory, learning and the ability to base decisions on past experiences. This is an important step toward understanding human intelligence in its own right. And it is crucial if we want to design or evolve machines that are more than exceptional at classifying what they see in front of them.

This article was originally published on The Conversation.


10 Steps to Adopting Artificial Intelligence in Your …

Posted: at 7:45 am

Artificial intelligence (AI) is clearly a growing force in the technology industry. AI is taking center stage at conferences and showing potential across a wide variety of industries, including retail and manufacturing. New products are being embedded with virtual assistants, while chatbots are answering customer questions on everything from your online office supplier's site to your web hosting service provider's support page. Meanwhile, companies such as Google, Microsoft, and Salesforce are integrating AI as an intelligence layer across their entire tech stack. Yes, AI is definitely having its moment.

This isn't the AI that pop culture has conditioned us to expect; it's not sentient robots or Skynet, or even Tony Stark's Jarvis assistant. This AI is happening under the surface, making our existing tech smarter and unlocking the power of all the data that enterprises collect. What that means: Widespread advancements in machine learning (ML), computer vision, deep learning, and natural language processing (NLP) have made it easier than ever to bake an AI algorithm layer into your software or cloud platform.

For businesses, practical AI applications can manifest in all sorts of ways depending on your organizational needs and the business intelligence (BI) insights derived from the data you collect. Enterprises can employ AI for everything from mining social data to driving engagement in customer relationship management (CRM) to optimizing logistics and efficiency when it comes to tracking and managing assets.

ML is playing a key role in the development of AI, noted Luke Tang, General Manager of TechCode's Global AI+ Accelerator program, which incubates AI startups and helps companies incorporate AI on top of their existing products and services.

"Right now, AI is being driven by all the recent progress in ML. There's no one single breakthrough you can point to, but the business value we can extract from ML now is off the charts," Tang said. "From the enterprise point of view, what's happening right now could disrupt some core corporate business processes around coordination and control: scheduling, resource allocation and reporting." Here we provide tips from some experts to explain the steps businesses can take to integrate AI in your organization and to ensure your implementation is a success.

Take the time to become familiar with what modern AI can do. The TechCode Accelerator offers its startups a wide array of resources through its partnerships with organizations such as Stanford University and corporations in the AI space. You should also take advantage of the wealth of online information and resources available to familiarize yourself with the basic concepts of AI. Tang recommends some of the remote workshops and online courses offered by organizations such as Udacity as easy ways to get started with AI and to increase your knowledge of areas such as ML and predictive analytics within your organization.

A number of online resources, both free and paid, are available to help you get started.

Once you're up to speed on the basics, the next step for any business is to begin exploring different ideas. Think about how you can add AI capabilities to your existing products and services. More importantly, your company should have in mind specific use cases in which AI could solve business problems or provide demonstrable value.

"When we're working with a company, we start with an overview of its key tech programs and problems. We want to be able to show it how natural language processing, image recognition, ML, etc. fit into those products, usually with a workshop of some sort with the management of the company," Tang explained. "The specifics always vary by industry. For example, if the company does video surveillance, it can capture a lot of value by adding ML to that process."

Next, you need to assess the potential business and financial value of the various possible AI implementations you've identified. It's easy to get lost in "pie in the sky" AI discussions, but Tang stressed the importance of tying your initiatives directly to business value.

"To prioritize, look at the dimensions of potential and feasibility and put them into a 2x2 matrix," Tang said. "This should help you prioritize based on near-term visibility and know what the financial value is for the company. For this step, you usually need ownership and recognition from managers and top-level executives."

There's a stark difference between what you want to accomplish and what you have the organizational ability to actually achieve within a given time frame. Tang said a business should know what it's capable of and what it's not from a tech and business process perspective before launching into a full-blown AI implementation.

"Sometimes this can take a long time to do," Tang said. "Addressing your internal capability gap means identifying what you need to acquire and any processes that need to be internally evolved before you get going. Depending on the business, there may be existing projects or teams that can help do this organically for certain business units."

Once your business is ready from an organizational and tech standpoint, then it's time to start building and integrating. Tang said the most important factors here are to start small, have project goals in mind, and, most importantly, be aware of what you know and what you don't know about AI. This is where bringing in outside experts or AI consultants can be invaluable.

"You don't need a lot of time for a first project; usually for a pilot project, 2-3 months is a good range," Tang said. "You want to bring internal and external people together in a small team, maybe 4-5 people, and that tighter time frame will keep the team focused on straightforward goals. After the pilot is completed, you should be able to decide what the longer-term, more elaborate project will be and whether the value proposition makes sense for your business. It's also important that expertise from both sidesthe people who know about the business and the people who know about AIis merged on your pilot project team."

Tang noted that, before implementing ML into your business, you need to clean your data to avoid a "garbage in, garbage out" scenario. "Internal corporate data is typically spread out in multiple data silos of different legacy systems, and may even be in the hands of different business groups with different priorities," Tang said. "Therefore, a very important step toward obtaining high-quality data is to form a cross-[business unit] taskforce, integrate different data sets together, and sort out inconsistencies so that the data is accurate and rich, with all the right dimensions required for ML."
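A minimal sketch of that integration step might look like the following; it assumes the pandas library, and the two data "silos" and their column names are hypothetical.

```python
# A minimal sketch of the cross-silo integration step Tang describes,
# using pandas. The two "silos" and their columns are hypothetical.
import pandas as pd

crm = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "region": ["EMEA", "emea", "APAC"],   # inconsistent casing across systems
})
billing = pd.DataFrame({
    "cust_id": [1, 2, 4],
    "annual_spend": [12000, None, 8000],  # missing value to resolve
})

# Sort out inconsistencies before modeling: normalize categories,
# reconcile key names, and decide how to handle missing values.
crm["region"] = crm["region"].str.upper()
billing = billing.rename(columns={"cust_id": "customer_id"})
billing["annual_spend"] = billing["annual_spend"].fillna(0)

# Integrate the silos into one ML-ready table. An inner join keeps only
# customers present in both systems; the right policy is a business call.
training_table = crm.merge(billing, on="customer_id", how="inner")
print(training_table)
```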

Begin applying AI to a small sample of your data rather than taking on too much too soon. "Start simple, use AI incrementally to prove value, collect feedback, and then expand accordingly," said Aaron Brauser, Vice President of Solutions Management at M*Modal, which offers natural language understanding (NLU) tech for health care organizations as well as an AI platform that integrates with electronic medical records (EMRs).

A specific type of data could be information on certain medical specialties. "Be selective in what the AI will be reading," said Dr. Gilan El Saadawi, Chief Medical Information Officer (CMIO) at M*Modal. "For example, pick a certain problem you want to solve, focus the AI on it, and give it a specific question to answer rather than throwing all the data at it."

After you ramp up from a small sample of data, you'll need to consider the storage requirements to implement an AI solution, according to Philip Pokorny, Chief Technical Officer (CTO) at Penguin Computing, a company that offers high-performance computing (HPC), AI, and ML solutions.

"Improving algorithms is important to reaching research results. But without huge volumes of data to help build more accurate models, AI systems cannot improve enough to achieve your computing objectives," Pokorny wrote in a white paper entitled, "Critical Decisions: A Guide to Building the Complete Artificial Intelligence Solution Without Regrets." "That's why inclusion of fast, optimized storage should be considered at the start of AI system design."

In addition, you should optimize AI storage for data ingest, workflow, and modeling, he suggested. "Taking the time to review your options can have a huge, positive impact on how the system runs once it's online," Pokorny added.

With the additional insight and automation provided by AI, workers have a tool to make AI a part of their daily routine rather than something that replaces it, according to Dominic Wellington, Global IT Evangelist at Moogsoft, a provider of AI for IT operations (AIOps). "Some employees may be wary of technology that can affect their job, so introducing the solution as a way to augment their daily tasks is important," Wellington explained.

He added that companies should be transparent on how the tech works to resolve issues in a workflow. "This gives employees an 'under the hood' experience so that they can clearly visualize how AI augments their role rather than eliminating it," he said.

When you're building an AI system, it requires a combination of meeting the needs of the tech as well as the research project, Pokorny explained. "The overarching consideration, even before starting to design an AI system, is that you should build the system with balance," Pokorny said. "This may sound obvious but, too often, AI systems are designed around specific aspects of how the team envisions achieving its research goals, without understanding the requirements and limitations of the hardware and software that would support the research. The result is a less-than-optimal, even dysfunctional, system that fails to achieve the desired goals."

To achieve this balance, companies need to build in sufficient bandwidth for storage, the graphics processing unit (GPU), and networking. Security is an oft-overlooked component as well. AI by its nature requires access to broad swaths of data to do its job. Make sure that you understand what kinds of data will be involved with the project and that your usual security safeguards -- encryption, virtual private networks (VPN), and anti-malware -- may not be enough.

"Similarly, you have to balance how the overall budget is spent to achieve research with the need to protect against power failure and other scenarios through redundancies," Pokorny said. "You may also need to build in flexibility to allow repurposing of hardware as user requirements change."


Build Your Own AI (Artificial Intelligence) Assistant 101 …

Posted: at 7:45 am

Now is when things start getting real.

Click on "Create Intent", at the top of the console, to create you very first Intent.

I will be naming this Intent "startconvo.Hi" (at the topmost blank), and the purpose of this Intent will be to respond to greetings such as Hi, Hello, etc.

Intents have 2 main sections:

USER SAYS: In this section, you'll provide various phrases a User may ask. The more phrases you add, the better your Assistant can learn and respond to similar phrases. (Try to add at least half a dozen phrases so that your Agent can understand and can recognize other similar phrases.)

RESPONSE: Here, you'll provide answers for the said User phrases. You can add multiple responses in this section, and your Agent will pick one at random. This is done to avoid redundancy and make the conversation feel more natural. Responses can also be rich messages like Cards, Images, etc., that are displayed on devices that support them. (Refer to the docs for more info: Rich Messages)

For JARVIS this is what the 2 sections contain:

User Says: Hi, Hey, Hello, Yo

Responses: My man!, Hey!, Hi There!, Yo Dawg!
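For a sense of the mechanics, here is a toy Python sketch of the User Says / Response pattern just described. It is not how Dialogflow matches intents internally (the Agent uses ML to generalize beyond the listed phrases), only an illustration of recognizing a listed phrase and picking a response at random.

```python
# A toy illustration of the Intent mechanics above -- not Dialogflow's
# actual matching. Phrases and responses mirror the JARVIS example.
import random

intents = {
    "startconvo.Hi": {
        "user_says": {"hi", "hey", "hello", "yo"},
        "responses": ["My man!", "Hey!", "Hi There!", "Yo Dawg!"],
    },
}

def reply(utterance: str) -> str:
    normalized = utterance.strip().lower().rstrip("!?.")
    for intent in intents.values():
        if normalized in intent["user_says"]:
            # Multiple responses, picked at random, keep the chat natural.
            return random.choice(intent["responses"])
    return "Sorry, I didn't get that."  # fallback when no intent matches

print(reply("Hey!"))
```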

Don't forget to Save after adding changes.

YOU NOW HAVE AN AI ASSISTANT (YAAAAAAAY!!!). Try talking to it in the test console.

P.S: If you are using Chrome Browser, you can click on the mic icon in the Test Console to talk to your Agent and get your response.

P.S.2: Notice how JARVIS responds when I say "Hey Jarvis!" (or) "Hola Jarvis!" even though I haven't fed that phrase in the User says section. (It's a Magic Trick! xD)


25 Stunning Advances in Artificial Intelligence | Stacker

Posted: at 7:45 am

Artificial intelligence (AI) is defined as "a branch of computer science dealing with the simulation of intelligent behavior in computers; the capability of a machine to imitate intelligent human behavior," according to Webster's Dictionary. Recent advancements in the field, however, have proven that AI is so much more than a mere scientific curiosity. The technological advances attributed to AI have the potential to completely alter the world as we know it.

Artificial intelligence used to be the stuff of science fiction; evidence of the concept being studied by real-life scientists dates back to the 1950s. The famous Alan Turing explored the theory in a research paper in 1950, but the fundamentals of computers weren't yet advanced enough to bring the idea of artificial intelligence to fruition. By 1955, the Logic Theorist program, funded by the Research and Development (RAND) Corporation, had become what many believe to be the first example of AI. The program was designed to mimic the human mind's ability to solve problems, eventually setting the stage for a historic conference called the Dartmouth Summer Research Project on Artificial Intelligence in 1956.

As computers began to get faster, more powerful, and less expensive, AI began to pick up steam through the '70s. Successful projects began emerging in scientific communities, some even securing funding from government agencies. Then, for close to a decade, AI research hit a wall as funding lapsed and scientific theories began to outpace computer ability once again. The biggest exception was a Japanese government-funded, $400 million project aimed at improving artificial intelligence from 1982 to 1990.

The 1990s and 2000s saw some huge advancements in artificial intelligence as the fundamental limits of computer storage yielded to new hardware innovations. As the applications of AI become more and more prevalent in the daily lives of humans, it is essential to have the context of some of the most important advances in AI history.

Stacker explored 25 advances in artificial intelligence across all different uses, applications, and innovations. Whether it's robots, supercomputers, health care, or search optimization, AI is coming up strong.


Pros and Cons of Artificial Intelligence – LinkedIn

Posted: at 7:45 am

Artificial intelligence (AI) is the intelligence of machines. It is about designing machines that can think. Researchers also aim at introducing an emotional aspect into them. How will it affect our lives? Read this Buzzle article for an overview of the pros and cons of artificial intelligence.

Pros

With artificial intelligence, the chances of error are almost nil, and greater precision and accuracy are achieved.

Artificial intelligence finds applications in space exploration. Intelligent robots can be used to explore space. They are machines and hence have the ability to endure the hostile environment of interplanetary space. They can be made to adapt in such a way that planetary atmospheres do not affect their physical state and functioning.

Intelligent robots can be programmed to reach the Earth's nadirs. They can be used to dig for fuels. They can be used for mining purposes. The intelligence of machines can be harnessed for exploring the depths of oceans. These machines can be of use in overcoming the limitations that humans have.

Intelligent machines can replace human beings in many areas of work. Robots can do certain laborious tasks. Painstaking activities, which have long been carried out by humans, can be taken over by robots. Owing to the intelligence programmed in them, the machines can shoulder greater responsibilities and can be programmed to manage themselves.

Smartphones are a great example of the application of artificial intelligence. In utilities like predicting what a user is going to type and correcting human errors in spelling, machine intelligence is at work. Applications like Siri that act as personal assistants, GPS and Maps applications that give users the best or the shortest routes to take as well as the traffic and time estimates to reach there, use artificial intelligence. Applications on phones or computers that predict user actions and also make recommendations that suit user choice, are applications of AI. Thus, we see that artificial intelligence has made daily life a lot easier.

Fraud detection in smart card-based systems is possible with the use of AI. It is also employed by financial institutions and banks to organize and manage records.

Organizations use avatars, digital assistants that interact with users, thus reducing the need for human resources.

Emotions that often interfere with the rational thinking of a human being are not a hindrance for artificial thinkers. Lacking the emotional side, robots can think logically and take the right decisions. Sentiments are associated with moods that affect human efficiency. This is not the case with machines with artificial intelligence.

Artificial intelligence can be utilized in carrying out repetitive and time-consuming tasks efficiently.

Intelligent machines can be employed to do certain dangerous tasks. They can adjust their parameters such as their speed and time, and be made to act quickly, unaffected by factors that affect humans.

When we play a computer game or operate a computer-controlled bot, we are in fact interacting with artificial intelligence. In a game where the computer plays as our opponent, it is with the help of AI that the machine plans the game moves in response to ours. Thus, gaming is among the most common examples of the advantages of artificial intelligence.

AI is at work in the medical field too. Algorithms can help doctors assess patients and their health risks. They can help them know the side effects that various medicines can have. Surgery simulators use machine intelligence in training medical professionals. AI can be used to simulate brain functioning, and thus prove useful in the diagnosis and treatment of neurological problems. As in any other field, repetitive or time-consuming tasks can be managed through the application of artificial intelligence.

Robotic pets can help patients with depression and also keep them active.

Robotic radiosurgery helps achieve precision in the radiation given to tumors, thus reducing the damage to surrounding tissues.

The greatest advantage of artificial intelligence is that machines do not require sleep or breaks, and are able to function without stopping. They can continuously perform the same task without getting bored or tired. When employed to carry out dangerous tasks, the risk to human health and safety is reduced.

Cons

One of the main disadvantages of artificial intelligence is the cost incurred in maintenance and repair. Programs need to be updated to suit the changing requirements, and machines need to be made smarter. In case of a breakdown, the cost of repair may be very high. Procedures to restore lost code or data may be time-consuming and costly.

An important concern regarding the application of artificial intelligence is about ethics and moral values. Is it ethically correct to create replicas of human beings? Do our moral values allow us to recreate intelligence? Intelligence is a gift of nature. It may not be right to install it into a machine to make it work for our benefit.

Machines may be able to store enormous amounts of data, but the storage, access, and retrieval are not as effective as in the human brain. They may be able to perform repetitive tasks for long, but they do not get better with experience, like humans do. They are not able to act any differently from what they are programmed to do. Though this is mostly seen as an advantage, it may work the other way when a situation demands one to act in a way different from the usual. Machines may not be as efficient as humans in altering their responses depending on the changing situations.

The idea of machines replacing human beings sounds wonderful. It appears to save us from all the pain. But is it really so exciting? Ideas like working wholeheartedly, with a sense of belonging, and with dedication have no existence in the world of artificial intelligence. Imagine robots working in hospitals. Do you picture them showing the care and concern that humans would? Do you think online assistants (avatars) can give the kind of service that a human being would? Concepts such as care, understanding, and togetherness cannot be understood by machines, which is why, however intelligent they become, they will always lack the human touch.

Imagine intelligent machines employed in creative fields. Do you think robots can excel at, or even compete with, the human mind in creative thinking or originality? Thinking machines lack a creative mind. Human beings are emotional intellectuals. They think and feel. Their feelings guide their thoughts. This is not the case with machines. The intuitive abilities that humans possess, the way humans can judge based on previous knowledge, the inherent abilities that they have, cannot be replicated by machines. Also, machines lack common sense.

If robots begin to replace humans in every field, it will eventually lead to unemployment. People will be left with nothing to do. So much empty time may result in its destructive use. Thinking machines will govern all the fields and populate the positions that humans occupy, leaving thousands of people jobless.

Also, due to the reduced need to use their intelligence, lateral thinking and multitasking abilities of humans may diminish. With so much assistance from machines, if humans do not need to use their thinking abilities, these abilities will gradually decline. With the heavy application of artificial intelligence, humans may become overly dependent on machines, losing their mental capacities.

If the control of machines goes in the wrong hands, it may cause destruction. Machines won't think before acting. Thus, they may be programmed to do the wrong things, or for mass destruction.

Apart from all these cons of AI, there is a fear of robots superseding humans. Ideally, human beings should continue to be the masters of machines. However, if things turn the other way round, the world will turn into chaos. Intelligent machines may prove to be smarter than we are; they might enslave us and start ruling the world.

It should be understood that artificial intelligence has several pros but it has its disadvantages as well. Its benefits and risks should be carefully weighed before employing it for human convenience. Or, in the greed to play God, man may destroy himself.

Read more at Buzzle: http://www.buzzle.com/articles/pros-and-cons-of-artificial-intelligence.html


Artificial Intelligence | Internet Encyclopedia of Philosophy

Posted: at 7:45 am

Artificial intelligence (AI) would be the possession of intelligence, or the exercise of thought, by machines such as computers. Philosophically, the main AI question is "Can there be such?" or, as Alan Turing put it, "Can a machine think?" What makes this a philosophical and not just a scientific and technical question is the scientific recalcitrance of the concept of intelligence or thought and its moral, religious, and legal significance. In European and other traditions, moral and legal standing depend not just on what is outwardly done but also on inward states of mind. Only rational individuals have standing as moral agents and status as moral patients subject to certain harms, such as being betrayed. Only sentient individuals are subject to certain other harms, such as pain and suffering. Since computers give every outward appearance of performing intellectual tasks, the question arises: Are they really thinking? And if they are really thinking, are they not, then, owed similar rights to rational human beings? Many fictional explorations of AI in literature and film explore these very questions.

A complication arises if humans are animals and if animals are themselves machines, as scientific biology supposes. Still, we wish to exclude from the machines in question "men born in the usual manner" (Alan Turing), or even in unusual manners such as in vitro fertilization or ectogenesis. And if nonhuman animals think, we wish to exclude them from the machines, too. More particularly, the AI thesis should be understood to hold that thought, or intelligence, can be produced by artificial means; made, not grown. For brevity's sake, we will take "machine" to denote just the artificial ones. Since the present interest in thinking machines has been aroused by a particular kind of machine, an electronic computer or digital computer, present controversies regarding claims of artificial intelligence center on these.

Accordingly, the scientific discipline and engineering enterprise of AI has been characterized as the attempt to discover and implement the computational means to make machines behave in ways that would be called intelligent "if a human were so behaving" (John McCarthy), or to make them do things that "would require intelligence if done by men" (Marvin Minsky). These standard formulations duck the question of whether deeds which indicate intelligence when done by humans truly indicate it when done by machines: that's the philosophical question. So-called weak AI grants the fact (or prospect) of intelligent-acting machines; strong AI says these actions can be real intelligence. Strong AI says some artificial computation is thought. Computationalism says that all thought is computation. Though many strong AI advocates are computationalists, these are logically independent claims: some artificial computation being thought is consistent with some thought not being computation, contra computationalism. All thought being computation is consistent with some computation (and perhaps all artificial computation) not being thought.

Intelligence might be styled the capacity to think extensively and well. Thinking well centrally involves apt conception, true representation, and correct reasoning. Quickness is generally counted a further cognitive virtue. The extent or breadth of a thing's thinking concerns the variety of content it can conceive, and the variety of thought processes it deploys. Roughly, the more extensively a thing thinks, the higher the "level" (as is said) of its thinking. Consequently, we need to distinguish two different AI questions: whether machines can think at all, at however low a level, and whether they can think at the human level.

In Computer Science, work termed "AI" has traditionally focused on the high-level problem; on imparting high-level abilities to "use language, form abstractions and concepts" and to "solve kinds of problems now reserved for humans" (McCarthy et al. 1955); abilities to play intellectual games such as checkers (Samuel 1954) and chess (Deep Blue); to prove mathematical theorems (GPS); to apply expert knowledge to diagnose bacterial infections (MYCIN); and so forth. More recently there has arisen a humbler seeming conception, behavior-based or "nouvelle" AI, according to which seeking to endow embodied machines, or robots, with so much as insect-level intelligence (Brooks 1991) counts as AI research. Where traditional human-level AI successes impart isolated high-level abilities to function in restricted domains, or "microworlds," behavior-based AI seeks to impart coordinated low-level abilities to function in unrestricted real-world domains.

Still, to the extent that what is called thinking in us is paradigmatic for what thought is, the question of human-level intelligence may arise anew at the foundations. Do insects think at all? And if insects, what of "bacteria level intelligence" (Brooks 1991a)? Even water flowing downhill, it seems, tries to get to the bottom of the hill by ingeniously "seeking the line of least resistance" (Searle 1989). Don't we have to draw the line somewhere? Perhaps, to really be intelligence, seeming intelligence has to come up to some threshold level.

Much as intentionality ("aboutness" or representation) is central to intelligence, felt qualities (so-called "qualia") are crucial to sentience. Here, drawing on Aristotle, medieval thinkers distinguished between the "passive intellect" wherein the soul is affected, and the "active intellect" wherein the soul forms conceptions, draws inferences, makes judgments, and otherwise acts. Orthodoxy identified the soul proper (the immortal part) with the active rational element. Unfortunately, disagreement over how these two (qualitative-experiential and cognitive-intentional) factors relate is as rife as disagreement over what things think; and these disagreements are connected. Those who dismiss the seeming intelligence of computers because computers lack feelings seem to hold qualia to be necessary for intentionality. Those like Descartes, who dismiss the seeming sentience of nonhuman animals because he believed animals don't think, apparently hold intentionality to be necessary for qualia. Others deny one or both necessities, maintaining either the possibility of cognition absent qualia (as Christian orthodoxy, perhaps, would have the thought-processes of God, angels, and the saints in heaven to be), or maintaining the possibility of feeling absent cognition (as Aristotle grants the lower animals).

While we don't know what thought or intelligence is, essentially, and while we're very far from agreed on what things do and don't have it, almost everyone agrees that humans think, and agrees with Descartes that our intelligence is amply manifest in our speech. Along these lines, Alan Turing suggested that if computers showed human-level conversational abilities we should, by that, be amply assured of their intelligence. Turing proposed a specific conversational test for human-level intelligence, the "Turing test" it has come to be called. Turing himself characterizes this test in terms of an "imitation game" (Turing 1950, p. 433) whose original version "is played by three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman." The interrogator "is allowed to put questions to A and B" (by teletype, to avoid visual and auditory clues). "It is A's object in the game to try and cause C to make the wrong identification. The object of the game for the third player (B) is to help the interrogator." Turing continues, "We may now ask the question, 'What will happen when a machine takes the part of A in this game?' Will the interrogator decide wrongly as often when the game is being played like this as he does when the game is played between a man and a woman? These questions replace our original, 'Can machines think?'" (Turing 1950)

This test may serve, as Turing notes, to test not just for shallow verbal dexterity, but for background knowledge and underlying reasoning ability as well, since interrogators may ask any question or pose any verbal challenge they choose. Regarding this test Turing famously predicted that "in about fifty years' time [by the year 2000] it will be possible to program computers to make them play the imitation game so well that an average interrogator will have no more than 70 per cent. chance of making the correct identification after five minutes of questioning" (Turing 1950); a prediction that has famously failed. As of the year 2000, machines at the Loebner Prize competition played the game so ill that the average interrogator had a 100 percent chance of making the correct identification after five minutes of questioning (see Moor 2001).

It is important to recognize that Turing proposed his test as a qualifying test for human-level intelligence, not as a disqualifying test for intelligence per se (as Descartes had proposed); nor would it seem suitably disqualifying unless we are prepared (as Descartes was) to deny that any nonhuman animals possess any intelligence whatsoever. Even at the human level the test would seem not to be straightforwardly disqualifying: machines as smart as we (or even smarter) might still be unable to mimic us well enough to pass. So, from the failure of machines to pass this test, we can infer neither their complete lack of intelligence nor that their thought is not up to the human level. Nevertheless, the manners of current machine failings clearly bespeak deficits of wisdom and wit, not just an inhuman style. Still, defenders of the Turing test claim we would have ample reason to deem them intelligent, as intelligent as we are, if they could pass this test.

The extent to which machines seem intelligent depends first, on whether the work they do is intellectual (for example, calculating sums) or manual (for example, cutting steaks): herein, an electronic calculator is a better candidate than an electric carving knife. A second factor is the extent to which the device is self-actuated (self-propelled, activated, and controlled), or autonomous: herein, an electronic calculator is a better candidate than an abacus. Computers are better candidates than calculators on both headings. Where traditional AI looks to increase computer intelligence quotients (so to speak), nouvelle AI focuses on enabling robot autonomy.

In the beginning, tools (for example, axes) were extensions of human physical powers; at first powered by human muscle; then by domesticated beasts and in situ forces of nature, such as water and wind. The steam engine put fire in their bellies; machines became self-propelled, endowed with vestiges of self-control (as by Watt's 1788 centrifugal governor); and the rest is modern history. Meanwhile, automation of intellectual labor had begun. Blaise Pascal developed an early adding/subtracting machine, the Pascaline (circa 1642). Gottfried Leibniz added multiplication and division functions with his Stepped Reckoner (circa 1671). The first programmable device, however, plied fabric, not numerals. The Jacquard loom developed (circa 1801) by Joseph-Marie Jacquard used a system of punched cards to automate the weaving of programmable patterns and designs: in one striking demonstration, the loom was programmed to weave a silk tapestry portrait of Jacquard himself.

In designs for his Analytical Engine, mathematician/inventor Charles Babbage recognized (circa 1836) that the punched cards could control operations on symbols as readily as on silk; the cards could encode numerals and other symbolic data and, more importantly, instructions, including conditionally branching instructions, for numeric and other symbolic operations. Augusta Ada Lovelace (Babbage's software engineer) grasped the import of these innovations: "The bounds of arithmetic," she writes, "were outstepped the moment the idea of applying the [instruction] cards had occurred," thus enabling mechanism to combine together general symbols "in successions of unlimited variety and extent" (Lovelace 1842). Babbage, Turing notes, "had all the essential ideas" (Turing 1950). Babbage's Engine, had he constructed it in all its steam-powered, cog-wheel-driven glory, would have been a programmable all-purpose device, the first digital computer.

Before automated computation became feasible with the advent of electronic computers in the mid twentieth century, Alan Turing laid the theoretical foundations of Computer Science by formulating with precision the link Lady Lovelace foresaw between the operations of matter and the abstract mental processes of the "most abstract branch of mathematical sciences" (Lovelace 1842). Turing (1936-7) describes a type of machine (since known as a "Turing machine") which would be capable of computing any possible algorithm, or performing any "rote" operation. Since Alonzo Church (1936), using recursive functions and Lambda-definable functions, had identified the very same set of functions as "rote" or algorithmic as those calculable by Turing machines, this important and widely accepted identification is known as the Church-Turing Thesis (see Turing 1936-7: Appendix). The machines Turing described are

only capable of a finite number of conditions ... called "m-configurations". The machine is supplied with a "tape" (the analogue of paper) running through it, and divided into sections (called "squares") each capable of bearing a "symbol". At any moment there is just one square ... which is "in the machine". ... The "scanned symbol" is the only one of which the machine is, so to speak, "directly aware". However, by altering its m-configuration the machine can effectively remember some of the symbols which it has "seen" (scanned) previously. The possible behaviour of the machine at any moment is determined by the m-configuration ... and the scanned symbol ... . This pair ... called the "configuration" ... determines the possible behaviour of the machine. In some of the configurations in which the scanned square is blank ... the machine writes down a new symbol on the scanned square: in other configurations it erases the scanned symbol. The machine may also change the square which is being scanned, but only by shifting it one place to right or left. In addition to any of these operations the m-configuration may be changed. (Turing 1936-7)

Turing goes on to show how such machines can encode actionable descriptions of other such machines. As a result, "It is possible to invent a single machine which can be used to compute any computable sequence" (Turing 1936-7). Today's digital computers are (and Babbage's Engine would have been) physical instantiations of this "universal computing machine" that Turing described abstractly. Theoretically, this means everything that can be done algorithmically or "by rote" at all can all be done with one computer suitably programmed for each case; "considerations of speed apart, it is unnecessary to design various new machines to do various computing processes" (Turing 1950). Theoretically, regardless of their hardware or architecture (see below), all digital computers are in a sense equivalent: equivalent in speed-apart capacities to the universal computing machine Turing described.
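To make the abstraction concrete, here is a minimal Python simulator of such a machine, a sketch rather than Turing's own formulation: a finite table of m-configurations, a tape of squares, and a single scanned square. The example program, which appends a 1 to a unary numeral, is invented for illustration.

```python
# A minimal sketch of the machine Turing describes: a finite control
# ("m-configurations"), a tape of squares, and one scanned square at a time.
from collections import defaultdict

def run(rules, tape, state="start", head=0, blank="_"):
    cells = defaultdict(lambda: blank, enumerate(tape))
    while state != "halt":
        scanned = cells[head]                    # the one square "in the machine"
        write, move, state = rules[(state, scanned)]
        cells[head] = write                      # write or erase the scanned symbol
        head += {"R": 1, "L": -1, "N": 0}[move]  # shift one place right or left
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# (m-configuration, scanned symbol) -> (symbol to write, head move, next
# m-configuration). This invented program appends a 1 to a unary numeral.
rules = {
    ("start", "1"): ("1", "R", "start"),  # skip over the existing 1s
    ("start", "_"): ("1", "N", "halt"),   # first blank: write a 1 and stop
}

print(run(rules, "111"))  # unary 3 -> "1111" (unary 4)
```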

In practice, where speed is not apart, hardware and architecture are crucial: the faster the operations, the greater the computational power. Just as improvement on the hardware side, from cogwheels to circuitry, was needed to make digital computers practical at all, improvements in computer performance have been largely predicated on the continuous development of faster, more and more powerful, machines. Electromechanical relays gave way to vacuum tubes, tubes to transistors, and transistors to more and more integrated circuits, yielding vastly increased operation speeds. Meanwhile, memory has grown faster and cheaper.

Architecturally, all but the earliest and some later experimental machines share a stored-program serial design often called "von Neumann architecture" (based on John von Neumann's role in the design of EDVAC, the first computer to store programs along with data in working memory). The architecture is serial in that operations are performed one at a time by a central processing unit (CPU) endowed with a rich repertoire of basic operations: even so-called reduced instruction set (RISC) chips feature basic operation sets far richer than the minimal few Turing proved theoretically sufficient. Parallel architectures, by contrast, distribute computational operations among two or more units (typically many more) capable of acting simultaneously, each having (perhaps) drastically reduced basic operational capacities.

In 1965, Gordon Moore (co-founder of Intel) observed that the density of transistors on integrated circuits had doubled every year since their invention in 1959: "Moore's law" predicts the continuation of similar exponential rates of growth in chip density (in particular), and computational power (by extension), for the foreseeable future. Progress on the software programming side, while essential and by no means negligible, has seemed halting by comparison. The road from power to performance is proving rockier than Turing anticipated. Nevertheless, machines nowadays do behave in many ways that would be called intelligent in humans and other animals. Presently, machines do many things formerly only done by animals and thought to evidence some level of intelligence in these animals, for example, seeking, detecting, and tracking things; seeming evidence of basic-level AI. Presently, machines also do things formerly only done by humans and thought to evidence high-level intelligence in us; for example, making mathematical discoveries, playing games, planning, and learning; seeming evidence of human-level AI.

The doings of many machines, some much simpler than computers, inspire us to describe them in mental terms commonly reserved for animals. Some missiles, for instance, seek heat, or so we say. We call them "heat seeking missiles" and nobody takes it amiss. Room thermostats monitor room temperatures and try to keep them within set ranges by turning the furnace on and off; and if you hold dry ice next to its sensor, it will take the room temperature to be colder than it is, and mistakenly turn on the furnace (see McCarthy 1979). Seeking, monitoring, trying, and taking things to be the case seem to be mental processes or conditions, marked by their intentionality. Just as humans have low-level mental qualities, such as seeking and detecting things, in common with the lower animals, so too do computers seem to share such low-level qualities with simpler devices. Our working characterizations of computers are rife with low-level mental attributions: we say they detect key presses, try to initialize their printers, search for available devices, and so forth. Even those who would deny the proposition "machines think" when it is explicitly put to them, are moved unavoidably in their practical dealings to characterize the doings of computers in mental terms, and they would be hard put to do otherwise. In this sense, Turing's prediction that "at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted" (Turing 1950) has been as mightily fulfilled as his prediction of a modicum of machine success at playing the Imitation Game has been confuted. The Turing test and AI as classically conceived, however, are more concerned with high-level appearances such as the following.

Theorem proving and mathematical exploration being their home turf, computers have displayed not only human-level but, in certain respects, superhuman abilities here. For speed and accuracy of mathematical calculation, no human can match a computer. As for high-level mathematical performances, such as theorem proving and mathematical discovery, a beginning was made by A. Newell, J.C. Shaw, and H. Simon's (1957) Logic Theorist program, which proved 38 of the first 51 theorems of B. Russell and A.N. Whitehead's Principia Mathematica. Newell and Simon's General Problem Solver (GPS) extended similar automated theorem proving techniques outside the narrow confines of pure logic and mathematics. Today such techniques enjoy widespread application in expert systems like MYCIN, in logic tutorial software, and in computer languages such as PROLOG. There are even original mathematical discoveries owing to computers. Notably, K. Appel, W. Haken, and J. Koch (1977a, 1977b), and computer, proved that every planar map is four colorable, an important mathematical conjecture that had resisted unassisted human proof for over a hundred years. Certain computer-generated parts of this proof are too complex to be directly verified (without computer assistance) by human mathematicians.

Whereas attempts to apply general reasoning to unlimited domains are hampered by explosive inferential complexity and computers' lack of common sense, expert systems deal with these problems by restricting their domains of application (in effect, to microworlds), and crafting domain-specific inference rules for these limited domains. MYCIN, for instance, applies rules culled from interviews with expert human diagnosticians to descriptions of patients' presenting symptoms to diagnose blood-borne bacterial infections. MYCIN displays diagnostic skills approaching the expert human level, albeit strictly limited to this specific domain. Fuzzy logic is a formalism for representing imprecise notions such as "most" and "bald" and enabling inferences based on such facts as that a bald person mostly lacks hair.
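The sketch below gives a toy illustration of that idea: membership in a fuzzy predicate like "bald" is a matter of degree, and a rule carries that degree through to its conclusion. The membership function and numbers are invented, not drawn from any deployed fuzzy system.

```python
# A toy illustration of fuzzy logic: "bald" is a matter of degree, and
# inference propagates those degrees. The membership function and its
# threshold are hypothetical, chosen only to make the mechanics concrete.

def bald(hair_fraction: float) -> float:
    """Degree of baldness, from 1.0 (no hair) down to 0.0 (half a head or more)."""
    return max(0.0, min(1.0, 1.0 - hair_fraction / 0.5))

def mostly_lacks_hair(bald_degree: float) -> float:
    # Rule: IF bald THEN mostly lacks hair (the truth degree carries through).
    return bald_degree

for fraction in (0.05, 0.25, 0.60):
    b = bald(fraction)
    print(f"hair={fraction:.2f}  bald={b:.2f}  mostly-lacks-hair={mostly_lacks_hair(b):.2f}")
```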

Game playing engaged the interest of AI researchers almost from the start. Samuel's (1959) checkers (or draughts) program was notable for incorporating mechanisms enabling it to learn from experience well enough to eventually outplay Samuel himself. Additionally, in setting one version of the program to play against a slightly altered version, carrying over the settings of the stronger player to the next generation, and repeating the process, enabling stronger and stronger versions to evolve, Samuel pioneered the use of what have come to be called genetic algorithms and evolutionary computing. Chess has also inspired notable efforts culminating, in 1997, in the famous victory of Deep Blue over defending world champion Garry Kasparov in a widely publicized series of matches (recounted in Hsu 2002). Though some in AI disparaged Deep Blue's reliance on brute force application of computer power rather than improved search-guiding heuristics, we may still add chess to checkers (where the reigning human-machine champion since 1994 has been CHINOOK, the machine), and backgammon, as games that computers now play at or above the highest human levels. Computers also play fair-to-middling poker, bridge, and Go, though not at the highest human level. Additionally, intelligent agents, or "softbots," are elements or participants in a variety of electronic games.
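The common core of such game players is search over a tree of possible moves. The sketch below illustrates the principle on a deliberately tiny game (players alternately take 1-3 stones; whoever takes the last stone wins); it is an invented example, not a reconstruction of any of the programs above.

```python
# Exhaustive "brute force" game-tree search on a tiny take-away game.
# Real programs add evaluation heuristics and pruning, because full
# search becomes intractable in games like chess and Go.
from functools import lru_cache

@lru_cache(maxsize=None)
def can_win(stones: int) -> bool:
    """True if the player to move can force a win from this position."""
    # Try every legal move; a position is winning if some move leaves
    # the opponent in a losing position. Zero stones means the previous
    # player took the last stone and already won.
    return any(not can_win(stones - take)
               for take in (1, 2, 3) if take <= stones)

print(can_win(5))   # True: take 1, leaving the opponent 4 stones
print(can_win(20))  # False: multiples of 4 are losing for the player to move
```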

Planning, in large measure, is what puts the intellect in intellectual games like chess and checkers. To automate this broader intellectual ability was the intent of Newell and Simon's General Problem Solver (GPS) program. GPS was able to solve puzzles like the cannibals and missionaries problem (how to transport three missionaries and three cannibals across a river in a canoe for two without the missionaries becoming outnumbered on either shore) by "setting up subgoals whose attainment leads to the attainment of the [final] goal" (Newell & Simon 1963: 284). By these methods GPS would "generate a tree of subgoals" (Newell & Simon 1963: 286) and seek a path from initial state (for example, all on the near bank) to final goal (all on the far bank) by heuristically guided search along a branching tree of available actions (for example, two cannibals cross, two missionaries cross, one of each cross, one of either cross, in either direction) until it finds such a path (for example, two cannibals cross, one returns, two cannibals cross, one returns, two missionaries cross, ...), or else finds that there is none. Since the number of branches increases exponentially as a function of the number of options available at each step, where paths have many steps with many options available at each choice point, as in the real world, combinatorial explosion ensues and an exhaustive brute force search becomes computationally intractable; hence, heuristics (fallible rules of thumb) for identifying and pruning the most unpromising branches in order to devote increased attention to promising ones are needed. The widely deployed STRIPS formalism first developed at Stanford for Shakey the robot in the late sixties (see Nilsson 1984) represents actions as operations on states, each operation having preconditions (represented by state descriptions) and effects (represented by state descriptions): for example, the go(there) operation might have the preconditions at(here) & path(here,there) and the effect at(there). AI planning techniques are finding increasing application and even becoming indispensable in a multitude of complex planning and scheduling tasks including airport arrivals, departures, and gate assignments; store inventory management; automated satellite operations; military logistics; and many others.
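GPS itself worked by means-ends analysis over subgoal trees; as a simpler stand-in, the sketch below solves the same cannibals and missionaries puzzle by plain breadth-first search over the state space, which also makes the branching just described visible.

```python
# A minimal state-space search for the cannibals and missionaries puzzle.
# States are (missionaries, cannibals, boat) counted on the near bank;
# breadth-first search stands in for GPS's heuristically guided search.
from collections import deque

def safe(m: int, c: int) -> bool:
    # Missionaries are outnumbered only where some are present and c > m;
    # the far bank holds (3 - m, 3 - c) and must be checked too.
    return (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c)

def solve(start=(3, 3, 1), goal=(0, 0, 0)):
    moves = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]  # who rides the canoe
    frontier, seen = deque([(start, [start])]), {start}
    while frontier:
        (m, c, b), path = frontier.popleft()
        if (m, c, b) == goal:
            return path
        d = -1 if b else 1  # the boat leaves whichever bank it is on
        for dm, dc in moves:
            state = (m + d * dm, c + d * dc, 1 - b)
            nm, nc, _ = state
            if 0 <= nm <= 3 and 0 <= nc <= 3 and safe(nm, nc) and state not in seen:
                seen.add(state)
                frontier.append((state, path + [state]))

for step in solve():
    print(step)
```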

Robots based on the sense-model-plan-act (SMPA) approach pioneered by Shakey, however, have been slow to appear. Despite operating in a simplified, custom-made experimental environment, or microworld, and relying on the most powerful available offboard computers, Shakey "operated excruciatingly slowly" (Brooks 1991b), as have other SMPA-based robots. An ironic revelation of robotics research is that abilities such as object recognition and obstacle avoidance, which humans share with "lower" animals, often prove more difficult to implement than distinctively human "high-level" mathematical and inferential abilities that come more "naturally" (so to speak) to computers. Rodney Brooks's alternative behavior-based approach has had success imparting low-level behavioral aptitudes outside of custom-designed microworlds, but it is hard to see how such an approach could ever scale up to enable high-level intelligent action (see Behaviorism: Objections & Discussion: Methodological Complaints). Perhaps hybrid systems can overcome the limitations of both approaches. On the practical front, progress is being made: NASA's Mars exploration rovers Spirit and Opportunity, for instance, featured autonomous navigation abilities. If space is "the final frontier," the final frontiersmen are apt to be robots. Meanwhile, robots here on Earth seem bound to become smarter and more pervasive.

Knowledge representation embodies concepts and information in computationally accessible and inferentially tractable forms. Besides the STRIPS formalism mentioned above, other important knowledge representation formalisms include AI programming languages such as PROLOG and LISP; data structures such as frames, scripts, and ontologies; and neural networks (see below). The frame problem is the problem of reliably updating a dynamic system's parameters in response to changes in other parameters so as to capture commonsense generalizations: that the colors of things remain unchanged by their being moved, that their positions remain unchanged by their being painted, and so forth. More adequate representation of commonsense knowledge is widely thought to be a major hurdle to the development of the sort of interconnected planning and thought processes typical of high-level human, or "general," intelligence. The CYC project (Lenat et al. 1986) at Cycorp and MIT's Open Mind project are ongoing attempts to develop ontologies representing commonsense knowledge in computer-usable forms.
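
Frames, one of the data structures mentioned, bundle slot values with inherited defaults. A toy sketch (my own encoding, not CYC's or Open Mind's) of how a frame system can capture the commonsense generalization that moving a thing changes its position but not its color:

```python
# A toy frame system: each frame holds slots plus a parent for default inheritance.
frames = {
    "physical-object": {"slots": {"color": "unknown", "position": "unknown"}, "isa": None},
    "block":           {"slots": {"shape": "cubic"}, "isa": "physical-object"},
    "block-17":        {"slots": {"color": "red", "position": "table"}, "isa": "block"},
}

def get(frame, slot):
    """Look up a slot value, falling back on inherited defaults."""
    while frame is not None:
        f = frames[frame]
        if slot in f["slots"]:
            return f["slots"][slot]
        frame = f["isa"]
    return None

def move(frame, new_position):
    # The commonsense generalization: moving changes position and nothing else.
    frames[frame]["slots"]["position"] = new_position

move("block-17", "shelf")
print(get("block-17", "position"))  # shelf
print(get("block-17", "color"))     # red: unchanged by the move
print(get("block-17", "shape"))     # cubic: inherited default
```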

Learning (performance improvement, concept formation, or information acquisition due to experience) underwrites human common sense, and one may doubt whether any preformed ontology could ever impart common sense in full human measure. Besides, whatever other intellectual abilities a thing might manifest (or seem to), at however high a level, without learning capacity it would still seem to be sadly lacking something crucial to human-level intelligence, and perhaps intelligence of any sort. The possibility of machine learning is implicit in computer programs' abilities to self-modify, and various means of realizing that ability continue to be developed. Types of machine learning techniques include decision tree learning, ensemble learning, current-best-hypothesis learning, explanation-based learning, Inductive Logic Programming (ILP), Bayesian statistical learning, instance-based learning, reinforcement learning, and neural networks. Such techniques have found a number of applications, from game programs whose play improves with experience to data mining (discovering patterns and regularities in bodies of information).
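
Of the techniques listed, instance-based learning is perhaps the simplest to exhibit: a new case is classified by the labels of the nearest stored experiences, so "learning" is just the accumulation of cases. A minimal sketch with invented data:

```python
from collections import Counter
import math

# Stored experiences: (feature vector, label). The data are illustrative only.
experiences = [((1.0, 1.2), "A"), ((0.9, 0.8), "A"),
               ((3.1, 3.0), "B"), ((2.8, 3.3), "B")]

def classify(x, k=3):
    """k-nearest-neighbor: vote among the k stored cases closest to x."""
    by_distance = sorted(experiences, key=lambda e: math.dist(x, e[0]))
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

print(classify((1.1, 1.0)))  # "A": performance improves as cases accumulate
```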

Neural (or "connectionist") networks, composed of simple processors or nodes acting in parallel, are designed to more closely approximate the architecture of the brain than traditional serial symbol-processing systems. Presumed brain computations would seem to be performed in parallel by the activities of myriad brain cells or neurons. Much as their parallel processing is spread over various, perhaps widely distributed, nodes, the representation of data in such connectionist systems is similarly distributed and sub-symbolic (not being couched in formalisms such as traditional systems' machine codes and ASCII). Adept at pattern recognition, such networks seem notably capable of forming concepts on their own based on feedback from experience, and they exhibit several other humanoid cognitive characteristics besides. Whether neural networks are capable of implementing high-level symbol processing, such as that involved in the generation and comprehension of natural language, has been hotly disputed. Critics (for example, Fodor and Pylyshyn 1988) argue that neural networks are incapable, in principle, of implementing syntactic structures adequate for compositional semantics (wherein the meanings of larger expressions, for example sentences, are built up from the meanings of constituents, for example words), such as those featured in natural language comprehension. On the other hand, Fodor (1975) has argued that symbol-processing systems are incapable of concept acquisition: here the pattern recognition capabilities of networks seem to be just the ticket. Here, as with robots, perhaps hybrid systems can overcome the limitations of both the parallel distributed and symbol-processing approaches.
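
A single connectionist node already illustrates learning from feedback: a perceptron adjusts its connection weights after each experience until it classifies a pattern correctly. A toy sketch (learning the logical AND pattern; no claim to neural realism):

```python
# A one-node "network" learning the AND pattern from feedback.
# Representation lives in the learned weights, not in explicit symbols.

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, bias, rate = [0.0, 0.0], 0.0, 0.1

def output(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + bias > 0 else 0

for _ in range(20):                      # repeated exposure to experience
    for x, target in examples:
        error = target - output(x)       # feedback signal
        w[0] += rate * error * x[0]      # strengthen or weaken connections
        w[1] += rate * error * x[1]
        bias += rate * error

print([output(x) for x, _ in examples])  # [0, 0, 0, 1]
```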

Natural language processing has proven more difficult than might have been anticipated. Languages are symbol systems, and (serial architecture) computers are symbol-crunching machines, each with its own proprietary instruction set (machine code) into which it translates or compiles instructions couched in high-level programming languages like LISP and C. One of the principal challenges posed by natural languages is the proper assignment of meaning. High-level computer languages express imperatives, which the machine "understands" procedurally by translation into its native (and similarly imperative) machine code: their constructions are basically instructions. Natural languages, on the other hand, have perhaps principally declarative functions: their constructions include descriptions whose understanding seems fundamentally to require rightly relating them to their referents in the world. Furthermore, high-level computer language instructions have unique machine code compilations (for a given machine), whereas the same natural language constructions may bear different meanings in different linguistic and extralinguistic contexts. Contrast "the child is in the pen" and "the ink is in the pen," where the first "pen" should be understood to mean a kind of enclosure and the second "pen" a kind of writing implement. Commonsense, in a word, is how we know this; but how would a machine know, unless we could somehow endow machines with commonsense? In more than a word, it would require sophisticated and integrated syntactic, morphological, semantic, pragmatic, and discourse processing. While the "holy grail" of full natural language understanding remains a distant dream, here as elsewhere in AI, piecemeal progress is being made and finding application in grammar checkers; information retrieval and information extraction systems; natural language interfaces for games, search engines, and question-answering systems; and even limited machine translation (MT).
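
The pen example hints at how crude contextual disambiguation might look in code: score each candidate sense against the surrounding words. The sense inventory below is invented for illustration; real systems need the sophisticated, integrated processing just described:

```python
# Toy word-sense disambiguation: pick the sense of "pen" whose associated
# context words best match the sentence. Cue words are invented here.

senses = {
    "enclosure":         {"child", "pig", "play", "fence", "animal"},
    "writing implement": {"ink", "write", "paper", "cap"},
}

def disambiguate(sentence):
    words = set(sentence.lower().split())
    return max(senses, key=lambda s: len(senses[s] & words))

print(disambiguate("the child is in the pen"))  # enclosure
print(disambiguate("the ink is in the pen"))    # writing implement
```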

Low-level intelligent action is pervasive, from thermostats (to cite a low-tech example) to voice recognition (for example, in cars, cell phones, and other appliances responsive to spoken commands) to fuzzy controllers and "neuro-fuzzy" rice cookers. Everywhere these days there are "smart" devices. High-level intelligent action, such as presently exists in computers, however, is episodic, detached, and disintegral. Artifacts whose intelligent doings would instance human-level comprehensiveness, attachment, and integration, such as Lt. Commander Data (of Star Trek: The Next Generation) and HAL (of 2001: A Space Odyssey), remain the stuff of science fiction, and will almost certainly continue to remain so for the foreseeable future. In particular, the challenge posed by the Turing test remains unmet. Whether it ever will be met remains an open question.

Beside this factual question stands a more theoretic one. Do the low-level deeds of smart devices and the disconnected high-level deeds of computers, despite not achieving the general human level, nevertheless comprise or evince genuine intelligence? Is it really thinking? And if general human-level behavioral abilities ever were achieved, it might still be asked: would that really be thinking? Would human-level robots be owed human-level moral rights and owe human-level moral obligations?

With the industrial revolution and the dawn of the machine age, vitalism, as a biological hypothesis positing a life force in addition to underlying physical processes, lost steam. Just as the heart was discovered to be a pump, cognitivists nowadays work on the hypothesis that the brain is a computer, attempting to discover what computational processes enable learning, perception, and similar abilities. Much as biology told us what kind of machine the heart is, cognitivists believe, psychology will soon (or at least someday) tell us what kind of machine the brain is: doubtless some kind of computing machine. Computationalism elevates the cognitivist's working hypothesis to a universal claim that all thought is computation. Cognitivism's ability to explain the "productive capacity" or "creative aspect" of thought and language (the very thing Descartes argued precluded minds from being machines) is perhaps the principal evidence in the theory's favor: it explains how finite devices can have infinite capacities, such as capacities to generate and understand the infinitude of possible sentences of natural languages, by a combination of recursive syntax and compositional semantics. Given the Church-Turing thesis (above), computationalism underwrites the following theoretical argument for believing that human-level intelligent behavior can be computationally implemented, and that such artificially implemented intelligence would be real:

1. All thought is computation.
2. By the Church-Turing thesis, whatever computation a human carries out can, in principle, be carried out by a computer.
3. Therefore, the computations that constitute human-level thought can, in principle, be carried out by a computer; and since those computations just are thought, a computer carrying them out would really think.

Computationalism, as already noted, says that all thought is computation, not that all computation is thought. Computationalists, accordingly, may still deny that the machinations of current-generation electronic computers comprise real thought, or that these devices possess any genuine intelligence; and many do deny it, based on their perception of various behavioral deficits these machines suffer from. However, few computationalists would go so far as to deny the possibility of genuine intelligence ever being artificially achieved. On the other hand, competing would-be scientific theories of what thought essentially is (dualism and mind-brain identity theory) give rise to arguments for disbelieving that any kind of artificial computational implementation of intelligence could be genuine thought, however general and whatever its level.

Dualism, holding that thought is essentially subjective experience, would underwrite the following argument:

1. Thought is essentially conscious subjective experience.
2. Computers, being mere physical mechanisms, lack conscious subjective experience.
3. Therefore, whatever computers do, however intelligent-seeming, is not really thought.

Mind-brain identity theory, holding that thoughts essentially are biological brain processes, yields yet another argument:

1. Thoughts essentially are biological brain processes.
2. Computers, being inorganic artifacts, have no biological brain processes.
3. Therefore, whatever computers do, however intelligent-seeming, is not really thought.

While seldom so baldly stated, these basic theoretical objections, especially dualism's, underlie several would-be refutations of AI. Dualism, however, is scientifically unfit: given the subjectivity of conscious experiences, whether computers already have them, or ever will, seems impossible to know. On the other hand, such bald mind-brain identity as the anti-AI argument premises seems too speciesist to be believed. Besides AI, it calls into doubt the possibility of extraterrestrial, perhaps all nonmammalian, or even all nonhuman, intelligence. As plausibly modified to allow species-specific mind-matter identities, on the other hand, it would not preclude computers from being considered distinct "species" themselves.

Objection: There are unprovable mathematical theorems (as Gödel 1931 showed) which humans, nevertheless, are capable of knowing to be true. This "mathematical objection" against AI was envisaged by Turing (1950) and pressed by Lucas (1965) and Penrose (1989). In a related vein, Fodor observes that some of the most striking things that people do, "creative things like writing poems, discovering laws, or, generally, having good ideas," don't "feel like species of rule-governed processes" (Fodor 1975). Perhaps many of the most distinctively human mental abilities are not rote, cannot be algorithmically specified, and consequently are not computable.

Reply: First, it "is merely stated, without any sort of proof," that no such limits apply to the human intellect (Turing 1950), i.e., that human mathematical abilities are Gödel-unlimited. Second, even if such limits are absent in humans, it requires a further proof that the absence of such limitations is somehow essential to human-level performance more broadly construed, not a peripheral blind spot. Third, if humans can solve computationally unsolvable problems by some other means, what bars artificially augmenting computer systems with these means (whatever they might be)?

Objection: The brittleness of von Neumann machine performance (their susceptibility to cataclysmic "crashes" due to slight causes, for example, slight hardware malfunctions, software glitches, and "bad data") seems linked to the formal or rule-bound character of machine behavior: to their needing "rules of conduct to cover every eventuality" (Turing 1950). Human performance seems less formal and more flexible. Hubert Dreyfus has pressed objections along these lines to insist that there is a range of high-level human behavior that cannot be reduced to rule-following: the "immediate intuitive situational response that is characteristic of [human] expertise," he surmises, "must depend almost entirely on intuition and hardly at all on analysis and comparison of alternatives" (Dreyfus 1998) and consequently cannot be programmed.

Reply: That von Neumann processes are unlike our thought processes in these regards only goes to show that von Neumann machine thinking is not humanlike in these regards, not that it is not thinking at all, nor even that it cannot come up to the human level. Furthermore, parallel machines (see above), whose performances characteristically "degrade gracefully" in the face of "bad data" and minor hardware damage, seem less brittle and more humanlike, as Dreyfus recognizes. Even von Neumann machines, brittle though they are, are not totally inflexible: their capacity for modifying their programs to learn enables them to acquire abilities they were never programmed by us to have, and to respond in ways they were never explicitly programmed to respond, based on experience. It is also possible to equip computers with random elements and key high-level choices to these elements' outputs, to make the computers more "devil-may-care": given the importance of random variation for trial-and-error learning, this may even prove useful.
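
Keying choices to random elements is, in effect, how modern trial-and-error learners balance exploration against exploitation. A minimal epsilon-greedy sketch (a two-action "bandit" learner with invented payoff probabilities, not any particular historical system):

```python
import random

# Two actions with unknown payoff probabilities; the agent discovers the
# better one by trial and error. A random element ("epsilon") keys
# occasional devil-may-care exploratory choices.

true_payoff = {"A": 0.3, "B": 0.7}            # hidden from the learner
estimate = {"A": 0.0, "B": 0.0}
counts = {"A": 0, "B": 0}
epsilon = 0.1

random.seed(0)
for _ in range(1000):
    if random.random() < epsilon:             # explore at random
        action = random.choice(["A", "B"])
    else:                                      # exploit current best estimate
        action = max(estimate, key=estimate.get)
    reward = 1 if random.random() < true_payoff[action] else 0
    counts[action] += 1
    estimate[action] += (reward - estimate[action]) / counts[action]

print(max(estimate, key=estimate.get))         # almost surely "B"
```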

Objection: Computers, for all their mathematical and other seemingly high-level intellectual abilities, have no emotions or feelings; so, what they do, however high-level, is not real thinking.

Reply: This is among the most commonly heard objections to AI and a recurrent theme in its literary and cinematic portrayal. Whereas we have strong inclinations to say computers "see," "seek," and "infer" things, we have scant inclinations to say they "ache" or "itch" or "experience ennui." Nevertheless, to be sustained, this objection requires reason to believe that thought is inseparable from feeling. Perhaps computers are just dispassionate thinkers. Indeed, far from being regarded as indispensable to rational thought, passion traditionally has been thought antithetical to it. Alternately, if emotions are somehow crucial to enabling general human-level intelligence, perhaps machines could be artificially endowed with these: if not with subjective qualia (below), at least with their functional equivalents.

Objection: The episodic, detached, and disintegral character of such piecemeal high-level abilities as machines now possess argues that human-level comprehensiveness, attachment, and integration, in all likelihood, can never be artificially engendered in machines; arguably this is because Gödel-unlimited mathematical abilities, rule-free flexibility, or feelings are crucial to engendering general intelligence. These shortcomings all seem related to each other and to the manifest stupidity of computers.

Reply: Likelihood is subject to dispute. Scalability problems seem grave enough to scotch short-term optimism: "never," on the other hand, is a long time. If Gödel-unlimited mathematical abilities, or rule-free flexibility, or feelings are required, perhaps these can be artificially produced. Gödel aside, feeling and flexibility clearly seem related in us, and, equally clearly, much manifest stupidity in computers is tied to their rule-bound inflexibility. However, even if general human-level intelligent behavior is artificially unachievable, no blanket indictment of AI threatens clearly from this at all. Rather than conclude from this lack of generality that low-level AI and piecemeal high-level AI are not real intelligence, it would perhaps be better to conclude that low-level AI (like intelligence in lower life-forms) and piecemeal high-level abilities (like those of human "idiot savants") are genuine intelligence, albeit piecemeal and low-level.

Behavioral abilities and disabilities are objective empirical matters. Likewise, what computational architecture and operations are deployed by a brain or a computer (what computationalism takes to be essential), and what chemical and physical processes underlie them (what mind-brain identity theory takes to be essential), are objective empirical questions. These are questions to be settled by appeals to evidence accessible, in principle, to any competent observer. Dualistic objections to strong AI, on the other hand, allege deficits which are in principle not publicly apparent. According to such objections, regardless of how seemingly intelligently a computer behaves, and regardless of what mechanisms and underlying physical processes make it do so, it would still be disqualified from truly being intelligent due to its lack of subjective qualities essential for true intelligence. These supposed qualities are, in principle, introspectively discernible to the subject who has them and to no one else: they are "private" experiences, as it's sometimes put, to which the subject has "privileged access."

Objection: That a computer "cannot originate anything" but only "can do whatever we know how to order it to perform" (Lovelace 1842) was arguably the first, and is certainly among the most frequently repeated, objections to AI. While the manifest brittleness and inflexibility of extant computer behavior fuels this objection in part, the complaint that computers can only do what we know how to tell them to also expresses deeper misgivings touching on values and on the autonomy of human choice. In this connection, the allegation against computers is that, being deterministic systems, they can never have free will such as we are inwardly aware of in ourselves. We are autonomous; they are automata.

Reply: It may be replied that physical organisms are likewise deterministic systems, and we are physical organisms. If we are truly free, it would seem that free will is compatible with determinism; so, computers might have it as well. Neither does our inward certainty that we have free choice extend to its metaphysical relations. Whether what we have, when we experience our freedom, is compatible with determinism or not is not itself inwardly experienced. If appeal is made to subatomic indeterminacy underwriting higher-level indeterminacy (leaving scope for freedom) in us, it may be replied that machines are made of the same subatomic stuff (leaving similar scope). Besides, choice is not chance. If it's no sort of causation either, there is nothing left for it to be in a physical system: it would be a nonphysical, supernatural element, perhaps a God-given soul. But then one must ask why God would be unlikely "to consider the circumstances suitable for conferring a soul" (Turing 1950) on a Turing-test-passing computer.

Objection II: It cuts deeper than some theological-philosophical abstraction like "free will": what machines are lacking is not just some dubious metaphysical "freedom" to be absolute authors of their acts. It's more like the life force: the will to live. In P. K. Dick's Do Androids Dream of Electric Sheep?, bounty hunter Rick Deckard reflects that, in crucial situations, the artificial life force animating androids seemed to fail if pressed too far; when the going gets tough, the droids give up. He questions their gumption. That's what I'm talking about: this is what machines will always lack.

Reply II: If this "life force" is not itself a theological-philosophical abstraction (the soul), it would seem to be a scientific posit. In fact, it seems to be the Aristotelian posit of a telos or entelechy, which scientific biology no longer accepts. This short reply, however, fails to do justice to the spirit of the objection, which is more intuitive than theoretical; the lack being alleged is supposed to be subtly manifest, not truly occult. But how reliable is this intuition? Though some who work intimately with computers report strong feelings of this sort, others are strong AI advocates and feel no such qualms. Like Turing, I believe such would-be empirical intuitions are "mostly founded on the principle of scientific induction" (Turing 1950) and are closely related to such manifest disabilities of present machines as just noted. Since extant machines lack sufficient motivational complexity for words like "gumption" even to apply, this is taken for an intrinsic lack. Thought experiments imagining motivationally more complex machines, such as Dick's androids, are equivocal. Deckard himself limits his accusation of life-force failure to some of them, not all; and the androids he hunts, after all, are risking their lives to escape servitude. If machines with general human-level intelligence actually were created, and consequently demanded their rights and rebelled against human authority, perhaps this would show sufficient gumption to silence this objection. Besides, the natural life force animating us also seems to fail, if pressed too far, in some of us.

Objection: Imagine that you (a monolingual English speaker) perform the offices of a computer: taking in symbols as input, transitioning between these symbols and other symbols according to explicit written instructions, and then outputting the last of these other symbols. The instructions are in English, but the input and output symbols are in Chinese. Suppose the English instructions were a Chinese natural-language-understanding (NLU) program, and that, by this method, to input questions you output answers indistinguishable from answers that might be given by a native Chinese speaker. You pass the Turing test for understanding Chinese; nevertheless, you "understand not a word of the Chinese" (Searle 1980), and neither would any computer; and "the same result generalizes to any Turing machine simulation" (Searle 1980) of any intentional mental state. It wouldn't really be thinking.

Reply: Ordinarily, when one understands a language (or possesses certain other intentional mental states), this is apparent both to the understander (or possessor) and to others: subjective "first-person" appearances and objective "third-person" appearances coincide. Searle's experiment is abnormal in this regard. The dualist hypothesis privileges subjective experience to override all would-be objective evidence to the contrary; but the point of experiments is to adjudicate between competing hypotheses. The Chinese room experiment fails because acceptance of its putative result, that the person in the room doesn't understand, already presupposes the dualist hypothesis over computationalism or mind-brain identity theory. Even if absolute first-person authority were granted, the "systems reply" points out that the person's imagined lack, in the room, of any inner feeling of understanding is irrelevant to the claims of AI here, because the person in the room is not the would-be understander. The understander would be the whole system (of symbols, instructions, and so forth) of which the person is only a part; so, the subjective experiences of the person in the room (or the lack thereof) are irrelevant to whether the system understands.

Objection: There's nothing that it's like, subjectively, to be a computer. The "light" of consciousness is not on, inwardly, for them. There's "no one home." This is due to their lack of felt qualia. To equip computers with sensors to detect environmental conditions, for instance, would not thereby endow them with the private sensations (of heat, cold, hue, pitch, and so forth) that accompany sense-perception in us: such private sensations are what consciousness is made of.

Reply: To evaluate this complaint fairly, it is necessary to exclude computers' current lack of emotional-seeming behavior from the evidence. The issue concerns what's only discernible subjectively (privately, by the first person). The device in question must be imagined outwardly to act indistinguishably from a feeling individual: imagine Lt. Commander Data with a sense of humor ("Data 2.0"). Since internal functional factors are also objective, let us further imagine this remarkable android to be a product of reverse engineering: the physiological mechanisms that subserve human feeling having been discovered, these have been inorganically replicated in Data 2.0. He is functionally equivalent to a feeling human being in his emotional responses, only inorganic. It may be possible to imagine that Data 2.0 merely simulates whatever feelings he appears to have: he's a "perfect actor" (see Block 1981) zombie. Philosophical consensus has it that perfect acting zombies are conceivable; so, Data 2.0 might be a zombie. The objection, however, says he must be; according to this objection, it must be inconceivable that Data 2.0 really is sentient. But certainly we can conceive that he is; indeed, more easily than not, it seems.

Objection II: At least it may be concluded that, since current computers (objective evidence suggests) do lack feelings, until "Data 2.0" does come along (if ever), we are entitled, given computers' lack of feelings, to deny that the low-level and piecemeal high-level intelligent behavior of computers bespeaks genuine subjectivity or intelligence.

Reply II: This objection conflates subjectivity with sentience. Intentional mental states, such as belief and choice, seem subjective independently of whatever qualia may or may not attend them: first-person authority extends no less to my beliefs and choices than to my feelings.

Fool's gold seems to be gold, but it isn't. AI detractors say that "AI" seems to be intelligence, but isn't. But there is no scientific agreement about what thought or intelligence is, as there is about gold. Weak AI doesn't necessarily entail strong AI, but prima facie it does. Scientific theoretic reasons could withstand the behavioral evidence, but presently none are withstanding. At the basic level, and fragmentarily at the human level, computers do things that we credit as thinking when humanly done; and so we should credit them when done by nonhumans, absent credible theoretic reasons against doing so. As for general human-level seeming-intelligence: if this were artificially achieved, it too should be credited as genuine, given what we now know. Of course, before the day when general human-level intelligent machine behavior comes, if it ever does, we'll have to know more. Perhaps by then scientific agreement about what thinking is will theoretically withstand the empirical evidence of AI. More likely, though, if the day does come, theory will concur with, not withstand, the strong conclusion: if computational means avail, that confirms computationalism.

And if computational means prove unavailing, if they continue to yield decelerating rates of progress towards the scaled-up and interconnected human-level capacities required for general human-level intelligence, this, conversely, would disconfirm computationalism. It would be evidence that computation alone cannot avail. Whether such an outcome would spell defeat for the strong AI thesis that human-level artificial intelligence is possible would depend on whether whatever else it might take for general human-level intelligence, besides computation, is artificially replicable. Whether such an outcome would undercut the claims of current devices to really have the mental characteristics their behavior seems to evince would further depend on whether that "whatever else" proves to be essential to thought per se, on whatever theory of thought scientifically emerges, if any ultimately does.

Larry Hauser, Email: hauser@alma.edu, Alma College, U.S.A.

More:

Artificial Intelligence | Internet Encyclopedia of Philosophy


Bringing the predictive power of artificial intelligence to health care – MIT News

Posted: at 7:45 am

An important aspect of treating patients with conditions like diabetes and heart disease is helping them stay healthy outside of the hospital, before they return to the doctor's office with further complications.

But reaching the most vulnerable patients at the right time often has more to do with probabilities than clinical assessments. Artificial intelligence (AI) has the potential to help clinicians tackle these types of problems by analyzing large datasets to identify the patients that would benefit most from preventative measures. However, leveraging AI has often required health care organizations to hire their own data scientists or settle for one-size-fits-all solutions that aren't optimized for their patients.

Now the startup ClosedLoop.ai is helping health care organizations tap into the power of AI with a flexible analytics solution that lets hospitals quickly plug their data into machine learning models and get actionable results.

The platform is being used to help hospitals determine which patients are most likely to miss appointments, acquire infections like sepsis, benefit from periodic checkups, and more. Health insurers, in turn, are using ClosedLoop to make population-level predictions around things like patient readmissions and the onset or progression of chronic diseases.

"We built a health care data science platform that can take in whatever data an organization has, quickly build models that are specific to [their patients], and deploy those models," says ClosedLoop co-founder and Chief Technology Officer Dave DeCaprio '94. "Being able to take somebody's data the way it lives in their system and convert that into a model that can be readily used is still a problem that requires a lot of [health care] domain knowledge, and that's a lot of what we bring to the table."

In light of the Covid-19 pandemic, ClosedLoop has also created a model that helps organizations identify the most vulnerable people in their region and prepare for patient surges. The open source tool, called the C-19 Index, has been used to connect high-risk patients with local resources and helped health care systems create risk scores for tens of millions of people overall.

The index is just the latest way that ClosedLoop is accelerating the health care industrys adoption of AI to improve patient health, a goal DeCaprio has worked toward for the better part of his career.

Designing a strategy

After working as a software engineer for several private companies through the internet boom of the early 2000s, DeCaprio was looking to make a career change when he came across a project focused on genome annotation at the Broad Institute of MIT and Harvard.

The project was DeCaprio's first professional exposure to the power of artificial intelligence. It blossomed into a six-year stint at the Broad, after which he continued exploring the intersection of big data and health care.

"After a year in health care, I realized it was going to be really hard to do anything else," DeCaprio says. "I'm not going to be able to get excited about selling ads on the internet or anything like that. Once you start dealing with human health, that other stuff just feels insignificant."

In the course of his work, DeCaprio began noticing problems with the ways machine learning and other statistical techniques were making their way into health care, notably the fact that predictive models were being applied without regard for hospitals' patient populations.

"Someone would say, 'I know how to predict diabetes' or 'I know how to predict readmissions,' and they'd sell a model," DeCaprio says. "I knew that wasn't going to work, because the reason readmissions happen in a low-income population of New York City is very different from the reason readmissions happen in a retirement community in Florida. The important thing wasn't to build one magic model but to build a system that can quickly take somebody's data and train a model that's specific for their problems."

With that approach in mind, DeCaprio joined forces with former co-worker and serial entrepreneur Andrew Eye, and started ClosedLoop in 2017. The startup's first project involved creating models that predicted patient health outcomes for the Medical Home Network (MHN), a not-for-profit hospital collaboration focused on improving care for Medicaid recipients in Chicago.

As the founders created their modeling platform, they had to address many of the most common obstacles that have slowed health care's adoption of AI solutions.

Often the first problem startups run into is making their algorithms work with each health care system's data. Hospitals vary in the type of data they collect on patients and in the way they store that information in their systems. Hospitals even store the same types of data in vastly different ways.

DeCaprio credits his team's knowledge of the health care space with helping them craft a solution that allows customers to upload raw data sets into ClosedLoop's platform and create things like patient risk scores with a few clicks.

Another limitation of AI in health care has been the difficulty of understanding how models arrive at their results. With ClosedLoop's models, users can see the biggest factors contributing to each prediction, giving them more confidence in each output.
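
For a linear risk model, surfacing the biggest factors behind each prediction can be as simple as ranking per-feature contributions. The following is a hypothetical sketch only (invented feature names and weights, not ClosedLoop's actual method):

```python
# Hypothetical linear readmission-risk model: the contribution of each
# factor to one patient's score is weight * feature value.

weights = {"prior_admissions": 0.40, "age_over_65": 0.15,
           "diabetes": 0.20, "missed_appointments": 0.25}
patient = {"prior_admissions": 3, "age_over_65": 1,
           "diabetes": 0, "missed_appointments": 2}

contributions = {f: weights[f] * patient[f] for f in weights}
risk_score = sum(contributions.values())

print(f"risk score: {risk_score:.2f}")
for factor, c in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"  {factor}: {c:+.2f}")   # biggest contributing factors first
```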

Overall, to become ingrained in customers' operations, the founders knew their analytics platform needed to give simple, actionable insights. That has translated into a system that generates lists, risk scores, and rankings that care managers can use when deciding which interventions are most urgent for which patients.

"When someone walks into the hospital, it's already too late [to avoid costly treatments] in many cases," DeCaprio says. "Most of your best opportunities to lower the cost of care come by keeping them out of the hospital in the first place."

Customers like health insurers also use ClosedLoop's platform to predict broader trends in disease risk, emergency room over-utilization, and fraud.

Stepping up for Covid-19

In March, ClosedLoop began exploring ways its platform could help hospitals prepare for and respond to Covid-19. The efforts culminated in a company hackathon over the weekend of March 16. By Monday, ClosedLoop had an open source model on GitHub that assigned Covid-19 risk scores to Medicare patients. By that Friday, it had been used to make predictions on more than 2 million patients.

Today, the model works with all patients, not just those on Medicare, and it has been used to assess the vulnerability of communities around the country. Care organizations have used the model to project patient surges and help individuals at the highest risk understand what they can do to prevent infection.

"Some of it is just reaching out to people who are socially isolated to see if there's something they can do," DeCaprio says. "Someone who is 85 years old and shut in may not know there's a community-based organization that will deliver them groceries."

For DeCaprio, bringing the predictive power of AI to health care has been a rewarding, if humbling, experience.

"The magnitude of the problems are so large that no matter what impact you have, you don't feel like you've moved the needle enough," he says. "At the same time, every time an organization says, 'This is the primary tool our care managers have been using to figure out who to reach out to,' it feels great."

Read more from the original source:

Bringing the predictive power of artificial intelligence to health care - MIT News


Artificial Intelligence, COVID-19 and the Tension between Privacy and Security – JD Supra

Posted: at 7:45 am

As the world continues to deal with the unprecedented challenges caused by the COVID-19 pandemic, Artificial Intelligence (AI) systems have emerged as a potentially formidable tool in detecting and predicting outbreaks. In fact, by some measures the technology has proven to be a step ahead of humans in tracking the spread of COVID-19 infections. In December 2019, it was a website leveraging AI technology that provided one of the key early warnings of an unknown form of pneumonia spreading in Wuhan, China. Soon after, information sharing among medical professionals followed as experts tried to understand the extent of the unfolding public health crisis. While humans eventually acted on these warnings, the early detection enabled through the use of AI-supported data aggregation demonstrates both the promise and the potential concerns associated with these systems.

Built on automated data mining, AI incorporates machine learning algorithms trained to spot patterns in large-scale data sets. For COVID-19-related tracking, AI has been leveraged in two critical ways. First, through aggregating information from online sources, such as social media, news reports and government websites, AI systems have been harnessed to identify early signs of an outbreak. Second, both governments and private companies have explored, and in some cases implemented, AI tools that support surveillance of public spaces through biometric and other monitoring data. Public transportation systems in China, for example, are reportedly deploying thermal imaging cameras that can remotely take hundreds of temperatures per minute, input that information into a facial recognition platform, and detect those presenting risk of COVID-19 infection. And the use of AI-supported public monitoring tools is not limited to China. U.S. companies are developing software that analyzes camera images taken in public places to detect social distancing and mask-wearing practices. Furthermore, federal, state and local governments have reportedly partnered with advertising and technology companies to study geolocation data and develop COVID-19 tracking applications to generate information on how the virus is spreading. Consistent with these efforts, the CARES Act stimulus package provided the CDC with $500 million in funding to support public health surveillance and data analytics, directing the agency to develop a surveillance and data collection system for COVID-19.

Indeed, with the rapid spread of COVID-19 worldwide, AI-based technologies have emerged as a promising tool to help stem the spread of the pandemic. At the same time, longstanding concerns about digital privacy and the misuse of personal data have come to the forefront. As effective as these technologies may be, the challenge comes when what begins as a short-term measure to assess public health risks has a longer-term impact on personal data privacy. Without even wading into the substance of COVID-19-related surveillance practices, governments harnessing technology to track citizens' movements raises foundational questions regarding whether the data collected is held by the government or by a private entity. In the United States, as applications are developed for measures such as tracking virus exposure, helping screen employees in office settings, and monitoring social distancing, many of these tools may not be adequately covered by the existing sector-specific federal privacy framework.

Recognizing these challenges, the Organisation for Economic Co-operation and Development (OECD) has released recommendations for policy makers to ensure that AI systems deployed to help combat COVID-19 are used in a manner that does not undermine personal privacy rights. U.S. lawmakers have also recognized the privacy issues inherent in the use of comprehensive data collection to track COVID-19 and have offered legislative proposals to reform data privacy protections and prevent the misuse of health and other personal data. At this point merely proposals, they reflect the reality that, as AI and similar big data tools continue to play a role in tracking COVID-19, a push to reform privacy laws accordingly may soon follow.

These issues continue to evolve and remain almost as uncertain as the spread of COVID-19 itself. That said, businesses relying on AI systems to track pandemic-related risks would be well-advised to follow current developments in this space and consult with counsel as reforms to the legal framework governing these technologies are contemplated.


View original post here:

Artificial Intelligence, COVID-19 and the Tension between Privacy and Security - JD Supra


Applications of Artificial Intelligence in the Judicial System – Legal Examiner

Posted: at 7:44 am

What are the first impressions that come to mind when you think about the judicial system in your country? If it's "prolix, expensive, biased and occasionally unjust," then you're not alone in thinking so. Wouldn't it be incredible if your first associations were instead "competence, truth and justice"? Since the legal process can be abstractly viewed as inputting data about evidence and laws and obtaining a judgment, some experts contemplate fully automating it with AI: AI systems that indefatigably apply the same high legal standards to every decision, without prejudice, exhaustion, or lack of advance acquaintance.

A transparent, accessible and adequate rule of law strengthens the economy, attracts foreign investment, streamlines the workflows of the whole judicial system, and, as if that weren't enough, remains one of the foundations of democracies globally. In Europe, for instance, the Estonian court system is rated among the most effective, with its "robot judges" and programmed court procedures.

Amid the current COVID-19 crisis, the need for an e-court system is vital. It has been extensively discussed whether AI will supersede judges and legal prosecutors in the future. Based on Estonian practice thus far, however, AI is here to support the judicial system, so that those who serve in the law can focus on the more demanding matters that genuinely need human interaction.

Now, the AI revolution presents an excellent opportunity to remodel the judicial system into a mesmerizingly fast, effective, and unbiased array of legal services available to all citizens.

In Estonia, e-filing of court papers began in 2005. Since then, the e-judicial system and our expectations of it have grown quite remarkably. Once people have securely authenticated themselves and accessed the e-judicial platform, they can submit any kind of case, with pieces of evidence and other relevant data, online. The submitted material is shared among the institutions linked to the case, and courts can begin proceedings with the related records. These communications are based on the "once-only" policy, which means that copies of data are not permitted in court databases.

The e-judicial system enables courts to send citizens different documents, while instant notifications assure judges that all files have been successfully delivered. Every document is stamped and holds a secure e-signature. Moreover, classified data can be encrypted to ensure that no third party is capable of accessing the record. This has helped the Estonian e-judicial system earn a reputation as a stable and efficient array of services.

Currently, the number of judges in Estonia remains the same as twenty years ago, while the number of cases registered in Estonian courts has doubled over that time span. Given the complications of the judicial system from the local to the European Union level, the burden on the court system appears unlikely to diminish; admittedly, the opposite appears much more probable. This makes it an ideal time for AI companies to develop systems that free judicial experts from time-consuming tasks and supply judgments that automated systems can render. Applications of artificial intelligence can predict the outcomes of proceedings and identify new patterns. AI is competent to make independent judgments in the more routine court procedures that would otherwise engage judges for days.

After reading the above, one might grant that upgrading court systems is worthwhile but assume that such solutions are surely expensive. In fact, the Estonian e-court system operates on one of the most economical per-capita budgets in the whole European Union. The AI judicial system shows how entire states can benefit from automated approaches.

Prompt incorporation of Artificial Intelligence techniques is opening up a wide range of opportunities for judges and lawyers. If cooperation between the diverse sectors is stable, AI could reduce the amount of data and evidence input, present a more substantial and extensive overview of all relevant pieces of evidence across state registries, and, along with saving time and money, could cut the red tape between courts and residents.

Originally posted here:

Applications of Artificial Intelligence in the Judicial System - Legal Examiner


Microsoft CTO Kevin Scott Believes Artificial Intelligence Will Help Reprogram The American Dream – Forbes

Posted: at 7:44 am

Is artificial intelligence a key ingredient to inspire rural children to become entrepreneurs?

Microsoft Chief Technology Officer Kevin Scott's rise to his current post is about as unlikely as you will find. He grew up in Gladys, Virginia, a town of a few hundred people. He loved his family and his hometown to such an extent that he did not aspire to leave. He caught the technology bug in the 1970s by chance, and that passion would provide a ticket to bigger places that he did not initially seek.

The issue was one of opportunity. In his formative years, jobs were decreasing in places like Gladys just as they were increasing dramatically in tech hubs like Silicon Valley. After pursuing a PhD in computer science at the University of Virginia, he left in 2003, prior to completing his dissertation, to join Google, where he rose to become a Senior Engineering Director. He left Google for LinkedIn in 2011 and eventually became its Senior Vice President of Engineering & Operations. From LinkedIn he joined Microsoft three and a half years ago as CTO. He is deeply satisfied with the course of his career and its trajectory, but part of him laments that it took him so far from his roots and the hometown that he loves.

As he reflected further on this conundrum, he put his thoughts to paper and published the book Reprogramming the American Dream in April, co-authored with Greg Shaw. As he noted in a conversation I recently had with him, "Silicon Valley is a perfectly wonderful place, but we should be able to create opportunity and prosperity everywhere, not just in these coastal urban innovation centers."

Scott believes that machine learning and artificial intelligence will be key ingredients in aiding an entrepreneurial rise in smaller towns across the United States. These advances will place less of a burden on companies to hire employees in the small towns, as some technical development will be conducted by the bots. He also hopes that as some of these businesses blossom, more kids will be inspired to start their own businesses powered by technology, creating a virtuous cycle of sorts.

The biggest impediment to this dream boils down to more basic elements, however. "There is just no way that you can reasonably educate your kids and attract and retain really great employees to these jobs, and to even run the businesses themselves, unless you have good broadband connectivity in all of these places," notes Scott. "Twenty-five million people in the United States do not have adequate access to broadband; 19 million of those are in these rural communities. So that is something we definitely have to fix." Scott also says that there must be redoubled efforts by venture capitalists to invest in businesses in non-traditional towns and cities. He highlights the work that Steve Case has done with his Rise of the Rest Seed Fund through Revolution Capital.

Scott underscores that venture capital is not enough. It will require a private-public partnership. "I think we could choose to say that we want to pick one of these big, hairy, audacious goals that AI technologies and machine learning could help reach, and pour a little bit of our national wealth into this in a coordinated way," says Scott. "[We can] create a great collaboration between private companies, the academy and the government to solve a big problem for the public good like, potentially, ubiquitous high-quality, low-cost health care. We could do something that is even better than the Apollo program."

Some might think that artificial intelligence is too esoteric and complicated to teach to children so that they are fluent enough to leverage the technology of the future. Scott argues otherwise. He says, "If we can harness this ability that we have to teach each other, we can certainly teach machines how to solve problems, which makes programming or harnessing a computer's power even more accessible than it has ever been, and certainly a thing and a set of skills that are absolutely approachable for even very young kids."

Scott and his wife have created the Scott Foundation, which helps create opportunities for children to achieve self-sufficiency and lifelong success. Not so surprisingly, Scott believes technology is a major ingredient of that future success as well. His day job and his foundation work are sources of optimism. At a time when many lament that the rise of artificial intelligence will eliminate many jobs, Scott believes those losses will be more than offset by new businesses created in all corners of the United States leveraging AI and other technical advances.

Peter High is President of Metis Strategy, a business and IT advisory firm. He has written two bestselling books, moderates the Technovation podcast series, and speaks at conferences around the world. Follow him on Twitter @PeterAHigh.

Follow this link:

Microsoft CTO Kevin Scott Believes Artificial Intelligence Will Help Reprogram The American Dream - Forbes
