Artificial intelligence Facts for Kids

Artificial intelligence (AI) is the ability of a computer program or a machine to think and learn. It is also a field of study which tries to make computers "smart". John McCarthy came up with the name "artificial intelligence" in 1955.

In general use, the term "artificial intelligence" means a machine which mimics human cognition. At least some of the things we associate with other minds, such as learning and problem solving can be done by computers, though not in the same way as we do.

An ideal (perfect) intelligent machine is a flexible agent which perceives its environment and takes actions to maximize its chance of success at some goal. As machines become increasingly capable, mental faculties once thought to require intelligence are removed from the definition. For example, optical character recognition is no longer perceived as an exemplar of "artificial intelligence": it is just a routine technology.
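
To make the "flexible agent" idea concrete, here is a minimal sketch of the classic perceive-decide-act loop. It is an illustration under my own assumptions (a toy one-dimensional world and a hypothetical goal-seeking agent), not something from the original text:

```python
class Environment:
    """A toy world: the agent must reach position `goal` on a short line."""
    def __init__(self, size=10, goal=9):
        self.size, self.goal, self.position = size, goal, 0

    def percept(self):
        """What the agent is allowed to observe: its current position."""
        return self.position

    def act(self, action):
        """Apply an action (-1 or +1) and report whether the goal is reached."""
        self.position = max(0, min(self.size - 1, self.position + action))
        return self.position == self.goal


class GoalSeekingAgent:
    """Picks whichever action it estimates maximizes success at its goal."""
    def __init__(self, goal):
        self.goal = goal

    def choose(self, percept):
        # Score each candidate action by how close it would leave us to the goal.
        return min((-1, +1), key=lambda a: abs((percept + a) - self.goal))


env, agent = Environment(), GoalSeekingAgent(goal=9)
done = False
while not done:
    action = agent.choose(env.percept())   # perceive, then decide
    done = env.act(action)                 # act on the environment
print("goal reached")
```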

At present we use the term AI for successfully understanding human speech, competing at a high level in strategic game systems (such as Chess and Go), self-driving cars, and interpreting complex data. Some people also consider AI a danger to humanity if its progress continues unabated.

An extreme goal of AI research is to create computer programs that can learn, solve problems, and think logically. In practice, however, most applications have picked problems which computers can do well. Searching databases and doing calculations are things computers do better than people. On the other hand, "perceiving its environment" in any real sense is way beyond present-day computing.

AI involves many different fields like computer science, mathematics, linguistics, psychology, neuroscience, and philosophy. Eventually researchers hope to create a "general artificial intelligence" which can solve many problems instead of focusing on just one. Researchers are also trying to create creative and emotional AI which can possibly empathize or create art. Many approaches and tools have been tried.

Stories of objects that look and act like humans appear in every major civilization. The first appearance of artificial intelligence is in Greek myths, like Talos of Crete or the bronze robot of Hephaestus. Humanoid automata were built by Yan Shi, Hero of Alexandria, and Al-Jazari. Sentient machines became popular in fiction during the 19th and 20th centuries with the stories of Frankenstein and R.U.R.

Formal logic was developed by ancient Greek philosophers and mathematicians. This study of logic produced the idea of a computer in the 19th and 20th century. Mathematician Alan Turing's theory of computation showed that a machine, by processing symbols as simple as 1's and 0's, could carry out any conceivable act of mathematical deduction. Advances in neurology, information theory, and cybernetics convinced a small group of researchers that an electronic brain was possible.

AI research really started with a conference at Dartmouth College in 1956. It was a month-long brainstorming session attended by many people with interests in AI. At the conference they wrote programs that were amazing at the time, beating people at checkers or solving word problems. The Department of Defense started giving a lot of money to AI research and labs were created all over the world.

Unfortunately, researchers really underestimated just how hard some problems were. The tools they had used still did not give computers things like emotions or common sense. Mathematician James Lighthill wrote a report on AI saying that "in no part of the field have discoveries made so far produced the major impact that was then promised". The U.S. and British governments wanted to fund more productive projects. Funding for AI research was cut, starting an "AI winter" where little research was done.

AI research revived in the 1980s because of the popularity of expert systems, which simulated the knowledge of a human expert. By 1985, over a billion dollars was being spent on AI. New, faster computers convinced U.S. and British governments to start funding AI research again. However, the market for Lisp machines collapsed in 1987 and funding was pulled again, starting an even longer AI winter.

AI revived again in the 90s and early 2000s with its use in data mining and medical diagnosis. This was possible because of faster computers and focusing on solving more specific problems. In 1997, Deep Blue became the first computer program to beat a reigning chess world champion when it defeated Garry Kasparov. Faster computers, advances in deep learning, and access to more data have made AI popular throughout the world. In 2011 IBM Watson beat the top two Jeopardy! players, Brad Rutter and Ken Jennings, and in 2016 Google's AlphaGo beat top Go player Lee Sedol four games to one.

An ontology represents knowledge as a set of concepts within a domain and the relationships between those concepts.
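
As a rough illustration of that definition (my own toy example, not drawn from the article), an ontology can be stored as a small set of concepts linked by labeled relationships:

```python
# A tiny hand-written ontology: (subject, relationship, object) triples.
ontology = {
    ("dog", "is_a", "mammal"),
    ("mammal", "is_a", "animal"),
    ("dog", "has_part", "tail"),
}

def related(concept, relation):
    """Return every concept linked to `concept` by `relation`."""
    return {obj for subj, rel, obj in ontology if subj == concept and rel == relation}

print(related("dog", "is_a"))   # {'mammal'}
```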

Understanding the Four Types of Artificial Intelligence

The common, and recurring, view of the latest breakthroughs in artificial intelligence research is that sentient and intelligent machines are just on the horizon. Machines understand verbal commands, distinguish pictures, drive cars and play games better than we do. How much longer can it be before they walk among us?

The new White House report on artificial intelligence takes an appropriately skeptical view of that dream. It says the next 20 years likely won't see machines exhibit broadly-applicable intelligence comparable to or exceeding that of humans, though it does go on to say that in the coming years, machines will reach and exceed human performance on more and more tasks. But its assumptions about how those capabilities will develop missed some important points.

As an AI researcher, I'll admit it was nice to have my own field highlighted at the highest level of American government, but the report focused almost exclusively on what I call the boring kind of AI. It dismissed in half a sentence my branch of AI research, into how evolution can help develop ever-improving AI systems, and how computational models can help us understand how our human intelligence evolved.

The report focuses on what might be called mainstream AI tools: machine learning and deep learning. These are the sorts of technologies that have been able to play Jeopardy! well, and beat human Go masters at the most complicated game ever invented. These current intelligent systems are able to handle huge amounts of data and make complex calculations very quickly. But they lack an element that will be key to building the sentient machines we picture having in the future.

We need to do more than teach machines to learn. We need to overcome the boundaries that define the four different types of artificial intelligence, the barriers that separate machines from us and us from them.

There are four types of artificial intelligence: reactive machines, limited memory, theory of mind and self-awareness.

The most basic types of AI systems are purely reactive, and have the ability neither to form memories nor to use past experiences to inform current decisions. Deep Blue, IBM's chess-playing supercomputer, which beat international grandmaster Garry Kasparov in the late 1990s, is the perfect example of this type of machine.

Deep Blue can identify the pieces on a chess board and know how each moves. It can make predictions about what moves might be next for it and its opponent. And it can choose the best moves from among the possibilities.

But it doesn't have any concept of the past, nor any memory of what has happened before. Apart from a rarely used chess-specific rule against repeating the same move three times, Deep Blue ignores everything before the present moment. All it does is look at the pieces on the chess board as it stands right now, and choose from possible next moves.
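
A purely reactive player in this sense can be sketched in a few lines: it keeps no history at all, it only scores the moves legal in the current position and picks the best one. This is a toy stand-in for the idea, assuming hypothetical `legal_moves` and `evaluate` helpers, and is not Deep Blue's actual search algorithm:

```python
def reactive_move(board, legal_moves, evaluate):
    """Choose a move using only the current position -- no memory of past turns.

    `board` is the current state, `legal_moves(board)` lists candidate moves,
    and `evaluate(board, move)` scores the position a move would lead to.
    All three are assumed to be supplied by the game implementation.
    """
    return max(legal_moves(board), key=lambda move: evaluate(board, move))


# Toy usage: a "game" whose state is a number and whose moves add to it.
best = reactive_move(
    board=10,
    legal_moves=lambda b: [-1, 0, +1],
    evaluate=lambda b, m: -abs((b + m) - 12),   # prefer ending up close to 12
)
print(best)   # 1
```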

This type of intelligence involves the computer perceiving the world directly and acting on what it sees. It doesn't rely on an internal concept of the world. In a seminal paper, AI researcher Rodney Brooks argued that we should only build machines like this. His main reason was that people are not very good at programming accurate simulated worlds for computers to use, what is called in AI scholarship a representation of the world.

The current intelligent machines we marvel at either have no such concept of the world, or have a very limited and specialized one for their particular duties. The innovation in Deep Blue's design was not to broaden the range of possible moves the computer considered. Rather, the developers found a way to narrow its view, to stop pursuing some potential future moves, based on how it rated their outcome. Without this ability, Deep Blue would have needed to be an even more powerful computer to actually beat Kasparov.

Similarly, Google's AlphaGo, which has beaten top human Go experts, can't evaluate all potential future moves either. Its analysis method is more sophisticated than Deep Blue's, using a neural network to evaluate game developments.
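
The "narrowing its view" idea, discarding candidate moves whose evaluated outcomes look poor instead of examining everything, can be sketched generically as below. The `evaluate` argument stands in for Deep Blue's hand-tuned evaluation or AlphaGo's neural network; neither is reproduced here, so treat this as an assumption-laden illustration rather than either system's real pruning:

```python
def prune_candidates(moves, evaluate, keep_fraction=0.25):
    """Keep only the most promising fraction of moves, ranked by evaluated score."""
    ranked = sorted(moves, key=evaluate, reverse=True)
    keep = max(1, int(len(ranked) * keep_fraction))
    return ranked[:keep]


# Toy usage: 20 fake moves scored by a made-up evaluation function.
moves = list(range(20))
survivors = prune_candidates(moves, evaluate=lambda m: -(m - 13) ** 2)
print(survivors)   # the handful of moves with the highest scores
```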

These methods do improve the ability of AI systems to play specific games better, but they can't be easily changed or applied to other situations. These computerized imaginations have no concept of the wider world, meaning they can't function beyond the specific tasks they're assigned and are easily fooled.

They can't interactively participate in the world, the way we imagine AI systems one day might. Instead, these machines will behave exactly the same way every time they encounter the same situation. This can be very good for ensuring an AI system is trustworthy: You want your autonomous car to be a reliable driver. But it's bad if we want machines to truly engage with, and respond to, the world. These simplest AI systems won't ever be bored, or interested, or sad.

This Type II class contains machines that can look into the past. Self-driving cars do some of this already. For example, they observe other cars' speed and direction. That can't be done in just one moment, but rather requires identifying specific objects and monitoring them over time.

These observations are added to the self-driving car's preprogrammed representations of the world, which also include lane markings, traffic lights and other important elements, like curves in the road. They're included when the car decides when to change lanes, to avoid cutting off another driver or being hit by a nearby car.

But these simple pieces of information about the past are only transient. They aren't saved as part of the car's library of experience it can learn from, the way human drivers compile experience over years behind the wheel.
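
A minimal sketch of that transient bookkeeping, under my own assumptions rather than any manufacturer's actual software: the car keeps a short buffer of another vehicle's recent positions, estimates its speed and heading from them, and then lets the buffer fall away instead of adding it to any long-term library of experience:

```python
from collections import deque
import math

class TransientTracker:
    """Estimate another car's speed and heading from its last few observations."""
    def __init__(self, horizon=5):
        self.recent = deque(maxlen=horizon)   # old observations simply fall off

    def observe(self, t, x, y):
        self.recent.append((t, x, y))

    def estimate(self):
        if len(self.recent) < 2:
            return None
        (t0, x0, y0), (t1, x1, y1) = self.recent[0], self.recent[-1]
        dt = t1 - t0
        speed = math.hypot(x1 - x0, y1 - y0) / dt             # distance over time
        heading = math.degrees(math.atan2(y1 - y0, x1 - x0))  # direction of travel
        return speed, heading

tracker = TransientTracker()
for t, x, y in [(0, 0.0, 0.0), (1, 3.0, 0.5), (2, 6.1, 1.0)]:
    tracker.observe(t, x, y)
print(tracker.estimate())   # roughly 3.1 m/s, heading about 9 degrees
```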

So how can we build AI systems that build full representations, remember their experiences and learn how to handle new situations? Brooks was right in that it is very difficult to do this. My own research into methods inspired by Darwinian evolution can start to make up for human shortcomings by letting the machines build their own representations.

We might stop here, and call this point the important divide between the machines we have and the machines we will build in the future. However, it is better to be more specific and discuss the types of representations machines need to form, and what they need to be about.

Machines in the next, more advanced, class not only form representations about the world, but also about other agents or entities in the world. In psychology, this is called "theory of mind": the understanding that people, creatures and objects in the world can have thoughts and emotions that affect their own behavior.

This is crucial to how we humans formed societies, because it allowed us to have social interactions. Without understanding each other's motives and intentions, and without taking into account what somebody else knows either about me or the environment, working together is at best difficult, at worst impossible.

If AI systems are indeed ever to walk among us, they'll have to be able to understand that each of us has thoughts and feelings and expectations for how we'll be treated. And they'll have to adjust their behavior accordingly.

The final step of AI development is to build systems that can form representations about themselves. Ultimately, we AI researchers will have to not only understand consciousness, but build machines that have it.

This is, in a sense, an extension of the theory of mind possessed by Type III artificial intelligences. Consciousness is also called self-awareness for a reason. ("I want that item" is a very different statement from "I know I want that item.") Conscious beings are aware of themselves, know about their internal states, and are able to predict the feelings of others. We assume someone honking behind us in traffic is angry or impatient, because that's how we feel when we honk at others. Without a theory of mind, we could not make those sorts of inferences.

While we are probably far from creating machines that are self-aware, we should focus our efforts toward understanding memory, learning and the ability to base decisions on past experiences. This is an important step to understand human intelligence on its own. And it is crucial if we want to design or evolve machines that are more than exceptional at classifying what they see in front of them.

This article was originally published on The Conversation.

4 Main Types of Artificial Intelligence – G2

Although AI is undoubtedly multifaceted, there are specific types of artificial intelligence under which extended categories fall.

What are the four types of artificial intelligence?

There are a plethora of terms and definitions in AI that can make it difficult to navigate the difference between categories, subsets, or types of artificial intelligence, and no, they're not all the same. Some subsets of AI include machine learning, big data, and natural language processing (NLP); however, this article covers the four main types of artificial intelligence: reactive machines, limited memory, theory of mind, and self-awareness.

These four types of artificial intelligence comprise smaller aspects of the general realm of AI.

Reactive machines are the most basic type of AI system. This means that they cannot form memories or use past experiences to influence current decisions; they can only react to currently existing situations, hence "reactive." An existing form of a reactive machine is Deep Blue, a chess-playing supercomputer developed by IBM in the 1990s.

Deep Blue was created to play chess against a human competitor with intent to defeat the competitor. It was programmed with the ability to identify a chess board and its pieces while understanding the pieces' functions. Deep Blue could make predictions about what moves it should make and the moves its opponent might make, thus having an enhanced ability to predict, select, and win. In their matches played in 1996 and 1997, Deep Blue lost the first match to world champion Garry Kasparov but won the 1997 rematch 3.5 games to 2.5, becoming the first computer to defeat a reigning world chess champion in a match.

Deep Blue's skill at accurately and successfully playing chess matches highlights its reactive abilities. In the same vein, its reactive mind also indicates that it has no concept of past or future; it only comprehends and acts on the presently existing world and components within it. To simplify, reactive machines are programmed for the here and now, but not the before and after.

Reactive machines have no concept of the world and therefore cannot function beyond the simple tasks for which they are programmed. A characteristic of reactive machines is that no matter the time or place, these machines will always behave the way they were programmed. There is no growth with reactive machines, only stagnation in recurring actions and behaviors.

Limited memory AI is composed of machine learning models that derive knowledge from previously learned information, stored data, or events. Unlike reactive machines, limited memory systems learn from the past by observing actions or data fed to them in order to build experiential knowledge.

Although limited memory builds on observational data in conjunction with pre-programmed data the machines already contain, these sample pieces of information are fleeting. An existing form of limited memory is autonomous vehicles.

Autonomous vehicles, or self-driving cars, use the principle of limited memory in that they depend on a combination of observational and pre-programmed knowledge. To observe and understand how to properly drive and function among human-dependent vehicles, self-driving cars read their environment, detect patterns or changes in external factors, and adjust as necessary.

Not only do autonomous vehicles observe their environment, but they also observe the movement of other vehicles and people in their line of vision. Previously, driverless cars without limited memory AI took as long as 100 seconds to react and make judgments on external factors. Since the introduction of limited memory, reaction time on machine-based observations has dropped sharply, demonstrating the value of limited memory AI.
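
To illustrate the mix of pre-programmed knowledge and fleeting observations described above (a sketch under my own assumptions, not any vendor's actual logic), a lane-change check might combine fixed rules with the latest, soon-to-be-discarded sensor readings:

```python
# Pre-programmed knowledge: fixed rules the car ships with.
MIN_GAP_METRES = 15.0
MAX_CLOSING_SPEED = 5.0   # m/s; don't pull in front of much faster traffic

def safe_to_change_lane(gap_to_next_car, closing_speed):
    """Combine fixed rules with the most recent (transient) observations.

    `gap_to_next_car` and `closing_speed` come from the latest sensor frame and
    are discarded after the decision -- nothing is learned from them long term.
    """
    return gap_to_next_car >= MIN_GAP_METRES and closing_speed <= MAX_CLOSING_SPEED

print(safe_to_change_lane(gap_to_next_car=22.0, closing_speed=1.5))   # True
print(safe_to_change_lane(gap_to_next_car=8.0, closing_speed=0.0))    # False
```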

Theory of mind would mean machines with decision-making ability equal to that of a human mind. While there are some machines that currently exhibit humanlike capabilities (voice assistants, for instance), none are fully capable of holding conversations relative to human standards. One component of human conversation is having emotional capacity, or sounding and behaving like a person would in standard conventions of conversation.

This future class of machine ability would include understanding that people have thoughts and emotions that affect behavioral output and thus influence a theory of mind machine's thought process. Social interaction is a key facet of human interaction, so to make theory of mind machines tangible, the AI systems that control the now-hypothetical machines would have to identify, understand, retain, and remember emotional output and behaviors while knowing how to respond to them.

From this, said theory of mind machines would have to be able to use the information derived from people and adapt it into their learning centers to know how to communicate with people and respond to different situations. Theory of mind is a highly advanced form of proposed artificial intelligence that would require machines to thoroughly acknowledge rapid shifts in emotional and behavioral patterns in humans, and also understand that human behavior is fluid; thus, theory of mind machines would have to be able to learn rapidly at a moment's notice.

Some elements of theory of mind AI currently exist or have existed in the recent past. Two notable examples are the robots Kismet and Sophia, created in 2000 and 2016, respectively.

Kismet, developed by Professor Cynthia Breazeal, was capable of recognizing human facial signals (emotions) and could replicate said emotions with its face, which was structured with human facial features: eyes, lips, ears, eyebrows, and eyelids.

Sophia, on the other hand, is a humanoid bot created by Hanson Robotics. What distinguishes her from previous robots is her physical likeness to a human being as well as her ability to see (image recognition) and respond to interactions with appropriate facial expressions.

These two humanlike robots are samples of movement toward full theory of mind AI systems materializing in the near future. While neither fully holds the ability to have full-blown human conversation with an actual person, both robots have aspects of emotive ability akin to that of their human counterparts, one step toward seamlessly assimilating into human society.

Self-aware AI involves machines that have human-level consciousness. This form of AI is not currently in existence, but would be considered the most advanced form of artificial intelligence known to man.

Facets of self-aware AI include the ability to not only recognize and replicate humanlike actions, but also to think for itself, have desires, and understand its feelings. Self-aware AI, in essence, is an advancement and extension of theory of mind AI. Where theory of mind only focuses on the aspects of comprehension and replication of human practices, self-aware AI takes it a step further by implying that it can and will have self-guided thoughts and reactions.

We are presently in tier three of the four types of artificial intelligence, so believing that we could potentially reach the fourth (and final?) tier of AI doesn't seem like a far-fetched idea.

But for now, it's important to focus on perfecting all aspects of types two and three in AI. Sloppily speeding through each AI tier could be detrimental to the future of artificial intelligence for generations to come.

Artificial intelligence project lets Holocaust survivors …

Millions perished in the Holocaust, but a group of survivors will now be able to live on, at least via real-time video conversations about their experiences and perspectives, forever. In an innovative attempt to harness the artificial intelligence technologies of the present and the future to keep alive the stories of the past, Holocaust survivors may be the first people ever to be able to continue carrying on conversations (virtually, at least) even after their deaths. Lesley Stahl reports on this fascinating project on the next edition of 60 Minutes, Sunday, April 5 at 7 p.m., ET/PT on CBS.

Heather Maio had worked for years on Holocaust-related exhibits and knew that "Schindler's List" director Steven Spielberg had created the Shoah Foundation to record the testimonies of thousands of Holocaust survivors. But Maio wanted to create something more interactive. "I wanted to talk to a Holocaust survivor like I would today, with that person sitting right in front of me," she told Stahl. Maio believed that artificial intelligence technology could make her notion realizable, so she pitched her idea to Stephen Smith, the executive director of the USC Shoah Foundation in Los Angeles, and now her husband.

Smith was intrigued, but some of his colleagues initially feared it could cheapen, or "Disney-fy," the Holocaust. "We had a lot of pushback on this project," Smith said. "'Is it the right thing to do? Are we trying to keep them alive beyond their deaths?' Everyone had questions except for one group of people, the survivors themselves, who said, 'Where do I sign up?'"

So far, more than 20 interviews, including one with a 93-year-old U.S. Army veteran who helped liberate a concentration camp, have been recorded. Each subject spends a full five days answering questions in an attempt to record responses to every question conceivable. The questions are then logged and alternative questions are entered into the database. Each interview is recorded with more than 20 cameras so that as technology advances and 3D, hologram-type display becomes the norm, all required angles will be available.

Three of the survivors interviewed have since died. One of them was Eva Kor, who appeared on 60 Minutes in 1992 to tell her story of having been experimented on, along with her identical twin sister, by Nazi S.S. physician Josef Mengele. Kor died last summer, but using the Shoah Foundation's technology, Stahl was able to conduct another 60 Minutes interview with Kor's digital image. What was Mengele like? "He had a gorgeous face, a movie star face, and very pleasant, actually," Kor's digital image told Stahl. "Dark hair, dark eyes. When I looked into his eyes, I could see nothing but evil. People say that the eyes are the center of the soul, and in Mengele's case, that was correct."

Stahl interviewed the first Holocaust survivor filmed for the project, Pinchas Gutter, who was sent to the Majdanek concentration camp at age 11 and was the only member of his family to survive. Gutter was asked some 2,000 questions. Stahl spoke to him in person, but she also spoke to his digital image, which can now be seen in Holocaust museums in Dallas, Indiana and Illinois, where visitors can ask him their own questions. As many may wonder far into the future, Stahl asked Gutter how he can still have faith in God after the horrors he experienced. "How can you possibly not believe in God?" Gutter's digital image replied. "God gave human beings the knowledge of right and wrong and he allowed them to do what they wished on this earth, to find their own way. To my mind, when God sees what human beings are up to, especially things like genocide, he weeps."

Artificial intelligence – Simple English Wikipedia, the …

Artificial intelligence (AI) is the ability of a computer program or a machine to think and learn. It is also a field of study which tries to make computers "smart". They work on their own without being given explicit commands. John McCarthy came up with the name "artificial intelligence" in 1955.

In general use, the term "artificial intelligence" means a programme which mimics human cognition. At least some of the things we associate with other minds, such as learning and problem solving can be done by computers, though not in the same way as we do.[1] Andreas Kaplan and Michael Haenlein define AI as "a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation".[2]

An ideal (perfect) intelligent machine is a flexible agent which perceives its environment and takes actions to maximize its chance of success at some goal or objective.[3] As machines become increasingly capable, mental faculties once thought to require intelligence are removed from the definition. For example, optical character recognition is no longer perceived as an exemplar of "artificial intelligence": it is just a routine technology.

At present we use the term AI for successfully understanding human speech,[1] competing at a high level in strategic game systems (such as Chess and Go), self-driving cars, and interpreting complex data.[4] Some people also consider AI a danger to humanity if it continues to progress at its current pace.[5]

An extreme goal of AI research is to create computer programs that can learn, solve problems, and think logically.[6][7] In practice, however, most applications have picked problems which computers can do well. Searching databases and doing calculations are things computers do better than people. On the other hand, "perceiving its environment" in any real sense is way beyond present-day computing.

AI involves many different fields like computer science, mathematics, linguistics, psychology, neuroscience, and philosophy. Eventually researchers hope to create a "general artificial intelligence" which can solve many problems instead of focusing on just one. Researchers are also trying to create creative and emotional AI which can possibly empathize or create art. Many approaches and tools have been tried.

Borrowing from the management literature, Kaplan and Haenlein classify artificial intelligence into three different types of AI systems: analytical, human-inspired, and humanized artificial intelligence.[8] Analytical AI has only characteristics consistent with cognitive intelligence: it generates a cognitive representation of the world and uses learning based on past experience to inform future decisions. Human-inspired AI has elements from cognitive as well as emotional intelligence: in addition to cognitive elements, it understands human emotions and considers them in its decision making. Humanized AI shows characteristics of all types of competencies (i.e., cognitive, emotional, and social intelligence) and is able to be self-conscious and self-aware in interactions with others.

The first appearance of artificial intelligence is in Greek myths, like Talos of Crete or the bronze robot of Hephaestus. Humanoid robots were built by Yan Shi, Hero of Alexandria, and Al-Jazari. Sentient machines became popular in fiction during the 19th and 20th centuries with the stories of Frankenstein and Rossum's Universal Robots.

Formal logic was developed by ancient Greek philosophers and mathematicians. This study of logic produced the idea of a computer in the 19th and 20th century. Mathematician Alan Turing's theory of computation showed that a machine, by processing symbols as simple as 1's and 0's, could carry out any conceivable act of mathematical deduction. Advances in neurology, information theory, and cybernetics convinced a small group of researchers that an electronic brain was possible.

AI research really started with a conference at Dartmouth College in 1956. It was a month-long brainstorming session attended by many people with interests in AI. At the conference they wrote programs that were amazing at the time, beating people at checkers or solving word problems. The Department of Defense started giving a lot of money to AI research and labs were created all over the world.

Unfortunately, researchers really underestimated just how hard some problems were. The tools they had used still did not give computers things like emotions or common sense. Mathematician James Lighthill wrote a report on AI saying that "in no part of the field have discoveries made so far produced the major impact that was then promised".[9] The U.S. and British governments wanted to fund more productive projects. Funding for AI research was cut, starting an "AI winter" where little research was done.

AI research revived in the 1980s because of the popularity of expert systems, which simulated the knowledge of a human expert. By 1985, over a billion dollars was being spent on AI. New, faster computers convinced U.S. and British governments to start funding AI research again. However, the market for Lisp machines collapsed in 1987 and funding was pulled again, starting an even longer AI winter.

AI revived again in the 90s and early 2000s with its use in data mining and medical diagnosis. This was possible because of faster computers and focusing on solving more specific problems. In 1997, Deep Blue became the first computer program to beat a reigning chess world champion when it defeated Garry Kasparov. Faster computers, advances in deep learning, and access to more data have made AI popular throughout the world.[10] In 2011 IBM Watson beat the top two Jeopardy! players, Brad Rutter and Ken Jennings, and in 2016 Google's AlphaGo beat top Go player Lee Sedol four games to one.

Artificial Intelligence Market by Size, Share, Analysis …

CHAPTER 1: Introduction

1.1. Report description
1.2. Key benefits for stakeholders
1.3. Key market segments
1.4. Research methodology
1.4.1. Secondary research
1.4.2. Primary research
1.4.3. Analyst tools & models

CHAPTER 2: Executive summary

2.1. CXO perspective

CHAPTER 3: Market overview

3.1. Market definition and scope
3.2. Key findings
3.2.1. Top investment pockets
3.2.2. Top winning strategies
3.3. Porter's five forces analysis
3.4. Market share analysis, 2017
3.5. Market dynamics
3.5.1. Drivers
3.5.1.1. Increase in investment in AI technologies
3.5.1.2. Growth in demand for analyzing and interpreting large amounts of data
3.5.1.3. Increased customer satisfaction and increased adoption of reliable cloud applications
3.5.2. Restraint
3.5.2.1. Lack of trained and experienced staff
3.5.3. Opportunities
3.5.3.1. Increased adoption of AI in developing regions
3.5.3.2. Developing smarter robots

CHAPTER 4: Artificial intelligence market, by technology

4.1. Market overview
4.1.1. Market size and forecast, by technology
4.2. Machine learning
4.2.1. Key market trends, growth factors, and opportunities
4.2.2. Market size and forecast, by region
4.2.3. Market analysis, by country
4.3. Natural language processing (NLP)
4.3.1. Key market trends, growth factors, and opportunities
4.3.2. Market size and forecast, by region
4.3.3. Market analysis, by country
4.4. Image processing
4.4.1. Key market trends, growth factors, and opportunities
4.4.2. Market size and forecast, by region
4.4.3. Market analysis, by country
4.5. Speech recognition
4.5.1. Key market trends, growth factors, and opportunities
4.5.2. Market size and forecast, by region
4.5.3. Market analysis, by country

CHAPTER 5: Artificial intelligence market, by industry vertical

5.1. Market overview
5.1.1. Market size and forecast, by industry vertical
5.2. Media & advertising
5.2.1. Key market trends, growth factors, and opportunities
5.2.2. Market size and forecast, by region
5.2.3. Market analysis, by country
5.3. BFSI
5.3.1. Key market trends, growth factors, and opportunities
5.3.2. Market size and forecast, by region
5.3.3. Market analysis, by country
5.4. IT & telecom
5.4.1. Key market trends, growth factors, and opportunities
5.4.2. Market size and forecast, by region
5.4.3. Market analysis, by country
5.5. Retail
5.5.1. Key market trends, growth factors, and opportunities
5.5.2. Market size and forecast, by region
5.5.3. Market analysis, by country
5.6. Healthcare
5.6.1. Key market trends, growth factors, and opportunities
5.6.2. Market size and forecast, by region
5.6.3. Market analysis, by country
5.7. Automotive & transportation
5.7.1. Key market trends, growth factors, and opportunities
5.7.2. Market size and forecast, by region
5.7.3. Market analysis, by country
5.8. Other
5.8.1. Key market trends, growth factors, and opportunities
5.8.2. Market size and forecast, by region
5.8.3. Market analysis, by country

CHAPTER 6: Artificial intelligence market, by region

6.1. Market overview
6.2. North America
6.2.1. Key market trends, growth factors, and opportunities
6.2.2. Market size and forecast, by technology
6.2.3. Market size and forecast, by industry vertical
6.2.4. Market size and forecast, by country
6.2.4.1. U.S.
6.2.4.1.1. U.S. market size and forecast, by technology
6.2.4.1.2. U.S. market size and forecast, by industry vertical
6.2.4.2. Canada
6.2.4.2.1. Canada market size and forecast, by technology
6.2.4.2.2. Canada market size and forecast, by industry vertical
6.2.4.3. Mexico
6.2.4.3.1. Mexico market size and forecast, by technology
6.2.4.3.2. Mexico market size and forecast, by industry vertical
6.3. Europe
6.3.1. Key market trends, growth factors, and opportunities
6.3.2. Market size and forecast, by technology
6.3.3. Market size and forecast, by industry vertical
6.3.4. Market size and forecast, by country
6.3.4.1. Germany
6.3.4.1.1. Germany market size and forecast, by technology
6.3.4.1.2. Germany market size and forecast, by industry vertical
6.3.4.2. UK
6.3.4.2.1. UK market size and forecast, by technology
6.3.4.2.2. UK market size and forecast, by industry vertical
6.3.4.3. France
6.3.4.3.1. France market size and forecast, by technology
6.3.4.3.2. France market size and forecast, by industry vertical
6.3.4.4. Russia
6.3.4.4.1. Russia market size and forecast, by technology
6.3.4.4.2. Russia market size and forecast, by industry vertical
6.3.4.5. Rest of Europe
6.3.4.5.1. Rest of Europe market size and forecast, by technology
6.3.4.5.2. Rest of Europe market size and forecast, by industry vertical
6.4. Asia-Pacific
6.4.1. Key market trends, growth factors, and opportunities
6.4.2. Market size and forecast, by technology
6.4.3. Market size and forecast, by industry vertical
6.4.4. Market size and forecast, by country
6.4.4.1. China
6.4.4.1.1. China market size and forecast, by technology
6.4.4.1.2. China market size and forecast, by industry vertical
6.4.4.2. Japan
6.4.4.2.1. Japan market size and forecast, by technology
6.4.4.2.2. Japan market size and forecast, by industry vertical
6.4.4.3. India
6.4.4.3.1. India market size and forecast, by technology
6.4.4.3.2. India market size and forecast, by industry vertical
6.4.4.4. Australia
6.4.4.4.1. Australia market size and forecast, by technology
6.4.4.4.2. Australia market size and forecast, by industry vertical
6.4.4.4.3. Rest of Asia-Pacific market size and forecast, by technology
6.4.4.4.4. Rest of Asia-Pacific market size and forecast, by industry vertical
6.5. LAMEA
6.5.1. Key market trends, growth factors, and opportunities
6.5.2. Market size and forecast, by technology
6.5.3. Market size and forecast, by industry vertical
6.5.4. Market size and forecast, by country
6.5.4.1. Latin America
6.5.4.1.1. Latin America market size and forecast, by technology
6.5.4.1.2. Latin America market size and forecast, by industry vertical
6.5.4.2. Middle East
6.5.4.2.1. Middle East market size and forecast, by technology
6.5.4.2.2. Middle East market size and forecast, by industry vertical
6.5.4.3. Africa
6.5.4.3.1. Africa market size and forecast, by technology
6.5.4.3.2. Africa market size and forecast, by industry vertical

CHAPTER 7: Company profiles

7.1. Alphabet Inc. (Google Inc.)
7.1.1. Company overview
7.1.2. Company snapshot
7.1.3. Operating business segments
7.1.4. Product portfolio
7.1.5. Business performance
7.1.6. Key strategic moves and developments
7.2. Apple Inc.
7.2.1. Company overview
7.2.2. Company snapshot
7.2.3. Operating business segments
7.2.4. Product portfolio
7.2.5. Business performance
7.2.6. Key strategic moves and developments
7.3. Baidu, Inc.
7.3.1. Company overview
7.3.2. Company snapshot
7.3.3. Operating business segments
7.3.4. Product portfolio
7.3.5. Business performance
7.3.6. Key strategic moves and developments
7.4. International Business Machines Corporation
7.4.1. Company overview
7.4.2. Company snapshot
7.4.3. Operating business segments
7.4.4. Product portfolio
7.4.5. Business performance
7.4.6. Key strategic moves and developments
7.5. IPsoft Inc.
7.5.1. Company overview
7.5.2. Company snapshot
7.5.3. Product portfolio
7.5.4. Key strategic moves and developments
7.6. Microsoft Corporation
7.6.1. Company overview
7.6.2. Company snapshot
7.6.3. Operating business segments
7.6.4. Product portfolio
7.6.5. Business performance
7.6.6. Key strategic moves and developments
7.7. MicroStrategy Incorporated
7.7.1. Company overview
7.7.2. Company snapshot
7.7.3. Operating business segment
7.7.4. Product portfolio
7.7.5. Business performance
7.7.6. Key strategic moves and developments
7.8. NVIDIA Corporation
7.8.1. Company overview
7.8.2. Company snapshot
7.8.3. Operating business segments
7.8.4. Product portfolio
7.8.5. Business performance
7.8.6. Key strategic moves and developments
7.9. Qlik Technologies Inc.
7.9.1. Company overview
7.9.2. Company snapshot
7.9.3. Operating business segments
7.9.4. Product portfolio
7.9.5. Key strategic moves and developments

A.I. Artificial Intelligence movie review (2001) | Roger Ebert

In the final act, events take David and Teddy in a submersible to the drowned Coney Island, where they find not only Geppetto's workshop but a Blue Fairy. A collapsing Ferris wheel pins the submarine, and there they remain, trapped and immobile, for 2,000 years, as above them an ice age descends and humans become extinct. David is finally rescued by a group of impossibly slender beings that might be aliens, but are apparently very advanced androids. For them, David is an incalculable treasure: "He is the last who knew humans." From his mind they download all of his memories, and they move him into an exact replica of his childhood home. This reminded me of the bedroom beyond Jupiter constructed for Dave by aliens in Kubrick's "2001." It has the same purpose, to provide a familiar environment in an incomprehensible world. It allows these beings, like the unseen beings in "2001," to observe and learn from behavior.

Watching the film again, I asked myself why I wrote that the final scenes are "problematical," go over the top, and raise questions they aren't prepared to answer. This time they worked for me, and had a greater impact. I began with the assumption that the skeletal silver figures are indeed androids, of a much advanced generation from David's. They too must be programmed to know, love, and serve Man. Let's assume such instructions would be embedded in their programming DNA. They now find themselves in a position analogous to David in his search for his Mommy. They are missing an element crucial to their function.

After some pseudoscientific legerdemain involving a lock of Monica's hair, they are able to bring her back after 2,000 years of death--but only for 24 hours, which is all the space-time continuum permits. Do they do this to make David happy? No, because why would they care? And is a computer happier when it performs its program than when it does not? No. It is either functioning or not functioning. It doesn't know how it feels.

Here is how I now read the film: These new generation mechas are advanced enough to perceive that they cannot function with humans in the absence of humans, and I didn't properly reflect this in my original review of the film. David is their only link to the human past. Whatever can be known about them, he is an invaluable source. In watching his 24 hours with Mommy, they observe him functioning at the top of his ability.

Drug research turns to artificial intelligence in COVID-19 fight – Business in Vancouver

Handol Kim, CEO of Variational AI Inc.: "Even if we're able to collapse the front end, you still have five or six years of clinical trials and who knows if we need a drug in five or six years for COVID-19?" | Rob Kruyt

Variational AI Inc.'s bread and butter lies in novel drug discovery, specifically using artificial intelligence (AI) to compress the years-long preclinical process to perhaps a single year.

But in the midst of a pandemic, even a year might be too long to find a treatment for COVID-19, according to CEO Handol Kim.

"Even if we're able to collapse the front end, you still have five or six years of clinical trials and who knows if we need a drug in five or six years for COVID-19?" he said.

"We thought, 'Well, the fastest way to do this is repurposing existing drugs.'"

The pitch caught the interest of the Digital Technology Supercluster, which last month committed to spending $60 million of its $153 million budget to develop partnerships across its networks to address issues brought on by the pandemic.

Variational AIs partnership with adMare BioInnovations Inc., a not-for-profit organization that helps commercialize academic research, was among the first to get the nod from the Vancouver-based supercluster.

In pharmaceutical development, the way small molecules bind to a target such as a protein can be likened to how a key must fit a lock exactly for anything to happen.

If successful, the molecules can prevent the target (or receptor) from doing something, or else excite the protein into doing something more; in other words, they become the basis of a treatment.

That research takes years, which is why Variational AI is using artificial intelligence to accelerate the process. But the company's algorithm can also take all the approved drugs in the world and use AI to see how they bind and determine which drugs would be most effective against the virus.

No clinical trials are necessary if the drugs have already been approved by authorities.
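
A heavily simplified sketch of the repurposing idea, my own illustration rather than Variational AI's actual (unpublished here) models: score every approved drug against the viral target with some predicted binding function, then rank the candidates:

```python
def rank_repurposing_candidates(approved_drugs, predict_binding):
    """Rank already-approved drugs by predicted affinity for a viral target.

    `approved_drugs` maps a drug name to a machine-readable representation
    (for example a SMILES string); `predict_binding` stands in for a trained
    model that returns a predicted affinity score (higher means binds better).
    """
    scores = {name: predict_binding(rep) for name, rep in approved_drugs.items()}
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)


# Toy usage with made-up drug names and a made-up scoring function.
fake_library = {"drug_a": "CCO", "drug_b": "CCN", "drug_c": "CCC"}
ranking = rank_repurposing_candidates(fake_library, predict_binding=lambda s: len(set(s)))
print(ranking)   # best-scoring candidates first
```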

"While novel drugs and vaccines that are being developed towards COVID-19 are moving along, those kinds of activities, particularly drug development, take many, many years," said Lana Janes, a venture partner at adMare, whose organization is shepherding tech company Variational AI through the pharma world.

"It makes sense to apply it [Variational AI's technology] right now."

This early effort comes after Ottawa mandated on March 20 that the nation's five superclusters reach out to their 1,800 members to come up with ways to tackle COVID-19.

Since then, the Digital Technology Supercluster has been reviewing more than 300 submissions from its 500 members.

The Digital Technology Supercluster has officially given four projects the go-ahead, including the Variational AI-led effort.

Separate local initiatives have also caught the eye of the federal government.

Prime Minister Justin Trudeau announced on May 3 that Vancouver-based AbCellera Biologics Inc. would receive $175 million as it pursues the quick development, manufacturing and distribution of therapeutic antibodies.

Last month American pharmaceutical company Eli Lilly and Co. (NYSE:LLY) partnered with AbCellera to develop a new drug for the treatment and prevention of the COVID-19 virus.

Eli Lilly will use AbCellera's platform to zero in on antibodies generated in a natural immune response to the coronavirus.

The goal is to develop a new drug to treat people who have become infected with the virus.

Clinical trials are expected to begin as early as July.

Kim, meanwhile, said the fact that Variational AI is a tech company wading into pharmaceutical waters allows his team to look at drug development in a different way than the incumbents do.

"What's unprecedented is the speed of the research that has been mobilized across the globe to try and fight this pandemic," Janes said. "It's broken down those silos that can sometimes exist in science."

Artificial Intelligence is Evolving to Process the World Like Humans – Interesting Engineering

As engineers and researchers work on developing and perfecting their machine learning and AI algorithms, the end goal is ultimately to recreate the human brain. The most perfect AI imaginable would be able to process the world around us through typical sensory input but leverage the storage and computing strengths of supercomputers.

With that end goal in mind, it's not hard to understand the ways that AI is evolving as it continues to be developed. Deep learning AI is able to interpret patterns and derive conclusions. In essence, it's learning how to mimic the way that humans process the world around us.

That said, from the outset, AIs generally need typical computer input, like coded data. Developing AIs that can process the world through audio and visual input, sensory input, is a much harder task.

In order to understand artificial intelligence in the context of a perception-based interface, we need to understand what the end goal is. We need to understand how the brain is modeled and works.

Our brains are essentially the world's most powerful supercomputers, except for the fact that they're made out of organic material, rather than silicon and other materials.

Our right brain is largely perception-based: it's focused on the interpretation of environmental inputs like taste, feel, sound, sight, etc. Our left brain, on the other hand, is focused on rational thought. Our senses provide patterns to our right brain, and to our left brain, those senses provide the rationale for decision making. In a sense, we have two AIs in our head that work together to create a logical, yet also emotionally swayed machine.

Human intelligence, and our definition of what an intelligent thing is, all trace back to how we ourselves process the world. In order for artificial intelligence to truly succeed, that is, to be the best version of itself that it can be, it needs to be intelligent from a human perspective.

All of this ties back to modern AI in a simple way: AI is programmed to make decisions. Machine learning algorithms allow code to be generated pseudo-organically so that algorithms can "learn" in a sense. All of this programming is based on reasoning, on "if, then, do this."

Arguably, our brain's decision-making process is just as much based on emotions and feeling as it is on reason. Emotional intelligence is a significant part of what makes up intelligence. It's the ability to read a situation, to understand other humans' emotions and reactions. In order for AIs to evolve and be the best possible algorithms, they need to be able to process sensory input and emotion.

Most artificial intelligence systems are primarily created on the foundation of deep learning algorithms. This means exposing a computer program to thousands of examples so that the AI learns how to solve problems through the process. Deep learning can be boiled down to teaching a computer how to be smart.

After any given deep learning phase for an AI, the system can perceive the inputs that it was trained on and make decisions accordingly. The decision-making tree that the AI forms from traditional deep learning mimics the way the right side of our brain works. It is based on the perception of inputs, of pseudo-senses.
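
The "thousands of examples" idea boils down to a loop like the following minimal sketch: show the model labelled examples, measure how wrong it is, and nudge its parameters to be a little less wrong. This is a generic supervised-learning skeleton written from scratch (a single linear unit, my own assumptions), not any particular framework's API:

```python
import random

def train(examples, epochs=300, lr=0.01):
    """Fit a single linear unit y ~ w*x + b from labelled (x, y) examples."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        random.shuffle(examples)
        for x, y in examples:
            prediction = w * x + b
            error = prediction - y      # how wrong we were on this example
            w -= lr * error * x         # nudge the parameters to reduce the error
            b -= lr * error
    return w, b


# Real systems see thousands of examples; a handful is enough to illustrate.
examples = [(x, 2 * x + 1) for x in range(-5, 6)]
w, b = train(examples)
print(round(w, 2), round(b, 2))   # approximately 2.0 and 1.0
```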

Deep learning is a way of getting computers to reason, not just with if-then statements, but through the understanding of the situation. That said, the current situations AI are being trained on aren't as complex as interpreting a conversation with Becky to see if she's into you. Rather, it's more along the lines of: is this a dark cat, a black bag, or the night sky? Primitive, but still sensory perception...

While deep learning is currently heavily focused on one pathway, meaning AIs are developing specialties, eventually it won't be too far-fetched to start training AIs on multiple things at once. Just like a toddler might learn colors and numbers at the same time. Expanding this out, as computer processing power grows, perhaps accelerated by practical quantum computing, there's no question that AIs will evolve to become more human.

Advanced AI will continue to deal with understanding and processing patterns from the world around us. Through this, it will develop more complex models on how to process that information. In a sense, AIs are like toddlers, but soon they're going to be teenagers, and eventually, they may graduate with a doctorate. All figuratively, of course... though an age where an AI graduates from a university probably isn't that far off.

When we think about intelligent humans, we usually think of the most rationally minded people. Yet, we miss out on what is so unique about human intelligence: creativity. In a sense, we take for granted our creativity, yet it is the thing that makes us the most intelligent of living beings. Our ability to process situations, not just understand what the sum of two numbers is, is what makes us uniquely intelligent. So uniquely intelligent that we can design and create artificially intelligent beings that will soon be able to match our human intelligence.

While modern AIs are primarily focused on singular strands of intelligence, whether that be finding which picture contains a bicycle or which email is spam, we're already training AIs to be all-around smart, humanly smart.

Women wanted: Why now could be a good time for women to pursue a career in AI – CNBC

The coronavirus pandemic has upended countless jobs and even entire industries, leaving many wondering which will emerge out of the other side.

One industry likely to endure or even thrive under the virus, however, is artificial intelligence (AI), which could offer a glimpse into one of the rising careers of the future.

"This outbreak is creating overwhelming uncertainty and also greater demand for AI," IBM's vice president of data and AI, Ritika Gunnar told CNBC Make It.

Already, AI has been deployed sweepingly to help tackle the pandemic. Hospitals use the technology to diagnose patients; governments employ it in contact tracing apps; and companies rely on it to support the biggest work-from-home experiment in history.

And that demand is only set to rise. Market research company International Data Corporation says it expects the number of AI jobs globally to grow 16% this year.

That could create new opportunities in an otherwise challenging jobs market. But the industry will need more women, in particular, if it is to overcome some of its historic bias challenges.

"In order to remove bias from AI, you need diverse perspectives among the people working on it. That means more women, and more diversity overall, in AI," said Gunnar.

The industry has been making progress lately. In a new report released Wednesday, IBM found that the majority (85%) of AI professionals think the industry has become more diverse over recent years, which has had a positive impact on the technology.

Of the more than 3,200 people surveyed across North America, Europe and India, 86% said they are now confident in AI systems' ability to make decisions without bias.

However, Lisa Bouari, executive director at OutThought AI Assistants and a recipient of IBM's Women Leaders in AI awards, said more needs to be done to encourage women into the industry and keep them there.

"Attracting and retaining women are two halves of the same issue supporting a greater balance of women in AI," said Bouari. "The issues highlighted in the report around career progression, and hurdles, hold the keys to helping women stay in AI careers, and ultimately attracting more women as the status quo evolves."

For Gunnar, that means getting more women and girls excited about AI from a young age.

"We should expose girls to AI, math and science at a much earlier age so they have a support system in place," said Gunnar.

Indeed, IBM's report noted that although more women have been drawn to the industry over recent years, they did not consider AI a viable career path until later in life due to a lack of support during early education.

A plurality of men (46%) said they became interested in a tech career in high school or earlier, while a majority of women (53%) only considered it a possible path during their undergraduate degree or grad school.

But Bouari said she's hopeful that the surge in demand for AI currently can help drive the industry forward.

"The AI opportunities from this crisis are numerous and the career opportunities are there if we can successfully move hurdles and adopt it efficiently," she said.

Artificial Intelligence Used to Identify Light Sources With Far Fewer Measurements – Unite.AI

Julien Rebetez is the Lead Software & Machine Learning Engineer at Picterra. Picterra provides a geospatial cloud-based platform specially designed for training deep-learning-based detectors quickly and securely.

Without a single line of code and with only a few human-made annotations, Picterra's users build and deploy unique, actionable and ready-to-use deep learning models.

It automates the analysis of satellite and aerial imagery, enabling users to identify objects and patterns.

What is it that attracted you to machine learning and AI?

I started programming because I wanted to make video games and got interested in computer graphics at first. This led me to computer vision, which is kind of the reverse process: instead of having the computer create a fake environment, you have it perceive the real environment. During my studies, I took some Machine Learning courses and I got interested in the computer vision angle of it. I think what's interesting about ML is that it's at the intersection between software engineering, algorithms and math, and it still feels kind of magical when it works.

You've been working on using machine learning to analyze satellite imagery for many years now. What was your first project?

My first exposure to satellite imagery was the Terra-i project (to detect deforestation) and I worked on it during my studies. I was amazed at the amount of freely available satellite data that is produced by the various space agencies (NASA, ESA, etc). You can get regular images of the planet for free every day or so and this is a great resource for many scientific applications.

Could you share more details regarding the Terra-i project?

The Terra-i project (http://terra-i.org/terra-i.html) was started by Professor Andrez Perez-Uribe, from HEIG-VD (Switzerland) and is now led by Louis Reymondin, from CIAT (Colombia). The idea of the project is to detect deforestation using freely available satellite images. At the time, we worked with MODIS imagery (250m pixel resolution) because it provided a uniform and predictable coverage (both spatially and temporally). We would get a measurement for each pixel every few days, and from this time series of measurements you can try to detect anomalies, or novelties as we sometimes call them in ML.

This project was very interesting because the amount of data was a challenge at the time and there was also some software engineering involved to make it work on multiple computers and so on. From the ML side, it used a Bayesian neural network (not very deep at the time) to predict what the time series of a pixel should look like. If the measurement didn't match the prediction, then we would have an anomaly.
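As an illustration of that predict-and-compare idea, here is a minimal sketch in plain Python; a rolling mean stands in for the Bayesian neural network Terra-i actually used, and the values and thresholds are made up.

import numpy as np

def detect_novelties(series, window=8, threshold=3.0):
    """Flag measurements that deviate strongly from a simple per-pixel prediction.

    Terra-i used a Bayesian neural network for the prediction step; a rolling
    mean stands in for the learned model to keep the sketch short.
    """
    series = np.asarray(series, dtype=float)
    flags = np.zeros(len(series), dtype=bool)
    for t in range(window, len(series)):
        history = series[t - window:t]
        predicted = history.mean()
        spread = history.std() + 1e-6          # avoid division by zero
        if abs(series[t] - predicted) > threshold * spread:
            flags[t] = True                    # measurement does not match the prediction
    return flags

# Example: a vegetation-index time series with a sudden drop (possible deforestation)
ndvi = [0.71, 0.70, 0.72, 0.69, 0.73, 0.71, 0.70, 0.72, 0.35, 0.33]
print(detect_novelties(ndvi))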

As part of this project, I also worked on cloud removal. We took a traditional signal processing approach there, where you have a time series of measurements and some of them will be completely off because of a cloud. We used a Fourier-based approach (HANTS) to clean the time series before detecting novelties in it. One of the difficulties is that if we cleaned it too strongly, we'd also remove novelties, so it took quite a few experiments to find the right parameters.
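The core of a HANTS-style cleaning step can be sketched as a least-squares fit of a few harmonic terms, with points that fall far below the fit treated as cloud-contaminated. The snippet below is a simplified illustration with synthetic data; it omits the iterative re-fitting that a real HANTS implementation performs.

import numpy as np

def harmonic_fit(t, y, n_harmonics=2, period=365.0):
    """Least-squares fit of a mean plus a few sine/cosine terms (the core of HANTS)."""
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols.append(np.cos(2 * np.pi * k * t / period))
        cols.append(np.sin(2 * np.pi * k * t / period))
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ coeffs

# Synthetic 16-day vegetation index series with two cloudy acquisitions (values pulled down)
t = np.arange(0, 365, 16, dtype=float)
y = 0.5 + 0.2 * np.sin(2 * np.pi * t / 365)
y[5] -= 0.3   # simulated cloud
y[14] -= 0.4  # simulated cloud

fitted = harmonic_fit(t, y)
cloudy = (fitted - y) > 0.15        # clouds push reflectance-based indices down
y_clean = np.where(cloudy, fitted, y)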

You also designed and implemented a deep learning system for automatic crop type classification from aerial (drone) imagery of farm fields. What were the main challenges at the time?

This was my first real exposure to Deep Learning. At the time, I think the main challenges were more about getting the framework to run and properly use a GPU than about the ML itself. We used Theano, which was one of the ancestors of Tensorflow.

The goal of the project was to classify the type of crop in a field from drone imagery. We tried an approach where the deep learning model used color histograms as inputs, as opposed to just the raw image. To make this work reasonably quickly, I remember having to implement a custom Theano layer, all the way down to some CUDA code. That was a great learning experience at the time and a good way to dig a bit into the technical details of Deep Learning.
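To illustrate the color-histogram idea (not the custom Theano/CUDA model described above), here is a small sketch that turns image patches into per-channel histograms and feeds them to an off-the-shelf classifier; the patches and labels are synthetic stand-ins for drone imagery of fields.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def color_histogram(image, bins=16):
    """Concatenate per-channel histograms of an RGB patch into one feature vector."""
    feats = []
    for channel in range(3):
        hist, _ = np.histogram(image[..., channel], bins=bins, range=(0, 255), density=True)
        feats.append(hist)
    return np.concatenate(feats)

# Toy data: random "patches" standing in for drone imagery, three hypothetical crop types
rng = np.random.default_rng(0)
patches = rng.integers(0, 256, size=(40, 32, 32, 3))      # 40 patches, 32x32 RGB
labels = rng.integers(0, 3, size=40)

X = np.array([color_histogram(p) for p in patches])
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
print(clf.predict(X[:5]))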

You're officially the Lead Software and Machine Learning Engineer at Picterra. How would you best describe your day-to-day activities?

It really varies, but a lot of it is about keeping an eye on the overall architecture of the system and the product in general and communicating with the various stakeholders. Although ML is at the core of our business, you quickly realize that most of the time is not spent on ML itself, but on all the things around it: data management, infrastructure, UI/UX, prototyping, understanding users, etc. This is quite a change from academia or previous experience in bigger companies, where you are much more focused on a specific problem.

What's interesting about Picterra is that we not only run Deep Learning Models for users, but we actually allow them to train their own. That is different from a lot of the typical ML workflows where you have the ML team train a model and then publish it to production. What this means is that we cannot manually play with the training parameters as you often do. We have to find some training method that will work for all of our users. This led us to create what we call our experiment framework, which is a big repository of datasets that simulates the training data our users would build on the platform. We can then easily test changes to our training methodology against these datasets and evaluate if they help or not. So instead of evaluating a single model, we are more evaluating an architecture + training methodology.
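A rough sketch of what such an experiment framework might look like follows; the function and dataset names are hypothetical, and the real system is surely more involved. The point is that a single training methodology is scored across a whole repository of datasets rather than tuned on one model.

from statistics import mean

def evaluate_methodology(train_fn, datasets, metric_fn):
    """Run one training methodology over a repository of datasets and aggregate
    the scores, so a change is judged across many simulated users rather than
    on a single hand-tuned model."""
    scores = {}
    for name, (train_split, test_split) in datasets.items():
        model = train_fn(train_split)          # same method everywhere, no per-dataset tuning
        scores[name] = metric_fn(model, test_split)
    scores["__mean__"] = mean(scores.values())
    return scores

# Comparing two candidate methodologies over the same repository (all names hypothetical):
# baseline  = evaluate_methodology(train_baseline, dataset_repository, f1_score_on)
# candidate = evaluate_methodology(train_with_new_augmentation, dataset_repository, f1_score_on)
# Ship the change only if candidate["__mean__"] improves without large per-dataset regressions.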

The other challenge is that our users are not ML practitioners, so they don't necessarily know what a training set is, what a label is and so on. Building a UI to allow non-ML practitioners to build datasets and train ML models is a constant challenge and there is a lot of back-and-forth between the UX and ML teams to make sure we guide users in the right direction.

Some of your responsibilities include prototyping new ideas and technologies. What are some of the more interesting projects that you have worked on?

I think the most interesting one at Picterra was the Custom Detector prototype. 1.5 years ago, we had built-in detectors on the platform: those were detectors that we trained ourselves and made accessible to users. For example, we had a building detector, a car detector, etc.

This is actually the typical ML workflow: you have some ML engineer develop a model for a specific case and then you serve it to your clients.

But we wanted to do something different and push the boundaries a bit. So we said: what if we allow users to train their own models directly on the platform? There were a few challenges to make this work: first, we didn't want this to take multiple hours. If you want to keep this feeling of interactivity, training should take a few minutes at most. Second, we didn't want to require thousands of annotations, which is typically what you need for large Deep Learning models.

So we started with a super simple model, did a bunch of tests in Jupyter and then tried to integrate it into our platform and test the whole workflow, with a basic UI and so on. At first, it wasn't working very well in most cases, but there were a few cases where it would work. This gave us hope and we started iterating on the training methodology and the model. After some months, we were able to reach a point where it worked well, and we now have our users using this all the time.

What was interesting about this is the double challenge of keeping the training fast (currently a few minutes) and therefore the model not too complex, but at the same time making it complex enough that it works and solves users' problems. On top of that, it works with few (<100) labels for a lot of cases.

We also applied many of Google's Rules of Machine Learning, in particular the ones about implementing the whole pipeline and metrics before starting to optimize the model. It puts you into a systems-thinking mode where you figure out that not all your problems should be handled by the core ML: some of them can be pushed to the UI, some of them handled in pre/post-processing, etc.

What are some of the machine learning technologies that are used at Picterra?

In production, we are currently using Pytorch to train & run our models. We are also using Tensorflow from time to time, for some specific models developed for clients. Other than that, it's a pretty standard scientific Python stack (numpy, scipy) with some geospatial libraries (gdal) thrown in.

Can you discuss how Picterra works in the backend once someone uploads images and wishes to train the neural network to properly annotate objects?

Sure. First, when you upload an image, we process it and store it in Cloud-Optimized GeoTIFF (COG) format on our blobstore (Google Cloud Storage), which allows us to quickly access blocks of the image later on without having to download the whole image. This is a key point because geospatial imagery can be huge: we have users routinely working with 50,000 x 50,000 pixel images.
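The interview doesn't name the library that handles these block reads; purely as an illustration, the snippet below uses the open-source rasterio package (built on GDAL, which Rebetez mentions elsewhere) to read one window from a hypothetical COG URL without downloading the full file.

import rasterio
from rasterio.windows import Window

# Hypothetical COG location; any HTTP(S) or cloud-storage URL that rasterio/GDAL can reach works.
COG_URL = "https://storage.example.com/imagery/survey_area.tif"

with rasterio.open(COG_URL) as src:
    # Read a 1024x1024 block starting at pixel (20000, 20000) without fetching the rest
    # of the image: GDAL issues range requests for just the tiles that overlap the window.
    window = Window(col_off=20000, row_off=20000, width=1024, height=1024)
    block = src.read(window=window)
    print(block.shape)  # (bands, 1024, 1024)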

So then, to train your model, you will have to create your training dataset through our web UI. You will do that by defining 3 types of areas:

Once you have created this dataset, you can simply click "Train" and we'll train a detector for you. What happens next is that we enqueue a training job, have one of our GPU workers pick it up (new GPU workers are started automatically if there are many concurrent jobs), train your model, save its weights to the blobstore and finally predict in the testing area to display the results in the UI. From there, you can iterate on your model. Typically, you'll spot some mistakes in the testing areas and add training areas to help the model improve.

Once you are happy with the score of your model, you can run it at scale. From the user's point of view, this is really simple: just click "Detect" next to the image you want to run it on. But it's a bit more involved under the hood if the image is large. To speed things up, handle failures and avoid detections taking multiple hours, we break large detections down into grid cells and run an independent detection job for each cell. This allows us to run very large-scale detections. For example, we had a customer run detection over the whole country of Denmark on 25 cm imagery, which is in the range of terabytes of data for a single project. We've covered a similar project in this Medium post.
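A minimal sketch of that tiling idea is below; the 10,000-pixel cell size and the job structure are illustrative assumptions, not Picterra's actual values. Each cell becomes an independent, retryable job, and the per-cell results are merged afterwards.

def grid_cells(width, height, cell_size=10_000):
    """Break a large image into independent cells so each detection job stays small,
    can be retried on failure, and many cells can run in parallel."""
    for row_off in range(0, height, cell_size):
        for col_off in range(0, width, cell_size):
            yield {
                "col_off": col_off,
                "row_off": row_off,
                "width": min(cell_size, width - col_off),
                "height": min(cell_size, height - row_off),
            }

# A country-scale mosaic might be hundreds of thousands of pixels on a side;
# each cell would be enqueued as its own detection job.
jobs = list(grid_cells(width=400_000, height=300_000))
print(len(jobs))  # 40 x 30 = 1200 independent detection jobs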

Is there anything else that you would like to share about Picterra?

I think what's great about Picterra is that it is a unique product at the intersection of ML and geospatial. What differentiates us from other companies that process geospatial data is that we equip our users with a self-serve platform. They can easily find locations, analyze patterns, and detect and count objects on Earth observation imagery. It would be impossible without machine learning, but our users don't even need basic coding skills: the platform does the work based on a few human-made annotations. For those who want to go deeper and learn the core concepts of machine learning in the geospatial domain, we have launched a comprehensive online course.

What is also worth mentioning is that the possible applications of Picterra are endless: detectors built on the platform have been used in city management, precision agriculture, forestry management, humanitarian and disaster risk management, farming, and more, to name just the most common applications. We are surprised every day by what our users are trying to do with our platform. You can give it a try and let us know on social media how it worked.

Thank you for the great interview and for sharing with us how powerful Picterra is. Readers who wish to learn more should visit the Picterra website.

Read this article:

Artificial Intelligence Used to Identify Light Sources With Far Fewer Measurements - Unite.AI

The Impending Artificial Intelligence Revolution in Healthcare – Op-Ed – HIT Consultant

Harjinder Sandhu, CEO of Saykara

For at least a decade, healthcare luminaries have been predicting the coming AI revolution. In other fields, AI has evolved beyond the hype and has begun to showcase real and transformative applications: autonomous vehicles, fraud detection, personalized shopping, virtual assistants, and so on. The list is long and impressive. But in healthcare, despite the expectations and the tremendous potential in improving the delivery of care, the AI revolution is just getting started. There have been definite advancements in areas such as diagnostic imaging, logistics within healthcare, and speech recognition for documentation. Still, the realm of AI technologies that impact the cost and quality of patient care continues to be rather narrow today.

Why has AI been slow in delivering change in the care processes of healthcare? With a wealth of new AI algorithms and computing power ready to take on new challenges, the limiting factor in AI's successful application has been the availability of meaningful data sets to train on. This is surprising to many, given that EHRs were supposed to have solved the data barrier.

The promise of EHRs was that they would create a wealth of actionable data that could be leveraged for better patient care. Unfortunately, this promise never fully materialized. Most of the interesting information that can be captured in the course of patient care either is not or is captured minimally or inconsistently. Often, just enough information is recorded in the EHR to support billing and is in plain text (not actionable) form. Worse, documentation requirements have had a serious impact on physicians, to whom it ultimately fell to input much of that data. Burnout and job dissatisfaction among physicians have become endemic.

EHRs didn't create the documentation challenge, but using an EHR in the exam room can significantly detract from patient care. Speech recognition has come a long way, although it hasn't changed the fundamental dynamic of screen interaction that takes attention away from the patient. Indeed, using speech recognition, physicians stare at the screen even more intently, as they must be mindful of mistakes that the speech recognition system may generate.

Having been involved in the advancement of speech recognition in the healthcare domain and been witness to its successes and failures, I continue to believe that the next stage in the evolution of this technology is to free physicians from the tyranny of the screen: to evolve from speech recognition systems to AI-based virtual scribes that listen to doctor-patient conversations, create notes, and enter orders.

Using a human scribe solves a significant part of the problem for physicians: scribes relieve the physician of having to enter data manually. For many physicians, a scribe has allowed them to reclaim their work lives (they can focus on patients rather than computers) as well as their personal lives (fewer evening hours completing patient notes). However, the inherent cost of both training and then employing a scribe has led to many efforts to build digital counterparts, AI-based scribes that can replicate the work of a human scribe.

Building an AI scribe is hard. It requires a substantially more sophisticated system than the current generation of speech recognition systems. Interpreting natural language conversation is one of the next major frontiers for AI in any domain. The current generation of virtual assistants, like Alexa and Siri, simplify the challenge by putting boundaries on speech, forcing a user, for example, to express a single idea at a time, within a few seconds and within the boundaries of a list of skills that these systems know how to interpret.

In contrast, an AI system that is listening to doctor-patient conversations must deal with the complexity of human speech and narrative. A patient visit could last five minutes or an hour, the speech involves at least two parties (the doctor and the patient), and a patient's visit can meander into irrelevant details and branches that don't necessarily contribute to the physician's diagnosis.

As a result of the complexity of conversational speech, it is still quite early for fully autonomous AI scribes. In the meantime, augmented AI scribes, AI systems augmented by human power, are filling in the gaps of AI competency and allowing these systems to succeed while incrementally chipping away at the goal of making these systems fully autonomous. These systems are beginning to do more than simply relieve doctors of the burden of documentation, though that is obviously important. The real transformative impact will be from capturing a comprehensive set of data about a patient journey in a structured and consistent fashion and putting that into the medical records, thereby building a base for all other AI applications to come.

About Harjinder Sandhu

Harjinder Sandhu, CEO of Saykara, a company leveraging the power and simplicity of the human voice to make delivering great care easier while streamlining physician workflow.

Follow this link:

The Impending Artificial Intelligence Revolution in Healthcare - Op-Ed - HIT Consultant

Is artificial intelligence the answer to the care sector amid COVID-19? – Descrier

It is clear that the health and social care sectors in the United Kingdom have long been suffering from systematic neglect, and this has predictably resulted in dramatic workforce shortages. These shortages have been exacerbated by the current coronavirus crisis, and will be further compounded by the stricter immigration rules coming into force in January 2021. The Home Office is reportedly considering an unexpected solution to this: replacing staff with tech and artificial intelligence.

To paraphrase Aneurin Bevan, the mark of a civilised society is how it treats its sick and vulnerable. As a result, whenever technology is broached in healthcare, people are sceptical, particularly if it means removing that all-important human touch.

Such fears are certainly justified. Technology and AI have become fraught with issues: there is a wealth of evidence that algorithms can absorb the unconscious human biases of their designers, particularly around gender and race. Even the Home Office has been found using discriminatory algorithms that scan and evaluate visa applications, while a similar algorithm utilised in hospitals in the US was found to be systematically discriminating against black people, as the software was more likely to refer white patients to care programmes.

Such prejudices clearly present AI as unfit for healthcare. Indeed, technology is by no means a quick fix to staff shortages and should never be used at the expense of human interaction, especially in areas that are as emotionally intensive as care.

However, this does not mean that the introduction of AI into the UK care sector is necessarily a slippery slope to a techno-dystopia. Robotics has already made vital changes in the healthcare sector; surgical robots, breast cancer scanners and algorithms that can detect even the early stages of Alzheimer's have proved revolutionary. The coronavirus crisis itself has reinforced just how much we rely on technology, as we are able to keep in touch with our loved ones and work from home.

Yet in a more dramatic example of the potential help AI could deliver in the UK, robots have been utilised to disinfect the streets of China amid the coronavirus pandemic and one hospital at the centre of the outbreak in Wuhan outnumbered its doctor workforce with robotic aides to slow the spread of infection.

Evidently, if used correctly, AI and automation could improve care and ease the burden on staff in the UK. The Institute for Public Policy Research even calculated that 30% of work done by adult social care staff could be automated, saving the sector £6 billion. It is important to stress, though, that this initiative cannot be used as a cost-cutting exercise: if money is saved by automation, it should be put back into the care sector to improve both the wellbeing of those receiving care and the working conditions of the carers themselves.

There is much that care robots cannot do, but they can provide some level of companionship and can assist with medication prep, while smart speakers can remind or alert patients. AI can realistically monitor vulnerable patients' safety 24/7 while allowing them to maintain their privacy and sense of independence.

There are examples of tech being used in social care around the world that demonstrate the positive effect it can have. In Japan specifically, they have implemented the use of a robot called Robear that helps carry patients from their beds to their wheelchairs, a bionic suit called HAL that assists with motor tasks, and Paro, a baby harp seal bot that serves as a therapeutic companion and has been shown to alleviate anxiety and depression in dementia sufferers. Another, a humanoid called Pepper, has been introduced as an entertainer, cleaner and corridor monitor to great success.

It is vital, though, that if automation and AI are to be introduced into the care sector on a wide scale, they must work in harmony with human caregivers. Used properly, they could transform the care sector for the better; however, the current government does not view it this way, and the focus on automation is being ushered in to coincide with the immigration rules that will bar migrant carers from entry. Rolling out care robots across the nation on such a huge scale in the next nine months is mere blue-sky thinking; replacing the flesh-and-blood hard graft of staff with robots is therefore far-fetched at best, and disastrous for a sector suffering a 110,000-strong staff shortage at worst. Besides, robots still disappointingly lack the empathy required for the job and simply cannot give the personal, compassionate touch that is so important; they can only ease the burden on carers, not step into their shoes alone.

While in the long term automation in the care sector could help ease the burden on staff and plug gaps as and when needed, the best course of action currently attainable to solve the care crisis is for the government to reconsider just who it classifies as "low skilled" for immigration purposes, as some Conservative MPs have already suggested.

To remedy the failing care sector, the government should both invest in home-grown talent and relax restrictions on carers from overseas seeking to work in the country. A renovation of the care sector is needed: higher wages, more reasonable hours, more secure contracts, and the introduction of a care worker visa are what is so desperately needed, and if these are implemented in conjunction with support from AI and automation, we could see the growing and vibrant care sector for which this country is crying out.

Excerpt from:

Is artificial intelligence the answer to the care sector amid COVID-19? - Descrier

How Artificial Intelligence Is Totally Changing Everything …

Back in Oct. 1950, British techno-visionary Alan Turing published an article called "Computing Machinery and Intelligence," in the journal MIND that raised what at the time must have seemed to many like a science-fiction fantasy.

"May not machines carry out something which ought to be described as thinking but which is very different from what a man does?" Turing asked.

Turing thought that they could. Moreover, he believed, it was possible to create software for a digital computer that enabled it to observe its environment and to learn new things, from playing chess to understanding and speaking a human language. And he thought machines eventually could develop the ability to do that on their own, without human guidance. "We may hope that machines will eventually compete with men in all purely intellectual fields," he predicted.

Nearly 70 years later, Turing's seemingly outlandish vision has become a reality. Artificial intelligence, commonly referred to as AI, gives machines the ability to learn from experience and perform cognitive tasks, the sort of stuff that once only the human brain seemed capable of doing.

AI is rapidly spreading throughout civilization, where it has the promise of doing everything from enabling autonomous vehicles to navigate the streets to making more accurate hurricane forecasts. On an everyday level, AI figures out what ads to show you on the web, and powers those friendly chatbots that pop up when you visit an e-commerce website to answer your questions and provide customer service. And AI-powered personal assistants in voice-activated smart home devices perform myriad tasks, from controlling our TVs and doorbells to answering trivia questions and helping us find our favorite songs.

But we're just getting started with it. As AI technology grows more sophisticated and capable, it's expected to massively boost the world's economy, creating about $13 trillion worth of additional activity by 2030, according to a McKinsey Global Institute forecast.

"AI is still early in adoption, but adoption is accelerating and it is being used across all industries," says Sarah Gates, an analytics platform strategist at SAS, a global software and services firm that focuses upon turning data into intelligence for clients.

It's even more amazing, perhaps, that our existence is quietly being transformed by a technology that many of us barely understand, if at all: something so complex that even scientists have a tricky time explaining it.

"AI is a family of technologies that perform tasks that are thought to require intelligence if performed by humans," explains Vasant Honavar, a professor and director of the Artificial Intelligence Research Laboratory at Penn State University. "I say 'thought,' because nobody is really quite sure what intelligence is."

Honavar describes two main categories of intelligence. There's narrow intelligence, which is achieving competence in a narrowly defined domain, such as analyzing images from X-rays and MRI scans in radiology. General intelligence, in contrast, is a more human-like ability to learn about anything and to talk about it. "A machine might be good at some diagnoses in radiology, but if you ask it about baseball, it would be clueless," Honavar explains. Humans' intellectual versatility "is still beyond the reach of AI at this point."

According to Honavar, there are two key pieces to AI. One of them is the engineering part, that is, building tools that utilize intelligence in some way. The other is the science of intelligence, or rather, how to enable a machine to come up with a result comparable to what a human brain would come up with, even if the machine achieves it through a very different process. To use an analogy, "birds fly and airplanes fly, but they fly in completely different ways," Honavar says. "Even so, they both make use of aerodynamics and physics. In the same way, artificial intelligence is based upon the notion that there are general principles about how intelligent systems behave."

AI is "basically the results of our attempting to understand and emulate the way that the brain works and the application of this to giving brain-like functions to otherwise autonomous systems (e.g., drones, robots and agents)," Kurt Cagle, a writer, data scientist and futurist who's the founder of consulting firm Semantical, writes in an email. He's also editor of The Cagle Report, a daily information technology newsletter.

And while humans don't really think like computers, which utilize circuits, semi-conductors and magnetic media instead of biological cells to store information, there are some intriguing parallels. "One thing we're beginning to discover is that graph networks are really interesting when you start talking about billions of nodes, and the brain is essentially a graph network, albeit one where you can control the strengths of processes by varying the resistance of neurons before a capacitive spark fires," Cagle explains. "A single neuron by itself gives you a very limited amount of information, but fire enough neurons of varying strengths together, and you end up with a pattern that gets fired only in response to certain kinds of stimuli, typically modulated electrical signals through the DSPs [that is digital signal processing] that we call our retina and cochlea."

"Most applications of AI have been in domains with large amounts of data," Honavar says. To use the radiology example again, the existence of large databases of X-rays and MRI scans that have been evaluated by human radiologists, makes it possible to train a machine to emulate that activity.

AI works by combining large amounts of data with intelligent algorithms: series of instructions that allow the software to learn from patterns and features of the data, as this SAS primer on artificial intelligence explains.
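As a generic illustration of that "learn from patterns in data" idea (not taken from the SAS primer), a few lines of scikit-learn are enough to train a model on labelled examples and measure how well the learned patterns generalize to data it has never seen.

from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# The algorithm is never told what an "8" looks like; it infers the patterns
# from labelled example images and is then tested on held-out ones.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=2000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")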

In simulating the way a brain works, AI utilizes a bunch of different subfields, as the SAS primer notes.

The concept of AI dates back to the 1940s, and the term "artificial intelligence" was introduced at a 1956 conference at Dartmouth College. Over the next two decades, researchers developed programs that played games and did simple pattern recognition and machine learning. Cornell University scientist Frank Rosenblatt developed the Perceptron, the first artificial neural network, which ran on a 5-ton (4.5-metric ton), room-sized IBM computer that was fed punch cards.

But it wasn't until the mid-1980s that a second wave of more complex, multilayer neural networks were developed to tackle higher-level tasks, according to Honavar. In the early 1990s, another breakthrough enabled AI to generalize beyond the training experience.

In the 1990s and 2000s, other technological innovations, such as the web and increasingly powerful computers, helped accelerate the development of AI. "With the advent of the web, large amounts of data became available in digital form," Honavar says. "Genome sequencing and other projects started generating massive amounts of data, and advances in computing made it possible to store and access this data. We could train the machines to do more complex tasks. You couldn't have had a deep learning model 30 years ago, because you didn't have the data and the computing power."

AI is different from, but related to, robotics, in which machines sense their environment, perform calculations and do physical tasks either by themselves or under the direction of people, from factory work and cooking to landing on other planets. Honavar says that the two fields intersect in many ways.

"You can imagine robotics without much intelligence, purely mechanical devices like automated looms," Honavar says. "There are examples of robots that are not intelligent in a significant way." Conversely, there's robotics where intelligence is an integral part, such as guiding an autonomous vehicle around streets full of human-driven cars and pedestrians.

"It's a reasonable argument that to realize general intelligence, you would need robotics to some degree, because interaction with the world, to some degree, is an important part of intelligence," according to Honavar. "To understand what it means to throw a ball, you have to be able to throw a ball."

AI quietly has become so ubiquitous that it's already found in many consumer products.

"A huge number of devices that fall within the Internet of Things (IoT) space readily use some kind of self-reinforcing AI, albeit very specialized AI," Cagle says. "Cruise control was an early AI and is far more sophisticated when it works than most people realize. Noise dampening headphones. Anything that has a speech recognition capability, such as most contemporary television remotes. Social media filters. Spam filters. If you expand AI to cover machine learning, this would also include spell checkers, text-recommendation systems, really any recommendation system, washers and dryers, microwaves, dishwashers, really most home electronics produced after 2017, speakers, televisions, anti-lock braking systems, any electric vehicle, modern CCTV cameras. Most games use AI networks at many different levels."

AI already can outperform humans in some narrow domains, just as "airplanes can fly longer distances, and carry more people than a bird could," Honavar says. AI, for example, is capable of processing millions of social media network interactions and gaining insights that can influence users' behavior, an ability that the AI expert worries may have "not so good consequences."

It's particularly good at making sense of massive amounts of information that would overwhelm a human brain. That capability enables internet companies, for example, to analyze the mountains of data that they collect about users and employ the insights in various ways to influence our behavior.

But AI hasn't made as much progress so far in replicating human creativity, Honavar notes, though the technology already is being utilized to compose music and write news articles based on data from financial reports and election returns.

Given AI's potential to do tasks that used to require humans, it's easy to fear that its spread could put most of us out of work. But some experts envision that while the combination of AI and robotics could eliminate some positions, it will create even more new jobs for tech-savvy workers.

"Those most at risk are those doing routine and repetitive tasks in retail, finance and manufacturing," Darrell West, a vice president and founding director of the Center for Technology Innovation at the Brookings Institution, a Washington-based public policy organization, explains in an email. "But white-collar jobs in health care will also be affected and there will be an increase in job churn with people moving more frequently from job to job. New jobs will be created but many people will not have the skills needed for those positions. So the risk is a job mismatch that leaves people behind in the transition to a digital economy. Countries will have to invest more money in job retraining and workforce development as technology spreads. There will need to be lifelong learning so that people regularly can upgrade their job skills."

And instead of replacing human workers, AI may be used to enhance their intellectual capabilities. Inventor and futurist Ray Kurzweil has predicted that by the 2030s, AI will have achieved human levels of intelligence, and that it will be possible to have AI that goes inside the human brain to boost memory, turning users into human-machine hybrids. As Kurzweil has described it, "We're going to expand our minds and exemplify these artistic qualities that we value."

More here:

How Artificial Intelligence Is Totally Changing Everything ...

Artificial Intelligence And Automation Top Focus For Venture Capitalists – Forbes

Artificial intelligence and automation have been two hot areas of investment, especially over the past decade. As the worldwide workforce increasingly shifts to remote work, the need for automation, technology, and tools continues to grow. As such, it's no surprise that automation and intelligent systems continue to be of significant interest to venture capitalists who are investing in growing firms focused on these areas. The AI Today podcast had the chance to talk to Oliver Mitchell, a Founding Partner of Autonomy Ventures. (Disclosure: I'm a co-host of the AI Today podcast.)

Oliver Mitchell

For over 20 years Oliver has worked on technology startups, and over the past decade he has focused on investing in automation. He spoke with us about the big changes that automation is bringing to the world and the exciting possibilities it still has to offer. He is a partner at Autonomy Ventures, an early-stage venture capital firm that looks to invest in automation and robotics.

The best AI solutions are the ones that solve industry-specific problems

Despite the fact that artificial intelligence has been around for decades, there is still no commonly accepted definition. Because of this, artificial intelligence means something different to every industry, and this is reflected in the sort of investments that Oliver and other VCs are seeing. While some technology firms may be focused on how artificial intelligence can better help them manage funds, other companies might be more interested in how AI can supplement their human workforce. The variety of tasks that artificial intelligence can help with is something investors need to look at when making their investments.

Out of all of the investments that Oliver has made over the years, the best ones have been with companies that really focus on solving specific problems in an industry. In particular, applications of robotics to manufacturing, and specifically the concept of collaborative robots, are appealing. Collaborative robots can work alongside employees. One example is a collaborative robotic arm: to make the arm easier to use, it has AI onboard and a suite of tools that let anyone operate it without technical training. With such an arm, companies don't need to spend hundreds of thousands of dollars to hire specialists to program their robotic arms. Rather, the arm can be taught how to carry out tasks through movement, via an iPad or similar device. Arms like this fall under the category of collaborative robots, or cobots for short, which are able to work side by side with humans.

About half of the Autonomy Ventures portfolio companies are based out of Israel. One portfolio company is Aurora Labs, which focuses on providing a software platform for autonomous and connected cars to monitor their onboard software. Aurora Labs calls its product "self-healing software for connected cars." The average car needs to go to a dealership to receive any kind of firmware or software update if an issue is detected, because a technician needs to plug a device into the car's OBD-II port. Due to the limited power of the chips in most current cars, they aren't able to access the cloud; even cars with OnStar onboard have very limited connectivity. Aurora Labs' self-healing software allows cars to connect to the cloud so that they can receive updates over the air. While much of this solution isn't AI per se, the use of machine learning for more adaptive updates is one more indication that AI is finding applications in a wide range of niches.

Keeping AI in check

Something important that Oliver addressed is how we view AI and what we aim for it to do. A lot of people have a science-fiction perspective on artificial intelligence. He believes we need to manage our expectations of AI, because there are many tasks that AI still can't do that even a child can. One example Oliver uses is tying a shoe: while a 7-year-old can tie shoes, robots still cannot. We need to address everyday problems before we can move on to what we see in movies.

Oliver also is concerned about issues of bias in AI and machine learning, especially as systems become more autonomous. Software around the world is used to help humans, but many of us are quick to turn to technology without evaluating its proper use. Oliver cites many examples, including an AI-based criminal justice tool that was biased in its assessment of an offender's likelihood of reoffending: once the software was deployed in multiple states, it was found to rate people of color as more likely to reoffend.

Oliver also points out bias in a type of technology that is used in emergency departments around the world to analyze patients. The software looks at a patient's chief complaint, symptoms, and medical history along with demographics and gives the medical staff a recommendation about what to do. However, this software has been found not to take into account the human aspect of medical care: it will make a decision based on the perceived likelihood of effective treatment, not on saving every life possible.

Regardless of the challenges and limitations of AI, investors and entrepreneurs see significant potential for both simple automation and more complicated intelligent and autonomous systems. Companies are continuing to push the boundary of what's possible, especially in our increasingly remote and virtual world. It should be no surprise then that VCs will continue to look to invest in these types of companies as AI becomes part of our everyday lives.

See original here:

Artificial Intelligence And Automation Top Focus For Venture Capitalists - Forbes

Benefits & Risks of Artificial Intelligence – Future of …

Many AI researchers roll their eyes when seeing this headline: "Stephen Hawking warns that rise of robots may be disastrous for mankind." And as many have lost count of how many similar articles they've seen. Typically, these articles are accompanied by an evil-looking robot carrying a weapon, and they suggest we should worry about robots rising up and killing us because they've become conscious and/or evil. On a lighter note, such articles are actually rather impressive, because they succinctly summarize the scenario that AI researchers don't worry about. That scenario combines as many as three separate misconceptions: concern about consciousness, evil, and robots.

If you drive down the road, you have a subjective experience of colors, sounds, etc. But does a self-driving car have a subjective experience? Does it feel like anything at all to be a self-driving car? Although this mystery of consciousness is interesting in its own right, it's irrelevant to AI risk. If you get struck by a driverless car, it makes no difference to you whether it subjectively feels conscious. In the same way, what will affect us humans is what superintelligent AI does, not how it subjectively feels.

The fear of machines turning evil is another red herring. The real worry isn't malevolence, but competence. A superintelligent AI is by definition very good at attaining its goals, whatever they may be, so we need to ensure that its goals are aligned with ours. Humans don't generally hate ants, but we're more intelligent than they are, so if we want to build a hydroelectric dam and there's an anthill there, too bad for the ants. The beneficial-AI movement wants to avoid placing humanity in the position of those ants.

The consciousness misconception is related to the myth that machines can't have goals. Machines can obviously have goals in the narrow sense of exhibiting goal-oriented behavior: the behavior of a heat-seeking missile is most economically explained as a goal to hit a target. If you feel threatened by a machine whose goals are misaligned with yours, then it is precisely its goals in this narrow sense that trouble you, not whether the machine is conscious and experiences a sense of purpose. If that heat-seeking missile were chasing you, you probably wouldn't exclaim: "I'm not worried, because machines can't have goals!"

I sympathize with Rodney Brooks and other robotics pioneers who feel unfairly demonized by scaremongering tabloids, because some journalists seem obsessively fixated on robots and adorn many of their articles with evil-looking metal monsters with red shiny eyes. In fact, the main concern of the beneficial-AI movement isn't with robots but with intelligence itself: specifically, intelligence whose goals are misaligned with ours. To cause us trouble, such misaligned superhuman intelligence needs no robotic body, merely an internet connection: this may enable outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Even if building robots were physically impossible, a super-intelligent and super-wealthy AI could easily pay or manipulate many humans to unwittingly do its bidding.

The robot misconception is related to the myth that machines can't control humans. Intelligence enables control: humans control tigers not because we are stronger, but because we are smarter. This means that if we cede our position as smartest on our planet, it's possible that we might also cede control.

Continued here:

Benefits & Risks of Artificial Intelligence - Future of ...

Artificial intelligence can take banks to the next level – TechRepublic

Banking has the potential to improve its customer service, loan applications, and billing with the help of AI and natural language processing.

When I was an executive in banking, we struggled with how to transform tellers at our branches into customer service specialists instead of the "order takers" that they were. This struggle with customer service is ongoing for financial institutions. But it's an area in which artificial intelligence (AI), and its ability to work with unstructured data like voice and images, can help.

"There are two things that artificial intelligence does really well," said Ameek Singh, vice president of IBM's Watson applications and solutions. "It's really good with analyzing images and it also performs uniquely well with natural language processing (NLP)."

AI's ability to process natural language helps behind the scenes as banks interact with their customers. In call center banking transactions, the ability to analyze language can detect emotional nuances from the speaker, and understand linguistic differences such as the difference between American and British English. AI works with other languages as well, understanding the emotional nuances and slang terms that different groups use.

Collectively, real-time feedback from AI aids bank customer service reps in call centers, because if they know the sentiments of their customers, it's easier for them to relate to customers and to understand customer concerns that might not have been expressed directly.
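As a rough illustration of sentiment analysis on call transcripts (using an open-source model from the transformers library rather than IBM Watson's actual services, and with made-up transcript lines):

from transformers import pipeline

# Off-the-shelf sentiment model, used purely for illustration; a production system
# like the one Singh describes would rely on custom, multilingual models.
sentiment = pipeline("sentiment-analysis")

transcript_turns = [
    "I've been on hold for forty minutes and my card still doesn't work.",
    "Thanks, that actually fixed it, I appreciate your help.",
]

for turn in transcript_turns:
    result = sentiment(turn)[0]
    print(f"{result['label']:>8}  {result['score']:.2f}  {turn}")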

"We've developed AI models for natural language processing in a multitude of languages, and the AI continues to learn and refine these linguistics models with the help of machine learning (ML)," Singh said.

The result is higher quality NLP that enables better relationships between customers and the call center front line employees who are trying to help them.

But the use of AI in banking doesn't stop there. Singh explained how AI engines like Watson were also helping on the loans and billing side.

"The (mortgage) loan underwriter looks at items like pay stubs and credit card statements. He or she might even make a billing inquiry," Singh said.

Without AI, these document reviews are time-consuming and manual. AI changes that because the AI can "read" the document. It understands what the salient information is and also where irrelevant items, like a company logo, are likely to be located. The AI extracts the relevant information, places the information into a loan evaluation model, and can make a loan recommendation that the underwriter reviews, with the underwriter making a final decision.
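A toy sketch of the flow Singh describes follows: pull a salient figure out of a document, feed it to a loan evaluation model, and surface a recommendation for the underwriter to review. The pay-stub layout, regex, and debt-to-income threshold are all illustrative assumptions, not any bank's actual rules.

import re

def extract_monthly_income(pay_stub_text):
    """Pull the salient figure out of OCR'd pay-stub text (toy pattern, hypothetical layout)."""
    match = re.search(r"gross pay[:\s]+\$?([\d,]+\.\d{2})", pay_stub_text, re.IGNORECASE)
    return float(match.group(1).replace(",", "")) if match else None

def recommend(monthly_income, monthly_debt, requested_payment, max_dti=0.43):
    """Simple debt-to-income check standing in for a real loan evaluation model."""
    dti = (monthly_debt + requested_payment) / monthly_income
    return {"dti": round(dti, 3), "recommendation": "approve" if dti <= max_dti else "refer"}

stub = "ACME Corp ... Gross Pay: $6,250.00 ... Net Pay: $4,800.12"
income = extract_monthly_income(stub)
print(recommend(income, monthly_debt=900, requested_payment=1500))
# The underwriter still reviews the recommendation and makes the final decision.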

Of course, banks have had software for years that has performed loan evaluations. However, they haven't had an easy way to process the foundational documents, such as bills and pay stubs, that go into the loan decisioning process; that is what AI can now provide.

The best news of all for financial institutions is that AI modeling and execution don't exclude them.

"The AI is designed to be informed by bank subject matter experts so it can 'learn' the business rules that the bank wants to apply," Singh said. "The benefit is that real subject matter experts get involvednot just the data scientists."

Singh advises banks looking at expanding their use of AI to carefully select their business use cases, without trying to do too much at once.

"Start small instead of using a 'big bang' approach," he said. "In this way, you can continue to refine your AI model and gain success with it that immediately benefits the business."

Read this article:

Artificial intelligence can take banks to the next level - TechRepublic

Health care of tomorrow, today: How artificial intelligence is fighting the current, and future, COVID-19 pandemic | TheHill – The Hill

SARS-COV-2 has upended modern health care, leaving health systems struggling to cope. Addressing a fast-moving and uncontrolled disease requires an equally efficient method of discovery, development and administration. Health care solutions driven by artificial intelligence (AI) and machine learning provide such an answer. AI-enabled health care is not the medicine of the future, nor does it mean robot doctors rolling room to room in hospitals treating patients. Instead of a hospital from some future Jetsons-like fantasy, AI is poised to make impactful and urgent contributions to the current health care ecosystem. Already, AI-based systems are helping to alleviate the strain on health care providers overwhelmed by a crushing patient load, accelerate diagnostic and reporting systems, and enable rapid development of new drugs and existing drug combinations that better match a patient's unique genetic profile and specific symptoms.

For the thousands of patients fighting for their lives against this deadly disease and the health care providers who incur a constant risk of infection, AI provides an accelerated route to understanding the biology of COVID-19. Leveraging AI to assist in prediction, correlation and reporting allows health care providers to make informed decisions quickly. With the current standard of PCR-based testing requiring up to 48 hours to return a result, New York-based Envisagenics has developed an AI platform that analyzes 1,000 patient samples in parallel in just two hours. Time saves lives, and the company hopes to release the platform for commercial use in the coming weeks.

AI-powered wearables, such as a smart shirt developed by Montreal-based Hexoskin to continuously measure biometrics including respiration effort, cardiac activity, and a host of other metrics, provide options for hospital staff to minimize exposure by limiting the required visits to infected patients. This real-time data provides an opportunity for remote monitoring and creates a unique dataset to inform our understanding of disease progression to fuel innovation and enable the creation of predictive metrics, alleviating strain on clinical staff. Hexoskin has already begun to assist hospitals in New York City with monitoring programs for their COVID-19 patients, and they are developing an AI/ML platform to better assess the risk profile of COVID-19 patients recovering at home. Such novel platforms would offer a chance for providers and researchers to get ahead of the disease and develop more effective treatment plans.

AI also accelerates discovery and enables efficient and effective interrogation of the necessary chemistry to address COVID-19. An increasing number of companies are leveraging AI/ML to identify new treatment paths, whether from a list of existing molecules or de novo discovery. San Francisco-based Auransa is using AI to map the gene sequence of SARS-COV-2 to its effect on the host to generate a short-list of already approved drugs that have a high likelihood to alleviate symptoms of COVID-19. Similarly, UK-based Healx has set its AI platform to discover combination therapies, identifying multi-drug approaches to simultaneously treat different aspects of the disease pathology to improve patient outcomes. The company analyzed a library of 4,000 approved drugs to map eight million possible pairs and 10.5 billion triplets to generate combination therapy candidates. Preclinical testing will begin in May 2020.
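Those pair and triplet counts follow directly from the combinatorics of a 4,000-drug library, which is easy to check:

from math import comb

n_drugs = 4_000
pairs = comb(n_drugs, 2)       # 7,998,000 -> the "eight million possible pairs"
triplets = comb(n_drugs, 3)    # 10,658,668,000 -> roughly the "10.5 billion triplets"
print(f"{pairs:,} pairs, {triplets:,} triplets")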

Developers cannot always act alone - realizing the potential of AI often requires the resources of a collaboration to succeed. Generally, the best data sets and the most advanced algorithms do not exist within the same organization, and it is often the case that multiple data sources and algorithms need to be combined for maximum efficacy. Over the last month, we have seen the rise of several collaborations to encourage information sharing and hasten potential outcomes to patients.

Medopad, a UK-based AI developer, has partnered with Johns Hopkins University to mine existing datasets on COVID-19 and relevant respiratory diseases captured by the UK Biobank and similar databases to identify a biomarker associated with a higher risk for COVID-19. A biomarker database is essential in executing long-term population health measures, and can most effectively be generated by an AI system. In the U.S., over 500 leading companies and organizations, including Mayo Clinic, Amazon Web Services and Microsoft, have formed the COVID-19 Healthcare Coalition to assist in coordinating on all COVID-19 related matters. As part of this effort, LabCorp and HD1, among others, have come together to use AI to make testing and diagnostic data available to researchers to help build disease models including predictions of future hotspots and at-risk populations. On the international stage, the recently launched COAI, a consortium of AI-companies being assembled by French-US OWKIN, aims to increase collaborative research, to accelerate the development of effective treatments, and to share COVID-19 findings with the global medical and scientific community.

Leveraging the potential of AI and machine learning capabilities provides a potent tool to the global community in tackling the pandemic. AI presents novel ways to address old problems and opens doors to solving newly developing population health concerns. The work of our health care system, from the research scientists to the nurses and physicians, should be celebrated, and we should embrace the new tools which are already providing tremendous value. With the rapid deployment and integration of AI solutions into the COVID-19 response, the health care of tomorrow is already addressing the challenges we face today.

Brandon Allgood, PhD, is vice chair of the Alliance for Artificial Intelligence in Healthcare, a global advocacy organization dedicated to the discovery, development and delivery of better solutions to improve patient lives. Allgood is a SVP of DS&AI at Integral Health, a computationally driven biotechnology company in Boston.

See the article here:

Health care of tomorrow, today: How artificial intelligence is fighting the current, and future, COVID-19 pandemic | TheHill - The Hill

How Artificial Intelligence, IoT And Big Data Can Save The Bees – Forbes

Modern agriculture depends on bees. In fact, our entire ecosystem, including the food we eat and the air we breathe, counts on pollinators. But the pollinator population is declining, according to Sabiha Rumani Malik, the founder and executive president of The World Bee Project. In an intriguing collaboration with Oracle, and by putting artificial intelligence, the internet of things and big data to work on the problem, they hope to reverse the trend.

Why is the global bee population in decline?

According to an Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) report, pollinators are in danger. There are many reasons pollinators are being driven to extinction, including habitat destruction, urbanization, use of pesticides, pollution, fragmentation of natural flowering habitats, predators and parasites, and changing climate. However, until The World Bee Project began its work recently, there was no global initiative to study bee populations or to research and attack the issue from a global perspective.

Why is it important to save the bees?

Did you know that bees, along with other pollinators such as butterflies, are the reason plants can produce seeds and reproduce? According to the United States Department of Agriculture (USDA), 35 percent of food crops and three-quarters of the world's flowering plants depend on bees and pollinators. In fact, most of the beehives in the United States are shipped to California each year to ensure the almond crop gets pollinated. Bees help to pollinate 90% of the leading global crop types, including fruit trees, coffee, vanilla, and cotton plants. And, of course, healthy plants are critical in replenishing our oxygen supply thanks to photosynthesis.

If the pollinators aren't alive or healthy enough to do their job, our global crop production, food security, biodiversity, and clean air are in peril. Honeybees are the world's most important pollinators. As much as 40 percent of the global nutrient supply for humans depends on pollinators. Presently, approximately 2 billion people suffer from micronutrient deficiencies.

"Our lives are intrinsically connected to the bees," Malik said.

Partnership to monitor global honeybee population

The World Bee Project is the first private globally coordinated organization to launch and be devoted to monitoring the global honey bee population. Since 2014, the organization has brought together scientists to study the global problem of bee decline to provide insight about the issue to farmers, governments, beekeepers, and other vested organizations.

In 2018, Oracle Cloud technology was brought into the work to better understand the worldwide decline in bee populations, and The World Bee Project Hive Network began.

How technology can save the bees

How could technology be used to save the bees? In much the same way it is applied to other innovative projects: first, by using internet-of-things sensors, including microphones and cameras, to spot invasive predators and collect data from the bees and hives. Human ingenuity and innovations such as wireless technologies, robotics, and computer vision help deliver new insights and solutions to the issue. One of the key indicators of a hive's health is the sound it produces. Critical to the data-gathering effort is "listening" to the hives to determine colony health, strength, and behavior, as well as collecting temperature, humidity, apiary weather conditions, and hive weight.

The data is then fed to the Oracle Cloud, where artificial intelligence (AI) algorithms get to work to analyze the data. The algorithms will look for patterns and try to predict behaviors of the hive, such as if it's preparing to swarm. The insights are then shared with beekeepers and conservationists so they can step in to try to protect the hives. Since it's a globally connected network, the algorithms can also learn more about differences in bee colonies in different areas of the world. Students, researchers, and even interested citizens can also interact with the data, work with it through the hive network's open API, and discuss it via chatbot.

For example, the sound and vision sensors can detect hornets, which can be a threat to bee populations. The sound of a hornet's wing flap is different from that of bees, and the AI can pick this up automatically and alert beekeepers to the hornet threat.
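As a rough illustration of that pattern-finding step (not Oracle's or The World Bee Project's actual pipeline), an off-the-shelf anomaly detector can be fitted on normal hive telemetry and used to flag unusual readings; the feature choices and numbers below are invented.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Made-up hourly hive telemetry: [dominant acoustic frequency (Hz), hive temp (C), weight (kg)]
normal = np.column_stack([
    rng.normal(250, 10, 500),   # typical worker-bee wingbeat band (illustrative)
    rng.normal(35, 0.5, 500),   # brood nest is tightly thermoregulated
    rng.normal(40, 0.2, 500),
])
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A reading with an unusual acoustic signature and a temperature drop
suspect = np.array([[95.0, 31.0, 39.5]])
print(model.predict(suspect))   # -1 means "anomalous"; alert the beekeeper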

Technology is making it easier for The World Bee Project to share real-time information and gather resources to help save the world's bee population. In fact, Malik shared, "Our partnership with Oracle Cloud is an extraordinary marriage between nature and technology." Technology is helping to multiply the impact of The World Bee Project Hive Network across the world and makes action to save the bees quicker and more effective.

Here you can see a short video showing the connected beehive in augmented reality during my interview with Sabiha Rumani Malik - pretty cool:

Visit link:

How Artificial Intelligence, IoT And Big Data Can Save The Bees - Forbes

First meeting of the new CEPEJ Working Group on cyberjustice and artificial intelligence – Council of Europe

The new CEPEJ Working group on Cyberjustice and artificial intelligence (CEPEJ-GT-CYBERJUST) will hold a first meeting by videoconference on 27 April 2020.

The objective of the Working Group is to analyse new issues, such as the use of cyberjustice or artificial intelligence in judicial systems, and to develop appropriate tools relating to the efficiency and quality of judicial systems.

At this meeting, an exchange of views will take place on the possible future work of the Working Group, which should be based on the themes contained in its mandate:

The CYBERJUST group will also hold a joint meeting at a later stage with the CEPEJ Working Group on Quality of Justice (CEPEJ-GT-QUAL) with a view to sharing tasks, in particular to follow up the implementation of the CEPEJ European Ethical Charter on the use of artificial intelligence in judicial systems and their environment and its toolbox and to ensure co-ordination.

Read the original:

First meeting of the new CEPEJ Working Group on cyberjustice and artificial intelligence - Council of Europe