Artificial Intelligence Used to Identify Light Sources With Far Fewer Measurements – Unite.AI

Julien Rebetez is the Lead Software & Machine Learning Engineer at Picterra. Picterra provides a geospatial cloud-based platform specially designed for training deep learning-based detectors quickly and securely.

Without a single line of code and with only a few human-made annotations, Picterra's users build and deploy unique, actionable, ready-to-use deep learning models.

It automates the analysis of satellite and aerial imagery, enabling users to identify objects and patterns.

What is it that attracted you to machine learning and AI?

I started programming because I wanted to make video games, and I got interested in computer graphics at first. This led me to computer vision, which is kind of the reverse process: instead of having the computer create a fake environment, you have it perceive the real environment. During my studies, I took some Machine Learning courses and got interested in the computer vision angle of it. I think what's interesting about ML is that it's at the intersection of software engineering, algorithms and math, and it still feels kind of magical when it works.

You've been working on using machine learning to analyze satellite imagery for many years now. What was your first project?

My first exposure to satellite imagery was the Terra-i project (to detect deforestation), which I worked on during my studies. I was amazed at the amount of freely available satellite data produced by the various space agencies (NASA, ESA, etc.). You can get regular images of the planet for free every day or so, and this is a great resource for many scientific applications.

Could you share more details regarding the Terra-i project?

The Terra-i project (http://terra-i.org/terra-i.html) was started by Professor Andrez Perez-Uribe of HEIG-VD (Switzerland) and is now led by Louis Reymondin of CIAT (Colombia). The idea of the project is to detect deforestation using freely available satellite images. At the time, we worked with MODIS imagery (250 m pixel resolution) because it provided uniform and predictable coverage, both spatially and temporally. We would get a measurement for each pixel every few days, and from this time series of measurements you can try to detect anomalies, or novelties as we sometimes call them in ML.

This project was very interesting because the amount of data was a challenge at the time, and there was also some software engineering involved to make it work across multiple computers and so on. On the ML side, it used a Bayesian Neural Network (not very deep at the time) to predict what the time series of a pixel should look like. If the measurement didn't match the prediction, then we would have an anomaly.
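To make the predict-and-compare idea concrete, here is a minimal Python sketch. It substitutes a simple rolling-mean predictor for the Bayesian Neural Network used in Terra-i, and the window and threshold values are illustrative assumptions rather than the project's actual parameters:

```python
import numpy as np

def detect_anomalies(series, window=8, threshold=3.0):
    """Flag time steps where the measurement deviates strongly from what a
    simple model predicts from recent history.

    series: 1-D array of per-pixel measurements (e.g. a vegetation index).
    window: number of past observations used to form the prediction.
    threshold: number of standard deviations that counts as an anomaly.
    """
    series = np.asarray(series, dtype=float)
    flags = np.zeros(len(series), dtype=bool)
    for t in range(window, len(series)):
        history = series[t - window:t]
        predicted = history.mean()        # stand-in for the learned predictor
        spread = history.std() + 1e-6     # avoid division by zero
        if abs(series[t] - predicted) > threshold * spread:
            flags[t] = True               # measurement does not match prediction
    return flags

# A stable signal with a sudden drop, loosely mimicking a deforestation event.
signal = np.concatenate([0.8 + 0.01 * np.random.randn(20), np.full(5, 0.3)])
print(np.where(detect_anomalies(signal))[0])
```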

As part of this project, I also worked on cloud removal. We took a traditional signal processing approach there: you have a time series of measurements, and some of them will be completely off because of a cloud. We used a Fourier-based approach (HANTS) to clean the time series before detecting novelties in it. One of the difficulties is that if we cleaned the series too strongly, we'd also remove novelties, so it took quite a few experiments to find the right parameters.
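The principle behind HANTS can be illustrated with a much simpler harmonic truncation: keep only the mean and the first few Fourier harmonics of the series, so that short cloud-induced dips are smoothed away. This is only a sketch of the idea, not the real HANTS algorithm (which also iteratively rejects outliers and refits), and the number of harmonics below is an arbitrary choice:

```python
import numpy as np

def harmonic_smooth(series, n_harmonics=3):
    """Reconstruct a time series from its mean and first few Fourier harmonics.

    Cloud-contaminated observations show up as short, sharp drops; keeping only
    low-frequency components removes most of them. Keeping too few harmonics
    also removes genuine change, which is the trade-off mentioned above.
    """
    series = np.asarray(series, dtype=float)
    spectrum = np.fft.rfft(series)
    spectrum[n_harmonics + 1:] = 0        # drop high-frequency components
    return np.fft.irfft(spectrum, n=len(series))

# A seasonal vegetation signal with two simulated cloudy (very low) observations.
t = np.arange(46)                         # roughly one year of 8-day composites
ndvi = 0.5 + 0.3 * np.sin(2 * np.pi * t / 46)
ndvi[[10, 30]] = 0.05
cleaned = harmonic_smooth(ndvi, n_harmonics=2)
```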

You also designed and implemented a deep learning system for automatic crop type classification from aerial (drone) imagery of farm fields. What were the main challenges at the time?

This was my first real exposure to Deep Learning. At the time, I think the main challenges were more about getting the framework to run and properly use a GPU than about the ML itself. We used Theano, which was one of the ancestors of TensorFlow.

The goal of the project was to classify the type of crop in a field from drone imagery. We tried an approach where the Deep Learning model used color histograms as inputs, as opposed to just the raw image. To make this work reasonably quickly, I remember having to implement a custom Theano layer, all the way down to some CUDA code. That was a great learning experience at the time and a good way to dig a bit into the technical details of Deep Learning.

You're officially the Lead Software and Machine Learning Engineer at Picterra. How would you best describe your day-to-day activities?

It really varies, but a lot of it is about keeping an eye on the overall architecture of the system and the product in general, and communicating with the various stakeholders. Although ML is at the core of our business, you quickly realize that most of the time is not spent on ML itself, but on all the things around it: data management, infrastructure, UI/UX, prototyping, understanding users, etc. This is quite a change from academia or previous experience in bigger companies, where you are much more focused on a specific problem.

What's interesting about Picterra is that we not only run Deep Learning models for users, but actually allow them to train their own. That is different from the typical ML workflow, where the ML team trains a model and then publishes it to production. What this means is that we cannot manually play with the training parameters as you often do. We have to find a training method that will work for all of our users. This led us to create what we call our experiment framework, which is a big repository of datasets that simulates the training data our users would build on the platform. We can then easily test changes to our training methodology against these datasets and evaluate whether they help or not. So instead of evaluating a single model, we are really evaluating an architecture plus a training methodology.
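As a rough illustration of what such an experiment framework does, the sketch below evaluates one fixed training recipe across a whole repository of datasets and reports aggregate scores. The function names and the dataset tuple format are hypothetical placeholders, not Picterra's actual API:

```python
def evaluate_methodology(train_detector, datasets, metric):
    """Score a single training methodology across many datasets.

    train_detector: callable(training_set) -> model exposing .predict()
    datasets: iterable of (name, training_set, test_set, ground_truth) tuples
    metric: callable(predictions, ground_truth) -> float (higher is better)
    """
    scores = {}
    for name, training_set, test_set, ground_truth in datasets:
        model = train_detector(training_set)      # same recipe for every dataset
        predictions = model.predict(test_set)
        scores[name] = metric(predictions, ground_truth)
    # Judge a change by its aggregate effect, not on one cherry-picked dataset.
    average = sum(scores.values()) / len(scores)
    return average, scores
```

A change to the architecture or training schedule would then be accepted only if it improves the aggregate score without badly regressing individual datasets.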

The other challenge is that our users are not ML practitioners, so they don't necessarily know what a training set is, what a label is, and so on. Building a UI that allows non-ML practitioners to build datasets and train ML models is a constant challenge, and there is a lot of back-and-forth between the UX and ML teams to make sure we guide users in the right direction.

Some of your responsibilities include prototyping new ideas and technologies. What are some of the more interesting projects that you have worked on?

I think the most interesting one at Picterra was the Custom Detector prototype. A year and a half ago, we had built-in detectors on the platform: detectors that we had trained ourselves and made accessible to users. For example, we had a building detector, a car detector, etc.

This is actually the typical ML workflow: you have some ML engineer develop a model for a specific case and then you serve it to your clients.

But we wanted to do something different and push the boundaries a bit. So we said: what if we allow users to train their own models directly on the platform? There were a few challenges to make this work. First, we didn't want this to take multiple hours; if you want to keep that feeling of interactivity, training should take a few minutes at most. Second, we didn't want to require thousands of annotations, which is typically what you need for large Deep Learning models.

So we started with a super simple model, did a bunch of tests in Jupyter, and then tried to integrate it into our platform and test the whole workflow, with a basic UI and so on. At first, it wasn't working very well in most cases, but there were a few cases where it would work. This gave us hope, and we started iterating on the training methodology and the model. After some months, we reached a point where it worked well, and our users now use this all the time.

What was interesting about this is the double challenge of keeping the training fast (currently a few minutes), and therefore the model not too complex, while at the same time making it complex enough that it works and solves users' problems. On top of that, it works with few (<100) labels in a lot of cases.

We also applied many of Google's Rules of Machine Learning, in particular the ones about implementing the whole pipeline and metrics before starting to optimize the model. It puts you into a system-thinking mode, where you figure out that not all of your problems should be handled by the core ML: some can be pushed to the UI, some handled in pre- or post-processing, etc.

What are some of the machine learning technologies that are used at Picterra?

In production, we currently use PyTorch to train and run our models. We also use TensorFlow from time to time, for some specific models developed for clients. Other than that, it's a pretty standard scientific Python stack (numpy, scipy) with some geospatial libraries (GDAL) thrown in.

Can you discuss how Picterra works in the backend once someone uploads images and wishes to train the neural network to properly annotate objects?

Sure. First, when you upload an image, we process it and store it in Cloud-Optimized GeoTIFF (COG) format on our blobstore (Google Cloud Storage), which allows us to quickly access blocks of the image later on without having to download the whole thing. This is a key point because geospatial imagery can be huge: we have users routinely working with images on the order of 50,000 x 50,000 pixels.
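The practical benefit of the COG format is that a client can read a small window of a huge image without fetching the whole file. The sketch below uses the open-source rasterio library (an assumption on my part; this is not Picterra's internal code) and a placeholder URL:

```python
import rasterio
from rasterio.windows import Window

# Hypothetical location of a Cloud-Optimized GeoTIFF on a blobstore.
url = "https://storage.googleapis.com/some-bucket/some-image-cog.tif"

with rasterio.open(url) as src:
    # Read a 512x512 block; GDAL fetches only the byte ranges covering this
    # window (plus internal overviews), not the entire image.
    window = Window(col_off=10240, row_off=10240, width=512, height=512)
    block = src.read(window=window)       # shape: (bands, 512, 512)
    print(src.width, src.height, block.shape)
```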

Then, to train your model, you create your training dataset through our web UI. You do that by defining three types of areas on your imagery.

Once you have created this dataset, you can simply click "Train" and we'll train a detector for you. What happens next is that we enqueue a training job and have one of our GPU workers pick it up (new GPU workers are started automatically if there are many concurrent jobs), train your model, save its weights to the blobstore, and finally run prediction in the testing area to display in the UI. From there, you can iterate on your model. Typically, you'll spot some mistakes in the testing areas and add training areas to help the model improve.

Once you are happy with the score of your model, you can run it at scale. From the user's point of view, this is really simple: just click "Detect" next to the image you want to run it on. But it's a bit more involved under the hood if the image is large. To speed things up, handle failures, and avoid detections that take multiple hours, we break large detections down into grid cells and run an independent detection job for each cell. This allows us to run very large-scale detections. For example, we had a customer run detection over the whole country of Denmark on 25 cm imagery, which is in the range of terabytes of data for a single project. We've covered a similar project in this Medium post.
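The tiling step can be pictured as follows: split the image extent into fixed-size cells, each of which becomes an independent, retryable detection job. This is only an illustration; the cell size and the pixel counts are assumptions, not Picterra's actual job parameters:

```python
def split_into_cells(width, height, cell_size=4096):
    """Yield (col_off, row_off, cell_width, cell_height) tiles covering an image.

    Each tile can be processed by a separate worker, so a country-scale
    detection becomes many small jobs instead of one multi-hour job.
    """
    for row_off in range(0, height, cell_size):
        for col_off in range(0, width, cell_size):
            yield (col_off,
                   row_off,
                   min(cell_size, width - col_off),
                   min(cell_size, height - row_off))

# Example: a 50,000 x 50,000 pixel image becomes 13 x 13 = 169 independent jobs.
cells = list(split_into_cells(50_000, 50_000))
print(len(cells))
```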

Is there anything else that you would like to share about Picterra?

I think what's great about Picterra is that it is a unique product, at the intersection of ML and geospatial. What differentiates us from other companies that process geospatial data is that we equip our users with a self-serve platform. They can easily find locations, analyze patterns, and detect and count objects on Earth observation imagery. It would be impossible without machine learning, but our users don't even need basic coding skills: the platform does the work based on a few human-made annotations. For those who want to go deeper and learn the core concepts of machine learning in the geospatial domain, we have launched a comprehensive online course.

What is also worth mentioning is that the possible applications of Picterra are endless: detectors built on the platform have been used in city management, precision agriculture, forestry management, humanitarian and disaster risk management, farming, and more, to name just the most common applications. We are basically surprised every day by what our users are trying to do with our platform. You can give it a try and let us know how it worked on social media.

Thank you for the great interview and for sharing with us how powerful Picterra is. Readers who wish to learn more should visit the Picterra website.


The Impending Artificial Intelligence Revolution in Healthcare – Op-Ed – HIT Consultant

Harjinder Sandhu, CEO of Saykara

For at least a decade, healthcare luminaries have been predicting the coming AI revolution. In other fields, AI has evolved beyond the hype and has begun to showcase real and transformative applications: autonomous vehicles, fraud detection, personalized shopping, virtual assistants, and so on. The list is long and impressive. But in healthcare, despite the expectations and the tremendous potential in improving the delivery of care, the AI revolution is just getting started. There have been definite advancements in areas such as diagnostic imaging, logistics within healthcare, and speech recognition for documentation. Still, the realm of AI technologies that impact the cost and quality of patient care continues to be rather narrow today.

Why has AI been slow to deliver change in healthcare's care processes? With a wealth of new AI algorithms and computing power ready to take on new challenges, the limiting factor in AI's successful application has been the availability of meaningful data sets to train on. This is surprising to many, given that EHRs were supposed to have solved the data barrier.

The promise of EHRs was that they would create a wealth of actionable data that could be leveraged for better patient care. Unfortunately, this promise never fully materialized. Most of the interesting information that could be captured in the course of patient care either is not captured at all, or is captured minimally or inconsistently. Often, just enough information is recorded in the EHR to support billing, and it is in plain-text (not actionable) form. Worse, documentation requirements have had a serious impact on physicians, to whom it ultimately fell to input much of that data. Burnout and job dissatisfaction among physicians have become endemic.

EHRs didn't create the documentation challenge, but using an EHR in the exam room can significantly detract from patient care. Speech recognition has come a long way, yet it hasn't changed the fundamental dynamic of a screen interaction that takes attention away from the patient. Indeed, when using speech recognition, physicians stare at the screen even more intently, as they must be mindful of mistakes that the speech recognition system may generate.

Having been involved in the advancement of speech recognition in the healthcare domain and witnessed its successes and failures, I continue to believe that the next stage in the evolution of this technology is to free physicians from the tyranny of the screen: to evolve from speech recognition systems to AI-based virtual scribes that listen to doctor-patient conversations, create notes, and enter orders.

Using a human scribe solves a significant part of the problem for physicians: scribes relieve the physician of having to enter data manually. For many physicians, a scribe has allowed them to reclaim their work lives (they can focus on patients rather than computers) as well as their personal lives (fewer evening hours completing patient notes). However, the inherent cost of both training and then employing a scribe has led to many efforts to build digital counterparts: AI-based scribes that can replicate the work of a human scribe.

Building an AI scribe is hard. It requires a substantially more sophisticated system than the current generation of speech recognition systems. Interpreting natural language conversation is one of the next major frontiers for AI in any domain. The current generation of virtual assistants, like Alexa and Siri, simplifies the challenge by putting boundaries on speech, forcing a user, for example, to express a single idea at a time, within a few seconds, and within the boundaries of a list of skills that these systems know how to interpret.

In contrast, an AI system listening to doctor-patient conversations must deal with the full complexity of human speech and narrative. A patient visit could last five minutes or an hour, the speech involves at least two parties (the doctor and the patient), and a patient's visit can meander into irrelevant details and branches that don't necessarily contribute to the physician's diagnosis.

As a result of the complexity of conversational speech, it is still quite early for fully autonomous AI scribes. In the meantime, augmented AI scribes, that is, AI systems augmented by human power, are filling the gaps in AI competency and allowing these systems to succeed while incrementally chipping away at the goal of making them fully autonomous. These systems are beginning to do more than simply relieve doctors of the burden of documentation, though that is obviously important. The real transformative impact will come from capturing a comprehensive set of data about a patient's journey in a structured and consistent fashion and putting it into the medical record, thereby building a base for all other AI applications to come.

About Harjinder Sandhu

Harjinder Sandhu is the CEO of Saykara, a company leveraging the power and simplicity of the human voice to make delivering great care easier while streamlining physician workflow.


COVID-19 identification in X-ray images by Artificial intelligence – News Anyway

Novel Software to identify COVID-19 from X-Ray images

A novel approach that uses X-ray images to identify COVID-19 with high confidence and distinguish it from other viral and bacterial infections will help make coronavirus diagnosis faster and more accurate with the help of an artificial intelligence-based program.

Flint, North Wales, May 2020. A Flint-based team of mathematicians and computer scientists has successfully developed an artificial intelligence-based program to distinguish COVID-19 pneumonia from pneumonia caused by other viruses and bacteria. Impressively, the software has managed to make the identification with 95% accuracy. It takes less than 30 seconds to analyse a chest X-ray image and reach its conclusion. It is expected that this technology, in combination with other clinical data, will reduce the workload on radiologists and speed up the triage of COVID-19 patients.

Professor Sabah Jassim, who oversaw the development, commented: "The unique nature of the development cannot be underestimated, and its contribution to the medical teams is clear. What is needed now is to work with many hospitals to evaluate the technology and validate its output."

Dr Shakir Al-Zaidi, managing director of Medical Analytica Ltd, added: "It is exciting to offer such a unique approach, which can support the speedy triage of suspected COVID-19 patients and also differentiate, with high confidence, COVID-19 infection from pneumonia caused by other viruses and bacteria."

The company is seeking collaboration with radiologists to test and validate the software's performance on X-ray images, and furthermore to test the technology for identifying other critical medical conditions.

The development team is part of Medical Analytica Ltd, a new start-up company based in Castle Park, Flint, which focuses on the application of artificial intelligence in medical imaging. Software for the identification of COVID-19 from CT scans has already been developed, as has software for identifying malignant ovarian tumours from ultrasound images, with hospital-based evaluation studies on course to start in June 2020.


Is artificial intelligence the answer to the care sector amid COVID-19? – Descrier

It is clear that the health and social care sectors in the United Kingdom have long been suffering from systematic neglect, and this has predictably resulted in dramatic workforce shortages. These shortages have been exacerbated by the current coronavirus crisis and will be further compounded by the stricter immigration rules coming into force in January 2021. The Home Office is reportedly considering an unexpected solution to this: replacing staff with tech and artificial intelligence.

To paraphrase Aneurin Bevan, the mark of a civilised society is how it treats its sick and vulnerable. As a result, whenever technology is broached in healthcare, people are sceptical, particularly if it means removing that all-important human touch.

Such fears are certainly justified. Technology and AI have become fraught with issues: there is a wealth of evidence that algorithms can absorb the unconscious human biases of their designers, particularly around gender and race. Even the Home Office has been found using discriminatory algorithms that scan and evaluate visa applications, while a similar algorithm used in hospitals in the US was found to be systematically discriminating against black people, as the software was more likely to refer white patients to care programmes.

Such prejudices clearly present AI as unfit for healthcare. Indeed, technology is by no means a quick fix for staff shortages and should never be used at the expense of human interaction, especially in areas as emotionally intensive as care.

However, this does not mean that the introduction of AI into the UK care sector is necessarily a slippery slope to a techno-dystopia. Robotics has already made vital changes in the healthcare sector; surgical robots, breast cancer scanners and algorithms that can detect even the early stages of Alzheimer's have proved revolutionary. The coronavirus crisis itself has reinforced just how much we rely on technology as we keep in touch with our loved ones and work from home.

Yet in a more dramatic example of the potential help AI could deliver in the UK, robots have been utilised to disinfect the streets of China amid the coronavirus pandemic and one hospital at the centre of the outbreak in Wuhan outnumbered its doctor workforce with robotic aides to slow the spread of infection.

Evidently, if used correctly, AI and automation could improve care and ease the burden on staff in the UK. The Institute for Public Policy Research even calculated that 30% of work done by adult social care staff could be automated, saving the sector £6 billion. It is important to stress, though, that this initiative cannot be used as a cost-cutting exercise: if money is saved by automation, it should be put back into the care sector to improve both the wellbeing of those receiving care and the working conditions of the carers themselves.

There is much that care robots cannot do, but they can provide some level of companionship and can assist with medication preparation, while smart speakers can remind or alert patients. AI can realistically monitor vulnerable patients' safety 24/7 while allowing them to maintain their privacy and sense of independence.

There are examples of tech being used in social care around the world that demonstrate the positive effect it can have. In Japan specifically, they have implemented a robot called Robear that helps carry patients from their beds to their wheelchairs, a bionic suit called HAL that assists with motor tasks, and Paro, a baby harp seal bot that serves as a therapeutic companion and has been shown to alleviate anxiety and depression in dementia sufferers. Another, a humanoid called Pepper, has been introduced as an entertainer, cleaner and corridor monitor to great success.

It is vital, though, that if automation and AI are to be introduced on a wide scale into the care sector, they must work in harmony with human caregivers. They could transform the care sector for the better if used properly; however, the current government does not view it in this way, and the focus on automation is being ushered in to coincide with the immigration rules that will prohibit migrant carers from entry. Rolling out care robots across the nation on such a huge scale in the next nine months is mere blue-sky thinking; replacing the flesh-and-blood hard graft of staff with robots is therefore far-fetched at best, and disastrous for a sector suffering under a 110,000-strong staff shortage at worst. Besides, robots still disappointingly lack the empathy required for the job and simply cannot give the personal, compassionate touch that is so important; they can only ease the burden on carers, not step into their shoes.

While in the long term it is possible that automation in the care sector could help ease the burden on staff and plug gaps as and when needed, the best course of action currently attainable to solve the care crisis is for the government to reconsider just who it classifies as "low skilled" in relation to immigration, as some Conservative MPs have already made overtures towards.

To remedy the failing care sector, the government should both invest in home-grown talent and relax restrictions on carers from overseas seeking to work in the country. A renovation of the care sector is needed: higher wages, more reasonable hours, more secure contracts, and the introduction of a care worker visa are what is so desperately needed. If this is implemented in conjunction with support from AI and automation, we could see the growing and vibrant care sector for which this country is crying out.


Artificial intelligence – Wikipedia

Intelligence demonstrated by machines

In computer science, artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and animals. Leading AI textbooks define the field as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[1] Colloquially, the term "artificial intelligence" is often used to describe machines (or computers) that mimic "cognitive" functions that humans associate with the human mind, such as "learning" and "problem solving".

As machines become increasingly capable, tasks considered to require "intelligence" are often removed from the definition of AI, a phenomenon known as the AI effect.[3] A quip in Tesler's Theorem says "AI is whatever hasn't been done yet."[4] For instance, optical character recognition is frequently excluded from things considered to be AI,[5] having become a routine technology.[6] Modern machine capabilities generally classified as AI include successfully understanding human speech, competing at the highest level in strategic game systems (such as chess and Go),[8] autonomously operating cars, intelligent routing in content delivery networks, and military simulations[9].

Artificial intelligence was founded as an academic discipline in 1955, and in the years since has experienced several waves of optimism,[10][11] followed by disappointment and the loss of funding (known as an "AI winter"),[12][13] followed by new approaches, success and renewed funding.[11][14] For most of its history, AI research has been divided into sub-fields that often fail to communicate with each other.[15] These sub-fields are based on technical considerations, such as particular goals (e.g. "robotics" or "machine learning"),[16] the use of particular tools ("logic" or artificial neural networks), or deep philosophical differences.[17][18][19] Sub-fields have also been based on social factors (particular institutions or the work of particular researchers).[15]

The traditional problems (or goals) of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception and the ability to move and manipulate objects.[16] General intelligence is among the field's long-term goals.[20] Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, artificial neural networks, and methods based on statistics, probability and economics. The AI field draws upon computer science, information engineering, mathematics, psychology, linguistics, philosophy, and many other fields.

The field was founded on the assumption that human intelligence "can be so precisely described that a machine can be made to simulate it".[21] This raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence. These issues have been explored by myth, fiction and philosophy since antiquity.[22] Some people also consider AI to be a danger to humanity if it progresses unabated.[23][24] Others believe that AI, unlike previous technological revolutions, will create a risk of mass unemployment.[25]

In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data, and theoretical understanding; and AI techniques have become an essential part of the technology industry, helping to solve many challenging problems in computer science, software engineering and operations research.[26][14]

Thought-capable artificial beings appeared as storytelling devices in antiquity,[27] and have been common in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. (Rossum's Universal Robots).[28] These characters and their fates raised many of the same issues now discussed in the ethics of artificial intelligence.[22]

The study of mechanical or "formal" reasoning began with philosophers and mathematicians in antiquity. The study of mathematical logic led directly to Alan Turing's theory of computation, which suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable act of mathematical deduction. This insight, that digital computers can simulate any process of formal reasoning, is known as the Church–Turing thesis.[29] Along with concurrent discoveries in neurobiology, information theory and cybernetics, this led researchers to consider the possibility of building an electronic brain. Turing proposed changing the question from whether a machine was intelligent, to "whether or not it is possible for machinery to show intelligent behaviour".[30] The first work that is now generally recognized as AI was McCulloch and Pitts' 1943 formal design for Turing-complete "artificial neurons".

The field of AI research was born at a workshop at Dartmouth College in 1956,[32] where the term "Artificial Intelligence" was coined by John McCarthy to distinguish the field from cybernetics and escape the influence of the cyberneticist Norbert Wiener.[33] Attendees Allen Newell (CMU), Herbert Simon (CMU), John McCarthy (MIT), Marvin Minsky (MIT) and Arthur Samuel (IBM) became the founders and leaders of AI research.[34] They and their students produced programs that the press described as "astonishing": computers were learning checkers strategies (c. 1954)[36] (and by 1959 were reportedly playing better than the average human),[37] solving word problems in algebra, proving logical theorems (Logic Theorist, first run c. 1956) and speaking English.[38] By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense[39] and laboratories had been established around the world.[40] AI's founders were optimistic about the future: Herbert Simon predicted, "machines will be capable, within twenty years, of doing any work a man can do". Marvin Minsky agreed, writing, "within a generation... the problem of creating 'artificial intelligence' will substantially be solved".[10]

They failed to recognize the difficulty of some of the remaining tasks. Progress slowed and in 1974, in response to the criticism of Sir James Lighthill and ongoing pressure from the US Congress to fund more productive projects, both the U.S. and British governments cut off exploratory research in AI. The next few years would later be called an "AI winter",[12] a period when obtaining funding for AI projects was difficult.

In the early 1980s, AI research was revived by the commercial success of expert systems,[42] a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985, the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S and British governments to restore funding for academic research.[11] However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting hiatus began.[13]

The development of metal-oxide-semiconductor (MOS) very-large-scale integration (VLSI), in the form of complementary MOS (CMOS) transistor technology, enabled the development of practical artificial neural network (ANN) technology in the 1980s. A landmark publication in the field was the 1989 book Analog VLSI Implementation of Neural Systems by Carver A. Mead and Mohammed Ismail.[43]

In the late 1990s and early 21st century, AI began to be used for logistics, data mining, medical diagnosis and other areas.[26] The success was due to increasing computational power (see Moore's law and transistor count), greater emphasis on solving specific problems, new ties between AI and other fields (such as statistics, economics and mathematics), and a commitment by researchers to mathematical methods and scientific standards.[44] Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997.

In 2011, in a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin. Faster computers, algorithmic improvements, and access to large amounts of data enabled advances in machine learning and perception; data-hungry deep learning methods started to dominate accuracy benchmarks around 2012.[47] The Kinect, which provides a 3D body-motion interface for the Xbox 360 and the Xbox One, uses algorithms that emerged from lengthy AI research,[48] as do intelligent personal assistants in smartphones.[49] In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps.[8][50] In the 2017 Future of Go Summit, AlphaGo won a three-game match with Ke Jie,[51] who at the time had continuously held the world No. 1 ranking for two years.[52][53] This marked the completion of a significant milestone in the development of Artificial Intelligence, as Go is a relatively complex game, more so than Chess.

According to Bloomberg's Jack Clark, 2015 was a landmark year for artificial intelligence, with the number of software projects that use AI within Google increasing from "sporadic usage" in 2012 to more than 2,700 projects. Clark also presents factual data indicating the improvements of AI since 2012, supported by lower error rates in image processing tasks.[54] He attributes this to an increase in affordable neural networks, due to a rise in cloud computing infrastructure and to an increase in research tools and datasets.[14] Other cited examples include Microsoft's development of a Skype system that can automatically translate from one language to another and Facebook's system that can describe images to blind people.[54] In a 2017 survey, one in five companies reported they had "incorporated AI in some offerings or processes".[55][56] Around 2016, China greatly accelerated its government funding; given its large supply of data and its rapidly increasing research output, some observers believe it may be on track to becoming an "AI superpower".[57][58] However, it has been acknowledged that reports regarding artificial intelligence have tended to be exaggerated.[59][60][61]

Computer science defines AI research as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[1] A more elaborate definition characterizes AI as "a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation."[62]

A typical AI analyzes its environment and takes actions that maximize its chance of success.[1] An AI's intended utility function (or goal) can be simple ("1 if the AI wins a game of Go, 0 otherwise") or complex ("Do mathematically similar actions to the ones that succeeded in the past"). Goals can be explicitly defined or induced. If the AI is programmed for "reinforcement learning", goals can be implicitly induced by rewarding some types of behavior or punishing others.[a] Alternatively, an evolutionary system can induce goals by using a "fitness function" to mutate and preferentially replicate high-scoring AI systems, similar to how animals evolved to innately desire certain goals such as finding food. Some AI systems, such as nearest-neighbor, instead reason by analogy; these systems are not generally given goals, except to the degree that goals are implicit in their training data. Such systems can still be benchmarked if the non-goal system is framed as a system whose "goal" is to successfully accomplish its narrow classification task.[65]

AI often revolves around the use of algorithms. An algorithm is a set of unambiguous instructions that a mechanical computer can execute.[b] A complex algorithm is often built on top of other, simpler, algorithms. A simple example of an algorithm is a fixed, rule-based recipe for playing tic-tac-toe, such as the one sketched below.
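Below is a minimal sketch of such a hand-written, rule-based player (win if you can, block the opponent if you must, otherwise take the centre, a corner, or an edge). It is illustrative only and simpler than the full optimal strategy:

```python
def tic_tac_toe_move(board, player):
    """Pick a move for `player` ('X' or 'O') on a 9-element board list using
    fixed rules rather than learning: win, block, else take the best free square."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]
    opponent = 'O' if player == 'X' else 'X'

    def completing_square(who):
        # Return a square that completes a line for `who`, if one exists.
        for a, b, c in lines:
            values = [board[a], board[b], board[c]]
            if values.count(who) == 2 and values.count(' ') == 1:
                return (a, b, c)[values.index(' ')]
        return None

    move = completing_square(player)          # rule 1: take a winning square
    if move is None:
        move = completing_square(opponent)    # rule 2: block the opponent
    if move is None:
        for square in (4, 0, 2, 6, 8, 1, 3, 5, 7):  # rule 3: centre, corners, edges
            if board[square] == ' ':
                return square
    return move

board = ['X', ' ', ' ',
         ' ', 'O', ' ',
         'X', ' ', ' ']
print(tic_tac_toe_move(board, 'O'))   # -> 3, blocking the X column
```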

Many AI algorithms are capable of learning from data; they can enhance themselves by learning new heuristics (strategies, or "rules of thumb", that have worked well in the past), or can themselves write other algorithms. Some of the "learners" described below, including Bayesian networks, decision trees, and nearest-neighbor, could theoretically (given infinite data, time, and memory) learn to approximate any function, including which combination of mathematical functions would best describe the world[citation needed]. These learners could therefore derive all possible knowledge by considering every possible hypothesis and matching it against the data. In practice, it is almost never possible to consider every possibility, because of the phenomenon of "combinatorial explosion", where the amount of time needed to solve a problem grows exponentially. Much of AI research involves figuring out how to identify and avoid considering a broad range of possibilities that are unlikely to be beneficial.[67] For example, when viewing a map and looking for the shortest driving route from Denver to New York in the East, one can in most cases skip looking at any path through San Francisco or other areas far to the West; thus, an AI wielding a pathfinding algorithm like A* can avoid the combinatorial explosion that would ensue if every possible route had to be ponderously considered in turn.[69]
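The shortest-route example maps directly onto the A* algorithm: a straight-line-distance heuristic steers the search toward the goal, so paths heading the wrong way are never expanded. The sketch below runs on a tiny made-up graph; the node names and coordinates are arbitrary:

```python
import heapq
import math

def a_star(graph, coords, start, goal):
    """graph: {node: [(neighbor, cost), ...]}, coords: {node: (x, y)}.

    The heuristic (straight-line distance to the goal) lets the search ignore
    directions that obviously lead away from the goal, avoiding the
    combinatorial explosion of enumerating every route."""
    def h(node):
        (x1, y1), (x2, y2) = coords[node], coords[goal]
        return math.hypot(x2 - x1, y2 - y1)

    frontier = [(h(start), 0.0, start, [start])]      # (f, g, node, path)
    best_g = {start: 0.0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for neighbor, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(neighbor, float('inf')):
                best_g[neighbor] = new_g
                heapq.heappush(frontier,
                               (new_g + h(neighbor), new_g, neighbor, path + [neighbor]))
    return None, float('inf')

# Toy road graph: the detour west through 'W' is pushed to the frontier but
# never expanded, because its heuristic estimate makes it look unpromising.
coords = {'A': (0, 0), 'B': (1, 0), 'C': (2, 0), 'W': (-5, 0)}
graph = {'A': [('B', 1), ('W', 5)], 'B': [('C', 1)], 'W': [('A', 5)]}
print(a_star(graph, coords, 'A', 'C'))   # -> (['A', 'B', 'C'], 2.0)
```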

The earliest (and easiest to understand) approach to AI was symbolism (such as formal logic): "If an otherwise healthy adult has a fever, then they may have influenza". A second, more general, approach is Bayesian inference: "If the current patient has a fever, adjust the probability they have influenza in such-and-such way". The third major approach, extremely popular in routine business AI applications, are analogizers such as SVM and nearest-neighbor: "After examining the records of known past patients whose temperature, symptoms, age, and other factors mostly match the current patient, X% of those patients turned out to have influenza". A fourth approach is harder to intuitively understand, but is inspired by how the brain's machinery works: the artificial neural network approach uses artificial "neurons" that can learn by comparing the network's output to the desired output and altering the strengths of the connections between its internal neurons to "reinforce" connections that seemed to be useful. These four main approaches can overlap with each other and with evolutionary systems; for example, neural nets can learn to make inferences, to generalize, and to make analogies. Some systems implicitly or explicitly use multiple of these approaches, alongside many other AI and non-AI algorithms; the best approach is often different depending on the problem.[71]

Learning algorithms work on the basis that strategies, algorithms, and inferences that worked well in the past are likely to continue working well in the future. These inferences can be obvious, such as "since the sun rose every morning for the last 10,000 days, it will probably rise tomorrow morning as well". They can be nuanced, such as "X% of families have geographically separate species with color variants, so there is a Y% chance that undiscovered black swans exist". Learners also work on the basis of "Occam's razor": The simplest theory that explains the data is the likeliest. Therefore, according to Occam's razor principle, a learner must be designed such that it prefers simpler theories to complex theories, except in cases where the complex theory is proven substantially better.

Settling on a bad, overly complex theory gerrymandered to fit all the past training data is known as overfitting. Many systems attempt to reduce overfitting by rewarding a theory in accordance with how well it fits the data, but penalizing the theory in accordance with how complex the theory is. Besides classic overfitting, learners can also disappoint by "learning the wrong lesson". A toy example is that an image classifier trained only on pictures of brown horses and black cats might conclude that all brown patches are likely to be horses. A real-world example is that, unlike humans, current image classifiers don't determine the spatial relationship between components of the picture; instead, they learn abstract patterns of pixels that humans are oblivious to, but that linearly correlate with images of certain types of real objects. Faintly superimposing such a pattern on a legitimate image results in an "adversarial" image that the system misclassifies.[c][74][75][76]
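The fit-versus-complexity trade-off can be shown with a toy numeric example: score each candidate "theory" (a polynomial of some degree) by its error on the data plus a penalty proportional to the number of coefficients. The penalty weight below is arbitrary and only meant to illustrate the idea:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 12)
y = 2 * x + 0.5 + 0.05 * rng.standard_normal(len(x))   # truly linear data plus noise

def score(degree, penalty=0.05):
    """Lower is better: the residual rewards fit; the penalty term punishes
    complexity (the number of polynomial coefficients)."""
    coeffs = np.polyfit(x, y, degree)
    residual = np.mean((np.polyval(coeffs, x) - y) ** 2)
    return residual + penalty * (degree + 1)

for degree in (1, 3, 9):
    print(degree, round(score(degree), 4))

# The degree-9 polynomial fits the noise almost perfectly, but the complexity
# penalty makes the simple linear model the preferred "theory".
```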

Compared with humans, existing AI lacks several features of human "commonsense reasoning"; most notably, humans have powerful mechanisms for reasoning about "naïve physics" such as space, time, and physical interactions. This enables even young children to easily make inferences like "If I roll this pen off a table, it will fall on the floor". Humans also have a powerful mechanism of "folk psychology" that helps them to interpret natural-language sentences such as "The city councilmen refused the demonstrators a permit because they advocated violence". (A generic AI has difficulty discerning whether the ones alleged to be advocating violence are the councilmen or the demonstrators.)[79][80][81] This lack of "common knowledge" means that AI often makes different mistakes than humans make, in ways that can seem incomprehensible. For example, existing self-driving cars cannot reason about the location nor the intentions of pedestrians in the exact way that humans do, and instead must use non-human modes of reasoning to avoid accidents.[82][83][84]

The cognitive capabilities of current architectures are very limited, using only a simplified version of what intelligence is really capable of. For instance, the human mind has come up with ways to reason beyond measure and to explain different occurrences in life. A problem that is otherwise straightforward for the human mind may be challenging to solve computationally. This gives rise to two classes of models: structuralist and functionalist. The structural models aim to loosely mimic the basic intelligence operations of the mind, such as reasoning and logic. The functional model refers to correlating data to its computed counterpart.[85]

The overall research goal of artificial intelligence is to create technology that allows computers and machines to function in an intelligent manner. The general problem of simulating (or creating) intelligence has been broken down into sub-problems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention.[16]

Early researchers developed algorithms that imitated step-by-step reasoning that humans use when they solve puzzles or make logical deductions.[86] By the late 1980s and 1990s, AI research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.[87]

These algorithms proved to be insufficient for solving large reasoning problems because they experienced a "combinatorial explosion": they became exponentially slower as the problems grew larger.[67] In fact, even humans rarely use the step-by-step deduction that early AI research was able to model. They solve most of their problems using fast, intuitive judgments.[88]

Knowledge representation[89] and knowledge engineering[90] are central to classical AI research. Some "expert systems" attempt to gather together explicit knowledge possessed by experts in some narrow domain. In addition, some projects attempt to gather the "commonsense knowledge" known to the average person into a database containing extensive knowledge about the world. Among the things a comprehensive commonsense knowledge base would contain are: objects, properties, categories and relations between objects;[91] situations, events, states and time;[92] causes and effects;[93] knowledge about knowledge (what we know about what other people know);[94] and many other, less well researched domains. A representation of "what exists" is an ontology: the set of objects, relations, concepts, and properties formally described so that software agents can interpret them. The semantics of these are captured as description logic concepts, roles, and individuals, and typically implemented as classes, properties, and individuals in the Web Ontology Language.[95] The most general ontologies are called upper ontologies, which attempt to provide a foundation for all other knowledge[96] by acting as mediators between domain ontologies that cover specific knowledge about a particular knowledge domain (field of interest or area of concern). Such formal knowledge representations can be used in content-based indexing and retrieval,[97] scene interpretation,[98] clinical decision support,[99] knowledge discovery (mining "interesting" and actionable inferences from large databases),[100] and other areas.[101]

Among the most difficult problems in knowledge representation are:

Intelligent agents must be able to set goals and achieve them.[108] They need a way to visualize the future (a representation of the state of the world, with the ability to make predictions about how their actions will change it) and be able to make choices that maximize the utility (or "value") of the available choices.[109]

In classical planning problems, the agent can assume that it is the only system acting in the world, allowing the agent to be certain of the consequences of its actions.[110] However, if the agent is not the only actor, then it requires that the agent can reason under uncertainty. This calls for an agent that can not only assess its environment and make predictions but also evaluate its predictions and adapt based on its assessment.[111]

Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal. Emergent behavior such as this is used by evolutionary algorithms and swarm intelligence.[112]

Machine learning (ML), a fundamental concept of AI research since the field's inception,[113] is the study of computer algorithms that improve automatically through experience.[114][115]

Unsupervised learning is the ability to find patterns in a stream of input, without requiring a human to label the inputs first. Supervised learning includes both classification and numerical regression, which requires a human to label the input data first. Classification is used to determine what category something belongs in, and occurs after a program sees a number of examples of things from several categories. Regression is the attempt to produce a function that describes the relationship between inputs and outputs and predicts how the outputs should change as the inputs change.[115] Both classifiers and regression learners can be viewed as "function approximators" trying to learn an unknown (possibly implicit) function; for example, a spam classifier can be viewed as learning a function that maps from the text of an email to one of two categories, "spam" or "not spam". Computational learning theory can assess learners by computational complexity, by sample complexity (how much data is required), or by other notions of optimization.[116] In reinforcement learning[117] the agent is rewarded for good responses and punished for bad ones. The agent uses this sequence of rewards and punishments to form a strategy for operating in its problem space.
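The spam example above is easy to make concrete. The sketch below uses scikit-learn (assumed to be available); the handful of training emails are made up, and a real system would need far more labeled data:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Labeled examples: the "human labels the input data first" part of supervised learning.
emails = ["win a free prize now", "cheap meds limited offer",
          "meeting moved to 3pm", "please review the attached report"]
labels = ["spam", "spam", "not spam", "not spam"]

# The classifier approximates an unknown function: email text -> {spam, not spam}.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["free prize offer", "report for the meeting"]))
# -> ['spam' 'not spam']
```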

Natural language processing[118] (NLP) gives machines the ability to read and understand human language. A sufficiently powerful natural language processing system would enable natural-language user interfaces and the acquisition of knowledge directly from human-written sources, such as newswire texts. Some straightforward applications of natural language processing include information retrieval, text mining, question answering[119] and machine translation.[120] Many current approaches use word co-occurrence frequencies to construct syntactic representations of text. "Keyword spotting" strategies for search are popular and scalable but dumb; a search query for "dog" might only match documents with the literal word "dog" and miss a document with the word "poodle". "Lexical affinity" strategies use the occurrence of words such as "accident" to assess the sentiment of a document. Modern statistical NLP approaches can combine all these strategies as well as others, and often achieve acceptable accuracy at the page or paragraph level, but continue to lack the semantic understanding required to classify isolated sentences well. Besides the usual difficulties with encoding semantic commonsense knowledge, existing semantic NLP sometimes scales too poorly to be viable in business applications. Beyond semantic NLP, the ultimate goal of "narrative" NLP is to embody a full understanding of commonsense reasoning.[121]

Machine perception[122] is the ability to use input from sensors (such as cameras (visible spectrum or infrared), microphones, wireless signals, and active lidar, sonar, radar, and tactile sensors) to deduce aspects of the world. Applications include speech recognition,[123] facial recognition, and object recognition.[124] Computer vision is the ability to analyze visual input. Such input is usually ambiguous; a giant, fifty-meter-tall pedestrian far away may produce exactly the same pixels as a nearby normal-sized pedestrian, requiring the AI to judge the relative likelihood and reasonableness of different interpretations, for example by using its "object model" to assess that fifty-meter pedestrians do not exist.[125]

AI is heavily used in robotics.[126] Advanced robotic arms and other industrial robots, widely used in modern factories, can learn from experience how to move efficiently despite the presence of friction and gear slippage.[127] A modern mobile robot, when given a small, static, and visible environment, can easily determine its location and map its environment; however, dynamic environments, such as (in endoscopy) the interior of a patient's breathing body, pose a greater challenge. Motion planning is the process of breaking down a movement task into "primitives" such as individual joint movements. Such movement often involves compliant motion, a process where movement requires maintaining physical contact with an object.[129][130] Moravec's paradox generalizes that low-level sensorimotor skills that humans take for granted are, counterintuitively, difficult to program into a robot; the paradox is named after Hans Moravec, who stated in 1988 that "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility".[131][132] This is attributed to the fact that, unlike checkers, physical dexterity has been a direct target of natural selection for millions of years.[133]

Moravec's paradox can be extended to many forms of social intelligence.[135][136] Distributed multi-agent coordination of autonomous vehicles remains a difficult problem.[137] Affective computing is an interdisciplinary umbrella that comprises systems which recognize, interpret, process, or simulate human affects. Moderate successes related to affective computing include textual sentiment analysis and, more recently, multimodal affect analysis (see multimodal sentiment analysis), wherein AI classifies the affects displayed by a videotaped subject.[141]

In the long run, social skills and an understanding of human emotion and game theory would be valuable to a social agent. Being able to predict the actions of others by understanding their motives and emotional states would allow an agent to make better decisions. Some computer systems mimic human emotion and expressions to appear more sensitive to the emotional dynamics of human interaction, or to otherwise facilitate human–computer interaction.[142] Similarly, some virtual assistants are programmed to speak conversationally or even to banter humorously; this tends to give naïve users an unrealistic conception of how intelligent existing computer agents actually are.[143]

Historically, projects such as the Cyc knowledge base (1984) and the massive Japanese Fifth Generation Computer Systems initiative (1982–1992) attempted to cover the breadth of human cognition. These early projects failed to escape the limitations of non-quantitative symbolic logic models and, in retrospect, greatly underestimated the difficulty of cross-domain AI. Nowadays, the vast majority of current AI researchers work instead on tractable "narrow AI" applications (such as medical diagnosis or automobile navigation).[144] Many researchers predict that such "narrow AI" work in different individual domains will eventually be incorporated into a machine with artificial general intelligence (AGI), combining most of the narrow skills mentioned in this article and at some point even exceeding human ability in most or all these areas.[20][145] Many advances have general, cross-domain significance. One high-profile example is that DeepMind in the 2010s developed a "generalized artificial intelligence" that could learn many diverse Atari games on its own, and later developed a variant of the system which succeeds at sequential learning.[146][147][148] Besides transfer learning,[149] hypothetical AGI breakthroughs could include the development of reflective architectures that can engage in decision-theoretic metareasoning, and figuring out how to "slurp up" a comprehensive knowledge base from the entire unstructured Web. Some argue that some kind of (currently-undiscovered) conceptually straightforward, but mathematically difficult, "Master Algorithm" could lead to AGI. Finally, a few "emergent" approaches look to simulating human intelligence extremely closely, and believe that anthropomorphic features like an artificial brain or simulated child development may someday reach a critical point where general intelligence emerges.[151][152]

Many of the problems in this article may also require general intelligence, if machines are to solve the problems as well as people do. For example, even specific straightforward tasks, like machine translation, require that a machine read and write in both languages (NLP), follow the author's argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author's original intent (social intelligence). A problem like machine translation is considered "AI-complete", because all of these problems need to be solved simultaneously in order to reach human-level machine performance.

There is no established unifying theory or paradigm that guides AI research. Researchers disagree about many issues.[153] A few of the most long-standing questions that have remained unanswered are these: should artificial intelligence simulate natural intelligence by studying psychology or neurobiology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering?[17] Can intelligent behavior be described using simple, elegant principles (such as logic or optimization)? Or does it necessarily require solving a large number of completely unrelated problems?[18]

In the 1940s and 1950s, a number of researchers explored the connection between neurobiology, information theory, and cybernetics. Some of them built machines that used electronic networks to exhibit rudimentary intelligence, such as W. Grey Walter's turtles and the Johns Hopkins Beast. Many of these researchers gathered for meetings of the Teleological Society at Princeton University and the Ratio Club in England.[154] By 1960, this approach was largely abandoned, although elements of it would be revived in the 1980s.

When access to digital computers became possible in the mid-1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation. The research was centered in three institutions: Carnegie Mellon University, Stanford and MIT, and as described below, each one developed its own style of research. John Haugeland named these symbolic approaches to AI "good old fashioned AI" or "GOFAI".[155] During the 1960s, symbolic approaches had achieved great success at simulating high-level "thinking" in small demonstration programs. Approaches based on cybernetics or artificial neural networks were abandoned or pushed into the background.[156] Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the goal of their field.

Economist Herbert Simon and Allen Newell studied human problem-solving skills and attempted to formalize them, and their work laid the foundations of the field of artificial intelligence, as well as cognitive science, operations research and management science. Their research team used the results of psychological experiments to develop programs that simulated the techniques that people used to solve problems. This tradition, centered at Carnegie Mellon University, would eventually culminate in the development of the Soar architecture in the middle 1980s.[157][158]

Unlike Simon and Newell, John McCarthy felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem-solving, regardless of whether people used the same algorithms.[17] His laboratory at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning and learning.[159] Logic was also the focus of the work at the University of Edinburgh and elsewhere in Europe which led to the development of the programming language Prolog and the science of logic programming.[160]

Researchers at MIT (such as Marvin Minsky and Seymour Papert)[161] found that solving difficult problems in vision and natural language processing required ad hoc solutions; they argued that there was no simple and general principle (like logic) that would capture all the aspects of intelligent behavior. Roger Schank described their "anti-logic" approaches as "scruffy" (as opposed to the "neat" paradigms at CMU and Stanford).[18] Commonsense knowledge bases (such as Doug Lenat's Cyc) are an example of "scruffy" AI, since they must be built by hand, one complicated concept at a time.[162]

When computers with large memories became available around 1970, researchers from all three traditions began to build knowledge into AI applications.[163] This "knowledge revolution" led to the development and deployment of expert systems (introduced by Edward Feigenbaum), the first truly successful form of AI software.[42] A key component of the system architecture for all expert systems is the knowledge base, which stores the facts and rules that encode the system's domain knowledge.[164] The knowledge revolution was also driven by the realization that enormous amounts of knowledge would be required by many simple AI applications.

By the 1980s, progress in symbolic AI seemed to stall and many believed that symbolic systems would never be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition. A number of researchers began to look into "sub-symbolic" approaches to specific AI problems.[19] Sub-symbolic methods manage to approach intelligence without specific representations of knowledge.

This includes embodied, situated, behavior-based, and nouvelle AI. Researchers from the related field of robotics, such as Rodney Brooks, rejected symbolic AI and focused on the basic engineering problems that would allow robots to move and survive.[165] Their work revived the non-symbolic point of view of the early cybernetics researchers of the 1950s and reintroduced the use of control theory in AI. This coincided with the development of the embodied mind thesis in the related field of cognitive science: the idea that aspects of the body (such as movement, perception and visualization) are required for higher intelligence.

Within developmental robotics, developmental learning approaches are elaborated upon to allow robots to accumulate repertoires of novel skills through autonomous self-exploration, social interaction with human teachers, and the use of guidance mechanisms (active learning, maturation, motor synergies, etc.).[166][167]

Interest in neural networks and "connectionism" was revived by David Rumelhart and others in the middle of the 1980s.[170] Artificial neural networks are an example of soft computing: they are solutions to problems which cannot be solved with complete logical certainty, and where an approximate solution is often sufficient. Other soft computing approaches to AI include fuzzy systems, Grey system theory, evolutionary computation and many statistical tools. The application of soft computing to AI is studied collectively by the emerging discipline of computational intelligence.[171]

Much of traditional GOFAI got bogged down on ad hoc patches to symbolic computation that worked on their own toy models but failed to generalize to real-world results. However, around the 1990s, AI researchers adopted sophisticated mathematical tools, such as hidden Markov models (HMM), information theory, and normative Bayesian decision theory to compare or to unify competing architectures. The shared mathematical language permitted a high level of collaboration with more established fields (like mathematics, economics or operations research).[d] Compared with GOFAI, new "statistical learning" techniques such as HMM and neural networks were gaining higher levels of accuracy in many practical domains such as data mining, without necessarily acquiring a semantic understanding of the datasets. The increased successes with real-world data led to increasing emphasis on comparing different approaches against shared test data to see which approach performed best in a broader context than that provided by idiosyncratic toy models; AI research was becoming more scientific. Nowadays results of experiments are often rigorously measurable, and are sometimes (with difficulty) reproducible.[44][172] Different statistical learning techniques have different limitations; for example, basic HMM cannot model the infinite possible combinations of natural language. Critics note that the shift from GOFAI to statistical learning is often also a shift away from explainable AI. In AGI research, some scholars caution against over-reliance on statistical learning, and argue that continuing research into GOFAI will still be necessary to attain general intelligence.

AI has developed many tools to solve the most difficult problems in computer science. A few of the most general of these methods are discussed below.

Many problems in AI can be solved in theory by intelligently searching through many possible solutions:[182] Reasoning can be reduced to performing a search. For example, logical proof can be viewed as searching for a path that leads from premises to conclusions, where each step is the application of an inference rule.[183] Planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis.[184] Robotics algorithms for moving limbs and grasping objects use local searches in configuration space.[127] Many learning algorithms use search algorithms based on optimization.

Simple exhaustive searches[185] are rarely sufficient for most real-world problems: the search space (the number of places to search) quickly grows to astronomical numbers. The result is a search that is too slow or never completes. The solution, for many problems, is to use "heuristics" or "rules of thumb" that prioritize choices in favor of those that are more likely to reach a goal, and to do so in fewer steps. In some search methodologies heuristics can also serve to entirely eliminate some choices that are unlikely to lead to a goal (called "pruning the search tree"). Heuristics supply the program with a "best guess" for the path on which the solution lies.[186] Heuristics narrow the search to a smaller portion of the solution space.
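
As a rough illustration of heuristic search, the following Python sketch runs an A*-style best-first search over a small invented grid: a Manhattan-distance "best guess" steers the search toward the goal, and already-visited states are pruned. The grid, start and goal are purely hypothetical.

# A minimal sketch of heuristic (A*) search on a small, invented 4-connected
# grid, where 1 marks an obstacle. Illustrative only.
import heapq

GRID = [
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]

def manhattan(a, b):                      # the "best guess" heuristic
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def a_star(start, goal):
    frontier = [(manhattan(start, goal), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)                    # prune states already explored
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] == 0:
                heapq.heappush(frontier,
                               (cost + 1 + manhattan((nr, nc), goal),
                                cost + 1, (nr, nc), path + [(nr, nc)]))
    return None

print(a_star((0, 0), (3, 3)))             # prints a path of grid coordinates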

A very different kind of search came to prominence in the 1990s, based on the mathematical theory of optimization. For many problems, it is possible to begin the search with some form of a guess and then refine the guess incrementally until no more refinements can be made. These algorithms can be visualized as blind hill climbing: we begin the search at a random point on the landscape, and then, by jumps or steps, we keep moving our guess uphill, until we reach the top. Other optimization algorithms are simulated annealing, beam search and random optimization.[187]
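
As a rough illustration, the following Python sketch performs blind hill climbing on an invented one-dimensional "landscape": starting from a random guess, it keeps taking small uphill steps until no refinement improves the value. The function and step size are illustrative assumptions.

# A minimal sketch of blind hill climbing; the landscape and step size are invented.
import random

def landscape(x):
    return -(x - 3.0) ** 2 + 9.0           # a single peak at x = 3

def hill_climb(start, step=0.1, iters=1000):
    x = start
    for _ in range(iters):
        candidates = [x + step, x - step]  # small jumps around the current guess
        best = max(candidates, key=landscape)
        if landscape(best) <= landscape(x):
            break                          # no uphill move left: we are at a top
        x = best
    return x

print(round(hill_climb(random.uniform(-10, 10)), 2))   # approximately 3.0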

Evolutionary computation uses a form of optimization search. For example, evolutionary algorithms may begin with a population of organisms (the guesses) and then allow them to mutate and recombine, selecting only the fittest to survive each generation (refining the guesses). Classic evolutionary algorithms include genetic algorithms, gene expression programming, and genetic programming.[188] Alternatively, distributed search processes can coordinate via swarm intelligence algorithms. Two popular swarm algorithms used in search are particle swarm optimization (inspired by bird flocking) and ant colony optimization (inspired by ant trails).[189][190]
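
The following Python sketch illustrates the idea with the classic OneMax toy problem: a population of random bitstrings is mutated and recombined, and only the fittest survive each generation. The population size, mutation rate and fitness function are invented for illustration, not drawn from any system described above.

# A minimal, illustrative genetic algorithm for the OneMax toy problem:
# evolve bitstrings toward all ones. All parameters are assumptions.
import random

LENGTH, POP, GENERATIONS, MUTATION = 20, 30, 60, 0.02

def fitness(bits):
    return sum(bits)                               # count of 1s in the string

def crossover(a, b):
    cut = random.randrange(1, LENGTH)              # recombine two parents
    return a[:cut] + b[cut:]

def mutate(bits):
    return [1 - b if random.random() < MUTATION else b for b in bits]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP // 2]              # select the fittest
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(POP - len(survivors))]
    population = survivors + children

print(max(map(fitness, population)), "ones out of", LENGTH)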

Logic[191] is used for knowledge representation and problem solving, but it can be applied to other problems as well. For example, the satplan algorithm uses logic for planning[192] and inductive logic programming is a method for learning.[193]

Several different forms of logic are used in AI research. Propositional logic[194] involves truth functions such as "or" and "not". First-order logic[195] adds quantifiers and predicates, and can express facts about objects, their properties, and their relations with each other. Fuzzy set theory assigns a "degree of truth" (between 0 and 1) to vague statements such as "Alice is old" (or rich, or tall, or hungry) that are too linguistically imprecise to be completely true or false. Fuzzy logic is successfully used in control systems to allow experts to contribute vague rules such as "if you are close to the destination station and moving fast, increase the train's brake pressure"; these vague rules can then be numerically refined within the system. Fuzzy logic fails to scale well in knowledge bases; many AI researchers question the validity of chaining fuzzy-logic inferences.[e][197][198]
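
The train-braking rule above can be made concrete with a minimal Python sketch; the membership functions, the use of min for "and", and the crude defuzzification step are all invented assumptions rather than a standard implementation.

# A minimal sketch of a fuzzy rule like the train example above.
def close_to_station(distance_m):                  # degree of truth in [0, 1]
    return max(0.0, min(1.0, (500.0 - distance_m) / 500.0))

def moving_fast(speed_kmh):
    return max(0.0, min(1.0, (speed_kmh - 20.0) / 60.0))

def brake_pressure(distance_m, speed_kmh, max_pressure=100.0):
    # "IF close AND fast THEN increase brake pressure": AND taken as min,
    # and the rule's firing strength scales the output (a crude defuzzification).
    strength = min(close_to_station(distance_m), moving_fast(speed_kmh))
    return strength * max_pressure

print(brake_pressure(distance_m=150.0, speed_kmh=70.0))   # fairly strong braking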

Default logics, non-monotonic logics and circumscription[103] are forms of logic designed to help with default reasoning and the qualification problem. Several extensions of logic have been designed to handle specific domains of knowledge, such as: description logics;[91] situation calculus, event calculus and fluent calculus (for representing events and time);[92] causal calculus;[93] belief calculus (belief revision);[199] and modal logics.[94] Logics to model contradictory or inconsistent statements arising in multi-agent systems have also been designed, such as paraconsistent logics.

Many problems in AI (in reasoning, planning, learning, perception, and robotics) require the agent to operate with incomplete or uncertain information. AI researchers have devised a number of powerful tools to solve these problems using methods from probability theory and economics.[200]

Bayesian networks[201] are a very general tool that can be used for various problems: reasoning (using the Bayesian inference algorithm),[202] learning (using the expectation-maximization algorithm),[f][204] planning (using decision networks)[205] and perception (using dynamic Bayesian networks).[206] Probabilistic algorithms can also be used for filtering, prediction, smoothing and finding explanations for streams of data, helping perception systems to analyze processes that occur over time (e.g., hidden Markov models or Kalman filters).[206] Compared with symbolic logic, formal Bayesian inference is computationally expensive. For inference to be tractable, most observations must be conditionally independent of one another. Complicated graphs with diamonds or other "loops" (undirected cycles) can require a sophisticated method such as Markov chain Monte Carlo, which spreads an ensemble of random walkers throughout the Bayesian network and attempts to converge to an assessment of the conditional probabilities. Bayesian networks are used on Xbox Live to rate and match players; wins and losses are "evidence" of how good a player is[citation needed]. AdSense uses a Bayesian network with over 300 million edges to learn which ads to serve.
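
As a rough illustration of Bayesian inference, the following Python sketch performs exact inference by enumeration in a tiny, invented rain/sprinkler/wet-grass network; the probabilities are textbook-style examples, not taken from any system mentioned above.

# A minimal sketch of exact inference by enumeration in a tiny, invented
# Bayesian network (not a production algorithm).
P_RAIN = {True: 0.2, False: 0.8}
P_SPRINKLER = {True: {True: 0.01, False: 0.99},    # P(sprinkler | rain)
               False: {True: 0.4, False: 0.6}}
P_WET = {(True, True): 0.99, (True, False): 0.9,   # P(wet | sprinkler, rain)
         (False, True): 0.8, (False, False): 0.0}

def joint(rain, sprinkler, wet):
    p = P_RAIN[rain] * P_SPRINKLER[rain][sprinkler]
    p_wet = P_WET[(sprinkler, rain)]
    return p * (p_wet if wet else 1.0 - p_wet)

# P(rain | grass is wet): sum the joint over the sprinkler variable, then normalize.
numer = sum(joint(True, s, True) for s in (True, False))
denom = sum(joint(r, s, True) for r in (True, False) for s in (True, False))
print(round(numer / denom, 3))                      # about 0.36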

A key concept from the science of economics is "utility": a measure of how valuable something is to an intelligent agent. Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis,[208] and information value theory.[109] These tools include models such as Markov decision processes,[209] dynamic decision networks,[206] game theory and mechanism design.[210]
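
As a rough illustration of these decision-theoretic tools, the following Python sketch runs value iteration on a tiny, invented Markov decision process with two states and two actions; the transition probabilities, rewards and discount factor are arbitrary assumptions.

# A minimal value-iteration sketch for a tiny, invented Markov decision process.
STATES = ("poor", "rich")
ACTIONS = ("save", "spend")
GAMMA = 0.9                                         # discount factor

# transitions[state][action] = list of (probability, next_state, reward)
TRANSITIONS = {
    "poor": {"save":  [(0.7, "rich", 5.0), (0.3, "poor", 0.0)],
             "spend": [(1.0, "poor", 1.0)]},
    "rich": {"save":  [(1.0, "rich", 3.0)],
             "spend": [(0.5, "poor", 10.0), (0.5, "rich", 10.0)]},
}

values = {s: 0.0 for s in STATES}
for _ in range(100):                                # iterate until roughly stable
    values = {s: max(sum(p * (r + GAMMA * values[s2])
                         for p, s2, r in TRANSITIONS[s][a])
                     for a in ACTIONS)
              for s in STATES}

policy = {s: max(ACTIONS, key=lambda a: sum(p * (r + GAMMA * values[s2])
                                            for p, s2, r in TRANSITIONS[s][a]))
          for s in STATES}
print(values, policy)                               # expected utilities and best actions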

The simplest AI applications can be divided into two types: classifiers ("if shiny then diamond") and controllers ("if shiny then pick up"). Controllers do, however, also classify conditions before inferring actions, and therefore classification forms a central part of many AI systems. Classifiers are functions that use pattern matching to determine a closest match. They can be tuned according to examples, making them very attractive for use in AI. These examples are known as observations or patterns. In supervised learning, each pattern belongs to a certain predefined class. A class can be seen as a decision that has to be made. All the observations combined with their class labels are known as a data set. When a new observation is received, that observation is classified based on previous experience.[211]
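
A minimal Python sketch of this idea is a nearest-neighbour classifier: a new observation is assigned the class of the closest previously seen example. The tiny "shiny/dull" data set below is invented for illustration.

# A minimal nearest-neighbour classifier: new observations are labelled by their
# closest previously seen example (the "past experience"). Data set is invented.
def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(observation, dataset):
    # dataset: list of (feature_vector, class_label) pairs
    nearest = min(dataset, key=lambda pair: distance(observation, pair[0]))
    return nearest[1]

training_set = [((1.0, 1.0), "shiny"), ((1.2, 0.8), "shiny"),
                ((5.0, 5.5), "dull"),  ((6.0, 5.0), "dull")]
print(classify((0.9, 1.1), training_set))           # -> "shiny"
print(classify((5.5, 5.2), training_set))           # -> "dull"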

A classifier can be trained in various ways; there are many statistical and machine learning approaches. The decision tree[212] is perhaps the most widely used machine learning algorithm. Other widely used classifiers are the neural network,[214] k-nearest neighbor algorithm,[g][216] kernel methods such as the support vector machine (SVM),[h][218] Gaussian mixture model,[219] and the extremely popular naive Bayes classifier.[i][221] Classifier performance depends greatly on the characteristics of the data to be classified, such as the dataset size, distribution of samples across classes, the dimensionality, and the level of noise. Model-based classifiers perform well if the assumed model is an extremely good fit for the actual data. Otherwise, if no matching model is available, and if accuracy (rather than speed or scalability) is the sole concern, conventional wisdom is that discriminative classifiers (especially SVM) tend to be more accurate than model-based classifiers such as "naive Bayes" on most practical data sets.[222]

Neural networks were inspired by the architecture of neurons in the human brain. A simple "neuron" N accepts input from other neurons, each of which, when activated (or "fired"), casts a weighted "vote" for or against whether neuron N should itself activate. Learning requires an algorithm to adjust these weights based on the training data; one simple algorithm (dubbed "fire together, wire together") is to increase the weight between two connected neurons when the activation of one triggers the successful activation of another. The neural network forms "concepts" that are distributed among a subnetwork of shared[j] neurons that tend to fire together; a concept meaning "leg" might be coupled with a subnetwork meaning "foot" that includes the sound for "foot". Neurons have a continuous spectrum of activation; in addition, neurons can process inputs in a nonlinear way rather than weighing straightforward votes. Modern neural networks can learn both continuous functions and, surprisingly, digital logical operations. Neural networks' early successes included predicting the stock market and (in 1995) a mostly self-driving car.[k] In the 2010s, advances in neural networks using deep learning thrust AI into widespread public consciousness and contributed to an enormous upshift in corporate AI spending; for example, AI-related M&A in 2017 was over 25 times as large as in 2015.[225][226]
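
The "weighted vote" neuron and the "fire together, wire together" rule can be sketched in a few lines of Python; the weights, threshold and learning rate below are invented, and the update is only a crude Hebbian-style illustration rather than any particular published algorithm.

# A minimal sketch of a "weighted vote" neuron with a crude Hebbian-style update.
def fires(inputs, weights, threshold=1.0):
    return sum(i * w for i, w in zip(inputs, weights)) >= threshold

def hebbian_update(inputs, weights, learning_rate=0.1):
    # if an input was active while the neuron fired, strengthen that connection
    if fires(inputs, weights):
        return [w + learning_rate * i for i, w in zip(inputs, weights)]
    return weights

weights = [0.4, 0.7, 0.2]
pattern = [1, 1, 0]                                  # two upstream neurons firing
for _ in range(5):
    weights = hebbian_update(pattern, weights)
print(weights)                                       # the active connections grew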

The study of non-learning artificial neural networks[214] began in the decade before the field of AI research was founded, in the work of Walter Pitts and Warren McCullouch. Frank Rosenblatt invented the perceptron, a learning network with a single layer, similar to the old concept of linear regression. Early pioneers also include Alexey Grigorevich Ivakhnenko, Teuvo Kohonen, Stephen Grossberg, Kunihiko Fukushima, Christoph von der Malsburg, David Willshaw, Shun-Ichi Amari, Bernard Widrow, John Hopfield, Eduardo R. Caianiello, and others[citation needed].

The main categories of networks are acyclic or feedforward neural networks (where the signal passes in only one direction) and recurrent neural networks (which allow feedback and short-term memories of previous input events). Among the most popular feedforward networks are perceptrons, multi-layer perceptrons and radial basis networks.[227] Neural networks can be applied to the problem of intelligent control (for robotics) or learning, using such techniques as Hebbian learning ("fire together, wire together"), GMDH or competitive learning.[228]

Today, neural networks are often trained by the backpropagation algorithm, which had been around since 1970 as the reverse mode of automatic differentiation published by Seppo Linnainmaa,[229][230] and was introduced to neural networks by Paul Werbos.[231][232][233]
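
As a rough illustration of backpropagation, the following NumPy sketch trains a tiny two-layer network on the XOR problem by propagating the error gradient backwards through the layers; the layer sizes, learning rate and number of iterations are arbitrary choices, and results vary with the random initialization.

# A minimal backpropagation sketch: a small two-layer network learning XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: gradients of the squared error, propagated layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(out.round(2).ravel())   # should be close to [0, 1, 1, 0]; varies with initialization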

Hierarchical temporal memory is an approach that models some of the structural and algorithmic properties of the neocortex.[234]

To summarize, most neural networks use some form of gradient descent on a hand-created neural topology. However, some research groups, such as Uber, argue that simple neuroevolution to mutate new neural network topologies and weights may be competitive with sophisticated gradient descent approaches[citation needed]. One advantage of neuroevolution is that it may be less prone to get caught in "dead ends".[235]

Deep learning is any artificial neural network that can learn a long chain of causal links. For example, a feedforward network with six hidden layers can learn a seven-link causal chain (six hidden layers + output layer) and has a "credit assignment path" (CAP) depth of seven[citation needed]. Many deep learning systems need to be able to learn chains ten or more causal links in length.[236] Deep learning has transformed many important subfields of artificial intelligence, including computer vision, speech recognition, natural language processing and others.[237][238][236]

According to one overview,[239] the expression "Deep Learning" was introduced to the machine learning community by Rina Dechter in 1986[240] and gained traction after Igor Aizenberg and colleagues introduced it to artificial neural networks in 2000.[241] The first functional Deep Learning networks were published by Alexey Grigorevich Ivakhnenko and V. G. Lapa in 1965.[242][page needed] These networks are trained one layer at a time. Ivakhnenko's 1971 paper[243] describes the learning of a deep feedforward multilayer perceptron with eight layers, already much deeper than many later networks. In 2006, a publication by Geoffrey Hinton and Ruslan Salakhutdinov introduced another way of pre-training many-layered feedforward neural networks (FNNs) one layer at a time, treating each layer in turn as an unsupervised restricted Boltzmann machine, then using supervised backpropagation for fine-tuning. Similar to shallow artificial neural networks, deep neural networks can model complex non-linear relationships. Over the last few years, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer.[245]

Deep learning often uses convolutional neural networks (CNNs), whose origins can be traced back to the Neocognitron introduced by Kunihiko Fukushima in 1980.[246] In 1989, Yann LeCun and colleagues applied backpropagation to such an architecture. In the early 2000s, in an industrial application, CNNs already processed an estimated 10% to 20% of all the checks written in the US.[247] Since 2011, fast implementations of CNNs on GPUs have won many visual pattern recognition competitions.[236]
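
The convolution operation at the heart of a CNN can be sketched directly in NumPy: a small filter is slid over an image and its response recorded at every position, producing a feature map. The 5x5 "image" and the vertical-edge filter below are invented for illustration and do not reflect any of the historical systems mentioned above.

# A minimal sketch of 2-D convolution, the core operation of a CNN.
import numpy as np

image = np.array([[0, 0, 1, 1, 1],
                  [0, 0, 1, 1, 1],
                  [0, 0, 1, 1, 1],
                  [0, 0, 1, 1, 1],
                  [0, 0, 1, 1, 1]], dtype=float)

edge_filter = np.array([[-1, 0, 1],
                        [-1, 0, 1],
                        [-1, 0, 1]], dtype=float)   # responds to vertical edges

def convolve2d(img, kernel):
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(img[r:r + kh, c:c + kw] * kernel)
    return out

feature_map = np.maximum(convolve2d(image, edge_filter), 0)   # ReLU activation
print(feature_map)    # strongest responses sit over the 0-to-1 boundary in the image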

CNNs with 12 convolutional layers were used in conjunction with reinforcement learning by Deepmind's "AlphaGo Lee", the program that beat a top Go champion in 2016.[248]

Early on, deep learning was also applied to sequence learning with recurrent neural networks (RNNs)[249] which are in theory Turing complete[250] and can run arbitrary programs to process arbitrary sequences of inputs. The depth of an RNN is unlimited and depends on the length of its input sequence; thus, an RNN is an example of deep learning.[236] RNNs can be trained by gradient descent[251][252][253] but suffer from the vanishing gradient problem.[237][254] In 1992, it was shown that unsupervised pre-training of a stack of recurrent neural networks can speed up subsequent supervised learning of deep sequential problems.[255]

Numerous researchers now use variants of a deep learning recurrent NN called the long short-term memory (LSTM) network published by Hochreiter & Schmidhuber in 1997.[256] LSTM is often trained by Connectionist Temporal Classification (CTC).[257] At Google, Microsoft and Baidu this approach has revolutionized speech recognition.[258][259][260] For example, in 2015, Google's speech recognition experienced a dramatic performance jump of 49% through CTC-trained LSTM, which is now available through Google Voice to billions of smartphone users.[261] Google also used LSTM to improve machine translation,[262] Language Modeling[263] and Multilingual Language Processing.[264] LSTM combined with CNNs also improved automatic image captioning[265] and a plethora of other applications.

AI, like electricity or the steam engine, is a general purpose technology. There is no consensus on how to characterize which tasks AI tends to excel at.[266] While projects such as AlphaZero have succeeded in generating their own knowledge from scratch, many other machine learning projects require large training datasets.[267][268] Researcher Andrew Ng has suggested, as a "highly imperfect rule of thumb", that "almost anything a typical human can do with less than one second of mental thought, we can probably now or in the near future automate using AI."[269] Moravec's paradox suggests that AI lags humans at many tasks that the human brain has specifically evolved to perform well.[133]

Games provide a well-publicized benchmark for assessing rates of progress. AlphaGo around 2016 brought the era of classical board-game benchmarks to a close. Games of imperfect knowledge provide new challenges to AI in the area of game theory.[270][271] E-sports such as StarCraft continue to provide additional public benchmarks.[272][273] There are many competitions and prizes, such as the Imagenet Challenge, to promote research in artificial intelligence. The most common areas of competition include general machine intelligence, conversational behavior, data-mining, robotic cars, and robot soccer as well as conventional games.[274]

The "imitation game" (an interpretation of the 1950 Turing test that assesses whether a computer can imitate a human) is nowadays considered too exploitable to be a meaningful benchmark.[275] A derivative of the Turing test is the Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA). As the name implies, this helps to determine that a user is an actual person and not a computer posing as a human. In contrast to the standard Turing test, CAPTCHA is administered by a machine and targeted to a human as opposed to being administered by a human and targeted to a machine. A computer asks a user to complete a simple test then generates a grade for that test. Computers are unable to solve the problem, so correct solutions are deemed to be the result of a person taking the test. A common type of CAPTCHA is the test that requires the typing of distorted letters, numbers or symbols that appear in an image undecipherable by a computer.

Proposed "universal intelligence" tests aim to compare how well machines, humans, and even non-human animals perform on problem sets that are generic as possible. At an extreme, the test suite can contain every possible problem, weighted by Kolmogorov complexity; unfortunately, these problem sets tend to be dominated by impoverished pattern-matching exercises where a tuned AI can easily exceed human performance levels.[277][278]

AI is relevant to any intellectual task. Modern artificial intelligence techniques are pervasive[280] and are too numerous to list here. Frequently, when a technique reaches mainstream use, it is no longer considered artificial intelligence; this phenomenon is described as the AI effect.

High-profile examples of AI include autonomous vehicles (such as drones and self-driving cars), medical diagnosis, creating art (such as poetry), proving mathematical theorems, playing games (such as chess or Go), search engines (such as Google search), online assistants (such as Siri), image recognition in photographs, spam filtering, predicting flight delays,[282] prediction of judicial decisions,[283] targeting online advertisements,[284][285] and energy storage.[286]

With social media sites overtaking TV as a source for news for young people and news organizations increasingly reliant on social media platforms for generating distribution,[287] major publishers now use artificial intelligence (AI) technology to post stories more effectively and generate higher volumes of traffic.[288]

AI can also produce deepfakes, a content-altering technology. ZDNet reports that a deepfake "presents something that did not actually occur." Though 88% of Americans believe deepfakes can cause more harm than good, only 47% of them believe they could be targeted. An election year also opens public discourse to the threat of falsified videos of politicians.[289]

AI in healthcare is often used for classification, whether to automate initial evaluation of a CT scan or EKG or to identify high-risk patients for population health. The breadth of applications is rapidly increasing. As an example, AI is being applied to the high-cost problem of dosage issues, where findings suggested that AI could save $16 billion. In 2016, a groundbreaking study in California found that a mathematical formula developed with the help of AI correctly determined the accurate dose of immunosuppressant drugs to give to organ patients.[290]

Artificial intelligence is assisting doctors. According to Bloomberg Technology, Microsoft has developed AI to help doctors find the right treatments for cancer.[291] There is a great amount of research and a large number of drugs developed for cancer: more than 800 medicines and vaccines to treat it. The sheer number of options makes it harder for doctors to choose the right drugs for their patients. Microsoft is working on a project to develop a machine called "Hanover"[citation needed]. Its goal is to memorize all the papers relevant to cancer and help predict which combinations of drugs will be most effective for each patient. One project currently underway targets myeloid leukemia, a fatal cancer whose treatment has not improved in decades. Another study reportedly found that artificial intelligence was as good as trained doctors at identifying skin cancers.[292] Another study uses artificial intelligence to monitor multiple high-risk patients by asking each patient numerous questions based on data acquired from live doctor-to-patient interactions.[293] In one study using transfer learning, the machine performed a diagnosis similarly to a well-trained ophthalmologist and could generate a decision within 30 seconds on whether or not the patient should be referred for treatment, with more than 95% accuracy.[294]

According to CNN, a recent study by surgeons at the Children's National Medical Center in Washington successfully demonstrated surgery with an autonomous robot. The team supervised the robot while it performed soft-tissue surgery, stitching together a pig's bowel during open surgery, and doing so better than a human surgeon, the team claimed.[295] IBM has created its own artificial intelligence computer, the IBM Watson, which has beaten human intelligence (at some levels). Watson has struggled to achieve success and adoption in healthcare.[296]

Advancements in AI have contributed to the growth of the automotive industry through the creation and evolution of self-driving vehicles. As of 2016, there are over 30 companies utilizing AI in the creation of self-driving cars. A few companies involved with AI include Tesla, Google, and Apple.[297]

Many components contribute to the functioning of self-driving cars. These vehicles incorporate systems such as braking, lane changing, collision prevention, navigation and mapping. Together, these systems, as well as high-performance computers, are integrated into one complex vehicle.[298]

Recent developments in autonomous automobiles have made the innovation of self-driving trucks possible, though they are still in the testing phase. The UK government has passed legislation to begin testing of self-driving truck platoons in 2018.[299] Self-driving truck platoons are a fleet of self-driving trucks following the lead of one non-self-driving truck, so the truck platoons aren't entirely autonomous yet. Meanwhile, Daimler, a German automobile corporation, is testing the Freightliner Inspiration, a semi-autonomous truck that will only be used on the highway.[300]

One main factor that influences the ability for a driver-less automobile to function is mapping. In general, the vehicle would be pre-programmed with a map of the area being driven. This map would include data on the approximations of street light and curb heights in order for the vehicle to be aware of its surroundings. However, Google has been working on an algorithm with the purpose of eliminating the need for pre-programmed maps and instead, creating a device that would be able to adjust to a variety of new surroundings.[301] Some self-driving cars are not equipped with steering wheels or brake pedals, so there has also been research focused on creating an algorithm that is capable of maintaining a safe environment for the passengers in the vehicle through awareness of speed and driving conditions.[302]

Another factor influencing the ability of a driver-less automobile is the safety of the passenger. To make a driver-less automobile, engineers must program it to handle high-risk situations, such as a potential head-on collision with pedestrians. The car's main goal should be to make a decision that avoids hitting pedestrians while protecting the passengers in the car. But there is a possibility the car would need to make a decision that puts someone in danger; in other words, it would need to decide whether to save the pedestrians or the passengers.[303] The programming of the car in these situations is crucial to a successful driver-less automobile.

Financial institutions have long used artificial neural network systems to detect charges or claims outside of the norm, flagging these for human investigation. The use of AI in banking can be traced back to 1987, when Security Pacific National Bank in the US set up a Fraud Prevention Task Force to counter the unauthorized use of debit cards.[304] Programs like Kasisto and Moneystream are using AI in financial services.

Banks use artificial intelligence systems today to organize operations, maintain book-keeping, invest in stocks, and manage properties. AI can react to changes overnight or when business is not taking place.[305] In August 2001, robots beat humans in a simulated financial trading competition.[306] AI has also reduced fraud and financial crimes by monitoring behavioral patterns of users for any abnormal changes or anomalies.[307][308][309]

AI is increasingly being used by corporations. Jack Ma has controversially predicted that AI CEOs are 30 years away.[310][311]

The use of AI machines in the market, in applications such as online trading and decision making, has changed major economic theories.[312] For example, AI-based buying and selling platforms have changed the law of supply and demand in that it is now possible to easily estimate individualized demand and supply curves and thus individualized pricing. Furthermore, AI machines reduce information asymmetry in the market, making markets more efficient while reducing the volume of trades[citation needed]. AI in the markets also limits the consequences of market behavior, again making markets more efficient[citation needed]. Other theories where AI has had an impact include rational choice, rational expectations, game theory, the Lewis turning point, portfolio optimization and counterfactual thinking[citation needed]. In August 2019, the AICPA introduced an AI training course for accounting professionals.[313]

The cybersecurity arena faces significant challenges in the form of large-scale hacking attacks of different types that harm organizations of all kinds and create billions of dollars in business damage. Artificial intelligence and Natural Language Processing (NLP) have begun to be used by security companies, for example in SIEM (Security Information and Event Management) solutions. The more advanced of these solutions use AI and NLP to automatically sort the data in networks into high-risk and low-risk information. This enables security teams to focus on the attacks that have the potential to do real harm to the organization, rather than becoming victims of attacks such as Denial of Service (DoS), malware and others.

Read more:

Artificial intelligence - Wikipedia

artificial intelligence | Definition, Examples, and …

Artificial intelligence (AI), the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience. Since the development of the digital computer in the 1940s, it has been demonstrated that computers can be programmed to carry out very complex tasks, such as discovering proofs for mathematical theorems or playing chess, with great proficiency. Still, despite continuing advances in computer processing speed and memory capacity, there are as yet no programs that can match human flexibility over wider domains or in tasks requiring much everyday knowledge. On the other hand, some programs have attained the performance levels of human experts and professionals in performing certain specific tasks, so that artificial intelligence in this limited sense is found in applications as diverse as medical diagnosis, computer search engines, and voice or handwriting recognition.

Top Questions

Artificial intelligence (AI) is the ability of a computer or a robot controlled by a computer to do tasks that are usually done by humans because they require human intelligence and discernment. Although there are no AIs that can perform the wide variety of tasks an ordinary human can do, some AIs can match humans in specific tasks.

No, artificial intelligence and machine learning are not the same, but they are closely related. Machine learning is the method to train a computer to learn from its inputs but without explicit programming for every circumstance. Machine learning helps a computer to achieve artificial intelligence.

All but the simplest human behaviour is ascribed to intelligence, while even the most complicated insect behaviour is never taken as an indication of intelligence. What is the difference? Consider the behaviour of the digger wasp, Sphex ichneumoneus. When the female wasp returns to her burrow with food, she first deposits it on the threshold, checks for intruders inside her burrow, and only then, if the coast is clear, carries her food inside. The real nature of the wasp's instinctual behaviour is revealed if the food is moved a few inches away from the entrance to her burrow while she is inside: on emerging, she will repeat the whole procedure as often as the food is displaced. Intelligence, conspicuously absent in the case of Sphex, must include the ability to adapt to new circumstances.

Psychologists generally do not characterize human intelligence by just one trait but by the combination of many diverse abilities. Research in AI has focused chiefly on the following components of intelligence: learning, reasoning, problem solving, perception, and using language.

There are a number of different forms of learning as applied to artificial intelligence. The simplest is learning by trial and error. For example, a simple computer program for solving mate-in-one chess problems might try moves at random until mate is found. The program might then store the solution with the position so that the next time the computer encountered the same position it would recall the solution. This simple memorizing of individual items and procedures, known as rote learning, is relatively easy to implement on a computer. More challenging is the problem of implementing what is called generalization. Generalization involves applying past experience to analogous new situations. For example, a program that learns the past tense of regular English verbs by rote will not be able to produce the past tense of a word such as "jump" unless it previously had been presented with "jumped", whereas a program that is able to generalize can learn the "add ed" rule and so form the past tense of "jump" based on experience with similar verbs.
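
The contrast between rote learning and generalization can be sketched in a few lines of Python; the memorized word list and the simplified "add ed" rule below are illustrative assumptions.

# A minimal sketch contrasting rote learning with generalization, using the
# past-tense example above.
rote_memory = {"walk": "walked", "talk": "talked", "look": "looked"}

def rote_past_tense(verb):
    return rote_memory.get(verb)                # fails on anything not seen before

def generalizing_past_tense(verb):
    return rote_memory.get(verb, verb + "ed")   # falls back on the learned rule

print(rote_past_tense("jump"))                  # None: never memorized "jumped"
print(generalizing_past_tense("jump"))          # "jumped": the rule generalizes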

What Is Artificial Intelligence (AI)? | PCMag

In September 1955, John McCarthy, a young assistant professor of mathematics at Dartmouth College, boldly proposed that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."

McCarthy called this new field of study "artificial intelligence," and suggested that a two-month effort by a group of 10 scientists could make significant advances in developing machines that could "use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves."

At the time, scientists optimistically believed we would soon have thinking machines doing any work a human could do. Now, more than six decades later, advances in computer science and robotics have helped us automate many of the tasks that previously required the physical and cognitive labor of humans.

But true artificial intelligence, as McCarthy conceived it, continues to elude us.

A great challenge with artificial intelligence is that it's a broad term, and there's no clear agreement on its definition.

As mentioned, McCarthy proposed AI would solve problems the way humans do: "The ultimate effort is to make computer programs that can solve problems and achieve goals in the world as well as humans," McCarthy said.

Andrew Moore, Dean of Computer Science at Carnegie Mellon University, provided a more modern definition of the term in a 2017 interview with Forbes: "Artificial intelligence is the science and engineering of making computers behave in ways that, until recently, we thought required human intelligence."

But our understanding of "human intelligence" and our expectations of technology are constantly evolving. Zachary Lipton, the editor of Approximately Correct, describes the term AI as "aspirational, a moving target based on those capabilities that humans possess but which machines do not." In other words, the things we ask of AI change over time.

For instance, in the 1950s, scientists viewed chess and checkers as great challenges for artificial intelligence. But today, very few would consider chess-playing machines to be AI. Computers are already tackling much more complicated problems, including detecting cancer, driving cars, and processing voice commands.

The first generation of AI scientists and visionaries believed we would eventually be able to create human-level intelligence.

But several decades of AI research have shown that replicating the complex problem-solving and abstract thinking of the human brain is supremely difficult. For one thing, we humans are very good at generalizing knowledge and applying concepts we learn in one field to another. We can also make relatively reliable decisions based on intuition and with little information. Over the years, human-level AI has become known as artificial general intelligence (AGI) or strong AI.

The initial hype and excitement surrounding AI drew interest and funding from government agencies and large companies. But it soon became evident that contrary to early perceptions, human-level intelligence was not right around the corner, and scientists were hard-pressed to reproduce the most basic functionalities of the human mind. In the 1970s, unfulfilled promises and expectations eventually led to the "AI winter," a long period during which public interest and funding in AI dampened.

It took many years of innovation and a revolution in deep-learning technology to revive interest in AI. But even now, despite enormous advances in artificial intelligence, none of the current approaches to AI can solve problems in the same way the human mind does, and most experts believe AGI is at least decades away.

On the flip side, narrow or weak AI doesn't aim to reproduce the functionality of the human brain, and instead focuses on optimizing a single task. Narrow AI has already found many real-world applications, such as recognizing faces, transforming audio to text, recommending videos on YouTube, and displaying personalized content in the Facebook News Feed.

Many scientists believe that we will eventually create AGI, but some have a dystopian vision of the age of thinking machines. In 2014, renowned English physicist Stephen Hawking described AI as an existential threat to mankind, warning that "full artificial intelligence could spell the end of the human race."

In 2015, Y Combinator President Sam Altman and Tesla CEO Elon Musk, two other believers in AGI, co-founded OpenAI, a nonprofit research lab that aims to create artificial general intelligence in a manner that benefits all of humankind. (Musk has since departed.)

Others believe that artificial general intelligence is a pointless goal. "We don't need to duplicate humans. That's why I focus on having tools to help us rather than duplicate what we already know how to do. We want humans and machines to partner and do something that they cannot do on their own," says Peter Norvig, Director of Research at Google.

Scientists such as Norvig believe that narrow AI can help automate repetitive and laborious tasks and help humans become more productive. For instance, doctors can use AI algorithms to examine X-ray scans at high speeds, allowing them to see more patients. Another example of narrow AI is fighting cyberthreats: Security analysts can use AI to find signals of data breaches in the gigabytes of data being transferred through their companies' networks.

Early AI-creation efforts were focused on transforming human knowledge and intelligence into static rules. Programmers had to meticulously write code (if-then statements) for every rule that defined the behavior of the AI. The advantage of rule-based AI, which later became known as "good old-fashioned artificial intelligence" (GOFAI), is that humans have full control over the design and behavior of the system they develop.

Rule-based AI is still very popular in fields where the rules are clear-cut. One example is video games, in which developers want AI to deliver a predictable user experience.

The problem with GOFAI is that contrary to McCarthy's initial premise, we can't precisely describe every aspect of learning and behavior in ways that can be transformed into computer rules. For instance, defining logical rules for recognizing voices and images, a complex feat that humans accomplish instinctively, is one area where classic AI has historically struggled.

An alternative approach to creating artificial intelligence is machine learning. Instead of developing rules for AI manually, machine-learning engineers "train" their models by providing them with a massive amount of samples. The machine-learning algorithm analyzes and finds patterns in the training data, then develops its own behavior. For instance, a machine-learning model can train on large volumes of historical sales data for a company and then make sales forecasts.
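
As a rough illustration of the sales-forecast example, the following Python sketch fits a linear trend to invented monthly sales figures and extrapolates one month ahead; real machine-learning forecasting systems are, of course, far more elaborate.

# A minimal sketch of the sales-forecast example: fit a trend to invented
# historical monthly sales and extrapolate it one month ahead.
import numpy as np

months = np.arange(1, 13)                                  # past 12 months
sales = np.array([100, 104, 110, 113, 120, 125,
                  131, 137, 140, 147, 152, 158], dtype=float)

slope, intercept = np.polyfit(months, sales, deg=1)        # learn a linear trend
next_month = 13
forecast = slope * next_month + intercept
print(round(forecast, 1))                                  # about 163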

Deep learning, a subset of machine learning, has become very popular in the past few years. It's especially good at processing unstructured data such as images, video, audio, and text documents. For instance, you can create a deep-learning image classifier and train it on millions of available labeled photos, such as the ImageNet dataset. The trained AI model will be able to recognize objects in images with accuracy that often surpasses humans. Advances in deep learning have pushed AI into many complicated and critical domains, such as medicine, self-driving cars, and education.

One of the challenges with deep-learning models is that they develop their own behavior based on training data, which makes them complex and opaque. Often, even deep-learning experts have a hard time explaining the decisions and inner workings of the AI models they create.

Here are some of the ways AI is bringing tremendous changes to different domains.

Self-driving cars: Advances in artificial intelligence have brought us very close to making the decades-long dream of autonomous driving a reality. AI algorithms are one of the main components that enable self-driving cars to make sense of their surroundings, taking in feeds from cameras installed around the vehicle and detecting objects such as roads, traffic signs, other cars, and people.

Digital assistants and smart speakers: Siri, Alexa, Cortana, and Google Assistant use artificial intelligence to transform spoken words to text and map the text to specific commands. AI helps digital assistants make sense of different nuances in spoken language and synthesize human-like voices.

Translation: For many decades, translating text between different languages was a pain point for computers. But deep learning has helped create a revolution in services such as Google Translate. To be clear, AI still has a long way to go before it masters human language, but so far, advances are spectacular.

Facial recognition: Facial recognition is one of the most popular applications of artificial intelligence. It has many uses, including unlocking your phone, paying with your face, and detecting intruders in your home. But the increasing availability of facial-recognition technology has also given rise to concerns regarding privacy, security, and civil liberties.

Medicine: From detecting skin cancer and analyzing X-rays and MRI scans to providing personalized health tips and managing entire healthcare systems, artificial intelligence is becoming a key enabler in healthcare and medicine. AI won't replace your doctor, but it could help to bring about better health services, especially in underprivileged areas, where AI-powered health assistants can take some of the load off the shoulders of the few general practitioners who have to serve large populations.

In our quest to crack the code of AI and create thinking machines, we've learned a lot about the meaning of intelligence and reasoning. And thanks to advances in AI, we are accomplishing tasks alongside our computers that were once considered the exclusive domain of the human brain.

Some of the emerging fields where AI is making inroads include music and arts, where AI algorithms are manifesting their own unique kind of creativity. There's also hope AI will help fight climate change, care for the elderly, and eventually create a utopian future where humans don't need to work at all.

There's also fear that AI will cause mass unemployment, disrupt the economic balance, trigger another world war, and eventually drive humans into slavery.

We still don't know which direction AI will take. But as the science and technology of artificial intelligence continues to improve at a steady pace, our expectations and definition of AI will shift, and what we consider AI today might become the mundane functions of tomorrow's computers.

How Artificial Intelligence Is Totally Changing Everything …

Back in October 1950, British techno-visionary Alan Turing published an article called "Computing Machinery and Intelligence" in the journal Mind that raised what at the time must have seemed to many like a science-fiction fantasy.

"May not machines carry out something which ought to be described as thinking but which is very different from what a man does?" Turing asked.

Turing thought that they could. Moreover, he believed, it was possible to create software for a digital computer that enabled it to observe its environment and to learn new things, from playing chess to understanding and speaking a human language. And he thought machines eventually could develop the ability to do that on their own, without human guidance. "We may hope that machines will eventually compete with men in all purely intellectual fields," he predicted.

Nearly 70 years later, Turing's seemingly outlandish vision has become a reality. Artificial intelligence, commonly referred to as AI, gives machines the ability to learn from experience and perform cognitive tasks, the sort of stuff that once only the human brain seemed capable of doing.

AI is rapidly spreading throughout civilization, where it has the promise of doing everything from enabling autonomous vehicles to navigate the streets to making more accurate hurricane forecasts. On an everyday level, AI figures out what ads to show you on the web, and powers those friendly chatbots that pop up when you visit an e-commerce website to answer your questions and provide customer service. And AI-powered personal assistants in voice-activated smart home devices perform myriad tasks, from controlling our TVs and doorbells to answering trivia questions and helping us find our favorite songs.

But we're just getting started with it. As AI technology grows more sophisticated and capable, it's expected to massively boost the world's economy, creating about $13 trillion worth of additional activity by 2030, according to a McKinsey Global Institute forecast.

"AI is still early in adoption, but adoption is accelerating and it is being used across all industries," says Sarah Gates, an analytics platform strategist at SAS, a global software and services firm that focuses upon turning data into intelligence for clients.

It's even more amazing, perhaps, that our existence is quietly being transformed by a technology that many of us barely understand, if at all: something so complex that even scientists have a tricky time explaining it.

"AI is a family of technologies that perform tasks that are thought to require intelligence if performed by humans," explains Vasant Honavar, a professor and director of the Artificial Intelligence Research Laboratory at Penn State University. "I say 'thought,' because nobody is really quite sure what intelligence is."

Honavar describes two main categories of intelligence. There's narrow intelligence, which is achieving competence in a narrowly defined domain, such as analyzing images from X-rays and MRI scans in radiology. General intelligence, in contrast, is a more human-like ability to learn about anything and to talk about it. "A machine might be good at some diagnoses in radiology, but if you ask it about baseball, it would be clueless," Honavar explains. Humans' intellectual versatility "is still beyond the reach of AI at this point."

According to Honavar, there are two key pieces to AI. One of them is the engineering part, that is, building tools that utilize intelligence in some way. The other is the science of intelligence, or rather, how to enable a machine to come up with a result comparable to what a human brain would come up with, even if the machine achieves it through a very different process. To use an analogy, "birds fly and airplanes fly, but they fly in completely different ways," Honavar says. "Even so, they both make use of aerodynamics and physics. In the same way, artificial intelligence is based upon the notion that there are general principles about how intelligent systems behave."

AI is "basically the results of our attempting to understand and emulate the way that the brain works and the application of this to giving brain-like functions to otherwise autonomous systems (e.g., drones, robots and agents)," Kurt Cagle, a writer, data scientist and futurist who's the founder of consulting firm Semantical, writes in an email. He's also editor of The Cagle Report, a daily information technology newsletter.

And while humans don't really think like computers, which utilize circuits, semi-conductors and magnetic media instead of biological cells to store information, there are some intriguing parallels. "One thing we're beginning to discover is that graph networks are really interesting when you start talking about billions of nodes, and the brain is essentially a graph network, albeit one where you can control the strengths of processes by varying the resistance of neurons before a capacitive spark fires," Cagle explains. "A single neuron by itself gives you a very limited amount of information, but fire enough neurons of varying strengths together, and you end up with a pattern that gets fired only in response to certain kinds of stimuli, typically modulated electrical signals through the DSPs [that is digital signal processing] that we call our retina and cochlea."

"Most applications of AI have been in domains with large amounts of data," Honavar says. To use the radiology example again, the existence of large databases of X-rays and MRI scans that have been evaluated by human radiologists, makes it possible to train a machine to emulate that activity.

AI works by combining large amounts of data with intelligent algorithms, series of instructions that allow the software to learn from patterns and features of the data, as this SAS primer on artificial intelligence explains.

In simulating the way a brain works, AI utilizes a bunch of different subfields, as the SAS primer notes.

The concept of AI dates back to the 1940s, and the term "artificial intelligence" was introduced at a 1956 conference at Dartmouth College. Over the next two decades, researchers developed programs that played games and did simple pattern recognition and machine learning. Cornell University scientist Frank Rosenblatt developed the Perceptron, the first artificial neural network, which ran on a 5-ton (4.5-metric ton), room-sized IBM computer that was fed punch cards.

But it wasn't until the mid-1980s that a second wave of more complex, multilayer neural networks were developed to tackle higher-level tasks, according to Honavar. In the early 1990s, another breakthrough enabled AI to generalize beyond the training experience.

In the 1990s and 2000s, other technological innovations, including the web and increasingly powerful computers, helped accelerate the development of AI. "With the advent of the web, large amounts of data became available in digital form," Honavar says. "Genome sequencing and other projects started generating massive amounts of data, and advances in computing made it possible to store and access this data. We could train the machines to do more complex tasks. You couldn't have had a deep learning model 30 years ago, because you didn't have the data and the computing power."

AI is different from, but related to, robotics, in which machines sense their environment, perform calculations and do physical tasks either by themselves or under the direction of people, from factory work and cooking to landing on other planets. Honavar says that the two fields intersect in many ways.

"You can imagine robotics without much intelligence, purely mechanical devices like automated looms," Honavar says. "There are examples of robots that are not intelligent in a significant way." Conversely, there's robotics where intelligence is an integral part, such as guiding an autonomous vehicle around streets full of human-driven cars and pedestrians.

"It's a reasonable argument that to realize general intelligence, you would need robotics to some degree, because interaction with the world, to some degree, is an important part of intelligence," according to Honavar. "To understand what it means to throw a ball, you have to be able to throw a ball."

AI quietly has become so ubiquitous that it's already found in many consumer products.

"A huge number of devices that fall within the Internet of Things (IoT) space readily use some kind of self-reinforcing AI, albeit very specialized AI," Cagle says. "Cruise control was an early AI and is far more sophisticated when it works than most people realize. Noise dampening headphones. Anything that has a speech recognition capability, such as most contemporary television remotes. Social media filters. Spam filters. If you expand AI to cover machine learning, this would also include spell checkers, text-recommendation systems, really any recommendation system, washers and dryers, microwaves, dishwashers, really most home electronics produced after 2017, speakers, televisions, anti-lock braking systems, any electric vehicle, modern CCTV cameras. Most games use AI networks at many different levels."

AI already can outperform humans in some narrow domains, just as "airplanes can fly longer distances, and carry more people than a bird could," Honavar says. AI, for example, is capable of processing millions of social media network interactions and gaining insights that can influence users' behavior, an ability that the AI expert worries may have "not so good consequences."

It's particularly good at making sense of massive amounts of information that would overwhelm a human brain. That capability enables internet companies, for example, to analyze the mountains of data that they collect about users and employ the insights in various ways to influence our behavior.

But AI hasn't made as much progress so far in replicating human creativity, Honavar notes, though the technology already is being utilized to compose music and write news articles based on data from financial reports and election returns.

Given AI's potential to do tasks that used to require humans, it's easy to fear that its spread could put most of us out of work. But some experts envision that while the combination of AI and robotics could eliminate some positions, it will create even more new jobs for tech-savvy workers.

"Those most at risk are those doing routine and repetitive tasks in retail, finance and manufacturing," Darrell West, a vice president and founding director of the Center for Technology Innovation at the Brookings Institution, a Washington-based public policy organization, explains in an email. "But white-collar jobs in health care will also be affected and there will be an increase in job churn with people moving more frequently from job to job. New jobs will be created but many people will not have the skills needed for those positions. So the risk is a job mismatch that leaves people behind in the transition to a digital economy. Countries will have to invest more money in job retraining and workforce development as technology spreads. There will need to be lifelong learning so that people regularly can upgrade their job skills."

And instead of replacing human workers, AI may be used to enhance their intellectual capabilities. Inventor and futurist Ray Kurzweil has predicted that by the 2030s, AI will have achieved human levels of intelligence, and that it will be possible to have AI that goes inside the human brain to boost memory, turning users into human-machine hybrids. As Kurzweil has described it, "We're going to expand our minds and exemplify these artistic qualities that we value."

More here:

How Artificial Intelligence Is Totally Changing Everything ...

Artificial Intelligence And Automation Top Focus For Venture Capitalists – Forbes

Artificial intelligence and automation have been two hot areas of investment, especially over the past decade. As more of the worldwide workforce shifts to remote work, the need for automation, technology, and tools continues to grow. As such, it's no surprise that automation and intelligent systems continue to be of significant interest to venture capitalists who are investing in growing firms focused in these areas. The AI Today podcast had the chance to talk to Oliver Mitchell, a Founding Partner of Autonomy Ventures. (Disclosure: I'm a co-host of the AI Today podcast.)

Oliver Mitchell

For over 20 years, Oliver has worked on technology startups, and for the past decade he has focused on investing in automation. He spoke with us about the big changes that automation is bringing to the world and the exciting possibilities it still has to offer. He is a partner at Autonomy Ventures, an early-stage venture capital firm that looks to invest in automation and robotics.

The best AI solutions are the ones that solve industry-specific problems

Despite the fact that artificial intelligence has been around for decades, there is still no commonly accepted definition. Because of this, artificial intelligence means something different to every industry, and this is reflected in the sort of investments that Oliver and other VCs are seeing. While some technology firms may be focused on how artificial intelligence can better help them manage funds, other companies might be more interested in how AI can supplement their human workforce. The variety of tasks that artificial intelligence can help with is something investors need to consider when making their investments.

Out of all of the investments that Oliver has made over the years, the best ones have been with companies that focus on solving specific problems in an industry. In particular, applications of robotics to manufacturing, and specifically the concept of collaborative robots, are appealing. One example is a robotic arm with AI onboard and a suite of tools that let anyone operate it without technical training. With this arm, companies don't need to spend hundreds of thousands of dollars to hire specialists to train their robotic arms. Rather, the arm can be taught how to carry out tasks through movement, using an iPad or similar device. Arms like this fall under the category of collaborative robots, or cobots for short, which are able to work side by side with humans.

About half of the Autonomy Ventures portfolio companies are based out of Israel. One portfolio company is Aurora Labs, which focuses on providing a software platform for autonomous and connected cars to monitor their onboard software. Aurora Labs calls its product self-healing software for connected cars. The average car needs to go to a dealership to receive any kind of firmware or software update if an issue is detected, because a technician needs to plug a device into the car's OBD-II port. Due to the limited power of the chips in most current cars, they aren't able to access the cloud; even cars with OnStar onboard have very limited connectivity. Aurora Labs' self-healing software allows cars to connect to the cloud so that they can receive updates over the air. While much of this solution isn't AI per se, the use of machine learning for more adaptive updates is one indication that AI is finding applications in a wide range of niches.

Keeping AI in check

Something important that Oliver addressed is how we view AI and what we expect of it. A lot of people have a science fiction perspective on artificial intelligence. He believes that we need to manage our expectations of AI because there are many tasks that AI still can't do that even a child can. One example Oliver uses is the ability to tie a shoe. While a 7-year-old has been able to tie shoes for years, robots still cannot tie a shoe. We need to be able to address everyday problems before we can start to move on to what we see in movies.

Oliver is also concerned about issues of bias in AI and machine learning, especially as systems become more autonomous. Software around the world is built to help humans, but many of us are quick to turn to technology without evaluating its proper use. Oliver cites many examples, including an AI-based criminal justice system that was biased in its assessment of an offender's likelihood of reoffending. Once the software was deployed in multiple states, it was found to rate people of color as more likely to reoffend.

Oliver also points out bias in a type of technology used in emergency departments around the world to analyze patients. The software looks at a patient's chief complaint, symptoms, and medical history along with demographics and gives the medical staff a recommendation about what to do. However, this software has been found not to take into account the human aspect of medical care: it makes decisions based on the perceived likelihood of effective treatment, not on saving every life possible.

Regardless of the challenges and limitations of AI, investors and entrepreneurs see significant potential for both simple automation and more complicated intelligent and autonomous systems. Companies are continuing to push the boundary of what's possible, especially in our increasingly remote and virtual world. It should be no surprise, then, that VCs will continue to look to invest in these types of companies as AI becomes part of our everyday lives.

See original here:

Artificial Intelligence And Automation Top Focus For Venture Capitalists - Forbes

Benefits & Risks of Artificial Intelligence – Future of …

Many AI researchers roll their eyes when seeing this headline: "Stephen Hawking warns that rise of robots may be disastrous for mankind." And many have lost count of how many similar articles they've seen. Typically, these articles are accompanied by an evil-looking robot carrying a weapon, and they suggest we should worry about robots rising up and killing us because they've become conscious and/or evil. On a lighter note, such articles are actually rather impressive, because they succinctly summarize the scenario that AI researchers don't worry about. That scenario combines as many as three separate misconceptions: concern about consciousness, evil, and robots.

If you drive down the road, you have a subjective experience of colors, sounds, etc. But does a self-driving car have a subjective experience? Does it feel like anything at all to be a self-driving car? Although this mystery of consciousness is interesting in its own right, it's irrelevant to AI risk. If you get struck by a driverless car, it makes no difference to you whether it subjectively feels conscious. In the same way, what will affect us humans is what superintelligent AI does, not how it subjectively feels.

The fear of machines turning evil is another red herring. The real worry isn't malevolence, but competence. A superintelligent AI is by definition very good at attaining its goals, whatever they may be, so we need to ensure that its goals are aligned with ours. Humans don't generally hate ants, but we're more intelligent than they are, so if we want to build a hydroelectric dam and there's an anthill there, too bad for the ants. The beneficial-AI movement wants to avoid placing humanity in the position of those ants.

The consciousness misconception is related to the myth that machines can't have goals. Machines can obviously have goals in the narrow sense of exhibiting goal-oriented behavior: the behavior of a heat-seeking missile is most economically explained as a goal to hit a target. If you feel threatened by a machine whose goals are misaligned with yours, then it is precisely its goals in this narrow sense that trouble you, not whether the machine is conscious and experiences a sense of purpose. If that heat-seeking missile were chasing you, you probably wouldn't exclaim: "I'm not worried, because machines can't have goals!"

I sympathize with Rodney Brooks and other robotics pioneers who feel unfairly demonized by scaremongering tabloids, because some journalists seem obsessively fixated on robots and adorn many of their articles with evil-looking metal monsters with red shiny eyes. In fact, the main concern of the beneficial-AI movement isn't with robots but with intelligence itself: specifically, intelligence whose goals are misaligned with ours. To cause us trouble, such misaligned superhuman intelligence needs no robotic body, merely an internet connection; this may enable outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Even if building robots were physically impossible, a super-intelligent and super-wealthy AI could easily pay or manipulate many humans to unwittingly do its bidding.

The robot misconception is related to the myth that machines can't control humans. Intelligence enables control: humans control tigers not because we are stronger, but because we are smarter. This means that if we cede our position as smartest on our planet, it's possible that we might also cede control.

Continued here:

Benefits & Risks of Artificial Intelligence - Future of ...

Artificial intelligence can take banks to the next level – TechRepublic

Banking has the potential to improve its customer service, loan applications, and billing with the help of AI and natural language processing.

When I was an executive in banking, we struggled with how to transform tellers at our branches into customer service specialists instead of the "order takers" that they were. This struggle with customer service is ongoing for financial institutions. But it's an area in which artificial intelligence (AI), and its ability to work with unstructured data like voice and images, can help.

"There are two things that artificial intelligence does really well," said Ameek Singh, vice president of IBM's Watson applications and solutions. "It's really good with analyzing images and it also performs uniquely well with natural language processing (NLP)."

AI's ability to process natural language helps behind the scenes as banks interact with their customers. In call center banking transactions, language analysis can detect emotional nuances from the speaker and understand linguistic differences, such as the difference between American and British English. AI works with other languages as well, understanding the emotional nuances and slang terms that different groups use.

Collectively, real-time feedback from AI aids bank customer service reps in call centers, because if they know the sentiments of their customers, it's easier for them to relate to customers and to understand customer concerns that might not have been expressed directly.
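
To make the idea of extracting sentiment from language more concrete, here is a deliberately crude, lexicon-based scorer. It only illustrates turning a caller's words into a signal an agent could see in real time; it is not a representation of IBM Watson's NLP models, and the word lists are invented:

```python
# Toy sentiment scorer: count positive and negative cue words in a caller's utterance.
# Production call-center NLP is far more sophisticated; this only illustrates the idea.
POSITIVE = {"thanks", "great", "helpful", "resolved"}
NEGATIVE = {"frustrated", "angry", "waiting", "problem", "cancel"}

def sentiment_score(utterance: str) -> int:
    words = utterance.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment_score("I have been waiting an hour and I am frustrated"))  # -2 (negative)
print(sentiment_score("thanks that was really helpful"))                   #  2 (positive)
```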

"We've developed AI models for natural language processing in a multitude of languages, and the AI continues to learn and refine these linguistics models with the help of machine learning (ML)," Singh said.

The result is higher quality NLP that enables better relationships between customers and the call center front line employees who are trying to help them.

But the use of AI in banking doesn't stop there. Singh explained how AI engines like Watson were also helping on the loans and billing side.

"The (mortgage) loan underwriter looks at items like pay stubs and credit card statements. He or she might even make a billing inquiry," Singh said.

Without AI, these document reviews are time-consuming and manual. AI changes that because the AI can "read" the document. It understands what the salient information is and also where irrelevant items, like a company logo, are likely to be located. The AI extracts the relevant information, places the information into a loan evaluation model, and can make a loan recommendation that the underwriter reviews, with the underwriter making the final decision.
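
A heavily simplified sketch of that flow appears below. The field pattern, the debt-to-income threshold, and the function names are all hypothetical; real underwriting systems extract far richer information from scanned documents and use trained models rather than a single rule:

```python
import re

# Hypothetical text, as if already extracted (e.g., by OCR) from a pay stub.
pay_stub_text = "ACME Corp Pay Stub  Employee: J. Doe  Gross Pay: $4,250.00  Period: Monthly"

def extract_monthly_income(text: str) -> float:
    # Pull out the salient figure and ignore irrelevant items such as logo or header text.
    match = re.search(r"Gross Pay:\s*\$([\d,]+\.\d{2})", text)
    return float(match.group(1).replace(",", "")) if match else 0.0

def loan_recommendation(monthly_income: float, monthly_debt: float) -> str:
    # Toy evaluation rule: a debt-to-income ratio under 36% yields a positive recommendation.
    # The human underwriter still reviews the output and makes the final decision.
    dti = monthly_debt / monthly_income if monthly_income else 1.0
    return "recommend approval" if dti < 0.36 else "refer to underwriter"

income = extract_monthly_income(pay_stub_text)
print(income, loan_recommendation(income, monthly_debt=1200.0))  # 4250.0 recommend approval
```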

Of course, banks have had software for years that performs loan evaluations. However, they haven't had an easy way to process the foundational documents, such as bills and pay stubs, that go into the loan decisioning process, a capability that AI can now provide.

The best news of all for financial institutions is that AI modeling and execution don't exclude them from the process.

"The AI is designed to be informed by bank subject matter experts so it can 'learn' the business rules that the bank wants to apply," Singh said. "The benefit is that real subject matter experts get involvednot just the data scientists."

Singh advises banks looking at expanding their use of AI to carefully select their business use cases, without trying to do too much at once.

"Start small instead of using a 'big bang' approach," he said. "In this way, you can continue to refine your AI model and gain success with it that immediately benefits the business."

Read this article:

Artificial intelligence can take banks to the next level - TechRepublic

How Artificial Intelligence, IoT And Big Data Can Save The Bees – Forbes

Modern agriculture depends on bees. In fact, our entire ecosystem, including the food we eat and the air we breathe, counts on pollinators. But the pollinator population is declining, according to Sabiha Rumani Malik, the founder and executive president of The World Bee Project. In an intriguing collaboration with Oracle that puts artificial intelligence, the internet of things and big data to work on the problem, they hope to reverse the trend.

Why is the global bee population in decline?

According to an Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) report, pollinators are in danger. There are many reasons pollinators are being driven to extinction, including habitat destruction, urbanization, use of pesticides, pollution, fragmentation of natural flowering habitats, predators and parasites, and changing climate. However, until recently, with The World Bee Project's work, there hasn't been a global initiative to study bee populations or to research and attack the issue from a global perspective.

Why is it important to save the bees?

Did you know that bees, along with other pollinators such as butterflies, are the reason plants can produce seeds and reproduce? According to the United States Department of Agriculture (USDA), 35 percent of food crops and three-quarters of the world's flowering plants depend on bees and other pollinators. Most of the beehives in the United States are shipped to California each year just to ensure the almond crop gets pollinated. Bees help to pollinate 90% of the leading global crop types, including fruit trees, coffee, vanilla, and cotton plants. And, of course, healthy plants are critical in replenishing our oxygen supply thanks to photosynthesis.

If the pollinators aren't alive or healthy enough to do their job, our global crop production, food security, biodiversity, and clean air are in peril. Honeybees are the world's most important pollinators, and as much as 40 percent of the global nutrient supply for humans depends on pollinators. Presently, approximately 2 billion people suffer from micronutrient deficiencies.

"Our lives are intrinsically connected to the bees," Malik said.

Partnership to monitor global honeybee population

The World Bee Project is the first private, globally coordinated organization devoted to monitoring the global honey bee population. Since 2014, the organization has brought together scientists to study the global problem of bee decline and to provide insight about the issue to farmers, governments, beekeepers, and other vested organizations.

In 2018, Oracle Cloud technology was brought into the work to better understand the worldwide decline in bee populations, and The World Bee Project Hive Network began.

How technology can save the bees

How could technology be used to save the bees? Technology can be leveraged here much as it is in other innovative projects. It starts with internet-of-things sensors, including microphones and cameras that can spot invasive predators and collect data from the bees and hives. Human ingenuity and innovations such as wireless technologies, robotics, and computer vision help deliver new insights and solutions to the issue. One of the key metrics of a hive's health is the sounds it produces, so a critical part of the data-gathering effort is to "listen" to the hives to determine colony health, strength, and behavior, as well as to collect temperature, humidity, apiary weather conditions, and hive weight.

The data is then fed to the Oracle Cloud, where artificial intelligence (AI) algorithms get to work to analyze the data. The algorithms will look for patterns and try to predict behaviors of the hive, such as if it's preparing to swarm. The insights are then shared with beekeepers and conservationists so they can step in to try to protect the hives. Since it's a globally connected network, the algorithms can also learn more about differences in bee colonies in different areas of the world. Students, researchers, and even interested citizens can also interact with the data, work with it through the hive network's open API, and discuss it via chatbot.
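
The article does not describe the algorithms in detail, so the sketch below is only a bare-bones stand-in for that kind of pattern detection: it flags a hive-weight reading that deviates sharply from the recent average, using entirely invented numbers and a simple z-score rather than anything from the Oracle/World Bee Project pipeline:

```python
import statistics

# Invented hourly hive-weight readings (kg); a sudden drop can accompany swarming or robbing.
readings = [42.0, 42.1, 41.9, 42.0, 42.2, 42.1, 41.8, 38.5]

def is_anomalous(history, latest, threshold=3.0):
    # Flag the latest reading if it lies more than `threshold` standard deviations
    # from the recent mean -- a crude stand-in for the pattern analysis a real
    # pipeline would run over sound, weight, temperature and humidity together.
    mean = statistics.mean(history)
    spread = statistics.stdev(history) or 1e-9
    return abs(latest - mean) / spread > threshold

print(is_anomalous(readings[:-1], readings[-1]))  # True: the drop to 38.5 kg stands out
```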

For example, the sound and vision sensors can detect hornets, which can be a threat to bee populations. The sound of a hornet's wing flap is different from that of bees, and the AI can pick this up automatically and alert beekeepers to the hornet threat.

Technology is making it easier for The World Bee Project to share real-time information and gather resources to help save the world's bee population. In fact, Malik shared, "Our partnership with Oracle Cloud is an extraordinary marriage between nature and technology." Technology is helping to multiply the impact of The World Bee Project Hive Network across the world and makes action to save the bees quicker and more effective.

Here you can see a short video showing the connected beehive in augmented reality during my interview with Sabiha Rumani Malik - pretty cool:

Visit link:

How Artificial Intelligence, IoT And Big Data Can Save The Bees - Forbes

Health care of tomorrow, today: How artificial intelligence is fighting the current, and future, COVID-19 pandemic | TheHill – The Hill

SARS-CoV-2 has upended modern health care, leaving health systems struggling to cope. Addressing a fast-moving and uncontrolled disease requires an equally efficient method of discovery, development and administration. Health care solutions driven by artificial intelligence (AI) and machine learning provide such an answer. AI-enabled health care is not the medicine of the future, nor does it mean robot doctors rolling room to room in hospitals treating patients. Instead of a hospital from some future Jetsons-like fantasy, AI is poised to make impactful and urgent contributions to the current health care ecosystem. Already, AI-based systems are helping to alleviate the strain on health care providers overwhelmed by a crushing patient load, accelerate diagnostic and reporting systems, and enable rapid development of new drugs and existing drug combinations that better match a patient's unique genetic profile and specific symptoms.

For the thousands of patients fighting for their lives against this deadly disease and the health care providers who incur a constant risk of infection, AI provides an accelerated route to understanding the biology of COVID-19. Leveraging AI to assist in prediction, correlation and reporting allows health care providers to make informed decisions quickly. With the current standard of PCR-based testing requiring up to 48 hours to return a result, New York-based Envisagenics has developed an AI platform that analyzes 1,000 patient samples in parallel in just two hours. Time saves lives, and the company hopes to release the platform for commercial use in the coming weeks.

AI-powered wearables, such as a smart shirt developed by Montreal-based Hexoskin to continuously measure biometrics including respiration effort, cardiac activity, and a host of other metrics, provide options for hospital staff to minimize exposure by limiting the required visits to infected patients. This real-time data provides an opportunity for remote monitoring and creates a unique dataset to inform our understanding of disease progression to fuel innovation and enable the creation of predictive metrics, alleviating strain on clinical staff. Hexoskin has already begun to assist hospitals in New York City with monitoring programs for their COVID-19 patients, and they are developing an AI/ML platform to better assess the risk profile of COVID-19 patients recovering at home. Such novel platforms would offer a chance for providers and researchers to get ahead of the disease and develop more effective treatment plans.

AI also accelerates discovery and enables efficient and effective interrogation of the necessary chemistry to address COVID-19. An increasing number of companies are leveraging AI/ML to identify new treatment paths, whether from a list of existing molecules or through de novo discovery. San Francisco-based Auransa is using AI to map the gene sequence of SARS-CoV-2 to its effect on the host to generate a short list of already approved drugs that have a high likelihood of alleviating symptoms of COVID-19. Similarly, UK-based Healx has set its AI platform to discover combination therapies, identifying multi-drug approaches to simultaneously treat different aspects of the disease pathology to improve patient outcomes. The company analyzed a library of 4,000 approved drugs to map eight million possible pairs and 10.5 billion triplets to generate combination therapy candidates. Preclinical testing will begin in May 2020.
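
As a quick back-of-the-envelope check on those numbers (my arithmetic, not Healx's method), the counts follow directly from the number of unordered combinations that can be drawn from a 4,000-drug library:

```python
from math import comb

drugs = 4000
pairs = comb(drugs, 2)     # unordered two-drug combinations
triplets = comb(drugs, 3)  # unordered three-drug combinations

print(f"{pairs:,}")     # 7,998,000 -- roughly the "eight million possible pairs" quoted
print(f"{triplets:,}")  # 10,658,668,000 -- on the order of the 10.5 billion triplets quoted
```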

Developers cannot always act alone - realizing the potential of AI often requires the resources of a collaboration to succeed. Generally, the best data sets and the most advanced algorithms do not exist within the same organization, and it is often the case that multiple data sources and algorithms need to be combined for maximum efficacy. Over the last month, we have seen the rise of several collaborations to encourage information sharing and hasten potential outcomes to patients.

Medopad, a UK-based AI developer, has partnered with Johns Hopkins University to mine existing datasets on COVID-19 and relevant respiratory diseases captured by the UK Biobank and similar databases to identify a biomarker associated with a higher risk for COVID-19. A biomarker database is essential for executing long-term population health measures, and can most effectively be generated by an AI system. In the U.S., over 500 leading companies and organizations, including Mayo Clinic, Amazon Web Services and Microsoft, have formed the COVID-19 Healthcare Coalition to assist in coordinating on all COVID-19-related matters. As part of this effort, LabCorp and HD1, among others, have come together to use AI to make testing and diagnostic data available to researchers to help build disease models, including predictions of future hotspots and at-risk populations. On the international stage, the recently launched COAI, a consortium of AI companies being assembled by French-US OWKIN, aims to increase collaborative research, accelerate the development of effective treatments, and share COVID-19 findings with the global medical and scientific community.

Leveraging the potential of AI and machine learning capabilities provides a potent tool to the global community in tackling the pandemic. AI presents novel ways to address old problems and opens doors to solving newly developing population health concerns. The work of our health care system, from the research scientists to the nurses and physicians, should be celebrated, and we should embrace the new tools which are already providing tremendous value. With the rapid deployment and integration of AI solutions into the COVID-19 response, the health care of tomorrow is already addressing the challenges we face today.

Brandon Allgood, PhD, is vice chair of the Alliance for Artificial Intelligence in Healthcare, a global advocacy organization dedicated to the discovery, development and delivery of better solutions to improve patient lives. Allgood is an SVP of DS&AI at Integral Health, a computationally driven biotechnology company in Boston.

See the article here:

Health care of tomorrow, today: How artificial intelligence is fighting the current, and future, COVID-19 pandemic | TheHill - The Hill

First meeting of the new CEPEJ Working Group on cyberjustice and artificial intelligence – Council of Europe

The new CEPEJ Working group on Cyberjustice and artificial intelligence (CEPEJ-GT-CYBERJUST) will hold a first meeting by videoconference on 27 April 2020.

The objective of the Working Group is to analyse and develop appropriate tools on new issues, such as the use of cyberjustice or artificial intelligence in judicial systems, in relation to the efficiency and quality of those systems.

At this meeting, an exchange of views will take place on the possible future work of the Working Group, which should be based on the themes contained in its mandate.

The CYBERJUST group will also hold a joint meeting at a later stage with the CEPEJ Working Group on Quality of Justice (CEPEJ-GT-QUAL) with a view to sharing tasks, in particular to follow up the implementation of the CEPEJ European Ethical Charter on the use of artificial intelligence in judicial systems and their environment and its toolbox and to ensure co-ordination.

Read the original:

First meeting of the new CEPEJ Working Group on cyberjustice and artificial intelligence - Council of Europe

When the coronavirus hit, California turned to artificial intelligence to help map the spread – 60 Minutes – CBS News

California was the first state to shut down in response to the COVID-19 pandemic. It also enlisted help from the tech sector, harnessing the computing power of artificial intelligence to help map the spread of the disease, Bill Whitaker reports. Whitaker's story will be broadcast on the next edition of 60 Minutes, Sunday, April 26 at 7 p.m. ET/PT on CBS.

One of the companies California turned to was a small Canadian start-up called BlueDot that uses anonymized cell phone data to determine if social distancing is working. Comparing location data from cell phone users over a recent 24-hour period to a week earlier in Los Angeles, BlueDot's algorithm maps where people are still gathering. It could be a hospital or it could be a problem. "We can see on a moment-by-moment basis, if necessary, whether or not our stay-at-home orders were working," says California Governor Gavin Newsom.

The data allows public health officials to predict which hospitals might face the greatest number of patients. "We are literally looking into the future and predicting in real time, based on constant update of information, where patterns are starting to occur," Newsom tells Whitaker. "So the gap between the words and people's actions is often anecdotal. But not with this technology."

California is just one client of BlueDot. The firm was among the first to warn of the outbreak in Wuhan on December 31. Public officials in ten Asian countries, airlines and hospitals were alerted to the potential danger of the virus by BlueDot.

BlueDot also uses anonymized global air ticket data to predict how an outbreak of infectious disease might spread. BlueDot founder Dr. Kamran Khan tells Whitaker, "We can analyze and visualize all this information across the globe in just a few seconds." The computing power of artificial intelligence lets BlueDot sort through billions of pieces of raw data, offering the critical speed needed to map a pandemic. "Our surveillance system that picked up the outbreak of Wuhan automatically talks to the system that is looking at how travelers might go to various airports around Wuhan," says Dr. Khan.

See original here:

When the coronavirus hit, California turned to artificial intelligence to help map the spread - 60 Minutes - CBS News

Artificial Intelligence in the Oil & Gas Industry, 2020-2025 – Upstream Operations to Witness Significant Growth – ResearchAndMarkets.com – Yahoo…

The "AI in Oil and Gas Market - Growth, Trends, and Forecast (2020-2025)" report has been added to ResearchAndMarkets.com's offering.

The AI in Oil and Gas market was valued at USD 2 billion in 2019 and is expected to reach USD 3.81 billion by 2025, at a CAGR of 10.96% over the forecast period 2020-2025. As the cost of IoT sensors declines, more major oil and gas organizations are expected to start integrating these sensors into their upstream, midstream, and downstream operations, along with AI-enabled predictive analytics.
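
For readers unfamiliar with the metric, CAGR is simply the constant annual growth rate that links a starting value to an ending value. The sketch below recomputes it from the figures above; the base year is an assumption, so it only approximately reproduces the report's 10.96%:

```python
# CAGR = (end_value / start_value) ** (1 / years) - 1
start, end = 2.0, 3.81    # USD billions, per the report
years = 6                 # assuming growth from the 2019 valuation through 2025

cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.2%}")      # ~11.3%, in the neighborhood of the reported 10.96% CAGR
```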

Oil and gas remains one of the most highly valued commodities in the energy sector. In recent years, there has been an increased focus on improving efficiency, and reducing downtime has been a priority for oil and gas companies as their profits have shrunk since 2014 due to fluctuating oil prices. However, as concerns over the environmental impact of energy production and consumption persist, oil and gas companies are actively seeking innovative approaches to achieve their business goals while reducing environmental impact.

In addition, the Oil and Gas Authority (OGA) is making use of AI in parallel ways. The United Kingdom's first oil and gas National Data Repository (NDR), launched in March 2019, uses AI to interpret data, which, the OGA anticipates, is likely to help discover new oil and gas prospects and permit more production from existing infrastructure.

The offshore oil and gas business uses AI and data science to make the complex data used for oil and gas exploration and production more accessible, which lets companies discover new exploration prospects or make more use of existing infrastructure. For instance, in January 2019, BP invested in Houston-based technology start-up Belmont Technology to bolster the company's AI capabilities, developing a cloud-based geoscience platform nicknamed Sandy.

However, high capital investments for the integration of AI technologies, along with a lack of skilled AI professionals, could hinder the growth of the market. A recent poll found that 56% of senior AI professionals considered the lack of additional, qualified AI workers to be the single biggest hurdle to achieving the necessary level of AI implementation across business operations.

Key Market Trends

Upstream Operations to Witness a Significant Growth

North America Expected to Hold a Significant Market Share

Competitive Landscape

The AI in the oil and gas market is highly competitive and consists of several major players. In terms of market share, a few of the major players currently dominate the market. These companies are continuously capitalizing on acquisitions in order to broaden, complement, and enhance their product and service offerings, add new customers and certified personnel, and help expand sales channels.

Recent Industry Developments

Key Topics Covered

1 INTRODUCTION

1.1 Study Assumptions and Market Definition

1.2 Scope of the Study

2 RESEARCH METHODOLOGY

3 EXECUTIVE SUMMARY

4 MARKET INSIGHTS

4.1 Market Overview

4.2 Industry Attractiveness - Porter's Five Forces Analysis

4.3 Technology Snapshot - By Application

4.3.1 Quality Control

4.3.2 Production Planning

4.3.3 Predictive Maintenance

4.3.4 Other Applications

5 MARKET DYNAMICS

5.1 Market Drivers

5.1.1 Increasing Focus to Easily Process Big Data

5.1.2 Rising Trend to Reduce Production Cost

5.2 Market Restraints

5.2.1 High Cost of Installation

5.2.2 Lack of Skilled Professionals across the Oil and Gas Industry

6 MARKET SEGMENTATION

6.1 By Operation

6.1.1 Upstream

6.1.2 Midstream

6.1.3 Downstream

6.2 By Service Type

6.2.1 Professional Services

6.2.2 Managed Services

6.3 Geography

6.3.1 North America

6.3.2 Europe

6.3.3 Asia-Pacific

6.3.4 Latin America

6.3.5 Middle East & Africa

7 COMPETITIVE LANDSCAPE

7.1 Company Profiles

7.1.1 Google LLC

7.1.2 IBM Corporation

7.1.3 FuGenX Technologies Pvt. Ltd.

7.1.4 Microsoft Corporation

7.1.5 Intel Corporation

7.1.6 Royal Dutch Shell PLC

7.1.7 PJSC Gazprom Neft

7.1.8 Huawei Technologies Co. Ltd.

7.1.9 NVIDIA Corp.

7.1.10 Infosys Ltd.

7.1.11 Neudax

8 INVESTMENT ANALYSIS

9 FUTURE OF THE MARKET

For more information about this report visit https://www.researchandmarkets.com/r/14dtcc

View source version on businesswire.com: https://www.businesswire.com/news/home/20200424005472/en/

Contacts

ResearchAndMarkets.com
Laura Wood, Senior Press Manager
press@researchandmarkets.com

For E.S.T Office Hours Call 1-917-300-0470
For U.S./CAN Toll Free Call 1-800-526-8630
For GMT Office Hours Call +353-1-416-8900

Read the original here:

Artificial Intelligence in the Oil & Gas Industry, 2020-2025 - Upstream Operations to Witness Significant Growth - ResearchAndMarkets.com - Yahoo...

Pre & Post COVID-19 Market Estimates-Artificial Intelligence (AI) Market in Retail Sector 2019-2023| Increased Efficiency of Operations to Boost…

LONDON--(BUSINESS WIRE)--The artificial intelligence (AI) market in the retail sector is expected to grow by USD 14.05 billion during 2019-2023. The report also covers the market impact and new opportunities created by the COVID-19 pandemic. The impact is expected to be significant in the first quarter but to gradually lessen in subsequent quarters, with a limited effect on full-year economic growth, according to the latest market research report by Technavio.

Companies operating in the retail sector are increasingly adopting AI solutions to improve efficiency and productivity of operations through real-time problem-solving. For instance, the integration of AI with inventory management helps retailers to effectively plan their inventories with respect to demand. AI also helps retailers to identify gaps in their online product offerings and deliver a personalized experience to their customers. Many such benefits offered by the integration of AI are crucial in driving the growth of the market.

As per Technavio, the increased applications in e-commerce will have a positive impact on the market and contribute to its growth significantly over the forecast period. This research report also analyzes other significant trends and market drivers that will influence market growth over 2019-2023.

Artificial Intelligence (AI) Market in Retail Sector: Increased Applications in E-commerce

E-commerce companies are increasingly integrating AI in various applications to gain a competitive advantage in the market. The adoption of AI-powered tools helps them to analyze the catalog in real time to serve customers with similar and relevant products. This improves both sales and customer satisfaction. E-commerce companies are also integrating AI with other areas, such as planning and procurement, production, supply chain management, in-store operations, and marketing, to improve overall efficiency. Therefore, the increasing range of AI applications in e-commerce is expected to boost the growth of the market during the forecast period.

"Bridging offline and online experiences and the increased availability of cloud-based applications will further boost market growth during the forecast period," says a senior analyst at Technavio.

Artificial Intelligence (AI) Market in Retail Sector: Segmentation Analysis

This market research report segments the artificial intelligence (AI) market in the retail sector by application (sales and marketing, in-store, planning, procurement, and production, and logistics management) and geographic landscape (North America, APAC, Europe, MEA, and South America).

North America led the artificial intelligence (AI) market in the retail sector in 2018, followed by APAC, Europe, MEA, and South America, respectively. During the forecast period, North America is expected to register the highest incremental growth due to factors such as the early adoption of AI, rising investments in R&D and start-ups, and increasing investments in technologies.

Some of the key topics covered in the report include:

Market Drivers

Market Challenges

Market Trends

Vendor Landscape

About Technavio

Technavio is a leading global technology research and advisory company. Its research and analysis focus on emerging market trends and provide actionable insights to help businesses identify market opportunities and develop effective strategies to optimize their market positions.

With over 500 specialized analysts, Technavio's report library consists of more than 17,000 reports and counting, covering 800 technologies and spanning 50 countries. Their client base consists of enterprises of all sizes, including more than 100 Fortune 500 companies. This growing client base relies on Technavio's comprehensive coverage, extensive research, and actionable market insights to identify opportunities in existing and potential markets and assess their competitive positions within changing market scenarios.

View post:

Pre & Post COVID-19 Market Estimates-Artificial Intelligence (AI) Market in Retail Sector 2019-2023| Increased Efficiency of Operations to Boost...

EUREKA Clusters Artificial Intelligence (AI) Call | News item – The Netherlands and You

News item | 21-04-2020 | 04:58

Singapore has joined the EUREKA Clusters Artificial Intelligence (AI) Call. Through this new initiative, Singapore and Dutch companies can receive support in the facilitation of and funding for joint innovation projects in the AI domain with entities from 14 other EUREKA countries. The 14 partner countries are Austria, Belgium, Canada, Denmark, Finland, Germany, Hungary, Luxembourg, Malta, Portugal, Spain, Sweden, South Korea and Turkey. The call will be open from 1 April to 15 June 2020, with funding decisions to be made by January 2021.

The EUREKA Clusters CELTIC-NEXT, EUROGIA, ITEA 3, and PENTA-EURIPIDES have perceived a common cross-domain interest in developing, adapting and utilising emerging Artificial Intelligence within and across their focus areas. These Clusters, together with a number of EUREKA Public Authorities, are now launching a Call for innovative projects in the AI domain. The aim of this Call is to boost the productivity & competitiveness of European industries through the adoption and use of AI systems and services.

The call for proposals is open to projects that apply AI to a large number of application areas, including but not limited to Agriculture, Circular Economy, Climate Response, Cybersecurity, eHealth, Electronic Component and Systems, ICT and applications, Industry 4.0, Low Carbon Energy, Safety, Transport and Smart Mobility, Smart Cities, Software Innovation, and Smart Engineering.

More information: https://eureka-clusters-ai.eu/

To find partners please check the online brokerage tool:https://eureka-clusters-ai.eu/brokerage-tool/

The Netherlands Enterprise Agency (RVO) will host a webinar on Tuesday 28th of April at 10am CEST for Dutch-based potential applicants or intermediaries; register here.

Enterprise Singapore will host a webinar on Monday 27 April at 4pm (SG time) for Singapore based potential applicants or intermediaries, register here.

Link:

EUREKA Clusters Artificial Intelligence (AI) Call | News item - The Netherlands and You

A guide to healthy skepticism of artificial intelligence and coronavirus – Brookings Institution

The COVID-19 outbreak has spurred considerable news coverage about the ways artificial intelligence (AI) can combat the pandemic's spread. Unfortunately, much of it has failed to be appropriately skeptical about the claims of AI's value. Like many tools, AI has a role to play, but its effect on the outbreak is probably small. While this may change in the future, technologies like data reporting, telemedicine, and conventional diagnostic tools are currently far more impactful than AI.

Still, various news articles have dramatized the role AI is playing in the pandemic by overstating what tasks it can perform, inflating its effectiveness and scale, neglecting the level of human involvement, and being careless in consideration of related risks. In fact, the COVID-19 AI hype has been diverse enough to cover the greatest hits of exaggerated claims around AI. And so, framed around examples from the COVID-19 outbreak, here are eight considerations for a skeptic's approach to AI claims.

No matter what the topic, AI is only helpful when applied judiciously by subject-matter experts: people with long-standing experience with the problem that they are trying to solve. Despite all the talk of algorithms and big data, deciding what to predict and how to frame those predictions is frequently the most challenging aspect of applying AI. Effectively predicting a badly defined problem is worse than doing nothing at all. Likewise, it always requires subject-matter expertise to know if models will continue to work in the future, be accurate on different populations, and enable meaningful interventions.

In the case of predicting the spread of COVID-19, look to the epidemiologists, who have been using statistical models to examine pandemics for a long time. Simple mathematical models of smallpox mortality date all the way back to 1766, and modern mathematical epidemiology started in the early 1900s. The field has developed extensive knowledge of its particular problems, such as how to consider community factors in the rate of disease transmission, that most computer scientists, statisticians, and machine learning engineers will not have.

It is certainly the case that some of the epidemiological models employ AI. However, this should not be confused with AI predicting the spread of COVID-19 on its own. In contrast to AI models that only learn patterns from historical data, epidemiologists are building statistical models that explicitly incorporate a century of scientific discovery. These approaches are very, very different. Journalists who breathlessly cover "the AI that predicted the coronavirus" and the quants on Twitter creating their first-ever models of pandemics should take heed: There is no value in AI without subject-matter expertise.

The set of algorithms that conquered Go, a strategy board game, and Jeopardy! have accomplished impressive feats, but they are still just (very complex) pattern recognition. To learn how to do anything, AI needs tons of prior data with known outcomes. For instance, this might be the database of historical Jeopardy! questions, as well as the correct answers. Alternatively, a comprehensive computational simulation can be used to train the model, as is the case for Go and chess. Without one of these two approaches, AI cannot do much of anything. This explains why AI alone can't predict the spread of new pandemics: There is no database of prior COVID-19 outbreaks (as there is for the flu).

So, in taking a skeptic's approach to AI, it is critical to consider whether a company spent the time and money to build an extensive dataset to effectively learn the task in question. Sadly, not everyone is taking the skeptical path. VentureBeat has regurgitated claims from Baidu that AI can be used with infrared thermal imaging to see the fever that is a symptom of COVID-19. Athena Security, which sells video analysis software, has also claimed it adapted its AI system to detect fever from thermal imagery data. Vice, Fast Company, and Forbes rewarded the company's claims, which included a fake software demonstration, with free press.

To even attempt this, companies would need to collect extensive thermal imaging data from people while simultaneously taking their temperature with a conventional thermometer. In addition to attaining a sample diverse in age, gender, size, and other factors, this would also require that many of these people actually have fevers, the outcome they are trying to predict. It stretches credibility that, amid a global pandemic, companies are collecting data from significant populations of fevered persons. While there are other potential ways to attain pre-existing datasets, questioning the data sources is always a meaningful way to assess the viability of an AI system.

The company Alibaba claims it can use AI on CT imagery to diagnose COVID-19, and now Bloomberg is reporting that the company is offering this diagnostic software to European countries for free. There is some appeal to the idea. Currently, COVID-19 diagnosis is done through a process called polymerase chain reaction (PCR), which requires specialized equipment. Including shipping time, it can easily take several days, whereas Alibaba says its model is much faster and is 96% accurate.

However, it is not clear that this accuracy number is trustworthy. A poorly kept secret of AI practitioners is that 96% accuracy is suspiciously high for any machine learning problem. If not carefully managed, an AI algorithm will go to extraordinary lengths to find patterns in data that are associated with the outcome it is trying to predict. However, these patterns may be totally nonsensical and only appear to work during development. In fact, an inflated accuracy number can actually be an important sign that an AI model is not going to be effective out in the world. That Alibaba claims its model works that well without caveat or self-criticism is suspicious on its face.

In addition, accuracy alone does not indicate enough to evaluate the quality of predictions. Imagine if 90% of the people in the training data were healthy, and the remaining 10% had COVID-19. If the model was correctly predicting all of the healthy people, a 96% accuracy could still be true, but the model would still be missing 40% of the infected people. This is why it's important to also know the model's sensitivity, which is the percent of correct predictions for individuals who have COVID-19 (rather than for everyone). This is especially important when one type of mistaken prediction is worse than the other, which is the case now. It is far worse to mistakenly suggest that a person with COVID-19 is not sick (which might allow them to continue infecting others) than it is to suggest a healthy person has COVID-19.
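
The arithmetic behind that example is easy to reproduce. The sketch below uses the same hypothetical split (90% healthy, 10% infected) and assumes a model that labels every healthy person correctly but catches only 60 of the 100 infected people:

```python
# Hypothetical cohort mirroring the example above: 900 healthy people, 100 with COVID-19.
total, healthy, infected = 1000, 900, 100

true_negatives = 900                          # every healthy person predicted correctly
true_positives = 60                           # only 60 of the infected people detected
false_negatives = infected - true_positives   # 40 infected people missed

accuracy = (true_negatives + true_positives) / total
sensitivity = true_positives / infected

print(f"accuracy:    {accuracy:.0%}")    # 96% -- looks impressive on its own
print(f"sensitivity: {sensitivity:.0%}") # 60% -- 40% of infected cases are missed
```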

Broadly, this is a task that seems like it could be done by AI, and it might be. Emerging research suggests that there is promise in this approach, but the debate is unsettled. For now, the American College of Radiology says that the findings on chest imaging in COVID-19 are not specific and overlap with other infections, and that it should not be used as a first-line test to diagnose COVID-19. Until stronger evidence is presented and AI models are externally validated, medical providers should not consider changing their diagnostic workflows, especially not during a pandemic.

The circumstances in which an AI system is deployed can also have huge implications for how valuable it really is. When AI models leave development and start making real-world predictions, they nearly always degrade in performance. In evaluating CT scans, a model that can differentiate between healthy people and those with COVID-19 might start to fail when it encounters patients who are sick with the regular flu (and it is still flu season in the United States, after all). A drop of 10% accuracy or more during deployment would not be unusual.

In a recent paper about the diagnosis of malignant moles with AI, researchers noticed that their models had learned that rulers were frequently present in images of moles known to be malignant. So, of course, the model learned that images without rulers were more likely to be benign. This is a learning pattern that leads to the appearance of high accuracy during model development, but it causes a steep drop in performance during the actual application in a health-care setting. This is why independent validation is absolutely essential before using new and high-impact AI systems.

This should engender even more skepticism of claims that AI can be used to measure body temperature. Even if a company did invest in creating this dataset, as previously discussed, reality is far more complicated than a lab. While measuring core temperature from thermal body measurements is imperfect even in lab conditions, environmental factors make the problem much harder. The approach requires an infrared camera to get a clear and precise view of the inner face, and it is affected by humidity and the ambient temperature of the target. While it is becoming more effective, the Centers for Disease Control and Prevention still maintain that thermal imaging cannot be used on its own; a second confirmatory test with an accurate thermometer is required.

In high-stakes applications of AI, it typically requires a prediction that isn't just accurate, but also one that meaningfully enables an intervention by a human. This means sufficient trust in the AI system is necessary to take action, which could mean prioritizing health care based on the CT scans or allocating emergency funding to areas where modeling shows COVID-19 spread.

With thermal imaging for fever-detection, an intervention might imply using these systems to block entry into airports, supermarkets, pharmacies, and public spaces. But evidence shows that as many as 90% of people flagged by thermal imaging can be false positives. In an environment where febrile people know that they are supposed to stay home, this ratio could be much higher. So, while preventing people with fever (and potentially COVID-19) from enabling community transmission is a meaningful goal, there must be a willingness to establish checkpoints and a confirmatory test, or risk constraining significant chunks of the population.
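
That false-positive ratio is largely a consequence of base rates. The sketch below uses invented screening numbers (1% of screened people actually febrile, a scanner that flags 90% of them but also flags 8% of everyone else) purely to show how low prevalence makes most flags false positives:

```python
# All three parameters are illustrative assumptions, not measured scanner performance.
prevalence = 0.01           # fraction of screened people who actually have a fever
sensitivity = 0.90          # fraction of febrile people the scanner flags
false_positive_rate = 0.08  # fraction of non-febrile people the scanner also flags

flagged_true = prevalence * sensitivity
flagged_false = (1 - prevalence) * false_positive_rate

share_false = flagged_false / (flagged_true + flagged_false)
print(f"{share_false:.0%} of flagged people are false positives")  # ~90%
```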

This should be a constant consideration for implementing AI systems, especially those used in governance. For instance, the AI fraud-detection systems used by the IRS and the Centers for Medicare and Medicaid Services do not determine wrongdoing on their own; rather, they prioritize returns and claims for auditing by investigators. Similarly, the celebrated AI model that identifies Chicago homes with lead paint does not itself make the final call, but instead flags the residence for lead paint inspectors.

Wired ran a piece in January titled "An AI Epidemiologist Sent the First Warnings of the Wuhan Virus" about a warning issued on Dec. 31 by the infectious disease surveillance company BlueDot. One blog post even said the company predicted the outbreak before it happened. However, this isn't really true. There is reporting that suggests Chinese officials knew about the coronavirus from lab testing as early as Dec. 26. Further, doctors in Wuhan were spreading concerns online (despite Chinese government censorship), and the Program for Monitoring Emerging Diseases, run by human volunteers, put out a notification on Dec. 30.

That said, the approaches taken by BlueDot and similar endeavors, like HealthMap at Boston Children's Hospital, aren't unreasonable. Both teams are a mix of data scientists and epidemiologists, and they look across health-care analyses and news articles around the world and in many languages in order to find potential new infectious disease outbreaks. This is a plausible use case for machine learning and natural language processing, and it is a useful tool to assist human observers. So the skepticism, in this case, isn't about the feasibility of the application, but rather about the specific type of value it brings.

AI is unlikely to build the contextual understanding to distinguish between a new but manageable outbreak and an emerging pandemic of global proportions.

Even as these systems improve, AI is unlikely to build the contextual understanding needed to distinguish between a new but manageable outbreak and an emerging pandemic of global proportions. AI can hardly be blamed: predicting rare events is just very hard, and AI's reliance on historical data does it no favors here. However, AI does offer quite a bit of value at the opposite end of the spectrum, in providing minute detail.

For example, just last week, California Gov. Gavin Newsom explicitly praised BlueDot's work to model the spread of the coronavirus to specific zip codes, incorporating flight-pattern data. This enables relatively precise provisioning of funding, supplies, and medical staff based on the level of exposure in each zip code. It also reveals one of the great strengths of AI: its ability to quickly make individualized predictions that would be far harder to produce manually. Of course, individualized predictions require individualized data, which can lead to unintended consequences.

AI implementations tend to have troubling second-order consequences outside of their exact purview. For instance, consolidation of market power, insecure data accumulation, and surveillance concerns are very common byproducts of AI use. In the case of AI for fighting COVID-19, the surveillance issues are pervasive. In South Korea, the neighbors of confirmed COVID-19 patients were given details of that person's travel and commute history. Taiwan, which in many ways had a proactive response to the coronavirus, used cell phone data to monitor individuals who had been assigned to stay in their homes. Israel and Italy are moving in the same direction. Of exceptional concern is the social control technology deployed in China, which nebulously uses AI to individually approve or deny access to public space.

Government action that curtails civil liberties during an emergency (and likely afterwards) is only part of the problem. The incentives that markets create can also lead to a long-term undermining of privacy. At this moment, Clearview AI and Palantir are among the companies pitching mass-scale surveillance tools to the federal government. This is the same Clearview AI that scraped the web to make an enormous (and unethical) database of faces, and it did so in reaction to an existing demand in police departments for identifying suspects with AI-driven facial recognition. If governments and companies continue to signal that they will use invasive systems, ambitious and unscrupulous start-ups will find inventive new ways to collect more data than ever before to meet that demand.

In new approaches to using AI in high-stakes circumstances, bias should be a serious concern. Bias in AI models results in skewed estimates across different subgroups, such as women, racial minorities, or people with disabilities. In turn, this frequently leads to discriminatory outcomes, as AI models are often seen as objective and neutral.

While investigative reporting and scientific research has raised awareness about many instances of AI bias, it is important to realize that AI bias is more systemic than anecdotal. An informed AI skeptic should hold the default assumption that AI models are biased, unless proven otherwise.

An informed AI skeptic should hold the default assumption that AI models are biased, unless proven otherwise.

For example, a preprint paper suggests it is possible to use biomarkers to predict the mortality risk of Wuhan COVID-19 patients. This might then be used to prioritize care for those most at risk, a noble goal. However, there are myriad sources of potential bias in this type of prediction. Biological associations between race, gender, age, and these biomarkers could lead to skewed estimates that don't represent true mortality risk. Unmeasured behavioral characteristics can lead to biases, too. It is reasonable to suspect that smoking history, which is more common among Chinese men and is a risk factor for death from COVID-19, could bias the model into broadly overestimating male risk of death if it is not measured directly.

Especially for models involving humans, there are so many potential sources of bias that they cannot be dismissed without investigation. If an AI model has no documented and evaluated biases, that should only increase a skeptic's certainty that they remain hidden, unresolved, and pernicious.
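
In practice, "documented and evaluated" means exactly this kind of check. The sketch below is a minimal, hypothetical subgroup audit (the column names and validation data are placeholders, not tied to the Wuhan preprint): instead of reporting a single overall accuracy number, it compares discrimination and calibration separately for each subgroup.

```python
# Minimal sketch of a subgroup audit, the kind of check a skeptic should
# ask for before trusting a risk model. Column names and the validation
# dataframe are hypothetical; a real audit would use the model's actual
# validation data and protected attributes.
import pandas as pd
from sklearn.metrics import roc_auc_score

def audit_by_group(df, group_col, label_col="died", score_col="predicted_risk"):
    """Report discrimination and calibration separately for each subgroup."""
    rows = []
    for group, sub in df.groupby(group_col):
        rows.append({
            group_col: group,
            "n": len(sub),
            "auc": roc_auc_score(sub[label_col], sub[score_col]),
            "mean_predicted": sub[score_col].mean(),
            "observed_rate": sub[label_col].mean(),
        })
    return pd.DataFrame(rows)

# Example usage with a hypothetical validation set:
# print(audit_by_group(validation_df, "sex"))
# print(audit_by_group(validation_df, "age_band"))
# Large gaps in AUC between groups, or mean predictions that consistently
# overshoot the observed rate for one group, are the documented biases the
# text argues should be the default expectation.
```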

While this article takes a deliberately skeptical perspective, the future impact of AI on many of these applications is bright. For instance, while diagnosis of COVID-19 with CT scans is of questionable value right now, the impact that AI is having on medical imaging is substantial. Emerging applications can evaluate the malignancy of tissue abnormalities, study skeletal structures, and reduce the need for invasive biopsies.

Other applications show great promise, though it is too soon to tell if they will meaningfully impact this pandemic. For instance, AI-designed drugs are just now starting human trials. The use of AI to summarize thousands of research papers may also quicken medical discoveries relevant to COVID-19.

AI is a widely applicable technology, but its advantages need to be tempered by a realistic understanding of its limitations. To that end, the goal of this article is not to broadly disparage the contributions that AI can make, but instead to encourage a critical and discerning eye for the specific circumstances in which AI can be meaningful.

The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.

Microsoft provides support to The Brookings Institution's Artificial Intelligence and Emerging Technology (AIET) Initiative. The findings, interpretations, and conclusions in this report are not influenced by any donation. Brookings recognizes that the value it provides is in its absolute commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment.

Read more:

A guide to healthy skepticism of artificial intelligence and coronavirus - Brookings Institution

AI vs your career? What artificial intelligence will really do to the future of work – ZDNet

Jill Watson has been a teaching assistant (TA) at the Georgia Institute of Technology for five years now, helping students day and night with all manner of course-related inquiries. But for all the hard work she has done, she still can't qualify for outstanding TA of the year.

That's because Jill Watson, contrary to many students' belief, is not actually human.

Created back in 2015 by Ashok Goel, professor of computer science and cognitive science at the Institute, Jill Watson is an artificial system based on IBM's Watson artificial intelligence software. Her role consists of answering students' questions, a task she carries out with a remarkable 97% accuracy rate, for inquiries ranging from confirming the word count for an assignment to complex technical questions related to the content of the course.
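
For readers curious about the general shape of such a system, the sketch below is a deliberately simplified, hypothetical retrieval-style assistant, not Goel's actual IBM Watson-based implementation: it matches a new question against previously answered forum posts and stays silent when the match is too weak, deferring to a human TA.

```python
# Illustrative sketch of a retrieval-style teaching assistant. This is NOT
# Jill Watson's real implementation; the questions, answers, and threshold
# are made up to show the general pattern: answer routine questions by
# matching them against past forum Q&A, and defer when confidence is low.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_questions = [
    "What is the word count for assignment 1?",
    "When is the midterm exam?",
    "Which textbook chapters does project 2 cover?",
]
past_answers = [
    "Assignment 1 should be 1,500 words or fewer.",       # placeholder answer
    "The midterm is in week 8; see the syllabus.",         # placeholder answer
    "Project 2 covers chapters 4 through 6.",              # placeholder answer
]

vectorizer = TfidfVectorizer().fit(past_questions)
question_vectors = vectorizer.transform(past_questions)

def answer(question, threshold=0.4):
    sims = cosine_similarity(vectorizer.transform([question]), question_vectors)[0]
    best = sims.argmax()
    if sims[best] < threshold:
        return None  # defer to a human TA instead of guessing
    return past_answers[best]

print(answer("What is the word count for the first assignment?"))  # matched
print(answer("Can you explain backpropagation?"))  # prints None: deferred
```

The deferral threshold is the design choice that matters most here: answering only when confidence is high is what keeps the accuracy of the answers that are given so much higher than the system's overall coverage.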

And she has certainly gone down well with students, many of whom, in 2015, were "flabbergasted" upon discovering that their favorite TA was not the serviceable, human lady that they expected, but in fact a cold-hearted machine.

What students found an amusing experiment is the sort of thing that worries many workers. Automation, we have been told time and again, will displace jobs; so are experiments like Jill Watson the first step towards unemployment for professionals?

SEE: How to implement AI and machine learning (ZDNet special report) | Download the report as a PDF (TechRepublic)

In fact, it's quite the contrary, Goel tells ZDNet. "Job losses are an important concern; Jill Watson, in a way, could replace me as a teacher," he said. "But among the professors who use her, that question has never come up, because there is a huge need for teachers globally. Instead of replacing teachers, Jill Watson augments and amplifies their work, and that is something we actually need."

The AI was originally developed for an online master's in computer science, where students interact with teachers via a web discussion forum. In the spring of 2015 alone, Goel noticed, 350 students posted 10,000 messages to the forum; answering all of their questions, he worked out, would have taken a real-life teacher a year of full-time work.

Jill Watson has only grown in popularity since 2015, said Goel, and she has now been deployed to a dozen other courses -- building her up for a new class takes less than ten hours. And while the artificial TA, for now, is only used at Georgia Institute of Technology, Jill Watson could change the education game if she were to be scaled globally. With UNESCO estimating that an additional 69 million teachers are needed to achieve sustainable development goals, the notion of 'augmenting' and 'amplifying' teachers' work could go a long way.

The automation of certain tasks is not such a scary prospect for those working in education. And perhaps neither is it a risk to the medical industry, where AI is already lending a helping hand with tasks ranging from disease diagnosis to prescription monitoring. It's a welcome support, rather than a looming threat, as the overwhelming majority of health services across the world report staff shortages and lack of resources even at the best of times.

But of course, not all professions are in dire need of more staff. For many workers, the advent of AI-powered technologies seems to be synonymous with permanent lay-off. Retailers are already using robotic fulfillment systems to pick orders in their warehouses. Google's project to build autonomous vehicles, Waymo, has launched its first commercial self-driving car service in the US, which in the long term will remove the need for a human taxi driver. Ford is even working on automating delivery services from start to finish, with a two-legged, two-armed robot that can walk around neighborhoods carrying parcels from the delivery vehicle right up to your doorstep.

Advancements in AI technology, therefore, don't bode well for all workers. "Nobody wants to be out of a job," says David McDonald, professor of human-centered design and engineering at the University of Washington. "Technological changes that impact our work, and thus, our ability to support ourselves and our families, are incredibly threatening."

"This suggests that when people hear stories saying that their livelihood is going to disappear," he says, "that they probably will not hear the part of the story that says there will be additional new jobs."

Consultancy McKinsey estimates that automation will cause up to 800 million individuals around the world to be displaced from their jobs by 2030, a statistic that will sound ominous, to say the least, to most of the workforce. But the firm's research also shows that in nearly all scenarios, and provided that there is sufficient investment and growth, most countries can expect to be at or very near full employment by the same year.

The potential impact of artificial intelligence needs to be seen as part of the bigger picture. McKinsey highlighted that one of the countries that will face the largest displacement of workers is China, with up to 12% of the workforce needing to switch occupations. But although 12% seems like a lot, the consultancy noted, it's still relatively small compared with the tens of millions of Chinese who have moved out of agriculture in the past 25 years.

In other words, AI is only the latest development in the long history of technological progress, and as with all previous advancements, the new opportunities that AI opens up will balance out the skills that the technology makes obsolete. At least that's the theory, and one that Brett Frischmann explores in the book he co-authored, Re-engineering Humanity. In his view, re-engineering work through technology is a project that has been going on forever, and more recent innovations simply build on the efficiencies pioneered by the likes of Frederick Winslow Taylor and Henry Ford.

"At one point, human beings used spears to fish. As we developed fishing technology, fewer people needed that skill and did other things," he says. "The idea that there is something dramatically different about AI has to be looked at carefully. Ultimately, data-driven systems, for example as a way to optimize factory outputs, are only a ramped-up version of Ford and Taylor's processes."

Seeing AI as simply the next chapter of tech is a common position among experts. The University of Washington's McDonald is equally convinced that in one form or another, we have been building systems to complement work "for over 50 years".

So where does the big AI scare come from? A large part of the problem, as is often the case, comes down to misunderstanding. There is one point that Frischmann was determined to clarify: people tend to think, wrongly, that the technology is a force with its own agenda -- one that involves turning against us and stealing our jobs.

"It's really important for people to understand that the AI doesn't want anything," he said. "It's not a bad guy. It doesn't have a role of its own, or an agenda. Human beings are the ones that create, design, damage, deploy, control those systems."

In reality, according to McKinsey, fewer than 5% of occupations can be entirely automated using current technology. But over half of jobs could have 30% of their activities taken on by AI. Rather than robots taking over, therefore, it looks like the future will be about task-sharing.

Gartner previously reported that by 2022, one in five workers engaged in non-routine tasks will rely on AI to get work done. The research firm's analysts forecast that combining human and artificial intelligence will be the way forward to maximize the value generated by the technology. AI, said Gartner, will assist workers in all types of jobs, from entry-level to highly skilled.

The technology could become a virtual assistant, an intern, or another kind of robo-employee; in any case, it will lead to the development of an 'augmented' workforce, whose productivity will be enhanced by the tool.

For Gina Neff, associate professor at the Oxford Internet Institute, delegating tasks to AI will only bring about a brighter future for workers. "Humans are very good at lots of tasks, and there are lots of tasks that computers are better at than we are. I don't want to have to add large lists of sums by hand for my job, and thankfully I have a technology to help me do that."

"Increasingly, the conversation will shift towards thinking about what type of work we want to do, and how we can use the tools we have at our disposal to enhance our capacity, and make our work both productive and satisfying."

As machines take on tasks such as collecting and processing data, which they already carry out much better than humans, workers will find that they have more time to apply themselves to projects involving the cognitive skills that robots (at least currently) lack: logical reasoning, creativity, and communication.

Using technology to augment the human value of work is also the prospect that McDonald has in mind. "We should be using AI and complex computational systems to help people achieve their hopes, dreams and goals," he said. "That is, the AI systems we build should augment and extend our social and our cognitive skills and abilities."

There is a caveat. For AI systems to effectively bolster our hopes, dreams and goals, as McDonald said, it is crucial that the technology is designed from the start as a human-centered tool, one that is made specifically to fulfil the interests of the human workforce.

Human-centricity might be the next big challenge for AI. Some believe, however, that so far the field has not done such a good job of ensuring that the technology enhances humans. In Re-engineering Humanity, Frischmann, for one, does not do AI any favours.

"Smart systems and automation, in my opinion, cause atrophy, more than enhancement," he argued. "The question of whether robots will take our jobs is the wrong one. What is more relevant is how the deployment of AI affects humans. Are we engineering unintelligent humans, rather than intelligent machines?"

It is certainly a fine line, and walking it will be a delicate balancing act going forward. For Oxford Internet Institute's Neff, making AI work in humans' best interest will require a whole new category of workers, which she called "translators", to act as intermediaries between the real world and the technology.

For Neff, translators won't be roboticists or "hot-shot data scientists", but workers who understand the situation "on the ground" well enough to see how the technology can be applied efficiently to complement human activity.

In one example of good practice, and of a way to bridge the gap between humans and technology, Amazon last year launched an initiative to help retrain up to 1,300 employees who were being made redundant as the company deployed robots to its US fulfilment centres. The e-tailer announced that it would pay workers $10,000 to quit their jobs and set up their own delivery businesses, in order to tackle retail's infamous last-mile logistics challenge. Tens of thousands of workers have now applied to the program.

In a similar vein, Gartner recently suggested that HR departments start including a section dedicated to "robot resources", to better manage employees as they start working alongside robotic colleagues. "Getting an AI to collaborate with humans in the ways that we collaborate with others at work, every day, is incredibly hard," said McDonald. "One of the emerging areas in design is focused on designing AI that more effectively augments human capacity with respect for people."

SEE: 7 business areas ripe for an artificial intelligence boost

From human-centred design to participatory design and user-experience design: for McDonald, humans have to be the main focus from the very first stage of creating an AI.

And then there is the question of communication. At the Georgia Institute of Technology, Goel recognised that AI "has not done a good job" of selling itself to those who are not inside the experts' bubble.

"AI researchers like me cannot stay in our glass tower and develop tools while the rest of the world is anxious about the technology," he said. "We need to look at the social implications of what we do. If we can show that AI can solve previously unsolvable problems, then the value of AI will become clearer to everyone."

His dream for the future? To get every teacher in the world a Jill Watson assistant within five years, and for every parent to have access to one within the next decade, to help children with after-school questions. In fact, it's increasingly looking like every industry, not only education, will be getting its own version of a Jill Watson, too, and that we needn't worry that she will be coming for our jobs anytime soon.

Excerpt from:

AI vs your career? What artificial intelligence will really do to the future of work - ZDNet

