What is AI? Everything you need to know about Artificial Intelligence

It depends who you ask.

Back in the 1950s, the fathers of the field, Minsky and McCarthy, described artificial intelligence as any task performed by a program or a machine that, if a human carried out the same activity, we would say the human had to apply intelligence to accomplish the task.

That obviously is a fairly broad definition, which is why you will sometimes see arguments over whether something is truly AI or not.

AI systems will typically demonstrate at least some of the following behaviors associated with human intelligence: planning, learning, reasoning, problem solving, knowledge representation, perception, motion and manipulation, and, to a lesser extent, social intelligence and creativity.

AI is ubiquitous today, used to recommend what you should buy next online, to understand what you say to virtual assistants such as Amazon's Alexa and Apple's Siri, to recognise who and what is in a photo, to spot spam, or detect credit card fraud.

At a very high level artificial intelligence can be split into two broad types: narrow AI and general AI.

Narrow AI is what we see all around us in computers today: intelligent systems that have been taught or learned how to carry out specific tasks without being explicitly programmed how to do so.

This type of machine intelligence is evident in the speech and language recognition of the Siri virtual assistant on the Apple iPhone, in the vision-recognition systems on self-driving cars, in the recommendation engines that suggest products you might like based on what you bought in the past. Unlike humans, these systems can only learn or be taught how to do specific tasks, which is why they are called narrow AI.

There are a vast number of emerging applications for narrow AI: interpreting video feeds from drones carrying out visual inspections of infrastructure such as oil pipelines, organizing personal and business calendars, responding to simple customer-service queries, co-ordinating with other intelligent systems to carry out tasks like booking a hotel at a suitable time and location, helping radiologists to spot potential tumors in X-rays, flagging inappropriate content online, detecting wear and tear in elevators from data gathered by IoT devices, the list goes on and on.

Artificial general intelligence is very different, and is the type of adaptable intellect found in humans: a flexible form of intelligence capable of learning how to carry out vastly different tasks, anything from haircutting to building spreadsheets, or of reasoning about a wide variety of topics based on its accumulated experience. This is the sort of AI more commonly seen in movies, the likes of HAL in 2001 or Skynet in The Terminator, but which doesn't exist today -- and AI experts are fiercely divided over how soon it will become a reality.

A survey conducted among four groups of experts in 2012 and 2013 by AI researcher Vincent C. Müller and philosopher Nick Bostrom reported a 50 percent chance that Artificial General Intelligence (AGI) would be developed between 2040 and 2050, rising to 90 percent by 2075. The group went even further, predicting that so-called 'superintelligence' -- which Bostrom defines as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest" -- was expected some 30 years after the achievement of AGI.

That said, some AI experts believe such projections are wildly optimistic given our limited understanding of the human brain, and believe that AGI is still centuries away.

There is a broad body of research in AI, much of which feeds into and complements each other.

Currently enjoying something of a resurgence, machine learning is where a computer system is fed large amounts of data, which it then uses to learn how to carry out a specific task, such as understanding speech or captioning a photograph.

Key to the process of machine learning are neural networks. These are brain-inspired networks of interconnected layers of algorithms, called neurons, that feed data into each other, and which can be trained to carry out specific tasks by modifying the importance attributed to input data as it passes between the layers. During training of these neural networks, the weights attached to different inputs will continue to be varied until the output from the neural network is very close to what is desired, at which point the network will have 'learned' how to carry out a particular task.
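
To make that weight-adjustment loop concrete, here is a minimal sketch in plain Python and NumPy: a tiny two-layer network learns the XOR function, with its weights nudged after every pass until the output is close to the desired labels. The data, layer sizes, and learning rate are illustrative choices for this sketch, not details from the article.

```python
import numpy as np

# Toy training data: learn XOR, a task a single neuron cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: data flows through the interconnected layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Error between the network's output and the desired labels.
    err = out - y

    # Backward pass: nudge each weight in the direction that reduces the error.
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(axis=0)

print(np.round(out, 2))  # close to [[0], [1], [1], [0]] once the network has 'learned'
```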

A subset of machine learning is deep learning, where neural networks are expanded into sprawling networks with a huge number of layers that are trained using massive amounts of data. It is these deep neural networks that have fueled the current leap forward in the ability of computers to carry out tasks like speech recognition and computer vision.

There are various types of neural networks, with different strengths and weaknesses. Recurrent neural networks are a type of neural net particularly well suited to language processing and speech recognition, while convolutional neural networks are more commonly used in image recognition. The design of neural networks is also evolving, with researchers recently refining a more effective form of deep neural network called long short-term memory or LSTM, allowing it to operate fast enough to be used in on-demand systems like Google Translate.
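
As a rough illustration of the difference between these network types, the sketch below defines a small convolutional block and an LSTM layer using PyTorch. The layer sizes, input shapes, and the choice of PyTorch itself are assumptions made for this example rather than details from the article.

```python
import torch
import torch.nn as nn

# A convolutional block of the kind used for image recognition:
# it scans small patches of the image for local patterns.
cnn = nn.Sequential(
    nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 10),   # assumes 32x32 input images and 10 classes
)

# A recurrent (LSTM) layer of the kind used for speech and language:
# it reads a sequence one step at a time while carrying a memory forward.
lstm = nn.LSTM(input_size=40, hidden_size=128, batch_first=True)

images = torch.randn(8, 3, 32, 32)        # batch of 8 RGB images
audio_frames = torch.randn(8, 100, 40)    # batch of 8 sequences of 100 feature frames

class_scores = cnn(images)                # shape: (8, 10)
sequence_out, _ = lstm(audio_frames)      # shape: (8, 100, 128)
print(class_scores.shape, sequence_out.shape)
```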

The structure and training of deep neural networks.

Another area of AI research is evolutionary computation, which borrows from Darwin's famous theory of natural selection, and sees genetic algorithms undergo random mutations and combinations between generations in an attempt to evolve the optimal solution to a given problem.
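
A minimal genetic-algorithm sketch in Python shows the idea: a population of candidate solutions is scored, the fittest survive, and random crossover and mutation produce the next generation. The toy problem (maximizing the number of 1s in a bit-string), the population size, and the mutation rate are all arbitrary choices for illustration.

```python
import random

# Toy problem: evolve a bit-string so that as many bits as possible are 1.
TARGET_LEN = 20

def fitness(individual):
    return sum(individual)

def mutate(individual, rate=0.05):
    return [bit ^ 1 if random.random() < rate else bit for bit in individual]

def crossover(a, b):
    point = random.randrange(1, TARGET_LEN)
    return a[:point] + b[point:]

population = [[random.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(50)]

for generation in range(100):
    # Selection: keep the fittest half of the population.
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]

    # Reproduction: random crossover and mutation produce the next generation.
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(25)]
    population = survivors + children

print(fitness(population[0]), population[0])  # best solution found so far
```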

This approach has even been used to help design AI models, effectively using AI to help build AI. This use of evolutionary algorithms to optimize neural networks is called neuroevolution, and could have an important role to play in helping design efficient AI as the use of intelligent systems becomes more prevalent, particularly as demand for data scientists often outstrips supply. The technique was recently showcased by Uber AI Labs, which released papers on using genetic algorithms to train deep neural networks for reinforcement learning problems.

Finally there are expert systems, where computers are programmed with rules that allow them to take a series of decisions based on a large number of inputs, allowing that machine to mimic the behavior of a human expert in a specific domain. An example of these knowledge-based systems might be an autopilot system flying a plane.

The biggest breakthroughs for AI research in recent years have been in the field of machine learning, in particular within the field of deep learning.

This has been driven in part by the easy availability of data, but even more so by an explosion in parallel computing power in recent years, during which time the use of GPU clusters to train machine-learning systems has become more prevalent.

Not only do these clusters offer vastly more powerful systems for training machine-learning models, but they are now widely available as cloud services over the internet. Over time the major tech firms, the likes of Google and Microsoft, have moved to using specialized chips tailored to both running, and more recently training, machine-learning models.

An example of one of these custom chips is Google's Tensor Processing Unit (TPU), the latest version of which accelerates the rate at which useful machine-learning models built using Google's TensorFlow software library can infer information from data, as well as the rate at which they can be trained.

These chips are not just used to train up models for DeepMind and Google Brain, but also the models that underpin Google Translate and the image recognition in Google Photos, as well as services that allow the public to build machine-learning models using Google's TensorFlow Research Cloud. The second generation of these chips was unveiled at Google's I/O conference in May last year, with an array of these new TPUs able to train a Google machine-learning model used for translation in half the time it would take an array of the top-end graphics processing units (GPUs).

As mentioned, machine learning is a subset of AI and is generally split into two main categories: supervised and unsupervised learning.

Supervised learning

A common technique for teaching AI systems is by training them using a very large number of labeled examples. These machine-learning systems are fed huge amounts of data, which has been annotated to highlight the features of interest. These might be photos labeled to indicate whether they contain a dog, or written sentences that have footnotes to indicate whether the word 'bass' relates to music or a fish. Once trained, the system can then apply these labels to new data, for example to a dog in a photo that's just been uploaded.

This process of teaching a machine by example is called supervised learning and the role of labeling these examples is commonly carried out by online workers, employed through platforms like Amazon Mechanical Turk.
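
A short sketch of supervised learning, using scikit-learn and its bundled handwritten-digit dataset, is shown below; the dataset, model choice, and train/test split are illustrative stand-ins for the much larger labeled datasets described here.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Labeled examples: small images of handwritten digits, each annotated 0-9.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Train on the annotated data, then apply the learned mapping to new, unseen examples.
model = LogisticRegression(max_iter=2000)
model.fit(X_train, y_train)

print("accuracy on unseen examples:", model.score(X_test, y_test))
print("predicted label for one new image:", model.predict(X_test[:1])[0])
```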

Training these systems typically requires vast amounts of data, with some systems needing to scour millions of examples to learn how to carry out a task effectively -- although this is increasingly possible in an age of big data and widespread data mining. Training datasets are huge and growing in size -- Google's Open Images Dataset has about nine million images, while its labeled video repository YouTube-8M links to seven million labeled videos. ImageNet, one of the early databases of this kind, has more than 14 million categorized images. Compiled over two years, it was put together by nearly 50,000 people -- most of whom were recruited through Amazon Mechanical Turk -- who checked, sorted, and labeled almost one billion candidate pictures.

In the long run, having access to huge labelled datasets may also prove less important than access to large amounts of compute power.

In recent years, Generative Adversarial Networks (GANs) have shown how machine-learning systems that are fed a small amount of labelled data can then generate huge amounts of fresh data to teach themselves.

This approach could lead to the rise of semi-supervised learning, where systems can learn how to carry out tasks using a far smaller amount of labelled data than is necessary for training systems using supervised learning today.

Unsupervised learning

In contrast, unsupervised learning uses a different approach, where algorithms try to identify patterns in data, looking for similarities that can be used to categorise that data.

An example might be clustering together fruits that weigh a similar amount or cars with a similar engine size.

The algorithm isn't set up in advance to pick out specific types of data; it simply looks for data that can be grouped by its similarities -- for example, Google News grouping together stories on similar topics each day.
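
The fruit example above might look something like the following sketch, which uses scikit-learn's k-means implementation on made-up weight and diameter measurements; the numbers and the choice of three clusters are purely illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

# Unlabeled data: weight in grams and diameter in cm for a mixed basket of fruit.
fruit = np.array([
    [150, 7.0], [160, 7.5], [145, 6.8],   # apple-sized
    [10, 1.8], [12, 2.0], [9, 1.7],       # grape-sized
    [1200, 20.0], [1100, 19.0],           # melon-sized
])

# No labels are provided; the algorithm groups items purely by similarity.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(fruit)
print(kmeans.labels_)           # cluster assignment for each fruit
print(kmeans.cluster_centers_)  # the 'typical' member of each group
```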

Reinforcement learning

A crude analogy for reinforcement learning is rewarding a pet with a treat when it performs a trick.

In reinforcement learning, the system attempts to maximize a reward based on its input data, basically going through a process of trial and error until it arrives at the best possible outcome.

An example of reinforcement learning is Google DeepMind's Deep Q-network, which has been used to best human performance in a variety of classic video games. The system is fed pixels from each game and determines various information, such as the distance between objects on screen.

By also looking at the score achieved in each game the system builds a model of which action will maximize the score in different circumstances, for instance, in the case of the video game Breakout, where the paddle should be moved to in order to intercept the ball.
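
The sketch below shows the same trial-and-error idea on a much smaller scale than a Deep Q-network: tabular Q-learning on a hypothetical five-cell corridor, where the agent is rewarded only for reaching the final cell. The environment, learning rate, and exploration settings are invented for the example.

```python
import random

# A tiny corridor of 5 cells; the agent starts at cell 0 and is rewarded at cell 4.
N_STATES, ACTIONS = 5, [-1, +1]          # move left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.3

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Trial and error: mostly exploit what we know, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])

        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0

        # Update the estimate of how good this action is in this state.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# After training, the learned policy is to always move right (+1).
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])
```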

Many AI-related technologies are approaching, or have already reached, the 'peak of inflated expectations' in Gartner's Hype Cycle, with the backlash-driven 'trough of disillusionment' lying in wait.

With AI playing an increasingly major role in modern software and services, each of the major tech firms is battling to develop robust machine-learning technology for use in-house and to sell to the public via cloud services.

Each regularly makes headlines for breaking new ground in AI research, although it is probably Google, with its DeepMind AlphaGo AI, that has made the biggest impact on public awareness of AI.

All of the major cloud platforms -- Amazon Web Services, Microsoft Azure and Google Cloud Platform -- provide access to GPU arrays for training and running machine learning models, with Google also gearing up to let users use its Tensor Processing Units -- custom chips whose design is optimized for training and running machine-learning models.

All of the necessary associated infrastructure and services are available from the big three: cloud-based data stores capable of holding the vast amounts of data needed to train machine-learning models, services to transform data to prepare it for analysis, visualisation tools to display the results clearly, and software that simplifies the building of models.

These cloud platforms are even simplifying the creation of custom machine-learning models, with Google recently revealing a service that automates the creation of AI models, called Cloud AutoML. This drag-and-drop service builds custom image-recognition models and requires the user to have no machine-learning expertise.

Cloud-based, machine-learning services are constantly evolving, and at the start of 2018, Amazon revealed a host of new AWS offerings designed to streamline the process of training up machine-learning models.

For those firms that don't want to build their own machine learning models but instead want to consume AI-powered, on-demand services -- such as voice, vision, and language recognition -- Microsoft Azure stands out for the breadth of services on offer, closely followed by Google Cloud Platform and then AWS. Meanwhile IBM, alongside its more general on-demand offerings, is also attempting to sell sector-specific AI services aimed at everything from healthcare to retail, grouping these offerings together under its IBM Watson umbrella -- and recently investing $2bn in buying The Weather Company to unlock a trove of data to augment its AI services.

Internally, each of the tech giants -- and others such as Facebook -- use AI to help drive myriad public services: serving search results, offering recommendations, recognizing people and things in photos, on-demand translation, spotting spam -- the list is extensive.

But one of the most visible manifestations of this AI war has been the rise of virtual assistants, such as Apple's Siri, Amazon's Alexa, the Google Assistant, and Microsoft Cortana.

The Amazon Echo Plus is a smart speaker with access to Amazon's Alexa virtual assistant built in.

Relying heavily on voice recognition and natural-language processing, as well as needing an immense corpus to draw upon to answer queries, a huge amount of tech goes into developing these assistants.

But while Apple's Siri may have come to prominence first, it is Google and Amazon whose assistants have since overtaken Apple in the AI space -- Google Assistant with its ability to answer a wide range of queries and Amazon's Alexa with the massive number of 'Skills' that third-party devs have created to add to its capabilities.

Despite being built into Windows 10, Cortana has had a particularly rough time of late, with the suggestion that major PC makers will build Alexa into laptops adding to speculation about whether Cortana's days are numbered, although Microsoft was quick to reject this.

It'd be a big mistake to think the US tech giants have the field of AI sewn up. Chinese firms Alibaba, Baidu, and Lenovo are investing heavily in AI in fields ranging from ecommerce to autonomous driving. China is also pursuing a three-step national plan to turn AI into a core industry, one that will be worth 150 billion yuan ($22bn) by 2020.

Baidu has invested in developing self-driving cars, powered by its deep learning algorithm, Baidu AutoBrain, and, following several years of tests, plans to roll out fully autonomous vehicles in 2018 and mass-produce them by 2021.

Baidu's self-driving car, a modified BMW 3 series.

Baidu has also partnered with Nvidia to use AI to create a cloud-to-car autonomous car platform for auto manufacturers around the world.

The combination of weak privacy laws, huge investment, concerted data-gathering, and big data analytics by major firms like Baidu, Alibaba, and Tencent, means that some analysts believe China will have an advantage over the US when it comes to future AI research, with one analyst describing the chances of China taking the lead over the US as 500 to one in China's favor.

While you could try to build your own GPU array at home and start training a machine-learning model, probably the easiest way to experiment with AI-related services is via the cloud.

All of the major tech firms offer various AI services, from the infrastructure to build and train your own machine-learning models through to web services that allow you to access AI-powered tools such as speech, language, vision and sentiment recognition on demand.

There are too many to put together a comprehensive list, but some recent highlights include: in 2009, Google showed it was possible for its self-driving Toyota Prius to complete more than 10 journeys of 100 miles each -- setting society on a path towards driverless vehicles.

In 2011, the computer system IBM Watson made headlines worldwide when it won the US quiz show Jeopardy!, beating two of the best players the show had ever produced. To win the show, Watson used natural language processing and analytics on vast repositories of data that it processed to answer human-posed questions, often in a fraction of a second.

IBM Watson competes on Jeopardy! on January 14, 2011.

In June 2012, it became apparent just how good machine-learning systems were getting at computer vision, with Google training a system to recognise an internet favorite, pictures of cats.

Since Watson's win, perhaps the most famous demonstration of the efficacy of machine-learning systems was the 2016 triumph of the Google DeepMind AlphaGo AI over a human grandmaster in Go, an ancient Chinese game whose complexity stumped computers for decades. Go has about 200 moves per turn, compared to about 20 in Chess. Over the course of a game of Go, there are so many possible moves that searching through each of them in advance to identify the best play is too costly from a computational point of view. Instead, AlphaGo was trained how to play the game by taking moves played by human experts in 30 million Go games and feeding them into deep-learning neural networks.

Training these deep learning networks can take a very long time, requiring vast amounts of data to be ingested and iterated over as the system gradually refines its model in order to achieve the best outcome.

However, more recently Google refined the training process with AlphaGo Zero, a system that played "completely random" games against itself, and then learnt from the results. At last year's prestigious Neural Information Processing Systems (NIPS) conference, Google DeepMind CEO Demis Hassabis revealed AlphaGo had also mastered the games of chess and shogi.

And AI continues to sprint past new milestones: last year, a system trained by OpenAI defeated the world's top players in one-on-one matches of the online multiplayer game Dota 2.

That same year, OpenAI created AI agents that invented their own language to cooperate and achieve their goal more effectively, shortly followed by Facebook training agents to negotiate and even lie.

Robots and driverless cars

The desire for robots to be able to act autonomously and understand and navigate the world around them means there is a natural overlap between robotics and AI. While AI is only one of the technologies used in robotics, it is helping robots move into new areas such as self-driving cars and delivery robots, as well as helping them to learn new skills. General Motors recently said it would build a driverless car without a steering wheel or pedals by 2019, while Ford committed to doing so by 2021, and Waymo, the self-driving group inside Google parent Alphabet, will soon offer a driverless taxi service in Phoenix.

Fake news

We are on the verge of having neural networks that can create photo-realistic images or replicate someone's voice in a pitch-perfect fashion. With that comes the potential for hugely disruptive social change, such as no longer being able to trust video or audio footage as genuine. Concerns are also starting to be raised about how such technologies will be used to misappropriate people's image, with tools already being created to convincingly splice famous actresses into adult films.

Speech and language recognition

Machine-learning systems have helped computers recognize what people are saying with an accuracy of almost 95 percent. Recently Microsoft's Artificial Intelligence and Research group reported it had developed a system able to transcribe spoken English as accurately as human transcribers.

With researchers pursuing a goal of 99 percent accuracy, expect speaking to computers to become the norm alongside more traditional forms of human-machine interaction.

Facial recognition and surveillance

In recent years, the accuracy of facial-recognition systems has leapt forward, to the point where Chinese tech giant Baidu says it can match faces with 99 percent accuracy, providing the face is clear enough on the video. While police forces in western countries have generally only trialled using facial-recognition systems at large events, in China the authorities are mounting a nationwide program to connect CCTV across the country to facial recognition and to use AI systems to track suspects and suspicious behavior, and are also trialling the use of facial-recognition glasses by police.

Although privacy regulations vary across the world, it's likely this more intrusive use of AI technology -- including AI that can recognize emotions -- will gradually become more widespread elsewhere.

Healthcare

AI could eventually have a dramatic impact on healthcare, helping radiologists to pick out tumors in x-rays, aiding researchers in spotting genetic sequences related to diseases and identifying molecules that could lead to more effective drugs.

There have been trials of AI-related technology in hospitals across the world. These include IBM's Watson clinical decision support tool, which is trained by oncologists at Memorial Sloan Kettering Cancer Center, and the use of Google DeepMind systems by the UK's National Health Service, where it will help spot eye abnormalities and streamline the process of screening patients for head and neck cancers.

Again, it depends who you ask. As AI-powered systems have grown more capable, so warnings of the downsides have become more dire.

Tesla and SpaceX CEO Elon Musk has claimed that AI is a "fundamental risk to the existence of human civilization". As part of his push for stronger regulatory oversight and more responsible research into mitigating the downsides of AI he set up OpenAI, a non-profit artificial intelligence research company that aims to promote and develop friendly AI that will benefit society as a whole. Similarly, the esteemed physicist Stephen Hawking has warned that once a sufficiently advanced AI is created it will rapidly advance to the point at which it vastly outstrips human capabilities, a phenomenon known as the singularity, and could pose an existential threat to the human race.

Yet the notion that humanity is on the verge of an AI explosion that will dwarf our intellect seems ludicrous to some AI researchers.

Chris Bishop, Microsoft's director of research in Cambridge, England, stresses how different the narrow intelligence of AI today is from the general intelligence of humans. Asked about people worrying about "Terminator and the rise of the machines and so on", his response is blunt: "Utter nonsense, yes. At best, such discussions are decades away."

The possibility of artificially intelligent systems replacing much of modern manual labour is perhaps a more credible near-future possibility.

While AI won't replace all jobs, what seems to be certain is that AI will change the nature of work, with the only question being how rapidly and how profoundly automation will alter the workplace.

There is barely a field of human endeavour that AI doesn't have the potential to impact. As AI expert Andrew Ng puts it: "many people are doing routine, repetitive jobs. Unfortunately, technology is especially good at automating routine, repetitive work", saying he sees a "significant risk of technological unemployment over the next few decades".

The evidence of which jobs will be supplanted is starting to emerge. Amazon has just launched Amazon Go, a cashier-free supermarket in Seattle where customers just take items from the shelves and walk out. What this means for the more than three million people in the US who work as cashiers remains to be seen. Amazon again is leading the way in using robots to improve efficiency inside its warehouses. These robots carry shelves of products to human pickers who select items to be sent out. Amazon has more than 100,000 bots in its fulfilment centers, with plans to add many more. But Amazon also stresses that as the number of bots has grown, so has the number of human workers in these warehouses. However, Amazon and small robotics firms are working to automate the remaining manual jobs in the warehouse, so it's not a given that manual and robotic labor will continue to grow hand-in-hand.

Amazon bought robotics firm Kiva Systems in 2012 and today uses its robots throughout its warehouses.

What Is Artificial Intelligence (AI)? | PCMag

In September 1955, John McCarthy, a young assistant professor of mathematics at Dartmouth College, boldly proposed that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."

McCarthy called this new field of study "artificial intelligence," and suggested that a two-month effort by a group of 10 scientists could make significant advances in developing machines that could "use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves."

At the time, scientists optimistically believed we would soon have thinking machines doing any work a human could do. Now, more than six decades later, advances in computer science and robotics have helped us automate many of the tasks that previously required the physical and cognitive labor of humans.

But true artificial intelligence, as McCarthy conceived it, continues to elude us.

A great challenge with artificial intelligence is that it's a broad term, and there's no clear agreement on its definition.

As mentioned, McCarthy proposed AI would solve problems the way humans do: "The ultimate effort is to make computer programs that can solve problems and achieve goals in the world as well as humans," McCarthy said.

Andrew Moore, Dean of Computer Science at Carnegie Mellon University, provided a more modern definition of the term in a 2017 interview with Forbes: "Artificial intelligence is the science and engineering of making computers behave in ways that, until recently, we thought required human intelligence."

But our understanding of "human intelligence" and our expectations of technology are constantly evolving. Zachary Lipton, the editor of Approximately Correct, describes the term AI as "aspirational, a moving target based on those capabilities that humans possess but which machines do not." In other words, the things we ask of AI change over time.

For instance, in the 1950s, scientists viewed chess and checkers as great challenges for artificial intelligence. But today, very few would consider chess-playing machines to be AI. Computers are already tackling much more complicated problems, including detecting cancer, driving cars, and processing voice commands.

The first generation of AI scientists and visionaries believed we would eventually be able to create human-level intelligence.

But several decades of AI research have shown that replicating the complex problem-solving and abstract thinking of the human brain is supremely difficult. For one thing, we humans are very good at generalizing knowledge and applying concepts we learn in one field to another. We can also make relatively reliable decisions based on intuition and with little information. Over the years, human-level AI has become known as artificial general intelligence (AGI) or strong AI.

The initial hype and excitement surrounding AI drew interest and funding from government agencies and large companies. But it soon became evident that contrary to early perceptions, human-level intelligence was not right around the corner, and scientists were hard-pressed to reproduce the most basic functionalities of the human mind. In the 1970s, unfulfilled promises and expectations eventually led to the "AI winter," a long period during which public interest and funding in AI dampened.

It took many years of innovation and a revolution in deep-learning technology to revive interest in AI. But even now, despite enormous advances in artificial intelligence, none of the current approaches to AI can solve problems in the same way the human mind does, and most experts believe AGI is at least decades away.

On the flipside, narrow or weak AI doesn't aim to reproduce the functionality of the human brain, and instead focuses on optimizing a single task. Narrow AI has already found many real-world applications, such as recognizing faces, transforming audio to text, recommending videos on YouTube, and displaying personalized content in the Facebook News Feed.

Many scientists believe that we will eventually create AGI, but some have a dystopian vision of the age of thinking machines. In 2014, renowned English physicist Stephen Hawking described AI as an existential threat to mankind, warning that "full artificial intelligence could spell the end of the human race."

In 2015, Y Combinator President Sam Altman and Tesla CEO Elon Musk, two other believers in AGI, co-founded OpenAI, a nonprofit research lab that aims to create artificial general intelligence in a manner that benefits all of humankind. (Musk has since departed.)

Others believe that artificial general intelligence is a pointless goal. "We don't need to duplicate humans. That's why I focus on having tools to help us rather than duplicate what we already know how to do. We want humans and machines to partner and do something that they cannot do on their own," says Peter Norvig, Director of Research at Google.

Scientists such as Norvig believe that narrow AI can help automate repetitive and laborious tasks and help humans become more productive. For instance, doctors can use AI algorithms to examine X-ray scans at high speeds, allowing them to see more patients. Another example of narrow AI is fighting cyberthreats: Security analysts can use AI to find signals of data breaches in the gigabytes of data being transferred through their companies' networks.

Early AI-creation efforts were focused on transforming human knowledge and intelligence into static rules. Programmers had to meticulously write code (if-then statements) for every rule that defined the behavior of the AI. The advantage of rule-based AI, which later became known as "good old-fashioned artificial intelligence" (GOFAI), is that humans have full control over the design and behavior of the system they develop.

Rule-based AI is still very popular in fields where the rules are clearcut. One example is video games, in which developers want AI to deliver a predictable user experience.

The problem with GOFAI is that contrary to McCarthy's initial premise, we can't precisely describe every aspect of learning and behavior in ways that can be transformed into computer rules. For instance, defining logical rules for recognizing voices and images -- a complex feat that humans accomplish instinctively -- is one area where classic AI has historically struggled.

An alternative approach to creating artificial intelligence is machine learning. Instead of developing rules for AI manually, machine-learning engineers "train" their models by providing them with a massive amount of samples. The machine-learning algorithm analyzes and finds patterns in the training data, then develops its own behavior. For instance, a machine-learning model can train on large volumes of historical sales data for a company and then make sales forecasts.
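
A minimal version of that sales-forecasting idea, assuming a made-up dataset and a simple linear model from scikit-learn, might look like this:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical historical data: monthly ad spend (in $1000s) and units sold.
ad_spend = np.array([[10], [15], [20], [25], [30], [35]])
units_sold = np.array([110, 160, 205, 260, 305, 350])

# The model finds the pattern in past sales rather than following hand-written rules.
model = LinearRegression().fit(ad_spend, units_sold)

# Forecast sales for a planned ad spend the model has never seen before.
print(model.predict([[40]]))   # roughly 400 units, extrapolating the learned trend
```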

Deep learning, a subset of machine learning, has become very popular in the past few years. It's especially good at processing unstructured data such as images, video, audio, and text documents. For instance, you can create a deep-learning image classifier and train it on millions of available labeled photos, such as the ImageNet dataset. The trained AI model will be able to recognize objects in images with accuracy that often surpasses humans. Advances in deep learning have pushed AI into many complicated and critical domains, such as medicine, self-driving cars, and education.
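
A sketch of that workflow, using a ResNet model from torchvision that has already been trained on ImageNet, is shown below; the file name 'dog.jpg' stands in for any local photo, and the preprocessing values are the standard ImageNet statistics.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Load a network already trained on the ImageNet dataset mentioned above.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Standard ImageNet preprocessing: resize, crop, and normalize the photo.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = Image.open("dog.jpg")               # hypothetical local photo
batch = preprocess(image).unsqueeze(0)      # add a batch dimension

with torch.no_grad():
    scores = model(batch)
print("predicted ImageNet class index:", scores.argmax(dim=1).item())
```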

One of the challenges with deep-learning models is that they develop their own behavior based on training data, which makes them complex and opaque. Often, even deep-learning experts have a hard time explaining the decisions and inner workings of the AI models they create.

Here are some of the ways AI is bringing tremendous changes to different domains.

Self-driving cars: Advances in artificial intelligence have brought us very close to making the decades-long dream of autonomous driving a reality. AI algorithms are one of the main components that enable self-driving cars to make sense of their surroundings, taking in feeds from cameras installed around the vehicle and detecting objects such as roads, traffic signs, other cars, and people.

Digital assistants and smart speakers: Siri, Alexa, Cortana, and Google Assistant use artificial intelligence to transform spoken words to text and map the text to specific commands. AI helps digital assistants make sense of different nuances in spoken language and synthesize human-like voices.

Translation: For many decades, translating text between different languages was a pain point for computers. But deep learning has helped create a revolution in services such as Google Translate. To be clear, AI still has a long way to go before it masters human language, but so far, advances are spectacular.

Facial recognition: Facial recognition is one of the most popular applications of artificial intelligence. It has many uses, including unlocking your phone, paying with your face, and detecting intruders in your home. But the increasing availability of facial-recognition technology has also given rise to concerns regarding privacy, security, and civil liberties.

Medicine: From detecting skin cancer and analyzing X-rays and MRI scans to providing personalized health tips and managing entire healthcare systems, artificial intelligence is becoming a key enabler in healthcare and medicine. AI won't replace your doctor, but it could help to bring about better health services, especially in underprivileged areas, where AI-powered health assistants can take some of the load off the shoulders of the few general practitioners who have to serve large populations.

In our quest to crack the code of AI and create thinking machines, we've learned a lot about the meaning of intelligence and reasoning. And thanks to advances in AI, we are accomplishing tasks alongside our computers that were once considered the exclusive domain of the human brain.

Some of the emerging fields where AI is making inroads include music and arts, where AI algorithms are manifesting their own unique kind of creativity. There's also hope AI will help fight climate change, care for the elderly, and eventually create a utopian future where humans don't need to work at all.

There's also fear that AI will cause mass unemployment, disrupt the economic balance, trigger another world war, and eventually drive humans into slavery.

We still don't know which direction AI will take. But as the science and technology of artificial intelligence continues to improve at a steady pace, our expectations and definition of AI will shift, and what we consider AI today might become the mundane functions of tomorrow's computers.

Artificial Intelligence - What it is and why it matters | SAS

The term artificial intelligence was coined in 1956, but AI has become more popular today thanks to increased data volumes, advanced algorithms, and improvements in computing power and storage.

Early AI research in the 1950s explored topics like problem solving and symbolic methods. In the 1960s, the US Department of Defense took interest in this type of work and began training computers to mimic basic human reasoning. For example, the Defense Advanced Research Projects Agency (DARPA) completed street mapping projects in the 1970s. And DARPA produced intelligent personal assistants in 2003, long before Siri, Alexa or Cortana were household names.

This early work paved the way for the automation and formal reasoning that we see in computers today, including decision support systems and smart search systems that can be designed to complement and augment human abilities.

While Hollywood movies and science fiction novels depict AI as human-like robots that take over the world, the current evolution of AI technologies isn't that scary or quite that smart. Instead, AI has evolved to provide many specific benefits in every industry. Keep reading for modern examples of artificial intelligence in health care, retail and more.

What is Artificial Intelligence (AI)?

Artificial intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing (NLP), speech recognition and machine vision.

AI programming focuses on three cognitive skills: learning, reasoning and self-correction.

Learning processes. This aspect of AI programming focuses on acquiring data and creating rules for how to turn the data into actionable information. The rules, which are called algorithms, provide computing devices with step-by-step instructions for how to complete a specific task.

Reasoning processes. This aspect of AI programming focuses on choosing the right algorithm to reach a desired outcome.

Self-correction processes. This aspect of AI programming is designed to continually fine-tune algorithms and ensure they provide the most accurate results possible.

Artificial neural networks and deep learning artificial intelligence technologies are quickly evolving, primarily because AI processes large amounts of data much faster and makes predictions more accurately than humanly possible. While the huge volume of data that's being created on a daily basis would bury a human researcher, AI applications that use machine learning can take that data and quickly turn it into actionable information. As of this writing, the primary disadvantage of using AI is that it is expensive to process the large amounts of data that AI programming requires.

AI can be categorized as either weak or strong. Weak AI, also known as narrow AI, is an AI system that is designed and trained to complete a specific task. Industrial robots and virtual personal assistants, such as Apple's Siri, use weak AI.

Strong AI, also known as artificial general intelligence (AGI), describes programming that can replicate human cognitive abilities. When presented with an unfamiliar task, a strong AI system can use fuzzy logic to apply knowledge from one domain to another and find a solution autonomously. In theory, a strong AI program should be able to pass both a Turing test and the Chinese room test.

Some industry experts believe that the term artificial intelligence is too closely linked to popular culture, and this has caused the general public to have improbable expectations about how AI will change the workplace and life in general. Some researchers and marketers hope the label augmented intelligence, which has a more neutral connotation, will help people understand that most implementations of AI will be weak and simply improve products and services. The concept of the singularity -- a world where superintelligence is applied to humans and human problems, including poverty, disease and mortality -- still falls within the realm of science fiction.

While AI tools present a range of new functionality for businesses, the use of artificial intelligence also raises ethical questions because, for better or worse, an AI system will reinforce what it has already learned.

This can be problematic because machine learning algorithms, which underpin many of the most advanced AI tools, are only as smart as the data they are given in training. Because a human being selects what data is used to train an AI program, the potential for machine learning bias is inherent and must be monitored closely.

Anyone looking to use machine learning as part of real-world, in-production systems needs to factor ethics into their AI training processes and strive to avoid bias. This is especially true when using AI algorithms that are inherently unexplainable in deep learning and generative adversarial network (GAN) applications.

Explainability is a potential stumbling block to using AI in industries that operate under strict regulatory compliance requirements. For example, financial institutions in the United States operate under regulations that require them to explain their credit-issuing decisions. When a decision to refuse credit is made by AI programming, however, it can be difficult to explain how the decision was arrived at because the AI tools used to make such decisions operate by teasing out subtle correlations between thousands of variables. When the decision-making process cannot be explained, the program may be referred to as black box AI.

As the hype around AI has accelerated, vendors have been scrambling to promote how their products and services use AI. Often what they refer to as AI is simply one component of AI, such as machine learning. AI requires a foundation of specialized hardware and software for writing and training machine learning algorithms. No one programming language is synonymous with AI, but a few, including Python and C, have set themselves apart.

Because hardware, software and staffing costs for AI can be expensive, many vendors are including AI components in their standard offerings or providing access to artificial intelligence as a service (AIaaS) platforms. AIaaS allows individuals and companies to experiment with AI for various business purposes and sample multiple platforms before making a commitment.

Arend Hintze, an assistant professor of integrative biology and computer science and engineering at Michigan State University, categorized AI into four types, ranging from the intelligent systems that exist today to sentient systems, which do not yet exist.

The terms AI and cognitive computing are sometimes used interchangeably, but, generally speaking, the label AI is used in reference to products and services that automate tasks, while the label cognitive computing is used in reference to products and services that augment human thought processes.

AI is incorporated into a variety of different types of technology.

Artificial intelligence has made its way into a wide variety of markets.

AI and machine learning are at the top of the buzzword list security vendors are using today to differentiate their offerings. Those terms also represent truly viable technologies. Artificial intelligence and machine learning in cybersecurity products are adding real value for the security teams looking for ways to identify attacks, malware and other threats.

Organizations today use machine learning in security information and event management (SIEM) software and related areas to detect anomalies and identify suspicious activities that indicate threats. By analyzing data and using logic to identify similarities to known malicious code, AI can provide alerts to new and emerging attacks much sooner than human employees and previous technology iterations.
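
As a simplified illustration of that anomaly-detection idea (not a description of any particular SIEM product), the sketch below fits scikit-learn's IsolationForest to a handful of made-up network-flow records and flags the one that doesn't fit the usual pattern.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical network-flow features per connection: bytes sent, bytes received,
# and duration in seconds. Most traffic is routine; one row looks very different.
flows = np.array([
    [500, 1200, 2], [520, 1100, 3], [480, 1300, 2], [510, 1250, 2],
    [490, 1150, 3], [505, 1280, 2],
    [90000, 50, 600],    # unusually large upload over a long-lived connection
])

# Learn what 'normal' looks like, then flag the rows that don't fit the pattern.
detector = IsolationForest(contamination=0.1, random_state=0).fit(flows)
print(detector.predict(flows))   # -1 marks the anomalous connection
```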

As a result, AI security technology both dramatically lowers the number of false positives and gives organizations more time to counteract real threats before damage is done. The maturing technology is playing a big role in helping organizations fight off cyberattacks.

Despite potential risks, there are currently few regulations governing the use of AI tools, and where laws do exist, they typically pertain to AI indirectly. For example, as previously mentioned, United States Fair Lending regulations require financial institutions to explain credit decisions to potential customers. This limits the extent to which lenders can use deep learning algorithms, which by their nature are opaque and lack explainability.

The European Union's GDPR puts strict limits on how enterprises can use consumer data, which impedes the training and functionality of many consumer-facing AI applications.

In 2016, the National Science and Technology Council issued a report examining the potential role governmental regulation might play in AI development, but it did not recommend specific legislation be considered. Since that time the issue has received little attention from lawmakers.

The Top 100 AI Startups Out There Now, and What They’re Working On – Singularity Hub

New drug therapies for a range of chronic diseases. Defenses against various cyber attacks. Technologies to make cities work smarter. Weather and wildfire forecasts that boost safety and reduce risk. And commercial efforts to monetize so-called deepfakes.

What do all these disparate efforts have in common? They're some of the solutions that the world's most promising artificial intelligence startups are pursuing.

Data research firm CB Insights released its much-anticipated fourth annual list of the top 100 AI startups earlier this month. The New York-based company has become one of the go-to sources for emerging technology trends, especially in the startup scene.

About 10 years ago, it developed its own algorithm to assess the health of private companies using publicly-available information and non-traditional signals (think social media sentiment, for example) thanks to more than $1 million in grants from the National Science Foundation.

It uses that algorithm-generated data from what it calls a company's Mosaic score -- pulling together information on market trends, money, and momentum -- along with other details ranging from patent activity to the latest news analysis to identify the best of the best.

"Our final list of companies is a mix of startups at various stages of R&D and product commercialization," said Deepashri Varadharajan, a lead analyst at CB Insights, during a recent presentation on the most prominent trends among the 2020 AI 100 startups.

About 10 companies on the list are among the world's most valuable AI startups. For instance, there's San Francisco-based Faire, which has raised at least $266 million since it was founded just three years ago. The company offers a wholesale marketplace that uses machine learning to match local retailers with goods that are predicted to sell well in their specific location.

Another startup valued at more than $1 billion, referred to as a unicorn in venture capital speak, is Butterfly Network, a company on the East Coast that has figured out a way to turn a smartphone into an ultrasound machine. Backed by $350 million in private investments, Butterfly Network uses AI to power the platform's diagnostics. A more modestly funded San Francisco startup called Eko is doing something similar for stethoscopes.

In fact, there are more than a dozen AI healthcare startups on this year's AI 100 list, representing the most companies of any industry on the list. In total, investors poured about $4 billion into AI healthcare startups last year, according to CB Insights, out of a record $26.6 billion raised by all private AI companies in 2019. Since 2014, more than 4,300 AI startups in 80 countries have raised about $83 billion.

One of the most intensive areas remains drug discovery, where companies unleash algorithms to screen potential drug candidates at an unprecedented speed and breadth that was impossible just a few years ago. It has led to the discovery of a new antibiotic to fight superbugs. There's even a chance AI could help fight the coronavirus pandemic.

There are several AI drug discovery startups among the AI 100: San Francisco-based Atomwise claims its deep convolutional neural network, AtomNet, screens more than 100 million compounds each day. Cyclica is an AI drug discovery company in Toronto that just announced it would apply its platform to identify and develop novel cannabinoid-inspired drugs for neuropsychiatric conditions such as bipolar disorder and anxiety.

And then there's OWKIN out of New York City, a startup that uses a type of machine learning called federated learning. Backed by Google, the company's AI platform helps train algorithms without sharing the patient data required to provide the sort of valuable insights researchers need for designing new drugs or even selecting the right populations for clinical trials.

Privacy and data security are the focus of a number of AI cybersecurity startups, as hackers attempt to leverage artificial intelligence to launch sophisticated attacks while also trying to fool the AI-powered systems rapidly coming online.

"I think this is an interesting field because it's a bit of a cat and mouse game," noted Varadharajan. "As your cyber defenses get smarter, your cyber attacks get even smarter, and so it's a constant game of who's going to match the other in terms of tech capabilities."

Few AI cybersecurity startups match Silicon Valley-based SentinelOne in terms of private capital. The company has raised more than $400 million, with a valuation of $1.1 billion following a $200 million Series E earlier this year. The company's platform automates what's called endpoint security, referring to laptops, phones, and other devices at the end of a centralized network.

Fellow AI 100 cybersecurity companies include Blue Hexagon, which protects the edge of the network against malware, and Abnormal Security, which stops targeted email attacks, both out of San Francisco. Just down the coast in Los Angeles is Obsidian Security, a startup offering cybersecurity for cloud services.

Deepfake videos and other types of AI-manipulated media, where faces or voices are synthesized in order to fool viewers or listeners, have been a different type of ongoing cybersecurity risk. However, some firms are swapping malicious intent for benign marketing and entertainment purposes.

Now anyone can be a supermodel thanks to Superpersonal, a London-based AI startup that has figured out a way to seamlessly swap a user's face onto a fashionista modeling the latest threads on the catwalk. The most obvious use case is for shoppers to see how they will look in a particular outfit before taking the plunge on a plunging neckline.

Another British company called Synthesia helps users create videos where a talking head will deliver a customized speech or even talk in a different language. The startup's claim to fame was releasing a campaign video for the NGO Malaria Must Die showing soccer star David Beckham speak in nine different languages.

There's also a Seattle-based company, Wellsaid Labs, which uses AI to produce voice-over narration where users can choose from a library of digital voices with human pitch, emphasis, and intonation. Because every narrator sounds just a little bit smarter with a British accent.

Speaking of smarter: A handful of AI 100 startups are helping create the smart city of the future, where a digital web of sensors, devices, and cloud-based analytics ensure that nobody is ever stuck in traffic again or caught without an umbrella at the wrong time. At least that's the dream.

A couple of them are directly connected to Google subsidiary Sidewalk Labs, which focuses on tech solutions to improve urban design. A company called Replica was spun out just last year. It's sort of SimCity for urban planning. The San Francisco startup uses location data from mobile phones to understand how people behave and travel throughout a typical day in the city. Those insights can then help city governments, for example, make better decisions about infrastructure development.

Denver-area startup AMP Robotics gets into the nitty gritty details of recycling by training robots on how to recycle trash, since humans have largely failed to do the job. The U.S. Environmental Protection Agency estimates that only about 30 percent of waste is recycled.

Some people might complain that weather forecasters don't even do that well when trying to predict the weather. An Israeli AI startup, ClimaCell, claims it can forecast rain block by block. While the company taps the usual satellite and ground-based sources to create weather models, it has developed algorithms to analyze how precipitation and other conditions affect signals in cellular networks. By analyzing changes in microwave signals between cellular towers, the platform can predict the type and intensity of the precipitation down to street level.

And those are just some of the highlights of what the world's most promising AI startups are doing.

"You have companies optimizing mining operations, warehouse logistics, insurance, workflows, and even working on bringing AI solutions to designing printed circuit boards," Varadharajan said. "So a lot of creative ways in which companies are applying AI to solve different issues in different industries."

COVID-19: AI can help – but the right human input is key – World Economic Forum

Artificial intelligence (AI) has the potential to help us tackle the pressing issues raised by the COVID-19 pandemic. It is not the technology itself, though, that will make the difference but rather the knowledge and creativity of the humans who use it.

Indeed, the COVID-19 crisis will likely expose some of the key shortfalls of AI. Machine learning, the current form of AI, works by identifying patterns in historical training data. When used wisely, AI has the potential to exceed humans not only through speed but also by detecting patterns in that training data that humans have overlooked.

However, AI systems need a lot of data, with relevant examples in that data, in order to find these patterns. Machine learning also implicitly assumes that conditions today are the same as the conditions represented in the training data. In other words, AI systems implicitly assume that what has worked in the past will still work in the future.

A new strain of Coronavirus, COVID 19, is spreading around the world, causing deaths and major disruption to the global economy.

Responding to this crisis requires global cooperation among governments, international organizations and the business community, which is at the centre of the World Economic Forums mission as the International Organization for Public-Private Cooperation.

The Forum has created the COVID Action Platform, a global platform to convene the business community for collective action, protect peoples livelihoods and facilitate business continuity, and mobilize support for the COVID-19 response. The platform is created with the support of the World Health Organization and is open to all businesses and industry groups, as well as other stakeholders, aiming to integrate and inform joint action.

As an organization, the Forum has a track record of supporting efforts to contain epidemics. In 2017, at our Annual Meeting, the Coalition for Epidemic Preparedness Innovations (CEPI) was launched, bringing together experts from government, business, health, academia and civil society to accelerate the development of vaccines. CEPI is currently supporting the race to develop a vaccine against this strain of the coronavirus.

What does this have to do with the current crisis? We are facing unprecedented times. Our situation is jarringly different from that of just a few weeks ago. Some of what we need to try today will have never been tried before. Similarly, what has worked in the past may very well not work today.

Humans are not that different from AI in these limitations, which partly explains why our current situation is so daunting. Without previous examples to draw on, we cannot know for sure the best course of action. Our traditional assumptions about cause and effect may no longer hold true.

Humans have an advantage over AI, though. We are able to learn lessons from one setting and apply them to novel situations, drawing on our abstract knowledge to make best guesses on what might work or what might happen. AI systems, in contrast, have to learn from scratch whenever the setting or task changes even slightly.

The COVID-19 crisis, therefore, will highlight something that has always been true about AI: it is a tool, and the value of its use in any situation is determined by the humans who design it and use it. In the current crisis, human action and innovation will be particularly critical in leveraging the power of what AI can do.

One approach to the novel situation problem is to gather new training data under current conditions. For both human decision-makers and AI systems alike, each new piece of information about our current situation is particularly valuable in informing our decisions going forward. The more effective we are at sharing information, the more quickly our situation is no longer novel and we can begin to see a path forward.

Projects such as the COVID-19 Open Research Dataset, which provides the text of over 24,000 research papers, the COVID-net open-access neural network, which is working to collaboratively develop a system to identify COVID-19 in lung scans, and an initiative asking individuals to donate their anonymized data, represent important efforts by humans to pool data so that AI systems can then sift through this information to identify patterns.

Image: Global spread of COVID-19 (World Economic Forum)

A second approach is to use human knowledge and creativity to undertake the abstraction that the AI systems cannot do. Humans can discern between places where algorithms are likely to fail and situations in which historical training data is likely still relevant to address critical and timely issues, at least until more current data becomes available.

Such systems might include algorithms that predict the spread of the virus using data from previous pandemics, or tools that help job seekers identify opportunities that match their skillsets. Even though COVID-19 is unique and many of the fundamental rules of the labour market are currently not operating as usual, it is still possible to identify valuable, although perhaps carefully circumscribed, avenues for applying AI tools.

Efforts to leverage AI tools in the time of COVID-19 will be most effective when they involve the input and collaboration of humans in several different roles. The data scientists who code AI systems play an important role because they know what AI can do and, just as importantly, what it can't. We also need domain experts who understand the nature of the problem and can identify where past training data might still be relevant today. Finally, we need out-of-the-box thinkers who push us to move beyond our assumptions and can see surprising connections.

Toronto-based startup BlueDot is an example of such a collaboration. In December it was one of the first to identify the emergence of a new outbreak in China. Its system relies on the vision of its founder, who believed that predicting outbreaks was possible, and combines the power of several different AI tools with the knowledge of epidemiologists who identified where and how to look for evidence of emerging diseases. These epidemiologists also verify the results at the end.

Reinventing the rules is different from breaking the rules, though. As we work to address our current needs, we must also keep our eye on the long-term consequences. All of the humans involved in developing AI systems need to maintain ethical standards and consider possible unintended consequences of the technologies they create. While our current crisis is very pressing, we cannot sacrifice our fundamental principles to address it.

The key takeaway is this: Despite the hype, there are many ways in which humans still surpass the capabilities of AI. The stunning advances that AI has made in recent years are not an inherent quality of the technology, but rather a testament to the humans who have been incredibly creative in how they use a tool that is mathematically and computationally complex and yet, at its foundation, still quite simple and limited.

As we seek to move rapidly to address our current problems, therefore, we need to continue to draw on this human creativity from all corners, not just the technology experts but also those with knowledge of the settings, as well as those who challenge our assumptions and see new connections. It is this human collaboration that will enable AI to be the powerful tool for good that it has the potential to be.

License and Republishing

World Economic Forum articles may be republished in accordance with our Terms of Use.

Written by

Matissa Hollister, Assistant Professor of Organizational Behaviour, McGill University

The views expressed in this article are those of the author alone and not the World Economic Forum.

Read this article:

COVID-19: AI can help - but the right human input is key - World Economic Forum

Posted in Ai

Microsoft teams up with leading universities to tackle coronavirus pandemic using AI – TechRepublic

The newly-formed C3.ai Digital Transformation Institute has an open call for proposals to mitigate the COVID-19 epidemic using artificial intelligence and machine learning.

With the coronavirus impacting most of the world, the medical community is hard at work trying to come up with some type of magic bullet that will stop the pandemic from propagating. Can artificial intelligence (AI) and machine learning (ML) help nurture a solution? That's what Microsoft and a host of top universities are hoping.

In a blog post published last week, Microsoft detailed the creation of the C3.ai Digital Transformation Institute (C3.ai DTI), a consortium of scientists, researchers, innovators, and executives from the academic and corporate worlds whose mission it is to push AI to achieve social and economic benefits. As such, C3.ai DTI will sponsor and fund scientists and researchers to spur the digital transformation of business, government, and society.

Created by Microsoft, AI software provider C3.ai, and several leading universities, C3.ai DTI already has the first task on its agenda--to harness the power of AI to combat the coronavirus.

SEE: Coronavirus: Critical IT policies and tools every business needs (TechRepublic Premium)

Known as "AI Techniques to Mitigate Pandemic," C3.ai DTI's first call for research proposals is asking scholars, developers, and researchers to "embrace the challenge of abating COVID-19 and advance the knowledge, science, and technologies for mitigating future pandemics using AI." Researchers are free to develop their own topics in response to this subject, but the consortium outlined 10 different areas open for consideration:

"We are collecting a massive amount of data about MERS, SARS, and now COVID-19," Condoleezza Rice, former US Secretary of State, said in the blog post. "We have a unique opportunity before us to apply the new sciences of AI and digital transformation to learn from these data how we can better manage these phenomena and avert the worst outcomes for humanity."

This first call is currently open with a deadline of May 1, 2020. Interested participants can check the C3.ai DTI website to learn about the process and find out how to submit their proposals. Selected proposals will be announced by June 1, 2020.

The group will fund as much as $5.8 million in awards for this first call, with cash awards ranging from $100,000 to $500,000 each. Recipients will also receive cloud computing, supercomputing, data access, and AI software resources and technical support provided by Microsoft and C3.ai. Specifically, those with successful proposals will get unlimited use of the C3 AI Suite, access to the Microsoft Azure cloud platform, and access to the Blue Waters supercomputer at the National Center for Supercomputing Applications at the University of Illinois Urbana-Champaign (UIUC).

To fund the institute, C3.ai will provide $57,250,000 over the first five years of operation. C3.ai and Microsoft will contribute an additional $310 million, which includes use of the C3 AI Suite and Microsoft Azure. The universities involved in the consortium include the UIUC; the University of California, Berkeley; Princeton University; the University of Chicago; the Massachusetts Institute of Technology; and Carnegie Mellon University.

Beyond funding successful research proposals, Microsoft said that C3.ai DTI will generate new ideas for the use of AI and ML through ongoing research, visiting professors and research scholars, and faculty and scholars in residence, many of whom will come from the member universities.

More specifically, the group will focus its research on AI, ML, Internet of Things, Big Data Analytics, human factors, organizational behavior, ethics, and public policy. This research will examine new business models, develop ways for creating change within organizations, analyze methods to protect privacy, and ramp up the conversations around the ethics and public policy of AI.

"In these difficult times, we need--now more than ever--to join our forces with scholars, innovators, and industry experts to propose solutions to complex problems," Gwenalle Avice-Huet, Executive Vice President of ENGIE, said. "I am convinced that digital, data science, and AI are a key answer. The C3.ai Digital Transformation Institute is a perfect example of what we can do together to make the world better."


Here is the original post:

Microsoft teams up with leading universities to tackle coronavirus pandemic using AI - TechRepublic

Posted in Ai

Enlisting AI in our war on coronavirus: Potential and pitfalls | TheHill – The Hill

Given the outsized hold Artificial Intelligence (AI) technology has acquired on the public imagination of late, it comes as no surprise that many are wondering what AI can do for the public health crisis wrought by the COVID-19 coronavirus.

A casual search of AI and COVID-19 already returns a plethora of news stories, many of them speculative. While AI technology is not ready to help with the magical discovery of a new vaccine, there are important ways it can assist in this fight.

Controlling epidemics is, in large part, based on laborious contact tracing and using that information to predict the spread. We live in a time in which we constantly leave digital footprints through our daily life and interactions. These massive troves of data can be analyzed with AI technologies for detection, contact tracing and to find infection clusters, spread patterns and identify high-risk patients.

There is some evidence that AI techniques analyzing news feeds and social media data were not too far behind humans in originally detecting the COVID-19 outbreak in Wuhan. China seems not only to have used existing digital traces but also enforced additional ones; for example, citizens in Nanjing are required to register their presence in subway trains and many shops by scanning QR codes with their cellphones. Singapore, lauded for its effective containment of the virus without widespread lockdowns, has used public cameras to trace the interaction patterns of the infected, and even introduced a crowd-sourced app for voluntary contact tracing. In the U.S., research efforts are underway to mine body temperature and heart-rate data from wearables for early detection of COVID-19 infection.

It is widely feared that COVID-19 cases, at their peak, will overwhelm medical infrastructure in many cities. Evidence from Hubei, China, and Lombardy, Italy, do indeed support this fear. One way to alleviate this situation is to adopt novel methods of remotely providing medical help. The basic infrastructure for telemedicine has existed for a long time but has been a hard-sell from consumer, provider and regulatory points of view until now. Already, the U.S. has waived regulations to allow doctors to practice across state boundaries; the Department of Health and Human Services (HHS) has also announced that it will not levy penalties on medical providers using certain virtual communication tools, such as Skype and FaceTime, to connect with patients.

AI technologies certainly can help as a force-multiplier here, as front-line medical decision support tools for patient-provider matching, triage and even in faster diagnosis. For example, the Chinese company Alibaba claims rapid diagnostic image analytics for chest CT scans; China also has leveraged robots in disinfection of public spaces. Remote tele-presence robots increasingly could be leveraged to bring virtual movement and solace to people in forced medical quarantines.

AI technologies already have been enablers of, and defenders against, fake news. In the context of this pandemic, our incomplete knowledge coupled with angst has led to an infodemic of unreliable/fake information about coping with the outbreak, often spread by well-meaning (if gullible) people. AI technologies certainly can be of help here, both in flagging stories of questionable lineage and pointing to more trusted information sources.

AI also can be used to distill COVID-19-related information. A prominent example here is a White House Office of Science and Technology Policy-supported effort to use natural language-processing technologies to mine the stream of research papers relevant to the COVID-19 virus, with the aim of helping scientists quickly gain insight and spot trends within the research. There is some hope that such distillation can help in vaccine discovery efforts, too.

Suppression by social distancing has emerged as the most promising way to stem the tide of infection. It is clear, however, that social distancing, like sticking to a healthy diet, runs very much counter to our natural impulses. Short of draconian state enforcement, what can we do to increase the chances that people follow the best practices? One way AI can help here is via micro-targeted behavioral nudges. Like it or not, AI technologies already harvest vast troves of user profiles via our digital footprints and weaponize those for targeted ads. The same technologies can be readily rejiggered for subliminal micro-targeted social distancing messages that could include distracting us from cabin fever. There already is some evidence that mild nudging can even reduce the sharing of misinformation.

Lockdowns and social distancing measures are affecting the education of millions of schoolchildren. Tutoring services assisted by AI technologies can help significantly when students are stuck at home. China reportedly has relied on the help of online AI-based tutoring companies such as Squirrel AI to engage some of its millions of schoolchildren in lockdown.

And while self-driving cars remain a distant dream, delivering essential goods to people via deserted streets certainly could be within reach. Depending on how long shelter-at-home continues, we might rely increasingly on such technologies to transport critical personnel and goods.

Some of these potential uses of AI are controversial, as they infringe on privacy and civil liberties or reflect the very type of applications that the AI ethics community has resisted. Do we really want our personal AI assistants to start nudging us subliminally? Should we support increased cellphone tracking for infection control? It will be interesting to see to what extent society is willing to adopt them.

Indeed, our readiness to try almost anything to fight this unprecedented viral war is opening an inadvertent window into how we might handle the worries surrounding an AI-enabled future. Ideas such as universal basic income (UBI) in the presence of widespread technological unemployment, and concerns about diminished privacy thanks to widespread AI-based surveillance, are all coming to the fore.

China has mobilized state resources to feed its quarantined population and used extensive cellphone tracking to analyze the spread of the virus. Israel is reportedly using cellphone tracking to ensure quarantines, as did Taiwan. The U.S. is considering UBI-like ideas (e.g., providing thousand-dollar checks to many adults effectively unemployed during the pandemic) and is reportedly mulling cellphone-based tracking to get people to follow social distancing guidelines.

Once such practices are adopted, they will no longer just be theoretical constructs. Some or all of them will become part of our society beyond this war on the virus, just as many Great Depression-era programs became part of our social fabric. The possibility that our choices in this time of crisis can change our society in crucial ways is raising alarms and calls for circumspection. Yet, to what extent civil society is likely to pause for circumspection at the height of this execution imperative remains to be seen.

Subbarao Kambhampati, PhD, is a professor of computer science at Arizona State University and chief AI officer for AI Foundation, which focuses on the responsible development of AI technologies. He served as president and is now past-president of the Association for the Advancement of Artificial Intelligence and was a founding board member of Partnership on AI. He can be followed on Twitter@rao2z.

Read more from the original source:

Enlisting AI in our war on coronavirus: Potential and pitfalls | TheHill - The Hill

Posted in Ai

Google is using AI to design chips that will accelerate AI – MIT Technology Review

A new reinforcement-learning algorithm has learned to optimize the placement of components on a computer chip to make it more efficient and less power-hungry.

3D Tetris: Chip placement, also known as chip floor planning, is a complex three-dimensional design problem. It requires the careful configuration of hundreds, sometimes thousands, of components across multiple layers in a constrained area. Traditionally, engineers will manually design configurations that minimize the amount of wire used between components as a proxy for efficiency. They then use electronic design automation software to simulate and verify their performance, which can take up to 30 hours for a single floor plan.

Time lag: Because of the time investment put into each chip design, chips are traditionally supposed to last between two and five years. But as machine-learning algorithms have rapidly advanced, the need for new chip architectures has also accelerated. In recent years, several algorithms for optimizing chip floor planning have sought to speed up the design process, but they've been limited in their ability to optimize across multiple goals, including the chip's power draw, computational performance, and area.

Intelligent design: In response to these challenges, Google researchers Anna Goldie and Azalia Mirhoseini took a new approach: reinforcement learning. Reinforcement-learning algorithms use positive and negative feedback to learn complicated tasks. So the researchers designed what's known as a reward function to punish and reward the algorithm according to the performance of its designs. The algorithm then produced tens to hundreds of thousands of new designs, each within a fraction of a second, and evaluated them using the reward function. Over time, it converged on a final strategy for placing chip components in an optimal way.
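Google's actual system is far more sophisticated, but the core idea of scoring candidate placements with a reward function can be sketched with a toy example. The snippet below uses simple random search rather than reinforcement learning, a made-up four-component netlist, and a crude half-perimeter wirelength proxy as the negative reward; every detail is an assumption for illustration, not the published method.

```python
import random

# Toy netlist: each net lists the components it connects (invented example).
NETS = [("cpu", "cache"), ("cpu", "io"), ("cache", "dram"), ("io", "dram")]
COMPONENTS = ["cpu", "cache", "io", "dram"]
GRID = 8  # place components on an 8x8 grid

def wirelength(placement):
    """Half-perimeter wirelength proxy: smaller means less wiring."""
    total = 0
    for net in NETS:
        xs = [placement[c][0] for c in net]
        ys = [placement[c][1] for c in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def reward(placement):
    # The learning signal: reward placements with shorter estimated wiring.
    return -wirelength(placement)

def random_placement():
    cells = random.sample([(x, y) for x in range(GRID) for y in range(GRID)],
                          len(COMPONENTS))
    return dict(zip(COMPONENTS, cells))

# Stand-in for the learning loop: propose many placements, keep the best-rewarded.
best = max((random_placement() for _ in range(10_000)), key=reward)
print("best placement:", best, "wirelength:", wirelength(best))
```

A reinforcement-learning agent would replace the random proposals with a policy that improves as the reward accumulates, which is what lets it converge on a placement strategy rather than just stumbling on good layouts.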

Validation: After checking the designs with the electronic design automation software, the researchers found that many of the algorithm's floor plans performed better than those designed by human engineers. It also taught its human counterparts some new tricks, the researchers said.

Production line: Throughout the field's history, progress in AI has been tightly interlinked with progress in chip design. The hope is this algorithm will speed up the chip design process and lead to a new generation of improved architectures, in turn accelerating AI advancement.


View post:

Google is using AI to design chips that will accelerate AI - MIT Technology Review

Posted in Ai

Researchers find AI is bad at predicting GPA, grit, eviction, job training, layoffs, and material hardship – VentureBeat

A paper coauthored by over 112 researchers across 160 data and social science teams found that AI and statistical models, when used to predict six life outcomes for children, parents, and households, weren't very accurate even when trained on 13,000 data points from over 4,000 families. They assert that the work is a cautionary tale on the use of predictive modeling, especially in the criminal justice system and social support programs.

"Here's a setting where we have hundreds of participants and a rich data set, and even the best AI results are still not accurate," said study co-lead author Matt Salganik, a professor of sociology at Princeton and interim director of the Center for Information Technology Policy at the Woodrow Wilson School of Public and International Affairs. "These results show us that machine learning isn't magic; there are clearly other factors at play when it comes to predicting the life course."

The study, which was published this week in the journal Proceedings of the National Academy of Sciences, is the fruit of the Fragile Families Challenge, a multi-year collaboration that sought to recruit researchers to complete a predictive task by predicting the same outcomes using the same data. Over 457 groups applied, of which 160 were selected to participate, and their predictions were evaluated with an error metric that assessed their ability to predict held-out data (i.e., data held by the organizer and not available to the participants).

The Challenge was an outgrowth of the Fragile Families Study (formerly the Fragile Families and Child Wellbeing Study) based at Princeton, Columbia University, and the University of Michigan, which has been studying a cohort of about 5,000 children born in 20 large American cities between 1998 and 2000. It's designed to oversample births to unmarried couples in those cities, and to address four questions of interest to researchers and policymakers:

"When we began, I really didn't know what a mass collaboration was, but I knew it would be a good idea to introduce our data to a new group of researchers: data scientists," said Sara McLanahan, the William S. Tod Professor of Sociology and Public Affairs at Princeton. "The results were eye-opening."

The Fragile Families Study data set consists of modules, each of which is made up of roughly 10 sections, where each section includes questions about a topic asked of the children's parents, caregivers, teachers, and the children themselves. For example, a mother who recently gave birth might be asked about relationships with extended kin, government programs, and marriage attitudes, while a 9-year-old child might be asked about parental supervision, sibling relationships, and school. In addition to the surveys, the corpus contains the results of in-home assessments, including psychometric testing, biometric measurements, and observations of neighborhoods and homes.

The goal of the Challenge was to predict the social outcomes of children aged 15 years, which encompasses 1,617 variables. From the variables, six were selected to be the focus:

Contributing researchers were provided anonymized background data from 4,242 families and 12,942 variables about each family, as well as training data incorporating the six outcomes for half of the families. Once the Challenge was completed, all 160 submissions were scored using the holdout data.

In the end, even the best of the over 3,000 models submitted, which often used complex AI methods and had access to thousands of predictor variables, weren't spot on. In fact, they were only marginally better than linear regression and logistic regression, which don't rely on any form of machine learning.

"Either luck plays a major role in people's lives, or our theories as social scientists are missing some important variable," added McLanahan. "It's too early at this point to know for sure."

Measured by the coefficient of determination, which captures how well the best models' predictions matched the ground truth data, material hardship (i.e., whether 15-year-old children's parents suffered financial issues) scored 0.23. GPA predictions scored 0.19, while grit, eviction, job training, and layoffs ranged from 0.06 down to 0.03.

"The results raise questions about the relative performance of complex machine-learning models compared with simple benchmark models. In the Challenge, the simple benchmark model with only a few predictors was only slightly worse than the most accurate submission, and it actually outperformed many of the submissions," concluded the study's coauthors. "Therefore, before using complex predictive models, we recommend that policymakers determine whether the achievable level of predictive accuracy is appropriate for the setting where the predictions will be used, whether complex models are more accurate than simple models or domain experts in their setting, and whether possible improvement in predictive performance is worth the additional costs to create, test, and understand the more complex model."
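The comparison the authors recommend is straightforward to set up: fit a simple benchmark alongside a more complex model and judge both on held-out data. The sketch below uses synthetic data and scikit-learn; it is not the Fragile Families data, and the numbers it prints mean nothing beyond the illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic "life outcome": mostly noise, weakly related to a few predictors.
X = rng.normal(size=(4000, 50))
y = 0.3 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(scale=1.0, size=4000)

X_train, X_hold, y_train, y_hold = train_test_split(
    X, y, test_size=0.5, random_state=0)

simple = LinearRegression().fit(X_train, y_train)
complex_model = RandomForestRegressor(
    n_estimators=200, random_state=0).fit(X_train, y_train)

print("linear regression R^2:", round(r2_score(y_hold, simple.predict(X_hold)), 3))
print("random forest R^2:    ", round(r2_score(y_hold, complex_model.predict(X_hold)), 3))
# When the signal is weak and noisy, the complex model often fails to beat the baseline.
```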

The research team is currently applying for grants to continue studies in this area, and they've also published 12 of the teams' results in a special issue of Socius, a new open-access journal from the American Sociological Association. In order to support additional research, all the submissions to the Challenge, including the code, predictions, and narrative explanations, will be made publicly available.

The Challenge isn't the first to expose the predictive shortcomings of AI and machine learning models. The Partnership on AI, a nonprofit coalition committed to the responsible use of AI, concluded in its first-ever report last year that algorithms are unfit to automate the pre-trial bail process or label some people as high-risk and detain them. The use of algorithms in decision making by judges has been known to produce race-based unfair results that are more likely to label African-American inmates as at risk of recidivism.

It's well understood that AI has a bias problem. For instance, word embedding, a common algorithmic training technique that involves linking words to vectors, unavoidably picks up, and at worst amplifies, prejudices implicit in source text and dialogue. A recent study by the National Institute of Standards and Technology (NIST) found that many facial recognition systems misidentify people of color more often than Caucasian faces. And Amazon's internal recruitment tool, which was trained on resumes submitted over a 10-year period, was reportedly scrapped because it showed bias against women.
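A common way researchers quantify this kind of embedding bias is to compare how close an occupation vector sits to gendered word vectors. The sketch below uses tiny made-up vectors purely to show the measurement; real studies use pretrained embeddings with hundreds of dimensions, and the specific words and values here are assumptions for illustration.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 3-dimensional "embeddings" (illustrative values, not from a real model).
vec = {
    "he":       np.array([ 1.0, 0.1, 0.0]),
    "she":      np.array([-1.0, 0.1, 0.0]),
    "nurse":    np.array([-0.7, 0.6, 0.2]),
    "engineer": np.array([ 0.8, 0.5, 0.1]),
}

for word in ("nurse", "engineer"):
    bias = cosine(vec[word], vec["he"]) - cosine(vec[word], vec["she"])
    print(f"{word:9s} gender-direction bias: {bias:+.2f}")

# A nonzero gap means the occupation leans toward one gendered word --
# the kind of association that bias-mitigation tools try to detect and reduce.
```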

A number of solutions have been proposed, from algorithmic tools to services that detect bias by crowdsourcing large training data sets.

In June 2019, working with experts in AI fairness, Microsoft revised and expanded the data sets it uses to train Face API, a Microsoft Azure API that provides algorithms for detecting, recognizing, and analyzing human faces in images. Last May, Facebook announced Fairness Flow, which automatically sends a warning if an algorithm is making an unfair judgment about a person based on their race, gender, or age. Google recently released the What-If Tool, a bias-detecting feature of the TensorBoard web dashboard for its TensorFlow machine learning framework. Not to be outdone, IBM last fall released AI Fairness 360, a cloud-based, fully automated suite that continually provides [insights] into how AI systems are making their decisions and recommends adjustments, such as algorithmic tweaks or counterbalancing data, that might lessen the impact of prejudice.

The rest is here:

Researchers find AI is bad at predicting GPA, grit, eviction, job training, layoffs, and material hardship - VentureBeat

Posted in Ai

Google Researchers Create AI-ception with an AI Chip That Speeds Up AI – Interesting Engineering

Reinforcement learning algorithms may be the next best thing since sliced bread for engineers looking to improve chip placement.

Researchers from Google have created a new algorithm that has learned how to optimize the placement of the components in a computer chip, so as to make it more efficient and less power-hungry.


Typically, engineers can spend up to 30 hours configuring a single floor plan of chip placement, or chip floor planning. This complicated 3D design problem requires the configuration of hundreds, or even thousands, of components across a number of layers in a constrained area. Engineers will manually design configurations to minimize the number of wires used between components as a proxy for efficiency.

Because this is time-consuming, these chips are designed to only last between two and five years. However, as machine-learning algorithms keep improving year upon year, a need for new chip architectures has also arisen.

Facing these challenges, Google researchers Anna Goldie and Azalia Mirhoseini have looked into reinforcement learning. These types of algorithms use positive and negative feedback in order to learn new and complicated tasks. The algorithm is either "rewarded" or "punished" depending on how well it performs a task. It then creates tens to hundreds of thousands of new designs. Ultimately, it arrives at an optimal strategy for placing these chip components.

After their tests, the researchers checked their designs with the electronic design automation software and discovered that their method's floor planning was much more effective than the ones human engineers designed. Moreover, the system was able to teach its human workers a new trick or two.

Progress in AI has been largely interlinked with progress in computer chip design. The researchers' hope is that their new algorithm will assist in speeding up the chip design process and pave the way for new and improved architectures, which would ultimately accelerate AI.

The rest is here:

Google Researchers Create AI-ception with an AI Chip That Speeds Up AI - Interesting Engineering

Posted in Ai

A.I. Versus the Coronavirus – The New York Times

Advanced computers have defeated chess masters and learned how to pick through mountains of data to recognize faces and voices. Now, a billionaire developer of software and artificial intelligence is teaming up with top universities and companies to see if A.I. can help curb the current and future pandemics.

Thomas M. Siebel, founder and chief executive of C3.ai, an artificial intelligence company in Redwood City, Calif., said the public-private consortium would spend $367 million in its initial five years, aiming its first awards at finding ways to slow the new coronavirus that is sweeping the globe.

"I cannot imagine a more important use of A.I.," Mr. Siebel said in an interview.

Known as the C3.ai Digital Transformation Institute, the new research consortium includes commitments from Princeton, Carnegie Mellon, the Massachusetts Institute of Technology, the University of California, the University of Illinois and the University of Chicago, as well as C3.ai and Microsoft. It seeks to put top scientists onto gargantuan social problems with the help of A.I. its first challenge being the pandemic.

The new institute will seek new ways of slowing the pathogen's spread, speeding the development of medical treatments, designing and repurposing drugs, planning clinical trials, predicting the disease's evolution, judging the value of interventions, improving public health strategies and finding better ways in the future to fight infectious outbreaks.

Condoleezza Rice, a former U.S. secretary of state who serves on the C3.ai board and was recently named the next director of the Hoover Institution, a conservative think tank on the Stanford campus, called the initiative a unique opportunity to "better manage these phenomena and avert the worst outcomes for humanity."

The new institute plans to award up to 26 grants annually, each featuring up to $500,000 in research funds in addition to computing resources. It requires the principal investigators to be located at the consortiums universities but allows partners and team members at other institutions. It wants coronavirus proposals to be submitted by May and plans to award its first grants in June. The research findings are to be made public.

The institute's co-directors are S. Shankar Sastry of the University of California, Berkeley, and Rayadurgam Srikant of the University of Illinois, Urbana-Champaign. The computing power is to come from C3.ai and Microsoft, as well as the Lawrence Berkeley National Laboratory at the University of California and the National Center for Supercomputing Applications at the University of Illinois. The schools run some of the world's most advanced supercomputers.

Successful A.I. can be extremely hard to deliver, especially in thorny real-world problems such as self-driving cars. When asked if the institute was less a plan for practical results than a feel-good exercise, Mr. Siebel replied, "The probability of something good not coming out of this is zero."

In recent decades, many rich Americans have sought to reinvent themselves as patrons of social progress through science research, in some cases outdoing what the federal government can achieve because its goals are often unadventurous and its budgets unpredictable.

Forbes puts Mr. Siebel's current net worth at $3.6 billion. His First Virtual Group is a diversified holding company that includes philanthropic ventures.

Born in 1952, Mr. Siebel studied history and computer science at the University of Illinois and was an executive at Oracle before founding Siebel Systems in 1993. It pioneered customer service software and merged with Oracle in 2006. He founded what came to be named C3.ai in 2009.

The first part of the company's name, Mr. Siebel said in an email, stands for the convergence of three digital trends: big data, cloud computing and the internet of things, with A.I. amplifying their power. Last year, he laid out his thesis in a book, Digital Transformation: Survive and Thrive in an Era of Mass Extinction. C3.ai works with clients on projects like ferreting out digital fraud and building smart cities.

In an interview, Eric Horvitz, the chief scientist of Microsoft and a medical doctor who serves on the spinoff institutes board, likened the push for coronavirus solutions to a compressed moon shot.

The power of the approach, he said, comes from bringing together key players and institutions. "We forget who is where and ask what we can do as a team," Dr. Horvitz said.

Seeing artificial intelligence as a good thing, perhaps a lifesaver, is a sharp reversal from how it often gets held in dread. Critics have assailed A.I. as dangerously powerful, even threatening the enslavement of humanity to robots with superhuman powers.

"In no way am I suggesting that A.I. is all sweetness and light," Mr. Siebel said. But the new institute, he added, is a place where it can be a force for good.

Visit link:

A.I. Versus the Coronavirus - The New York Times

Posted in Ai

behold.ai and Wellbeing Software collaborate on national solution for rapid COVID-19 diagnosis using AI analysis of chest X-rays – GlobeNewswire

behold.ai and Wellbeing Software collaborate on national solution for rapid COVID-19 diagnosis using AI analysis of chest X-rays

Companies working to fast-track programme for UK-wide rollout

LONDON, UK, March 31, 2020 -- Two British companies at the leading edge of medical imaging technology are working together on a plan to fast-track the diagnosis of COVID-19 in NHS hospitals using artificial intelligence analysis of chest X-rays.

behold.ai has developed the artificial intelligence-based red dot algorithm, which can identify abnormalities in chest X-rays within 30 seconds. Wellbeing Software operates Cris, the UK's most widely used Radiology Information System (RIS), which is installed in over 700 locations.

A national roll-out combining these two technologies would enable a large number of hospitals to quickly process the significant volume of X-rays, currently being used as the key diagnostic test for triage of COVID-19 patients, thereby speeding up diagnosis and easing pressure on the NHS at this critical time. This solution will also find significant utility in dealing with the backlog of cases that continue to mount, such as suspected cancer patients.

Simon Rasalingham, Chairman and CEO of behold.ai, said:

"behold.ai and Wellbeing are a great fit in terms of expertise and technology. We are able to prioritise abnormal chest X-rays with greater than 90% accuracy and a 30-second turnaround. If that were translated into busy hospitals coping with COVID-19, the benefits to healthcare systems are potentially enormous."

Chris Yeowart, Director at Wellbeing Software, said:

"Our technology provides the integration between the algorithm and the hospitals' radiology systems and working processes, addressing the technical challenges and clearing the way for accelerated national rollout. It is clear from talking to radiology departments that chest X-rays have become one of the primary diagnostic tools for COVID-19 in this country."


https://www.wellbeingsoftware.com/

Ends

For further information, please contact: Consilium Strategic Communications, Tel: +44 (0)20 3709 5700, beholdai@consilium-comms.com

About behold.ai and radiology

behold.ai provides artificial intelligence, through its red dot cognitive computing platform, to radiology departments. This technology augments the expertise of radiologists to enable them to report with greater clinical accuracy, faster and more safely than they could before. This revolutionary combination helps to deliver greater performance in radiology reporting at a fraction of the price of outsourced reporting.

Radiology departments play an essential role in the diagnostic process; however, a consequence of fewer radiologists and a growing demand for images has left services stretched beyond capacity across many trusts, resulting in reporting delays - in some cases impacting cancer diagnosis. These service issues have been highlighted by the Care Quality Commission and the Royal College of Radiologists.

Our solution seamlessly integrates into local trust workflows augmenting clinical practice and delivering state-of-the-art, safe, Artificial Intelligence.

The behold.ai algorithm has been developed using more than 30,000 example images, all of which have been reviewed and reported by highly experienced consultant radiology clinicians in order to shape accurate decision making. The red dot prioritisation platform is capable of sorting images into normal and abnormal categories in less than 30 seconds post image acquisition.
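The release does not describe the model internals, so the following is only a generic sketch of how a binary normal-versus-abnormal triage step over chest X-rays is typically wired up; the network, weights, threshold, and file path are all placeholders invented for illustration, not behold.ai's red dot system.

```python
import torch
from torch import nn
from torchvision import models, transforms
from PIL import Image

# Placeholder model: a ResNet-18 backbone with a single abnormality logit.
# A real triage system would load clinically validated weights instead.
model = models.resnet18()
model.fc = nn.Linear(model.fc.in_features, 1)
model.eval()

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # X-rays are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def triage(path, threshold=0.5):
    """Return ('abnormal'|'normal', probability) for one chest X-ray file."""
    image = preprocess(Image.open(path)).unsqueeze(0)  # add batch dimension
    with torch.no_grad():
        prob = torch.sigmoid(model(image)).item()
    return ("abnormal" if prob >= threshold else "normal", prob)

# Example call (hypothetical path):
# print(triage("xrays/patient_0001.png"))
```

In practice the prioritisation step would feed the abnormal cases to the top of the radiologist's worklist rather than making a diagnosis on its own.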

About behold.ai and quality

Apart from its FDA clearance, behold.ai is also CE approved and is gaining further approval for a CE mark Class IIa certification.

In June 2019 the Company was awarded ISO 13485 QMS certification for an AI medical device - the gold standard of quality certification.

About Wellbeing Software

Wellbeing Software is a leading healthcare technology provider with a presence in more than 75% of NHS organisations. The company has combined its extensive UK resources and unparalleled experience in its specialist divisions - radiology, maternity, data management and electronic health records - to form Wellbeing Software, uniting their core businesses to enable customers to build on existing investments in IT as a way of delivering connected healthcare records and better patient care. Wellbeing's ability to connect its specialist systems with other third-party software enables healthcare organisations to achieve key objectives, such as paperless working and the creation of complete electronic health records. Through their established footprint, specialist knowledge and significant development resources, the company is building the foundations for connectivity within NHS organisations and beyond.

Wellbeing media contact: Jenni Livesley, Context Public Relations, wellbeing@contextpr.co.uk

More here:

behold.ai and Wellbeing Software collaborate on national solution for rapid COVID-19 diagnosis using AI analysis of chest X-rays - GlobeNewswire

Posted in Ai

iFLYTEK and Hancom Group Launch Accufly.AI to Help Combat the Coronavirus Pandemic – Business Wire

HEFEI, China--(BUSINESS WIRE)--Asia's leading artificial intelligence (AI) and speech technology company, iFLYTEK, has partnered with the South Korean technology company Hancom Group to launch the joint venture Accufly.AI in South Korea. Accufly.AI launched its AI Outbound Calling System to assist the South Korean government at no cost and provide information to individuals who have been in close contact with or have had a confirmed coronavirus case.

The AI Outbound Calling System is a smart, integrated system that is based on iFLYTEK solutions and Hancom Group's Korean-language speech recognition. The technology saves manpower, assists in the automatic distribution of important information to potential carriers of the virus, and provides a mechanism for follow-up with recovered patients. iFLYTEK is looking to make this technology available in markets around the world, including North America and Europe.

"The battle against the Covid-19 epidemic requires collective wisdom and sharing of best practices from the international community," said iFLYTEK Chief Financial Officer Mr. Dawei Duan. "Given the challenges we all face, iFLYTEK is continuously looking at ways to provide technologies and support to partners around the world, including in the United States, Canada, the United Kingdom, New Zealand, and Australia."

In February, the Hancom Group donated 20,000 protective masks and five thermal temperature-screening devices to Anhui to help fight the epidemic.

iFLYTEK's AI technology helped stem the spread of the virus in China and will help the South Korean government conduct follow-up, identify patients with symptoms, manage self-isolated residents, and reduce the risk of cross-infection. The system also will help the government distribute important health updates, increase public awareness, and bring communities together.

"iFLYTEK is working to create a better world through artificial intelligence and seeks to do so on a global scale. iFLYTEK will maximize its technical advantages in smart services to support the international community in defeating the coronavirus," said Mr. Duan.

More:

iFLYTEK and Hancom Group Launch Accufly.AI to Help Combat the Coronavirus Pandemic - Business Wire

Posted in Ai

NEC and Kagome to Provide AI-enabled Services That Improve Tomato Yields – Business Wire

TOKYO--(BUSINESS WIRE)--NEC Corporation today announced the conclusion of a strategic partnership agreement with Kagome Co., Ltd. to launch agricultural management support services utilizing AI for leading tomato processing companies.

The new service uses NEC's AI-enabled agricultural ICT platform, CropScope, to visualize tomato growth and soil conditions based on sensor data and satellite images, and to provide farming management recommendation services. The AI enables the service to provide data on the best timing and amounts of irrigation and fertilizer for healthy crops. As a result, farms are able to achieve stable yields and lower costs, while practicing environmentally sustainable agriculture without depending on the skill of individual growers.

Tomato processing companies can obtain a comprehensive understanding of the most effective growing conditions for tomato production on their own farms, as well as their contract growers. Also, they can optimally manage crop harvest orders across all fields based on objective data, which helps to reduce yield loss and improve productivity.

NEC and Kagome began agricultural collaboration in 2015, and by 2019 they had conducted demonstrations in regions that include Portugal, Australia and the USA. An AI farming experiment in Portugal in 2019 showed that the amount of fertilizer used for the trial was approximately 20% less than the average amount used in general, yielding 127 tons of tomatoes per hectare, approximately 1.3 times that of the average Portuguese grower, and almost the same as that of skilled growers.

Kagome will establish a Smart Agri Division in April 2020, first targeting customers in Europe, then aiming to expand the business to worldwide markets.

"Kagome has been developing agricultural management support technologies using big data in collaboration with NEC since 2015, with the aim of realizing environmentally friendly and highly profitable agricultural management in the cultivation of tomatoes for processing on a global basis," said Kengo Nakata, General Manager, Smart Agri Division, Kagome. "By combining Kagome's farming know-how with NEC's AI technology, we will realize sustainable agriculture," he added.

"NEC is pleased to have signed a strategic partnership agreement with Kagome," said Masamitsu Kitase, General Manager, Corporate Business Development Division, NEC. "NEC aims to realize a sustainable agriculture that can respond flexibly to global social issues on climate change and food safety," he added.

About NEC Corporation: For more information, visit NEC at http://www.nec.com.

Continued here:

NEC and Kagome to Provide AI-enabled Services That Improve Tomato Yields - Business Wire

Posted in Ai

Futuristic Impacts of AI Over Businesses and Society – Analytics Insight

In the past decade, artificial intelligence (AI) has moved from academic journals into mainstream society. The technology has achieved numerous milestones in the digital transformation of society, including businesses, education, and healthcare. Today people can do tasks that were not even possible ten years ago.

The proportion of organizations using AI in some form rose from 10 percent in 2016 to 37 percent in 2019, and that figure is extremely likely to rise further in the coming year, according to Gartner's 2019 CIO Agenda survey.

While the breakthroughs in surpassing human ability at human pursuits, such as chess, make headlines, AI has been a standard part of the industrial repertoire since at least the 1980s. Then production-rule or expert systems became a standard technology for checking circuit boards and detecting credit card fraud. Similarly, machine-learning (ML) strategies like genetic algorithms have long been used for intractable computational problems, such as scheduling, and neural networks not only to model and understand human learning but also for basic industrial control and monitoring.

Moreover, AI is also at the core of some of the most successful companies in history in terms of market capitalization: Apple, Alphabet, Microsoft, and Amazon. Along with information and communication technology (ICT) more generally, the technology has revolutionized the ease with which people from all over the world can access knowledge, credit, and other benefits of a contemporary global society. Such access has helped lead to a massive reduction of global inequality and extreme poverty, for example by allowing farmers to know fair prices and the best crops, and giving them access to accurate weather predictions.

Following the trends, we can say that there will be big winners and losers as collaborative technologies, robots and artificial intelligence transform the nature of work. Moreover, data expertise will become exponentially more important. Across various organizations, the role of a senior manager in a deeply data-driven world is about to shift, thanks to the AI revolution. It is estimated that information hoarders will slow the pace of their organizations and forsake the power of artificial intelligence while competitors exploit it.

In the future, judgments about consumers and potential consumers will be made instantaneously and many organizations will put cybersecurity on par with other intelligence and defense priorities. Besides, open-source information and artificial intelligence collection will provide opportunities for global technological parity and soon predictive analytics and artificial intelligence could play an even more fundamental role in content creation.

With the growth of AI-enabled technologies in the future, societies will face challenges in realizing technologies that benefit humanity instead of harming it and intruding on the human rights of privacy and freedom of access to information. Also, the surging capabilities of robots and artificial intelligence will see a range of current jobs supplanted; professional roles such as doctors, lawyers, and accountants could be affected by artificial intelligence by the year 2025.

Moreover, low-skill workers will be reallocated to tasks that are not susceptible to computerization. Many of these risks will arise out of human activity stemming from developments in technologies such as synthetic biology, nanotechnology, and artificial intelligence.


About the Author

Smriti is a Content Analyst at Analytics Insight. She writes Tech/Business articles for Analytics Insight. Her creative work can be confirmed @analyticsinsight.net. She adores crushing over books, crafts, creative works and people, movies and music from eternity!!

Excerpt from:

Futuristic Impacts of AI Over Businesses and Society - Analytics Insight

Posted in Ai

The Global AI in Telecommunication Market is expected to grow from USD 347.28 Million in 2018 to USD 2,145.39 Million by the end of 2025 at a Compound…

New York, March 31, 2020 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "Global AI in Telecommunication Market - Premium Insight, Competitive News Feed Analysis, Company Usability Profiles, Market Sizing & Forecasts to 2025" - https://www.reportlinker.com/p05871938/?utm_source=GNW

The report deeply explores the recent significant developments by the leading vendors and innovation profiles in the Global AI in Telecommunication Market, including AT&T Inc., Google LLC, IBM Corporation, Intel, Microsoft Corporation, Cisco Systems, H2O.ai, Infosys Limited, Nuance Communications, Nvidia Corporation, Salesforce.com, Inc., and Sentient Technologies.

On the basis of Technology, the Global AI in Telecommunication Market is studied across Machine Learning & Deep Learning and Natural Language Processing.

On the basis of Component, the Global AI in Telecommunication Market is studied across Service and Solution.

On the basis of Application, the Global AI in Telecommunication Market is studied across Customer Analytics, Network Optimization, Network Security, Self-Diagnostics, and Virtual Assistance.

On the basis of Deployment, the Global AI in Telecommunication Market is studied across On-Cloud and On-Premise.

For the detailed coverage of the study, the market has been geographically divided into the Americas, Asia-Pacific, and Europe, Middle East & Africa. The report provides details of qualitative and quantitative insights about the major countries in the region and taps the major regional developments in detail.

In the report, we have covered two proprietary models, the FPNV Positioning Matrix and Competitive Strategic Window. The FPNV Positioning Matrix analyses the competitive market place for the players in terms of product satisfaction and business strategy they adopt to sustain in the market. The Competitive Strategic Window analyses the competitive landscape in terms of markets, applications, and geographies. The Competitive Strategic Window helps the vendor define an alignment or fit between their capabilities and opportunities for future growth prospects. During a forecast period, it defines the optimal or favorable fit for the vendors to adopt successive merger and acquisitions strategies, geography expansion, research & development, new product introduction strategies to execute further business expansion and growth.

Research Methodology: Our market forecasting is based on a market model derived from market connectivity, dynamics, and identified influential factors around which assumptions about the market are made. These assumptions are informed by fact bases built through primary and secondary research instruments, regression analysis and extensive contact with industry people. Market forecasting derived from an in-depth understanding of future market spending patterns provides quantified insight to support your decision-making process. The interviews are recorded, and the information gathered is put on the drawing board alongside the information collected through secondary research.

The report provides insights on the following pointers:
1. Market Penetration: Provides comprehensive information on the offerings of the key players in the Global AI in Telecommunication Market
2. Product Development & Innovation: Provides intelligent insights on future technologies, R&D activities, and new product developments in the Global AI in Telecommunication Market
3. Market Development: Provides in-depth information about lucrative emerging markets and analyzes the markets for the Global AI in Telecommunication Market
4. Market Diversification: Provides detailed information about new product launches, untapped geographies, recent developments, and investments in the Global AI in Telecommunication Market
5. Competitive Assessment & Intelligence: Provides an exhaustive assessment of market shares, strategies, products, and manufacturing capabilities of the leading players in the Global AI in Telecommunication Market

The report answers questions such as:
1. What is the global market size of the AI in Telecommunication market?
2. What are the factors that affect growth in the Global AI in Telecommunication Market over the forecast period?
3. What is the competitive position in the Global AI in Telecommunication Market?
4. Which are the best product areas to invest in over the forecast period in the Global AI in Telecommunication Market?
5. What are the opportunities in the Global AI in Telecommunication Market?
6. What are the modes of entering the Global AI in Telecommunication Market?
Read the full report: https://www.reportlinker.com/p05871938/?utm_source=GNW

About Reportlinker
ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.


More:

The Global AI in Telecommunication Market is expected to grow from USD 347.28 Million in 2018 to USD 2,145.39 Million by the end of 2025 at a Compound...

Posted in Ai

The global AI agenda: Promise, reality, and a future of data sharing – MIT Technology Review

Artificial intelligence technologies are no longer the preserve of the big tech and digital platform players of this world. From manufacturing to energy, health care to government, our research shows organizations from all industries and sectors are experimenting with a suite of AI technologies across numerous use cases.

Among the organizations surveyed for this report, 72% had begun deploying AI by 2018, and 87% by 2019. Yet much remains unknown about AI's real, as opposed to potential, impact. Companies are developing use cases, but far from all are yet bearing fruit. How, business leaders ask, can we scale promising use cases to multiple parts of the enterprise? How can we leverage data, talent, and other resources to exploit AI to the fullest? And how can we do so ethically and within the bounds of regulation?

MIT Technology Review Insights surveyed 1,004 senior executives in different sectors and regions of the world to understand how organizations are using AI today and planning to do so in the future. Following are the key findings of this research:

Download the full report.

Visit link:

The global AI agenda: Promise, reality, and a future of data sharing - MIT Technology Review

Posted in Ai

New MI5 head promises to focus on China and harness AI – The Guardian

MI5's deputy head will take the top job at the spy agency next month, promising a sharper focus on China and closer work with the private sector to harness artificial intelligence in tackling hostile state and terrorist activity.

Ken McCallum, a career MI5 officer, has been the agency's deputy director general since April 2017 and was seen by insiders as the heir apparent at an organisation that prides itself on internal appointments to its leading position.

The Glaswegian is the youngest ever boss of MI5, although the organisation will only say he is in his 40s. He replaces Sir Andrew Parker, who had been due to step down in April after seven years as the director general in charge of the UK's domestic security service.

His appointment was announced by the home secretary, Priti Patel, on Monday.

"MI5's purpose is hugely motivating," McCallum said. "Our people with our partners strive to keep the country safe, and they always want to go the extra mile. Having devoted my working life to that team effort, it is a huge privilege now to be asked to lead it as director general."

Insiders said that McCallum wanted to be clearer about the threat posed by China, particularly in terms of industrial espionage and cyberwarfare, in the belief that the level of spying by Beijing in the UK was not appreciated more widely.

But the agency recognises that its concerns about China, which predate the coronavirus crisis by many months, need to be set against the fact that the country remains an important economic partner for the UK.

MI5 is expected to continue to support the decision to allow Huawei to supply 5G mobile phone equipment, even if highlighting other threats from China could provide further ammunition to Beijing's critics on the Conservative backbenches, who are threatening to try to block the Chinese company's involvement.

McCallum also hopes to work more closely with technology companies to better exploit advances in technology, at a time when the agency remains concerned about the rise of hard-to-crack end-to-end encryption.

The spy agency believes it no longer has the internal capability to develop what it needs in fields such as artificial intelligence and data analysis, while McCallum will continue to demand that spy agencies have lawful access to secure messaging services.

At one point, around a decade ago, McCallum was responsible for MI5's cyber activities, when the subject was less fashionable in intelligence circles. It was a period that helped shape his interest in China and in working with the technology industry.

As deputy director general, McCallum was responsible for the agency's operational work during a period when there were terrorist attacks in Manchester and London, and for MI5's response to the attempted assassination of Sergei Skripal by Russia.

McCallum has 25 years' experience within MI5, spending the first 10 years of his career working on Northern Ireland, focusing on terrorism and the development of the peace process around the time of the 1998 Good Friday agreement.

He was asked to take charge of counter-terror investigations and risk management relating to the London 2012 Olympics, before being promoted to director general, strategy, in 2015, with responsibility for MI5's relationships with its sister intelligence agencies, GCHQ and MI6.

McCallum will become the 18th director general of MI5 since its foundation in 1909, although its leaders have only been publicly avowed since 1993. Modern agency chiefs serve fixed periods of office, preventing them from becoming entrenched at the top of the agencies they run.

Parker, 57, had his term extended despite a difficult period in 2017, when the UK was hit by a spate of terrorist attacks; after the attack at London Bridge, the agency admitted that the ringleader, Khuram Butt, had been on its radar, but that the signs he was planning the attack, which he carried out with two associates and which killed eight people, were missed.

Fresh concerns also emerged this winter when Usman Khan, a man convicted of terror offences and released on licence, killed two in an attack also near London Bridge, prompting an emergency tightening of terror sentencing amid concerns that people were continuing to be radicalised in prison.

But the military defeat of Islamic State in Syria and the death of its leader, Abu Bakr al-Baghdadi, have given some cautious grounds for optimism. The terror threat level was reduced from severe to substantial last November.

More here:

New MI5 head promises to focus on China and harness AI - The Guardian

Posted in Ai

Helm.ai raises $13M on its unsupervised learning approach to driverless car AI – TechCrunch

Four years ago, mathematician Vlad Voroninski saw an opportunity to remove some of the bottlenecks in the development of autonomous vehicle technology thanks to breakthroughs in deep learning.

Now, Helm.ai, the startup he co-founded in 2016 with Tudor Achim, is coming out of stealth with an announcement that it has raised $13 million in a seed round that includes investment from A.Capital Ventures, Amplo, Binnacle Partners, Sound Ventures, Fontinalis Partners and SV Angel. More than a dozen angel investors also participated, including Berggruen Holdings founder Nicolas Berggruen, Quora co-founders Charlie Cheever and Adam D'Angelo, professional NBA player Kevin Durant, Gen. David Petraeus, Matician co-founder and CEO Navneet Dalal, Quiet Capital managing partner Lee Linden and Robinhood co-founder Vladimir Tenev, among others.

Helm.ai will put the $13 million in seed funding toward advanced engineering and R&D and hiring more employees, as well as locking in and fulfilling deals with customers.

Helm.ai is focused solely on the software. It isn't building the compute platform or sensors that are also required in a self-driving vehicle. Instead, it is agnostic to those variables. "In the most basic terms, Helm.ai is creating software that tries to understand sensor data as well as a human would, in order to be able to drive," Voroninski said.

That aim doesn't sound different from other companies'. It's Helm.ai's approach to software that is noteworthy. Autonomous vehicle developers often rely on a combination of simulation and on-road testing, along with reams of data sets that have been annotated by humans, to train and improve the so-called brain of the self-driving vehicle.

Helm.ai says it has developed software that can skip those steps, which expedites the timeline and reduces costs. The startup uses an unsupervised learning approach to develop software that can train neural networks without the need for large-scale fleet data, simulation or annotation.
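To make the idea of training without human annotation more concrete, here is a minimal sketch of one common unsupervised technique: a convolutional autoencoder that learns representations from unlabeled camera frames by reconstructing them, so no labels are needed. This is not Helm.ai's actual method, which the company has not disclosed in detail; the architecture, shapes, and training loop are illustrative assumptions.

```python
# Illustrative sketch: unsupervised representation learning from
# unlabeled frames via reconstruction (no human annotation required).
# This is NOT Helm.ai's disclosed method; all details are assumptions.
import torch
import torch.nn as nn

class FrameAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(16, 3, kernel_size=4, stride=2, padding=1),   # 32x32 -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = FrameAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in for a batch of unlabeled 64x64 RGB sensor frames.
frames = torch.rand(8, 3, 64, 64)

for step in range(50):
    recon = model(frames)
    loss = loss_fn(recon, frames)  # the input itself is the training target
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```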

"There's this very long tail end and an endless sea of corner cases to go through when developing AI software for autonomous vehicles," Voroninski explained. "What really matters is the unit of efficiency: how much does it cost to solve any given corner case, and how quickly can you do it? And so that's the part that we really innovated on."

Voroninski first became interested in autonomous driving at UCLA, where he learned about the technology from his undergrad adviser, who had participated in the DARPA Grand Challenge, a driverless car competition in the U.S. funded by the Defense Advanced Research Projects Agency. And while Voroninski turned his attention to applied mathematics for the next decade (earning a PhD in math at UC Berkeley and then joining the faculty in the MIT mathematics department), he knew he'd eventually come back to autonomous vehicles.

By 2016, Voroninski said, breakthroughs in deep learning had created opportunities to jump in. Voroninski left MIT and Sift Security, a cybersecurity startup later acquired by Netskope, to start Helm.ai with Achim in November 2016.

"We identified some key challenges that we felt like weren't being addressed with the traditional approaches," Voroninski said. "We built some prototypes early on that made us believe that we can actually take this all the way."

Helm.ai is still a small team of about 15 people. Its business aim is to license its software for two use cases: Level 2 (and a newer term called Level 2+) advanced driver assistance systems found in passenger vehicles, and Level 4 autonomous vehicle fleets.

Helm.ai does have customers, some of which have gone beyond the pilot phase, Voroninski said, adding that he couldn't name them.

Go here to read the rest:

Helm.ai raises $13M on its unsupervised learning approach to driverless car AI - TechCrunch

Posted in Ai
