Artificial Intelligence: What it is and why it matters | SAS

The term artificial intelligence was coined in 1956, but AI has become more popular today thanks to increased data volumes, advanced algorithms, and improvements in computing power and storage.

Early AI research in the 1950s explored topics like problem solving and symbolic methods. In the 1960s, the US Department of Defense took interest in this type of work and began training computers to mimic basic human reasoning. For example, the Defense Advanced Research Projects Agency (DARPA) completed street mapping projects in the 1970s. And DARPA produced intelligent personal assistants in 2003, long before Siri, Alexa or Cortana were household names.

This early work paved the way for the automation and formal reasoning that we see in computers today, including decision support systems and smart search systems that can be designed to complement and augment human abilities.

While Hollywood movies and science fiction novels depict AI as human-like robots that take over the world, the current evolution of AI technologies isn't that scary or quite that smart. Instead, AI has evolved to provide many specific benefits in every industry. Keep reading for modern examples of artificial intelligence in health care, retail and more.

Why is artificial intelligence important?


What Is Artificial Intelligence (AI)? | PCMag

In September 1955, John McCarthy, a young assistant professor of mathematics at Dartmouth College, boldly proposed that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."

McCarthy called this new field of study "artificial intelligence," and suggested that a two-month effort by a group of 10 scientists could make significant advances in developing machines that could "use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves."

At the time, scientists optimistically believed we would soon have thinking machines doing any work a human could do. Now, more than six decades later, advances in computer science and robotics have helped us automate many of the tasks that previously required the physical and cognitive labor of humans.

But true artificial intelligence, as McCarthy conceived it, continues to elude us.

A great challenge with artificial intelligence is that it's a broad term, and there's no clear agreement on its definition.

As mentioned, McCarthy proposed AI would solve problems the way humans do: "The ultimate effort is to make computer programs that can solve problems and achieve goals in the world as well as humans," McCarthy said.

Andrew Moore, Dean of Computer Science at Carnegie Mellon University, provided a more modern definition of the term in a 2017 interview with Forbes: "Artificial intelligence is the science and engineering of making computers behave in ways that, until recently, we thought required human intelligence."

But our understanding of "human intelligence" and our expectations of technology are constantly evolving. Zachary Lipton, the editor of Approximately Correct, describes the term AI as "aspirational, a moving target based on those capabilities that humans possess but which machines do not." In other words, the things we ask of AI change over time.

For instance, in the 1950s, scientists viewed chess and checkers as great challenges for artificial intelligence. But today, very few would consider chess-playing machines to be AI. Computers are already tackling much more complicated problems, including detecting cancer, driving cars, and processing voice commands.

The first generation of AI scientists and visionaries believed we would eventually be able to create human-level intelligence.

But several decades of AI research have shown that replicating the complex problem-solving and abstract thinking of the human brain is supremely difficult. For one thing, we humans are very good at generalizing knowledge and applying concepts we learn in one field to another. We can also make relatively reliable decisions based on intuition and with little information. Over the years, human-level AI has become known as artificial general intelligence (AGI) or strong AI.

The initial hype and excitement surrounding AI drew interest and funding from government agencies and large companies. But it soon became evident that contrary to early perceptions, human-level intelligence was not right around the corner, and scientists were hard-pressed to reproduce the most basic functionalities of the human mind. In the 1970s, unfulfilled promises and expectations eventually led to the "AI winter," a long period during which public interest and funding in AI dampened.

It took many years of innovation and a revolution in deep-learning technology to revive interest in AI. But even now, despite enormous advances in artificial intelligence, none of the current approaches to AI can solve problems in the same way the human mind does, and most experts believe AGI is at least decades away.

On the flip side, narrow or weak AI doesn't aim to reproduce the functionality of the human brain, and instead focuses on optimizing a single task. Narrow AI has already found many real-world applications, such as recognizing faces, transforming audio to text, recommending videos on YouTube, and displaying personalized content in the Facebook News Feed.

Many scientists believe that we will eventually create AGI, but some have a dystopian vision of the age of thinking machines. In 2014, renowned English physicist Stephen Hawking described AI as an existential threat to mankind, warning that "full artificial intelligence could spell the end of the human race."

In 2015, Y Combinator President Sam Altman and Tesla CEO Elon Musk, two other believers in AGI, co-founded OpenAI, a nonprofit research lab that aims to create artificial general intelligence in a manner that benefits all of humankind. (Musk has since departed.)

Others believe that artificial general intelligence is a pointless goal. "We don't need to duplicate humans. That's why I focus on having tools to help us rather than duplicate what we already know how to do. We want humans and machines to partner and do something that they cannot do on their own," says Peter Norvig, Director of Research at Google.

Scientists such as Norvig believe that narrow AI can help automate repetitive and laborious tasks and help humans become more productive. For instance, doctors can use AI algorithms to examine X-ray scans at high speeds, allowing them to see more patients. Another example of narrow AI is fighting cyberthreats: Security analysts can use AI to find signals of data breaches in the gigabytes of data being transferred through their companies' networks.

Early AI-creation efforts were focused on transforming human knowledge and intelligence into static rules. Programmers had to meticulously write code (if-then statements) for every rule that defined the behavior of the AI. The advantage of rule-based AI, which later became known as "good old-fashioned artificial intelligence" (GOFAI), is that humans have full control over the design and behavior of the system they develop.

Rule-based AI is still very popular in fields where the rules are clearcut. One example is video games, in which developers want AI to deliver a predictable user experience.
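The if-then style described above can be sketched in a few lines. The following is a hypothetical video-game guard whose entire behaviour comes from hand-written rules; the action names and numeric thresholds are invented for illustration, not taken from any real game:

```python
# Minimal sketch of rule-based (GOFAI-style) game AI: a hypothetical
# guard character driven entirely by explicit if-then rules.

def guard_action(distance_to_player, health):
    """Pick an action from hand-written rules; output is fully predictable."""
    if health < 20:
        return "flee"           # rule 1: self-preservation beats everything
    if distance_to_player < 5:
        return "attack"         # rule 2: engage when the player is close
    if distance_to_player < 15:
        return "chase"          # rule 3: pursue anyone within sight range
    return "patrol"             # default rule: keep walking the route

print(guard_action(distance_to_player=3, health=80))   # attack
```

Because every rule is written out by a human, the developer keeps full control over the system, and the same inputs always yield the same action, which is exactly the predictability game developers want.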

The problem with GOFAI is that contrary to McCarthy's initial premise, we can't precisely describe every aspect of learning and behavior in ways that can be transformed into computer rules. For instance, defining logical rules for recognizing voices and images, a complex feat that humans accomplish instinctively, is one area where classic AI has historically struggled.

An alternative approach to creating artificial intelligence is machine learning. Instead of developing rules for AI manually, machine-learning engineers "train" their models by providing them with a massive amount of samples. The machine-learning algorithm analyzes and finds patterns in the training data, then develops its own behavior. For instance, a machine-learning model can train on large volumes of historical sales data for a company and then make sales forecasts.
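As a minimal sketch of that idea, the snippet below fits a straight line (ordinary least squares) to invented monthly sales figures and uses it to forecast the next month. Real forecasting models are far more sophisticated; the data and numbers here are purely illustrative:

```python
# Toy sketch of "learning from samples": fit a least-squares line to
# made-up historical sales, then forecast the next month from the fit.

def fit_line(xs, ys):
    """Return slope and intercept of the least-squares line y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

months = [1, 2, 3, 4, 5, 6]              # training data (illustrative)
sales = [100, 110, 120, 130, 140, 150]
a, b = fit_line(months, sales)
print(round(a * 7 + b))                  # forecast for month 7 -> 160
```

No human wrote a rule saying "sales grow by 10 a month"; the pattern was extracted from the samples, which is the core difference from GOFAI.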

Deep learning, a subset of machine learning, has become very popular in the past few years. It's especially good at processing unstructured data such as images, video, audio, and text documents. For instance, you can create a deep-learning image classifier and train it on millions of available labeled photos, such as the ImageNet dataset. The trained AI model will be able to recognize objects in images with accuracy that often surpasses humans. Advances in deep learning have pushed AI into many complicated and critical domains, such as medicine, self-driving cars, and education.

One of the challenges with deep-learning models is that they develop their own behavior based on training data, which makes them complex and opaque. Often, even deep-learning experts have a hard time explaining the decisions and inner workings of the AI models they create.

Here are some of the ways AI is bringing tremendous changes to different domains.

Self-driving cars: Advances in artificial intelligence have brought us very close to making the decades-long dream of autonomous driving a reality. AI algorithms are one of the main components that enable self-driving cars to make sense of their surroundings, taking in feeds from cameras installed around the vehicle and detecting objects such as roads, traffic signs, other cars, and people.

Digital assistants and smart speakers: Siri, Alexa, Cortana, and Google Assistant use artificial intelligence to transform spoken words to text and map the text to specific commands. AI helps digital assistants make sense of different nuances in spoken language and synthesize human-like voices.

Translation: For many decades, translating text between different languages was a pain point for computers. But deep learning has helped create a revolution in services such as Google Translate. To be clear, AI still has a long way to go before it masters human language, but so far, advances are spectacular.

Facial recognition: Facial recognition is one of the most popular applications of artificial intelligence. It has many uses, including unlocking your phone, paying with your face, and detecting intruders in your home. But the increasing availability of facial-recognition technology has also given rise to concerns regarding privacy, security, and civil liberties.

Medicine: From detecting skin cancer and analyzing X-rays and MRI scans to providing personalized health tips and managing entire healthcare systems, artificial intelligence is becoming a key enabler in healthcare and medicine. AI won't replace your doctor, but it could help to bring about better health services, especially in underprivileged areas, where AI-powered health assistants can take some of the load off the shoulders of the few general practitioners who have to serve large populations.

In our quest to crack the code of AI and create thinking machines, we've learned a lot about the meaning of intelligence and reasoning. And thanks to advances in AI, we are accomplishing tasks alongside our computers that were once considered the exclusive domain of the human brain.

Some of the emerging fields where AI is making inroads include music and arts, where AI algorithms are manifesting their own unique kind of creativity. There's also hope AI will help fight climate change, care for the elderly, and eventually create a utopian future where humans don't need to work at all.

There's also fear that AI will cause mass unemployment, disrupt the economic balance, trigger another world war, and eventually drive humans into slavery.

We still don't know which direction AI will take. But as the science and technology of artificial intelligence continues to improve at a steady pace, our expectations and definition of AI will shift, and what we consider AI today might become the mundane functions of tomorrow's computers.


What is AI? Everything you need to know about Artificial …


It depends who you ask.

Back in the 1950s, the fathers of the field, Minsky and McCarthy, described artificial intelligence as any task performed by a program or a machine that, if a human carried out the same activity, we would say the human had to apply intelligence to accomplish the task.

That obviously is a fairly broad definition, which is why you will sometimes see arguments over whether something is truly AI or not.

AI systems will typically demonstrate at least some of the following behaviours associated with human intelligence: planning, learning, reasoning, problem solving, knowledge representation, perception, motion, and manipulation and, to a lesser extent, social intelligence and creativity.

AI is ubiquitous today, used to recommend what you should buy next online, to understand what you say to virtual assistants such as Amazon's Alexa and Apple's Siri, to recognise who and what is in a photo, to spot spam, or detect credit card fraud.

At a very high level artificial intelligence can be split into two broad types: narrow AI and general AI.

Narrow AI is what we see all around us in computers today: intelligent systems that have been taught or learned how to carry out specific tasks without being explicitly programmed how to do so.

This type of machine intelligence is evident in the speech and language recognition of the Siri virtual assistant on the Apple iPhone, in the vision-recognition systems on self-driving cars, and in the recommendation engines that suggest products you might like based on what you bought in the past. Unlike humans, these systems can only learn or be taught how to do specific tasks, which is why they are called narrow AI.

There are a vast number of emerging applications for narrow AI: interpreting video feeds from drones carrying out visual inspections of infrastructure such as oil pipelines, organizing personal and business calendars, responding to simple customer-service queries, co-ordinating with other intelligent systems to carry out tasks like booking a hotel at a suitable time and location, helping radiologists to spot potential tumors in X-rays, flagging inappropriate content online, detecting wear and tear in elevators from data gathered by IoT devices, the list goes on and on.

Artificial general intelligence is very different, and is the type of adaptable intellect found in humans, a flexible form of intelligence capable of learning how to carry out vastly different tasks, anything from haircutting to building spreadsheets, or to reason about a wide variety of topics based on its accumulated experience. This is the sort of AI more commonly seen in movies, the likes of HAL in 2001 or Skynet in The Terminator, but which doesn't exist today and AI experts are fiercely divided over how soon it will become a reality.


A survey conducted among four groups of experts in 2012/13 by AI researcher Vincent C. Müller and philosopher Nick Bostrom reported a 50 percent chance that Artificial General Intelligence (AGI) would be developed between 2040 and 2050, rising to 90 percent by 2075. The group went even further, predicting that so-called 'superintelligence' -- which Bostrom defines as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest" -- was expected some 30 years after the achievement of AGI.

That said, some AI experts believe such projections are wildly optimistic given our limited understanding of the human brain, and believe that AGI is still centuries away.

There is a broad body of research in AI, and its many areas feed into and complement one another.

Currently enjoying something of a resurgence, machine learning is where a computer system is fed large amounts of data, which it then uses to learn how to carry out a specific task, such as understanding speech or captioning a photograph.

Key to the process of machine learning are neural networks. These are brain-inspired networks of interconnected layers of algorithms, called neurons, that feed data into each other, and which can be trained to carry out specific tasks by modifying the importance attributed to input data as it passes between the layers. During training of these neural networks, the weights attached to different inputs will continue to be varied until the output from the neural network is very close to what is desired, at which point the network will have 'learned' how to carry out a particular task.
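The weight-adjustment loop described above can be illustrated with a single toy "neuron". The example below nudges two weights and a bias until a sigmoid unit reproduces the logical OR function; real networks have many layers and millions of weights, but the idea of repeatedly varying weights until the output is close to what is desired is the same. Everything here is invented for illustration:

```python
import math
import random

# One sigmoid "neuron" trained by repeated small weight adjustments
# until its output matches the desired output (the logical OR function).

random.seed(0)
w = [random.uniform(-1, 1), random.uniform(-1, 1)]   # input weights
b = 0.0                                              # bias term
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))        # sigmoid activation

for _ in range(2000):                    # training loop
    for x, target in data:
        error = predict(x) - target      # how far off the neuron is
        for i in range(2):
            w[i] -= 0.5 * error * x[i]   # nudge each weight to shrink error
        b -= 0.5 * error

print([round(predict(x)) for x, _ in data])   # [0, 1, 1, 1]
```

After training, the weights have 'learned' OR: the network's outputs round to the desired labels for all four inputs.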

A subset of machine learning is deep learning, where neural networks are expanded into sprawling networks with a huge number of layers that are trained using massive amounts of data. It is these deep neural networks that have fuelled the current leap forward in the ability of computers to carry out tasks like speech recognition and computer vision.


There are various types of neural networks, with different strengths and weaknesses. Recurrent neural networks are a type of neural net particularly well suited to language processing and speech recognition, while convolutional neural networks are more commonly used in image recognition. The design of neural networks is also evolving, with researchers recently refining a more effective form of deep neural network called long short-term memory or LSTM, allowing it to operate fast enough to be used in on-demand systems like Google Translate.

The structure and training of deep neural networks.

Another area of AI research is evolutionary computation, which borrows from Darwin's theory of natural selection, and sees genetic algorithms undergo random mutations and combinations between generations in an attempt to evolve the optimal solution to a given problem.
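That mutate-select-repeat loop can be sketched in miniature. The toy program below (the classic OneMax exercise, not an example from the article) evolves bit-strings toward the goal of all ones, keeping the fittest half of each generation and producing offspring by random single-bit mutations:

```python
import random

# Minimal evolutionary loop: selection of the fittest plus random
# mutation gradually evolves bit-strings toward the all-ones optimum.

random.seed(1)
LENGTH, POP, GENERATIONS = 20, 30, 100

def fitness(genome):
    return sum(genome)                   # count of ones: more is fitter

def mutate(genome):
    child = genome[:]
    i = random.randrange(LENGTH)         # random mutation: flip one bit
    child[i] ^= 1
    return child

population = [[random.randint(0, 1) for _ in range(LENGTH)]
              for _ in range(POP)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP // 2]    # selection: fittest half survives
    offspring = [mutate(random.choice(survivors))
                 for _ in range(POP - len(survivors))]
    population = survivors + offspring   # next generation

best = max(population, key=fitness)
print(fitness(best))                     # typically near the optimum of 20
```

Nothing tells the algorithm how to solve the problem; fitter variants simply out-survive worse ones, generation after generation, which is the essence of the approach.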

This approach has even been used to help design AI models, effectively using AI to help build AI. This use of evolutionary algorithms to optimize neural networks is called neuroevolution, and could have an important role to play in helping design efficient AI as the use of intelligent systems becomes more prevalent, particularly as demand for data scientists often outstrips supply. The technique was recently showcased by Uber AI Labs, which released papers on using genetic algorithms to train deep neural networks for reinforcement learning problems.

Finally there are expert systems, where computers are programmed with rules that allow them to take a series of decisions based on a large number of inputs, allowing that machine to mimic the behaviour of a human expert in a specific domain. An example of these knowledge-based systems is an autopilot system flying a plane.

The biggest breakthroughs for AI research in recent years have been in the field of machine learning, in particular within the field of deep learning.

This has been driven in part by the easy availability of data, but even more so by an explosion in parallel computing power in recent years, during which time the use of GPU clusters to train machine-learning systems has become more prevalent.

Not only do these clusters offer vastly more powerful systems for training machine-learning models, but they are now widely available as cloud services over the internet. Over time the major tech firms, the likes of Google and Microsoft, have moved to using specialised chips tailored to both running, and more recently training, machine-learning models.

An example of one of these custom chips is Google's Tensor Processing Unit (TPU), the latest version of which accelerates the rate at which useful machine-learning models built using Google's TensorFlow software library can infer information from data, as well as the rate at which they can be trained.

These chips are not just used to train up models for DeepMind and Google Brain, but also the models that underpin Google Translate and the image recognition in Google Photos, as well as services that allow the public to build machine learning models using Google's TensorFlow Research Cloud. The second generation of these chips was unveiled at Google's I/O conference in May last year, with an array of these new TPUs able to train a Google machine-learning model used for translation in half the time it would take an array of the top-end graphics processing units (GPUs).

As mentioned, machine learning is a subset of AI and is generally split into two main categories: supervised and unsupervised learning.

Supervised learning

A common technique for teaching AI systems is by training them using a very large number of labelled examples. These machine-learning systems are fed huge amounts of data, which has been annotated to highlight the features of interest. These might be photos labelled to indicate whether they contain a dog or written sentences that have footnotes to indicate whether the word 'bass' relates to music or a fish. Once trained, the system can then apply these labels to new data, for example to a dog in a photo that's just been uploaded.

This process of teaching a machine by example is called supervised learning and the role of labelling these examples is commonly carried out by online workers, employed through platforms like Amazon Mechanical Turk.
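A toy version of supervised learning fits in a few lines. The example below labels a new sample with the label of its closest labelled training example (a 1-nearest-neighbour classifier); the animal measurements and labels are invented for illustration:

```python
# Supervised learning in miniature: labelled examples in, labels for
# new data out, via a 1-nearest-neighbour rule on (weight, length) pairs.

labelled = [                       # training data: features + human label
    ((4.0, 46), "cat"), ((5.0, 50), "cat"), ((3.5, 43), "cat"),
    ((20.0, 90), "dog"), ((30.0, 100), "dog"), ((25.0, 95), "dog"),
]

def classify(sample):
    """Give a new sample the label of its nearest training example."""
    def dist(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    _, label = min(labelled, key=lambda pair: dist(pair[0], sample))
    return label

print(classify((4.5, 48)))    # cat
print(classify((28.0, 97)))   # dog
```

The quality of the labels is everything here: the classifier can only be as good as the annotated examples it was given, which is why so much human labelling effort goes into these systems.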


Training these systems typically requires vast amounts of data, with some systems needing to scour millions of examples to learn how to carry out a task effectively -- although this is increasingly possible in an age of big data and widespread data mining. Training datasets are huge and growing in size -- Google's Open Images Dataset has about nine million images, while its labelled video repository YouTube-8M links to seven million labelled videos. ImageNet, one of the early databases of this kind, has more than 14 million categorized images. Compiled over two years, it was put together by nearly 50,000 people -- most of whom were recruited through Amazon Mechanical Turk -- who checked, sorted, and labeled almost one billion candidate pictures.

In the long run, having access to huge labelled datasets may also prove less important than access to large amounts of compute power.

In recent years, Generative Adversarial Networks (GANs) have shown how machine-learning systems that are fed a small amount of labelled data can then generate huge amounts of fresh data to teach themselves.

This approach could lead to the rise of semi-supervised learning, where systems can learn how to carry out tasks using a far smaller amount of labelled data than is necessary for training systems using supervised learning today.

Unsupervised learning

In contrast, unsupervised learning uses a different approach, where algorithms try to identify patterns in data, looking for similarities that can be used to categorise that data.

An example might be clustering together fruits that weigh a similar amount or cars with a similar engine size.

The algorithm isn't set up in advance to pick out specific types of data; it simply looks for data that can be grouped by its similarities, for example Google News grouping together stories on similar topics each day.
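That grouping-by-similarity idea can be sketched with a tiny one-dimensional k-means pass. The fruit weights below are made up, and real clustering works over many dimensions, but the mechanism is the same: no labels, just repeated assignment to the nearest group centre:

```python
# Unsupervised learning in miniature: 1-D k-means groups unlabelled
# fruit weights (grams) into clusters purely by similarity.

weights = [5, 6, 7, 100, 102, 98, 6, 101]   # grapes vs. apples, unlabelled

def kmeans_1d(data, k=2, iterations=10):
    centers = [min(data), max(data)]         # crude initialisation
    clusters = []
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for x in data:                       # assign to the nearest centre
            nearest = min(range(k), key=lambda i: abs(x - centers[i]))
            clusters[nearest].append(x)
        # recompute each centre as the mean of its cluster
        # (assumes no cluster empties, true for this toy data)
        centers = [sum(c) / len(c) for c in clusters]
    return clusters

light, heavy = kmeans_1d(weights)
print(sorted(light))   # [5, 6, 6, 7]
print(sorted(heavy))   # [98, 100, 101, 102]
```

No one told the algorithm what a grape or an apple is; the two groups emerge from the data alone.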

Reinforcement learning

A crude analogy for reinforcement learning is rewarding a pet with a treat when it performs a trick.

In reinforcement learning, the system attempts to maximise a reward based on its input data, basically going through a process of trial and error until it arrives at the best possible outcome.

An example of reinforcement learning is Google DeepMind's Deep Q-network, which has been used to best human performance in a variety of classic video games. The system is fed pixels from each game and determines various information, such as the distance between objects on screen.

By also looking at the score achieved in each game the system builds a model of which action will maximise the score in different circumstances, for instance, in the case of the video game Breakout, where the paddle should be moved to in order to intercept the ball.
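A bare-bones illustration of that trial-and-error loop is a multi-armed bandit, a deliberately simpler setting than a Deep Q-network. The agent below pulls one of three slot-machine 'arms' with hidden payout rates (numbers invented for illustration), tracks the average reward of each, and gradually favours the best one:

```python
import random

# Trial-and-error reward maximisation: an epsilon-greedy agent learns
# which of three "arms" pays out most, purely from observed rewards.

random.seed(0)
true_payouts = [0.2, 0.5, 0.8]          # hidden reward probabilities
counts = [0, 0, 0]                      # pulls per arm
values = [0.0, 0.0, 0.0]                # estimated average reward per arm

for step in range(2000):
    if random.random() < 0.1:           # explore: try a random arm
        arm = random.randrange(3)
    else:                               # exploit: pick the best arm so far
        arm = values.index(max(values))
    reward = 1 if random.random() < true_payouts[arm] else 0
    counts[arm] += 1                    # update that arm's running average
    values[arm] += (reward - values[arm]) / counts[arm]

print(values.index(max(values)))        # 2 -- the arm with the best payout
```

The balance between exploring (gathering information) and exploiting (cashing in on what's known) is the central tension in reinforcement learning, whether the environment is a slot machine or an Atari game.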

Many AI-related technologies are approaching, or have already reached, the 'peak of inflated expectations' in Gartner's Hype Cycle, with the backlash-driven 'trough of disillusionment' lying in wait.

With AI playing an increasingly major role in modern software and services, each of the major tech firms is battling to develop robust machine-learning technology for use in-house and to sell to the public via cloud services.

Each regularly makes headlines for breaking new ground in AI research, although it is probably Google, with its DeepMind AlphaGo AI, that has made the biggest impact on public awareness of AI.

All of the major cloud platforms -- Amazon Web Services, Microsoft Azure and Google Cloud Platform -- provide access to GPU arrays for training and running machine learning models, with Google also gearing up to let users use its Tensor Processing Units -- custom chips whose design is optimized for training and running machine-learning models.

All of the necessary associated infrastructure and services are available from the big three: cloud-based data stores capable of holding the vast amounts of data needed to train machine-learning models, services to transform data to prepare it for analysis, visualisation tools to display the results clearly, and software that simplifies the building of models.

These cloud platforms are even simplifying the creation of custom machine-learning models, with Google recently revealing a service that automates the creation of AI models, called Cloud AutoML. This drag-and-drop service builds custom image-recognition models and requires no machine-learning expertise from the user.

Cloud-based, machine-learning services are constantly evolving, and at the start of 2018, Amazon revealed a host of new AWS offerings designed to streamline the process of training up machine-learning models.

For those firms that don't want to build their own machine learning models but instead want to consume AI-powered, on-demand services -- such as voice, vision, and language recognition -- Microsoft Azure stands out for the breadth of services on offer, closely followed by Google Cloud Platform and then AWS. Meanwhile IBM, alongside its more general on-demand offerings, is also attempting to sell sector-specific AI services aimed at everything from healthcare to retail, grouping these offerings together under its IBM Watson umbrella -- and recently investing $2bn in buying The Weather Channel to unlock a trove of data to augment its AI services.

Internally, each of the tech giants -- and others such as Facebook -- use AI to help drive myriad public services: serving search results, offering recommendations, recognizing people and things in photos, on-demand translation, spotting spam -- the list is extensive.

But one of the most visible manifestations of this AI war has been the rise of virtual assistants, such as Apple's Siri, Amazon's Alexa, the Google Assistant, and Microsoft Cortana.

The Amazon Echo Plus is a smart speaker with access to Amazon's Alexa virtual assistant built in.

Relying heavily on voice recognition and natural-language processing, as well as needing an immense corpus to draw upon to answer queries, a huge amount of tech goes into developing these assistants.

But while Apple's Siri may have come to prominence first, it is Google and Amazon whose assistants have since overtaken Apple in the AI space -- Google Assistant with its ability to answer a wide range of queries and Amazon's Alexa with the massive number of 'Skills' that third-party devs have created to add to its capabilities.


Despite being built into Windows 10, Cortana has had a particularly rough time of late, with the suggestion that major PC makers will build Alexa into laptops adding to speculation about whether Cortana's days are numbered, although Microsoft was quick to reject this.

It'd be a big mistake to think the US tech giants have the field of AI sewn up. Chinese firms Alibaba, Baidu, and Lenovo are investing heavily in AI in fields ranging from ecommerce to autonomous driving. As a country, China is pursuing a three-step plan to turn AI into a core industry, one that will be worth 150 billion yuan ($22bn) by 2020.

Baidu has invested in developing self-driving cars, powered by its deep learning algorithm, Baidu AutoBrain, and, following several years of tests, plans to roll out fully autonomous vehicles in 2018 and mass-produce them by 2021.

Baidu's self-driving car, a modified BMW 3 series.

Baidu has also partnered with Nvidia to use AI to create a cloud-to-car autonomous car platform for auto manufacturers around the world.

The combination of weak privacy laws, huge investment, concerted data-gathering, and big data analytics by major firms like Baidu, Alibaba, and Tencent, means that some analysts believe China will have an advantage over the US when it comes to future AI research, with one analyst describing the chances of China taking the lead over the US as 500 to one in China's favor.

While you could try to build your own GPU array at home and start training a machine-learning model, probably the easiest way to experiment with AI-related services is via the cloud.

All of the major tech firms offer various AI services, from the infrastructure to build and train your own machine-learning models through to web services that allow you to access AI-powered tools such as speech, language, vision and sentiment recognition on demand.

There are too many milestones to put together a comprehensive list, but some recent highlights include: in 2009, Google showed it was possible for its self-driving Toyota Prius to complete more than 10 journeys of 100 miles each -- setting society on a path towards driverless vehicles.

In 2011, the computer system IBM Watson made headlines worldwide when it won the US quiz show Jeopardy!, beating two of the best players the show had ever produced. To win the show, Watson used natural language processing and analytics on vast repositories of data that it processed to answer human-posed questions, often in a fraction of a second.

IBM Watson competes on Jeopardy! on January 14, 2011.

In June 2012, it became apparent just how good machine-learning systems were getting at computer vision, with Google training a system to recognise an internet favorite, pictures of cats.

Since Watson's win, perhaps the most famous demonstration of the efficacy of machine-learning systems was the 2016 triumph of the Google DeepMind AlphaGo AI over a human grandmaster in Go, an ancient Chinese game whose complexity stumped computers for decades. Go has about 200 moves per turn, compared to about 20 in chess. Over the course of a game of Go, there are so many possible moves that searching through each of them in advance to identify the best play is too costly from a computational point of view. Instead, AlphaGo was trained how to play the game by taking moves played by human experts in 30 million Go games and feeding them into deep-learning neural networks.

Training these deep learning networks can take a very long time, requiring vast amounts of data to be ingested and iterated over as the system gradually refines its model in order to achieve the best outcome.

However, more recently Google refined the training process with AlphaGo Zero, a system that played "completely random" games against itself, and then learnt from the results. At last year's prestigious Neural Information Processing Systems (NIPS) conference, Google DeepMind CEO Demis Hassabis revealed AlphaGo had also mastered the games of chess and shogi.

And AI continues to sprint past new milestones: last year, a system trained by OpenAI defeated the world's top players in one-on-one matches of the online multiplayer game Dota 2.

That same year, OpenAI created AI agents that invented their own language to cooperate and achieve their goals more effectively, shortly followed by Facebook training agents to negotiate and even lie.

Robots and driverless cars

The desire for robots to be able to act autonomously and understand and navigate the world around them means there is a natural overlap between robotics and AI. While AI is only one of the technologies used in robotics, use of AI is helping robots move into new areas such as self-driving cars, delivery robots, as well as helping robots to learn new skills. General Motors recently said it would build a driverless car without a steering wheel or pedals by 2019, while Ford committed to doing so by 2021, and Waymo, the self-driving group inside Google parent Alphabet, will soon offer a driverless taxi service in Phoenix.

Fake news

We are on the verge of having neural networks that can create photo-realistic images or replicate someone's voice in a pitch-perfect fashion. With that comes the potential for hugely disruptive social change, such as no longer being able to trust video or audio footage as genuine. Concerns are also starting to be raised about how such technologies will be used to misappropriate people's image, with tools already being created to convincingly splice famous actresses into adult films.

Speech and language recognition

Machine-learning systems have helped computers recognise what people are saying with an accuracy of almost 95 percent. Recently Microsoft's Artificial Intelligence and Research group reported it had developed a system able to transcribe spoken English as accurately as human transcribers.

With researchers pursuing a goal of 99 percent accuracy, expect speaking to computers to become the norm alongside more traditional forms of human-machine interaction.

Facial recognition and surveillance

In recent years, the accuracy of facial-recognition systems has leapt forward, to the point where Chinese tech giant Baidu says it can match faces with 99 percent accuracy, providing the face is clear enough on the video. While police forces in western countries have generally only trialled using facial-recognition systems at large events, in China the authorities are mounting a nationwide program to connect CCTV across the country to facial recognition and to use AI systems to track suspects and suspicious behavior, and are also trialling the use of facial-recognition glasses by police.

Although privacy regulations vary across the world, it's likely this more intrusive use of AI technology -- including AI that can recognize emotions -- will gradually become more widespread elsewhere.


AI could eventually have a dramatic impact on healthcare, helping radiologists to pick out tumors in x-rays, aiding researchers in spotting genetic sequences related to diseases and identifying molecules that could lead to more effective drugs.

There have been trials of AI-related technology in hospitals across the world. These include IBM's Watson clinical decision support tool, which is trained by oncologists at Memorial Sloan Kettering Cancer Center, and the use of Google DeepMind systems by the UK's National Health Service, where it will help spot eye abnormalities and streamline the process of screening patients for head and neck cancers.

Again, it depends who you ask. As AI-powered systems have grown more capable, so warnings of the downsides have become more dire.

Tesla and SpaceX CEO Elon Musk has claimed that AI is a "fundamental risk to the existence of human civilization". As part of his push for stronger regulatory oversight and more responsible research into mitigating the downsides of AI, he set up OpenAI, a non-profit artificial intelligence research company that aims to promote and develop friendly AI that will benefit society as a whole. Similarly, the esteemed physicist Stephen Hawking has warned that once a sufficiently advanced AI is created it will rapidly advance to the point at which it vastly outstrips human capabilities, a phenomenon known as the singularity, and could pose an existential threat to the human race.

Yet the notion that humanity is on the verge of an AI explosion that will dwarf our intellect seems ludicrous to some AI researchers.

Chris Bishop, Microsoft's director of research in Cambridge, England, stresses how different the narrow intelligence of AI today is from the general intelligence of humans, saying that when people worry about "Terminator and the rise of the machines and so on? Utter nonsense, yes. At best, such discussions are decades away."

The possibility of artificially intelligent systems replacing much of modern manual labour is perhaps a more credible near-future possibility.

While AI won't replace all jobs, what seems to be certain is that AI will change the nature of work, with the only question being how rapidly and how profoundly automation will alter the workplace.

There is barely a field of human endeavour that AI doesn't have the potential to impact. As AI expert Andrew Ng puts it: "many people are doing routine, repetitive jobs. Unfortunately, technology is especially good at automating routine, repetitive work." He says he sees a "significant risk of technological unemployment over the next few decades".

The evidence of which jobs will be supplanted is starting to emerge. Amazon has just launched Amazon Go, a cashier-free supermarket in Seattle where customers just take items from the shelves and walk out. What this means for the more than three million people in the US who work as cashiers remains to be seen. Amazon again is leading the way in using robots to improve efficiency inside its warehouses. These robots carry shelves of products to human pickers who select items to be sent out. Amazon has more than 100,000 bots in its fulfilment centers, with plans to add many more. But Amazon also stresses that as the number of bots has grown, so has the number of human workers in these warehouses. However, Amazon and small robotics firms are working to automate the remaining manual jobs in the warehouse, so it's not a given that manual and robotic labor will continue to grow hand-in-hand.

Amazon bought Kiva Systems in 2012 and today uses Kiva robots throughout its warehouses.

The rest is here:

What is AI? Everything you need to know about Artificial ...

Posted in Ai

How AI will automate cybersecurity in the post-COVID world – VentureBeat

By now, it is obvious to everyone that widespread remote working is accelerating the trend of digitization in society that has been happening for decades.

What takes longer for most people to identify are the derivative trends. One such trend is that increased reliance on online applications means that cybercrime is becoming even more lucrative. For many years now, online theft has vastly outstripped physical bank robberies. Willie Sutton said he robbed banks because that's where the money is. If he applied that maxim even 10 years ago, he would definitely have become a cybercriminal, targeting the websites of banks, federal agencies, airlines, and retailers. According to the 2020 Verizon Data Breach Investigations Report, 86% of all data breaches were financially motivated. Today, with so much of society's operations being online, cybercrime is the most common type of crime.

Unfortunately, society isn't evolving as quickly as cybercriminals are. Most people think they are only at risk of being targeted if there is something special about them. This couldn't be further from the truth: Cybercriminals today target everyone. What are people missing? Simply put, the scale of cybercrime is difficult to fathom. The Herjavec Group estimates cybercrime will cost the world over $6 trillion annually by 2021, up from $3 trillion in 2015, but numbers that large can be a bit abstract.

A better way to understand the issue is this: In the future, nearly every piece of technology we use will be under constant attack and this is already the case for every major website and mobile app we rely on.

Understanding this requires a Matrix-like radical shift in our thinking. It requires us to embrace the physics of the virtual world, which break the laws of the physical world. For example, in the physical world, it is simply not possible to try to rob every house in a city on the same day. In the virtual world, it's not only possible, it's being attempted on every house in the entire country. I'm not referring to a diffuse threat of cybercriminals always plotting the next big hacks. I'm describing constant activity that we see on every major website: the largest banks and retailers receive millions of attacks on their users' accounts every day. Just as Google can crawl most of the web in a few days, cybercriminals attack nearly every website on the planet in that time.

The most common type of web attack today is called credential stuffing. This is when cybercriminals take stolen passwords from data breaches and use tools to automatically log in to every matching account on other websites to take over those accounts and steal the funds or data inside them. These account takeover (ATO) events are possible because people frequently reuse their passwords across websites. The spate of gigantic data breaches in the last decade has been a boon for cybercriminals, reducing cybercrime success to a matter of reliable probability: In rough terms, if you can steal 100 users' passwords, then on any given website where you try them, one will unlock someone's account. And data breaches have given cybercriminals billions of users' passwords.
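The "roughly one account per 100 stolen passwords" figure above can be framed as a simple expected-value calculation. The sketch below assumes that rough rate and independent login attempts, purely for illustration:

```python
# Expected account takeovers from credential stuffing, using the rough
# 1%-per-credential success rate described above. Purely illustrative;
# real hit rates vary by site and by breach.

def expected_takeovers(stolen_credentials: int, success_rate: float = 0.01) -> float:
    """Expected number of accounts unlocked when trying each credential once."""
    return stolen_credentials * success_rate

# A breach of 1 million reused passwords, replayed against one other site:
print(expected_takeovers(1_000_000))  # -> 10000.0
```

At that rate, even a single mid-sized breach translates into thousands of takeovers per target site, which is why the attack is worth fully automating.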

Above: Attacks against financial services. Source: F5 Security Incident Response Team, 2017-2019.

What's going on here is that cybercrime is a business, and growing a business is all about scale and efficiency. Credential stuffing is only a viable attack because of the large-scale automation that technology makes possible.

This is where artificial intelligence comes in.

At a basic level, AI uses data to make predictions and then automates actions. This automation can be used for good or evil. Cybercriminals take AI designed for legitimate purposes and use it for illegal schemes. Consider one of the most common defenses attempted against credential stuffing: CAPTCHA. Invented a couple of decades ago, CAPTCHA tries to protect against unwanted bots by presenting a challenge (e.g., reading distorted text) that humans should find easy and bots should find difficult. Unfortunately, cybercriminal use of AI has inverted this. Google did a study a few years ago and found that machine-learning based optical character recognition (OCR) technology could solve 99.8% of CAPTCHA challenges. This OCR, as well as other CAPTCHA-solving technology, is weaponized by cybercriminals who include it in their credential stuffing tools.

Cybercriminals can use AI in other ways too. AI technology has already been created to make cracking passwords faster, and machine learning can be used to identify good targets for attack, as well as to optimize cybercriminal supply chains and infrastructure. We see incredibly fast response times from cybercriminals, who can shut off and restart attacks with millions of transactions in a matter of minutes. They do this with a fully automated attack infrastructure, using the same DevOps techniques that are popular in the legitimate business world. This is no surprise, since running such a criminal system is similar to operating a major commercial website, and cybercrime-as-a-service is now a common business model. AI will be further infused throughout these applications over time to help them achieve greater scale and to make them harder to defend against.

So how can we protect against such automated attacks? The only viable answer is automated defenses on the other side. Here's what that evolution will look like as a progression:

Right now, the long tail of organizations is at level 1, but sophisticated organizations are typically somewhere between levels 3 and 4. In the future, most organizations will need to be at level 5. Getting there successfully across the industry requires companies to evolve past old thinking. Companies with the "war for talent" mindset of hiring huge security teams have started pivoting to also hire data scientists to build their own AI defenses. This might be a temporary phenomenon: While corporate anti-fraud teams have been using machine learning for more than a decade, the traditional information security industry has only flipped in the past five years from curmudgeonly cynicism about AI to excitement, so they might be over-correcting.

But hiring a large AI team is unlikely to be the right answer, just as you wouldn't hire a team of cryptographers. Such approaches will never reach the efficacy, scale, and reliability required to defend against constantly evolving cybercriminal attacks. Instead, the best answer is to insist that the security products you use integrate with your organizational data to be able to do more with AI. Then you can hold vendors accountable for false positives and false negatives, and the other challenges of getting value from AI. After all, AI is not a silver bullet, and it's not sufficient to simply be using AI for defense; it has to be effective.

The best way to hold vendors accountable for efficacy is by judging them based on ROI. One of the beneficial side effects of cybersecurity becoming more of an analytics and automation problem is that the performance of all parties can be more granularly measured. When defensive AI systems create false positives, customer complaints rise. When there are false negatives, ATOs increase. And there are many other intermediate metrics companies can track as cybercriminals iterate with their own AI-based tactics.
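The measurements described here, customer complaints from false positives and ATOs from false negatives, map onto standard precision and recall calculations. The counts in the sketch below are hypothetical, purely to show how such tracking might look:

```python
# A minimal sketch of the efficacy metrics described above: judging a
# defensive AI system by its false positives (legitimate users wrongly
# blocked) and false negatives (attacks missed). All counts are hypothetical.

def precision(true_pos: int, false_pos: int) -> float:
    """Fraction of blocked logins that were actually attacks."""
    return true_pos / (true_pos + false_pos)

def recall(true_pos: int, false_neg: int) -> float:
    """Fraction of attacks that were actually blocked."""
    return true_pos / (true_pos + false_neg)

# Hypothetical month of traffic: 9,800 attacks blocked, 200 legitimate
# users wrongly blocked (complaints), 50 attacks missed (ATOs).
print(round(precision(9_800, 200), 3))  # -> 0.98
print(round(recall(9_800, 50), 4))      # -> 0.9949
```

Tracking both numbers over time is what lets a customer tie a vendor's AI claims back to ROI rather than marketing.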

If you're surprised that the post-COVID Internet sounds like it's going to be a Terminator-style battle of good AI vs. evil AI, I have good news and bad news. The bad news is, we're already there to a large extent. For example, among major retail sites today, around 90% of login attempts typically come from cybercriminal tools.

But maybe that's the good news, too, since the world obviously hasn't fallen apart yet. This is because the industry is moving in the right direction, learning quickly, and many organizations already have effective AI-based defenses in place. But more work is required in terms of technology development, industry education, and practice. And we shouldn't forget that sheltering-in-place has given cybercriminals more time in front of their computers too.

Shuman Ghosemajumder is Global Head of AI at F5. He was previously CTO of Shape Security, which was acquired by F5 in 2020, and was Global Head of Product for Trust & Safety at Google.

Read more:

How AI will automate cybersecurity in the post-COVID world - VentureBeat

Posted in Ai

3 Predictions For The Role Of Artificial Intelligence In Art And Design – Forbes

Christie's made headlines in 2018 when it became the first auction house to sell a painting created by AI. The painting, named Portrait of Edmond de Belamy, ended up selling for a cool $432,500, but more importantly, it demonstrated how intelligent machines are now perfectly capable of creating artwork.


It was only a matter of time, I suppose. Thanks to AI, machines have been able to learn more and more human functions, including the ability to see (think facial recognition technology), speak and write (chatbots being a prime example). Learning to create is a logical step on from mastering the basic human abilities. But will intelligent machines really rival humans' remarkable capacity for creativity and design? To answer that question, here are my top three predictions for the role of AI in art and design.

1. Machines will be used to enhance human creativity (enhance being the key word)

Until we can fully understand the brain's creative thought processes, it's unlikely machines will learn to replicate them. As yet, there's still much we don't understand about human creativity: the inspired ideas that pop into our brain seemingly out of nowhere, the "eureka!" moments of clarity that stop us in our tracks. Much of that thought process remains a mystery, which makes it difficult to replicate the same creative spark in machines.

Typically, then, machines have to be told what to create before they can produce the desired end result. The AI painting that sold at auction? It was created by an algorithm that had been trained on 15,000 pre-20th century portraits, and was programmed to compare its own work with those paintings.

The takeaway from this is that AI will largely be used to enhance human creativity, not replicate or replace it, a process known as "co-creativity." As an example of AI improving the creative process, IBM's Watson AI platform was used to create the first-ever AI-generated movie trailer, for the horror film Morgan. Watson analyzed visuals, sound, and composition from hundreds of other horror movie trailers before selecting appropriate scenes from Morgan for human editors to compile into a trailer. This reduced a process that usually takes weeks down to one day.

2. AI could help to overcome the limits of human creativity

Humans may excel at making sophisticated decisions and pulling ideas seemingly out of thin air, but human creativity does have its limitations. Most notably, we're not great at producing a vast number of possible options and ideas to choose from. In fact, as a species, we tend to get overwhelmed and less decisive the more options we're faced with! This is a problem for creativity because, as American chemist Linus Pauling, the only person to have won two unshared Nobel Prizes, put it: "You can't have good ideas unless you have lots of ideas." This is where AI can be of huge benefit.

Intelligent machines have no problem coming up with infinite possible solutions and permutations, and then narrowing the field down to the most suitable options, the ones that best fit the human creative's vision. In this way, machines could help us come up with new creative solutions that we couldn't possibly have come up with on our own.

For example, award-winning choreographer Wayne McGregor has collaborated with Google Arts & Culture Lab to come up with new, AI-driven choreography. An AI algorithm was trained on thousands of hours of McGregor's videos, spanning 25 years of his career, and as a result, the program came up with 400,000 McGregor-like sequences. In McGregor's words, the tool "gives you all of these new possibilities you couldn't have imagined."

3. Generative design is one area to watch

Much like in the creative arts, the world of design will likely shift towards greater collaboration between humans and AI. This brings us to generative design a cutting-edge field that uses intelligent software to enhance the work of human designers and engineers.

Very simply, the human designer inputs their design goals, specifications, and other requirements, and the software takes over to explore all possible designs that meet those criteria. Generative design could be utterly transformative for many industries, including architecture, construction, engineering, manufacturing, and consumer product design.
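The generate-then-filter loop described above can be sketched as a toy random search. Real generative-design tools use physics simulation and rich geometry; the "chair" parameters and thresholds below are invented purely for illustration:

```python
# A toy generative-design loop in the spirit described above: the human
# supplies constraints, the software proposes many candidates and keeps
# only those that satisfy them. The three-number "chair" is a stand-in
# for real parametric geometry.
import random

def generate_candidates(n: int, seed: int = 42) -> list:
    """Randomly propose n candidate designs (material used, load capacity, height)."""
    rng = random.Random(seed)
    return [
        {
            "material_kg": rng.uniform(1.0, 10.0),
            "load_kg": rng.uniform(50.0, 200.0),
            "height_cm": rng.uniform(40.0, 50.0),
        }
        for _ in range(n)
    ]

def meets_spec(design: dict) -> bool:
    """Hypothetical designer constraints: strong enough, using little material."""
    return design["load_kg"] >= 120.0 and design["material_kg"] <= 4.0

candidates = generate_candidates(10_000)
viable = [d for d in candidates if meets_spec(d)]

# The software narrows thousands of options down to a shortlist
# for the human designer to choose from.
print(len(candidates), len(viable) > 0)
```

The human's role is unchanged at both ends of the loop: setting the goals up front and picking the winning design at the end.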

In one exciting example of generative design, renowned designer Philippe Starck collaborated with software company Autodesk to create a new chair design. Starck and his team set out the overarching vision for the chair and fed the AI system questions like, "Do you know how we can rest our bodies using the least amount of material?" From there, the software came up with multiple suitable designs to choose from. The final design, an award-winning chair named "AI," debuted at Milan Design Week in 2019.

Machine co-creativity is just one of 25 technology trends that I believe will transform our society. Read more about these key trends including plenty of real-world examples in my new books, Tech Trends in Practice: The 25 Technologies That Are Driving The 4th Industrial Revolution and The Intelligence Revolution: Transforming Your Business With AI.

Here is the original post:

3 Predictions For The Role Of Artificial Intelligence In Art And Design - Forbes

Posted in Ai

This know-it-all AI learns by reading the entire web nonstop – MIT Technology Review

This is a problem if we want AIs to be trustworthy. That's why Diffbot takes a different approach. It is building an AI that reads every page on the entire public web, in multiple languages, and extracts as many facts from those pages as it can.

Like GPT-3, Diffbots system learns by vacuuming up vast amounts of human-written text found online. But instead of using that data to train a language model, Diffbot turns what it reads into a series of three-part factoids that relate one thing to another: subject, verb, object.

Pointed at my bio, for example, Diffbot learns that Will Douglas Heaven is a journalist; Will Douglas Heaven works at MIT Technology Review; MIT Technology Review is a media company; and so on. Each of these factoids gets joined up with billions of others in a sprawling, interconnected network of facts. This is known as a knowledge graph.
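The subject-verb-object factoids can be pictured as a tiny in-memory knowledge graph. Diffbot's actual data structures are not public, so the class below is only a toy illustration, using the example facts from the article:

```python
# A toy version of the subject-verb-object factoids described above,
# joined into a queryable in-memory knowledge graph. Diffbot's real
# representation is far larger and proprietary; this shows only the idea.
from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        self.triples = set()                 # all (subject, verb, object) facts
        self.by_subject = defaultdict(list)  # index for fast lookup by subject

    def add(self, subject: str, verb: str, obj: str) -> None:
        triple = (subject, verb, obj)
        if triple not in self.triples:       # joining up facts, skipping duplicates
            self.triples.add(triple)
            self.by_subject[subject].append((verb, obj))

    def facts_about(self, subject: str) -> list:
        return self.by_subject[subject]

kg = KnowledgeGraph()
kg.add("Will Douglas Heaven", "is", "a journalist")
kg.add("Will Douglas Heaven", "works at", "MIT Technology Review")
kg.add("MIT Technology Review", "is", "a media company")

print(kg.facts_about("Will Douglas Heaven"))
# -> [('is', 'a journalist'), ('works at', 'MIT Technology Review')]
```

Because each factoid is stored as structured data rather than free text, answering a query becomes an index lookup instead of a language-model guess.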

Knowledge graphs are not new. They have been around for decades, and were a fundamental concept in early AI research. But constructing and maintaining knowledge graphs has typically been done by hand, which is hard. This also stopped Tim Berners-Lee from realizing what he called the semantic web, which would have included information for machines as well as humans, so that bots could book our flights, do our shopping, or give smarter answers to questions than search engines.

A few years ago, Google started using knowledge graphs too. Search for Katy Perry and you will get a box next to the main search results telling you that Katy Perry is an American singer-songwriter with music available on YouTube, Spotify, and Deezer. You can see at a glance that she is married to Orlando Bloom, she's 35 and worth $125 million, and so on. Instead of giving you a list of links to pages about Katy Perry, Google gives you a set of facts about her drawn from its knowledge graph.

But Google only does this for its most popular search terms. Diffbot wants to do it for everything. By fully automating the construction process, Diffbot has been able to build what may be the largest knowledge graph ever.

Alongside Google and Microsoft, it is one of only three US companies that crawl the entire public web. "It definitely makes sense to crawl the web," says Victoria Lin, a research scientist at Salesforce who works on natural-language processing and knowledge representation. "A lot of human effort can otherwise go into making a large knowledge base." Heiko Paulheim at the University of Mannheim in Germany agrees: "Automation is the only way to build large-scale knowledge graphs."

To collect its facts, Diffbot's AI reads the web as a human would, but much faster. Using a super-charged version of the Chrome browser, the AI views the raw pixels of a web page and uses image-recognition algorithms to categorize the page as one of 20 different types, including video, image, article, event, and discussion thread. It then identifies key elements on the page, such as headline, author, product description, or price, and uses NLP to extract facts from any text.
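That classify-then-extract pipeline can be sketched in miniature. The real system classifies from rendered pixels with image recognition; the stub below, with invented page types and cues, classifies from simple fields instead, purely to show the shape of the process:

```python
# A toy version of the page-processing pipeline described above:
# classify a page into a type, then pull out type-specific fields.
# The page types and classification cues here are invented stand-ins.

def classify_page(page: dict) -> str:
    """Crude stand-in for Diffbot's 20-way page classifier."""
    if "price" in page:
        return "product"
    if "headline" in page:
        return "article"
    return "discussion"

def extract_fields(page: dict) -> dict:
    """Extract the key elements appropriate to the detected page type."""
    page_type = classify_page(page)
    if page_type == "product":
        return {"type": "product", "name": page.get("title"), "price": page.get("price")}
    if page_type == "article":
        return {"type": "article", "headline": page.get("headline"), "author": page.get("author")}
    return {"type": page_type}

page = {"headline": "AI reads the web", "author": "W. D. Heaven"}
print(extract_fields(page))
# -> {'type': 'article', 'headline': 'AI reads the web', 'author': 'W. D. Heaven'}
```

The extracted fields are what feed the subject-verb-object factoids described earlier; classification simply tells the extractor which fields to look for.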

Every three-part factoid gets added to the knowledge graph. Diffbot extracts facts from pages written in any language, which means that it can answer queries about Katy Perry, say, using facts taken from articles in Chinese or Arabic even if they do not contain the term Katy Perry.

Browsing the web like a human lets the AI see the same facts that we see. It also means it has had to learn to navigate the web like us. The AI must scroll down, switch between tabs, and click away pop-ups. "The AI has to play the web like a video game just to experience the pages," says Tung.

Diffbot crawls the web nonstop and rebuilds its knowledge graph every four to five days. According to Tung, the AI adds 100 million to 150 million entities each month as new people pop up online, companies are created, and products are launched. It uses more machine-learning algorithms to fuse new facts with old, creating new connections or overwriting out-of-date ones. Diffbot has to add new hardware to its data center as the knowledge graph grows.

Researchers can access Diffbot's knowledge graph for free. But Diffbot also has around 400 paying customers. The search engine DuckDuckGo uses it to generate its own Google-like boxes. Snapchat uses it to extract highlights from news pages. The popular wedding-planner app Zola uses it to help people make wedding lists, pulling in images and prices. NASDAQ, which provides information about the stock market, uses it for financial research.

Adidas and Nike even use it to search the web for counterfeit shoes. A search engine will return a long list of sites that mention Nike trainers. But Diffbot lets these companies look for sites that are actually selling their shoes, rather than just talking about them.

For now, these companies must interact with Diffbot using code. But Tung plans to add a natural-language interface. Ultimately, he wants to build what he calls a universal factoid question answering system: an AI that could answer almost anything you asked it, with sources to back up its response.

Tung and Lin agree that this kind of AI cannot be built with language models alone. But better yet would be to combine the technologies, using a language model like GPT-3 to craft a human-like front end for a know-it-all bot.

Still, even an AI that has its facts straight is not necessarily smart. "We're not trying to define what intelligence is, or anything like that," says Tung. "We're just trying to build something useful."

See the article here:

This know-it-all AI learns by reading the entire web nonstop - MIT Technology Review

Posted in Ai

Diffbot attempts to create smarter AI that can discern between fact and misinformation – The Financial Express

The better part of the early 2000s was spent creating artificial intelligence (AI) systems that could pass the Turing Test; the test is designed to determine if an AI can trick a human into believing that it is a human. Now, companies are in a race to create a smarter AI that is more knowledgeable and trustworthy. A few months ago, OpenAI showcased GPT-3, a much smarter version of its AI bot, and now, as per a report in MIT Technology Review, Diffbot is working on a system that can surpass the capabilities of GPT-3.

Diffbot is expected to be a smarter system, as it works by reading a page as a human does. Using this technology, it can create knowledge graphs, which will contain verifiable facts. One of the problems that constant testing of GPT-3 reveals is that you still need a human to cross-verify the information it is collecting. Diffbot is trying to make the process more autonomous. The use of knowledge graphs is not unique to Diffbot; Google also uses them. The success of Diffbot will depend on how accurately it can differentiate between information and misinformation.

Given that it will apply natural language processing and image recognition to billions of web pages, the knowledge graph it builds will be galactic. It will join Google and Microsoft in crawling nearly the entire public web. Its non-stop crawling of the web means it rebuilds its knowledge graph periodically, incorporating new information. If it can sift through data to verify information, it will indeed be a victory for internet companies looking to make their platforms more reliable.


See the rest here:

Diffbot attempts to create smarter AI that can discern between fact and misinformation - The Financial Express

Posted in Ai

MQ-9 Reaper Flies With AI Pod That Sifts Through Huge Sums Of Data To Pick Out Targets – The Drive

General Atomics says that it has successfully integrated and flight-tested Agile Condor, a podded, artificial intelligence-driven targeting computer, on its MQ-9 Reaper drone as part of a technology demonstration effort for the U.S. Air Force. The system is designed to automatically detect, categorize, and track potential items of interest. It could be an important stepping stone to giving various types of unmanned, as well as manned aircraft, the ability to autonomously identify potential targets, and determine which ones might be higher priority threats, among other capabilities.

The California-headquartered drone maker announced the Agile Condor tests on Sept. 3, 2020, but did not say when they had taken place. The Reaper with the pod attached conducted the flight testing from General Atomics Aeronautical Systems, Inc.'s (GA-ASI) Flight Test and Training Center in Grand Forks, North Dakota.

Computing at the edge has tremendous implications for future unmanned systems, GA-ASI President David R. Alexander said in a statement. GA-ASI is committed to expanding artificial intelligence capabilities on unmanned systems and the Agile Condor capability is proof positive that we can accurately and effectively shorten the observe, orient, decide and act cycle to achieve information superiority. GA-ASI is excited to continue working with AFRL [Air Force Research Laboratory] to advance artificial intelligence technologies that will lead to increased autonomous mission capabilities."

Defense contractor SRC, Inc. developed the Agile Condor system for the Air Force Research Laboratory (AFRL), delivering the first pod in 2016. It's not clear whether the Air Force conducted any flight testing of the system on other platforms before hiring General Atomics to integrate it onto the Reaper in 2019. The service had previously said that it expected to take the initial pod aloft in some fashion before the end of 2016.

"Sensors have rapidly increased in fidelity, and are now able to collect vast quantities of data, which must be analyzed promptly to provide mission critical information," an SRC white paper on Agile Condor from 2018 explains. "Stored data [physically on a drone] ... creates an unacceptable latency between data collection and analysis, as operators must wait for the RPA [remotely piloted aircraft] to return to base to review time sensitive data."

"In-mission data transfers, by contrast, can provide data more quickly, but this method requires more power and available bandwidth to send data," the white paper continues. "Bandwidth limits result in slower downloads of large data files, a clogged communications link and increased latency that could allow potential changes in intel between data collection and analysis. The quantities of data being collected are also so vast, that analysts are unable to fully review the data received to ensure actionable information is obtained."

This is all particularly true for drones equipped with wide-area persistent surveillance systems, such as the Air Force's Gorgon Stare system, that grab immense amounts of imagery that can be overwhelming for sensor operators and intelligence analysts to scour through. Agile Condor is designed to parse through the sensor data a drone collects first, spotting and classifying objects of interest and then highlighting them for operators back at a control center, or for personnel receiving information at other remote locations, for further analysis. Agile Condor would simply discard "empty" imagery and other data that shows nothing it deems useful, not even bothering to forward that on.

"This selective 'detect and notify' process frees up bandwidth and increases transfer speeds, while reducing latency between data collection and analysis," SRC's 2018 white paper says. "Real time pre-processing of data with the Agile Condor system also ensures that all data collected is reviewed quickly, increasing the speed and effectiveness with which operators are notified of actionable information."

Here is the original post:

MQ-9 Reaper Flies With AI Pod That Sifts Through Huge Sums Of Data To Pick Out Targets - The Drive

Posted in Ai

The fourth generation of AI is here, and its called Artificial Intuition – The Next Web

Artificial Intelligence (AI) is one of the most powerful technologies ever developed, but it's not nearly as new as you might think. In fact, it's undergone several evolutions since its inception in the 1950s. The first generation of AI was descriptive analytics, which answers the question, "What happened?" The second, diagnostic analytics, addresses, "Why did it happen?" The third and current generation is predictive analytics, which answers the question, "Based on what has already happened, what could happen in the future?"

While predictive analytics can be very helpful and save time for data scientists, it is still fully dependent on historic data. Data scientists are therefore left helpless when faced with new, unknown scenarios. In order to have true artificial intelligence, we need machines that can think on their own, especially when faced with an unfamiliar situation. We need AI that can not just analyze the data it is shown, but express a gut feeling when something doesn't add up. In short, we need AI that can mimic human intuition. Thankfully, we have it.

What is Artificial Intuition?

The fourth generation of AI is artificial intuition, which enables computers to identify threats and opportunities without being told what to look for, just as human intuition allows us to make decisions without specifically being instructed on how to do so. It's similar to a seasoned detective who can enter a crime scene and know right away that something doesn't seem right, or an experienced investor who can spot a coming trend before anybody else. The concept of artificial intuition is one that, just five years ago, was considered impossible. But now companies like Google, Amazon and IBM are working to develop solutions, and a few companies have already managed to operationalize it.

How Does It Work?

So, how does artificial intuition accurately analyze unknown data without any historical context to point it in the right direction? The answer lies within the data itself. Once presented with a current dataset, the complex algorithms of artificial intuition are able to identify any correlations or anomalies between data points.

Of course, this doesn't happen automatically. First, instead of building a quantitative model to process the data, artificial intuition applies a qualitative model. It analyzes the dataset and develops a contextual language that represents the overall configuration of what it observes. This language uses a variety of mathematical models such as matrices, Euclidean and multidimensional space, linear equations and eigenvalues to represent the big picture. If you envision the big picture as a giant puzzle, artificial intuition is able to see the completed puzzle right from the start, and then work backward to fill in the gaps based on the interrelationships of the eigenvectors.

In linear algebra, an eigenvector is a nonzero vector that changes by at most a scalar factor (its direction does not change) when a linear transformation is applied to it. The corresponding eigenvalue is the factor by which the eigenvector is scaled. In concept, this provides a guidepost for visualizing anomalous identifiers. Any eigenvectors that do not fit correctly into the big picture are then flagged as suspicious.
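
The eigenvector/eigenvalue relationship itself is concrete and easy to demonstrate. Below is a minimal, pure-Python sketch using power iteration (a standard textbook method, not anything specific to the products discussed here) to find the dominant eigenpair of a small matrix and verify that A v = λ v.

```python
# Power iteration: repeatedly apply A to a vector; it converges to the
# dominant eigenvector, and the scaling factor converges to its eigenvalue.

def mat_vec(A, v):
    """Multiply matrix A (list of rows) by vector v."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def power_iteration(A, iters=100):
    v = [1.0] * len(A)
    for _ in range(iters):
        w = mat_vec(A, v)
        norm = max(abs(x) for x in w)
        v = [x / norm for x in w]          # renormalize each step
    # Rayleigh-quotient estimate of the eigenvalue
    w = mat_vec(A, v)
    lam = sum(wi * vi for wi, vi in zip(w, v)) / sum(vi * vi for vi in v)
    return lam, v

A = [[2.0, 1.0],
     [1.0, 2.0]]      # symmetric matrix with eigenvalues 3 and 1
lam, v = power_iteration(A)
print(round(lam, 6))  # -> 3.0
# Check A v ≈ λ v componentwise: the transformation only rescales v
print(all(abs(av - lam * vi) < 1e-6 for av, vi in zip(mat_vec(A, v), v)))
```

A vector that is *not* (close to) an eigenvector changes direction under A, which is the intuition behind flagging ill-fitting eigenvectors as anomalies.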

How Can It Be Used?

Artificial intuition can be applied to virtually any industry, but is currently making considerable headway in financial services. Large global banks are increasingly using it to detect sophisticated new financial cybercrime schemes, including money laundering, fraud and ATM hacking. Suspicious financial activity is usually hidden among thousands upon thousands of transactions that have their own set of connected parameters. By using extremely complicated mathematical algorithms, artificial intuition rapidly identifies the five most influential parameters and presents them to analysts.

In 99.9% of cases, when analysts see the five most important ingredients and interconnections out of tens of hundreds, they can immediately identify the type of crime being presented. So artificial intuition has the ability to produce the right type of data, identify the data, detect with a high level of accuracy and low level of false positives, and present it in a way that is easily digestible for the analysts.

By uncovering these hidden relationships between seemingly innocent transactions, artificial intuition is able to detect and alert banks to the unknown unknowns (previously unseen and therefore unexpected attacks). Not only that, but the data is explained in a way that is traceable and logged, enabling bank analysts to prepare enforceable suspicious activity reports for the Financial Crimes Enforcement Network (FinCEN).

How Will It Affect the Workplace?

Artificial intuition is not intended to serve as a replacement for human instinct. It is just an additional tool that helps people perform their jobs more effectively. In the banking example outlined above, artificial intuition isnt making any final decisions on its own; its simply presenting an analyst with what it believes to be criminal activity. It remains the analysts job to review the identified transactions and confirm the machines suspicions.

AI has certainly come a long way since Alan Turing first presented the concept back in the 1950s, and it is not showing any sign of slowing down. Previous generations were just the tip of the iceberg. Artificial intuition marks the point when AI truly became intelligent.


Published September 3, 2020 17:00 UTC

Read this article:

The fourth generation of AI is here, and its called Artificial Intuition - The Next Web

Posted in Ai

Catalyst of change: Bringing artificial intelligence to the forefront – The Financial Express

Artificial Intelligence (AI) has been much talked about over the last few years. Several interpretations of the potential of AI and its outcomes have been shared by technologists and futurologists. With the focus on the customer, the possibilities range from predicting trends to recommending actions to prescribing solutions.

The potential for change due to AI applications is energised by several factors. The first is the concept of AI itself which is not a new phenomenon. Researchers, cognitive specialists and hi-tech experts working with complex data for decades in domains such as space, medicine and astrophysics have used data to help derive deep insights to predict trends and build futuristic models.

AI has now moved out of the realms of research labs to the commercial world and everyday life due to three key levers. Innovation and technology advancements in hardware, telecommunications and software have been the catalysts in bringing AI to the forefront and attempting to go beyond the frontiers of data and analytics.

What was once seen as a big breakthrough, the ability to analyse data as if-else-then scenarios, transitioned to machine learning, with the capability to deal with hundreds of variables but mostly structured data sets. Handcrafted techniques using algorithms did find ways to convert unstructured data to structured data, but there are limits to the volumes of data that machine learning can handle.

With 80% of the data being unstructured, and with the realisation that the real value of data analysis would be possible only when both structured and unstructured data are synthesised, there came deep learning, which is capable of handling thousands of factors and is able to draw inferences from tens of billions of data points (voice, image, video and queries) each day. Techniques for determining patterns in unstructured data (multilingual text, multi-modal speech, vision) have been maturing, making recommendation engines more effective.

Another important factor that is rapidly aiding the adoption of AI is the evolution seen in hardware. CPUs (central processing units) today are versatile and designed for handling sequential code, not for addressing massively parallel problems. This is where GPUs (graphical processing units), which were hitherto considered primarily for applications such as gaming, are now being deployed to address the needs of commercial establishments, governments and other domains dealing with gigantic volumes of data, supporting their needs for parallel processing in areas such as smart parking, retail analytics, intelligent traffic systems and others. Such compute-intensive functions, requiring massive problems to be broken up into smaller ones that can be parallelised, are finding efficient hardware and hosting options in the cloud.

Therefore the key drivers for this major transition are the evolution of hardware and hosting on the cloud, sophisticated tools and software to capture, store and analyse the data as well as a variety of devices that keep us always connected and support in the generation of humungous volumes of data. These dimensions along with advances in telecommunications will continue to evolve, making it possible for commercial establishments, governments and society to arrive at solutions that deliver superior experiences for the common man. Whether it is agriculture, health, decoding crimes, transportation or maintenance of law and order, we have already started seeing the play of digital technologies and democratisation of AI would soon become a reality.

The writer is chairperson, Global Talent Track, a corporate training solutions company


View original post here:

Catalyst of change: Bringing artificial intelligence to the forefront - The Financial Express

Posted in Ai

The Impact of Artificial Intelligence on Workspaces – Forbes

Intelligent and intelligible office buildings

It is a truth universally acknowledged that artificial intelligence will change everything. In the next few decades, the world will become intelligible, and in many ways, intelligent. But insiders suggest that the world of big office real estate will get there more slowly, at least in the world's major cities.

The real estate industry in London, New York, Hong Kong and other world cities moves in cycles of 10 or 15 years. This is the period of the lease. After a tense renewal negotiation, and perhaps a big row, landlord and tenant are generally happy to leave each other alone until the next time. This does not encourage innovation, or investment in new services in between the renewals. There are alternatives to this arrangement. In Scandinavia, for instance, lease durations are shorter - often three years or so. This encourages a more collegiate working relationship, where landlord and tenant are more like business partners.

Another part of the pathology of major city real estate is the landmark building. With the possible exception of planners, everyone likes grand buildings: certainly, architects, developers, and the property managers and CEOs of big companies do. A mutual appreciation society is formed, which is less concerned about the impact on a business than about appearing in the right magazines, and winning awards.

Outside the big cities, priorities are different. To attract a major tenant to Dixons' old headquarters in Hemel Hempstead, for instance, the landlord will need to seduce with pragmatism rather than glamour.

Tim Oldman is the founder and CEO of Leesman, a firm which helps clients understand how to manage their workspaces in the best interests of their staff and their businesses. He says there is plenty of opportunity for AI to enhance real estate, and much of the impetus for it to happen will come from the employees who work in office buildings rather than the developers who design and build them. Employees, the actual users of buildings, will be welcoming AI into many corners of their lives in the coming years and decades, often without realising it. They will expect the same convenience and efficiency at work that they experience at home and when travelling. They will demand more from their employers and their landlords.

Christina Wood is responsible for two of Emap's conferences on the office sector: Property Week's annual flagship event WorkSpace, and AV Magazine's new annual event AVWorks, which explores the changing role of AV in the workspace. She says that workspaces are undergoing an evolution that increasingly looks like a revolution, powered by technology innovation and driven by workforce demands for flexibility, connectivity, safety and style.

Buildings should be smart, and increasingly they will be. Smart buildings will be a major component of smart cities, a phenomenon which we have been hearing about since the end of the last century, and which will finally start to become a reality in the coming decade, enabled in part by 5G.

Buildings should know what load they are handling at any given time. They should provide the right amount of heat and light: not too little and not too much. The air conditioning should not go off at 7pm when an after-hours conference is in full flow. They should monitor noise levels, and let occupants know where the quiet places are, if they ask. They should manage the movement of water and waste intelligently. All this and much more is possible, given enough sensors, and a sensible approach to the use of data.

Imagine we are colleagues who usually work in different buildings. Today we are both in the head office, and our calendars show that we have scheduled a meeting. An intelligent building could suggest workspaces near to each other. Tim Oldman calls this "assisted serendipity."

Generation Z is coming into the workplace. They are not naive about data and the potential for its mis-use, but they are more comfortable with sharing it in return for a defined benefit. Older generations are somewhat less trusting. We expect our taxi firm to know when we will be exiting the building, and to have a car waiting. But we are suspicious if the building wants to know our movements. Employees in Asian countries show more trust than those in France and Germany, say, with the US and the UK in between.

Robotic process automation, or RPA, can make mundane office interactions smoother and more efficient. But we will want it to be smart. IT helpdesks should not be rewarded for closing a ticket quickly, but for solving your problem in a way which means you won't come back with the same problem a week later, and neither will anyone else.

That said, spreadsheet-driven efficiency is not always the best solution. Face-to-face genius bar-style helpdesks routinely deliver twice the level of customer satisfaction as the same service delivered over the phone, even when they use exactly the same people, the same technology, and the same infrastructure. There is a time and place for machines, and a time and a place for humans.

Rolls Royce is said to make more money from predictive maintenance plans than it makes by selling engines. Sensors in their engines relay huge volumes of real-time data about each engine component to headquarters in Derby. If a fault is developing, they can often have the relevant spare part waiting at the next airport before the pilot even knows there's a problem. One day, buildings will operate this way too.

The technology to enable these services is not cheap today, and an investment bank or a top management consultancy can offer their employees features which will not be available for years to workers in the garment industry in the developing world. There will be digital divides, but the divisions will be constantly changing, with laggards catching up, and sometimes overtaking, as they leapfrog legacy infrastructures. China is a world leader in smartphone payment apps partly because its banking infrastructure was so poor.

Covid will bring new pressure to bear on developers and landlords. Employees will demand biosecurity measures such as the provision of air that is fresh and filtered, not re-circulated. They may want to know how many people are in which parts of the building, to help them maintain physical distancing. This means more sensors, and more data.

The great unplanned experiment in working from home which we are all engaged in thanks to covid-19 will probably result in a blended approach to office life in the future. Working from home suits some people very well, reducing commuting time, and enabling them to spend more time with their families. But others miss the decompression that commuting allows, and many of us don't have good working environments at home. In the winter, many homes are draughty, and the cost of heating them all day long can be considerable.

Tim Oldman thinks the net impact on demand for office space will probably be a slight reduction overall, and a new mix of locations. There are indications that companies will provide satellite offices closer to where their people live, perhaps sharing space with workers from other firms. This is the same principle as the co-working facilities provided by WeWork and Regus, but whereas those companies have buildings in city centres, there will be a new demand for space on local High Streets.

Retail banks have spotted this as an opportunity, a way of using the branch network which they have been shrinking as people shift to online banking. Old bank branches can be transformed into safe and comfortable satellite offices, and restore some life to tired suburban streets. Companies will have to up their game to co-ordinate this more flexible approach, and landlords will need to help them. They will need to collect and analyse information about where their people are each day, and develop and refine algorithms to predict where they will be tomorrow.

Some employers will face a crisis of trust as we emerge from the pandemic. Millions of us have been trusted to work from home, and to the surprise of more than a few senior managers, it has mostly worked well. Snatching back the laptop and demanding that people come straight back to the office is not a good idea. Companies will adopt different approaches, and some will be more successful than others. Facebook has told its staff they can work from wherever they want, but their salary will be adjusted downwards if they leave the Bay Area. Google has simply offered every employee $1,000 to make their home offices more effective.

The way we work is being changed by lessons learned during the pandemic, and by the deployment of AI throughout the economy. Builders and owners of large office buildings must not get left behind.

Read the rest here:

The Impact of Artificial Intelligence on Workspaces - Forbes

Posted in Ai

We May Be Losing The Race For AI With China: Bob Work – Breaking Defense

Robert Work, former DoD deputy secretary

UPDATED from further Work remarks WASHINGTON: The former deputy secretary of defense who launched Project Maven and jumpstarted the Pentagon's push for artificial intelligence says the Defense Department is not doing enough. Bob Work made the case that the Pentagon needs to adopt AI with the same bureaucracy-busting urgency the Navy seized on nuclear power in the 1950s, with the Joint Artificial Intelligence Center acting as the whip the way Adm. Hyman Rickover did during the Cold War.

"There has to be this top-down sense of urgency," Work told the AFCEA AI+ML conference today. "One thousand flowers blooming will work over time, but it won't [work] as fast as we need to go."

Work, now vice-chair of the congressionally chartered National Security Commission on Artificial Intelligence, told the conference yesterday that China, and to a lesser extent Russia, could overtake the US in military AI and automation. To keep them at bay, he said, the US needs to undertake three major reforms:

Work added Wednesday that the US should also consider replicating the Chinese model of a single unified Strategic Support Force overseeing satellites, cyberspace, electronic warfare, and information warfare, functions that the US splits between Space Command, Cyber Command, and other agencies. Given how interdependent these functions are in the modern world, he said, "I think the unified Strategic Support Force is a better way to go, but this is something that would need to be analyzed, wargamed, experimented with."

Adm. Hyman Rickover in civilian clothes.

Rickover, Reprise?

"We're all saying the right things: AI is absolutely important. It's going to give us an advantage on the battlefield for years to come," Work said. "But the key thing is, where is our sense of urgency? We may be losing the race, due to our own lack of urgency."

For the US to keep up requires not only funding, Work said, but also a new sense of urgency and new forms of organization: "I would recommend that we adopt a Naval Reactors-type model."

At the dawn of the nuclear era, Congress promoted Hyman Rickover over the heads of more-tradition-minded admirals and empowered him as chief of Naval Reactors, which set strict technical standards for the training of nuclear personnel and construction of nuclear vessels. The remit of NR extended not only across the Navy but into the Energy Department, giving it an extraordinary independence from both military and civilian oversight.

How would this model apply to AI? Work proposes giving the Joint Artificial Intelligence Center (to be renamed the Joint Autonomy & AI Center) the role of "systems architect" for "human-machine collaborative battle networks," the most important AI-enabled applications. To unpack this jargon, the JAIC/JAAIC would effectively set the technical standards for most military AI projects and control how they fit together into an all-encompassing Joint All Domain Command & Control (JADC2) system sharing data across land, sea, air, space, and cyberspace.

Cross Functional Teams run by the JAIC for different aspects of AI would have to certify that any specific program had significant joint impact for it to be eligible for a share of the added $7 billion in AI funding, Work said. In some cases, he said, the JAIC could compel the services to invest in AI upgrades that they might not want to find room for in their budgets, but which would work best if everyone adopted them, much as then-Defense Secretary Bill Perry forced the services to install the early versions of GPS on vehicles, ships, and aircraft.

"I'm recommending a much more muscular JAIC," Work said Wednesday. "You have to tell the JAIC, you're the whip. You're going to be the one recommending to the senior leaders, what are the applications and the algorithms that we need to pursue now to gain near-term military advantage?"

These proposals would upset rice bowls in the Defense Department and industry alike, making reform an uphill battle both politically and bureaucratically. "What I'm proposing, all of the services would fight against," Work admitted.

But Work remains highly respected in the defense policy world and if Joe Biden becomes president in November, Work could well be back in the Pentagon again.

Then-Deputy Defense Secretary Robert Work (center) settles in for a congressional hearing, flanked by Adm. James Winnefeld (left) and comptroller Mike McCord (right).

Work has a lot of credibility in this area. A retired Marine Corps artillery officer, he spent years in government and thinktank positions, rising to Deputy Defense Secretary during the Obama Administration. Work warned that China and Russia had advanced their military technology dramatically while the US waged guerrilla warfare in Afghanistan and Iraq, and he convinced Sec. Chuck Hagel to launch the Third Offset Strategy to regain America's high-tech edge. While the name died out after the Trump administration took power, the emphasis on great power competition in technology, especially AI, has only grown at the Trump Pentagon, despite the president's own ambivalence about containing China and his outright refusal to confront Russia's Vladimir Putin.

Just yesterday, the Defense Department released an alarming new report saying China has pulled ahead of the US in shipbuilding, missile defense, and offensive missiles. The People's Republic in large part owes its rapid advance to the government-mandated collaboration between military and industry, the report says, with China harvesting foreign technology from both international collaboration and outright theft. "[There is] not a clear line between the PRC's civilian and military economies, raising due diligence costs for U.S. and global entities that do not desire to contribute to the PRC's military modernization," the report states.

Chinese weapons ranges (CSBA graphic)

Work likewise prioritizes China as the most dangerous competitor, although he notes that Russia is remarkably advanced in military robotics. But robotic vehicles and drones are just one aspect of AI and automation, Work says. Equally important and a major strength for China is the intangible autonomy of software systems and communications networks that comb through vast amounts of sensor data to spot targets, route supplies, schedule maintenance, and offer options to commanders.

Since algorithms are invisible, it's much harder to calculate the correlation of forces today than in the Cold War, when spyplanes and satellites could count tanks, planes, and ships. For example, Work said, the right automation package could convert obsolete fighter jets from scrapyard relics to lethal but expendable drones able to out-maneuver human pilots, a possibility hinted at in DARPA's recent AlphaDogfight simulation where AI beat human pilots 5-0.

"An AI-driven world will be rife with surprise," Work warned. "If you put a great new AI into a junky old MiG-21 or MiG-19, and you come up against it, it's going to surprise the heck out of a [US] pilot because it's going to be a lot more capable than the platform itself might indicate."

Here is the original post:

We May Be Losing The Race For AI With China: Bob Work - Breaking Defense

Posted in Ai

These students figured out their tests were graded by AI and the easy way to cheat – The Verge

On Monday, Dana Simmons came downstairs to find her 12-year-old son, Lazare, in tears. He'd completed the first assignment for his seventh-grade history class on Edgenuity, an online platform for virtual learning. He'd received a 50 out of 100. That wasn't on a practice test; it was his real grade.

"He was like, 'I'm gonna have to get a 100 on all the rest of this to make up for this,'" said Simmons in a phone interview with The Verge. "He was totally dejected."

At first, Simmons tried to console her son. "I was like, well, you know, some teachers grade really harshly at the beginning," said Simmons, who is a history professor herself. Then, Lazare clarified that he'd received his grade less than a second after submitting his answers. A teacher couldn't have read his response in that time; Simmons knew her son was being graded by an algorithm.

Simmons watched Lazare complete more assignments. She looked at the correct answers, which Edgenuity revealed at the end. She surmised that Edgenuity's AI was scanning for specific keywords that it expected to see in students' answers. And she decided to game it.

Now, for every short-answer question, Lazare writes two long sentences followed by a disjointed list of keywords: anything that seems relevant to the question. "The questions are things like... 'What was the advantage of Constantinople's location for the power of the Byzantine empire,'" Simmons says. "So you go through, okay, what are the possible keywords that are associated with this? Wealth, caravan, ship, India, China, Middle East, he just threw all of those words in."

"I wanted to game it because I felt like it was an easy way to get a good grade," Lazare told The Verge. He usually digs the keywords out of the article or video the question is based on.

Apparently, that word salad is enough to get a perfect grade on any short-answer question in an Edgenuity test.

Edgenuity didn't respond to repeated requests for comment, but the company's online help center suggests this may be by design. According to the website, answers to certain questions receive 0% if they include no keywords, and 100% if they include at least one. Other questions earn a certain percentage based on the number of keywords included.
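
The rubric the help center describes is simple enough to sketch. The code below is written purely from that description; the function names and exact scoring structure are assumptions for illustration, not Edgenuity's actual implementation.

```python
# Two grading modes per the help-center description: all-or-nothing on
# any keyword match, and a score proportional to distinct keywords found.

def grade_any_keyword(answer, keywords):
    """0% with no keywords present, 100% with at least one."""
    words = answer.lower().split()
    return 100 if any(k.lower() in words for k in keywords) else 0

def grade_proportional(answer, keywords):
    """Score scales with how many distinct keywords appear."""
    words = set(answer.lower().split())
    hits = sum(1 for k in keywords if k.lower() in words)
    return round(100 * hits / len(keywords))

keywords = ["wealth", "caravan", "ship", "india", "china"]
word_salad = "Two long sentences about trade wealth caravan ship India China"
print(grade_any_keyword(word_salad, keywords))                    # -> 100
print(grade_proportional("The city controlled trade", keywords))  # -> 0
```

This makes clear why a word salad works: the grader never checks coherence, only membership, so appending every plausible keyword maximizes the score.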

As COVID-19 has driven schools around the US to move teaching to online or hybrid models, many are outsourcing some instruction and grading to virtual education platforms. Edgenuity offers over 300 online classes for middle and high school students ranging across subjects from math to social studies, AP classes to electives. They're made up of instructional videos and virtual assignments as well as tests and exams. Edgenuity provides the lessons and grades the assignments. Lazare's actual math and history classes are currently held via the platform; his district, the Los Angeles Unified School District, is entirely online due to the pandemic. (The district declined to comment for this story.)

Of course, short-answer questions aren't the only factor that impacts Edgenuity grades; Lazare's classes require other formats, including multiple-choice questions and single-word inputs. A developer familiar with the platform estimated that short answers make up less than five percent of Edgenuity's course content, and many of the eight students The Verge spoke to for this story confirmed that such tasks were a minority of their work. Still, the tactic has certainly impacted Lazare's class performance; he's now getting 100s on every assignment.

Lazare isn't the only one gaming the system. More than 20,000 schools currently use the platform, according to the company's website, including 20 of the country's 25 largest school districts, and two students from high schools other than Lazare's told me they found a similar way to cheat. They often copy the text of their questions and paste it into the answer field, assuming it's likely to contain the relevant keywords. One told me they used the trick all throughout last semester and received full credit "pretty much every time."

Another high school student, who used Edgenuity a few years ago, said he would sometimes try submitting batches of words related to the questions "only when I was completely clueless." The method worked more often than not. (We granted anonymity to some students who admitted to cheating, so they wouldn't get in trouble.)

One student, who told me he wouldn't have passed his Algebra 2 class without the exploit, said he's been able to find lists of the exact keywords or sample answers that his short-answer questions are looking for; he says you can find them online "nine times out of ten." Rather than listing out the terms he finds, though, he tried to work three into each of his answers. ("Any good cheater doesn't aim for a perfect score," he explained.)

Austin Paradiso, who has graduated but used Edgenuity for a number of classes during high school, was also averse to word salads but did use the keyword approach a handful of times. It worked 100 percent of the time. "I always tried to make the answer at least semi-coherent because it seemed a bit cheap to just toss a bunch of keywords into the input field," Paradiso said. "But if I was a bit lazier, I easily could have just written a random string of words pertinent to the question prompt and gotten 100 percent."

Teachers do have the ability to review any content students submit, and can override Edgenuity's assigned grades; the Algebra 2 student says he's heard of some students getting caught keyword-mashing. But most of the students I spoke to, and Simmons, said they've never seen a teacher change a grade that Edgenuity assigned to them. "If the teachers were looking at the responses, they didn't care," one student said.

The transition to Edgenuity has been rickety for some schools. Parents in Williamson County, Tennessee, are revolting against their district's use of the platform, claiming countless technological hiccups have impacted their children's grades. A district in Steamboat Springs, Colorado, had its enrollment period disrupted when Edgenuity was overwhelmed with students trying to register.

Simmons, for her part, is happy that Lazare has learned how to game an educational algorithm; it's certainly a useful skill. But she also admits that his better grades don't reflect a better understanding of his course material, and she worries that exploits like this could exacerbate inequalities between students. "He's getting an A+ because his parents have graduate degrees and have an interest in tech," she said. "Otherwise he would still be getting Fs. What does that tell you about... the digital divide in this online learning environment?"

See the rest here:

These students figured out their tests were graded by AI and the easy way to cheat - The Verge

Posted in Ai

Artificial intelligence expert moves to Montreal because it’s an AI hub – Montreal Gazette

Irina Rish, now a renowned expert in the field of artificial intelligence, first became drawn to the topic as a teenager in the former Soviet republic of Uzbekistan. At 14, she was fascinated by the notion that machines might have their own thought processes.

"I was interested in math in school and I was looking at how you improve problem solving and how you come up with algorithms," Rish said in a phone interview Friday afternoon. "I didn't know the word yet (algorithm) but that's essentially what it was. How do you solve tough problems?"

She read a book introducing her to the world of artificial intelligence and that kick-started a lifelong passion.

"First of all, they sounded like just mind-boggling ideas, that you could recreate in computers something as complex as intelligence," said Rish. "It's really exciting to think about creating artificial intelligence in machines. It kind of sounds like sci-fi. But the other interesting part of that is that you hope that by doing so, you can also better understand the human mind and hopefully achieve better human intelligence. So you can say AI is not just about computer intelligence but also about our intelligence. Both goals are equally exciting."

Read the original here:

Artificial intelligence expert moves to Montreal because it's an AI hub - Montreal Gazette

Posted in Ai

3 Ways Artificial Intelligence Is Transforming The Energy Industry – OilPrice.com

Back in 2017, Bill Gates penned a poignant online essay to all graduating college students around the world in which he tapped artificial intelligence (AI), clean energy, and biosciences as the three fields he would spend his energies on if he could start all over again and wanted to make a big impact in the world today.

It turns out that the Microsoft co-founder was right on the money.

Three years down the line and deep in the throes of the worst pandemic in modern history, AI and renewable energy have emerged as some of the biggest megatrends of our time. On the one hand, AI is powering the fourth industrial revolution and is increasingly being viewed as a key strategy for mastering some of the greatest challenges of our time, including climate change and pollution. On the other hand, there is a widespread recognition that carbon-free technologies like renewable energy will play a critical role in combating climate change.

Consequently, stocks in the AI, robotics, and automation sectors as well as clean energy ETFs have lately become hot property.

From utilities employing AI and machine learning to predict power fluctuations and cost optimization to companies using IoT sensors for early fault detection and wildfire powerline/gear monitoring, here are real-life cases of how AI has continued to power an energy revolution even during the pandemic.

Top uses of AI in the energy sector

Source: Intellias

#1. Innowatts: Energy monitoring and management

The Covid-19 crisis has triggered an unprecedented decline in power consumption. Not only has overall consumption suffered, but there have also been significant shifts in power usage patterns, with sharp decreases by businesses and industries while domestic use has increased as more people work from home.

Houston, Texas-based Innowatts is a startup that has developed an automated toolkit for energy monitoring and management. The company's eUtility platform ingests data from more than 34 million smart energy meters across 21 million customers, including major U.S. utility companies such as Arizona Public Service Electric, Portland General Electric, Avangrid, Gexa Energy, WGL, and Mega Energy. Innowatts says its machine learning algorithms can analyze the data to forecast several critical data points, including short- and long-term loads, variances, weather sensitivity, and more.


Innowatts estimates that without its machine learning models, utilities would have seen inaccuracies of 20% or more on their projections at the peak of the crisis, thus placing enormous strain on their operations and ultimately driving up costs for end-users.
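Innowatts' models are proprietary, but the underlying idea of learning consumption patterns from meter history can be sketched in a few lines: build an average per-hour load profile from historical smart-meter readings, then measure how far new readings deviate from it (for example, the pandemic-era drop in business usage). All readings below are made-up illustrative numbers.

```python
from statistics import mean

def hourly_profile(history):
    """Learn an average load profile from (hour_of_day, kwh) meter readings."""
    by_hour = {}
    for hour, kwh in history:
        by_hour.setdefault(hour, []).append(kwh)
    return {h: mean(v) for h, v in by_hour.items()}

def deviation(profile, hour, kwh):
    """Fractional deviation of a new reading from the learned profile."""
    return (kwh - profile[hour]) / profile[hour]

# Pre-pandemic readings for a business meter (hour, kWh) -- made-up numbers
history = [(9, 2.0), (9, 2.2), (9, 2.1), (14, 3.0), (14, 3.2)]
profile = hourly_profile(history)

# A lockdown-era 9 a.m. reading shows a sharp drop against the profile
print(round(deviation(profile, 9, 1.05), 2))  # -0.5
```

A production forecaster would fold in weather, calendar, and tariff data, but flagging deviations from a learned baseline is the core of the monitoring idea.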

#2. Google: Boosting the value of wind energy

A while back, we reported that proponents of nuclear energy were using the pandemic to highlight its strong points vis-à-vis the shortcomings of renewable energy sources. To wit, wind and solar are the least predictable and consistent among the major power sources, while nuclear and natural gas boast the highest capacity factors.

Well, one tech giant has figured out how to employ AI to iron out those kinks.

Three years ago, Google announced that it had reached 100% renewable energy for its global operations, including its data centers and offices. Today, Google is the largest corporate buyer of renewable power, with commitments totaling 2.6 gigawatts (2,600 megawatts) of wind and solar energy.

In 2017, Google teamed up with DeepMind, its sister company under Alphabet, to search for a solution to the highly intermittent nature of wind power. Using DeepMind's AI platform, Google deployed ML algorithms to 700 megawatts of wind power capacity in the central United States, enough to power a medium-sized city.

DeepMind says that by using a neural network trained on widely available weather forecasts and historical turbine data, it is now able to predict wind power output 36 hours ahead of actual generation. Consequently, this has boosted the value of Google's wind energy by roughly 20 percent.

A similar model can be used by other wind farm operators to make smarter, faster and more data-driven optimizations of their power output to better meet customer demand.

DeepMind uses trained neural networks to predict wind power output 36 hours ahead of actual generation

Source: DeepMind
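The forecast-to-output idea described here can be sketched as a simple least-squares fit from forecast wind speed to farm output. The production system is a neural network trained on much richer weather and turbine telemetry; the linear model and every number below are illustrative assumptions.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit: y ~ slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Historical pairs: forecast wind speed (m/s) -> farm output (MW), made up
speeds = [4.0, 6.0, 8.0, 10.0, 12.0]
output = [90.0, 200.0, 330.0, 450.0, 560.0]
slope, intercept = linear_fit(speeds, output)

# Predict output 36 hours ahead from tomorrow's forecast wind speed
forecast_speed = 9.0
predicted_mw = slope * forecast_speed + intercept
print(round(predicted_mw))  # 386
```

Committing a day-ahead output number like this, rather than selling whatever the wind happens to deliver, is what makes the energy more valuable to the grid.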

#3. Wildfire powerline and gear monitoring

In June, California's biggest utility, Pacific Gas & Electric, found itself in deep trouble. The company pleaded guilty over the tragic 2018 wildfire that left 84 people dead, leaving PG&E saddled with hefty penalties: $13.5 billion as compensation to people who lost homes and businesses, and another $2 billion fine by the California Public Utilities Commission for negligence.

It will be a long climb back to the top for the fallen giant after its stock crashed nearly 90% following the disaster, despite the company emerging from bankruptcy in July.

Perhaps the loss of lives and livelihood could have been averted if PG&E had invested in some AI-powered early detection system.

Source: CNN Money

One such system is by a startup called VIA, based in Somerville, Massachusetts. VIA says it has developed a blockchain-based app that can predict when vulnerable power transmission gear such as transformers might be at risk in a disaster. VIA's app makes better use of energy data sources, including smart meters and equipment inspections.

Another comparable product is by the Korean firm Alchera, which uses AI-based image recognition in combination with thermal and standard cameras to monitor power lines and substations in real time. The AI system is trained to watch the infrastructure for any abnormal events such as falling trees, smoke, fire, and even intruders.

Other than utilities, oil and gas producers have also been integrating AI into their operations. These include:

By Alex Kimani for Oilprice.com


Read the original here:

3 Ways Artificial Intelligence Is Transforming The Energy Industry - OilPrice.com

Posted in Ai

How Artificial Intelligence Will Guide the Future of Agriculture – Growing Produce

New automated harvesters like the Harvest CROO Robotics strawberry robot utilize AI to capture images of ripe berries ready to pick. Photo by Frank Giles

Artificial intelligence, or AI as it is more commonly called, has become more prominent in conversations about technology these days. But what does it mean? And how might it shape the future of agriculture?

In many ways, AI is already at work in agricultural research and in-field applications, but there is much more to come. Researchers in the field are excited about its potential power to process massive amounts of data and learn from it at a pace that far outstrips the capability of the human mind.

The newly installed University of Florida Vice President of Agriculture and Natural Resources, Scott Angle, sees AI as a unifying element of technology as it advances.

"Robotics, visioning, automation, and genetic breakthroughs will need advanced AI to benefit growers," he says. "Fortunately, UF recognized this early on and is developing a program to significantly ramp up AI research at the university."

Jim Carroll is a global futurist who specializes in technology and explaining it in a way that non-computer scientists can understand. He says first and foremost, AI is not some out-of-control robot that will terrorize and destroy our way of life like it is often portrayed in the media and popular culture.

"This isn't new," Carroll says. "I actually found articles in Popular Mechanics magazine in the 1930s that spoke of 'Giant Robot Brains' that would steal all our jobs."

What is AI, really? "The best way to think about it is that it's an algorithm; at heart, it's a computer that is really good at processing data, whether that be pure data, images, or other information. It has been trained and learns how to recognize patterns, trends, and insights in that information. The more it does it and gets the right scores, the better it gets. It's not really that scary."

John McCarthy is considered one of the founding fathers of AI and is credited with coining the term in 1955. He was joined by Alan Turing, Marvin Minsky, Allen Newell, and Herbert Simon in the early development of the technology.

Back in 1955, AI entered the academic world as a new discipline, and in subsequent years has experienced momentum in fits and starts. The technology went through a phase of frozen funding that some called the "AI winter." Some of this was because AI research was divided into subfields that didn't communicate with each other. Robotics went down one path while machine learning went down another. How and where would artificial neural networks be applied to practical effect?

But, as computing power has increased exponentially over time, AI, as Angle notes, is becoming a unifying technology that can tie all the subfields together. What once could only be imagined is becoming reality.

Dr. Yiannis Ampatzidis, an Assistant Professor who teaches precision agriculture and machine learning at UF/IFAS, says applications are already at work in agriculture including imaging, robotics, and big data analysis.

"In precision agriculture, AI is used for detecting plant diseases and pests, plant stress, poor plant nutrition, and poor water management," Ampatzidis says. "These detection technologies could be aerial [using drones] or ground based."

The imaging technology used to detect plant stress also could be deployed for precision spraying applications. Currently, John Deere is working to commercialize a weed sprayer from Blue River Technology that detects weeds and applies herbicides only to the weed.

Ampatzidis notes AI is utilized in robotics as well. The technology is used in the blossoming sector of robot harvesters, where it is utilized to detect ripe fruit for picking. Florida's Harvest CROO Robotics is one example. Its robot strawberry harvester was used in commercial harvest during the 2019-2020 strawberry season in Florida.

Ampatzidis says AI holds great potential in the analytics of big data. In many ways, it is the key to unlocking the power of the massive amounts of data being generated on farms and in ag research. He and his team at UF/IFAS have developed the AgroView cloud-based technology that uses AI algorithms to process, analyze, and visualize data being collected from aerial- and ground-based platforms.

"The amount of these data is huge, and it's very difficult for a human brain to process and analyze them," he says. "AI algorithms can detect patterns in these data that can help growers make smart decisions. For example, Agroview can detect and count citrus trees, estimate tree height and canopy size, and measure plant nutrient levels."
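At its simplest, the tree-counting step Ampatzidis describes reduces to counting connected components in a segmented image: once a vision model has labeled the "tree" pixels, each connected blob is one tree. Agroview's real pipeline is far more sophisticated; the tiny binary mask below is a made-up toy.

```python
def count_blobs(grid):
    """Count connected components of 1s in a binary mask (4-connectivity)."""
    rows, cols = len(grid), len(grid[0])
    seen = set()

    def flood(r0, c0):
        stack = [(r0, c0)]
        while stack:
            r, c = stack.pop()
            if (r, c) in seen or not (0 <= r < rows and 0 <= c < cols):
                continue
            if grid[r][c] == 0:
                continue
            seen.add((r, c))
            stack += [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]

    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and (r, c) not in seen:
                count += 1
                flood(r, c)
    return count

# Toy segmentation mask: 1 = "tree" pixel from an aerial image
mask = [[1, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 1, 0, 1]]
print(count_blobs(mask))  # 3
```

Extending the same labeling step to measure each blob's area gives a rough canopy-size estimate, which hints at how counting and sizing come out of one pass over the imagery.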

Carroll adds there is so much data in imagery being collected today.

"An AI system can often do a better analysis at a lower cost," he says. "It's similar to what we are talking about in the medical field. An AI system can read the information from X-rays and be far more accurate in a diagnosis."

So, are robots and AI coming to steal all our jobs? That's a complicated question yet to be fully played out as the technology advances. Ampatzidis believes the technology will replace repetitive jobs and ones that agriculture is already struggling to fill with human labor.

"It will replace jobs in factories, in agriculture [hand harvesters and some packinghouse jobs], vehicle drivers, bookkeepers, etc.," Ampatzidis says. "It also will replace many white-collar jobs in the fields of law, healthcare, accounting, hospitality, etc."

Of course, AI also could develop new jobs in the area of computer science, automation, robotics, data analytics, and computer gaming.

Carroll adds people should not fear the potential creative destruction brought on by the technologies enabled by AI. "I always tell my audiences, 'Don't fear the future,'" he says. "I then observe that some people see the future and see a threat. Innovators see the same future and see an opportunity."

Yiannis Ampatzidis, an Assistant Professor who teaches precision agriculture and machine learning at UF/IFAS, says AI applications are already at work in agriculture. Photo by Frank Giles

In July, the University of Florida announced a $70 million public-private partnership with NVIDIA, a multinational technology company, to build the world's fastest AI supercomputer in academia. The system will be operating in early 2021. UF faculty and staff will have the tools to apply AI in multiple fields, such as dealing with major challenges like rising sea levels, population aging, data security, personalized medicine, urban transportation, and food insecurity. UF expects to educate 30,000 AI-supporting graduates by 2030.

AlphaGo, a 2017 documentary film, probably does about as good a job as any in illustrating the potential power of AI. The film documents a team of scientists who built a supercomputer to master the board game Go that originated in Asia more than 3,000 years ago. It also is considered one of the most complex games known to man. The conventional wisdom was that no computer would be capable of learning the vast number of solutions in the game and the reasoning required to win.

The computer, AlphaGo, not only mastered the game in short order, it took down human masters and champions of the game.

To learn more about the film, visit AlphaGoMovie.com.

Giles is editor of Florida Grower, a Meister Media Worldwide publication. See all author stories here.

Read more:

How Artificial Intelligence Will Guide the Future of Agriculture - Growing Produce

Posted in Ai

Dentsu’s Chief Automation Officer: ‘AI Should Be Injected In Every Process’ – AdExchanger

Agencies spend too much time doing manual work.

One of the biggest time sucks? Transferring data files between enterprise systems that don't talk to each other.

Max Cheprasov, now an exec at the Dentsu Aegis holding company level, recognized these inefficiencies while working at Dentsu agency iProspect starting in 2011. He set out to document and standardize processes while outsourcing inefficient tasks so that employees could focus more on strategic client work.

Eventually, he brought artificial intelligence into the agency's workflows, specifically natural language processing and machine learning, which helped accelerate the ability to interpret data, derive insights and generate reports.

By 2017, automation made iProspect the most profitable agency within Dentsu, and Cheprasov was promoted to chief automation officer in order to scale his vision across the network. He drafted an eight-year plan, through 2025, with the ultimate goal of integrating AI wherever possible.

"The opportunities are limitless," he said. "AI and automation should be injected in every process and workflow."

By automating mundane tasks, AI helps agencies deliver work and insights to their clients faster.

When filling out RFPs, for example, teams often spend weeks on 50-page documents that are chock-full of standard questions. But by partnering with the workflow automation platform Catalytic, Cheprasov's team employed AI to fill out standard information on every RFP automatically. Subject matter experts then look over the answers and tweak them where necessary.

That process condensed the time it takes to fill out an RFP from weeks to several minutes, Cheprasov said.

Dentsu also uses Catalytic to automate campaign reporting so that agencies can deliver insights to clients quicker and more frequently. The platform automates tedious work, such as transferring and validating data files and uploading them into billing systems, thereby reducing manual effort by between 65% and 95%.
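The transfer-and-validate work being automated can be pictured as a pre-flight check on a report file before it enters a billing system. This is a hedged sketch only: the column names and validation rules are invented for illustration and have nothing to do with Catalytic's actual platform.

```python
import csv
import io

# Illustrative schema for a campaign-report export (invented, not Catalytic's)
REQUIRED = {"campaign_id", "impressions", "spend"}

def validate_report(csv_text):
    """Validate a raw campaign-report CSV; return (ok, list_of_errors)."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    if not rows or REQUIRED - set(rows[0]):
        return False, ["missing required columns"]
    errors = []
    for line_no, row in enumerate(rows, start=2):  # line 1 is the header
        if float(row["spend"]) < 0:
            errors.append(f"line {line_no}: negative spend")
    return not errors, errors

report = "campaign_id,impressions,spend\nC1,1000,25.50\nC2,500,-3.00\n"
ok, errs = validate_report(report)
print(ok, errs)  # False ['line 3: negative spend']
```

Running checks like this automatically on every file transfer, instead of having a person eyeball spreadsheets, is where the 65% to 95% reduction in manual effort comes from.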

"Data collection, processing and reformatting should be automated, because it's a horrible use of people's time," said Sean Chou, CEO of Catalytic.

In late 2017, Dentsu first began rolling out its strategy in Japan, where it identified 900 processes that were ripe for automation. The system is now also in place in the United States and Brazil, and markets across Europe and other parts of Asia are starting to get involved.

Today, Dentsu is exploring how to use AI to build automated processes for agency workflows that haven't been documented before. Using computer vision and natural language processing, Cheprasov's team can analyze keystrokes to create process maps that it can later automate.

"It's a good baseline for what people do, how they do it and how it should be redesigned," he said.

Dentsu's long-term goal is to arm all of its employees with a virtual assistant, accessible through a conversational interface, that can carry out manual tasks and tap into a central brain, where all of the agency's processes live. To do that, Dentsu will train staff to use low-code or no-code systems, so they can engineer assistants and document new processes on their own.

This could help automate between 30% and 60% of what Dentsu employees currently spend their time on.

Stats like that can be scary for agency employees, but Cheprasov's goal is not to do away with jobs.

Mind-numbing tasks are generally spread across roles, rather than comprising a single person's entire job, and a lot of this grunt work has already been sent offshore in any case.

"The mission is to elevate human potential," Cheprasov said, "not to eliminate it."

See original here:

Dentsu's Chief Automation Officer: 'AI Should Be Injected In Every Process' - AdExchanger

Posted in Ai

Carrboro startup Tanjo to leverage its AI platform to help with NC’s reopening – WRAL Tech Wire

CARRBORO – A Carrboro artificial intelligence (AI) startup is leveraging its technology platform to help business and community leaders navigate North Carolina's COVID-19 reopening.

Carrboro-based Tanjo is teaming up with the Digital Health Institute for Transformation (DHIT) to build an engine that uses machine learning and advanced analytics to ingest huge amounts of national and regional data and then provide actionable insights.

"Successfully reopening our economy without risking the destruction of the health of our communities is the central challenge we are attempting to overcome," said Michael Levy, president of DHIT, in a statement. "More reliable, local data-driven expert guidance enabled by smart AI is critical to allow the safe and successful reopening of our communities and businesses."

Consider the breadth of intelligence: health and epidemiological data, labor and economic data, occupational data, consumer behavior and attitudinal data, and environmental data.

Tanjo, founded by serial entrepreneur Richard Boyd in 2017, said it is designing a dashboard to give stakeholders real-time intelligence and predictive modeling on population health risk, consumer sentiment and community resiliency.

Richard Boyd

Users will be able to view the risk to their business and county, as well as simulate the impact of implementing evidence-based recommendations, enabling them to make informed decisions.

As part of the 2020 COVID-19 Recovery Act, the North Carolina Policy Collaboratory in late July awarded the DHIT a grant to research, validate, and build a simulation platform for North Carolinas business and community leaders.

DHIT and Tanjo entered into a formal strategic partnership in November 2019, pre-COVID-19.

The seven NC counties chosen for the initial pilot are: Ashe, Buncombe, Gates, Mecklenburg, New Hanover, Robeson, and Wake.

The overall project is a collaboration between Tanjo, DHIT, the Institute for Convergent Sciences and Innovate Carolina at the University of North Carolina, Chapel Hill, and the NC Chamber Foundation, among other key stakeholders.

If you are a community organization or business located in the counties listed above and are interested in being a beta tester for this initiative, contact communityconfidence@dhitglobal.org.

See the rest here:

Carrboro startup Tanjo to leverage its AI platform to help with NC's reopening - WRAL Tech Wire

Posted in Ai

Engineer-turned-photographer eyes switch to digital field with AI skills – The Straits Times

Artificial intelligence (AI) has not taken over the world yet, but former engineer Zack Wong is preparing himself for this brave new future.

Mr Wong, 43, recently did a tech immersion course at Republic Polytechnic to pick up AI skills. He has also learnt programming and coding.

This is a far cry from how he started his career - working as an engineer dealing with the repair and development of aircraft engine components.

He then switched to supply chain operations within the same industry and oversaw the coordination of material supply between the operations, inventory and purchasing departments.

Mr Wong left that behind to branch out into photography at a boutique creative agency. He produced images that were used in editorials, commercials and advertisements.

But both the creative industry and his old aviation sector were hit by the coronavirus pandemic.

"The Covid-19 situation has... forced many to look for other jobs to sustain (themselves). I was just one of the many. I was looking to return to the aerospace industry but it was also going down due to the cut in flights and lockdowns," Mr Wong told The Straits Times.

"I wanted to upgrade and reskill myself with new knowledge in the growing industry of AI. With the downscaling of my company and the loss of projects, I found myself at a crossroads - whether to continue to be a resident photographer or upgrade with new skills, especially in this digital era of cloud and AI."

Mr Wong hopes that these skills will help him fulfil his long-term plan of working in the digital field, particularly in relation to computer vision and imaging, although he acknowledged there will be challenges.

"It is especially difficult for mid-career switchers like me, especially when we have only less than three months of experience and knowledge, and companies are reluctant to give us opportunities," he said.

"I have attended quite a few courses and webinars as well in order to keep myself updated on the current job market requirements."

Read more here:

Engineer-turned-photographer eyes switch to digital field with AI skills - The Straits Times

Posted in Ai

A voice-over artist asks: Will AI take her job? – WHYY

This story is from The Pulse, a weekly health and science podcast.

Subscribe on Apple Podcasts, Stitcher or wherever you get your podcasts.

My name is Nikki Thomas, and I am a voice-over artist. I speak into a microphone, and my voice is captured. I can change my accent. My pitch. My mood.

But it's still me, right? Until it's not. Because I am being replaced by my own voice: an AI version of my voice.

It starts with TTS, or text-to-speech. That's the same technology used to create Siri or Alexa. It captures a human voice and then artificially replicates that sound to read any digital text out loud.

I got hired for a TTS job. I delivered my spoken words to the client. Then a few weeks later, I could type words into a text box, and my voice clone said them back to me.

I asked longtime client and audio engineer Daren Lake to compare the two. And while concluding that the AI voice actually sounded pretty good, he could still hear that a robot made it.

"It's got these warbling artifacts. I call it the zoom effect or the matrix sound," he said. Despite thinking I might be able to get away with it, the engineer in him didn't like it.

So one can tell the difference now. But when this technology gets better, could this be my new method of work? I record just a few voice samples and, before I know it, an 11-hour audiobook is produced with a voice that sounds just like mine, in the time it takes me to copy and paste a document? It would be much more accurate and reliable. An AI voice never fatigues or needs a week to recover from the flu.

Could I still consider myself a voice-over artist? If there's even a role for me. How will artificial intelligence affect creativity and artistry?

I took the question to Sarah Rose Siskind, one of the creators of a robot named Sophia. Sarah laughed when I asked if she was threatened by a robot taking her job. She told me about an 11-hour day spent getting Sophia to wink, reason enough for her to believe her job was not at risk.

Sophia the Robot is an interviewer, guest speaker and host with over 16,000 YouTube subscribers. Siskind was on the writing team and worked with a group to shape Sophias personality.

"An artist is a major component of her personality because we wanted her personality to be fascinated with areas not traditionally considered the domain of robots," Siskind said. However, it is hard to describe her outside of a relationship to the humans who came up with the idea of creating her.

Visit link:

A voice-over artist asks: Will AI take her job? - WHYY

Posted in Ai