
Category Archives: Ai

What is AI? Everything you need to know about Artificial …

Posted: March 24, 2020 at 5:34 am


It depends who you ask.

Back in the 1950s, the fathers of the field, Minsky and McCarthy, described artificial intelligence as any task performed by a program or a machine that, if a human carried out the same activity, we would say required intelligence to accomplish.

That obviously is a fairly broad definition, which is why you will sometimes see arguments over whether something is truly AI or not.

AI systems will typically demonstrate at least some of the following behaviors associated with human intelligence: planning, learning, reasoning, problem solving, knowledge representation, perception, motion, and manipulation and, to a lesser extent, social intelligence and creativity.

AI is ubiquitous today, used to recommend what you should buy next online, to understand what you say to virtual assistants such as Amazon's Alexa and Apple's Siri, to recognise who and what is in a photo, to spot spam, or detect credit card fraud.

At a very high level artificial intelligence can be split into two broad types: narrow AI and general AI.

Narrow AI is what we see all around us in computers today: intelligent systems that have been taught or learned how to carry out specific tasks without being explicitly programmed how to do so.

This type of machine intelligence is evident in the speech and language recognition of the Siri virtual assistant on the Apple iPhone, in the vision-recognition systems on self-driving cars, in the recommendation engines that suggest products you might like based on what you bought in the past. Unlike humans, these systems can only learn or be taught how to do specific tasks, which is why they are called narrow AI.

There are a vast number of emerging applications for narrow AI: interpreting video feeds from drones carrying out visual inspections of infrastructure such as oil pipelines, organizing personal and business calendars, responding to simple customer-service queries, co-ordinating with other intelligent systems to carry out tasks like booking a hotel at a suitable time and location, helping radiologists to spot potential tumors in X-rays, flagging inappropriate content online, detecting wear and tear in elevators from data gathered by IoT devices, the list goes on and on.

Artificial general intelligence is very different, and is the type of adaptable intellect found in humans, a flexible form of intelligence capable of learning how to carry out vastly different tasks, anything from haircutting to building spreadsheets, or to reason about a wide variety of topics based on its accumulated experience. This is the sort of AI more commonly seen in movies, the likes of HAL in 2001 or Skynet in The Terminator, but which doesn't exist today, and AI experts are fiercely divided over how soon it will become a reality.


A survey conducted among four groups of experts in 2012/13 by AI researcher Vincent C Müller and philosopher Nick Bostrom reported a 50 percent chance that Artificial General Intelligence (AGI) would be developed between 2040 and 2050, rising to 90 percent by 2075. The group went even further, predicting that so-called 'superintelligence' -- which Bostrom defines as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest" -- was expected some 30 years after the achievement of AGI.

That said, some AI experts believe such projections are wildly optimistic given our limited understanding of the human brain, and believe that AGI is still centuries away.

There is a broad body of research in AI, with many strands that feed into and complement one another.

Currently enjoying something of a resurgence, machine learning is where a computer system is fed large amounts of data, which it then uses to learn how to carry out a specific task, such as understanding speech or captioning a photograph.

Key to the process of machine learning are neural networks. These are brain-inspired networks of interconnected layers of algorithms, called neurons, that feed data into each other, and which can be trained to carry out specific tasks by modifying the importance attributed to input data as it passes between the layers. During training of these neural networks, the weights attached to different inputs will continue to be varied until the output from the neural network is very close to what is desired, at which point the network will have 'learned' how to carry out a particular task.
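
To make that weight-adjustment idea concrete, here is a minimal sketch in Python (using NumPy, with a toy dataset invented for illustration) of a single-layer network learning the logical OR function by nudging its weights toward the desired output; real systems stack many layers and use frameworks such as TensorFlow or PyTorch.

```python
import numpy as np

# Toy training set: learn the logical OR function from two binary inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [1]], dtype=float)

rng = np.random.default_rng(0)
weights = rng.normal(size=(2, 1))   # the 'importance' attached to each input
bias = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 1.0
for step in range(2000):
    # Forward pass: combine the inputs using the current weights.
    output = sigmoid(X @ weights + bias)
    # Backward pass: nudge the weights so the output moves toward the target.
    delta = (output - y) * output * (1 - output)
    weights -= learning_rate * (X.T @ delta)
    bias -= learning_rate * delta.sum()

print(np.round(sigmoid(X @ weights + bias), 2))   # close to [0, 1, 1, 1]
```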

A subset of machine learning is deep learning, where neural networks are expanded into sprawling networks with a huge number of layers that are trained using massive amounts of data. It is these deep neural networks that have fueled the current leap forward in the ability of computers to carry out tasks like speech recognition and computer vision.


There are various types of neural networks, with different strengths and weaknesses. Recurrent neural networks are a type of neural net particularly well suited to language processing and speech recognition, while convolutional neural networks are more commonly used in image recognition. The design of neural networks is also evolving, with researchers recently refining a more effective form of deep neural network called long short-term memory or LSTM, allowing it to operate fast enough to be used in on-demand systems like Google Translate.
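
As a rough illustration of how such a network is assembled in practice, the sketch below uses the standard Keras layers to stack an embedding, an LSTM and a dense output layer; the vocabulary size, dimensions and dummy batch are placeholders for illustration, not values from any production system.

```python
import tensorflow as tf

# A minimal sequence model: an embedding feeds a long short-term memory (LSTM)
# layer, whose final state feeds a dense output layer -- e.g. to label a
# sentence as positive or negative. Sizes here are illustrative placeholders.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=10_000, output_dim=64),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# A dummy batch of two token-ID sequences, just to show the shapes involved.
dummy_batch = tf.constant([[12, 7, 301, 4, 0], [5, 88, 2, 19, 64]])
print(model(dummy_batch).shape)   # (2, 1): one score per sequence
```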

The structure and training of deep neural networks.

Another area of AI research is evolutionary computation, which borrows from Darwin's famous theory of natural selection, and sees genetic algorithms undergo random mutations and combinations between generations in an attempt to evolve the optimal solution to a given problem.

This approach has even been used to help design AI models, effectively using AI to help build AI. This use of evolutionary algorithms to optimize neural networks is called neuroevolution, and could have an important role to play in helping design efficient AI as the use of intelligent systems becomes more prevalent, particularly as demand for data scientists often outstrips supply. The technique was recently showcased by Uber AI Labs, which released papers on using genetic algorithms to train deep neural networks for reinforcement learning problems.
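
The sketch below is a toy genetic algorithm in Python, evolving a bit-string toward a fixed target; the target and mutation rate are invented for illustration, but the select-crossover-mutate loop is the core mechanism described above (neuroevolution applies the same loop to the weights or architecture of a neural network).

```python
import random

# Toy genetic algorithm: evolve a bit-string toward a fixed target pattern.
# Matching the target stands in for whatever problem-specific fitness score
# a real system would optimise.
TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)       # best genomes first
    if fitness(population[0]) == len(TARGET):
        break                                        # perfect match evolved
    parents = population[:10]                        # selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]                  # recombination + mutation
    population = parents + children

print(f"solved in {generation} generations:", population[0])
```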

Finally there are expert systems, where computers are programmed with rules that allow them to take a series of decisions based on a large number of inputs, allowing that machine to mimic the behavior of a human expert in a specific domain. An example of these knowledge-based systems might be an autopilot system flying a plane.
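
A minimal sketch of the rule-based idea follows; the autopilot-style conditions and thresholds are invented purely for illustration, not taken from any real flight system.

```python
# Toy rule-based 'expert system': each rule maps observed conditions to a
# decision, mimicking how a human expert in a narrow domain would respond.
# The conditions and thresholds below are invented for illustration.
RULES = [
    (lambda f: f["altitude_ft"] < 1000 and f["descending"], "raise nose and add thrust"),
    (lambda f: f["airspeed_kt"] < 120, "increase thrust"),
    (lambda f: abs(f["heading_error_deg"]) > 5, "adjust heading"),
]

def decide(flight_state):
    for condition, action in RULES:
        if condition(flight_state):   # fire the first matching rule
            return action
    return "maintain course"

print(decide({"altitude_ft": 900, "descending": True,
              "airspeed_kt": 130, "heading_error_deg": 2}))   # raise nose and add thrust
```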

The biggest breakthroughs for AI research in recent years have been in the field of machine learning, in particular within the field of deep learning.

This has been driven in part by the easy availability of data, but even more so by an explosion in parallel computing power in recent years, during which time the use of GPU clusters to train machine-learning systems has become more prevalent.

Not only do these clusters offer vastly more powerful systems for training machine-learning models, but they are now widely available as cloud services over the internet. Over time the major tech firms, the likes of Google and Microsoft, have moved to using specialized chips tailored to both running, and more recently training, machine-learning models.

An example of one of these custom chips is Google's Tensor Processing Unit (TPU), the latest version of which accelerates the rate at which useful machine-learning models built using Google's TensorFlow software library can infer information from data, as well as the rate at which they can be trained.

These chips are not just used to train up models for DeepMind and Google Brain, but also the models that underpin Google Translate and the image recognition in Google Photos, as well as services that allow the public to build machine learning models using Google's TensorFlow Research Cloud. The second generation of these chips was unveiled at Google's I/O conference in May last year, with an array of these new TPUs able to train a Google machine-learning model used for translation in half the time it would take an array of the top-end graphics processing units (GPUs).

As mentioned, machine learning is a subset of AI and is generally split into two main categories: supervised and unsupervised learning.

Supervised learning

A common technique for teaching AI systems is by training them using a very large number of labeled examples. These machine-learning systems are fed huge amounts of data, which has been annotated to highlight the features of interest. These might be photos labeled to indicate whether they contain a dog or written sentences that have footnotes to indicate whether the word 'bass' relates to music or a fish. Once trained, the system can then apply these labels to new data, for example to a dog in a photo that's just been uploaded.

This process of teaching a machine by example is called supervised learning and the role of labeling these examples is commonly carried out by online workers, employed through platforms like Amazon Mechanical Turk.
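
Here is a tiny, hand-labeled illustration of supervised learning in Python using scikit-learn, echoing the 'bass' example above; the four sentences are invented, and production systems train on millions of annotated examples rather than a handful.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Labeled examples: each sentence is annotated with the sense of 'bass'.
sentences = [
    "the bass line drives the whole song",        # music
    "he plays bass in a jazz band",               # music
    "we caught a largemouth bass in the lake",    # fish
    "the bass was grilled with lemon",            # fish
]
labels = ["music", "music", "fish", "fish"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(sentences, labels)            # the supervised training step

# Apply the learned labels to new, unseen data.
print(model.predict(["she plays bass in a rock band"]))   # likely ['music']
```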


Training these systems typically requires vast amounts of data, with some systems needing to scour millions of examples to learn how to carry out a task effectively -- although this is increasingly possible in an age of big data and widespread data mining. Training datasets are huge and growing in size -- Google's Open Images Dataset has about nine million images, while its labeled video repository YouTube-8M links to seven million labeled videos. ImageNet, one of the early databases of this kind, has more than 14 million categorized images. Compiled over two years, it was put together by nearly 50,000 people -- most of whom were recruited through Amazon Mechanical Turk -- who checked, sorted, and labeled almost one billion candidate pictures.

In the long run, having access to huge labelled datasets may also prove less important than access to large amounts of compute power.

In recent years, Generative Adversarial Networks (GANs) have shown how machine-learning systems that are fed a small amount of labelled data can then generate huge amounts of fresh data to teach themselves.

This approach could lead to the rise of semi-supervised learning, where systems can learn how to carry out tasks using a far smaller amount of labelled data than is necessary for training systems using supervised learning today.

Unsupervised learning

Unsupervised learning, in contrast, is where algorithms try to identify patterns in data, looking for similarities that can be used to categorise that data.

An example might be clustering together fruits that weigh a similar amount or cars with a similar engine size.

The algorithm isn't set up in advance to pick out specific types of data; it simply looks for data that can be grouped by its similarities, for example Google News grouping together stories on similar topics each day.
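
A minimal clustering sketch, using scikit-learn's k-means on invented fruit weights, shows the same idea: no labels are supplied, and the groups emerge purely from similarity.

```python
import numpy as np
from sklearn.cluster import KMeans

# Unlabeled data: fruit weights in grams (invented). No categories are given
# in advance; the algorithm simply groups items that are similar.
weights = np.array([[110.0], [118.0], [122.0],      # apple-sized
                    [6.0], [7.0], [8.0],            # berry-sized
                    [1200.0], [1350.0], [1280.0]])  # melon-sized

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(weights)
print(clusters)   # three groups emerge purely from similarity of weight
```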

Reinforcement learning

A crude analogy for reinforcement learning is rewarding a pet with a treat when it performs a trick.

In reinforcement learning, the system attempts to maximize a reward based on its input data, basically going through a process of trial and error until it arrives at the best possible outcome.

An example of reinforcement learning is Google DeepMind's Deep Q-network, which has been used to best human performance in a variety of classic video games. The system is fed pixels from each game and determines various information, such as the distance between objects on screen.

By also looking at the score achieved in each game the system builds a model of which action will maximize the score in different circumstances, for instance, in the case of the video game Breakout, where the paddle should be moved to in order to intercept the ball.
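
The sketch below shows the trial-and-error idea with tabular Q-learning on an invented five-state corridor; Deep Q-networks replace the table with a deep neural network fed raw pixels, but the reward-driven update is the same in spirit.

```python
import random

# Tabular Q-learning on an invented five-state corridor: the only reward sits
# at the far end, and the agent discovers by trial and error that moving right
# maximises its future reward.
N_STATES, ACTIONS = 5, [-1, +1]          # actions: move left or move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.3    # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally, otherwise exploit the best known action.
        action = (random.choice(ACTIONS) if random.random() < epsilon
                  else max(ACTIONS, key=lambda a: Q[(state, a)]))
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# The learned policy prefers +1 (move right) in every non-terminal state.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
```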

Many AI-related technologies are approaching, or have already reached, the 'peak of inflated expectations' in Gartner's Hype Cycle, with the backlash-driven 'trough of disillusionment' lying in wait.

With AI playing an increasingly major role in modern software and services, each of the major tech firms is battling to develop robust machine-learning technology for use in-house and to sell to the public via cloud services.

Each regularly makes headlines for breaking new ground in AI research, although it is probably Google with its DeepMind AI AlphaGo that has made the biggest impact on public awareness of AI.

All of the major cloud platforms -- Amazon Web Services, Microsoft Azure and Google Cloud Platform -- provide access to GPU arrays for training and running machine learning models, with Google also gearing up to let users use its Tensor Processing Units -- custom chips whose design is optimized for training and running machine-learning models.

All of the necessary associated infrastructure and services are available from the big three: cloud-based data stores capable of holding the vast amounts of data needed to train machine-learning models, services to transform data to prepare it for analysis, visualisation tools to display the results clearly, and software that simplifies the building of models.

These cloud platforms are even simplifying the creation of custom machine-learning models, with Google recently revealing a service that automates the creation of AI models, called Cloud AutoML. This drag-and-drop service builds custom image-recognition models and requires the user to have no machine-learning expertise.

Cloud-based, machine-learning services are constantly evolving, and at the start of 2018, Amazon revealed a host of new AWS offerings designed to streamline the process of training up machine-learning models.

For those firms that don't want to build their own machine learning models but instead want to consume AI-powered, on-demand services -- such as voice, vision, and language recognition -- Microsoft Azure stands out for the breadth of services on offer, closely followed by Google Cloud Platform and then AWS. Meanwhile IBM, alongside its more general on-demand offerings, is also attempting to sell sector-specific AI services aimed at everything from healthcare to retail, grouping these offerings together under its IBM Watson umbrella -- and recently investing $2bn in buying The Weather Company to unlock a trove of data to augment its AI services.

Internally, each of the tech giants -- and others such as Facebook -- use AI to help drive myriad public services: serving search results, offering recommendations, recognizing people and things in photos, on-demand translation, spotting spam -- the list is extensive.

But one of the most visible manifestations of this AI war has been the rise of virtual assistants, such as Apple's Siri, Amazon's Alexa, the Google Assistant, and Microsoft Cortana.

The Amazon Echo Plus is a smart speaker with Amazon's Alexa virtual assistant built in.

A huge amount of tech goes into developing these assistants, which rely heavily on voice recognition and natural-language processing, and need an immense corpus to draw upon when answering queries.

But while Apple's Siri may have come to prominence first, it is Google and Amazon whose assistants have since overtaken Apple in the AI space -- Google Assistant with its ability to answer a wide range of queries and Amazon's Alexa with the massive number of 'Skills' that third-party devs have created to add to its capabilities.


Despite being built into Windows 10, Cortana has had a particularly rough time of late, with the suggestion that major PC makers will build Alexa into laptops adding to speculation about whether Cortana's days are numbered, although Microsoft was quick to reject this.

It'd be a big mistake to think the US tech giants have the field of AI sewn up. Chinese firms Alibaba, Baidu, and Lenovo are investing heavily in AI in fields ranging from ecommerce to autonomous driving. China is pursuing a three-step national plan to turn AI into a core industry, one that will be worth 150 billion yuan ($22bn) by 2020.

Baidu has invested in developing self-driving cars, powered by its deep learning algorithm, Baidu AutoBrain, and, following several years of tests, plans to roll out fully autonomous vehicles in 2018 and mass-produce them by 2021.

Baidu's self-driving car, a modified BMW 3 series.

Baidu has also partnered with Nvidia to use AI to create a cloud-to-car autonomous car platform for auto manufacturers around the world.

The combination of weak privacy laws, huge investment, concerted data-gathering, and big data analytics by major firms like Baidu, Alibaba, and Tencent, means that some analysts believe China will have an advantage over the US when it comes to future AI research, with one analyst describing the chances of China taking the lead over the US as 500 to one in China's favor.

While you could try to build your own GPU array at home and start training a machine-learning model, probably the easiest way to experiment with AI-related services is via the cloud.

All of the major tech firms offer various AI services, from the infrastructure to build and train your own machine-learning models through to web services that allow you to access AI-powered tools such as speech, language, vision and sentiment recognition on demand.

There are too many to put together a comprehensive list, but some recent highlights include: in 2009 Google showed it was possible for its self-driving Toyota Prius to complete more than 10 journeys of 100 miles each -- setting society on a path towards driverless vehicles.

In 2011, the computer system IBM Watson made headlines worldwide when it won the US quiz show Jeopardy!, beating two of the best players the show had ever produced. To win the show, Watson used natural language processing and analytics on vast repositories of data that it processed to answer human-posed questions, often in a fraction of a second.

IBM Watson competes on Jeopardy! on January 14, 2011.

In June 2012, it became apparent just how good machine-learning systems were getting at computer vision, with Google training a system to recognise an internet favorite, pictures of cats.

Since Watson's win, perhaps the most famous demonstration of the efficacy of machine-learning systems was the 2016 triumph of the Google DeepMind AlphaGo AI over a human grandmaster in Go, an ancient Chinese game whose complexity stumped computers for decades. Go has about 200 moves per turn, compared to about 20 in Chess. Over the course of a game of Go, there are so many possible moves that searching through each of them in advance to identify the best play is too costly from a computational point of view. Instead, AlphaGo was trained how to play the game by taking moves played by human experts in 30 million Go games and feeding them into deep-learning neural networks.

Training these deep learning networks can take a very long time, requiring vast amounts of data to be ingested and iterated over as the system gradually refines its model in order to achieve the best outcome.

However, more recently Google refined the training process with AlphaGo Zero, a system that played "completely random" games against itself, and then learnt from the results. At last year's prestigious Neural Information Processing Systems (NIPS) conference, Google DeepMind CEO Demis Hassabis revealed AlphaGo had also mastered the games of chess and shogi.

And AI continues to sprint past new milestones: last year a system trained by OpenAI defeated the world's top players in one-on-one matches of the online multiplayer game Dota 2.

That same year, OpenAI created AI agents that invented their own language to cooperate and achieve their goal more effectively, shortly followed by Facebook training agents to negotiate and even lie.

Robots and driverless cars

The desire for robots to be able to act autonomously and understand and navigate the world around them means there is a natural overlap between robotics and AI. While AI is only one of the technologies used in robotics, its use is helping robots move into new areas such as self-driving cars and delivery robots, as well as helping them learn new skills. General Motors recently said it would build a driverless car without a steering wheel or pedals by 2019, while Ford committed to doing so by 2021, and Waymo, the self-driving group inside Google parent Alphabet, will soon offer a driverless taxi service in Phoenix.

Fake news

We are on the verge of having neural networks that can create photo-realistic images or replicate someone's voice in a pitch-perfect fashion. With that comes the potential for hugely disruptive social change, such as no longer being able to trust video or audio footage as genuine. Concerns are also starting to be raised about how such technologies will be used to misappropriate people's image, with tools already being created to convincingly splice famous actresses into adult films.

Speech and language recognition

Machine-learning systems have helped computers recognize what people are saying with an accuracy of almost 95 percent. Recently Microsoft's Artificial Intelligence and Research group reported it had developed a system able to transcribe spoken English as accurately as human transcribers.

With researchers pursuing a goal of 99 percent accuracy, expect speaking to computers to become the norm alongside more traditional forms of human-machine interaction.

Facial recognition and surveillance

In recent years, the accuracy of facial-recognition systems has leapt forward, to the point where Chinese tech giant Baidu says it can match faces with 99 percent accuracy, provided the face is clear enough in the video. While police forces in western countries have generally only trialled using facial-recognition systems at large events, in China the authorities are mounting a nationwide program to connect CCTV across the country to facial recognition and to use AI systems to track suspects and suspicious behavior, and are also trialling the use of facial-recognition glasses by police.

Although privacy regulations vary across the world, it's likely this more intrusive use of AI technology -- including AI that can recognize emotions -- will gradually become more widespread elsewhere.

Healthcare

AI could eventually have a dramatic impact on healthcare, helping radiologists to pick out tumors in x-rays, aiding researchers in spotting genetic sequences related to diseases and identifying molecules that could lead to more effective drugs.

There have been trials of AI-related technology in hospitals across the world. These include IBM's Watson clinical decision support tool, which is trained by oncologists at Memorial Sloan Kettering Cancer Center, and the use of Google DeepMind systems by the UK's National Health Service, where it will help spot eye abnormalities and streamline the process of screening patients for head and neck cancers.

Again, it depends who you ask. As AI-powered systems have grown more capable, so warnings of the downsides have become more dire.

Tesla and SpaceX CEO Elon Musk has claimed that AI is a "fundamental risk to the existence of human civilization". As part of his push for stronger regulatory oversight and more responsible research into mitigating the downsides of AI he set up OpenAI, a non-profit artificial intelligence research company that aims to promote and develop friendly AI that will benefit society as a whole. Similarly, the esteemed physicist Stephen Hawking has warned that once a sufficiently advanced AI is created it will rapidly advance to the point at which it vastly outstrips human capabilities, a phenomenon known as the singularity, and could pose an existential threat to the human race.

Yet the notion that humanity is on the verge of an AI explosion that will dwarf our intellect seems ludicrous to some AI researchers.

Chris Bishop, Microsoft's director of research in Cambridge, England, stresses how different the narrow intelligence of AI today is from the general intelligence of humans, dismissing worries about "Terminator and the rise of the machines and so on" as "utter nonsense" and saying that, at best, such discussions are decades away.

The possibility of artificially intelligent systems replacing much of modern manual labour is perhaps a more credible near-future possibility.

While AI won't replace all jobs, what seems to be certain is that AI will change the nature of work, with the only question being how rapidly and how profoundly automation will alter the workplace.

There is barely a field of human endeavour that AI doesn't have the potential to impact. As AI expert Andrew Ng puts it: "many people are doing routine, repetitive jobs. Unfortunately, technology is especially good at automating routine, repetitive work", saying he sees a "significant risk of technological unemployment over the next few decades".

The evidence of which jobs will be supplanted is starting to emerge. Amazon has just launched Amazon Go, a cashier-free supermarket in Seattle where customers just take items from the shelves and walk out. What this means for the more than three million people in the US who work as cashiers remains to be seen. Amazon again is leading the way in using robots to improve efficiency inside its warehouses. These robots carry shelves of products to human pickers who select items to be sent out. Amazon has more than 100,000 bots in its fulfilment centers, with plans to add many more. But Amazon also stresses that as the number of bots has grown, so has the number of human workers in these warehouses. However, Amazon and small robotics firms are working to automate the remaining manual jobs in the warehouse, so it's not a given that manual and robotic labor will continue to grow hand-in-hand.

Amazon bought Kiva Systems in 2012 and today uses Kiva robots throughout its warehouses.


artificial intelligence | Definition, Examples, and …

Posted: at 5:34 am

Artificial intelligence (AI), the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience. Since the development of the digital computer in the 1940s, it has been demonstrated that computers can be programmed to carry out very complex tasks -- as, for example, discovering proofs for mathematical theorems or playing chess -- with great proficiency. Still, despite continuing advances in computer processing speed and memory capacity, there are as yet no programs that can match human flexibility over wider domains or in tasks requiring much everyday knowledge. On the other hand, some programs have attained the performance levels of human experts and professionals in performing certain specific tasks, so that artificial intelligence in this limited sense is found in applications as diverse as medical diagnosis, computer search engines, and voice or handwriting recognition.

All but the simplest human behaviour is ascribed to intelligence, while even the most complicated insect behaviour is never taken as an indication of intelligence. What is the difference? Consider the behaviour of the digger wasp, Sphex ichneumoneus. When the female wasp returns to her burrow with food, she first deposits it on the threshold, checks for intruders inside her burrow, and only then, if the coast is clear, carries her food inside. The real nature of the wasp's instinctual behaviour is revealed if the food is moved a few inches away from the entrance to her burrow while she is inside: on emerging, she will repeat the whole procedure as often as the food is displaced. Intelligence -- conspicuously absent in the case of Sphex -- must include the ability to adapt to new circumstances.

Psychologists generally do not characterize human intelligence by just one trait but by the combination of many diverse abilities. Research in AI has focused chiefly on the following components of intelligence: learning, reasoning, problem solving, perception, and using language.

There are a number of different forms of learning as applied to artificial intelligence. The simplest is learning by trial and error. For example, a simple computer program for solving mate-in-one chess problems might try moves at random until mate is found. The program might then store the solution with the position so that the next time the computer encountered the same position it would recall the solution. This simple memorizing of individual items and procedures -- known as rote learning -- is relatively easy to implement on a computer. More challenging is the problem of implementing what is called generalization. Generalization involves applying past experience to analogous new situations. For example, a program that learns the past tense of regular English verbs by rote will not be able to produce the past tense of a word such as 'jump' unless it previously had been presented with 'jumped', whereas a program that is able to generalize can learn the 'add ed' rule and so form the past tense of 'jump' based on experience with similar verbs.
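
A few lines of Python make the contrast concrete; the verb list is invented for illustration, and the 'add ed' rule is applied blindly, exactly as a generalizing program would for regular verbs.

```python
# Rote learning: store each (verb, past tense) pair exactly as presented.
rote_memory = {"walk": "walked", "play": "played", "climb": "climbed"}

def past_tense_rote(verb):
    # Fails on anything it has not previously memorised.
    return rote_memory.get(verb, "unknown")

# Generalisation: extract the 'add ed' rule from the examples seen so far
# and apply it to new regular verbs.
def past_tense_generalised(verb):
    return verb + "ed"

print(past_tense_rote("jump"))          # 'unknown' -- never seen before
print(past_tense_generalised("jump"))   # 'jumped' -- rule applied to a new verb
```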


Google open-sources framework that reduces AI training costs by up to 80% – VentureBeat

Posted: at 5:34 am

Google researchers recently published a paper describing a framework, SEED RL, that scales AI model training to thousands of machines. They say that it could facilitate training at millions of frames per second on a machine while reducing costs by up to 80%, potentially leveling the playing field for startups that couldn't previously compete with large AI labs.

Training sophisticated machine learning models in the cloud remains prohibitively expensive. According to a recent Synced report, the University of Washington's Grover, which is tailored for both the generation and detection of fake news, cost $25,000 to train over the course of two weeks. OpenAI racked up $256 per hour to train its GPT-2 language model, and Google spent an estimated $6,912 training BERT, a bidirectional transformer model that redefined the state of the art for 11 natural language processing tasks.

SEED RL, which is based on Google's TensorFlow 2.0 framework, features an architecture that takes advantage of graphics cards and tensor processing units (TPUs) by centralizing model inference. To avoid data transfer bottlenecks, it performs AI inference centrally with a learner component that trains the model using input from distributed inference. The target model's variables and state information are kept local, while observations are sent to the learner at every environment step, and latency is kept to a minimum thanks to a network library based on the open source universal RPC framework.

SEED RL's learner component can be scaled across thousands of cores (e.g., up to 2,048 on Cloud TPUs), and the number of actors -- which iterate between taking steps in the environment and running inference on the model to predict the next action -- can scale up to thousands of machines. One algorithm, V-trace, predicts an action distribution from which an action can be sampled, while another, R2D2, selects an action based on the predicted future value of that action.
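
The sketch below is a schematic of that centralized-inference split in plain Python, not the actual SEED RL API: actors only step their environments and ship observations to a single learner, which chooses every action (and, in the real system, would also batch trajectories to update the model on an accelerator).

```python
import queue
import random
import threading

# Schematic only (not the SEED RL API): actors step their environments and
# send observations to one central learner, which performs all model
# inference and returns the chosen action.
obs_queue = queue.Queue()

def policy(observation):
    # Stand-in for centralized model inference on an accelerator.
    return random.choice([0, 1])

def learner(total_steps):
    for _ in range(total_steps):
        actor_id, observation, reply = obs_queue.get()
        reply.put(policy(observation))          # inference happens only here

def actor(actor_id, steps):
    observation = 0.0
    for _ in range(steps):
        reply = queue.Queue(maxsize=1)
        obs_queue.put((actor_id, observation, reply))   # ship obs to learner
        action = reply.get()                            # receive action back
        observation += action                           # step the environment

actors = [threading.Thread(target=actor, args=(i, 10)) for i in range(4)]
brain = threading.Thread(target=learner, args=(40,))
brain.start()
for t in actors:
    t.start()
for t in actors:
    t.join()
brain.join()
print("finished 4 actors x 10 environment steps")
```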

To evaluate SEED RL, the research team benchmarked it on the commonly used Arcade Learning Environment, several DeepMind Lab environments, and the Google Research Football environment. They say that they managed to solve a previously unsolved Google Research Football task and that they achieved 2.4 million frames per second with 64 Cloud TPU cores, an 80-fold improvement over the previous state-of-the-art distributed agent.

"This results in a significant speed-up in wall-clock time and, because accelerators are orders of magnitude cheaper per operation than CPUs, the cost of experiments is reduced drastically," wrote the coauthors of the paper. "We believe SEED RL, and the results presented, demonstrate that reinforcement learning has once again caught up with the rest of the deep learning field in terms of taking advantage of accelerators."


Will COVID-19 Create a Big Moment for AI and Machine Learning? – Dice Insights

Posted: at 5:34 am

COVID-19 will change how the majority of us live and work, at least in the short term. It's also creating a challenge for tech companies such as Facebook, Twitter and Google that ordinarily rely on lots and lots of human labor to moderate content. Are A.I. and machine learning advanced enough to help these firms handle the disruption?

First, it's worth noting that, although Facebook has instituted a sweeping work-from-home policy in order to protect its workers (along with Google and a rising number of other firms), it initially required its contractors who moderate content to continue to come into the office. That situation only changed after protests, according to The Intercept.

Now, Facebook is paying those contractors while they sit at home, since the nature of their work (scanning people's posts for content that violates Facebook's terms of service) is extremely privacy-sensitive. Here's Facebook's statement:

For both our full-time employees and contract workforce there is some work that cannot be done from home due to safety, privacy and legal reasons. We have taken precautions to protect our workers by cutting down the number of people in any given office, implementing recommended work from home globally, physically spreading people out at any given office and doing additional cleaning. Given the rapidly evolving public health concerns, we are taking additional steps to protect our teams and will be working with our partners over the course of this week to send all contract workers who perform content review home, until further notice. We'll ensure that all workers are paid during this time.

Facebook, Twitter, Reddit, and other companies are in the same proverbial boat: There's an increasing need to police their respective platforms, if only to eliminate fake news about COVID-19, but the workers who handle such tasks can't necessarily do so from home, especially on their personal laptops. The potential solution? Artificial intelligence (A.I.) and machine-learning algorithms meant to scan questionable content and make a decision about whether to eliminate it.

Here's Google's statement on the matter, via its YouTube Creator Blog.

Our Community Guidelines enforcement today is based on a combination of people and technology: Machine learning helps detect potentially harmful content and then sends it to human reviewers for assessment. As a result of the new measures we're taking, we will temporarily start relying more on technology to help with some of the work normally done by reviewers. This means automated systems will start removing some content without human review, so we can continue to act quickly to remove violative content and protect our ecosystem, while we have workplace protections in place.

To be fair, the tech industry has been heading in this direction for some time. Relying on armies of human beings to read through every piece of content on the web is expensive, time-consuming, and prone to error. But A.I. and machine learning are still nascent, despite the hype. Google itself, in the aforementioned blog posting, pointed out how its automated systems may flag the wrong videos. Facebook is also receiving criticism that its automated anti-spam system is whacking the wrong posts, including those that offer vital information on the spread of COVID-19.

If the COVID-19 crisis drags on, though, more companies will no doubt turn to automation as a potential solution to disruptions in their workflow and other processes. That will force a steep learning curve; again and again, the rollout of A.I. platforms has demonstrated that, while the potential of the technology is there, implementation is often a rough and expensive process -- just look at Google Duplex.


Nonetheless, an aggressive embrace of A.I. will also create more opportunities for those technologists who have mastered A.I. and machine-learning skills of any sort; these folks may find themselves tasked with figuring out how to automate core processes in order to keep businesses running.

Before the virus emerged, Burning Glass (which analyzes millions of job postings from across the U.S.) estimated that jobs involving A.I. would grow 40.1 percent over the next decade. That percentage could rise even higher if the crisis fundamentally alters how people across the world live and work. (The median salary for these positions is $105,007; for those with a PhD, it drifts up to $112,300.)

If you're trapped at home and have some time to learn a little bit more about A.I., it could be worth your time to explore online learning resources. For instance, there's a Google crash course in machine learning. Hacker Noon also offers an interesting breakdown of machine learning and artificial intelligence. Then there's Bloomberg's Foundations of Machine Learning, a free online course that teaches advanced concepts such as optimization and kernel methods.


Modex 2020: MRO, ML and AI – Supply Chain Management Review

Posted: at 5:34 am

There's a lot of buzz around Artificial Intelligence and Machine Learning, as supply chain leaders look for the killer app within their organizations. One promising area has been demand planning. Another has been MRO, as companies look to move from preventative to predictive maintenance. The idea is that by putting sensors on motors, gears and other critical components that measure conditions like heat, voltage output and vibration, a technician can better predict when that piece of equipment might fail. If it works, PMs could become a thing of the past.

So where are we? That was one of the questions I put to Phil Jones, Target's director of supply chain engineering, and Phil Gilkes, a regional maintenance manager with Dollar Tree, during a symposium put on by the National Center for Supply Chain Automation. The framework for that question was an asset management maturity model slide that Jones put together, illustrating the progression from an MRO organization at Level 0 (Survival Mode), where equipment runs to failure and repairs are done as problems occur, to Level 4 (Predictive Maintenance), where AI and ML are utilized to analyze events, predict the timing of future issues and schedule maintenance.

According to Jones and Gilkes, Condition Based and Predictive Maintenance is the goal for both organizations, but realistically, both are somewhere between Level 0, with equipment that runs to failure, Level 1 (Calendar Based Maintenance), with time-based PMs, and Level 2 (Usage Based Maintenance), where PMs are based on run times or the number of hours on a piece of equipment. Both organizations are investigating the first step to get to condition-based maintenance, which is putting sensors on machines to monitor conditions, but neither was there yet, or at least not beyond piloting. And remember that these are two large organizations with a network of distribution centers and experience with automation.

One of the challenges for the MRO industry in getting to that Level 4 Predictive Maintenance is going to be data. In order to produce reliable and actionable results, AI and ML need data, and lots of it. Otherwise, the risk is that the maintenance system will start flagging issues that aren't really issues, noted John Sorensen, senior vice president of lifecycle performance services at MHS. "You don't want technicians and maintenance managers to think that the system cries wolf."

Where, then, does a progressive maintenance team start? Sorensen and Rob Schmidt, MHS's senior vice president of distribution and fulfillment, both recommended a crawl, walk, run, sprint approach similar to the adoption of any new technology.

Don't try to put sensors on every motor and gear in a facility, which can number more than 1,000. Rather, start by categorizing components and equipment according to their criticality to the operation. Running until it breaks might be appropriate for some pieces of equipment, especially if spare parts are in inventory and the equipment is easy to fix. A limited number of sensors might be appropriate on items that are more critical, like a PLC. And finally, a broad array of sensors that can begin to gather important operating data in bulk can be deployed on mission-critical items where reliability counts. At the same time, added Sorensen, "you might just need a data set of 200 sensors to begin the journey."
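
As a back-of-the-envelope illustration of condition-based monitoring, the sketch below compares recent vibration readings against a learned baseline and only raises an alert on a clear, sustained shift, to avoid the "crying wolf" problem Sorensen describes; all readings and thresholds are invented.

```python
import statistics

# Toy condition check: compare recent vibration readings against a healthy
# baseline and only alert on a clear, sustained shift. All numbers invented.
baseline = [0.42, 0.40, 0.43, 0.41, 0.39, 0.42, 0.40, 0.41]   # healthy mm/s readings
recent   = [0.44, 0.47, 0.52, 0.55, 0.58, 0.61]               # latest sensor samples

threshold = statistics.mean(baseline) + 3 * statistics.stdev(baseline)
exceedances = [r for r in recent if r > threshold]

if len(exceedances) >= 3:   # require several high readings, not a single blip
    print(f"Schedule inspection: vibration trending above {threshold:.2f} mm/s")
else:
    print("No action: readings within the expected range")
```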


How are AI and analytics disrupting the manufacturing sector? – Technology Record

Posted: at 5:34 am

What are the characteristics of companies that are disrupting the manufacturing sector?

Disruptive manufacturers have two key attributes: the ability to quickly adopt emerging technologies and the ability to cultivate a culture where employees are open to innovation, easy to train and capable of quickly and proactively adapting to new market trends, processes and operating models. These characteristics enable manufacturers to rapidly implement new technologies and reap the benefits long before their competitors.

Which technologies are helping these manufacturers to succeed?

Every manufacturer wants to minimise machine downtime and optimise plant floor operations. Disruptors know that the key to achieving this is to capture and analyse data. Technologies like artificial intelligence (AI), analytics, internet of things, deep learning and machine learning make it easy to collect data about machine performance so manufacturers can carry out predictive maintenance and optimise processes. Front-office teams also use these technologies to understand and quickly react to the demands of their customers and buyers.

What challenges do non-disruptive manufacturing companies face and how can they overcome these?

Many non-disruptors are older companies with very rigid cultures and employees who are reluctant to adopt new technologies due to uncertainty about how they will impact job roles. To secure employee buy-in, executives should share their vision of how humans and machines will work in unison across their company. They should also highlight how new technology will improve their employees' ability to complete tasks.

Most manufacturers struggle to keep pace with technology evolution. New solutions are being introduced constantly and deciding which to invest in and how to deploy them can take a while. This puts manufacturers at risk of lagging behind when the next big technology emerges.

Can you share your vision for the future of manufacturing?

2020 will be a year of continuation rather than revolution. Manufacturing companies will take full control of their data by organising it into usable systems that can be accessed via both the cloud and on-premises servers. Meanwhile, technology like 5G will continue to grow as manufacturers look for dependable connectivity on factory floors, and augmented reality will be used to improve human-machine interactions.

Human-machine partnerships will proliferate, embedding automation deeper into the manufacturing space and driving near 100% uptime. Companies will have to accept that they can only drive speed and agility with tools like AI, machine learning and robotics, and redirect human employees to nuanced and empathic tasks that require deeper contextual knowledge.

This article was originally published in the Winter 2019 issue of The Record.


AI vs COVID-19: Here are the AI tools and services fighting coronavirus – AI News

Posted: at 5:34 am

AI tools and services are being used or offered by companies around the world to help fight the coronavirus pandemic.

In a best-case scenario, whereby the virus transmission is massively mitigated, researchers from Imperial College London predict there would still be in the order of 250,000 deaths in GB, and 1.1 to 1.2 million in the US, resulting from the coronavirus.

Imperial College London's analysis landed in Washington over the weekend and it's said to be the reason behind the US stepping up its response. British PM Johnson warned that further measures in the UK will likely be introduced in the coming days and a coronavirus bill for emergency powers is making its way to the House of Commons.

Much like in wartime, technologies and social experiments that under normal circumstances would take years or decades to be tested and implemented will be rushed into use in days or weeks.

China's Tianhe-1 supercomputer is offering doctors around the world free access to an AI diagnosis tool for identifying coronavirus patients based on a chest scan. The supercomputer can sift through hundreds of images generated by computed tomography (CT) and can give a diagnosis in about 10 seconds.

Alibaba Cloud has launched a series of AI technologies including the International Medical Expert Communication Platform on Alibaba Group's enterprise chat and collaboration app, DingTalk. The platform allows verified medical personnel around the world to share their experiences through online messaging and video conferencing.

Another solution from Alibaba estimates the trajectory of a coronavirus outbreak in a specific region using a machine learning algorithm based on public data gathered from 31 provinces in China. Within China, it has a 98 percent accuracy rate.

For researchers and institutions working hard towards a vaccine, Alibaba has opened its AI-powered computational platform to accelerate data transfer and computation time in areas such as virtual drug screening.

Several of the other leading cloud players in China including Baidu and Tencent have opened up specific parts of their solutions for free to qualifying medical personnel. In the US, Microsoft and Google have also done the same.

Last month, scientists from South Korea-based firm Deargen published a paper with the results from a deep learning-based model called MT-DTI which predicted that, of available FDA-approved antiviral medication, the HIV drug atazanavir is the most likely to bind and block a prominent protein on the outside of the virus which causes COVID-19. In early trials, coronavirus sufferers are reportedly improving significantly using HIV drugs.

Hong Kong-based Insilico Medicine also published a paper in February which, instead of seeking to repurpose available drugs, detailed the use of a drug discovery platform which generated tens of thousands of novel molecules with the potential to bind a specific SARS-CoV-2 protein and block the virus's ability to replicate. A deep learning filtering system helped Insilico narrow down the list and the company has synthesised two of the seven molecules and plans to test them in the next two weeks with a pharmaceutical partner.

British AI startup Benevolent AI has also been active in seeking to identify approved drugs that might block the viral replication of COVID-19. The company's AI system examined a large repository of medical information to identify six compounds that effectively block a cellular pathway that appears to allow the virus into cells to make more virus particles. Baricitinib, used for treating rheumatoid arthritis, looks to be the most effective against the virus.

For its part, the White House has urged AI experts to analyse a dataset of 29,000 scholarly articles about coronavirus and use them to develop text and data-mining techniques to help scientists answer key questions about COVID-19.

The entire COVID-19 Open Research Dataset (CORD-19) has been made available on SemanticScholar and will be updated whenever new research is published.

While the outlook around the world is currently grim, some of these AI-powered tools and developments offer a glimmer of hope that we may be able to reduce the virus's spread, improve treatment for patients, and ultimately conquer the coronavirus sooner than would otherwise have been possible.



How Is AI Helping To Commercialize Space? – Forbes

Posted: at 5:34 am


Even before modern computers became a reality, science fiction gave us a plethora of examples of artificial intelligence and smart robots in the context of outer space. From HAL in 2001: A Space Odyssey and the computer on Star Trek to C3PO and R2D2 in Star Wars and even the fantastic machines in Hitchhiker's Guide to the Galaxy, it seems that AI and space go together. While those examples are fiction, we are indeed starting to see examples in the real world where we are using artificial intelligence to help commercialize space.

AI Assisting in the Manufacturing of Satellites and Spacecraft

Satellites and spacecraft are complex and expensive pieces of equipment to put together. Within the spacecraft manufacturing operations, there are repetitive and complex tasks that need to be done with exacting measures of precision and often must be done in clean rooms with little exposure to potential contamination. AI-enabled systems and robotics are being used to help the manufacturing process and take away some of the tasks that humans currently do so that humans can focus on the parts that computers can't assemble.

When working to assemble satellites, not only can AI help to physically speed up the process, but it can analyze the process itself to see if there are ways it can be improved. The AI is also able to look at the work that has been performed and ensure that everything is done properly. Furthermore, the use of collaborative robots (cobots) as part of the manufacturing process is helping to reduce the need for human workers in clean rooms and to make error-prone manufacturing steps more reliable.

AI-enhanced imagery

Satellites are generating thousands, if not millions, of images every minute of the day, and process about 150 terabytes of data every day. These images capture everything from weather and environmental data to pictures of every inch of the globe at resolutions down to just inches. Capturing images of Earth automatically introduces a number of challenges and opportunities where AI is helping. Without AI, humans are mostly responsible for interpreting, understanding, and analyzing imagery. By the time a human gets around to interpreting an image, you may have to wait for the satellite to move back around to the same position to further refine image analysis.

The power of deep learning and AI-enabled recognition provides significant capability for analyzing images, making it possible to review the millions of images produced by spacecraft. Artificial intelligence on the satellite itself can analyze images as they are being taken and determine whether there are any issues with them. Unlike humans, AI does not need to sleep or take breaks, so it can rapidly process a lot of data. Using AI to capture images of Earth also prevents the need for large amounts of communication to and from Earth to analyze photos and determine whether a new photo needs to be taken. By cutting back on communication, the AI is saving processing power, reducing battery usage, and speeding up the image-gathering process.
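
The sketch below is a schematic of that on-board triage decision; the "cloud score" is a crude brightness proxy standing in for what a trained vision model would actually predict, and the thresholds are invented.

```python
import numpy as np

# On-board triage sketch: score each frame as it is captured and decide
# immediately whether to keep it or schedule another pass, instead of waiting
# for analysts on the ground. The 'cloud score' is a crude brightness proxy
# standing in for a trained vision model's output.
def cloud_score(image):
    return float(np.mean(image > 0.85))   # fraction of very bright pixels

def triage(image, max_cloud=0.3):
    if cloud_score(image) > max_cloud:
        return "recapture"    # too obscured; image the target again next pass
    return "downlink"         # usable; transmit to the ground station

rng = np.random.default_rng(1)
clear_frame = rng.uniform(0.0, 0.6, size=(64, 64))    # mostly dark pixels
cloudy_frame = rng.uniform(0.7, 1.0, size=(64, 64))   # mostly bright pixels
print(triage(clear_frame), triage(cloudy_frame))      # downlink recapture
```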

Satellites are also being used to analyze natural disasters from space. Detailed imagery from a satellite can help those on the ground to see victims, determine the course of the disaster, and more. Artificial intelligence is being used to help speed up the response of satellites to natural disasters. With the help of the onboard AI, satellites are able to determine where a natural disaster is located and navigate to that location. They are also able to automate the image gathering process so that the computer does not have to wait for a human in order to have a quick response.

AI systems are even being used to help analyze data collected from probes heading into deep space to see if the worlds they observe are capable of supporting life. The AI looks for patterns in the data to help determine whether a world is habitable or might host some form of life. Potential planets are then sent to humans for further review.

Monitor the Health of Satellites

Satellites are complex pieces of equipment to operate. There are many potential problems that could arise, from equipment malfunctions to collisions with other satellites. In order to help keep satellites functioning properly, AI is used to monitor the health of satellites. AI can keep constant watch on sensors and equipment, provide alerts, and in some cases, carry out corrective action. SpaceX for example, uses AI to keep its satellites from colliding with other objects in space.

AI is also used to control the navigation of satellites and other spacecraft. The AI is able to look at the patterns of other satellites, planets, and space debris. Once the AI has found the patterns, it is able to change the path of the craft to avoid any collisions. While this is proving powerful, some AI experts have concerns about the potential vulnerability or failure of these systems. Experts believe that with AI navigation installed on a spacecraft, the craft becomes more vulnerable. Turning to AI for cybersecurity and craft health monitoring can help to counteract this, though.

In addition to keeping spacecraft operational, communicating between Earth and space can be challenging. Depending on the state of the atmosphere, interference from other signals and the environment, there may be a lot of communications difficulties that a satellite needs to overcome. AI is now being used to help control satellite communication to overcome any transmission problems. These AI-enabled systems are able to determine the amount of power and frequencies that are needed to transmit data back to Earth or to other satellites. With an AI onboard, the satellite is constantly doing this so that signals can get through as the satellite continues in its orbit.

Even spacecraft on other planets or deep in space are using AI in their operation, such as the Mars rovers currently operating on the red planet. On a recent AI Today podcast, NASA Jet Propulsion Laboratory (JPL) chief Tom Soderstrom shared insights into how AI is being used for the Mars rovers, spacecraft, and operations at facilities across the world.

AI on the Mars rover is used to help it navigate the planet. The computer is able to make multiple changes to the rover's course every minute. The technology behind the Mars rovers is very similar to that used by self-driving cars. The major difference is that the rover has to navigate more complicated terrain and does not have other vehicular or pedestrian traffic to take into account. That complicated terrain is analyzed by the computer vision systems in the rover as it moves. If a terrain problem is encountered, the autonomous system makes a change to the course of the rover to avoid it or adjusts navigation.

AI and Space: Made for Each Other

Over the last few years we have continued to see a large effort to commercialize space. Several companies are even looking to start tourist trips into space. Artificial intelligence is working to make space commercialization a possibility and to make space a safe environment in which to operate. The various benefits of AI in space all work together to enable further venturing into the unknown.

View original post here:

How Is AI Helping To Commercialize Space? - Forbes


‘Embrace the AI revolution’: The growing role of AI in audio workflows – PSNEurope

Posted: at 5:34 am

Artificial Intelligence (AI) is having a transformative effect on a huge range of industries, and the world of media and entertainment is no exception. Creators and machines are continuing to become more intertwined, with creative workflows taking on new shapes as AI assistance gathers momentum.

At a broad level, people are recognising that technology and creativity go hand in hand. Creative professionals are expressing an interest in how AI and machine learning can aid the creative process. And although the discussion about machines replacing humans remains prevalent, the reality is much less dystopian: rather than worrying about losing their jobs to technology, creative professionals are recognising the potential for AI-powered tools to make processes more intuitive and reduce the time spent on tedious, uncreative tasks.

In audio specifically, it's no secret that AI is quickly becoming a vital cog in the machine, and the truth is we're only just scratching the surface. So, what role is AI currently playing within audio workflows, and how is this growing trend likely to develop in the future?

Transforming workflows

When it comes to audio workflows, there are three main areas where AI is starting to have an impact: assisted mastering, assisted mixing and assisted composition. All three are at slightly different points on the adoption scale.

For example, AI is already well established in the mastering process despite this arguably being the most specialised area of music production. The goal of mastering is to make the listening experience consistent across all formats. The process varies across different formats (Spotify, CDs, movies, etc.) as each has different loudness constraints, making mastering extremely technical and potentially costly.
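
To make the loudness side of this concrete, here is a deliberately crude sketch: estimate programme loudness and compute the gain needed to reach a per-format target. Real mastering chains measure K-weighted LUFS (ITU-R BS.1770) and apply true-peak limiting; the RMS estimate and the target values below are simplifications, not platform specifications.

```python
import numpy as np

# Illustrative loudness targets only; real platforms publish LUFS targets
# and true-peak ceilings that differ from these placeholder values.
TARGETS = {"streaming": -14.0, "cd": -9.0, "film": -24.0}

def rms_dbfs(samples: np.ndarray) -> float:
    """Crude programme-loudness estimate: RMS level in dB relative to full scale."""
    rms = np.sqrt(np.mean(np.square(samples)))
    return 20.0 * np.log10(max(rms, 1e-12))

def gain_for(samples: np.ndarray, fmt: str) -> float:
    """Gain in dB needed to move the mix toward the target for the given format."""
    return TARGETS[fmt] - rms_dbfs(samples)

# Example: a quiet test tone that needs boosting for a streaming master.
t = np.linspace(0, 1, 48_000, endpoint=False)
mix = 0.05 * np.sin(2 * np.pi * 440 * t)
print(round(gain_for(mix, "streaming"), 1))  # roughly +15 dB of make-up gain
```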

There are very few skilled mastering engineers around, but AI is proving to be a viable and democratising alternative for many musicians. By analysing data and learning from previous tracks, AI-powered tools enable less experienced engineers to quickly and easily achieve professional results, albeit without the finesse of a human expert.

Next, we come to assisted mixing which, although currently slightly behind mastering in terms of adoption, is developing fast. With so much content being created for OTT services such as Netflix and Amazon Prime, the volume of audio work happening in post is increasing dramatically. Facilities are therefore looking for ways to work faster and more cost-efficiently.

AI tools can help engineers and audio teams make basic decisions and complete the more routine tasks, thereby saving valuable pre-mixing time and enabling humans to focus on the more complex and creative elements.

For example, some mixing plugins contain built-in intelligence that analyses source material (such as guitars or vocals) and places it in the context of the rest of the mix to suggest mixing decisions. By taking on much of the initial heavy lifting, tools like these can be hugely beneficial for less experienced users.
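
A toy version of that kind of analysis might compare the band energy of a vocal against the rest of the mix and point out where the backing track masks it most. The band edges and the decision rule below are invented for illustration and are not taken from any shipping plugin.

```python
import numpy as np

BANDS_HZ = [(100, 300), (300, 1000), (1000, 3000), (3000, 8000)]

def band_energies(signal, sr):
    """Sum of spectral power in each band."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)
    return [float(spectrum[(freqs >= lo) & (freqs < hi)].sum()) for lo, hi in BANDS_HZ]

def suggest_cut(vocal, backing, sr=44_100):
    """Suggest the band where the backing most overwhelms the vocal."""
    v, b = band_energies(vocal, sr), band_energies(backing, sr)
    total_v = sum(v) + 1e-12
    # Only consider bands that carry a meaningful share of the vocal's energy.
    candidates = {i: b[i] / (v[i] + 1e-12)
                  for i in range(len(BANDS_HZ)) if v[i] / total_v > 0.05}
    if not candidates:
        return "No obvious masking detected"
    lo, hi = BANDS_HZ[max(candidates, key=candidates.get)]
    return f"Consider a small cut on the backing track around {lo}-{hi} Hz"

# Example: a backing track crowded around 2 kHz, right where the vocal sits.
sr = 44_100
t = np.arange(sr) / sr
vocal = np.sin(2 * np.pi * 2000 * t)
backing = 0.3 * np.sin(2 * np.pi * 200 * t) + 2.0 * np.sin(2 * np.pi * 2200 * t)
print(suggest_cut(vocal, backing, sr))  # -> suggests the 1000-3000 Hz band
```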

Finally, there's audio composition, another area of music production that is quickly realising the value of AI. More and more tools are using deep learning algorithms to identify patterns in huge amounts of source material and then utilising the insights generated to compose basic tunes and melodies.
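
The production tools in question rely on deep learning, but the underlying idea of learning note-to-note patterns from source material and reusing them to generate new phrases can be shown with a much simpler Markov-chain stand-in; the note names and phrases below are made up.

```python
import random
from collections import defaultdict

def learn_transitions(melodies):
    """Record which note tends to follow which across the source melodies."""
    table = defaultdict(list)
    for melody in melodies:
        for current, nxt in zip(melody, melody[1:]):
            table[current].append(nxt)
    return table

def compose(table, start, length=8, seed=1):
    """Generate a new phrase by repeatedly sampling a likely next note."""
    rng = random.Random(seed)
    note, phrase = start, [start]
    for _ in range(length - 1):
        choices = table.get(note)
        if not choices:
            break
        note = rng.choice(choices)
        phrase.append(note)
    return phrase

# Example: learn from two tiny phrases, then generate a new one.
source = [["C4", "E4", "G4", "E4", "C4"], ["E4", "G4", "A4", "G4", "E4"]]
print(compose(learn_transitions(source), start="C4"))
```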

Such tools are by no means perfect. But intuitive, user-friendly AI systems are having a transformative effect on audio workflows.

Preparing for an AI future

The prevalence of AI in audio workflows is only going to gather momentum in the months and years to come. AI could be well suited to up-and-coming artists who don't rely on music as their primary income and have limited time and resources to dedicate to songwriting.

But the real opportunity is in post-production, due to the time-to-market pressures involved. Sound engineers can use AI to speed up and simplify baseline tasks, enabling them to focus on the high-value aspects that require more creativity.

In the long term, AI could also help manage complex installations and systems. With audio-over-IP, teams control routing from central software so they can pool resources to support projects; AI could take on the management of these complex networks of computers and software.

Ultimately, we're at the tip of the iceberg. For beginner and intermediate-level creative professionals, AI tools can act as an assistant that can learn their mixing habits over time and help audio sound the best it possibly can. For more experienced professionals, it can help increase efficiency by removing many of the tedious, time-consuming tasks.


AI will never replace humans entirely, but it's clear that the technology is set to play a key role in the years to come as it continues to get more advanced. Audio professionals have to be prepared to embrace the AI revolution.

See the rest here:

'Embrace the AI revolution': The growing role of AI in audio workflows - PSNEurope


The Latest and Greatest AI-Enabled Deepfake Takes us ‘Back to the Future’ – Animation World Network

Posted: at 5:34 am

With well over 6 million views since its mid-February release, YouTuber EZRyderX47's Back to the Future deepfake video, with Robert Downey Jr. and Tom Holland seamlessly replacing Christopher Lloyd and Michael J. Fox, has become quite the viral sensation. The video is brilliantly done, from the lip-sync to the anything-but-uncanny eyes; the choice of film, and clip, was inspired as well, a welcome window into a new riff on a Hollywood classic. Produced using two readily available pieces of free software, HitFilm Express from FXhome and Deepfacelab, the startlingly believable piece instantly conjures up all sorts of notions, both wonderful and sinister, regarding the seemingly unlimited horizons of AI-enhanced digital technology. If today's visual magicians can create any image with stunning photoreal clarity, what, dare we ask, can propagandists, criminals and other bad actors do with the same digital tools? Ah, so nice to find a new target, if only for a few minutes, for our coronavirus-stoked paranoia.

If you watch AWN's exclusive interview with AI expert Andrew Glassner at FMX 2019, not only will you get a great overview of AI, neural networks, and machine learning fundamentals, but... you'll come away afraid... very afraid.

For the film's director, François Brousseau (aka EZRyderX47), the underlying technology points to a limitless creative future. "With these tools, I can create an almost infinite number of parallel universes," he gushes. "I can revive great actors from the past. I can put actors-of-now into movies of the past. It is almost a limitless magical universe." For Josh Davies, CEO of HitFilm Express creator FXhome, the technology helps level the creative playing field, enhancing competition by enabling smaller studios to produce more impactful work. "It will enable more of the things that take time and effort, so that smaller teams can achieve the level of quality that larger teams have," he notes. "Larger teams will then also be able to use these tools to produce even more amazing imagery and benefit from a better workflow. In short, what's good will be made better."

So, what's a deepfake video? How is one made? How can it be detected?

The use of digital technology to replace someone in an image or video has been around for some time, from simple Photoshop morphs to elaborately crafted films like Forrest Gump. More recently, we've seen a slew of digital characters, both replaced and de-aged, from Carrie Fisher as Princess Leia to Samuel L. Jackson as Nick Fury. But with the rapidly expanding integration of AI in VFX methodology and production, coupled with fast, AI-enabled GPUs, today's replacement technology has taken a significant leap in sophistication. Case in point: Martin Scorsese's recent gem for Netflix, The Irishman, made use of cutting-edge AI-backed digital tools developed by ILM. Their software, ILM Facefinder, used AI to sift through thousands of images from years' worth of performances by actors Robert De Niro, Al Pacino, and Joe Pesci, matching the camera angles, framing, lighting and expressions of the scene being rendered. This gave ILM artists a relevant reference to compare against every frame in the live-action shot. These visual references were used to refine digital doubles created for each actor, so they could be transformed into the target age for each specific scene in the film. The results were dramatic, allowing the actors, all in their 70s, to be transformed back into their 20s, something not possible using even the best makeup techniques.

The term deepfake derives from the notion of deep learning, a branch of machine learning that is itself part of the AI world, and the notion of a fake, which is to say a counterfeit or forged version of something. With new methods of channeling enormous computer processing power to analyze massive amounts of data, AI is being harnessed more and more to visualize that data; by analyzing lots of images of a person's face, for example, software can use AI and machine learning to get really good at understanding what that face really looks like, down to the pixel level, and how it can then be manipulated and recreated in new ways with a high degree of accuracy. With a deepfake video, the software gets good not only at analyzing and learning the face you want to recreate, but also at understanding the image or video you want to transpose that face onto. Given the time to properly learn both faces, AI-enabled software can digitally create a face that overlays the two sets of learned data, placing the new face onto the old. Tom Holland becomes Michael J. Fox!
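
That "learn both faces, then swap at decode time" idea maps onto the autoencoder-style approach used by tools in this family: a shared encoder learns a general representation of faces, while a separate decoder learns to redraw each specific person. The toy PyTorch sketch below is not Deepfacelab's actual architecture; it uses fully connected layers, invented sizes, and random tensors standing in for aligned face crops, purely to make the mechanics concrete.

```python
import torch
from torch import nn

LATENT = 128
IMG = 64 * 64 * 3   # flattened 64x64 RGB face crops (placeholder size)

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU(), nn.Linear(512, out_dim))

encoder = mlp(IMG, LATENT)      # shared: learns what faces look like in general
decoder_a = mlp(LATENT, IMG)    # learns to redraw person A
decoder_b = mlp(LATENT, IMG)    # learns to redraw person B

opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters()),
    lr=1e-4,
)
loss_fn = nn.MSELoss()

# Stand-in data: random tensors in place of curated, aligned face sets.
faces_a = torch.rand(32, IMG)
faces_b = torch.rand(32, IMG)

for step in range(10):                      # a real run trains for many thousands of steps
    recon_a = decoder_a(encoder(faces_a))   # each decoder only reconstructs its own person
    recon_b = decoder_b(encoder(faces_b))
    loss = loss_fn(recon_a, faces_a) + loss_fn(recon_b, faces_b)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The swap: encode a frame of person B, but decode it as person A.
with torch.no_grad():
    swapped = decoder_a(encoder(faces_b[:1]))
print(swapped.shape)  # torch.Size([1, 12288])
```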

Brousseau has been releasing deepfake videos for some time; earlier efforts included replacing Nicolas Cage with Keanu Reeves in Ghost Rider, and Jim Parsons' Sheldon Cooper suddenly sporting Jack Nicholson's smiling Joker face in an episode of The Big Bang Theory. The Back to the Future video is his best yet.

For the director, the process begins with the faces. "To start, I had to build the 2 facesets," he explains. "I had to find all the angles of the actors' faces. I found images from interviews and films. I used HitFilm to cut the scenes where the faces were at their best. I then extracted and cleaned the faces using Deepfacelab tools. I deleted the problematic images -- blurred images, obstructed faces, bad angles, etc. This part took me around two or three days. This step is not very difficult, but it takes a long time."

"You need to collate a high-quality image database, using images of the person that you're trying to deepfake," Davies adds. "They will also need to have some knowledge - you can't simply rip every single photo from all kinds of footage. There are a number of things that will compromise a deepfake - for example, if an actor has a beard in one set of images and not in the other, if some images are lower resolution, are blurrier, etc. Currently, there is still a degree of manual process needed to find the best images. Of course, in the future we hope this will be AI enhanced, and they will be able to automatically identify images of the same people, and also the best kinds of images to use for deepfakes."
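
As a small, hypothetical example of the kind of automated curation Davies anticipates, a script can already reject frames that are too small or too blurry before they reach the face set. The sharpness check below uses the variance of the Laplacian via OpenCV (assumed installed as opencv-python); the thresholds are arbitrary starting points, not values from the article.

```python
import cv2  # pip install opencv-python

def usable(path, min_side=256, blur_threshold=100.0):
    """Reject images that are too small or too blurry for a face set.

    The variance of the Laplacian is a common sharpness proxy: blurry images
    contain few edges, so the variance is low. Thresholds here are arbitrary.
    """
    img = cv2.imread(path)
    if img is None:
        return False
    h, w = img.shape[:2]
    if min(h, w) < min_side:
        return False
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var() >= blur_threshold

# Example: keep only the frames sharp and large enough to train on.
frames = ["frame_001.png", "frame_002.png"]   # hypothetical extracted frames
print([f for f in frames if usable(f)])
```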

Finding a deepfakable scene with both actors side by side is a critical, and difficult, step. "I tested several scenes before finding the right one, and it took me about a week of trial and error to get it right," Brousseau reveals.

The face detection phase of the scene was also challenging. "Over several frames, Deepfacelab had difficulty detecting the correct angles of faces and obstructed faces," he goes on to describe. "I had to do the work manually. I also had to add a mask on some frames where the faces were obstructed. This part can be tricky, and it took me one or two days of trial and error."

With the scene and face data in hand, Brousseau brought in the AI tool. "At that point, I started training the artificial intelligence," he states. "I tried two architecture models: the DF and the LIAE. The DF was problematic, but the LIAE was doing a pretty good job. This part took me around four or five days per face; it was time-consuming but pretty easy."

Once the AI learning was over, he used Deepfacelab to convert the images to an MP4 video, performing several tests with different parameters. It took him one day to process both faces.

Then he edited the video using HitFilm Express. "I used the video transition effect Fade to color, as well as the audio transition Fade," he shares. "At 0:28 of the video, there is a guy who passes in front of Doc for 2 frames, and Deepfacelab wasn't able to correctly render it. I had to use a mask of RDJ's face provided by the Deepfacelab software. I took the mask from a frame before the guy passes and put it on the 2 problematic frames. Then I used the Blend Darken effect on the mask so that the guy's hair would be visible. It took me about a day, and the masking part was pretty tricky."

After watching the Back to the Future deepfake a couple of times and marveling at its sophisticated visual trickery, you may say to yourself, "Of course it's fake. I've seen the original. But what if I hadn't? How would I ever know which is which?" According to Davies, there are ways to spot a deepfake. "At the moment, the main places you can see telltale signs of manipulation are on the edges of what it's replacing," he says. "Generally, you can see this in the central two-thirds of the face, including shadowing around the chin area and where the forehead meets the hairline. You can also find issues caused by a limited set of facial perspectives in the sampled facial datasets." "Deepfake generally works better on front angles of the face," Davies continues. "A way around this, of course, is to ensure that your actor doesn't move much, or turn his face too far from the camera. But again, AI technology will do a far better job of looking at this - it is likely it will be able to see the discrepancy in a single pixel, which far surpasses what the human eye can detect."

When asked about the arms race already under way between those creating, and those trying to uncover, deepfake videos, Davies is optimistic that good will triumph over evil. "It has often been assumed in the past that advancing technology will spell the end of humanity," he muses. "This has never really been evidenced, but it continues to be at the forefront of many people's minds when presented with something new. Quite simply put, more money and resources will be put into working out what has been created by AI than the creators put into making it in the first place. This is because those wanting to distinguish between real and fake will be supported and backed by governments, by insurance companies and industry, who want to identify anyone using this for nefarious reasons. Even now, we can see that deepfakes are being uncovered, and those fighting the manipulation of imagery will always be a step ahead of the latest deepfake tech."

Dan Sarto is Publisher and Editor-in-Chief of Animation World Network.

See original here:

The Latest and Greatest AI-Enabled Deepfake Takes us 'Back to the Future' - Animation World Network

