Scientists Have Discovered a Brand New Electronic State of Matter – ScienceAlert

Scientists have observed a new state of electronic matter on the quantum scale, one that forms when electrons clump together in transit, and it could advance our understanding and application of quantum physics.

Movement is key to this new quantum state. When electric current is applied to semiconductors or metals, the electrons inside usually travel slowly and somewhat haphazardly in one direction.

Not so in a special type of medium known as a ballistic conductor, where the movement is faster and more uniform.

The new study shows how, in very thin ballistic conducting wires, electrons can gang up, creating a whole new quantum state of matter made solely of speeding electrons.

"Normally, electrons in semiconductors or metals move and scatter, and eventually drift in one direction if you apply a voltage," says physicist Jeremy Levy, from the University of Pittsburgh. "But in ballistic conductors the electrons move more like cars on a highway."

"The discovery we made shows that when electrons can be made to attract one another, they can form bunches of two, three, four and five electrons that literally behave like new types of particles, new forms of electronic matter."

Ballistic conductors can be used for stretching the boundaries of what's possible in electronics and classical physics, and the one used in this particular experiment was made from lanthanum aluminate and strontium titanate.

Interestingly, when the researchers measured the levels of conductance, they found they followed one of the most well-known patterns in mathematics: Pascal's triangle. As conductance increased, it stepped up in a pattern that matches one of the diagonals of Pascal's triangle, following the order 1, 3, 6, 10 and so on.
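The sequence 1, 3, 6, 10, ... is the third diagonal of Pascal's triangle, also known as the triangular numbers. A quick sketch of the pattern the conductance steps reportedly follow:

```python
from math import comb

# The n-th entry of Pascal's triangle's third diagonal is C(n + 2, 2):
# 1, 3, 6, 10, 15, ... -- the pattern the conductance steps follow.
steps = [comb(n + 2, 2) for n in range(5)]
print(steps)  # [1, 3, 6, 10, 15]
```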

"The discovery took us some time to understand but it was because we initially did not realise we were looking at particles made up of one electron, two electrons, three electrons and so forth," says Levy.

This clumping of electrons is similar to the way that quarks bind together to form neutrons and protons, according to the researchers. Electrons in superconductors can team up like this too, joining together in pairs to coordinate movement.

The findings may have something to teach us about quantum entanglement, which in turn is key to making progress with quantum computing and a super-secure, super-fast quantum internet.

According to Levy, it's another example of how we're re-engineering the world based on the fundamental discoveries of quantum physics, building on important work done over the last few decades.

"Now in the 21st century, we're looking at all the strange predictions of quantum physics and turning them around and using them," says Levy.

"When you talk about applications, we're thinking about quantum computing, quantum teleportation, quantum communications, quantum sensing: ideas that use the properties of the quantum nature of matter that were ignored before."

The research has been published in Science.


Machine Learning on AWS

Amazon SageMaker enables developers and data scientists to quickly and easily build, train, and deploy machine learning models at any scale. It removes the complexity that gets in the way of successfully implementing machine learning across use cases and industries: from running models for real-time fraud detection, to virtually analyzing the biological impacts of potential drugs, to predicting stolen-base success in baseball.

Amazon SageMaker Studio: Experience the first fully integrated development environment (IDE) for machine learning with Amazon SageMaker Studio, where you can perform all ML development steps. You can quickly upload data, create and share new notebooks, train and tune ML models, move back and forth between steps to adjust experiments, debug and compare results, and deploy and monitor ML models, all in a single visual interface, making you much more productive.

Amazon SageMaker Autopilot: Automatically build, train, and tune models with full visibility and control, using Amazon SageMaker Autopilot. It is the industry's first automated machine learning capability that gives you complete control and visibility into how your models were created and what logic was used in creating them.


What is machine learning? – Brookings

In the summer of 1955, while planning a now famous workshop at Dartmouth College, John McCarthy coined the term "artificial intelligence" to describe a new field of computer science. Rather than writing programs that tell a computer how to carry out a specific task, McCarthy pledged that he and his colleagues would instead pursue algorithms that could teach themselves how to do so. The goal was to create computers that could observe the world and then make decisions based on those observations: to demonstrate, that is, an innate intelligence.

The question was how to achieve that goal. Early efforts focused primarily on what's known as symbolic AI, which tried to teach computers how to reason abstractly. But today the dominant approach by far is machine learning, which relies on statistics instead. Although the approach dates back to the 1950s (one of the attendees at Dartmouth, Arthur Samuel, was the first to describe his work as "machine learning"), it wasn't until the past few decades that computers had enough storage and processing power for the approach to work well. The rise of cloud computing and customized chips has powered breakthrough after breakthrough, with research centers like OpenAI and DeepMind announcing stunning new advances seemingly every week.


The extraordinary success of machine learning has made it the default method of choice for AI researchers and experts. Indeed, machine learning is now so popular that it has effectively become synonymous with artificial intelligence itself. As a result, it's not possible to tease out the implications of AI without understanding how machine learning works, as well as how it doesn't.

The core insight of machine learning is that much of what we recognize as intelligence hinges on probability rather than reason or logic. If you think about it long enough, this makes sense. When we look at a picture of someone, our brains unconsciously estimate how likely it is that we have seen their face before. When we drive to the store, we estimate which route is most likely to get us there the fastest. When we play a board game, we estimate which move is most likely to lead to victory. Recognizing someone, planning a trip, plotting a strategy: each of these tasks demonstrates intelligence. But rather than hinging primarily on our ability to reason abstractly or think grand thoughts, they depend first and foremost on our ability to accurately assess how likely something is. We just don't always realize that that's what we're doing.

Back in the 1950s, though, McCarthy and his colleagues did realize it. And they understood something else too: computers should be very good at computing probabilities. Transistors had only just been invented, and had yet to fully supplant vacuum tube technology. But it was clear even then that, with enough data, digital computers would be ideal for estimating a given probability. Unfortunately for the first AI researchers, their timing was a bit off. But their intuition was spot on, and much of what we now know as AI is owed to it. When Facebook recognizes your face in a photo, or Amazon Echo understands your question, they're relying on an insight that is over sixty years old.


The machine learning algorithm that Facebook, Google, and others all use is something called a deep neural network. Building on the prior work of Warren McCulloch and Walter Pitts, Frank Rosenblatt coded one of the first working neural networks in the late 1950s. Although today's neural networks are a bit more complex, the main idea is still the same: the best way to estimate a given probability is to break the problem down into discrete, bite-sized chunks of information, or what McCulloch and Pitts termed a "neuron." Their hunch was that if you linked a bunch of neurons together in the right way, loosely akin to how neurons are linked in the brain, then you should be able to build models that can learn a variety of tasks.
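A single such unit is just a weighted sum passed through a squashing function. A minimal sketch, with made-up illustrative weights rather than anything learned:

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: a weighted sum of the inputs passed
    through a sigmoid activation, giving an output in (0, 1) that can
    be read as a probability."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical weights chosen so the unit fires strongly only when
# both inputs are high (a crude AND-like detector).
print(neuron([1.0, 1.0], [4.0, 4.0], -6.0))  # ~0.88
print(neuron([0.0, 0.0], [4.0, 4.0], -6.0))  # ~0.002
```

Linking many such units into layers, so that each layer feeds the next, is what turns this simple building block into a network.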

To get a feel for how neural networks work, imagine you wanted to build an algorithm to detect whether an image contained a human face. A basic deep neural network would have several layers of thousands of neurons each. In the first layer, each neuron might learn to look for one basic shape, like a curve or a line. In the second layer, each neuron would look at the first layer, and learn to see whether the lines and curves it detects ever make up more advanced shapes, like a corner or a circle. In the third layer, neurons would look for even more advanced patterns, like a dark circle inside a white circle, as happens in the human eye. In the final layer, each neuron would learn to look for still more advanced shapes, such as two eyes and a nose. Based on what the neurons in the final layer say, the algorithm will then estimate how likely it is that an image contains a face. (For an illustration of how deep neural networks learn hierarchical feature representations, see here.)

The magic of deep learning is that the algorithm learns to do all this on its own. The only thing a researcher does is feed the algorithm a bunch of images and specify a few key parameters, like how many layers to use and how many neurons should be in each layer, and the algorithm does the rest. At each pass through the data, the algorithm makes an educated guess about what type of information each neuron should look for, and then updates each guess based on how well it works. As the algorithm does this over and over, eventually it learns what information to look for, and in what order, to best estimate, say, how likely an image is to contain a face.
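That guess-and-update loop can be sketched with a single-layer model (logistic regression) rather than a deep network, on made-up toy data; the mechanics of guessing, measuring the error, and nudging the weights are the same:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: points above the line x1 + x2 = 1 get label 1 (a stand-in
# for "contains a face"); everything below gets label 0.
X = rng.uniform(0, 1, size=(200, 2))
y = (X.sum(axis=1) > 1.0).astype(float)

w, b, lr = np.zeros(2), 0.0, 0.5

for _ in range(2000):                       # repeated passes over the data
    p = 1 / (1 + np.exp(-(X @ w + b)))      # current guess for every example
    w -= lr * (X.T @ (p - y)) / len(y)      # nudge each weight by how wrong it was
    b -= lr * (p - y).mean()

accuracy = ((1 / (1 + np.exp(-(X @ w + b))) > 0.5) == y).mean()
print(accuracy)
```

A deep network repeats the same idea across many layers, with the error signal propagated backward through each one.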

What's remarkable about deep learning is just how flexible it is. Although there are other prominent machine learning algorithms too (albeit with clunkier names, like "gradient boosting machines"), none are nearly so effective across nearly so many domains. With enough data, deep neural networks will almost always do the best job at estimating how likely something is. As a result, they're often also the best at mimicking intelligence too.

Yet as with machine learning more generally, deep neural networks are not without limitations. To build their models, machine learning algorithms rely entirely on training data, which means both that they will reproduce the biases in that data, and that they will struggle with cases that are not found in that data. Further, machine learning algorithms can also be gamed. If an algorithm is reverse engineered, it can be deliberately tricked into thinking that, say, a stop sign is actually a person. Some of these limitations may be resolved with better data and algorithms, but others may be endemic to statistical modeling.

To glimpse how the strengths and weaknesses of AI will play out in the real world, it is necessary to describe the current state of the art across a variety of intelligent tasks. Below, I look at the situation in regard to speech recognition, image recognition, robotics, and reasoning in general.

Ever since digital computers were invented, linguists and computer scientists have sought to use them to recognize speech and text. Known as natural language processing, or NLP, the field once focused on hardwiring syntax and grammar into code. However, over the past several decades, machine learning has largely surpassed rule-based systems, thanks to everything from support vector machines to hidden Markov models to, most recently, deep learning. Apple's Siri, Amazon's Alexa, and Google's Duplex all rely heavily on deep learning to recognize speech or text, and represent the cutting edge of the field.


The specific deep learning algorithms at play have varied somewhat. Recurrent neural networks powered many of the initial deep learning breakthroughs, while hierarchical attention networks are responsible for more recent ones. What they all share in common, though, is that the higher levels of a deep learning network effectively learn grammar and syntax on their own. In fact, when several leading researchers recently set a deep learning algorithm loose on Amazon reviews, they were surprised to learn that the algorithm had not only taught itself grammar and syntax, but a sentiment classifier too.

Yet for all the success of deep learning at speech recognition, key limitations remain. The most important is that because deep neural networks only ever build probabilistic models, they don't understand language the way humans do; they can recognize that the letter sequences k-i-n-g and q-u-e-e-n are statistically related, but they have no innate understanding of what either word means, much less the broader concepts of royalty and gender. As a result, there is likely to be a ceiling to how intelligent speech recognition systems based on deep learning and other probabilistic models can ever be. If we ever build an AI like the one in the movie Her, capable of genuine human relationships, it will almost certainly take a breakthrough well beyond what a deep neural network can deliver.
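What "statistically related" means can be seen in a toy sketch: represent each word by how often it appears near certain context words, then compare the vectors. The counts below are invented for illustration, not drawn from any real corpus:

```python
import math

# Hypothetical co-occurrence counts: how often each word appears near the
# context words "crown", "throne", "car", "engine" in some imaginary corpus.
vectors = {
    "king":  [8, 9, 0, 1],
    "queen": [7, 8, 1, 0],
    "truck": [0, 1, 9, 8],
}

def cosine(u, v):
    """Cosine similarity: 1.0 for identical directions, ~0 for unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# "king" and "queen" come out similar purely through usage statistics...
print(cosine(vectors["king"], vectors["queen"]))  # ~0.99
# ...while neither resembles "truck".
print(cosine(vectors["king"], vectors["truck"]))  # ~0.12
```

The model captures the pattern of usage, but nothing in it encodes what royalty actually is, which is precisely the limitation described above.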

When Rosenblatt first implemented his neural network in 1958, he initially set it loose on images of dogs and cats. AI researchers have been focused on tackling image recognition ever since. By necessity, much of that time was spent devising algorithms that could detect pre-specified shapes in an image, like edges and polyhedrons, using the limited processing power of early computers. Thanks to modern hardware, however, the field of computer vision is now dominated by deep learning instead. When a Tesla drives safely in autopilot mode, or when Google's new augmented-reality microscope detects cancer in real time, it's because of a deep learning algorithm.


Convolutional neural networks, or CNNs, are the variant of deep learning most responsible for recent advances in computer vision. Developed by Yann LeCun and others, CNNs don't try to understand an entire image all at once, but instead scan it in localized regions, much the way a visual cortex does. LeCun's early CNNs were used to recognize handwritten numbers, but today the most advanced CNNs, such as capsule networks, can recognize complex three-dimensional objects from multiple angles, even those not represented in training data. Meanwhile, generative adversarial networks, the algorithm behind "deep fake" videos, typically use CNNs not to recognize specific objects in an image, but instead to generate them.
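The localized scan a CNN performs is, at bottom, a 2D convolution: a small kernel slides across the image, computing a weighted sum at each position. A minimal sketch with a hand-coded vertical-edge detector (a real CNN learns its kernels rather than having them written by hand):

```python
def convolve2d(image, kernel):
    """Slide the kernel over the image, computing a weighted sum at each
    position -- the localized scan a convolutional layer performs."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [
        [
            sum(image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw))
            for j in range(out_w)
        ]
        for i in range(out_h)
    ]

# A tiny image with a dark-to-bright vertical edge down the middle.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
# The kernel responds only where pixel values change from left to right.
kernel = [[-1, 1],
          [-1, 1]]
print(convolve2d(image, kernel))  # [[0, 2, 0], [0, 2, 0]] -- edge found
```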

As with speech recognition, cutting-edge image recognition algorithms are not without drawbacks. Most importantly, just as all that NLP algorithms learn are statistical relationships between words, all that computer vision algorithms learn are statistical relationships between pixels. As a result, they can be relatively brittle. A few stickers on a stop sign can be enough to prevent a deep learning model from recognizing it as such. For image recognition algorithms to reach their full potential, they'll need to become much more robust.

What makes our intelligence so powerful is not just that we can understand the world, but that we can interact with it. The same will be true for machines. Computers that can learn to recognize sights and sounds are one thing; those that can learn to identify an object as well as how to manipulate it are another altogether. Yet if image and speech recognition are difficult challenges, touch and motor control are far more so. For all their processing power, computers are still remarkably poor at something as simple as picking up a shirt.

The reason: picking up an object like a shirt isn't just one task, but several. First you need to recognize a shirt as a shirt. Then you need to estimate how heavy it is, how its mass is distributed, and how much friction its surface has. Based on those guesses, you then need to estimate where to grasp the shirt and how much force to apply at each point of your grip, a task made all the more challenging because the shirt's shape and distribution of mass will change as you lift it up. A human does all this trivially. But for a computer, the uncertainty in any of those calculations compounds across all of them, making it an exceedingly difficult task.
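A rough sketch of how that uncertainty compounds, assuming hypothetical and independent per-step success rates (the numbers are invented for illustration):

```python
# Made-up success probabilities for the sub-tasks involved in picking
# up a shirt; treating them as independent for simplicity.
steps = {
    "recognize the shirt":   0.95,
    "estimate its mass":     0.90,
    "estimate friction":     0.90,
    "choose grasp points":   0.85,
    "apply the right force": 0.85,
}

overall = 1.0
for task, p in steps.items():
    overall *= p  # each step multiplies in its own chance of failure

# Even with every step near 90%, the whole sequence succeeds far less often.
print(round(overall, 3))  # 0.556
```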

Initially, programmers tried to solve the problem by writing programs that instructed robotic arms how to carry out each task step by step. However, just as rule-based NLP cant account for all possible permutations of language, there also is no way for rule-based robotics to run through all the possible permutations of how an object might be grasped. By the 1980s, it became increasingly clear that robots would need to learn about the world on their own and develop their own intuitions about how to interact with it. Otherwise, there was no way they would be able to reliably complete basic maneuvers like identifying an object, moving toward it, and picking it up.

The current state of the art is something called deep reinforcement learning. As a crude shorthand, you can think of reinforcement learning as trial and error. If a robotic arm tries a new way of picking up an object and succeeds, it rewards itself; if it drops the object, it punishes itself. The more the arm attempts its task, the better it gets at learning good rules of thumb for how to complete it. Coupled with modern computing, deep reinforcement learning has shown enormous promise. For instance, by simulating a variety of robotic hands across thousands of servers, OpenAI recently taught a real robotic hand how to manipulate a cube marked with letters.
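The reward-and-punish loop can be sketched with an epsilon-greedy bandit: the agent repeatedly tries hypothetical grasp strategies, updates a running estimate of each one's value, and gradually favors whichever works best. The strategy names and success rates below are invented; real deep reinforcement learning replaces the lookup table with a neural network:

```python
import random

random.seed(0)

# Hypothetical grasp strategies with success rates unknown to the learner.
true_success = {"pinch": 0.3, "wrap": 0.8, "scoop": 0.5}
estimates = {a: 0.0 for a in true_success}   # running value estimates
counts = {a: 0 for a in true_success}

for _ in range(5000):
    # Epsilon-greedy: mostly exploit the best-looking strategy, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(list(true_success))
    else:
        action = max(estimates, key=estimates.get)
    # Try the strategy: reward on success, nothing on failure.
    reward = 1.0 if random.random() < true_success[action] else 0.0
    counts[action] += 1
    # Move the estimate toward the observed reward (incremental average).
    estimates[action] += (reward - estimates[action]) / counts[action]

print(max(estimates, key=estimates.get))  # trial and error settles on "wrap"
```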


Compared with prior research, OpenAI's breakthrough is tremendously impressive. Yet it also shows the limitations of the field. The hand OpenAI built didn't actually feel the cube at all, but instead relied on a camera. For an object like a cube, which doesn't change shape and can be easily simulated in virtual environments, such an approach can work well. But ultimately, robots will need to rely on more than just eyes. Machines with the dexterity and fine motor skills of a human are still a ways away.

When Arthur Samuel coined the term "machine learning," he wasn't researching image or speech recognition, nor was he working on robots. Instead, Samuel was tackling one of his favorite pastimes: checkers. Since the game had far too many potential board moves for a rule-based algorithm to encode them all, Samuel devised an algorithm that could teach itself to efficiently look several moves ahead. The algorithm was noteworthy for working at all, much less for being competitive with human players. But it also anticipated the astonishing breakthroughs of more recent algorithms like AlphaGo and AlphaGo Zero, which have surpassed all human players at Go, widely regarded as the most intellectually demanding board game in the world.

As with robotics, the best strategic AI relies on deep reinforcement learning. In fact, the algorithm that OpenAI used to power its robotic hand also formed the core of its algorithm for playing Dota 2, a multi-player video game. Although motor control and gameplay may seem very different, both involve the same process: making a sequence of moves over time, and then evaluating whether they led to success or failure. Trial and error, it turns out, is as useful for learning to reason about a game as it is for manipulating a cube.


From Samuel on, the success of computers at board games has posed a puzzle to AI optimists and pessimists alike. If a computer can beat a human at a strategic game like chess, how much can we infer about its ability to reason strategically in other environments? For a long time, the answer was "very little." After all, most board games involve a single player on each side, each with full information about the game, and a clearly preferred outcome. Yet most strategic thinking involves cases where there are multiple players on each side, most or all players have only limited information about what is happening, and the preferred outcome is not clear. For all of AlphaGo's brilliance, you'll note that Google didn't then promote it to CEO, a role that is inherently collaborative and requires a knack for making decisions with incomplete information.

Fortunately, reinforcement learning researchers have recently made progress on both of those fronts. One team outperformed human players at Texas Hold 'Em, a poker game where making the most of limited information is key. Meanwhile, OpenAI's Dota 2 player, which coupled reinforcement learning with what's called a Long Short-Term Memory (LSTM) algorithm, made headlines for learning how to coordinate the behavior of five separate bots so well that they were able to beat a team of professional Dota 2 players. As the algorithms improve, humans will likely have a lot to learn about optimal strategies for cooperation, especially in information-poor environments. This kind of information would be especially valuable for commanders in military settings, who sometimes have to make decisions without comprehensive information.

Yet there's still one challenge no reinforcement learning algorithm can ever solve. Since the algorithm works only by learning from outcome data, it needs a human to define what the outcome should be. As a result, reinforcement learning is of little use in the many strategic contexts in which the outcome is not always clear. Should corporate strategy prioritize growth or sustainability? Should U.S. foreign policy prioritize security or economic development? No AI will ever be able to resolve such higher-order strategic questions, because, ultimately, those are moral or political questions rather than empirical ones. The Pentagon may lean more heavily on AI in the years to come, but it won't be taking over the situation room and automating complex tradeoffs any time soon.

From autonomous cars to multiplayer games, machine learning algorithms can now approach or exceed human intelligence across a remarkable number of tasks. The breakout success of deep learning in particular has led to breathless speculation about both the imminent doom of humanity and its impending techno-liberation. Not surprisingly, all the hype has led several luminaries in the field, such as Gary Marcus or Judea Pearl, to caution that machine learning is nowhere near as intelligent as it is being presented, or that perhaps we should defer our deepest hopes and fears about AI until it is based on more than mere statistical correlations. Even Geoffrey Hinton, a researcher at Google and one of the godfathers of modern neural networks, has suggested that deep learning alone is unlikely to deliver the level of competence many AI evangelists envision.

Where the long-term implications of AI are concerned, the key question about machine learning is this: How much of human intelligence can be approximated with statistics? If all of it can be, then machine learning may well be all we need to get to a true artificial general intelligence. But it's very unclear whether that's the case. As far back as 1969, when Marvin Minsky and Seymour Papert famously argued that neural networks had fundamental limitations, even leading experts in AI have expressed skepticism that machine learning would be enough. Modern skeptics like Marcus and Pearl are only writing the latest chapter in a much older book. And it's hard not to find their doubts at least somewhat compelling. The path forward from the deep learning of today, which can mistake a rifle for a helicopter, is by no means obvious.


Yet the debate over machine learning's long-term ceiling is to some extent beside the point. Even if all research on machine learning were to cease, the state-of-the-art algorithms of today would still have an unprecedented impact. The advances that have already been made in computer vision, speech recognition, robotics, and reasoning will be enough to dramatically reshape our world. Just as happened in the so-called Cambrian explosion, when animals simultaneously evolved the ability to see, hear, and move, the coming decade will see an explosion in applications that combine the ability to recognize what is happening in the world with the ability to move and interact with it. Those applications will transform the global economy and politics in ways we can scarcely imagine today. Policymakers need not wring their hands just yet about how intelligent machine learning may one day become. They will have their hands full responding to how intelligent it already is.


Why 2020 will be the Year of Automated Machine Learning – Gigabit Magazine – Technology News, Magazine and Website

As the fuel that powers their ongoing digital transformation efforts, businesses everywhere are looking for ways to derive as much insight as possible from their data. The accompanying increased demand for advanced predictive and prescriptive analytics has, in turn, led to a call for more data scientists proficient with the latest artificial intelligence (AI) and machine learning (ML) tools.

But such highly skilled data scientists are expensive and in short supply. In fact, they're such a precious resource that the phenomenon of the "citizen data scientist" has recently arisen to help close the skills gap. A complementary role, rather than a direct replacement, citizen data scientists lack specific advanced data science expertise. However, they are capable of generating models using state-of-the-art diagnostic and predictive analytics. And this capability is partly due to the advent of accessible new technologies, such as automated machine learning (AutoML), that now automate many of the tasks once performed by data scientists.

Algorithms and automation

According to a recent Harvard Business Review article, "Organisations have shifted towards amplifying predictive power by coupling big data with complex automated machine learning." AutoML, which uses machine learning to generate better machine learning, is advertised as affording opportunities to "democratise" machine learning by allowing firms with limited data science expertise to develop analytical pipelines capable of solving sophisticated business problems.

Comprising a set of algorithms that automate the writing of other ML algorithms, AutoML automates the end-to-end process of applying ML to real-world problems. By way of illustration, a standard ML pipeline is made up of the following steps: data pre-processing, feature extraction, feature selection, feature engineering, algorithm selection, and hyper-parameter tuning. But the considerable expertise and time it takes to implement these steps means there's a high barrier to entry.
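A toy sketch of the automation idea: enumerate candidate configurations covering a pre-processing choice and a hyper-parameter, score each, and keep the best. The scoring function here is a made-up stand-in for actually training and validating a model, which is what a real AutoML system would do at that step:

```python
from itertools import product

# A made-up stand-in for "train a pipeline with this configuration and
# return its validation score" -- a real AutoML system would fit and
# evaluate an actual model here.
def evaluate(config):
    score = 0.9 - abs(config["lr"] - 0.1)   # pretend lr = 0.1 is optimal
    if config["preprocess"] == "scaler":    # pretend scaling always helps
        score += 0.05
    return score

search_space = {
    "preprocess": ["none", "scaler"],       # data pre-processing choice
    "lr": [0.001, 0.01, 0.1, 0.5],          # hyper-parameter to tune
}

best_config, best_score = None, float("-inf")
# The AutoML loop: generate candidate configurations, score each, keep the best.
for preprocess, lr in product(search_space["preprocess"], search_space["lr"]):
    config = {"preprocess": preprocess, "lr": lr}
    score = evaluate(config)
    if score > best_score:
        best_config, best_score = config, score

print(best_config, round(best_score, 2))
```

Real systems search far larger spaces with smarter strategies (Bayesian optimization, neural architecture search), but the automate-the-search principle is the same.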

AutoML removes some of these constraints. Not only does it significantly reduce the time it would typically take to implement an ML process under human supervision, it can also often improve the accuracy of the model in comparison to hand-crafted models, trained and deployed by humans. In doing so, it offers organisations a gateway into ML, as well as freeing up the time of ML engineers and data practitioners, allowing them to focus on higher-order challenges.


Overcoming scalability problems

The trend for combining ML with Big Data for advanced data analytics began back in 2012, when deep learning became the dominant approach to solving ML problems. This shift heralded a wealth of new software, tooling, and techniques that altered both the workload and the workflow associated with ML at large scale. Entirely new ML toolsets, such as TensorFlow and PyTorch, were created, and people increasingly began to use graphics processing units (GPUs) to accelerate their work.

Until this point, companies' efforts had been hindered by the scalability problems associated with running ML algorithms on huge datasets. Now, though, they were able to overcome these issues. By quickly developing sophisticated internal tooling capable of building world-class AI applications, the Big Tech powerhouses soon overtook their Fortune 500 peers when it came to realising the benefits of smarter data-driven decision-making and applications.

Insight, innovation and data-driven decisions

AutoML represents the next stage in ML's evolution, promising to help non-tech companies access the capabilities they need to quickly and cheaply build ML applications.

In 2018, for example, Google launched its Cloud AutoML. Based on Neural Architecture Search (NAS) and transfer learning, it was described by Google executives as having the potential to make AI experts even more productive, advance new fields in AI, and help less-skilled engineers build powerful AI systems they previously only dreamed of.

The one downside to Google's AutoML is that it's a proprietary algorithm. There are, however, a number of open-source alternatives, such as AutoKeras, an AutoML library developed by researchers at Texas A&M University and built around a NAS algorithm.

Technological breakthroughs such as these have given companies the capability to easily build production-ready models without the need for expensive human resources. By leveraging AI, ML, and deep learning capabilities, AutoML gives businesses across all industries the opportunity to benefit from data-driven applications powered by statistical models, even when advanced data science expertise is scarce.

With organisations increasingly reliant on citizen data scientists, 2020 is likely to be the year that enterprise adoption of AutoML starts to become mainstream. Its ease of access will compel business leaders to finally open the black box of ML, thereby elevating their knowledge of its processes and capabilities. AI and ML tools and practices will become ever more ingrained in businesses' everyday thinking and operations as they become more empowered to identify the projects whose insights will drive better decision-making and innovation.

By Senthil Ravindran, EVP and global head of cloud transformation and digital innovation, Virtusa


Inspur Re-Elected as Member of SPEC OSSC and Chair of SPEC Machine Learning – HPCwire

SAN JOSE, Calif., Feb. 21, 2020 -- Recently, the international evaluation agency Standard Performance Evaluation Corporation (SPEC) finalized the election of new Open System Steering Committee (OSSC) executive members, which include Inspur, Intel, AMD, IBM, Oracle and three other companies.

It is worth noting that Inspur, a re-elected OSSC member, was also re-elected as chair of the SPEC Machine Learning (SPEC ML) working group. The ML test benchmark development plan proposed by Inspur has been approved by members; it aims to provide users with a standard for evaluating machine learning computing performance.

SPEC is an authoritative global third-party application performance testing organization established in 1988. It establishes and maintains a series of performance, functionality, and energy-consumption benchmarks that serve as important reference standards for users evaluating the performance and energy efficiency of computing systems. The organization comprises 138 well-known technology companies, universities, and research institutions, including Intel, Oracle, NVIDIA, Apple, Microsoft, Inspur, Berkeley, and Lawrence Berkeley National Laboratory, and its test standards have become important indicators for many users assessing overall computing performance.

The OSSC executive committee is the permanent body of the SPEC OSG (Open System Group, the earliest and largest committee established by SPEC). It is responsible for supervising and reviewing the daily work of OSG's major technical groups, handling major issues, admitting and removing members, and setting the direction of research and decisions on testing standards. The committee also manages the development and maintenance of the SPEC CPU, SPEC Power, SPEC Java, SPEC Virt, and other benchmarks.

Machine learning is an important direction in AI development. Different computing accelerator technologies such as GPUs, FPGAs, and ASICs, and different AI frameworks such as TensorFlow and PyTorch, provide customers with a rich marketplace of options. The next important consideration for customers is how to evaluate the computing efficiency of these various AI platforms: both enterprises and research institutions need a set of benchmarks and methods to measure performance effectively and find the right solution for their needs.

In the past year, Inspur has done much to advance development of the SPEC ML standard, contributing test models, architectures, use cases, and methods, contributions that have been duly acknowledged by SPEC and its members.

Joe Qiao, General Manager of Inspur Solution and Evaluation Department, believes that SPEC ML can provide an objective comparison standard for AI/ML applications, which will help users choose a computing system that best meets their application needs. It also provides a unified measurement standard for manufacturers to improve their technologies and solutions, advancing the development of the AI industry.

About Inspur

Inspur is a leading provider of data center infrastructure, cloud computing, and AI solutions, ranking among the world's top three server manufacturers. Through engineering and innovation, Inspur delivers cutting-edge computing hardware design and extensive product offerings to address important technology arenas like open computing, cloud data center, AI and deep learning. Performance-optimized and purpose-built, our world-class solutions empower customers to tackle specific workloads and real-world challenges. To learn more, please go to www.inspursystems.com.

Source: Inspur

Original post:
Inspur Re-Elected as Member of SPEC OSSC and Chair of SPEC Machine Learning - HPCwire

Cisco Enhances IoT Platform with 5G Readiness and Machine Learning – The Fast Mode

Cisco on Friday announced advancements to its IoT portfolio that enable service provider partners to offer optimized management of cellular IoT environments and new 5G use-cases.

Cisco IoT Control Center (formerly Jasper Control Center) is introducing new innovations to improve management and reduce deployment complexity. These include:

Using Machine Learning (ML) to improve management: With visibility into 3 billion events every day - the industry's broadest - Cisco IoT Control Center uses machine learning models to quickly identify anomalies and address issues before they impact a customer. Service providers can also identify errant devices and alert customers, allowing for greater endpoint security and control.
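A minimal sketch of that kind of anomaly flagging, assuming a simple z-score rule over per-device event counts (the threshold and data are illustrative, not Cisco's actual model):

```python
import statistics

def flag_anomalies(counts, z_threshold=2.5):
    """Return indices of devices whose event count deviates from the mean
    by more than z_threshold population standard deviations."""
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:
        return []
    return [i for i, c in enumerate(counts)
            if abs(c - mean) / stdev > z_threshold]

events = [102, 98, 101, 99, 100, 103, 97, 500]  # device 7 looks errant
print(flag_anomalies(events))
```

Production systems would use per-device baselines, seasonality, and far more robust statistics, but the flag-before-impact pattern is the same.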

Smart billing to optimize rate plans: Service providers can improve customer satisfaction by enabling Smart billing to automatically optimize rate plans. Policies can also be created to proactively notify customers when usage changes or rate plans need to be updated, helping enterprises save money.

Support for global supply chains: SIM portability is an enterprise requirement to support complex supply chains spanning multiple service providers and geographies, but delivering it is time-consuming and requires integrations between many different service providers and vendors, driving up costs for both. Cisco IoT Control Center now provides eSIM as a service, enabling a true turnkey SIM portability solution to deliver fast, reliable, cost-effective SIM handoffs between service providers.

Cisco IoT Control Center has taken steps towards 5G readiness to incubate and promote high-value 5G business use cases that customers can easily adopt.

Vikas Butaney, VP of Product Management, IoT, Cisco: "Cellular IoT deployments are accelerating across the connected car, utilities and transportation industries, and with 5G and Wi-Fi 6 on the horizon, IoT adoption will grow even faster. Cisco is investing in connectivity management, IoT networking, IoT security, and edge computing to accelerate the adoption of IoT use-cases."

Go here to read the rest:
Cisco Enhances IoT Platform with 5G Readiness and Machine Learning - The Fast Mode

Machine learning could speed the arrival of ultra-fast-charging electric car – Chemie.de

Using machine learning, a Stanford-led research team has slashed battery testing times - a key barrier to longer-lasting, faster-charging batteries for electric vehicles.

Battery performance can make or break the electric vehicle experience, from driving range to charging time to the lifetime of the car. Now, artificial intelligence has made dreams like recharging an EV in the time it takes to stop at a gas station a more likely reality, and could help improve other aspects of battery technology.

For decades, advances in electric vehicle batteries have been limited by a major bottleneck: evaluation times. At every stage of the battery development process, new technologies must be tested for months or even years to determine how long they will last. But now, a team led by Stanford professors Stefano Ermon and William Chueh has developed a machine learning-based method that slashes these testing times by 98 percent. Although the group tested their method on battery charge speed, they said it can be applied to numerous other parts of the battery development pipeline and even to non-energy technologies.

"In battery testing, you have to try a massive number of things, because the performance you get will vary drastically," said Ermon, an assistant professor of computer science. "With AI, we're able to quickly identify the most promising approaches and cut out a lot of unnecessary experiments."

The study, published by Nature on Feb. 19, was part of a larger collaboration among scientists from Stanford, MIT and the Toyota Research Institute that bridges foundational academic research and real-world industry applications. The goal: finding the best method for charging an EV battery in 10 minutes that maximizes the battery's overall lifetime. The researchers wrote a program that, based on only a few charging cycles, predicted how batteries would respond to different charging approaches. The software also decided in real time what charging approaches to focus on or ignore. By reducing both the length and number of trials, the researchers cut the testing process from almost two years to 16 days.

"We figured out how to greatly accelerate the testing process for extreme fast charging," said Peter Attia, who co-led the study while he was a graduate student. "What's really exciting, though, is the method. We can apply this approach to many other problems that, right now, are holding back battery development for months or years."

Designing ultra-fast-charging batteries is a major challenge, mainly because it is difficult to make them last. The intensity of the faster charge puts greater strain on the battery, which often causes it to fail early. To prevent this damage to the battery pack, a component that accounts for a large chunk of an electric car's total cost, battery engineers must test an exhaustive series of charging methods to find the ones that work best.

The new research sought to optimize this process. At the outset, the team saw that fast-charging optimization amounted to many trial-and-error tests - something that is inefficient for humans, but the perfect problem for a machine.

"Machine learning is trial-and-error, but in a smarter way," said Aditya Grover, a graduate student in computer science who co-led the study. "Computers are far better than us at figuring out when to explore - try new and different approaches - and when to exploit, or zero in, on the most promising ones."

The team used this power to their advantage in two key ways. First, they used it to reduce the time per cycling experiment. In a previous study, the researchers found that instead of charging and recharging every battery until it failed - the usual way of testing a battery's lifetime - they could predict how long a battery would last after only its first 100 charging cycles. This is because the machine learning system, after being trained on a few batteries cycled to failure, could find patterns in the early data that presaged how long a battery would last.
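That early-prediction idea can be sketched with a toy linear model: one invented early-cycle feature (capacity fade over the first cycles) mapped to total cycle life. The real work used far richer waveform and degradation features; the data here are synthetic.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# (early capacity fade, observed cycle life) for cells cycled to failure
train = [(0.010, 1900), (0.020, 1400), (0.030, 900), (0.040, 400)]
slope, intercept = fit_line([f for f, _ in train], [l for _, l in train])

def predict_life(early_fade):
    """Predict total cycle life from a feature measured in early cycles."""
    return slope * early_fade + intercept

print(round(predict_life(0.025)))  # prints 1150
```

The payoff is the same as in the study: once the model is trained on a few cells run to failure, new cells need only ~100 cycles of data instead of thousands.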

Second, machine learning reduced the number of methods they had to test. Instead of testing every possible charging method equally, or relying on intuition, the computer learned from its experiences to quickly find the best protocols to test.
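The explore-versus-exploit loop Grover describes can be illustrated with an epsilon-greedy bandit over a handful of hypothetical charging protocols; the study itself used a more sophisticated Bayesian closed-loop optimizer, so this is only the shape of the idea.

```python
import random

def epsilon_greedy(true_means, rounds=2000, epsilon=0.1, seed=1):
    """Choose among protocols with unknown expected 'lifetime' reward:
    explore a random protocol with probability epsilon, otherwise exploit
    the current best estimate. Returns how often each protocol was tested."""
    rng = random.Random(seed)
    counts = [0] * len(true_means)
    estimates = [0.0] * len(true_means)
    for _ in range(rounds):
        if rng.random() < epsilon:
            arm = rng.randrange(len(true_means))   # explore
        else:
            arm = estimates.index(max(estimates))  # exploit
        reward = rng.gauss(true_means[arm], 50.0)  # one noisy experiment
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return counts

# Protocol 2 truly lasts longest; the bandit should test it most often.
counts = epsilon_greedy([800.0, 900.0, 1100.0, 950.0])
print(counts.index(max(counts)))  # prints 2
```

The effect mirrors the study: most of the experimental budget flows to promising protocols, and weak ones are abandoned after a few trials.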

By testing fewer methods for fewer cycles, the study's authors quickly found an optimal ultra-fast-charging protocol for their battery. In addition to dramatically speeding up the testing process, the computer's solution was also better - and much more unusual - than what a battery scientist would likely have devised, said Ermon.

"It gave us this surprisingly simple charging protocol - something we didn't expect," Ermon said. Instead of charging at the highest current at the beginning of the charge, the algorithm's solution uses the highest current in the middle of the charge. "That's the difference between a human and a machine: The machine is not biased by human intuition, which is powerful but sometimes misleading."

The researchers said their approach could accelerate nearly every piece of the battery development pipeline: from designing the chemistry of a battery to determining its size and shape, to finding better systems for manufacturing and storage. This would have broad implications not only for electric vehicles but for other types of energy storage, a key requirement for making the switch to wind and solar power on a global scale.

"This is a new way of doing battery development," said Patrick Herring, co-author of the study and a scientist at the Toyota Research Institute. "Having data that you can share among a large number of people in academia and industry, and that is automatically analyzed, enables much faster innovation."

The study's machine learning and data collection system will be made available for future battery scientists to freely use, Herring added. By using this system to optimize other parts of the process with machine learning, battery development - and the arrival of newer, better technologies - could accelerate by an order of magnitude or more, he said.

The potential of the study's method extends even beyond the world of batteries, Ermon said. Other big data testing problems, from drug development to optimizing the performance of X-rays and lasers, could also be revolutionized by the use of machine learning optimization. And ultimately, he said, it could even help to optimize one of the most fundamental processes of all.

"The bigger hope is to help the process of scientific discovery itself," Ermon said. "We're asking: Can we design these methods to come up with hypotheses automatically? Can they help us extract knowledge that humans could not? As we get better and better algorithms, we hope the whole scientific discovery process may drastically speed up."

Read the original here:
Machine learning could speed the arrival of ultra-fast-charging electric car - Chemie.de

Anti-Aging Researcher David Sinclair Takes Metformin, NMN …

David Sinclair is working on various anti-aging molecules. He is famous for discovering the anti-aging effects of resveratrol and sirtuins, and was interviewed about anti-aging on the Joe Rogan show.

In 2013, GlaxoSmithKline shut down Sirtris (David Sinclair's company), about five years after spending $720 million to buy it.

David A. Sinclair, Ph.D., A.O. is a Professor in the Department of Genetics and co-Director of the Paul F. Glenn Center for the Biology of Aging at Harvard Medical School. Dr. Sinclair is co-founder of several biotechnology companies (Sirtris, Ovascience, Genocea, Cohbar, MetroBiotech, ArcBio, Liberty Biosecurity) and is on the boards of several others. He is also co-founder and co-chief editor of the journal Aging.

Life Biosciences was co-founded in 2017 by David A. Sinclair, PhD, AO, a professor in the Department of Genetics at Harvard Medical School, and Tristan Edwards, an investment professional who developed its innovative company structure.

Sinclair's lab continues to work on resveratrol and its analogs, as well as on mitochondria and NAD, all directed at understanding aging and how to prevent it.

His anti-aging regimen is to activate pathways that improve the body's defenses against aging.

He is testing NMN on human subjects. He describes NMN as fuel for sirtuins. NMN is related to NR; NR increases levels of NAD, and sirtuins need NAD to work. We lose NAD as we age - by the time we are 50, we have about half as much.

He takes a gram of NMN (nicotinamide mononucleotide) and half a gram of resveratrol in the morning with yogurt.

He is personally taking 1 gram of Metformin once a day at night.

He gives himself temperature treatments, exposing himself to heat in a hot tub and then cold in a cold bath. These, too, are meant to activate the body's aging-defense pathways.

He also practices intermittent fasting: he skips meals and eats at night. He limits his sugar, carbs, and meat.

He is not taking Rapamycin because of concern over side-effects.

Antioxidants have been a failure in the anti-aging field.

Metro Biotech makes a super-NAD booster called MIB-626. They hope to bring it to market to treat diseases within three years; it is currently in clinical trials for safety.

Research has found the lining of blood vessels needs NAD.

SRT-2104 has shown successful anti-aging effects.


See original here:
Anti-Aging Researcher David Sinclair Takes Metformin, NMN ...

Machine learning finds a novel antibiotic able to kill superbugs – STAT – STAT

For decades, discovering novel antibiotics meant digging through the same patch of dirt. Biologists spent countless hours screening soil-dwelling microbes for properties known to kill harmful bacteria. But as superbugs resistant to existing antibiotics spread widely, breakthroughs became as rare as new places to dig.

Now, artificial intelligence is giving scientists a reason to dramatically expand their search into databases of molecules that look nothing like existing antibiotics.

A study published Thursday in the journal Cell describes how researchers at the Massachusetts Institute of Technology used machine learning to identify a molecule that appears capable of countering some of the world's most formidable pathogens.


When tested in mice, the molecule, dubbed halicin, effectively treated the gastrointestinal bug Clostridium difficile (C. diff), a common killer of hospitalized patients, and another type of drug-resistant bacteria that often causes infections in the blood, urinary tract, and lungs.

The most surprising feature of the molecule? It is structurally distinct from existing antibiotics, the researchers said. It was found in a drug-repurposing database where it was initially identified as a possible treatment for diabetes, a feat that showcases the power of machine learning to support discovery efforts.

"Now we're finding leads among chemical structures that in the past we wouldn't have even hallucinated could be an antibiotic," said Nigam Shah, professor of biomedical informatics at Stanford University. "It greatly expands the search space into dimensions we never knew existed."

Shah, who was not involved in the research, said that the generation of a promising molecule is just the first step in a long and uncertain process of testing its safety and effectiveness in humans.

But the research demonstrates how machine learning, when paired with expert biologists, can speed up time-consuming preclinical work, and give researchers greater confidence that the molecule they're examining is worth pursuing through more costly phases of drug discovery.

That is an especially pressing challenge in the development of new antibiotics, because a lack of economic incentives has caused pharmaceutical companies to pull back from the search for badly needed treatments. Each year in the U.S., drug-resistant bacteria and fungi cause more than 2.8 million infections and 35,000 deaths, with more than a third of fatalities attributable to C. diff, according to the Centers for Disease Control and Prevention.

The damage is far greater in countries with fewer health care resources.

Without the development of novel antibiotics, the World Health Organization estimates that the global death toll from drug-resistant infections is expected to rise to 10 million a year by 2050, up from about 700,000 a year currently.

In addition to finding halicin, the researchers at MIT reported that their machine learning model identified eight other antibacterial compounds whose structures differ significantly from known antibiotics.

"I do think this platform will very directly reduce the cost involved in the discovery phase of antibiotic development," said James Collins, a co-author of the study who is a professor of bioengineering at MIT. "With these models, one can now get after novel chemistries in a shorter period of time involving less investment."

The machine learning platform was developed by Regina Barzilay, a professor of computer science and artificial intelligence who works with Collins as co-lead of the Jameel Clinic for Machine Learning in Health at MIT. It relies on a deep neural network, a type of AI architecture that uses multiple processing layers to analyze different aspects of data to deliver an output.

Prior types of machine learning systems required close supervision from humans to analyze molecular properties in drug discovery and produced spotty results. But Barzilay's model is part of a new generation of machine learning systems that can automatically learn chemical properties connected to a specific function, such as an ability to kill bacteria.

Barzilay worked with Collins and other biologists at MIT to train the system on more than 2,500 chemical structures, including those that looked nothing like antibiotics. The effect was to counteract bias that typically trips up most human scientists who are trained to look for molecular structures that look a lot like other antibiotics.

The neural net was able to isolate molecules that were predicted to have antibacterial qualities but didn't look like existing antibiotics, resulting in the identification of halicin.
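As a much simpler stand-in for that pipeline, here is a toy logistic-regression screen over invented binary "fingerprint" features: train on labeled structures, then score an unseen one. The actual study used a deep neural network operating on molecular graphs, not hand-made fingerprints.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, epochs=500, lr=0.5):
    """Gradient-ascent logistic regression on (features, active?) pairs."""
    w = [0.0] * len(data[0][0])
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
            w = [wi + lr * (y - p) * xi for wi, xi in zip(w, x)]
    return w

# Synthetic training set: substructure bit 0 drives antibacterial activity.
data = [([1, 0, 1], 1), ([1, 1, 0], 1), ([0, 1, 1], 0), ([0, 0, 1], 0)]
w = train(data)

def score(x):
    """Predicted probability that a fingerprint is antibacterial."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)))

# Screen an unseen "molecule" that shares the activity-linked substructure.
print(score([1, 0, 0]) > 0.5)
```

The train-then-screen pattern is what matters: once trained, the model can score millions of database entries far faster than any wet-lab assay.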

"To use a crude analogy, it's like you show an AI all the different means of transportation, but you've not shown it an electric scooter," said Shah, the bioinformatics professor at Stanford. "And then it independently looks at an electric scooter and says, 'Yeah, this could be useful for transportation.'"

In follow-up testing in the lab, Collins said, halicin displayed a remarkable ability to fight a wide range of multidrug-resistant pathogens. Tested against 36 such pathogens, it displayed potency against 35 of them. Collins said testing in mice showed excellent activity against C. diff, tuberculosis, and other bacteria.

The ability to identify molecules with specific antibiotic properties could aid in the development of drugs to treat so-called orphan conditions that affect a small percentage of the population but are not targeted by drug companies because of the lack of financial rewards.

Collins noted that commercializing halicin would take many months of study to evaluate its toxicity in humans, followed by multiple phases of clinical trials to establish safety and efficacy.

Read the original post:
Machine learning finds a novel antibiotic able to kill superbugs - STAT - STAT

Machine Learning: Real-life applications and it’s significance in Data Science – Techstory

Do you know how Google Maps predicts traffic? Are you amused by how Amazon Prime or Netflix recommends just the movie you would want to watch? We all know it must be some approach of artificial intelligence. Machine learning involves algorithms and statistical models to perform such tasks. This same approach is used to find faces on Facebook and to detect cancer, too. A machine learning course can educate in the development and application of such models.

Artificial intelligence mimics human intelligence, and machine learning is one of its most significant branches. There is an ongoing and increasing need for its development.

Tasks as simple as spam detection in Gmail illustrate its significance in our day-to-day lives. That is why data scientists are in such demand at present. An aspiring data scientist can learn to develop and apply such algorithms by pursuing a machine learning certification.

Machine learning, as a subset of artificial intelligence, is applied for varied purposes. There is a misconception that applying machine learning algorithms requires prior mathematical knowledge, but a machine learning online course would suggest otherwise. Contrary to the popular bottom-up approach to studying, a top-down approach is involved here: an aspiring data scientist, a business person, or anyone else can learn how to apply statistical models for various purposes. Here is a list of some well-known applications of machine learning.

Microsoft's research lab uses machine learning to study cancer. This helps in individualized oncological treatment and the generation of detailed progress reports. Data engineers apply pattern recognition, natural language processing, and computer vision algorithms to work through large datasets, helping oncologists conduct precise and breakthrough tests.

Likewise, machine learning is applied in biomedical engineering, where it has led to the automation of diagnostic tools used in detecting many sorts of neurological and psychiatric disorders.

We have all had a conversation with Siri or Alexa, which use speech recognition to take in our requests. Machine learning is applied here to auto-generate responses based on previous data. Hello Barbie is a Siri-like version for kids to play with: it uses advanced analytics, machine learning, and natural language processing to respond, and as the first AI-enabled toy it could lead to more such inventions.

Google uses machine learning statistical models to acquire inputs, collecting details such as the distance from start point to end point, trip duration, and bus schedules. Such historical data is reused, and the algorithms, developed with the objective of data prediction, recognize patterns among these inputs to predict approximate time delays.
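The prediction pattern described above, in miniature: average historical trip durations per departure-hour bucket, then predict a new trip from its bucket. Real traffic models use far richer features and learners; the data here are invented.

```python
from collections import defaultdict

def build_model(trips):
    """Average historical trip duration (minutes) per departure hour."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for hour, minutes in trips:
        sums[hour] += minutes
        counts[hour] += 1
    return {h: sums[h] / counts[h] for h in sums}

history = [(8, 40), (8, 44), (8, 42), (14, 25), (14, 27)]
model = build_model(history)

def predict(hour):
    """Predict a new trip's duration from its departure-hour bucket."""
    return model[hour]

print(predict(8))   # rush-hour estimate from historical data: 42.0
```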

Another well-known Google application, Google Translate, involves machine learning. Deep learning aids in learning language rules from recorded conversations. Neural networks such as long short-term memory (LSTM) networks aid long-term information retention and learning, while recurrent neural networks identify the sequences in the data. Even bilingual processing is feasible nowadays.

Facebook uses image recognition and computer vision to detect images, which are fed as inputs. The statistical models developed using machine learning map any information associated with these images. Facebook generates automated captions for images, meant to provide directions for visually impaired people. This innovation has nudged data engineers to come up with other such valuable real-time applications.

Netflix's aim here is to increase the possibility of a customer watching a recommended movie. This is achieved by studying thumbnails: every available movie has separate thumbnails, each assigned an individual numerical value, and an algorithm studies these values and derives recommendations through pattern recognition among the numerical data.

Tesla uses computer vision, data prediction, and path planning for autonomous driving. The machine learning practices applied make the innovation stand out: deep neural networks work with training data to generate driving instructions, and maneuvers such as changing lanes are learned through imitation learning.

Gmail, Yahoo Mail, and Outlook employ machine learning techniques such as neural networks, which detect patterns in historical data by training on known spam and phishing messages. These spam filters are reported to achieve 99.9 percent accuracy.
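A classic minimal version of such a filter is naive Bayes over word counts, sketched below on toy training mail; modern provider filters layer neural networks and many more signals on top of this idea.

```python
import math
from collections import Counter

def train_nb(messages):
    """Count word occurrences per class from (text, label) pairs."""
    word_counts = {"spam": Counter(), "ham": Counter()}
    class_counts = Counter()
    for text, label in messages:
        class_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, class_counts

def classify(text, word_counts, class_counts):
    """Pick the class with the highest log prior + smoothed log likelihood."""
    vocab = set(word_counts["spam"]) | set(word_counts["ham"])
    best_label, best_score = None, -math.inf
    for label in ("spam", "ham"):
        score = math.log(class_counts[label] / sum(class_counts.values()))
        total = sum(word_counts[label].values())
        for word in text.lower().split():
            # Laplace smoothing so unseen words don't zero out the class
            score += math.log((word_counts[label][word] + 1)
                              / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

mail = [("win free money now", "spam"), ("claim your free prize", "spam"),
        ("meeting moved to monday", "ham"), ("lunch on monday?", "ham")]
wc, cc = train_nb(mail)
print(classify("free money prize", wc, cc))  # prints "spam"
```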

As people grow more health-conscious, fitness-monitoring applications are on the rise. Being on top of the market, Fitbit ensures its productivity by employing machine learning methods: trained models predict user activities, achieved through data pre-processing, data processing, and data partitioning. There remains a need to extend the application to additional purposes.

The applications mentioned above are just the tip of the iceberg. Machine learning, as a subset of artificial intelligence, finds its place in many other streams of daily activity.


Go here to see the original:
Machine Learning: Real-life applications and it's significance in Data Science - Techstory

Inspur Re-Elected as Member of SPEC OSSC and Chair of SPEC Machine Learning – Yahoo Finance


View source version on businesswire.com: https://www.businesswire.com/news/home/20200221005123/en/

Contacts

Media: Fiona Liu, Liuxuan01@inspur.com

Read more:
Inspur Re-Elected as Member of SPEC OSSC and Chair of SPEC Machine Learning - Yahoo Finance

Artificial Intelligence and Machine Learning in the Operating Room – 24/7 Wall St.

Most applications of artificial intelligence (AI) and machine learning technology provide only data to physicians, leaving the doctors to form a judgment on how to proceed. Because AI doesn't actually perform any procedure or prescribe a course of medication, software that diagnoses health problems does not have to pass a randomized clinical trial, as devices such as insulin pumps or new medications do.

A new study published Monday at JAMA Network discusses a trial including 68 patients undergoing elective noncardiac surgery under general anesthesia. The object of the trial was to determine if a predictive early warning system for possible hypotension (low blood pressure) during the surgery might reduce the time-weighted average of hypotension episodes during the surgery.

In other words, not only would the device and its software keep track of the patient's mean blood pressure, but it would sound an alarm if there was an 85% or greater risk of the patient's blood pressure falling below 65 mm of mercury (Hg) in the next 15 minutes. The device also encouraged the anesthesiologist to take preemptive action.

Patients in the control group were connected to the same AI device and software, but only routine pulse and blood pressure data were displayed. That means that the anesthesiologist had no early warning about a hypotension event and could take no action to prevent the event.

Among patients fully connected to the device and software, the median time-weighted average of hypotension was 0.1 mm Hg, compared to an average of 0.44 mm Hg in the control group. In the control group, the median time of hypotension per patient was 32.7 minutes, while it was just 8.0 minutes among the other patients. Most important, perhaps, two patients in the control group died from serious adverse events, while no patients connected to the AI device and software died.

The algorithm used by the device was developed by other researchers, who trained the software on thousands of waveform features to identify a possible hypotension event 15 minutes before it occurs during surgery. The devices used were a FloTrac IQ sensor with the early warning software installed and a HemoSphere monitor. The devices are made by Edwards Lifesciences, and five of the algorithm's eight developers were Edwards researchers. The study itself was conducted in the Netherlands at Amsterdam University Medical Centers.

In an editorial at JAMA Network, associate editor Derek Angus wrote:

The final model predicts the likelihood of future hypotension via measurement of multiple variables characterizing dynamic interactions between left ventricular contractility, preload, and afterload. Although clinicians can look at arterial pulse pressure waveforms and, in combination with other patient features, make educated guesses about the possibility of upcoming episodes of hypotension, the likelihood is high that an AI algorithm could make more accurate predictions.

Among the past decade's biggest health news stories were the development of immunotherapies for cancer and a treatment for cystic fibrosis. AI is off to a good start in the new decade.

By Paul Ausick

View original post here:
Artificial Intelligence and Machine Learning in the Operating Room - 24/7 Wall St.

How businesses and governments should embrace AI and Machine Learning – TechCabal

Leadership team of credit-as-a-service startup Migo, one of a growing number of businesses using AI to create consumer-facing products.

The ability to make good decisions is literally the reason people trust you with responsibilities. Whether you work for a government or lead a team at a private company, your decision-making process will affect lives in very real ways.

Organisations often make poor decisions because they fail to learn from the past. Wherever there is reluctance to collect data, there is a fair chance that mistakes will be repeated. Bad policy goals are often a consequence of faulty evidentiary support: a failure to sufficiently look ahead caused by not sufficiently looking back.

But as Daniel Kahneman, author of Thinking, Fast and Slow, says: "The idea that the future is unpredictable is undermined every day by the ease with which the past is explained."

If governments and business leaders are to live up to their responsibilities, enthusiastically embracing methodical decision-making tools should be a no-brainer.

Mass media representations project artificial intelligence in futuristic, geeky terms. But nothing could be further from the truth.

While it is indeed scientific, AI can be applied in practical everyday life today. Basic interactions with AI include algorithms that recommend articles to you, friend suggestions on social media and smart voice assistants like Alexa and Siri.

In the same way, government agencies can integrate AI into regular processes necessary for society to function properly.

Managing money is an easy example to begin with. AI systems can be used to streamline data points required during budget preparations and other fiscal processes. Based on data collected from previous fiscal cycles, government agencies could reasonably forecast needs and expectations for future years.

With their large troves of citizen data, governments could employ AI to effectively reduce inequalities in outcomes and opportunities. Big Data gives a bird's-eye view of the population, providing adequate tools for equitably distributing essential infrastructure.

Perhaps a more futuristic example is in drafting legislation. Though a young discipline, legimatics includes the use of artificial intelligence in legal and legislative problem-solving.

Democracies like Nigeria consider public input a crucial aspect of desirable law-making. While AI cannot yet be relied on to draft legislation without human involvement, an AI-based approach can produce tools for specific parts of legislative drafting or decision support systems for the application of legislation.

In Africa, businesses are already ahead of most governments in AI adoption. Credit scoring based on customer data has become popular in the digital lending space.

However, there is more for businesses to explore with the predictive powers of AI. A particularly exciting prospect is the potential for new discoveries based on unstructured data.

Machine learning can broadly be split into two branches: supervised and unsupervised learning. With supervised learning, a data analyst sets goals based on the labels and known classifications of the dataset. The resulting insights are useful but do not produce the sort of new knowledge that comes from unsupervised learning processes.
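A toy sketch of the contrast, using synthetic data (a nearest-centroid rule stands in for supervised learning and a bare-bones k-means for unsupervised; the data and all names are illustrative, not from the article):

```python
import numpy as np

rng = np.random.default_rng(42)
# Two well-separated 2-D clusters; we know the labels, the algorithm may not.
X = np.vstack([rng.normal((0, 0), 0.3, (50, 2)),
               rng.normal((3, 3), 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Supervised: labels are given, so learn one centroid per class and classify.
centroids = np.array([X[y == k].mean(axis=0) for k in (0, 1)])

def classify(point):
    """Assign a point to the class with the nearest learned centroid."""
    return int(np.argmin(np.linalg.norm(centroids - point, axis=1)))

# Unsupervised: no labels; k-means rediscovers the two groups on its own.
centers = X[rng.choice(len(X), size=2, replace=False)]
for _ in range(10):
    labels = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=2), axis=1)
    # Keep a center in place if (rarely) no points were assigned to it.
    centers = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                        else centers[k] for k in (0, 1)])
```

The supervised half needs the labels `y`; the unsupervised half recovers essentially the same two centers without ever seeing them, which is the sense in which unsupervised learning can surface structure nobody asked about.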

In essence, AI can be a medium for market-creating innovations based on previously unknown insight buried in massive caches of data.

Digital lending became a market opportunity in Africa thanks to growing smartphone availability. However, customer data had to be available too for algorithms to do their magic.

This is why it is desirable for more data-sharing systems to be normalised on the continent to generate new consumer products. Fintech sandboxes that bring the public and private sectors together aiming to achieve open data standards should therefore be encouraged.

Artificial intelligence, like other technologies, is neutral. It can be used for social good but also can be diverted for malicious purposes. For both governments and businesses, there must be circumspection and a commitment to use AI responsibly.

China is a cautionary tale. The Communist state currently employs an all-watching system of cameras to enforce round-the-clock citizen surveillance.

By algorithmically rating citizens on a so-called social credit score, China's ultra-invasive AI effectively precludes individual freedom, compelling its 1.3 billion people to live strictly by the Politburo's ideas of ideal citizenship.

On the other hand, businesses must be ethical in providing transparency to customers about how data is harvested to create products. At the core of all exchange must be trust, and a verifiable, measurable commitment to do no harm.

Doing otherwise condemns modern society to those dystopian days everybody dreads.

How can businesses and governments use Artificial Intelligence to find solutions to challenges facing the continent? Join entrepreneurs, innovators, investors and policymakers in Africa's AI community at TechCabal's emerging tech townhall. At the event, stakeholders including telcos and financial institutions will examine how businesses, individuals and countries across the continent can maximize the benefits of emerging technologies, specifically AI and Blockchain. Learn more about the event and get tickets here.

Continue reading here:
How businesses and governments should embrace AI and Machine Learning - TechCabal

How to Pick a Winning March Madness Bracket – Machine Learning Times – machine learning & data science news – The Predictive Analytics Times

Introduction

In 2019, over 40 million Americans wagered money on March Madness brackets, according to the American Gaming Association. Most of this money was bet in bracket pools, which consist of a group of people each entering their predictions of the NCAA tournament games along with a buy-in. The bracket that comes closest to being right wins. If you also consider the bracket pools where only pride is at stake, the number of participants is much greater. Despite all this attention, most do not give themselves the best chance to win because they are focused on the wrong question.

The Right Question

Mistake #3 in Dr. John Elder's Top 10 Data Science Mistakes is to ask the wrong question. A cornerstone of any successful analytics project is having the right project goal; that is, aiming at the right target. If you're like most people, when you fill out your bracket you ask yourself, "What do I think is most likely to happen?" This is the wrong question to ask if you are competing in a pool, because the objective is to win money, NOT to make the most correct bracket. The correct question to ask is: "What bracket gives me the best chance to win the money?" (This requires studying the payout formula. I used ESPN standard scoring, 320 possible points per round, with all pool money given to the winner: 10 points are awarded for each correct win in the round of 64, 20 in the round of 32, and so forth, doubling until 320 are awarded for a correct championship call.)
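The doubling scheme is easy to verify in a few lines (a sketch of the scoring arithmetic only, not ESPN's implementation):

```python
def espn_round_points(round_index: int) -> int:
    """Points per correct pick: 10 in the round of 64 (index 0),
    doubling each round up to 320 for the championship (index 5)."""
    return 10 * 2 ** round_index

def games_in_round(round_index: int) -> int:
    """32 games in the round of 64, halving down to 1 championship game."""
    return 32 // 2 ** round_index

# Every round is worth the same 320 points in total, 1920 points overall.
per_round_totals = [espn_round_points(r) * games_in_round(r)
                    for r in range(6)]
```

Because each round carries equal weight, the champion pick alone is worth as much as all 32 first-round picks combined, which is why the analysis below focuses so heavily on the championship slot.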

While these questions seem similar, the brackets they produce will be significantly different.

If you ignore your opponents and pick the teams with the best chance to win games you will reduce your chance of winning money. Even the strongest team is unlikely to win it all, and even if they do, plenty of your opponents likely picked them as well. The best way to optimize your chances of making money is to choose a champion team with a good chance to win who is unpopular with your opponents.

Knowing how other people in your pool are filling out their brackets is crucial, because it helps you identify teams that are less likely to be picked. One way to see how others are filling out their brackets is via ESPN's Who Picked Whom page (Figure 1). It summarizes how often each team is picked to advance in each round across all ESPN brackets and is a great first step towards identifying overlooked teams.

Figure 1. ESPN's Who Picked Whom Tournament Challenge page

For a team to be overlooked, their perceived chance to win must be lower than their actual chance to win. The Who Picked Whom page provides an estimate of perceived chance to win, but to find undervalued teams we also need estimates of actual chance to win. These can come from anywhere, from a complex prediction model to your own gut feeling. Two sources I trust are 538's March Madness predictions and Vegas futures betting odds. 538's predictions are based on computer rankings and have predicted performance well in past tournaments. There is also reason to pay attention to Vegas odds, because if they were too far off, the sportsbooks would lose money.

However, both sources have their flaws. 538 relies on computer ratings, so while it avoids human bias, it misses out on expert intuition. Most Vegas sportsbooks likely use both computer ratings and expert intuition to create their betting odds, but they are strongly motivated to have equal betting on all sides, so they are significantly affected by human perception. For example, if everyone were betting on Duke to win the NCAA tournament, the sportsbooks would shorten Duke's betting odds (lowering the payout) so that more people would bet on other teams, avoiding large losses. When calculating win probabilities for this article, I chose to average the 538 and Vegas predictions to obtain a balance I was comfortable with.
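As a concrete illustration of averaging the two sources, here is a minimal sketch assuming decimal-format futures odds; the numbers are hypothetical, and normalizing away the bookmaker's margin ("vig") as shown is a common convention, not necessarily the author's exact procedure:

```python
def implied_probs(decimal_odds):
    """Convert decimal futures odds into win probabilities.

    The raw inverses sum to more than 1 because of the bookmaker's
    margin, so normalize them into a proper probability distribution.
    """
    raw = [1.0 / o for o in decimal_odds]
    total = sum(raw)
    return [p / total for p in raw]

def blend(model_probs, vegas_probs, weight=0.5):
    """Average model-based (e.g. 538) and Vegas-implied probabilities."""
    return [weight * m + (1.0 - weight) * v
            for m, v in zip(model_probs, vegas_probs)]

# Hypothetical three-team example (not real 2019 odds):
vegas = implied_probs([4.0, 5.0, 8.0])
blended = blend([0.30, 0.20, 0.10], vegas)
```

A 50/50 blend is the simplest compromise; the `weight` parameter lets you lean toward whichever source you trust more.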

Let's look at last year. Figure 2 compares teams' perceived chance to win (based on ESPN's Who Picked Whom) to their actual chance to win (based on the averaged 538-Vegas predictions) for the leading 2019 NCAA Tournament teams. (Probabilities for all 64 teams in the tournament appear in Table 6 in the Appendix.)

Figure 2. Actual versus perceived chance to win March Madness for 8 top teams

As shown in Figure 2, participants over-picked Duke and North Carolina as champions and under-picked Gonzaga and Virginia. Many factors contributed to these selections; for example, most predictive models, avid sports fans, and bettors agreed that Duke was the best team last year. If you were picking the bracket most likely to occur, then selecting Duke as champion was the natural pick. But ignoring the selections made by others in your pool won't help you win your pool.

While this graph is interesting, how can we turn it into concrete takeaways? Gonzaga and Virginia look like good picks, but what about the rest of the teams hidden in that bottom left corner? Does it ever make sense to pick teams like Texas Tech, who had a 2.6% chance to win it all, and only 0.9% of brackets picking them? How much does picking an overvalued favorite like Duke hurt your chances of winning your pool?

To answer these questions, I simulated many bracket pools and found that the teams in Gonzaga's and Virginia's spots are usually the best picks: the most undervalued of the top four or five favorites. However, as the size of your bracket pool increases, overlooked lower seeds like third-seeded Texas Tech or fourth-seeded Virginia Tech become more attractive. The logic for this is simple: the chance that one of these teams wins it all is small, but if they do, then you probably win your pool regardless of the number of participants, because it's likely no one else picked them.

Simulations Methodology

To simulate bracket pools, I first had to simulate brackets. I used an average of the Vegas and 538 predictions to run many simulations of the actual events of March Madness. As discussed above, this method isn't perfect, but it's a good approximation. Next, I used the Who Picked Whom page to simulate many human-created brackets. For each human bracket, I calculated the chance it would win a pool of size n by first finding its percentile ranking among all human brackets, assuming one of the 538-Vegas simulated brackets were the real events. This percentile is basically the chance it beats a random human bracket. I raised the percentile to the power n − 1 (the number of opponents), then repeated for all simulated 538-Vegas brackets, averaging the results to get a single win probability per bracket.

For example, let's say for one 538-Vegas simulation my bracket is in the 90th percentile of all human brackets, and there are nine other people in my pool. The chance I win the pool would be 0.9^9, or about 39%. If we assumed a different simulation, then my bracket might only be in the 20th percentile, which would make my win probability 0.2^9, essentially zero. By averaging these probabilities over all 538-Vegas simulations we can estimate a bracket's win probability in a pool of size n, assuming we trust our input sources.
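The calculation just described can be sketched in a few lines (a simplified sketch with hypothetical percentile values; the function and variable names are mine, not the author's):

```python
import numpy as np

def win_probability(my_percentiles: np.ndarray, pool_size: int) -> float:
    """Estimate the chance a bracket wins a pool of `pool_size` people.

    my_percentiles[i] is the bracket's percentile among all human
    brackets when the i-th 538-Vegas simulation is taken to be the true
    tournament outcome. Against pool_size - 1 independently drawn
    opponents, the bracket wins only if it beats every one of them,
    which happens with probability percentile ** (pool_size - 1).
    Averaging over simulations gives the overall estimate.
    """
    return float(np.mean(my_percentiles ** (pool_size - 1)))

# Hypothetical percentiles from two simulated "true" tournaments:
p = win_probability(np.array([0.90, 0.20]), pool_size=10)
# averages 0.9**9 (about 0.39) with 0.2**9 (essentially zero)
```

Raising the percentile to the power of the opponent count is what makes large pools favor long shots: a merely good bracket's advantage shrinks toward zero as `pool_size` grows.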

Results

I used this methodology to simulate bracket pools with 10, 20, 50, 100, and 1000 participants. The detailed results of the simulations are shown in Tables 1-6 in the Appendix. Virginia and Gonzaga were the best champion picks when the pool had 50 or fewer participants. Yet, interestingly, Texas Tech and Purdue (3-seeds) and Virginia Tech (4-seed) were as good or better champion picks when the pool had 100 or more participants.

General takeaways from the simulations:

Additional Thoughts

We have assumed that your local pool makes their selections just like the rest of America, which probably isn't true. If you live close to a team that's in the tournament, then that team will likely be over-picked. For example, I live in Charlottesville (home of the University of Virginia), and Virginia has been picked as the champion in roughly 40% of brackets in my pools over the past couple of years. If you live close to a team with a high seed, one strategy is to start with ESPN's Who Picked Whom odds, then boost the odds of the popular local team and correspondingly drop the odds of all other teams. Another strategy I've used is to ask people in my pool who they are picking. It is mutually beneficial, since I'd then be less likely to pick whomever they are picking.

As a parting thought, I want to describe a scenario from the 2019 NCAA tournament some of you may be familiar with. Auburn, a five seed, was winning by two points in the waning moments of the game, when they inexplicably fouled an opposing player in the act of shooting a three-point shot with one second to go. That player, a 78% free-throw shooter, stepped to the line and missed two of three shots, allowing Auburn to advance. This isn't an alternate reality; this is how Auburn won their first-round game against 12-seeded New Mexico State. They proceeded to beat powerhouses Kansas, North Carolina, and Kentucky on their way to the Final Four, where they faced the exact same situation against Virginia. Virginia's Kyle Guy made all three of his free throws, and Virginia went on to win the championship.

I add this to highlight an important qualifier of this analysis: it's impossible to accurately predict March Madness. Were the people who picked Auburn to go to the Final Four geniuses? Of course not. Had Terrell Brown of New Mexico State made his free throws, they would have looked silly. There is no perfect model that can predict the future, and those who do well in the pools are not basketball gurus; they are just lucky. Implementing the strategies discussed here won't guarantee a victory; they just reduce the amount of luck you need to win. And even with the best models, you'll still need a lot of luck. It is March Madness, after all.

Appendix: Detailed Analyses by Bracket Sizes

At baseline (randomly), a bracket in a ten-person pool has a 10% chance to win. Table 1 shows how that chance changes based on the round selected for a given team to lose. For example, brackets that had Virginia losing in the round of 64 won a ten-person pool 4.2% of the time, while brackets that picked them to win it all won 15.1% of the time. As a reminder, these simulations were done with only pre-tournament information; they had no data indicating that Virginia was the eventual champion, of course.

Table 1 Probability that a bracket wins a ten-person bracket pool given that it had a given team (row) making it to a given round (column) and no further

In ten-person pools, the best performing brackets were those that picked Virginia or Gonzaga as the champion, winning 15% of the time. Notably, early round picks did not have a big influence on the chance of winning the pool, the exception being brackets that had a one or two seed losing in the first round. Brackets that had a three seed or lower as champion performed very poorly, but having lower seeds making the Final Four did not have a significant impact on chance of winning.

Table 2 shows the same information for bracket pools with 20 people. The baseline chance is now 5%, and again the best performing brackets are those that picked Virginia or Gonzaga to win. Similarly, picks in the first few rounds do not have much influence. Michigan State has now risen to the third best Champion pick, and interestingly Purdue is the third best runner-up pick.

Table 2 Probability that a bracket wins a 20-person bracket pool given that it had a given team (row) making it to a given round (column) and no further

When the bracket pool size increases to 50, as shown in Table 3, picking the overvalued favorites (Duke and North Carolina) as champions significantly lowers your baseline chances (2%). The slightly undervalued two and three seeds now raise your baseline chances when selected as champions, but Virginia and Gonzaga remain the best picks.

Table 3 Probability that a bracket wins a 50-person bracket pool given that it had a given team (row) making it to a given round (column) and no further

With the bracket pool size at 100 (Table 4), Virginia and Gonzaga are joined by undervalued three-seeds Texas Tech and Purdue. Picking any of these four raises your baseline chances from 1% to close to 2%. Picking Duke or North Carolina again hurts your chances.

Table 4 Probability that a bracket wins a 100-person bracket pool given that it had a given team (row) making it to a given round (column) and no further

When the bracket pool grows to 1000 people (Table 5), there is a complete changing of the guard. Virginia Tech is now the optimal champion pick, raising your baseline chance of winning your pool from 0.1% to 0.4%, followed by the three-seeds and sixth-seeded Iowa State as the next best champion picks.

Table 5 Probability that a bracket wins a 1000-person bracket pool given that it had a given team (row) making it to a given round (column) and no further

For reference, Table 6 shows the actual chance to win versus the chance of being picked to win for all teams seeded seventh or better. These chances are derived from ESPN's Who Picked Whom page and the 538-Vegas predictions. The data for the top eight teams in Table 6 is plotted in Figure 2. Notably, Duke and North Carolina are overvalued, while the rest are all at least slightly undervalued.

The teams in bold in Table 6 are examples of teams that are good champion picks in larger pools. They all have a high ratio of actual chance to win to chance of being picked to win, but a low overall actual chance to win.

Table 6 Actual odds to win Championship vs Chance Team is Picked to Win Championship.

Undervalued teams in green; over-valued in red.

About the Author

Robert Robison is an experienced engineer and data analyst who loves to challenge assumptions and think outside the box. He enjoys learning new skills and techniques to reveal value in data. Robert earned a BS in Aerospace Engineering from the University of Virginia, and is completing an MS in Analytics through Georgia Tech.

In his free time, Robert enjoys playing volleyball and basketball, watching basketball and football, reading, hiking, and doing anything with his wife, Lauren.

Read the original:
How to Pick a Winning March Madness Bracket - Machine Learning Times - machine learning & data science news - The Predictive Analytics Times

Google Teaches AI To Play The Game Of Chip Design – The Next Platform

As if it weren't bad enough that Moore's Law improvements in the density and cost of transistors are slowing, the cost of designing chips and of the factories that are used to etch them is also on the rise. Any savings on any of these fronts will be most welcome to keep IT innovation leaping ahead.

One of the promising frontiers of research right now in chip design is using machine learning techniques to actually help with some of the tasks in the design process. We will be discussing this at our upcoming The Next AI Platform event in San Jose on March 10 with Elias Fallon, engineering director at Cadence Design Systems. (You can see the full agenda and register to attend at this link; we hope to see you there.) The use of machine learning in chip design was also one of the topics that Jeff Dean, a senior fellow in the Research Group at Google who has helped invent many of the hyperscaler's key technologies, talked about in his keynote address at this week's 2020 International Solid State Circuits Conference in San Francisco.

Google, as it turns out, has more than a passing interest in compute engines, being one of the largest consumers of CPUs and GPUs in the world and also the designer of TPUs spanning from the edge to the datacenter for doing both machine learning inference and training. So this is not just an academic exercise for the search engine giant and public cloud contender, particularly if it intends to keep advancing its TPU roadmap and if it decides, like rival Amazon Web Services, to start designing its own custom Arm server chips, or to do custom Arm chips for its phones and other consumer devices.

With a certain amount of serendipity, some of the work that Google has been doing to run machine learning models across large numbers of different types of compute engines is feeding back into the work that it is doing to automate some of the placement and routing of IP blocks on an ASIC. (It is wonderful when an idea is fractal like that. . . .)

The pod of TPUv3 systems that Google showed off back in May 2018 can mesh together 1,024 of the tensor processors (which had twice as many cores and about a 15 percent clock speed boost, as far as we can tell) to deliver 106 petaflops of aggregate 16-bit half-precision multiplication performance (with 32-bit accumulation) using Google's own and very clever bfloat16 data format. Those TPUv3 chips are all cross-coupled using a 32x32 toroidal mesh so they can share data, and each TPUv3 core has its own bank of HBM2 memory. This TPUv3 pod is a huge aggregation of compute, which can do either machine learning training or inference, but it is not necessarily as large as Google needs to build. (We will be talking about Dean's comments on the future of AI hardware and models in a separate story.)

Suffice it to say, Google is hedging with hybrid architectures that mix CPUs and GPUs and perhaps someday other accelerators for reinforcement learning workloads, and hence the research that Dean and his peers at Google have been involved in that are also being brought to bear on ASIC design.

"One of the trends is that models are getting bigger," explains Dean. "So the entire model doesn't necessarily fit on a single chip. If you have essentially large models, then model parallelism, dividing the model up across multiple chips, is important, and getting good performance by giving it a bunch of compute devices is non-trivial and it is not obvious how to do that effectively."

It is not as simple as taking the Message Passing Interface (MPI) that is used to dispatch work on massively parallel supercomputers and hacking it onto a machine learning framework like TensorFlow because of the heterogeneous nature of AI iron. But that might have been an interesting way to spread machine learning training workloads over a lot of compute elements, and some have done this. Google, like other hyperscalers, tends to build its own frameworks and protocols and datastores, informed by other technologies, of course.

Device placement, meaning putting the right neural network (or the portion of the code that embodies it) on the right device at the right time for maximum throughput in the overall application, is particularly important as neural network models get bigger than the memory space and the compute oomph of a single CPU, GPU, or TPU. And the problem is getting worse faster than the frameworks and hardware can keep up. Take a look:

The number of parameters just keeps growing and the number of devices being used in parallel also keeps growing. In fact, getting 128 GPUs or 128 TPUv3 processors (which is how you get the 512 cores in the chart above) to work in concert is quite an accomplishment, and is on par with the best that supercomputers could do back in the era before loosely coupled, massively parallel supercomputers using MPI took over and federated NUMA servers with actual shared memory were the norm in HPC more than two decades ago. As more and more devices are going to be lashed together in some fashion to handle these models, Google has been experimenting with using reinforcement learning (RL), a special subset of machine learning, to figure out where to best run neural network models at any given time as model ensembles are running on a collection of CPUs and GPUs. In this case, an initial policy is set for dispatching neural network models for processing, and the results are then fed back into the model for further adaptation, moving it toward more and more efficient running of those models.

In 2017, Google trained an RL model to do this work (you can see the paper here), placing the work of the encoder and decoder on the two CPUs and four GPUs in the system under test, and the RL model's placement ended up with 19.3 percent lower runtime for the training runs compared to the manually placed neural networks done by a human expert. Dean added that this RL-based placement of neural network work on the compute engines does kind of non-intuitive things to achieve that result, which seems to be the case with a lot of machine learning applications that nonetheless work as well as or better than humans doing the same tasks. The catch is that it can't take a lot of RL compute oomph to decide where to place the work of running the neural networks that are themselves being trained. In 2018, Google did research showing how to scale computational graphs to over 80,000 operations (nodes), and last year Google created what it calls a generalized device placement scheme for dataflow graphs with over 50,000 operations (nodes).

"Then, instead of using this to place software computation on different computational devices, we started to think about whether we could use it to do placement and routing in ASIC chip design, because the problems, if you squint at them, sort of look similar," says Dean. "Reinforcement learning works really well for hard problems with clear rules like chess or Go, and essentially we started asking ourselves: Can we get a reinforcement learning model to successfully play the game of ASIC chip layout?"

There are a couple of challenges to doing this, according to Dean. For one thing, chess and Go both have a single objective, which is to win the game and not lose the game. (They are two sides of the same coin.) With the placement of IP blocks on an ASIC and the routing between them, there is not a simple win or lose and there are many objectives that you care about, such as area, timing, congestion, design rules, and so on. Even more daunting is the fact that the number of potential states that have to be managed by the neural network model for IP block placement is enormous, as this chart below shows:

Finally, the true reward function that drives the placement of IP blocks, which runs in EDA tools, takes many hours to run.

"And so we have an architecture; I'm not going to go into a lot of detail, but essentially it tries to take a bunch of things that make up a chip design and then place them on the wafer," explains Dean, and he showed off some results of placing IP blocks on a low-powered machine learning accelerator chip (we presume this is the edge TPU that Google has created for its smartphones), with some areas intentionally blurred to keep us from learning the details of that chip. "We had a team of human experts place this IP block, and we had a couple of proxy reward functions that are very cheap for us to evaluate; we evaluated them in two seconds instead of hours, which is really important because reinforcement learning is one where you iterate many times. So we have a machine learning-based placement system, and what you can see is that it sort of spreads out the logic a bit more rather than having it in quite such a rectangular area, and that has enabled it to get improvements in both congestion and wire length. And we have got comparable or superhuman results on all the different IP blocks that we have tried so far."

Note: I am not sure we want to call AI algorithms superhuman. At least not if you don't want to have it banned.

Anyway, here is how the RL network stacked up against people doing IP block placement on that low-powered machine learning accelerator:

And here is a table that shows the difference between doing the placing and routing by hand and automating it with machine learning:

And finally, here is how the IP block on the TPU chip was handled by the RL network compared to the humans:

Look at how organic these AI-created IP blocks look compared to the Cartesian ones designed by humans. Fascinating.

Now having done this, Google then asked this question: Can we train a general agent that is quickly effective at placing a new design that it has never seen before? Which is precisely the situation when you are making a new chip. So Google tested this generalized model against four different IP blocks from the TPU architecture and then also on the Ariane RISC-V processor architecture. This data pits people working with commercial tools against the model at various levels of tuning:

And here is some more data on the placement and routing done on the Ariane RISC-V chips:

"You can see that experience on other designs actually improves the results significantly, so essentially in twelve hours you can get the darkest blue bar," Dean says, referring to the first chart above, and then continues with the second chart above. "And this graph shows the wirelength costs, where we see that if you train from scratch, it actually takes the system a little while before it sort of makes some breakthrough insight and is able to significantly drop the wiring cost, whereas the pretrained policy has some general intuitions about chip design from seeing other designs and gets to that level very quickly."

Just as we do ensembles of simulations to do better weather forecasting, Dean says that this kind of AI-juiced placement and routing of IP blocks in chip design could be used to quickly generate many different layouts, with different tradeoffs. And in the event that some feature needs to be added, the AI-juiced chip design game could redo a layout quickly, not taking months to do it.

And most importantly, this automated design assistance could radically drop the cost of creating new chips. These costs are going up exponentially; according to data we have seen (thanks to IT industry luminary and Arista Networks chairman and chief technology officer Andy Bechtolsheim), an advanced chip design using 16 nanometer processes cost an average of $106.3 million, shifting to 10 nanometers pushed that up to $174.4 million, and the move to 7 nanometers costs $297.8 million, with projections for 5 nanometer chips to be on the order of $542.2 million. Nearly half of that cost has been, and continues to be, for software. So we know where to target some of those costs, and machine learning can help.

The question is: Will the chip design software makers embed AI and foster an explosion in chip designs that can be truly called Cambrian, and then make it up in volume like the rest of us have to do in our work? It will be interesting to see what happens here, and how research like that being done by Google will help.

Read the rest here:
Google Teaches AI To Play The Game Of Chip Design - The Next Platform

How to Train Your AI Soldier Robots (and the Humans Who Command Them) – War on the Rocks

Editor's Note: This article was submitted in response to the call for ideas issued by the co-chairs of the National Security Commission on Artificial Intelligence, Eric Schmidt and Robert Work. It addresses the third question (part a.), which asks how institutions, organizational structures, and infrastructure will affect AI development, and whether artificial intelligence will require the development of new institutions or changes to existing institutions.

Artificial intelligence (AI) is often portrayed as a single omnipotent force: the computer as God. Often the AI is evil, or at least misguided. According to Hollywood, humans can outwit the computer (2001: A Space Odyssey), reason with it (Wargames), blow it up (Star Wars: The Phantom Menace), or be defeated by it (Dr. Strangelove). Sometimes the AI is an automated version of a human, perhaps a human fighter's faithful companion (the robot R2-D2 in Star Wars).

These science fiction tropes are legitimate models for military discussion and many are being discussed. But there are other possibilities. In particular, machine learning may give rise to new forms of intelligence; not natural, but not really artificial if the term implies having been designed in detail by a person. Such new forms of intelligence may resemble that of humans or other animals, and we will discuss them using language associated with humans, but we are not discussing robots that have been deliberately programmed to emulate human intelligence. Through machine learning they have been programmed by their own experiences. We speculate that some of the characteristics that humans have evolved over millennia will also evolve in future AI, characteristics that have evolved purely for their success in a wide range of situations that are real, for humans, or simulated, for robots.

As the capabilities of AI-enabled robots increase, and in particular as behaviors emerge that are both complex and outside past human experience, how will we organize, train, and command them and the humans who will supervise and maintain them? Existing methods and structures, such as military ranks and doctrine, that have evolved over millennia to manage the complexity of human behavior will likely be necessary. But because robots will evolve new behaviors we cannot yet imagine, they are unlikely to be sufficient. Instead, the military and its partners will need to learn new types of organization and new approaches to training. It is impossible to predict what these will be but very possible they will differ greatly from approaches that have worked in the past. Ongoing experimentation will be essential.

How to Respond to AI Advances

The development of AI, especially machine learning, will lead to unpredictable new types of robots. Advances in AI suggest that humans will have the ability to create many types of robots, of different shapes, sizes, or degrees of independence or autonomy. It is conceivable that humans may one day be able to design tiny AI bullets to pierce only designated targets, automated aircraft to fly as loyal wingmen alongside human pilots, or thousands of AI fish to swim up an enemys river. Or we could design AI not as a device but as a global grid that analyzes vast amounts of diverse data. Multiple programs funded by the Department of Defense are on their way to developing robots with varying degrees of autonomy.

In science fiction, robots are often depicted as behaving in groups (like the robot dogs in Metalhead). Researchers inspired by animal behaviors have developed AI concepts such as swarms, in which relatively simple rules for each robot can result in complex emergent phenomena on a larger scale. This is a legitimate and important area of investigation. Nevertheless, simply imitating the known behaviors of animals has its limits. After observing the genocidal nature of military operations among ants, biologists Bert Hölldobler and E. O. Wilson wrote, "If ants had nuclear weapons, they would probably end the world in a week." Nor would we want to limit AI to imitating human behavior. In any case, a major point of machine learning is the possibility of uncovering new behaviors or strategies. Some of these will be very different from all past experience: human, animal, and automated. We will likely encounter behaviors that, although not human, are so complex that some human language, such as "personality," may seem appropriately descriptive. Robots with new, sophisticated patterns of behavior may require new forms of organization.
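The "simple local rules, complex group behavior" idea can be sketched in a few lines, in the spirit of Craig Reynolds' "boids." Everything below, including the two rules and their coefficients, is an illustrative toy, not any fielded system: each agent only drifts toward the group's center and pushes away from very close neighbors, yet the group as a whole settles into a loose cluster.

```python
def step(positions, cohesion=0.05, separation=1.0):
    """Advance every agent one tick using two purely local rules."""
    n = len(positions)
    cx = sum(x for x, _ in positions) / n
    cy = sum(y for _, y in positions) / n
    moved = []
    for x, y in positions:
        # Rule 1: cohesion, a gentle pull toward the swarm's center of mass.
        vx, vy = (cx - x) * cohesion, (cy - y) * cohesion
        # Rule 2: separation, a push away from neighbors closer than `separation`.
        for ox, oy in positions:
            d2 = (x - ox) ** 2 + (y - oy) ** 2
            if 0 < d2 < separation ** 2:
                vx += (x - ox) * 0.1
                vy += (y - oy) * 0.1
        moved.append((x + vx, y + vy))
    return moved

def spread(points):
    """Bounding-box perimeter proxy for how dispersed the swarm is."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

swarm = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
for _ in range(50):
    swarm = step(swarm)
# After 50 steps the spread has contracted from 20 toward a small, nonzero
# equilibrium: cohesion shrinks the square, separation prevents total collapse.
```

No agent knows the group's goal, yet clustering "emerges" from the two rules; that is the phenomenon the swarm literature studies at much larger scale.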

Military structure and scheme of maneuver are key to victory. Groups often fight best when they don't simply swarm but execute sophisticated maneuvers in hierarchical structures. Modern military tactics were honed over centuries of experimentation and testing. This was a lengthy, expensive, and bloody process.

The development of appropriate organizations and tactics for AI systems will also likely be expensive, although one can hope that through the use of simulation it will not be bloody. But it may happen quickly. The competitive international environment creates pressure to use machine learning to develop AI organizational structure and tactics, techniques, and procedures as fast as possible.

Despite our considerable experience organizing humans, when dealing with robots with new, unfamiliar, and likely rapidly evolving personalities, we confront something of a blank slate. But we must think beyond established paradigms, beyond the computer as all-powerful or the computer as loyal sidekick.

Humans fight in a hierarchy of groups, each soldier in a squad or each battalion in a brigade exercising a combination of obedience and autonomy. Decisions are constantly made at all levels of the organization. Deciding what decisions can be made at what levels is itself an important decision. In an effective organization, decision-makers at all levels have a good idea of how others will act, even when direct communication is not possible.

Imagine an operation in which several hundred underwater robots are swimming up a river to accomplish a mission. They are spotted and attacked. A decision must be made: Should they retreat? Who decides? Communications will likely be imperfect. Some mid-level commander, likely one of the robot swimmers, will decide based on limited information. The decision will likely be difficult and depend on the intelligence, experience, and judgment of the robot commander. It is essential that the swimmers know who or what is issuing legitimate orders. That is, there will have to be some structure, some hierarchy.

The optimal unit structure will be worked out through experience. Achieving as much experience as possible in peacetime is essential. That means training.

Training Robot Warriors

Robots with AI-enabled technologies will have to be exercised regularly, partly to test them and understand their capabilities and partly to provide them with the opportunity to learn from recreating combat. This doesn't mean that each individual hardware item has to be trained, but that the software has to develop by learning from its mistakes in virtual testbeds and, to the extent that they are feasible, realistic field tests. People learn best from the most realistic training possible. There is no reason to expect machines to be any different in that regard. Furthermore, as capabilities, threats, and missions evolve, robots will need to be continuously trained and tested to maintain effectiveness.

Training may seem a strange word for machine learning in a simulated operational environment. But then, conventional training is human learning in a controlled environment. Robots, like humans, will need to learn what to expect from their comrades. And as they train and learn highly complex patterns, it may make sense to think of such patterns as personalities and memories. At least, the patterns may appear that way to the humans interacting with them. The point of such anthropomorphic language is not that the machines have become human, but that their complexity is such that it is helpful to think in these terms.

One big difference between people and machines is that, in theory at least, the products of machine learning, the code for these memories or personalities, can be uploaded directly from one very experienced robot to any number of others. If all robots are given identical training and the same coded memories, we might end up with a uniformity among a units members that, in the aggregate, is less than optimal for the unit as a whole.

Diversity of perspective is accepted as a valuable aid to human teamwork. Groupthink is widely understood to be a threat. It's reasonable to assume that diversity will also be beneficial to teams of robots. It may be desirable to create a library of many different personalities or memories that could be assigned to different robots for particular missions. Different personalities could be deliberately created by using somewhat different sets of training testbeds to develop software for the same mission.

If AI can create autonomous robots with human-like characteristics, what is the ideal personality mix for each mission? Again, we are using the anthropomorphic term personality for the details of the robots' behavior patterns. One could call it a robot's programming if that did not suggest the existence of an intentional programmer. The robots' personalities have evolved from their participation in a very large number of simulations. It is unlikely that any human will fully understand a given personality or be able to fully predict all aspects of a robot's behavior.

In a simple case, there may be one optimum personality for all the robots of one type. In more complicated situations, where robots will interact with each other, having robots that respond differently to the same stimuli could make a unit more robust. These are things that military planners can hope to learn through testing and training. Of course, attributes of personality that may have evolved for one set of situations may be less than optimal, or positively dangerous, in another. We talk a lot about artificial intelligence. We don't discuss artificial mental illness. But there is no reason to rule it out.

Of course, humans will need to be trained to interact with the machines. Machine learning systems already often exhibit sophisticated behaviors that are difficult to describe. It's unclear how future AI-enabled robots will behave in combat. Humans, and other robots, will need experience to know what to expect and to deal with any unexpected behaviors that may emerge. Planners need experience to know which plans might work.

But the human-robot relationship might turn out to be something completely different. For all of human history, generals have had to learn their soldiers' capabilities. They knew exactly what their troops could do. They could judge the psychological state of their subordinates. They might even know when they were being lied to. But today's commanders do not know, yet, what their AI might prove capable of. In a sense, it is the AI troops that will have to train their commanders.

In traditional military services, the primary peacetime occupation of the combat unit is training. Every single servicemember has to be trained up to the standard necessary for wartime proficiency. This is a huge task. In a robot unit, planners, maintainers, and logisticians will have to be trained to train and maintain the machines but may spend little time working on their hardware except during deployment.

What would the units look like? What is the optimal unit rank structure? How does the human rank structure relate to the robot rank structure? There are a million questions as we enter uncharted territory. The way to find out is to put robot units out onto test ranges where they can operate continuously, test software, and improve machine learning. AI units working together can learn and teach each other and humans.

Conclusion

AI-enabled robots will need to be organized, trained, and maintained. While these systems will have human-like characteristics, they will likely develop distinct personalities. The military will need an extensive training program to inform new doctrines and concepts to manage this powerful, but unprecedented, capability.

It's unclear what structures will prove effective to manage AI robots. Only by continuous experimentation can people, including computer scientists and military operators, understand the developing world of multi-unit human and robot forces. We must hope that experiments lead to correct solutions. There is no guarantee that we will get it right. But there is every reason to believe that as technology enables the development of new and more complex patterns of robot behavior, new types of military organizations will emerge.

Thomas Hamilton is a Senior Physical Scientist at the nonprofit, nonpartisan RAND Corporation. He has a Ph.D. in physics from Columbia University and was a research astrophysicist at Harvard, Columbia, and Caltech before joining RAND. At RAND he has worked extensively on the employment of unmanned air vehicles and other technology issues for the Defense Department.

Image: Wikicommons (U.S. Air Force photo by Kevin L. Moses Sr.)

View original post here:
How to Train Your AI Soldier Robots (and the Humans Who Command Them) - War on the Rocks

Not fasting is killing us, but fasting can hurt us too. Here’s what to do. – Mashable

There's a switch inside every cell in your body. Flip it on and you're in growth mode. Your cells start dividing, but in the process they make a lot of junk, like misfolded proteins, which helps create the conditions for our biggest diseases (including cardiovascular disease, Alzheimer's, and the big C). Flip the switch off, though, and your cells literally take out the trash, leaving them clean, renewed, effectively young.

We know how to flip the switch. The trick is figuring out when. Because leaving your body in cleanup mode for too long can also be extremely bad for your health, in the much shorter term. Doing so has been the cause of anxiety, misery and disorder for decades. It's also known as starvation.

The delicate dance of food consumption is at the heart of The Switch, a new book about new body-energy science and how it can help us live longer. Author and research scientist James Clement studies people who reach the age of 110; Harvard's David Sinclair, who recently wrote a groundbreaking book on the end of aging, is his mentor. As Clement's book hit shelves, an unrelated study in Nature confirmed its premise: mTOR (your genetic "on" switch) cannot coexist with autophagy (trash removal), a conflict that is "implicated in metabolic disorders, neuro-degeneration, cancer and aging," the study said.

In other words: We age faster, get sicker and harm our brains when we fill the hours we're awake with food, day in and day out. Organic beings need more of a break than just a good night's rest in order to properly take out the trash. We're the opposite of automobiles. We break down eventually unless we run out of fuel. (Glycogen, which is what the body converts food into, is our gas.)

These revelations shed a new spotlight on fasting, the main way to induce autophagy (you can also kickstart it with intense exercise on a mostly empty stomach). But this is where we run into problems, and not just because autophagy literally translates to "eating yourself." (It can be hard for scientists to explain that this is actually a good thing and that all living things do it, from simple yeast all the way up to primates; we were designed to work this way by millennia of feast and famine.)

The problem isn't the science, it's the culture. For most of history, fasting was locked into human lives at a steady, healthy pace in some form of ritual, religious or otherwise. But in the modern world, we make our own rituals, and they easily shade into obsessions. This happens a lot with new diets: We get the zeal of the convert. We bore our friends to death with the particulars. And we take it too far, which in the case of fasting can be dangerous.

In a column published this week, the New York Times' veteran health columnist Jane Brody came around to the value of intermittent fasting. But she sounded a personal note of caution: "For people with a known or hidden tendency to develop an eating disorder, fasting can be the perfect trigger, which I discovered in my early 20s. In trying to control my weight, I consumed little or nothing all day, but once I ate in the evening, I couldn't stop and ended up with a binge eating disorder."

Something similar, at least to the first part of that story, seems to have happened to Twitter CEO Jack Dorsey. Last year Dorsey boasted about fasting for 22 hours, eating just one meal at dinnertime, and skipping food for the whole damn weekend. "I felt like I was hallucinating," he enthused, boasting of his increased focus and euphoria.

But as many withering articles pointed out, Dorsey's words would have triggered concern if they came from the mouth of a teenage girl, since focus and euphoria can also be early signs of anorexia and bulimia. Clearly there is a tangled set of gendered assumptions at play here. "It's both remarkable and depressing to watch Jack Dorsey blithely describe a diet that would put any woman or any non-wealthy man into the penalty box of public opinion," wrote Washington Post columnist Monica Hesse.

That's not what The Switch is about. Clement doesn't endorse Dorsey's extreme approach, since the research shows benefits diminish after 16 hours of fasting. "I have friends who are bulimic, I know how serious a problem it is," he said when I raised the issue. "The kind of fasting that I'm talking about is just making sure your mTOR and autophagy are in balance."

Indeed, The Switch is a very balanced book, with plenty of nuanced suggestions for how you can make your food situation just a little bit better without making too many radical changes. (That probably explains why it hasn't taken off on the diet book media circuit, which tends to favor rules that are extreme, unusual, and headline-friendly.)

Here's a breakdown of Clement's advice.

Like most medicine, the mTOR switch is good for you if used at the correct dose, and poison at high doses. There's a reason it exists: It's your body's way of saying "times are good, let's grow muscle and fat!" Fat isn't inherently bad for you, either on your body or in your diet. Indeed, the good fats are what Clement suggests we consume the most: fish, avocados, plant-based oils and nuts (macadamias especially), alongside regular greens, most legumes and a little fruit.

If you're cutting down the amount of time you eat, then the content of your meals matters more. Clement himself gets good results from a meatless version of the ketogenic diet, which he says makes him less hungry, but he doesn't rule out other diets that focus on good fat and fiber.

At the very least, be sure to avoid the stuff that spikes blood sugar. It will make you too hungry too soon, which will make autophagy impossible. You didn't think this whole Switch thing was going to give you permission to snarf on soda and hot dogs, did you?

Well, it does, actually; just very occasionally.

Clement brings a lot of science on protein to the table, and the bad news is you're probably eating way more of it than you think you need. Animal protein flips the mTOR switch into high gear (which is why Clement is into mostly vegan keto). Sadly, so does regular dairy, and as a milk fan I found the new studies on this particularly hard reading.

But it makes evolutionary sense. Cow milk is designed to make calves grow many sizes in a short space of time, and the way you do that is by activating the mTOR pathway. So it's hard to switch into autophagy if you're chugging milk all the time. (Non-cow milks and cheeses seem to be fine, mTOR-wise.)

Which isn't to say you can't have meat and milk at all. This isn't one of those fundamentally restrictive diets we always break. Clement suggests dividing the week or month or year into growth and fasting phases. You might decide to eat as much as you want for three months of the year (which takes care of the holidays problem), say, or try doing the fasting thing for five days a month.

Whichever way you do it, the sweet spot seems to put you in growth mode around 20 percent of the time. But that's not a hard and fast number, because again, this isn't one-size-fits-all. (It certainly doesn't apply to kids, who need to grow more like calves.) I told Clement that after reading the book I was thinking of only allowing myself meat or milk on the weekends; he enthusiastically endorsed the idea.

Ready to turn on autophagy for its disease-fighting benefits? Ready to avoid doing it too much? Ready to eat more nutritious food when you break your fast? Then it's time to figure out how long you want to fast for and you'll be surprised about how little time it takes to see the effects.

The math varies from human to human, but "you only have about six to 10 hours worth of glycogen stored in your body at any given time," says Clement. "So you can actually burn through those overnight if you didn't load up with carbs in your evening meal or 11 o'clock snacks."

That provides one particularly effortless way to fast for those of us who don't wake up hungry (and if you're eating the right stuff, you generally won't). Let's say you ate your last bite at 9 p.m. and wake up at 7 a.m. Congratulations, you're already out of glycogen and in autophagy! Now the question is: how long is it comfortable for you to stay foodless, bearing in mind you don't want to go past a total of 16 hours? (In this example, that would be 1 p.m.)
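The timing arithmetic above is easy to sanity-check. Here is a tiny sketch, assuming only the figures already given (glycogen lasting roughly 6 to 10 hours after the last bite, and benefits reportedly plateauing at 16 hours); the 9 p.m. / 1 p.m. values mirror the article's example.

```python
from datetime import datetime, timedelta

def fasting_window(last_bite, glycogen_hours=(6, 10), cap_hours=16):
    """Return (earliest autophagy start, latest start, 16-hour cap)."""
    autophagy_earliest = last_bite + timedelta(hours=glycogen_hours[0])
    autophagy_latest = last_bite + timedelta(hours=glycogen_hours[1])
    cap = last_bite + timedelta(hours=cap_hours)
    return autophagy_earliest, autophagy_latest, cap

last_bite = datetime(2020, 2, 19, 21, 0)   # last bite at 9 p.m.
earliest, latest, cap = fasting_window(last_bite)
print(cap.strftime("%I %p"))               # 16 hours later: 01 PM
```

So a 9 p.m. last bite puts glycogen depletion somewhere between 3 a.m. and 7 a.m., with the 16-hour cap landing at 1 p.m. the next day, exactly as in the example.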

You'll definitely want to hydrate immediately, of course: Sleep literally shrivels your brain. You might want to drink some coffee, which enhances autophagy (the all-time Guinness World Record oldest human, Jeanne Calment of France, took no breakfast but coffee, and died at 122). If you can stand to do so, this would be a great time to work out. Exercise seems to act like an autophagy power up; one study suggests working up a sweat might boost our cells' trash-cleaning effectiveness all the way up to the 80-minute mark.

So if you went from 9 p.m. to 1 p.m., or whatever 16-hour period suits your schedule (7 p.m. to 11 a.m. seems to be a popular one for fasters who don't make late dinner reservations, and it is easily remembered as "7-11"), then congratulations. You just did the maximally beneficial fast. Take that, Jack Dorsey.

But if you didn't? No sweat. If you only made it until 10 a.m., or 8 a.m. before needing food, your entire body still got a boost of cleanup time. And if you needed an immediate breakfast, that's fine too. Fasting doesn't have to happen every day; in fact it's imperative that it doesn't. Every morning is an opportunity to listen to your body and see if it's ready for a quick restorative food break.

Everyone who's ever tried to diet knows the terrible guilt that comes after grabbing obviously bad food. "Don't stress over it," says Clement. "Don't be maniacal. The whole point is to be in balance." We all need mTOR-boosting feasts from time to time. "It's fine to have one pepperoni pizza on a Sunday, or whatever," he says. So long as you're eating well most of the time and fasting every now and again, you'll see positive effects.

And if you can't fast at all and can't stop snacking? No worries, just change what you're eating. "If you switch over to snacking on either very low glycemic veggies like broccoli tops or carrots, or nuts, then you're not going to be replenishing your glycogen stores," Clement says. Stick a small bowl of almonds and blueberries in the kitchen and you'll be surprised, over time, at how little it takes to satisfy supposedly giant cravings.

That was what I learned, not from Clement's book, but from David Sinclair's. The Harvard geneticist and Clement mentor doesn't focus so much on lengthy fasts, although he takes a number of fast-mimicking supplements. His dieting approach is to simply eat less, to "flip a switch in your head that allows you to be OK with being a little hungry." For some of us, such small moves may be more effective than going all-out on a new diet.

If you'd like to talk to someone about your eating behaviors, call the National Eating Disorder Association's helpline at 800-931-2237. You can also text NEDA to 741-741 to be connected with a trained volunteer at the Crisis Text Line, or visit NEDA's website for more information.

Read this article:
Not fasting is killing us, but fasting can hurt us too. Here's what to do. - Mashable

Machine learning is making NOAA’s efforts to save ice seals and belugas faster – FedScoop

Written by Dave Nyczepir Feb 19, 2020 | FEDSCOOP

National Oceanic and Atmospheric Administration scientists are preparing to use machine learning (ML) to more easily monitor threatened ice seal populations in Alaska between April and May.

Ice floes are critical to seal life cycles but are melting due to climate change, which has hit the Arctic and sub-Arctic regions hardest. So scientists are trying to track species population distributions.

But surveying millions of aerial photographs of sea ice a year for ice seals takes months. And the data is outdated by the time statisticians analyze it and share it with the NOAA assistant regional administrator for protected resources in Juneau, according to a Microsoft blog post.

NOAA's Juneau office oversees conservation and recovery programs for marine mammals statewide and can instruct other agencies to limit permits for activities that might hurt species' feeding or breeding. The faster NOAA processes scientific data, the faster it can implement environmental sustainability policies.

"The amazing thing is how consistent these problems are from scientist to scientist," Dan Morris, principal scientist and program director of Microsoft AI for Earth, told FedScoop.

To speed up monitoring from months to mere hours, NOAA's Marine Mammal Laboratory partnered with AI for Earth in the summer of 2018 to develop ML models that recognize seals in real-time aerial photos.

The models were trained during a one-week hackathon using 20 terabytes of historical survey data in the cloud.

In 2007, the first NOAA survey done by helicopter captured about 90,000 images that took months to analyze and found 200 seals. The challenge is that the seals are solitary, and aircraft can't fly so low as to spook them. But still, scientists need images that capture the difference between threatened bearded and ringed seals and unthreatened spotted and ribbon seals.

Alaska's rainy, cloudy climate has led scientists to adopt thermal and color cameras, but dirty ice and reflections continue to interfere. A 2016 survey of 1 million sets of images took three scientists six months to identify about 316,000 seal hotspots.

Microsoft's ML, on the other hand, can distinguish seals from rocks and, coupled with improved cameras on a NOAA turboprop airplane, will be used in flyovers of the Beaufort Sea this spring.
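As a rough illustration of the detection side of such a survey pipeline, here is a toy "hotspot" finder that thresholds a thermal frame and groups adjacent warm pixels. The real NOAA/Microsoft system couples detections like these with a learned classifier on color imagery; the frame, temperature values, and threshold below are invented for illustration.

```python
def find_hotspots(frame, threshold):
    """Return connected groups of pixels warmer than `threshold` (4-connectivity)."""
    rows, cols = len(frame), len(frame[0])
    seen, groups = set(), []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] > threshold and (r, c) not in seen:
                # Flood-fill one warm blob with an explicit stack.
                stack, group = [(r, c)], []
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    group.append((y, x))
                    for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and frame[ny][nx] > threshold
                                and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                groups.append(group)
    return groups

# 0 = cold ice; higher numbers = warmer. Two warm blobs in an otherwise cold frame.
frame = [
    [0, 0, 0, 0, 0, 0],
    [0, 9, 9, 0, 0, 0],
    [0, 9, 0, 0, 7, 0],
    [0, 0, 0, 0, 7, 0],
]
print(len(find_hotspots(frame, threshold=5)))  # 2 candidate hotspots
```

In a real pipeline, each candidate blob would then be cropped from the paired color image and handed to a classifier to decide seal versus rock versus dirty ice.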

NOAA released a finalized Artificial Intelligence Strategy on Tuesday aimed at reducing the cost of data processing and incorporating AI into scientific technologies and services addressing mission priorities.

"They're a very mature organization in terms of thinking about incorporating AI into remote processing of their data," Morris said.

The camera systems on NOAA planes are also quite sophisticated because the agency's forward-thinking ecologists are assembling the best hardware, software and expertise for their biodiversity surveys, he added.

While the technical integration of AI for Earth's models with the software systems on NOAA's planes has taken a year to perfect, another agency project was able to apply a similar algorithm more quickly.

The Cook Inlet's endangered beluga whale population numbered 279 last year, down from about 1,000 three decades ago.

Belugas increasingly rely on echolocation to communicate as sediment from melting glaciers dirties the water they live in. But the noise from an increasing number of cargo ships and military and commercial flights can disorient the whales. Calves can get lost if they can't hear their mothers' clicks and whistles, and adults can't catch prey or identify predators.

NOAA is using ML tools to distinguish a whale's whistle from man-made noises and identify areas where there's dangerous overlap, such as where belugas feed and breed. The agency can then limit construction or transportation during those periods, according to the blog post.

Previously, the project's 15 mics recorded sounds along the seafloor for six months, scientists collected the data, and then they spent the remainder of the year classifying noises to determine how the belugas spent their time.

AI for Earth's algorithms matched scientists' previously classified logs 99 percent of the time last fall and have since been introduced into the field.

The ML was implemented faster than in the seal project because the software runs offline at a lab in Seattle, so integration was easier, Morris said.
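The classification step described above, telling a beluga whistle apart from low-frequency ship noise, can be illustrated with a minimal spectral sketch. The sample rate, frequency cutoff, and labels below are illustrative assumptions, not NOAA's actual pipeline:

```python
import numpy as np

SAMPLE_RATE = 48_000  # Hz (illustrative hydrophone sample rate)

def dominant_frequency(signal: np.ndarray, sample_rate: int = SAMPLE_RATE) -> float:
    """Return the frequency (Hz) carrying the most energy in the clip."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return float(freqs[np.argmax(spectrum)])

def label_clip(signal: np.ndarray) -> str:
    """Crude rule: whistles sit in a high band; engine rumble is low-frequency."""
    return "whistle" if dominant_frequency(signal) > 2_000 else "man-made noise"

# Synthetic examples: a 4 kHz 'whistle' tone vs. a 100 Hz engine rumble.
t = np.arange(0, 1.0, 1.0 / SAMPLE_RATE)
print(label_clip(np.sin(2 * np.pi * 4_000 * t)))  # whistle
print(label_clip(np.sin(2 * np.pi * 100 * t)))    # man-made noise
```

A real system would use trained models over spectrogram features rather than a single frequency threshold, but the pipeline shape, audio in, category out, is the same.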

NOAA intends to employ ML in additional biodiversity surveys. And AI for Earth plans to announce more environmental sustainability projects in the acoustic space in the coming weeks, Morris added, though he declined to name partners.

Original post:
Machine learning is making NOAA's efforts to save ice seals and belugas faster - FedScoop

Pluto7, a Google Cloud Premier Partner, Achieved the Machine Learning Specialization and is Recognized by Google Cloud as a Machine Learning…

Pluto7 is a services and solutions company focused on accelerating business transformation. As a Google Cloud Premier Partner, we service the retail, manufacturing, healthcare, and hi-tech industries.

Pluto7 just achieved the Google Cloud Machine Learning Specialization for combining business consultancy and unique machine learning solutions built on Google Cloud.

With Pluto7 comes unique capabilities for machine learning, artificial intelligence, and analytics, brought to you by a company that employs some of the finest minds in data science and draws on its surroundings in the heart of Silicon Valley, California.

Businesses are looking for practical solutions to real-world challenges. And by that, we do not just mean providing the tech and leaving you to stitch it all together. Instead, Pluto7's approach is to apply innovation to your desired outcome, alongside the experience needed to make it all happen. This is where their range of consultancy services comes into play. These are designed to create an interconnected tech stack and to champion data empowerment through ML/AI.

Pluto7's services and solutions allow businesses to speed up and scale out sophisticated machine learning models. They have successfully guided many businesses through the digital transformation process by leveraging the power of artificial intelligence, analytics, and IoT solutions.

What does this mean for a partner to be specialized?

When you see a Google Cloud partner with a Specialization, it indicates proficiency and experience with Google Cloud. Pluto7 is recognized by Google Cloud as a machine learning specialist with deep technical capabilities. Organizations that receive this distinction demonstrate their ability to lead a customer through the entire AI journey. Pluto7 designs, builds, migrates, tests, and operates industry-specific solutions for its customers.

Pluto7 has extensive experience deploying accelerated solutions and custom applications in machine learning and AI. Proven success stories from industry leaders like ABinBev, DxTerity, L-Nutra, CDD, USC, and UNM are publicly available on its website. These customers have leveraged Pluto7 and Google Cloud technology to see tangible and transformative results.

On top of all this, Pluto7 has a business plan that aligns with the Specialization. Because of their design, build, and implementation methodologies, they are able to successfully drive innovation, accelerate business transformation, and boost human creativity.

ML Services and Solutions

Pluto7 has created industry-specific use cases for marketing, sales, and supply chains and integrated them to deliver a game-changing customer experience. These capabilities are brought to life through their partnership with Google Cloud, one of the most innovative platforms for AI and ML out there. The following solution suites are created to solve some of the most difficult problems through a combination of innovative technology and deep industry expertise.

Demand ML - Increase efficiency and lower costs

Pluto7 helps supply chain leaders manage complex, unpredictable fluctuations. Its solutions allow businesses to achieve demand forecast accuracy of more than 90% while delivering the right product at the right time -- all using AI to predict and recommend based on real-time data at scale.

Preventive Maintenance - Improve quality, production and reduce associated costs

Pluto7 improves the efficiency of production plants by 45-80% to reduce downtime and maintain quality. It leverages machine learning and predictive analytics to determine the remaining value of assets and accurately predict when a manufacturing plant, machine, component, or part is likely to fail and thus needs to be replaced.

Marketing ML - Increase marketing ROI

Pluto7's marketing solutions improve click-through rates and predict traffic rates accurately. Pluto7 can help you analyze marketing data in real-time to transform prospect and customer engagement with hyper-personalization. Businesses are able to leverage machine learning for better customer segmentation, campaign targeting, and content optimization.

Contact Pluto7

If you would like to begin your AI journey, Pluto7 recommends starting with a discovery workshop. This workshop is co-driven by Pluto7 and Google Cloud to understand business pain points and set up a strategy to begin solving them. Visit the website at http://www.pluto7.com and contact us to get started today!

View source version on businesswire.com: https://www.businesswire.com/news/home/20200219005054/en/

Contacts

Sierra Shepard, Global Marketing Team, marketing@pluto7.com

See more here:
Pluto7, a Google Cloud Premier Partner, Achieved the Machine Learning Specialization and is Recognized by Google Cloud as a Machine Learning...

Syniverse and RealNetworks Collaboration Brings Kontxt-Based Machine Learning Analytics to Block Spam and Phishing Text Messages – Business Wire

TAMPA, Fla. & SEATTLE--(BUSINESS WIRE)--Syniverse, the world's most connected company, and RealNetworks, a leader in digital media software and services, today announced they have incorporated sophisticated machine learning (ML) features into their integrated offering that gives carriers visibility and control over mobile messaging traffic. By integrating RealNetworks' Kontxt application-to-person (A2P) message categorization capabilities into Syniverse Messaging Clarity, mobile network operators (MNOs), internet service providers (ISPs), and messaging aggregators can identify and block spam, phishing, and malicious messages while prioritizing legitimate A2P traffic, better monetizing their service.

Syniverse Messaging Clarity, the first end-to-end messaging visibility solution, utilizes the best-in-class grey route firewall, and clearing and settlement tools to maximize messaging revenue streams, better control spam traffic, and closely partner with enterprises. The solution analyzes the delivery of messages before categorizing them into specific groupings, including messages being sent from one person to another person (P2P), A2P messages, or outright spam. Through its existing clearing and settlement capabilities, Messaging Clarity can transform upcoming technologies like Rich Communication Services (RCS) and chatbots into revenue-generating products and services without the clutter and cost of spam or fraud.

The foundational Kontxt technology adds natural language processing and deep learning techniques to Messaging Clarity to continually update and improve its understanding and classification of messages. This new feature adds to Messaging Clarity's ability to identify, categorize, and ascribe a monetary value to the immense volume and complexity of messages delivered through text messaging, chatbots, and other channels.
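As a rough illustration of A2P message categorization (not Kontxt's actual models, which rely on trained NLP and deep learning rather than hand-written rules), a keyword-cue classifier might look like this; every category name and cue below is an illustrative assumption:

```python
# Hypothetical keyword cues per category; a production system would use
# trained language models rather than substring matching.
CATEGORY_CUES = {
    "one-time password": ["code", "otp", "verification"],
    "promotional": ["sale", "offer", "discount"],
    "spam": ["click this link", "you have won", "claim your prize"],
}

def categorize(message: str) -> str:
    """Return the first category whose cues appear in the message,
    falling back to person-to-person (P2P) traffic."""
    text = message.lower()
    for category, cues in CATEGORY_CUES.items():
        if any(cue in text for cue in cues):
            return category
    return "person-to-person"

print(categorize("Your verification code is 491203"))     # one-time password
print(categorize("You have won! Claim your prize here"))  # spam
print(categorize("Running late, see you at 7"))           # person-to-person
```

The category label is what downstream systems would act on: block the spam bucket, and rate messages differently by priority, as the next paragraphs describe.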

The Syniverse and RealNetworks Kontxt message classification provides companies the ability to ensure that urgent messages, like one-time passwords, are sent at a premium rate compared with lower-priority notifications, such as promotional offers. The Syniverse Messaging Clarity solution also helps eliminate instances of SMS phishing (smishing). This type of attack recently occurred with a global shipping company when spam texts were sent to consumers with the request to click a link to receive an update on a package delivery for a phantom order.


Supporting Quotes

Syniverse offers companies the capability to use machine learning technologies to gain insight into what traffic is flowing through their networks, while simultaneously ensuring consumer privacy and keeping the actual contents of the messages hidden. The Syniverse Messaging Clarity solution can generate statistics examining the type of traffic sent and whether it deviates from the sender's traffic pattern. From there, the technology analyzes if the message is a valid one or spam and blocks the spam.
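The deviation check described here, flagging senders whose traffic strays from their own historical pattern, can be sketched as a simple statistical test. The threshold and hourly counts below are illustrative assumptions, not Syniverse's actual analytics:

```python
import statistics

def deviates(history: list[int], current: int, threshold: float = 3.0) -> bool:
    """Flag a sender whose current hourly message count is more than
    `threshold` standard deviations away from its historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0  # guard against zero spread
    return abs(current - mean) / stdev > threshold

# A sender that normally sends ~100 messages/hour suddenly sends 5,000.
history = [98, 102, 97, 105, 99, 101, 100, 103]
print(deviates(history, 5000))  # True
print(deviates(history, 104))   # False
```

Note that this test never inspects message contents, only counts, which is consistent with the privacy point in the quote above; content-aware classification happens separately.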

The self-learning Kontxt algorithms within the Syniverse Messaging Clarity solution allow its threat-assessment techniques to evolve with changes in message traffic. Our analytics also verify that sent messages conform to network standards pertaining to spam and fraud. By deploying Messaging Clarity, MNOs and ISPs can help ensure their compliance with local regulations across the world, including the U.S. Telephone Consumer Protection Act, while also avoiding potential costs associated with violations. And, ultimately, the consumer -- who is the recipient of more appropriate text messages and less spam -- wins as well, as our Kontxt technology within the Messaging Clarity solution works to enhance customer trust and improve the overall customer experience.


About Syniverse

As the world's most connected company, Syniverse helps mobile operators and businesses manage and secure their mobile and network communications, driving better engagements and business outcomes. For more than 30 years, Syniverse has been the trusted spine of mobile communications by delivering industry-leading innovations in software and services that now connect more than 7 billion devices globally and process over $35 billion in mobile transactions each year. Syniverse is headquartered in Tampa, Florida, with global offices in Asia Pacific, Africa, Europe, Latin America and the Middle East.

About RealNetworks

Building on a legacy of digital media expertise and innovation, RealNetworks has created a new generation of products that employ best-in-class artificial intelligence and machine learning to enhance and secure our daily lives. Kontxt (www.kontxt.com) is the foremost platform for categorizing A2P messages to help mobile carriers build customer loyalty and drive new revenue through text message classification and antispam. SAFR (www.safr.com) is the world's premier facial recognition platform for live video. Leading in real-world performance and accuracy as tested by NIST, SAFR enables new applications for security, convenience, and analytics. For information about our other products, visit http://www.realnetworks.com.

RealNetworks, Kontxt, SAFR and the company's respective logos are trademarks, registered trademarks, or service marks of RealNetworks, Inc. Other products and company names mentioned are the trademarks of their respective owners.

Results shown from NIST do not constitute an endorsement of any particular system, product, service, or company by NIST: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt-ongoing.

Continued here:
Syniverse and RealNetworks Collaboration Brings Kontxt-Based Machine Learning Analytics to Block Spam and Phishing Text Messages - Business Wire