Op-ed: Hyperwar is coming. America needs to bring AI into the fight to win with caution – CNBC

A People's Liberation Army Navy fleet including the aircraft carrier Liaoning, submarines, vessels and fighter jets takes part in a review in the South China Sea on Apr. 12, 2018.

Visual China Group | Getty Images

The United States recently sent two aircraft carrier strike groups into the South China Sea in a show of military strength. The deployment of multiple American warships is a reaction to China holding military exercises in international waters that are contested by Vietnam and the Philippines. The stand-off raises global tensions at a time when each superpower has developed advanced technological capabilities in artificial intelligence, remote imaging, and autonomous weapons systems. It is important that officials in each nation understand how emerging technologies speed up decision-making but, through crisis acceleration, run the risk of dangerous miscalculation.

Harkening back to Prussian general and military theorist Carl von Clausewitz's famous work "On War," military doctrine the world over has been rooted in an understanding of the ever-changing character of war, the ways in which war manifests in the real world, and the never-changing nature of war: those abstractions that differentiate war from other acts, namely its violent, political, and interactive elements. Military scholars and decision makers alike have discussed and debated these definitions time and time again, with the character of war often being defined by the technologies of the day, and the nature of war being articulated as the human element of armed conflict.

With the advent of AI and other emerging technologies, though, these time-honored definitions are likely to change. At a fundamental level, battle, war, and conflict are time-competitive processes. From time immemorial, humans have sought to be faster in the ultimate competition of combat, in an absolute as well as in a relative sense. And in that regard, AI will dramatically change the speed of war. It will not only enhance the human role in conflict, but will also leverage technology as never before. For not only is technology changing, the rate of that alteration is accelerating.

This is the central issue before us for armed conflict, and the side that can create, master, and leverage an equilibrium between the nature of war and the character of war, especially within the new environment of AI, data analytics, and supercomputing, will inevitably prevail in conflict. In a geopolitical environment increasingly defined by new and emerging technologies, national defense stands as one of the most consequential areas of development for the 21st century. It is important to assess the revolutionary impacts of artificial intelligence and other emerging technologies on nearly every facet of national security and armed conflict, including the accelerated pace of warfare and the critical role of continued human control.

Ultimately, there are significant opportunities to deploy AI-based tools, as well as major rising threats that need to be considered and addressed. A variety of technologies can improve decision-making, speed, and scalability, some to a dizzying degree. But, as with so many other AI applications, policy and operational shifts are necessary to facilitate the proper integration and innovation of these emerging technologies and to make sure they strengthen, not weaken, leadership capacity, general readiness, and performance in the field.

Throughout human history, militaries have operated as the most overt political tool available to governments and society. Clausewitz himself famously wrote, "War is a continuation of politics by other means." And while modern security forces play a variety of interchangeable roles (peacekeeping, stabilization, and national defense), they invariably represent the threat of violence, the "mailed fist" purpose-built to ensure a particular outcome.

With this as context, it is no surprise that the ability to assure outcomes and plan for all contingencies, violent or otherwise, takes up a significant portion of military leaders' and strategists' time and energy. Here, through anything from predictive analytics to lightning-fast target acquisition, AI, once fully realized, has the potential to be one of the greatest force multipliers for military and security forces in human history.

Indeed, as noted in a Congressional Research Service report: "AI has the potential to impart a number of advantages in the military context, [though] it may also introduce distinct challenges. AI technology could, for example, facilitate autonomous operations, lead to faster, more informed military decision-making, and increase the speed and scale of military action. However, it may also be unpredictable or vulnerable to unique forms of manipulations."

Alan Radecki | U.S. Navy | Northrop Grumman | Getty Images

The interim report of the U.S. National Security Commission on Artificial Intelligence warns that:

"How the United States adopts AI will have profound ramifications for our immediate security, economic well-being, and position in the world. Developments in AI cannot be separated from the emerging strategic competition with China and developments in the broader geopolitical landscape. We are concerned that America's role as the world's leading innovator is threatened. We are concerned that strategic competitors and non-state actors will employ AI to threaten Americans, our allies, and our values. We know strategic competitors are investing in research and application. It is only reasonable to conclude that AI-enabled capabilities could be used to threaten our critical infrastructure, amplify disinformation, and wage war."

AI's role in the military and on the battlefield is thus one of catalytic power, both for good and ill. Yet the strength of AI does not manifest in the way a bomb or new weapons platform might perform or act. Its utility is much broader. As noted by Brookings Institution scholar Chris Meserole, AI is being deployed in myriad ways by the American military: "Rather than constituting a single weapon system itself, AI is instead being built into a wide variety of weapons systems and core infrastructure. Tanks, artillery, aircraft, submarines: versions of each can already detect objects and targets on their own and maneuver accordingly."

This dynamic becomes particularly clear within the context of the spectrum of conflict modern militaries deal with today, notably hybrid conflict. Looking ahead, it will also define warfare of the future, namely through what John Allen and Amir Husain have coined "hyperwar." The distinctions between hybrid warfare and hyperwar are important. As noted in a recent NATO report: "Hybrid threats combine military and non-military as well as covert and overt means, including disinformation, cyber attacks, economic pressure, deployment of irregular armed groups and use of regular forces. Hybrid methods are used to blur the lines between war and peace, and attempt to sow doubt in the minds of target populations."

Russian servicemen operate an S-400 Triumf anti-aircraft missile system on combat duty in the Kaliningrad Region in 2019. The system is designed to repel any contemporary aerospace attack, such as stealth and fighter aircraft, bombers, cruise and ballistic missiles, drones and hypersonic targets.

Vitaly Nevar | TASS | Getty Images

In this environment, AI can supercharge an adversary's ability to sow chaos in the battlespace and incorporate deception and surprise into its tactics in novel ways. By contrast, hyperwar may be defined as a type of conflict in which human decision-making is almost entirely absent from the observe-orient-decide-act (OODA) loop, a popular framework developed by U.S. Air Force Colonel John Boyd for training individuals to make time-sensitive decisions as quickly as possible, especially when there is limited time to gather information. As a consequence, the time associated with an OODA cycle will be reduced to near-instantaneous responses.

The implications of these developments are many and game-changing, ranging across the spectrum from advanced technological modernization to how leaders in the era of hyperwar are recruited, educated, and trained. The topics of speed, efficiency, and accuracy, as well as the necessity for human control of AI-powered military capabilities, represent the heart of the current AI-national security debate.

With U.S. and Chinese forces maneuvering in the relatively tight operational environment of the South China Sea, reaction times are already short. As more sophisticated AI eventually enables faster and more comprehensive intelligence collection and analysis, rapid decision support, and even wide-area target acquisition, we could see a premium placed on "going first" rather than risking being caught flat-footed. While AI has the capacity to magnify military capabilities and accelerate the speed of conflict, it can also be inherently destabilizing.

Now is the time for the U.S. and China to have the hard conversations about norms of behavior in an AI-enabled, hyperwar environment. With both sides moving rapidly to field arsenals of hypersonic weapons, action and reaction times will become shorter and shorter, and the growing imbalance between the character and nature of war will create strong incentives, in moments of intense crisis, for conflict, not peace. This is foreseeable now, and it demands the engagement of both powers to understand, seek, and preserve the equilibrium that can prevent the sort of miscalculation and high-speed escalation to catastrophe that none of us wants.

By John R. Allen, a retired four-star military general and Brookings Institution president, and Darrell West, vice president of governance studies at Brookings. This article is excerpted from their forthcoming book, "Turning Point: Policymaking in the Era of Artificial Intelligence."

AI Predicts The Future – AI Daily

Although artificial intelligence is becoming increasingly popular, no one knows how far it can actually evolve. Some believe that it will become a fully self-aware super-species, while others believe that this is not possible and that AI will only support humans in the future.

There are still many ethical issues associated with artificial intelligence that companies have yet to address, such as the implications of how artificial intelligence will be used in the future.

In 2020, more than ever, AI assistants are arranging our calendars, planning our journeys, ordering pizza and more. AI is so woven into our lives today that most people don't think for a second that when they search Google or watch Netflix, highly accurate AI-driven predictions are at work making the experience flow. These services interact with us and help us make more informed decisions about our daily lives and activities.

As these services learn to anticipate our behavior and better understand our habits, they become increasingly useful to us over time.

This could lead to a glorious utopia in which people pursue more meaningful aspirations in their lives, rather than letting economic necessity dictate what they spend their time on. There is no doubt that the economic and social changes that AI promises, and in some cases threatens, to bring go beyond anything imagined in previous technological revolutions: a future in which machines not only do the physical labor, as they did during the Industrial Revolution, but also do the thinking and decision-making that humans once did.

This is an outcome that continues to be hotly debated and is unlikely to be realized by 2020. As AI technology and machine learning applications improve, we are likely to see more tools like GPT-2 making more and better predictions about the future. Asking a model to predict our future based on a past curated by Reddit is certainly something new, but it's not the end of the line.

In smart cities, AI chips will help locate missing people and search for stolen vehicles. The same idea could be applied to car accidents, as cars learn when bending driving rules would mean saving more lives. AI chips will also improve our ability to process visual data more efficiently, paving the way for the autonomous vehicles of the future.

Because AI can read and synthesise a huge dataset much faster than we can, it can be used to predict all sorts of things, from virus outbreaks to crimes. By processing data from any source, chips can provide more privacy and reliability as well as lower costs in the future.

If we continue to improve these models, we will probably be able to make more and better predictions about the future. In the meantime, the model lets us train on what we can expect in weeks, months or years. I think asking a model to philosophise about our future, based on a past curated by Reddit, is new, but I'm glad we did it.

Tackling Tuberculosis with AI: Novel TBMeld Platform to Transform Research in the Prevention and Treatment of Tuberculosis – GlobeNewswire

InveniAI announces a partnership with the Mueller Health Foundation (MHF) to create a first-of-its-kind AI-driven platform to be used for the discovery, tracking, and evaluation of 1) novel TB vaccines, 2) new TB treatment combinations with existing medicines, 3) novel TB treatment opportunities with adjuvant combinations, and 4) tailored precision medicine approaches for the prevention and treatment of TB.

GUILFORD, Conn., July 31, 2020 (GLOBE NEWSWIRE) -- InveniAI LLC, a global leader pioneering the application of artificial intelligence (AI) and machine learning (ML) to transform innovation across drug discovery and development, is pleased to announce a strategic partnership with The Mueller Health Foundation (MHF), a private foundation dedicated to supporting innovative, accessible and affordable solutions to generate transformational treatment modalities and, ultimately, cures for lethal infectious diseases across the globe. The Mueller Health Foundation's main focus has been on the management of multidrug-resistant (MDR), extensively drug-resistant (XDR), and programmatically incurable forms of tuberculosis (totally drug-resistant tuberculosis), which pose enormous challenges similar to those of the pre-antibiotic era. The Mueller Health Foundation will partner with InveniAI to create a new high-value AI-driven machine learning platform called TBMeld to identify and accelerate transformative solutions and vaccines for the management, treatment, and cure of TB. Additionally, the new platform will incorporate predictive modeling functionalities to estimate the effectiveness of TB drug compounds and compound combinations, as well as the effectiveness of new TB vaccines.

"Our mission at the Mueller Health Foundation (MHF) is to pioneer the use of precision medicine to provide tailored, highly effective, and shorter treatment options for TB patients affected by both resistant and non-resistant strains of TB. We at MHF are also exploring novel ways of making existing TB vaccines more effective for all types of patients," said Dr. Peter Mueller, Founder and President of the Mueller Health Foundation. He also stated: "We are particularly excited to make use of the machine learning algorithms and predictive modeling capabilities offered by InveniAI to find better approaches to preventing and treating tuberculosis faster and more effectively. What especially excites us is the opportunity to integrate this new platform, TBMeld, into our own blockchain initiative TBConnect, which aims to create a global network of TB scientists, health care providers, NGOs, and government entities to foster the sharing and analysis of both research and public health information."

"We are excited to create an AlphaMeld-powered platform that, together with complementary technologies such as blockchain and the crowdsourcing of knowledge, information, and experts around the world, will allow for the comprehensive collation of troves of data to expedite the discovery of novel solutions and vaccines to address this unmet need in the management of resistant TB. The predictive foresight enabled by the platform will allow us to focus on concepts that will have the highest probability of success," said Krishnan Nandabalan, Ph.D., President and CEO, InveniAI LLC.

About Tuberculosis

Tuberculosis has been named by the WHO as one of the 10 leading causes of death in the world, and it is a disease that occurs in every part of the world. According to the WHO's latest report, published in 2019, an estimated 10 million people fell ill with tuberculosis worldwide in 2018. This included 5.7 million men, 3.2 million women and 1.1 million children. An estimated 1.5 million people died of the disease in 2018. There were cases in all countries and age groups. In 2018, the 30 high-TB-burden countries accounted for 87% of new TB cases. Eight countries account for two thirds of the total, with India leading the count, followed by China, Indonesia, the Philippines, Pakistan, Nigeria, Bangladesh and South Africa. Multidrug-resistant TB (MDR-TB) remains a public health crisis and a health security threat. WHO estimates that there were 484,000 new cases with resistance to rifampicin. While TB incidence is falling at about 2% per year globally, this rate needs to accelerate to a 4% to 5% annual decline to reach the health targets of the Sustainable Development Goals by 2030. (Source: WHO 2019 Global Tuberculosis Report)

About The Mueller Health Foundation

The Mueller Health Foundation is a family-led philanthropic organization established in February 2015 that prides itself on supporting innovative, accessible and affordable solutions to generate transformational treatment modalities and, ultimately, cures for lethal infectious diseases across the globe. The Mueller Health Foundation accomplishes this by addressing latency and increased antimicrobial resistance as the underlying core problem. Since its inception, the foundation's main focus has been on the management of multidrug-resistant (MDR), extensively drug-resistant (XDR) and programmatically incurable forms of tuberculosis (totally drug-resistant tuberculosis). The Mueller Health Foundation focuses on a set of core functions to address tuberculosis.

For more information, please visit: http://www.muellerhealthfoundation.org.

About InveniAI

InveniAI LLC, based in Guilford, Conn., is a global leader pioneering the application of artificial intelligence (AI) and machine learning (ML) to transform innovation across drug discovery and development by identifying and accelerating transformative therapies for diseases with unmet medical needs. The company leverages AI and ML to harness petabytes of disparate data sets to recognize and unlock value for AI-based drug discovery and development. Numerous industry collaborations in Big Pharma, Specialty Pharma, Biotech, and Consumer Healthcare showcase the value of leveraging our technology to meld human experience and expertise with the power of machines to augment R&D decision-making across all major therapeutic areas. The company leverages the AlphaMeld platform to generate drug candidates for our industry partners and internal drug portfolio. For more information, visit http://www.inveniai.com.

Contact:
Anita Ganjoo, Ph.D.
Communications
InveniAI
T: +1 203-273-8388
aganjoo@inveniai.com

Judith Mueller
Executive Director
The Mueller Health Foundation
T: +1 508-333-1184
judith@muellerhealthfoundation.org

Toyota’s billion-dollar AI research center has a new self-driving car – The Verge

The Toyota Research Institute (TRI) showed off its first self-driving car this week, a Lexus LS 600hL test vehicle equipped with LIDAR, radar, and camera arrays to enable self-driving without relying too heavily on high-definition maps.

The vehicle is the base for two of TRI's self-driving research paths: Chauffeur and Guardian. Chauffeur is research into Level 4 self-driving, where the car is restricted to certain geographical areas like a city or interstates, as well as Level 5 autonomy, which would work anywhere. Guardian is a driver-assist system that monitors the environment around the vehicle, alerting the driver to potential hazards and stepping in to assist with crash avoidance when necessary.

Toyota thinks Guardian's research will be deployed more quickly than Chauffeur's. Similar tech is available in many cars today in safety features like automatic emergency braking.

The car is part of a billion-dollar investment Toyota announced in late 2015 into the TRI, which has a mandate to develop AI technologies for autonomous cars and robot helpers for the home. The Institute has its headquarters near Stanford in California and satellite facilities near MIT in Massachusetts and the University of Michigan campus in Ann Arbor.

Google Wants You to Build AI That Benefits Society

Okay Google

Google’s philanthropic wing announced a new artificial intelligence contest Monday: develop or propose an algorithm that can help feed the hungry, protect wildlife, improve healthcare, or solve any of society’s many other problems.

As The Verge reported, this “AI for Social Good” contest may be an effort to reaffirm Google’s position as the good guys after a rash of controversies and ethical quandaries. The corporation is often seen as the most moral of the Silicon Valley giants — though that’s admittedly a low bar.

How to Help

Criticisms of Google aside, there’s a good chance that the algorithms funded and built as part of this contest could accomplish some good deeds — as long as people actually want to implement them in the real world.

In an era of looming climate collapse, extinction, and economic inequality, we could use all the help we can get.

Don’t Give up

Of course, we can’t sit back and let algorithms try and sort out our problems — the most cutting-edge AI systems out there can just gather data, spot patterns, and make predictions. That’s about it, really.

So while Google’s contest could bring about some tangible good, we all need to remember that technology isn’t about to fix all of society. The rest is up to those with the power to change things, if we can convince them to.

READ MORE: Google is hosting a global contest to develop AI that’s beneficial for humanity [The Verge]

More on fixing AI: Google Adds AI Fairness Lessons to its Machine Learning Crash Course

Satellites and AI will bring real-time, real-world data to your phone – TNW

The line for the SXSW panel "Eyes in the Sky: The Future of AI and Satellites" snaked around many corners in Austin's JW Marriott Hotel. Understandably so: AI coupled with space shit? Bring it on.

Spaceknow Inc.'s CEO Pavel Machalek did most of the talking during the session. Spaceknow is a San Francisco-based company building an AI system that can process the petabytes of data from the hundreds of commercial satellites circling above us.

"We are digitizing the physical world, so we can build apps on top of it," Machalek stated. According to the Czech CEO, we're currently going through a sea change in how we use satellite data.

Everything from camera technology to the satellites themselves to launching those satellites into space is getting cheaper. Couple that with the abundance of computing power and the development of more robust machine learning systems, and it follows that "we can start extracting actionable information about the world," Machalek says.

His company works for lots of industrial clients who want to know how many ships visit a certain harbor, or how many trucks pull up to a refinery to move oil. But some of the information they're extracting is also fed to the Bloomberg terminal, informing investors about the growth of industrial areas in China.

"By counting and classifying things you get as objective a grip on reality as possible," he says, after telling a story about how the information they collect contradicted the official numbers the Chinese government put out. "In a world like this, in which people make up statistics, our numbers offer an objective look."

In a similar way, Spaceknow also publishes the Africa Night Lights Index, an index based on the light intensity measured by satellites and aggregated by individual country, as a more reliable economic indicator for developing countries in Africa.
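The core of such an index, aggregating pixel-level light intensity by country, can be sketched in a few lines. Everything below is illustrative: the country names, radiance values, and flat-list input are made up, and a real pipeline would start from georeferenced raster imagery rather than a list of pairs.

```python
from collections import defaultdict

# Hypothetical pixel-level radiance readings from night-time satellite imagery,
# given as (country, radiance) pairs. Real inputs would be georeferenced raster cells.
readings = [
    ("Kenya", 12.4), ("Kenya", 8.1), ("Kenya", 15.0),
    ("Ghana", 9.7), ("Ghana", 11.2),
]

def night_lights_index(readings):
    """Aggregate light intensity by country into a single index value."""
    totals = defaultdict(lambda: [0.0, 0])
    for country, radiance in readings:
        totals[country][0] += radiance
        totals[country][1] += 1
    # Mean radiance per country stands in for the economic indicator.
    return {country: s / n for country, (s, n) in totals.items()}

index = night_lights_index(readings)
```

The interesting engineering is upstream of this step (cloud masking, calibration across satellite passes, mapping pixels to borders); the aggregation itself is deliberately simple so the resulting number is easy to compare across countries and over time.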

In the end, Machalek says he'd like to cover the whole world with Spaceknow's system, allowing anyone with a smartphone to do real-time queries about real-world data. Meaning you could check from space how long the line is for the bar you want to go to, I guess.

Researchers Hope to Harness AI to Understand, Predict Response to ICIs – Cancer Therapy Advisor

Researchers in France are launching a new trial that will harness the power of artificial intelligence-based analysis and apply it to the identification of mechanisms underlying the response or resistance to immunotherapy with immune checkpoint inhibitors in patients with non-small cell lung cancer (NSCLC).1

In the first part of the study, the PIONeeR trial, patients with NSCLC treated with immune checkpoint inhibitors will be carefully monitored, with comprehensive searches for immune-related markers, gut microbiota, pharmacokinetics, pharmacogenomics, pharmacogenetics, and multiple biological parameters, explained study researcher Joseph Ciccolini, PharmD, PhD, professor of pharmacokinetics at the Center for Research on Cancer at Aix-Marseille University in France.

"All of this knowledge will lead to an original data analysis step using sophisticated mathematical modeling, the QUANTIC part of the project, to help understand what makes patients respond (and survive) and what makes other patients progress on immunotherapy," Dr Ciccolini said.

The goal of this study is an important one, said Lee A.D. Cooper, PhD, director of computational pathology and the center for computational imaging and signal analysis at Northwestern University Feinberg School of Medicine in Chicago, Illinois.

"Understanding who will respond to these therapies is one of the fundamental challenges that exists right now," Dr Cooper said. "Predicting response is a challenging problem. The QUANTIC team is not just looking at predicting who will respond to treatment, but [is] going beyond that to understanding why these people respond."

In their explanation of the combined PIONeeR/QUANTIC project, Dr Ciccolini and colleagues explain that the data from PIONeeR will amount to hundreds of quantitative variables per time point per patient.

To analyze the data, the researchers will use mechanistic modeling. This type of modeling is thought to offer value beyond black-box artificial intelligence because the results it produces are interpretable.

"This allows [the models] to account for the biological meaning of part of the data (eg, quantification of immune players), and to test biological hypotheses, which improves our mechanistic understanding of the processes at play," they wrote.

Not all of the data will have biological meaning, though, so some part of the modeling will remain biologically agnostic and will rely on machine learning alone. Additionally, the researchers state that mixed-effect statistical learning will be used in the analysis.

"All patients' data will be pooled together for the learning process, which strengthens estimation of the mechanistic parameters," they wrote. "Machine learning for inclusion of baseline covariates will further yield new algorithms able to predict the response/relapse patterns, including possible pseudo- or hyperprogression."
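As a loose illustration of what learning from pooled baseline covariates can look like in miniature (this is not the QUANTIC team's actual method, and the covariates, values, and outcome labels below are entirely invented placeholders), a toy response classifier could be trained like this:

```python
import math

# Invented pooled patient records: (baseline covariates) -> response label.
# The two features, a PD-L1 expression fraction and a neutrophil/lymphocyte
# ratio, are chosen only as plausible-sounding placeholders.
patients = [
    ((0.80, 2.1), 1), ((0.65, 2.8), 1), ((0.70, 3.0), 1),
    ((0.10, 6.5), 0), ((0.05, 5.9), 0), ((0.20, 7.2), 0),
]

def train_logistic(data, lr=0.5, epochs=2000):
    """Fit a tiny logistic-regression classifier by stochastic gradient descent."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in data:
            p = 1.0 / (1.0 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
            err = p - y  # gradient of the log-loss with respect to the logit
            w[0] -= lr * err * x1
            w[1] -= lr * err * x2
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Return 1 if the model puts at least 50% probability on response."""
    return 1 if 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b))) >= 0.5 else 0

w, b = train_logistic(patients)
```

The real project pools hundreds of variables per time point and combines this kind of statistical learning with mechanistic models; the sketch only shows the pooled-covariates-to-prediction shape of the problem.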

Making The Internet Of Things (IoT) More Intelligent With AI – Forbes

According to IoT Analytics, there are over 17 billion connected devices in the world as of 2018, more than 7 billion of which are internet of things (IoT) devices. The Internet of Things is the collection of the various sensors, devices, and other technologies that aren't meant to interact directly with consumers the way phones or computers do. Rather, IoT devices provide information, control, and analytics, connecting a world of hardware devices to each other and to the greater internet. With the advent of cheap sensors and low-cost connectivity, IoT devices are proliferating.

02 April 2019, Lower Saxony, Hannover: A so-called learning factory with a conveyor belt for sorting products is located at the Fischertechnik stand at the Hanover Fair. From 1 to 5 April, everything at Hannover Messe will revolve around networking, learning machines and the Internet of Things. Photo: Hauke-Christian Dittrich/dpa (Photo by Hauke-Christian Dittrich/picture alliance via Getty Images)

It is no wonder that companies are inundated with data from these devices and are looking to AI to help manage the devices as well as to gain more insight and intelligence from the data exuded by these masses of chatty systems. However, it is much more difficult to manage and extract valuable information from these systems than we might expect. There are many aspects and subcomponents to IoT, such as connectivity, security, data storage, system integration, device hardware, application development, and even networks and processes, all of which are ever changing in this space. Another layer of complication with IoT has to do with scale of functionality. Oftentimes, it's easy to build sensors that can be accessed from a smart device, but creating devices that are reliable, remotely controlled and upgradable, secure, and cost-effective is a much more complicated matter.

How AI is Transforming IoT

On a recent AI Today podcast, Rashmi Misra from Microsoft shared how AI and IoT are combining to provide greater visibility and control of the wide array of devices and sensors connected to the internet. At Microsoft, Rashmi leads a team that builds IoT and artificial intelligence (AI) solutions, working across partners of all sorts, such as device manufacturers, application developers, systems integrators, and other vertically focused partners who want to deploy key AI technologies in IoT fields. Her Microsoft team is focused on gaining insights and knowledge from the data that IoT devices create, simplifying the access and reporting of that data. (Disclosure: I am a host of the AI Today podcast)

IoT is transforming business models by helping companies move from simply making products and services to giving their customers desired outcomes. By impacting organizations' business models, the combination of IoT-enabled devices and sensors with machine learning creates a collaborative and interconnected world that aligns itself around outcomes and innovation. This combination of IoT and AI is changing many industries and the relationships that businesses have with their customers. Businesses can now collect and transform data into usable and valuable information with IoT.

As an organization applies digital transformation principles to its business, the combination of IoT and AI can create a disruption within its industry. Whether an organization is using IoT and AI to engage customers, implement conversational agents, customize user experiences, obtain analytics, or optimize productivity with insights and predictions, the combination creates a dynamic where companies are able to gain high-quality insight into every piece of data, from what customers are actually looking at and touching to how employees, suppliers, and partners are interacting with different aspects of the ecosystem. Instead of just having business processes modeled in software in a way that approximates the real world, IoT devices give systems an actual interface to the real world. Any place where you can put a sensor or a device to measure, interact, or analyze something, you can put an IoT device connected to the AI-enabled cloud to add significant amounts of value.

Using AI to Help Make Sense of IoT Data

Common challenges organizations face today with AI and IoT concern the application, accessibility, and analysis of IoT data. If you have a pool of data from various sources, you can run some statistical analysis on that data. But if you want to be proactive in predicting events and taking future actions accordingly, such as knowing when to change a drill bit or anticipating a breakdown in a piece of machinery, a business needs to learn how to apply these technologies to discern those patterns in its data and processes.

The sheer quantity of IoT data, especially in organizations that have deployed sensors or tags down to the individual unit level, is significant. This massive amount of constantly changing data is too difficult to manage with traditional business intelligence and analytics tools. This is where AI steps in. Through the use of unsupervised learning and clustering approaches, machine learning systems can automatically identify normal and abnormal patterns in data and alert when things deviate from observed norms, without requiring advance setup by human operators. Likewise, these AI-enabled IoT systems can automatically surface relevant insights that would otherwise remain buried in the haystack of data.
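The learn-the-norms-and-alert-on-deviation approach described above can be sketched in a few lines. This is a minimal illustration, not a production pipeline: real systems typically use clustering or density-based models, and the sensor names and readings here are invented.

```python
import statistics

def fit_norms(history):
    """Learn 'normal' behavior from past readings: per-sensor mean and spread."""
    return {sensor: (statistics.mean(vals), statistics.pstdev(vals))
            for sensor, vals in history.items()}

def flag_anomalies(norms, reading, k=3.0):
    """Alert on sensors whose latest value deviates more than k sigma from the norm."""
    alerts = []
    for sensor, value in reading.items():
        mean, std = norms[sensor]
        if std > 0 and abs(value - mean) > k * std:
            alerts.append(sensor)
    return alerts

# Hypothetical telemetry from two sensors on one machine
history = {
    "temp_c":    [21.0, 21.4, 20.8, 21.1, 21.3, 20.9],
    "vibration": [0.02, 0.03, 0.02, 0.04, 0.03, 0.02],
}
norms = fit_norms(history)
print(flag_anomalies(norms, {"temp_c": 21.2, "vibration": 0.35}))  # → ['vibration']
```

No human operator had to define what "abnormal vibration" means; the norm is learned from the data itself, which is the point the article is making.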

Enterprises are implementing AI-enabled IoT systems in a number of different ways. Solutions firms are producing prepackaged code and templates that include tried and tested models for specific application domains such as shipping and logistics, manufacturing, energy, environmental monitoring, and building and facilities operations. Others are creating custom solutions, building and training their own models and leveraging cloud providers to harness external compute power. Some solutions centralize AI capabilities in on-premise or cloud-based offerings, while others aim to decentralize AI capability, pushing machine learning models to the edge to keep the data close to the device and speed up performance. There are a number of ways to implement this technology, and the challenge lies in applying and accessing it appropriately.
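The edge-versus-cloud trade-off can be illustrated with a toy example: score each reading locally and forward only the interesting ones, which is the bandwidth-and-latency argument for pushing models to the edge. The threshold model here is a hypothetical stand-in for a trained ML model deployed on the device.

```python
class EdgeFilter:
    """Minimal sketch of edge inference: run a lightweight model on the
    device and only forward readings the cloud actually needs, cutting
    latency and bandwidth. A fixed threshold stands in for a real model."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.uplink = []  # stands in for messages actually sent to the cloud

    def ingest(self, reading):
        # Score locally; only out-of-range readings leave the device.
        if reading["value"] > self.threshold:
            self.uplink.append(reading)

edge = EdgeFilter(threshold=80.0)
for v in [42.0, 95.5, 61.2, 88.1]:
    edge.ingest({"sensor": "motor_temp", "value": v})
print(len(edge.uplink))  # → 2: only the two out-of-range readings are uploaded
```

A centralized design would instead ship all four readings upstream and score them there; the right split depends on connectivity, device capability, and how fresh the model at the edge needs to be.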

Today, we are seeing a lot of growth in both AI and IoT. These technologies combine to enable the next level of automation and productivity while decreasing costs. As consumers, businesses, and governments deploy IoT in a variety of environments, our world will change greatly and allow us all to make better choices. It's already rapidly changing everything from retail to supply chain to health care. AI-enabled IoT is transforming the energy industry with smart energy solutions, for example where a city or town wants to create a decentralized power trade among houses with solar panels. Rashmi shares an example of how IoT is changing supply chain and logistics: milk is very susceptible to changes in temperature, so when milk is transported from one place to another, you can use IoT to track the temperature and humidity of the environment every step of the way. The private and public sectors stand to benefit hugely by gaining more intelligence from all the devices out there.

The proliferation of IoT devices is making the future a highly connected world with instant access to information. There is now a need for AI to manage all those devices and to make sense of the data that comes back from them. In these ways, AI and IoT are highly symbiotic and will continue to have an intertwined relationship moving forward.

More:

Making The Internet Of Things (IoT) More Intelligent With AI - Forbes

Precipio Achieves Impressive Initial Results of AI Decision-Support Tool – AiThority

Groundbreaking clinical and economic impact to be generated

Specialty diagnostics company Precipio, Inc., announced that preliminary results from its artificial intelligence (AI) initial MVP (Minimal Viable Product) model demonstrate profound clinical value. The next phase is to develop a working platform expected to become commercial within the next 12-18 months.

The diagnosis of hematopoietic diseases (via the analysis of bone marrow and peripheral blood samples) has always suffered from an inherent systemic flaw, stemming from the expectation that oncologists provide a clinical suspicion upon submitting a biopsy for diagnosis. The clinical suspicion determines the pathway of diagnosis and is the sole driver for the laboratory in its test selection, intended to confirm or rule out the oncologist's clinical suspicion. If the clinical suspicion is incorrect, the lab will embark down the wrong path, potentially resulting in a mistaken diagnosis. Numerous studies indicate that in blood-related cancers, the rate of misdiagnosis is approximately 25% of patients tested.

As part of our mission to battle misdiagnosis, Precipio took a different approach, whereby the oncologist's clinical suspicion is one factor within the decision process, rather than the sole driving factor. Precipio developed a proprietary triaging algorithm that examines multiple patient factors (CBC tests, chemistry results, patient history, clinical symptoms, and others) in order to arrive at its own clinical suspicion, which, more often than not, varied from the clinician's suspicion. We believe this process is a substantive contributor to Precipio's >99% accuracy rate.
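The idea of combining multiple patient factors rather than relying on a single clinical suspicion can be illustrated abstractly. The factors, thresholds, and weights below are entirely invented for this sketch and do not reflect Precipio's proprietary algorithm.

```python
def triage_suspicion(patient):
    """Hypothetical multi-factor triage score: combine several inputs
    instead of relying on one clinical suspicion. All factor names,
    cutoffs, and weights are invented for illustration only."""
    score = 0.0
    if patient["wbc_k_per_ul"] > 11.0:   # elevated white blood cell count
        score += 0.4
    if patient["hgb_g_per_dl"] < 12.0:   # low hemoglobin
        score += 0.3
    if patient["night_sweats"]:          # reported clinical symptom
        score += 0.2
    if patient["family_history"]:        # relevant patient history
        score += 0.1
    return "elevated suspicion" if score >= 0.5 else "routine follow-up"

print(triage_suspicion({"wbc_k_per_ul": 14.2, "hgb_g_per_dl": 10.5,
                        "night_sweats": False, "family_history": False}))
# → elevated suspicion
```

The structural point is that no single input, including the clinician's own hypothesis, drives the outcome on its own; a real system would learn the weights from labeled cases rather than hand-code them.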


Precipio's data clearly demonstrates that, overall, clinicians have a low likelihood of arriving at the correct clinical suspicion. Precipio's initial AI MVP achieved accuracy results which are, in aggregate, double those of clinicians. With these initial results based on a limited data set, we believe that additional data will yield far better results. The machine-learning components of the AI systems currently in development enable our model to improve with each case and reach increasingly higher levels of accuracy.

Although the current algorithm is used internally on every case analyzed by our lab staff at Precipio, we recognized that this systemic flaw is a global problem, requiring a global solution. A computerized system that receives input factors and provides a correct clinical suspicion would serve as a decision-support tool for any physician suspecting that their patient has a blood-related cancer. A SaaS-based system could effectively provide access to this solution on a worldwide scale.

Each year, approximately 40 million people in the US undergo routine blood tests, with those test results providing precisely the factors needed for our AI system. Absent a proper assessment, many early-stage cancers go undetected until the patient becomes symptomatic, typically a sign of disease progression. Access to the AI model may provide a groundbreaking change in the way suspected blood-related cancer patients are assessed prior to the biopsy and diagnosis stages. While it is currently too early to quantify the value of this system, given the annual spend on diagnostics and the waste due to misdiagnosis of blood-related cancers, the market potential appears to be substantial.


The potential beneficiaries from this system among the various healthcare constituents are:

"AI appears in almost every facet of our life, helping us navigate movie decisions and the traffic while driving home from work. Why not use AI technology to navigate the complex process of diagnosing patients with terrible diseases such as blood-related cancers?" said Ori Karev, Precipio's Chief Strategy Officer and architect of this initiative. "The team at Precipio has developed a groundbreaking algorithm with demonstrated clinical benefit to their patients and physicians. Now we are going to transform this into a product for patients and their physicians around the world."

Over the next few months, under Ori's leadership, a team will be formed to develop this project. The team will also be working with leading payors, healthcare networks, and EMR companies to partner with Precipio in the development of this service. The company will continue to provide periodic updates as these efforts progress.


Excerpt from:

Precipio Achieves Impressive Initial Results of AI Decision-Support Tool - AiThority

MIT researchers release Clevrer to advance visual reasoning and neurosymbolic AI – VentureBeat

Researchers from Harvard University and the MIT-IBM Watson AI Lab have released Clevrer, a data set for evaluating AI models' ability to recognize causal relationships and carry out reasoning. A paper sharing initial findings about the CoLlision Events for Video REpresentation and Reasoning (Clevrer) data set was published this week at the all-digital International Conference on Learning Representations (ICLR).

Clevrer builds on Clevr, a data set released in 2016 by a team from Stanford University and Facebook AI Research, including ImageNet creator Dr. Fei-Fei Li, for analyzing the visual reasoning abilities of neural networks. Clevrer cocreators, including Chuang Gan of the MIT-IBM Watson AI Lab and Pushmeet Kohli of DeepMind, introduced the Neuro-Symbolic Concept Learner (NS-CL), a neural-symbolic model applied to Clevr at ICLR one year ago.

"We present a systematic study of temporal and causal reasoning in videos. This profound and challenging problem, deeply rooted in the fundamentals of human intelligence, has just begun to be studied with modern AI tools," the paper reads. "Our newly introduced Clevrer data set and the NS-DR model are preliminary steps toward this direction."

The data set includes 20,000 synthetic videos of colliding objects on a tabletop created with the Bullet physics simulator, together with a natural language data set of questions and answers about objects in videos. The more than 300,000 questions and answers are categorized as descriptive, explanatory, predictive, and counterfactual.
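As a rough illustration of how such a data set might be consumed, the records below mimic the four question categories; the field names and question texts are illustrative stand-ins, not Clevrer's actual schema.

```python
from collections import Counter

# Hypothetical records mimicking Clevrer's four question categories.
# The dict keys and question wording are invented for this sketch.
questions = [
    {"type": "descriptive",    "q": "What color is the object that collides with the cube?"},
    {"type": "explanatory",    "q": "Which event is responsible for the cylinder's motion?"},
    {"type": "predictive",     "q": "Will the sphere hit the cube?"},
    {"type": "counterfactual", "q": "What would happen if the cube were removed?"},
]

# A benchmark harness would typically report accuracy per category,
# since counterfactual questions are much harder than descriptive ones.
by_category = Counter(q["type"] for q in questions)
print(sorted(by_category))
```

Scoring each category separately is what lets the paper's authors show where current models break down: descriptive questions are largely solved, while causal and counterfactual ones are not.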


MIT-IBM Watson Lab director David Cox told VentureBeat in an interview that he believes the data set can make progress toward creating hybrid AI that combines neural networks and symbolic AI. IBM Research will apply the approach to IT infrastructure management and industrial settings like factories and construction sites, Cox said.

"I think this is actually going to be important for pretty much every kind of application," Cox said. "The very simple world that we're seeing, where these balls are moving around, is really the first step on the journey to look at the world, understand that world, and be able to make plans about how to make things happen in that world. So we think that's probably going to be across many domains, and indeed vision and robotics are great places to start."

The MIT-IBM Watson AI Lab was created three years ago as a way to look for disruptive advances in AI related to the general theme of broad AI. Some of that work like ObjectNet highlighted the brittle nature of deep learning success stories like ImageNet, but the lab has focused on the combination of neural networks and symbolic or classical AI.

Like neural networks, symbolic AI has been around for decades. Cox argues that just as neural networks waited for the right conditions (enough data, ample compute), symbolic AI was waiting for neural networks in order to experience a resurgence.

Cox says the two forms of AI complement each other well and together can build more robust and reliable models with less data and more energy efficiency. In a conversation with VentureBeat at the start of the year, IBM Research director Dario Gil called neurosymbolic AI one of the top advances expected in 2020.

Rather than simply mapping inputs to outputs the way neural networks do, neurosymbolic systems can represent knowledge or programs directly. Cox says this may lead to AI better equipped to solve real-world problems.

"Google has a river of data, Amazon has a river of data, and that's great, but the vast majority of problems are more like puzzles, and we think that to move forward and actually make AI live beyond the hype we need to build systems that can do that, that have a logical component, can flexibly reconfigure themselves, that can act on the environment and experiments, that can interpret that information, and define their own internal mental models of the world," Cox said.

The joint MIT-IBM Watson AI Lab was created in 2017 with a $240 million investment.

More here:

MIT researchers release Clevrer to advance visual reasoning and neurosymbolic AI - VentureBeat

DeepMind Shows AI Has Trouble Seeing Homer Simpson’s Actions – IEEE Spectrum

The best artificial intelligence still has trouble visually recognizing many of Homer Simpson's favorite behaviors, such as drinking beer, eating chips, eating doughnuts, yawning, and the occasional face-plant. Those findings from DeepMind, the pioneering London-based AI lab, also suggest why DeepMind has created a huge new dataset of YouTube clips to help train AI on identifying human actions in videos that go well beyond "Mmm, doughnuts" or "D'oh!"

The most popular AI used by Google, Facebook, Amazon, and other companies beyond Silicon Valley is based on deep learning algorithms that can learn to identify patterns in huge amounts of data. Over time, such algorithms can become much better at a wide variety of tasks, such as translating between English and Chinese for Google Translate or automatically recognizing the faces of friends in Facebook photos. But even the most finely tuned deep learning relies on having lots of quality data to learn from. To help improve AI's capability to recognize human actions in motion, DeepMind has unveiled its Kinetics dataset consisting of 300,000 video clips and 400 human action classes.

"AI systems are now very good at recognizing objects in images, but still have trouble making sense of videos," says a DeepMind spokesperson. "One of the main reasons for this is that the research community has so far lacked a large, high-quality video dataset."

DeepMind enlisted the help of online workers through Amazon's Mechanical Turk service to help correctly identify and label the actions in thousands of YouTube clips. Each of the 400 human action classes in the Kinetics dataset has at least 400 video clips, with each clip lasting around 10 seconds and taken from a separate YouTube video. More details can be found in a DeepMind paper on the arXiv preprint server.
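A consumer of a Kinetics-style annotation file might parse and sanity-check it roughly as follows. The CSV layout and column names here are assumptions for illustration, not the data set's exact format.

```python
import csv
import io
from collections import defaultdict

# Hypothetical CSV in the spirit of the Kinetics annotations: each row
# names an action class, a YouTube video id, and a ~10-second clip window.
raw = """label,youtube_id,time_start,time_end
drinking beer,abc123,30,40
drinking beer,def456,12,22
eating doughnuts,ghi789,5,15
"""

clips = defaultdict(list)
for row in csv.DictReader(io.StringIO(raw)):
    start, end = int(row["time_start"]), int(row["time_end"])
    assert end - start == 10, "each clip lasts around 10 seconds"
    clips[row["label"]].append(row["youtube_id"])

# The article notes every class draws its clips from distinct videos;
# a loader can enforce that invariant cheaply.
for label, ids in clips.items():
    assert len(ids) == len(set(ids)), f"duplicate source videos in {label!r}"

print(sorted(clips))  # → ['drinking beer', 'eating doughnuts']
```

In the real data set each of the 400 classes has at least 400 such rows; the same per-class grouping would be the natural place to verify that floor as well.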

The new Kinetics dataset seems likely to represent a new benchmark for training datasets intended to improve AI computer vision for video. It has far more video clips and action classes than the HMDB-51 and UCF-101 datasets that previously formed the benchmarks for the research community. DeepMind also made a point of ensuring it had a diverse dataset, one that did not include multiple clips from the same YouTube videos.

Tech giants such as Google, a sister company to DeepMind under the Alphabet umbrella group, arguably have the best access to large amounts of video data that could prove helpful in training AI. Alphabet's ownership of YouTube, the incredibly popular online video-streaming service, does not hurt either. But other companies and independent research groups must rely on publicly available datasets to train their deep learning algorithms.

Early training and testing with the Kinetics dataset showed some intriguing results. For example, deep learning algorithms showed accuracies of 80 percent or greater in classifying actions such as "playing tennis," "crawling baby," "presenting weather forecast," "cutting watermelon," and "bowling." But the classification accuracy dropped to around 20 percent or less for the Homer Simpson actions, including slapping and headbutting, and an assortment of other actions such as "making a cake," "tossing coin," and "fixing hair."

AI faces special challenges with classifying actions such as eating because it may not be able to accurately identify the specific food being consumed, especially if the hot dog or burger is already partially consumed or appears very small within the overall video. Dancing classes and actions focused on a specific part of the body can also prove tricky. Some actions also occur fairly quickly and are only visible for a small number of frames within a video clip, according to a DeepMind spokesperson.

DeepMind also wanted to see if the new Kinetics dataset has enough gender balance to allow for accurate AI training. Past cases have shown how imbalanced training datasets can lead to deep learning algorithms performing worse at recognizing the faces of certain ethnic groups. Researchers have also shown how such algorithms can pick up gender and racial biases from language.

A preliminary study showed that the new Kinetics dataset seems to be fairly balanced. DeepMind researchers found that no single gender dominated within 340 of the 400 action classes, or else it was not possible to determine gender in those actions. The action classes that did end up gender imbalanced included YouTube clips of actions such as shaving beard or dunking basketball (mostly male) and filling eyebrows or cheerleading (mostly female).

But even action classes that had gender imbalance did not show much evidence of classifier bias. This means that even the Kinetics action classes featuring mostly male participants, such as playing poker or hammer throw, did not seem to bias the AI to the point where the deep learning algorithms had trouble recognizing female participants performing the same actions.
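The balance audit described above can be sketched as a simple dominance check per action class; the counts and the dominance threshold below are invented for illustration.

```python
def audit_balance(class_counts, dominance=0.75):
    """Flag action classes where one gender exceeds the given share of
    labeled clips. Counts and threshold are illustrative, not DeepMind's."""
    imbalanced = []
    for label, (male, female) in class_counts.items():
        total = male + female
        if total and max(male, female) / total > dominance:
            imbalanced.append(label)
    return imbalanced

# Hypothetical per-class clip counts as (male, female)
counts = {
    "shaving beard":  (380, 20),   # mostly male in the study
    "cheerleading":   (30, 370),   # mostly female
    "playing tennis": (210, 190),  # roughly balanced
}
print(audit_balance(counts))  # → ['shaving beard', 'cheerleading']
```

The study's second step, checking whether those imbalanced classes actually produce classifier bias, requires evaluating model accuracy per gender on held-out clips, which this counting sketch does not attempt.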

DeepMind hopes that outside researchers can help suggest new human action classes for the Kinetics dataset. Any improvements may enable AI trained on Kinetics to better recognize both the most elegant of actions and the clumsier moments in videos that lead people to say "d'oh!" In turn, that could lead to new generations of computer software and robots with the capacity to recognize what all those crazy humans are doing on YouTube or in other video clips.

"Video understanding represents a significant challenge for the research community, and we are in the very early stages with this," according to the DeepMind spokesperson. "Any real-world applications are still a really long way off, but you can see potential in areas such as medicine, for example, aiding the diagnosis of heart problems in echocardiograms."

View post:

DeepMind Shows AI Has Trouble Seeing Homer Simpson's Actions - IEEE Spectrum

Global AI in Asset Management Market By Technology, By Deployment Mode, By Application, By End User, By Region, Industry Analysis and Forecast, 2020 -…

New York, Sept. 28, 2020 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "Global AI in Asset Management Market By Technology, By Deployment Mode, By Application, By End User, By Region, Industry Analysis and Forecast, 2020 - 2026" - https://www.reportlinker.com/p05975407/?utm_source=GNW

The main areas where AI is gaining traction in financial asset management comprise personal financial management, investment banking, and fraud detection. With the aid of technologies like machine learning, AI, and predictive analytics, financial institutions can manage their financial assets more effectively and can also keep pace with changing consumer behavior. This will be beneficial for organizations looking to improve their business operations and processes through automation, which further results in an enhanced customer experience.

The increasing demand for automation systems in financial products and changing customer behavior are the two major factors that contribute to the growing demand for artificial intelligence (AI) in the asset management market. AI is largely dependent on digital data produced from various sources like business processes and customer service. Financial providers such as investment banks and other institutions are using artificial intelligence to recognize and analyze hidden patterns in the collected data to improve the proficiency of asset management. By adopting these technologies, companies can prepare themselves to deal with the continually changing compliance and regulatory environment related to market risks.

The exponential increase in data volumes, low interest rates, and strict regulations are prompting asset managers to reconsider their traditional business strategies. Furthermore, new technological advancements, including artificial intelligence, have reached asset management. Natural language processing (NLP) techniques, domain-enriched machine learning (ML), and related knowledge-connection approaches have been adopted by numerous FinTech companies to deliver enhanced financial and investment services.

Based on Technology, the market is segmented into Machine Learning, Natural Language Processing, and Others. Based on Deployment Mode, the market is segmented into On-Premise and Cloud. Based on Application, the market is segmented into Portfolio Optimization, Risk & Compliance, Conversational Platform, Process Automation, Data Analysis, and Others. Based on End User, the market is segmented into BFSI, Automotive, Healthcare, Retail & eCommerce, Energy & Utilities, Media & Entertainment, and Others. Based on Region, the market is segmented into North America, Europe, Asia Pacific, Latin America, and Middle East & Africa.

The major strategies followed by the market participants are Partnerships and Product Launches. Based on the analysis presented in the Cardinal matrix, Microsoft Corporation is the major forerunner in the AI in Asset Management Market. Companies such as Amazon.com, Inc., IBM Corporation, Salesforce.com, Inc., BlackRock, Inc., Genpact Limited, IPsoft, Inc., Lexalytics, Inc., Infosys Limited, and Narrative Science, Inc. are some of the key innovators in the market.

The market research report covers the analysis of key stakeholders of the market. Key companies profiled in the report include IBM Corporation, Microsoft Corporation, Genpact Limited, Infosys Limited, Amazon.com, Inc., BlackRock, Inc. (PNC Financial Services Group), IPsoft, Inc., Salesforce.com, Inc., Lexalytics, Inc. and Narrative Science, Inc.

Recent strategies deployed in AI in Asset Management Market

Partnerships, Collaborations, and Agreements:

Sep-2020: Salesforce announced its collaboration with ServiceMax, the leader in asset-centric field service management. Following the collaboration, the latter company launched ServiceMax Asset 360 for Salesforce, a new product built on Salesforce Field Service. This product would bring ServiceMax's asset-centric approach and decade-plus of experience to more customers across a broader set of industries to help them keep critical assets running.

Aug-2020: IPsoft announced its partnership with Sterling National Bank, the wholly-owned operating bank subsidiary of Sterling Bancorp. Following the partnership, Sterling National Bank deployed Amelia, the industry-leading Digital Employee platform. Amelia would accelerate Sterling's digital transformation and provide enhanced customer service.

Aug-2020: Salesforce entered into a collaboration with Sitetracker, a cloud-based project management company. Sitetracker customers can now benefit from Einstein Analytics' native artificial intelligence to predict project outcomes and durations, gain insights on financial and project performance, and enhance the utilization of vendors, project managers, and field teams. Sitetracker updated its platform to provide customers AI-driven predictive reporting and dashboarding through Salesforce Einstein Analytics. With this upgrade, Sitetracker customers can gather specific, deep, and up-to-the-minute strategic actionable insights on how they deploy and maintain critical infrastructure.

Aug-2020: Microsoft teamed up with SimCorp, following which the latter company integrated its front-to-back investment management platform, SimCorp Dimension, with Microsoft Azure as part of the firm's cloud transformation. The multi-asset investment management solutions provider will now be able to serve clients with a scalable, secure, and cost-efficient public cloud solution during current heightened market conditions, increased competition, and regulation.

Jul-2020: BlackRock came into partnership with Citi, under which Citi works with BlackRock and its Aladdin business to improve the administration of securities services for mutual clients. Aladdin is an end-to-end investment management platform that brings risk analytics, portfolio management, trading, and operations onto a unified platform. By joining the Aladdin Provider network, Citi has been optimizing its operating model to support both BlackRock's asset management business and the wider Aladdin system.

Jul-2020: IPsoft partnered with Microsoft following which Amelia, a comprehensive digital employee, would be available in the Microsoft Azure Marketplace, an online store providing applications and services for use on Azure. The addition of Amelia aimed to enable Microsoft sellers, partners, and customers to easily integrate and take advantage of her conversational AI for the enterprise.

Jul-2020: IBM came into partnership with Verizon, a telecommunications company. Under this partnership, IBM's AI, hybrid cloud, and asset management tools have been integrated with Verizon's 5G and edge computing technologies. The partnership combined low-latency 5G networking and multi-access edge computing with the wireless carrier's ThingSpace IoT platform and an asset tracking system. IBM would provide its Watson AI tools along with data analytics and its Maximo asset monitor.

Jul-2020: Infosys partnered with Vanguard, an American registered investment advisor. The partnership would deliver a technology-driven approach to plan administration and fundamentally reshape the corporate retirement plan experience for plan sponsors and participants.

Jul-2020: Microsoft Corporation entered into a partnership with MSCI Inc. The partnership would accelerate innovation among the global investment industry. By bringing together the power of Microsoft's cloud and AI technologies with MSCI's global reach through its portfolio of investment decision support tools, the companies aim to unlock new innovations for the industry and enhance MSCI's client experience among the world's most sophisticated investors, including asset managers, asset owners, hedge funds, and banks.

Jun-2020: BlackRock signed a partnership agreement with Northern Trust, a financial services company. Following the partnership, the latter company deployed BlackRock's Aladdin investment management platform. The partnership connected Northern Trust's fund accounting, fund administration, asset servicing, and middle office capabilities to BlackRock's Aladdin platform, creating greater connectivity between the asset manager and asset servicer.

Jun-2020: IBM extended its partnership with Siemens following which the companies announced a new solution designed to optimize the Service Lifecycle Management (SLM) of assets by dynamically connecting real-world maintenance activities and asset performance back to design decisions and field modifications. This new solution established an end-to-end digital thread between equipment manufacturers and the owner/operators of that equipment by using elements of the Xcelerator portfolio from Siemens Digital Industries Software and IBM Maximo. The combined capabilities of IBM and Siemens can help companies create and manage a closed-loop, end-to-end digital twin that breaks down traditional silos to service innovation and revenue generation.

May-2020: IPsoft extended its partnership with Unisys Corporation to embed cognitive AI capabilities within InteliServe, the Unisys pervasive workplace automation platform. Together, the companies provide an integrated suite of best-in-class cognitive technology that resolves all workplace issues from tech and HR to legal and finance. Amelia is now the first point of contact for InteliServe, bringing a consistent experience and reaching all users regardless of work location (home, office, or on the run).

May-2020: Infosys announced a partnership with Avaloq, a leading wealth management software and digital technology provider, focused on providing end-to-end wealth management capabilities through digital platforms. Infosys would serve as an implementation partner for Avaloq's wealth management suite of solutions, helping clients modernize and transform their legacy systems into cutting-edge digital advisory platforms.

Apr-2020: BlackRock partnered with Microsoft Corporation to host BlackRock's Aladdin infrastructure on the Microsoft Azure cloud platform. The partnership was focused on bringing enhanced capabilities to BlackRock and its Aladdin clients, which include many of the world's most sophisticated institutional investors and wealth managers. By adopting Microsoft Azure, BlackRock accelerated innovation on Aladdin through greater computing scale and unlocked new capabilities to enhance the client experience.

Mar-2020: Genpact partnered with HighRadius, an enterprise SaaS (software-as-a-service) fintech company, to improve enterprise accounts receivable by bringing together digital automation solutions powered by advanced machine learning and artificial intelligence. The companies aim to deliver a transformative digital automation solution that enables businesses to maximize their working capital while enhancing customer and user experiences.

Acquisition and Mergers:

Aug-2019: Salesforce completed its acquisition of Tableau Software, an interactive data visualization software company. The acquisition helped Salesforce enable companies around the world to tap into data across their entire business, surface deeper insights to make smarter decisions, drive intelligent, connected customer experiences, and accelerate innovation.

May-2019: BlackRock acquired eFront, a French alternative investment management software and solutions provider. The acquisition expanded BlackRock's presence and technology capabilities in France, across Europe, and worldwide. Additionally, eFront extended Aladdin's end-to-end processing capabilities in alternative asset classes, enabling clients to get an enterprise view of their portfolios.

May-2018: Microsoft announced the acquisition of Semantic Machines, a developer of new approaches for building conversational AI. Together the companies will advance their work in conversational AI with Microsoft's digital assistant Cortana and social chatbots like XiaoIce.

Mar-2017: Genpact completed the acquisition of Rage Frameworks, a leader in knowledge-based automation technology and services providing AI for the enterprise. The acquisition extended the frontier of AI for the enterprise. Genpact embedded Rage's AI in business operations and applied it to complex enterprise issues to allow clients to generate insights and drive decisions and action, at a scale and speed that humans alone could not achieve.

Product Launches and Product Expansions:

Sep-2020: Salesforce launched the next generation of Salesforce Field Service, with new appointment scheduling and optimization capabilities, artificial intelligence-driven guidance for dispatchers, asset performance insights, and automated customer communications. This service equipped teams across industries with AI-powered tools to deliver trusted, mission-critical field service.

Aug-2020: AWS launched Contact Center Intelligence (CCI) solutions, a combination of services powered by AWSs machine learning technology. These solutions aimed to help enterprises add ML-based intelligence to their contact centers. AWS CCI solutions enabled organizations to leverage machine learning functionality such as text-to-speech, translation, enterprise search, chatbots, business intelligence, and language comprehension in their current contact center environments.

Nov-2019: IBM introduced the Maximo Asset Monitor, a new AI-powered monitoring solution. The solution was designed to help maintenance and operations leaders better understand and improve the performance of their high-value physical assets. This solution helps in generating essential insights with AI-powered anomaly detection and provides enterprise-wide visibility into critical equipment performance.

Nov-2019: Microsoft Research's Natural Language Processing Group unveiled DialoGPT (dialogue generative pre-trained transformer), a deep-learning natural language processing model for automatic conversation response generation. The model was trained on more than 147 million dialogues and achieves state-of-the-art results on several benchmarks.

Jun-2019: Amazon Connect introduced AI-Powered Speech Analytics, a solution that provides customer insights in real time. It helps agents and supervisors better understand and respond to customer needs so they can resolve customer issues and improve the overall customer experience. The solution includes pre-trained AWS artificial intelligence (AI) services that enable customers to transcribe, translate, and analyze each customer interaction in Amazon Connect, and it presents this information to assist contact center agents during their conversations.

May-2019: Salesforce released Einstein Analytics for Financial Services, a customizable analytics solution. The solution delivers AI-augmented business intelligence for wealth advisors, retail bankers, and managers. Einstein Analytics for Financial Services includes actionable insights powered by AI, built-in industry dashboards, a customizable platform to analyze external data, and built-in compliance with industry regulations.

Scope of the Study

Market Segmentation:

By Technology

Machine Learning

Natural Language Processing

Others

By Deployment Mode

On-Premise

Cloud

By Application

Portfolio Optimization

Risk & Compliance

Conversational Platform

Process Automation

Data Analysis

Others

By End User

BFSI

Automotive

Healthcare

Retail & eCommerce

Energy & Utilities

Media & Entertainment

Others

By Geography

North America

o US

o Canada

o Mexico

o Rest of North America

Europe

o Germany

o UK

o France

o Russia

o Spain

o Italy

o Rest of Europe

Asia Pacific

o China

o Japan

o India

o South Korea

o Singapore

o Malaysia

o Rest of Asia Pacific

LAMEA

o Brazil

o Argentina

o UAE

o Saudi Arabia

o South Africa

o Nigeria

o Rest of LAMEA

Companies Profiled

IBM Corporation

Microsoft Corporation

Genpact Limited

Infosys Limited

Amazon.com, Inc.

BlackRock, Inc. (PNC Financial Services Group)

IPsoft, Inc.

Salesforce.com, Inc.

Lexalytics, Inc.

Narrative Science, Inc.

Originally posted here:

Global AI in Asset Management Market By Technology, By Deployment Mode, By Application, By End User, By Region, Industry Analysis and Forecast, 2020 -...

3 Ways Companies Are Building a Business Around AI – Harvard Business Review

There is no argument about whether artificial intelligence (AI) is coming. It is here, in automobiles, smartphones, aircraft, and much else. Not least in the online search abilities, speech and translation features, and image recognition technology of my employer, Alphabet.

The question now moves to how broadly AI will be employed in industry and society, and by what means. Many other companies, including Microsoft and Amazon, also already offer AI tools which, like Google Cloud, where I work, will be sold online as cloud computing services. There are numerous other AI products available to business, like IBM's Watson, or software from emerging vendors.

Whatever hype businesspeople read around AI (and there is a great deal), the intentions and actions of so many players should alert them to the fundamental importance of this new technology.

This is no simple matter, as AI is both familiar and strange. At heart, the algorithms and computation are dedicated to unearthing novel patterns, which is what science, technology, markets, and the humanistic arts have done throughout the story of humankind.

The strange part is how today's AI works, building subroutines of patterns, and loops of patterns about other patterns, training itself through multiple layers that are only possible with very large amounts of computation. For perhaps the first time, we have invented a machine that cannot readily explain itself.

In the face of such technical progress, paralysis is rarely a good strategy. The question then becomes: How should a company that isn't involved in building AI think about using it? Even in these early days, the practices of successful early adopters offer several useful lessons:

CAMP3 is a 26-person company, headquartered in Alpharetta, Georgia, that deploys and manages wireless sensor networks for agriculture. The company also sells Googles G Suite email and collaboration products on a commission basis.

Founder and chief executive Craig Ganssle was an early user of Google Glass. Glass failed as a consumer product, but the experience of wearing a camera and collecting images in the field inspired Ganssle to think about ways farmers could use AI to spot plant diseases and pests early on.

AI typically works by crunching very large amounts of data to figure out telltale patterns, then testing provisional patterns against similar data it hasnt yet processed. Once validated, the pattern-finding methodology is strengthened by feeding it more data.
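That crunch-then-validate loop can be sketched in miniature. The toy nearest-centroid classifier below (all data and class names are invented for illustration; this is not CAMP3's actual pipeline) learns telltale patterns from labeled examples, then tests them against held-out data it has not yet processed:

```python
import random

def centroid(rows):
    # Mean of each feature across a list of feature vectors
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def fit(data):
    # data: list of (features, label); learn one centroid per label
    by_label = {}
    for feats, label in data:
        by_label.setdefault(label, []).append(feats)
    return {label: centroid(rows) for label, rows in by_label.items()}

def predict(model, feats):
    # Classify by the nearest centroid (squared Euclidean distance)
    def sqdist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(model, key=lambda label: sqdist(model[label], feats))

random.seed(0)
# Invented two-feature "leaf measurements" for two classes
data = [([random.gauss(2, 0.5), random.gauss(2, 0.5)], "healthy") for _ in range(100)]
data += [([random.gauss(5, 0.5), random.gauss(5, 0.5)], "blighted") for _ in range(100)]
random.shuffle(data)

train, holdout = data[:150], data[150:]  # validate on data the model has not seen
model = fit(train)
accuracy = sum(predict(model, f) == y for f, y in holdout) / len(holdout)
print(f"holdout accuracy: {accuracy:.2f}")
```

Feeding the fitted model more labeled examples and re-running the holdout check is the "strengthened by more data" step the paragraph describes.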

CAMP3's initial challenge was securing enough visual data to train its AI product. Not only were there relatively few pictures of diseased crops and crop pests, but they were scattered across numerous institutions, often without proper identification.

"Finding enough images of northern corn leaf blight [NCLB] took 10 months," said Ganssle. "There were lots of pictures in big agricultural universities, but no one had the information well-tagged. Seed companies had pictures too, but no one had pictures of healthy corn, corn with early NCLB, corn with advanced NCLB."

They collected whatever they could from every private, educational, and government source available, and then took a lot of pictures themselves. Training on the data, in this case, may have been easier than getting the data in the first place.

That visual training data is a scarce commodity, and a defensible business asset. "Initial training for things like NCLB, cucumber downy mildew, or sweet corn worm required tens of thousands of images," he said. With a system trained, he added, it now requires far fewer images to train for a disease.

CAMP3 trains its models on TensorFlow, an AI software framework first developed by Google and then open sourced. For computing, the company relies on Amazon Web Services and Google Compute Engine. "Now we can take the machine from kindergarten to PhD-style analysis in a few hours," Ganssle said.

The painful process of acquiring and correctly tagging the data, including time and location information for new pictures the company and customers take, gave CAMP3 what Ganssle considers a key strategic asset. "Capture something other people don't have, and organize it with a plan for other uses down the road," he said.

"With AI, you never know what problem you will need to tackle next. This could be used for thinking about soils, or changing water needs. When we look at new stuff, or start to do predictive modeling, this will be data that falls off the truck, that we pick up and use."

TalkIQ is a company that monitors sales and customer service phone calls, turns the talk into text, and then scans the words in real time for keywords and patterns that predict whether a company is headed for a good outcome: a new sale, a happy customer.
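A drastically simplified sketch of that kind of real-time scanning might look as follows. The cue phrases and weights are invented for illustration; TalkIQ's actual system uses learned language models rather than a hand-written keyword list:

```python
import re

# Invented cue phrases and weights (illustrative only, not TalkIQ's model):
# positive cues point toward a sale, negative ones toward churn risk.
OUTCOME_CUES = {
    r"\bsign (?:me )?up\b": 2.0,
    r"\bsounds great\b": 1.0,
    r"\bfree trial\b": 0.5,
    r"\btoo expensive\b": -1.5,
    r"\bcancel\b": -2.0,
    r"\bcompetitor\b": -0.5,
}

def score_transcript(text):
    # Sum the weight of every cue occurrence in the lowercased transcript
    text = text.lower()
    return sum(weight * len(re.findall(pattern, text))
               for pattern, weight in OUTCOME_CUES.items())

good_call = "That sounds great, the free trial convinced me. Sign me up."
bad_call = "This is too expensive, I want to cancel."
print(score_transcript(good_call))  # positive: likely heading toward a sale
print(score_transcript(bad_call))   # negative: likely heading toward churn
```

Run against a live transcription stream, a score like this could be recomputed after every utterance, which is the real-time prediction the article describes.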

The company got its start after Jack Abraham, a former eBay executive and entrepreneur, founded ZenReach, a Phoenix company that connects online and offline commerce, in part through extensive call centers.

"I kept thinking that if I could listen to everything our customers were asking for, I would capture the giant brain of the company," said Abraham. "Why does one rep close 50% of his calls, while the other gets 25%?"

The data from those calls could improve performance at ZenReach, he realized, but could also be the training set for a new business that served other companies. TalkIQ, based in San Francisco, took two years to build. Data scientists examined half a million conversations preserved in the company's computer-based ZenReach phone system.

As with CAMP3, part of the challenge was correctly mapping information (in this case, conversations in crowded rooms, sometimes over bad phone connections) and tagging things like product names, features, and competitors. TalkIQ uses automated voice recognition and algorithms that understand natural language, among other tools.

Since products and human interactions change even faster than biology, the TalkIQ system needs to retrain almost continuously to predict well, said Dan O'Connell, the company's chief executive. "Every prediction depends on accurate information," he said. "At the same time, you have to be careful of overfitting, or building a model so complex that the noise is contributing to results as much as good data."

Built as an adjacency to ZenReach, TalkIQ must also tweak for individual customer and vertical industry needs. The product went into commercial release in January and, according to Abraham, now has 27 companies paying for the service. "If we're right, this is how every company will run in the future."

Last March the Denver-based company Blinker launched a mobile app for buying and selling cars in the state of Colorado. Customers are asked to photograph the back of their vehicle, and within moments of uploading the image the car's year, make, and model, and its resale value are identified. From there it is a relatively simple matter to offer the car for sale, or seek refinancing and insurance.

The AI that identifies the car so readily seems like magic. In fact, the process is done using TensorFlow, along with the Google Vision API, to identify the vehicle. Blinker has agreements with third-party providers of motor vehicle data, and once it identifies the plate number, it can get the other information from those files (where possible, the machine also checks available image data).
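The overall flow (read the plate from an image, look up the record, then estimate a value) can be sketched with stand-in data. Every function name, record, and price below is hypothetical; Blinker's real system calls the Google Vision API and licensed motor vehicle databases:

```python
# Hypothetical sketch of the image-to-valuation flow. The records, prices,
# and depreciation rule are invented, and read_plate() stands in for a
# real OCR / vision call.
VEHICLE_RECORDS = {  # stand-in for a third-party registration database
    "ABC1234": {"year": 2015, "make": "Honda", "model": "Civic"},
    "XYZ9876": {"year": 2018, "make": "Ford", "model": "F-150"},
}
BASE_VALUES = {("Honda", "Civic"): 18000, ("Ford", "F-150"): 35000}

def read_plate(image_bytes):
    # Placeholder for an OCR / vision call; here we pretend the photo
    # decodes directly to a plate string.
    return image_bytes.decode("ascii")

def appraise(image_bytes, current_year=2017):
    plate = read_plate(image_bytes)
    rec = VEHICLE_RECORDS.get(plate)
    if rec is None:
        return None  # plate not found in the database
    base = BASE_VALUES[(rec["make"], rec["model"])]
    # Naive depreciation: 15% per year of age
    value = base * (0.85 ** (current_year - rec["year"]))
    return {**rec, "resale_estimate": round(value)}

print(appraise(b"ABC1234"))
```

The point of the sketch is architectural: once the vision step yields a plate string, everything downstream is ordinary database lookup and arithmetic.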

Blinker has filed for patents on a number of the things it does, but the company's founder and chief executive thinks his real edge is his 44 years in the business of car dealerships.

"Whatever you do, you are still selling cars," said Rod Buscher. "People forget that the way it feels, and the pain points of buying a car, are still there."

He noted that Beepi, an earlier peer-to-peer attempt to sell cars online, raised $150 million "with a great concept and smart guys. They still lost it all. The key to our success is domain knowledge: I have a team of experts from the auto selling business."

That means taking out the intrusive ads and multi-click processes usually associated with selling cars online and giving customers a sense of fast, responsive action. If the car is on sale, the license number is covered with a Blinker logo, offering the seller a sense of privacy (and Blinker some free advertising.)

Blinker, which hopes to go national over the next few years, does have AI specialists, who have trained a system with over 70,000 images of cars. Even these had the human touch: the results were verified on Amazon's Mechanical Turk, a service where humans perform inexpensive tasks online.

While the AI work goes on, Buscher spent over a year bringing in focus groups to see what worked, and then watched how buyers and sellers interacted (frequently, they did their sales away from Blinker, something else the company had to fix).

"I've never been in tech, but I'm learning that on the go," he said. "You still have to know what a good and bad customer experience is like."

No single tool, even one as powerful as AI, determines the fate of a business. As much as the world changes, deep truths around unearthing customer knowledge, capturing scarce goods, and finding profitable adjacencies will matter greatly. As ever, the technology works to the extent that its owners know what it can do, and know their market.

COVID-19: AI can help – but the right human input is key – World Economic Forum

Artificial intelligence (AI) has the potential to help us tackle the pressing issues raised by the COVID-19 pandemic. It is not the technology itself, though, that will make the difference but rather the knowledge and creativity of the humans who use it.

Indeed, the COVID-19 crisis will likely expose some of the key shortfalls of AI. Machine learning, the current form of AI, works by identifying patterns in historical training data. When used wisely, AI has the potential to exceed humans not only through speed but also by detecting patterns in that training data that humans have overlooked.

However, AI systems need a lot of data, with relevant examples in that data, in order to find these patterns. Machine learning also implicitly assumes that conditions today are the same as the conditions represented in the training data. In other words, AI systems implicitly assume that what has worked in the past will still work in the future.
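That implicit assumption is easy to demonstrate with a toy forecaster (all numbers are invented): a model that has learned an average from historical data keeps predicting that average even after conditions shift, and its error explodes.

```python
# Toy forecaster "trained" on historical data (invented numbers).
historical = [100, 98, 103, 101, 97, 102, 99]   # daily visitors before the shift
learned_prediction = sum(historical) / len(historical)  # the model: a learned average

post_shift = [12, 9, 14, 10, 11]                # the same signal after conditions changed

def mean_abs_error(prediction, actuals):
    # Average absolute gap between the fixed prediction and what happened
    return sum(abs(prediction - a) for a in actuals) / len(actuals)

in_sample_error = mean_abs_error(learned_prediction, historical)
shifted_error = mean_abs_error(learned_prediction, post_shift)
print(f"error on past data:    {in_sample_error:.1f}")  # small: the past resembles itself
print(f"error after the shift: {shifted_error:.1f}")    # large: the past no longer predicts
```

Real machine-learning models are far more elaborate, but they inherit the same failure mode: nothing in the training data tells them the regime has changed.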

What does this have to do with the current crisis? We are facing unprecedented times. Our situation is jarringly different from that of just a few weeks ago. Some of what we need to try today will have never been tried before. Similarly, what has worked in the past may very well not work today.

Humans are not that different from AI in these limitations, which partly explains why our current situation is so daunting. Without previous examples to draw on, we cannot know for sure the best course of action. Our traditional assumptions about cause and effect may no longer hold true.

Humans have an advantage over AI, though. We are able to learn lessons from one setting and apply them to novel situations, drawing on our abstract knowledge to make best guesses on what might work or what might happen. AI systems, in contrast, have to learn from scratch whenever the setting or task changes even slightly.

The COVID-19 crisis, therefore, will highlight something that has always been true about AI: it is a tool, and the value of its use in any situation is determined by the humans who design it and use it. In the current crisis, human action and innovation will be particularly critical in leveraging the power of what AI can do.

One approach to the novel situation problem is to gather new training data under current conditions. For both human decision-makers and AI systems alike, each new piece of information about our current situation is particularly valuable in informing our decisions going forward. The more effective we are at sharing information, the more quickly our situation is no longer novel and we can begin to see a path forward.

Projects such as the COVID-19 Open Research Dataset, which provides the text of over 24,000 research papers, the COVID-net open-access neural network, which is working to collaboratively develop a system to identify COVID-19 in lung scans, and an initiative asking individuals to donate their anonymized data, represent important efforts by humans to pool data so that AI systems can then sift through this information to identify patterns.

[Image: Global spread of COVID-19 (World Economic Forum)]

A second approach is to use human knowledge and creativity to undertake the abstraction that the AI systems cannot do. Humans can discern between places where algorithms are likely to fail and situations in which historical training data is likely still relevant to address critical and timely issues, at least until more current data becomes available.

Such systems might include algorithms that predict the spread of the virus using data from previous pandemics, or tools that help job seekers identify opportunities that match their skillsets. Even though the particular nature of COVID-19 is unique and many of the fundamental rules of the labour market are not operating as usual, it is still possible to identify valuable, although perhaps carefully circumscribed, avenues for applying AI tools.

Efforts to leverage AI tools in the time of COVID-19 will be most effective when they involve the input and collaboration of humans in several different roles. The data scientists who code AI systems play an important role because they know what AI can do and, just as importantly, what it can't. We also need domain experts who understand the nature of the problem and can identify where past training data might still be relevant today. Finally, we need out-of-the-box thinkers who push us to move beyond our assumptions and can see surprising connections.

Toronto-based startup BlueDot is an example of such a collaboration. In December it was one of the first to identify the emergence of a new outbreak in China. Its system relies on the vision of its founder, who believed that predicting outbreaks was possible, and combines the power of several different AI tools with the knowledge of epidemiologists, who identified where and how to look for evidence of emerging diseases. These epidemiologists also verify the results at the end.

Reinventing the rules is different from breaking the rules, though. As we work to address our current needs, we must also keep our eye on the long-term consequences. All of the humans involved in developing AI systems need to maintain ethical standards and consider possible unintended consequences of the technologies they create. While our current crisis is very pressing, we cannot sacrifice our fundamental principles to address it.

The key takeaway is this: despite the hype, there are many ways in which humans still surpass the capabilities of AI. The stunning advances that AI has made in recent years are not an inherent quality of the technology, but rather a testament to the humans who have been incredibly creative in how they use a tool that is mathematically and computationally complex, and yet at its foundation still quite simple and limited.

As we seek to move rapidly to address our current problems, therefore, we need to continue to draw on this human creativity from all corners, not just the technology experts but also those with knowledge of the settings, as well as those who challenge our assumptions and see new connections. It is this human collaboration that will enable AI to be the powerful tool for good that it has the potential to be.

Written by

Matissa Hollister, Assistant Professor of Organizational Behaviour, McGill University

The views expressed in this article are those of the author alone and not the World Economic Forum.

Imec & GLOBALFOUNDRIES Partner Up And Announce Breakthroughs In AI Chip On IoT Devices – Wccftech

Imec, a world-leading research and innovation hub in nanoelectronics and digital technologies, and GLOBALFOUNDRIES (GF), the world's leading specialty foundry, today announced a hardware demonstration of a new artificial intelligence chip.

Based on imec's Analog in-Memory Computing (AiMC) architecture utilizing GF's 22FDX solution, the new chip is optimized to perform deep neural network calculations on in-memory computing hardware in the analog domain. Achieving record-high energy efficiency of up to 2,900 TOPS/W, the accelerator is a key enabler for inference-on-the-edge for low-power devices. The privacy, security, and latency benefits of this new technology will have an impact on AI applications in a wide range of edge devices, from smart speakers to self-driving vehicles.

Since the early days of the digital computer age, the processor has been separated from the memory. Operations performed using a large amount of data require a similarly large number of data elements to be retrieved from the memory storage. This limitation, known as the von Neumann bottleneck, can overshadow the actual computing time, especially in neural networks which depend on large vector-matrix multiplications. These computations are performed with the precision of a digital computer and require a significant amount of energy. However, neural networks can also achieve accurate results if the vector-matrix multiplications are performed with a lower precision on analog technology.
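The tolerance of neural networks to reduced precision is easy to demonstrate in miniature. The sketch below (toy sizes and a simple uniform quantizer, not imec's actual AiMC circuit) snaps random weights to a handful of conductance-like levels and shows that the resulting vector-matrix products stay close to the full-precision ones:

```python
import random

random.seed(42)

def quantize(w, levels=7):
    # Snap a weight in [-1, 1] to one of `levels` evenly spaced values,
    # mimicking the limited precision of an analog conductance.
    step = 2 / (levels - 1)
    return round(w / step) * step

def matvec(matrix, vec):
    # Plain vector-matrix multiply, the core operation of a neural-net layer
    return [sum(w * x for w, x in zip(row, vec)) for row in matrix]

W = [[random.uniform(-1, 1) for _ in range(16)] for _ in range(4)]   # full-precision weights
Wq = [[quantize(w) for w in row] for row in W]                       # 7-level "analog" weights
x = [random.uniform(0, 1) for _ in range(16)]                        # input activations

full = matvec(W, x)
approx = matvec(Wq, x)
deviation = max(abs(f - a) for f, a in zip(full, approx))
print(f"max output deviation after 7-level quantization: {deviation:.3f}")
```

Because the individual quantization errors are small and tend to cancel across a row, the layer outputs (and hence the network's decisions) are largely preserved, which is why analog, low-precision multiply-accumulate hardware can still produce accurate inference results.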

To address this challenge, imec and the industrial partners in its industrial affiliation machine learning program, including GF, developed a new architecture that eliminates the von Neumann bottleneck by performing analog computations in SRAM cells. The resulting Analog Inference Accelerator (AnIA), built on GF's 22FDX semiconductor platform, has exceptional energy efficiency. Characterization tests demonstrate power efficiency peaking at 2,900 tera operations per second per watt (TOPS/W). Pattern recognition in tiny sensors and low-power edge devices, which is typically powered by machine learning in data centers, can now be performed locally on this power-efficient accelerator.

Looking ahead, GF will include AiMC as a feature that can be implemented on the 22FDX platform for a differentiated solution in the AI market space. GF's 22FDX employs 22nm FD-SOI technology to deliver outstanding performance at extremely low power, with the ability to operate at 0.5 volts for ultralow power and at 1 picoamp per micron for ultralow standby leakage. 22FDX with the new AiMC feature is in development at GF's state-of-the-art 300mm production line at Fab 1 in Dresden, Germany.

AI Warning: Compassionless world-changing A.I. already here -You WONT see them coming – Express.co.uk

Fear surrounding artificial intelligence has remained prevalent as society has witnessed the massive leaps the technology sector has made in recent years. Shadow Robot Company director Rich Walker explained that it is not evil A.I. itself that people should necessarily be afraid of, but rather the companies it could masquerade behind. During an interview with Express.co.uk, Mr Walker explained that an advanced A.I. with nefarious intent for mankind would not openly show itself.

He noted that companies which actively do harm to society and the people within them would be more appealing to an A.I. that had goals of destroying humanity.

He said: "There is the kind of standard fear of A.I. that comes from science fiction.

"Which is either the humanoid robot, like the Terminator, that takes over and tries to destroy humanity.

"Or it is the cold compassionless machine that changes the world around it in its own image, and there is no space for humans in there."

"There is actually quite a good argument that there are cold compassionless machines that change the world around us in their own image.

"They are called corporations.

"We shouldn't necessarily worry about A.I. as something that will come along and change everything.

"We already have these organisations that will do that.

"They operate outside of national rules of law and societal codes of conduct.

"So, A.I. is not the bit that makes that happen; the bits that make that happen are already in place."

He later added: "I guess you could say that a company that has known for 30 years that climate change was inevitable, and has systematically defunded research into climate change and funded research that shows climate change isn't happening, is the kind of organisation I am thinking of.

"That is the kind of behaviour where you have to say: that is trying to destroy humanity."

"They would argue no, they are not trying to do that, but the fact would be that the effect of what they are doing is destroying humanity.

"If you wanted to have an artificial intelligence that was a bad guy, a large corporation that profits from fossil fuels and systematically hid the information that fossil fuels were bad for the planet would be an A.I. bad guy in my book."

The Shadow Robot Company has directed its focus toward creating complex, dexterous robot hands that mimic human hands.

The robotics company uses its tactile Telerobot technology to demonstrate how A.I. programmes can be used alongside human interaction to create complex robotic relationships.

Microsoft shares its vision to become AI industry-leader – TNW

Microsoft last week filed its annual report with the SEC, and with it a new vision that emphasizes AI. The documents state the company will no longer be focused on mobile, but instead on implementing AI solutions.

In the documents, under the heading "Our Vision," the company wrote:

"Our strategy is to build best-in-class platforms and productivity services for an intelligent cloud and an intelligent edge infused with artificial intelligence (AI)."

This should come as a surprise to no one: Microsoft has gobbled up AI companies like Pac-Man eating dots, and is using them to do things like teach computers to be amazing at Ms. Pac-Man.

Microsoft started off 2017 by purchasing an AI company, which was added to its already robust machine-learning research team.

The company's Microsoft Research AI (MSR AI) group has been doing some impressive work, including helping the blind better understand their surroundings. And Microsoft makes hardware now, in the form of an AI co-processor chip for HoloLens 2.0.

Microsoft isn't the only major legacy tech company fully prepared to shift to an AI-driven vision. IBM is famously flaunting its ride on the AI hype train with its cloud-based Watson, and Apple of course has Siri.

There's no need to ask if AI is the future, because it is. It's only a matter of time now before Cortana gets appointed CEO (sorry, Satya!).

What have you learned about machine learning and AI? – The Register

Reg Events Machine learning, AI, and robotics are escaping from the lab and popping up in businesses, and we want to know how you're putting them to work.

The call for papers for M3 is open now, and we want to hear how real-world organisations like yours are using artificial intelligence, machine learning algorithms, deep learning, and predictive analytics to solve real-world business and technology problems.

Whether you're building systems to help researchers make sense of health data, or to help financial analysts make sense of markets, we want to know about it. We also want to know how you're using the technology to manage, support, and even help your customers - or at least help other parts of your company help them.

We want to hear how you're using the available algorithms, frameworks, cognitive systems and UX, and the workarounds you've put in place to make them work for you. Of course, we would also love to hear how you're putting together and managing the hardware and networks that make these systems possible.

Likewise, we want to hear from you if you've put neural networks to work, or done more than play with predictive analytics or parallel programming.

And while there are plenty of opinion leaders who will hold forth on the security, privacy and ethical implications of computers and robots, we'd like to hear how you deal with these challenges in practice.

So, send us your proposals for conference sessions and workshops that illustrate the rapid advances in this field - because you're the people who will ultimately decide whether it succeeds or fails.

The conference will take place from October 9 to 11, at 30 Euston Square, Central London. This is a stunningly comfortable venue in which to ponder some of the most intellectually and ethically challenging issues facing the tech community today, and we really want you to join us.

Full details here.

Originally posted here:

What have you learned about machine learning and AI? - The Register

AI-guided ultrasound developer Caption Health raises $53M for further rollout – FierceBiotech

Caption Health, maker of an artificial-intelligence-guided ultrasound platform capable of instructing clinicians on obtaining a clearer picture of the heart in motion, has raised $53 million in new financing to expand the commercial reach of its system.

The proceeds will also help fund the development of its AI platform in additional care areas.

"Caption Health is working towards a future where looking inside the body becomes as routine as a blood pressure cuff measurement," said Armen Vidian, a partner at DCVC, which led the startup's series B funding round.


"Simplifying ultrasound is critical to providing fast, effective care," Vidian said. "By making ultrasound accessible to non-specialists with AI-guided, FDA-cleared products, Caption AI brings the benefits of medical imaging to more caregivers in more settings."

The round also drew funding from Caption Health's previous backer, Khosla Ventures, as well as new money from Atlantic Bridge and cardiovascular device maker Edwards Lifesciences.

RELATED: To screen COVID-19 patients for heart problems, FDA clears several ultrasound, AI devices from Philips, Eko, Caption Health

Caption Health's ultrasound AI was first cleared by the FDA this past February for walking medical professionals through the steps of the common cardiac exam used to diagnose different heart diseases. The software also analyzes the 2D ultrasound image in real time and automatically records the best video clips for later analysis, while calculating measures of heart function.

An updated version of the companys algorithms and guidance later received an expedited clearance from the agency as a tool for front-line hospital staff to help evaluate COVID-19 patients for cardiovascular complications.

Now, Caption Health has accelerated its plans to bring its AI to market later this summer, and the company says it is already in use at 11 U.S. medical centers, integrated with the Terason uSmart 3200T Plus portable ultrasound system.

"We are truly grateful to our investors and to our early adopter clinicians, who have believed in us from the beginning," said Caption Health CEO Charles Cadieu. "This capital will enable us to scale our collaborations with leading research institutions, regional health systems and other providers by making ultrasound available where and when it is needed, across departments, inside and outside the hospital."

See the article here:

AI-guided ultrasound developer Caption Health raises $53M for further rollout - FierceBiotech

Artificial Intelligence: Bias and the case for industry – Luxembourg Times

Over the past decades, science fiction has shown us scenarios where AI surpasses human intelligence and overpowers humanity. As we near a tipping point where AI could feature in every part of our lives, from logistics to healthcare, human resources to civil security, we take a look at opportunities and ethical questions in AI. In this article, we speak to AI expert Prof Dr Patrick Glauner about AI bias, as well as the impact good and bad AI could have on industry and workers.

What about our jobs? Can we trust AI to do what it is meant to, and without bias? What will society look like once we are surrounded by AI? Who will decide how far AI should go? These are some of the frequently asked questions when it comes to AI. They were also among the questions participants were encouraged to delve into at the FNR's science-meets-science-fiction event "House of Frankenstein", which also sparked the question of what it means to be human in the age of AI.

"It's not who has the best algorithm that wins. It's who has the most data."

"For about the last decade, the Big Data paradigm that has dominated research in machine learning can be summarized as follows: it's not who has the best algorithm that wins. It's who has the most data," explains Dr Patrick Glauner, who in February 2020 started a full professorship in AI at the Deggendorf Institute of Technology (Germany), at the young age of 30.

In machine learning and statistics, samples of the population are typically used to gain insights or derive generalisations about the population as a whole. Having a biased data set means that it is not representative of the population. Glauner explains that biases appear in nearly every data set.

The machine learning models trained on those data sets subsequently tend to make biased decisions, too.
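To make the sampling problem concrete, here is a minimal sketch; the population, group names, and numbers are all illustrative assumptions, not taken from Glauner's work. A sample that over-represents one group produces a skewed estimate, and any model trained on that sample inherits the skew.

```python
import random

random.seed(42)

# Hypothetical population: two equally sized groups whose measured
# quantity follows different distributions.
group_a = [random.gauss(170, 8) for _ in range(5000)]
group_b = [random.gauss(160, 8) for _ in range(5000)]
population = group_a + group_b
true_mean = sum(population) / len(population)  # roughly 165

# Representative sample: both groups drawn in proportion.
balanced = random.sample(group_a, 500) + random.sample(group_b, 500)
balanced_mean = sum(balanced) / len(balanced)

# Biased sample: group A heavily over-represented (90% vs 10%),
# as when data is collected from only one demographic.
biased = random.sample(group_a, 900) + random.sample(group_b, 100)
biased_mean = sum(biased) / len(biased)

# The biased estimate drifts toward group A; a model trained on this
# sample would systematically misjudge group B.
print(f"true {true_mean:.1f}, balanced {balanced_mean:.1f}, biased {biased_mean:.1f}")
```

The same drift happens with any statistic a model learns from the sample, which is why collecting representative data matters more than the choice of algorithm.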

Cue facial recognition, which can, for example, unlock your phone by scanning your face. However, this technology has turned out to have ethnic bias, with personal stories and studies pointing to the technology not distinguishing between faces of Asian ethnicity. Apps that are meant to predict criminality also tend to be biased toward people with darker skin. Why? Because the technology was developed based on, for example, Caucasian men, rather than a representative sample of populations.

Then there is the case of Tay, an AI chatbot that immediately turned racist when unleashed on and exposed to Twitter. This shows that current AI does not understand what it computes, which is why the term "intelligence" is criticised by part of the AI research community itself. It is crucial to train AI on data sets, but the risk here is that AI makes decisions about something it does not understand at all, decisions which are then applied by humans without knowing how the AI reached them. This is referred to as the explainability problem, or the "black box" effect.

Other concerns are the power that comes with this technology, and where to put the limits on how it is used. China, for example, has rolled out facial recognition technology that can be used to identify protesters. And not just that: a city in China recently apologised for using facial recognition to shame citizens seen wearing their pyjamas in public.

While the EU has drafted ethics guidelines for trustworthy AI, and the CEO of Microsoft has called for global guidelines, ethical guidelines for government use of such technology are yet to be agreed on and implemented. The use of armed drones in warfare is also a concern.

Bias: an old problem on a larger scale

Prof Dr Glauner explains that bias in data is far from new, and that there is a risk that known issues will be carried over to AI if not properly addressed.

"Biases have always been present in the field of statistics. I am aware of statistics papers from 1976 and 1979 that started discussing biases. In my opinion, in the Big Data era, we tend to repeat the very same mistakes that have been made in statistics for a long time, but at a much larger scale."

Glauner explains that the machine learning research community has recently started to look more actively into the problem of biased data sets. However, he stresses that there needs to be greater awareness of this issue amongst students studying machine learning, as well as amongst professors.

"In my view, it will be almost impossible to entirely get rid of biases in data sets, but greater awareness would at least be a great start."

Glauner also explains that it is imperative to close the gap between AI in academia and industry, emphasising that he will ensure the students he teaches in his professorship learn early on how to solve real-world problems.

AI and jobs

AI has both positive and negative implications for the working world. Some tasks will inevitably be handed over to AI, while others will continue to require humans; there will also be a mix. The Luxembourg Government's "Artificial Intelligence: a strategic vision for Luxembourg" puts the focus on how AI can improve our professional lives by automating time-consuming data-related tasks, helping us use our time more efficiently in the areas that require social relations, emotional intelligence and cultural sensitivity.

Prof Dr Glauner, whose AI background is rooted in industry, sees AI having a significant impact on the jobs market, both for businesses and workers: not everyone who loses their job to AI will be able to retrain as an AI developer. He also points out that the job market has always undergone change.

"For example, look back 100 years: most of the jobs from that time do not exist anymore. However, those changes are now happening more frequently. As a consequence, employees will potentially be forced to undergo retraining multiple times in their careers."

For instance, China has become a world-leading country in AI innovation. Chinese companies are using that advantage to rapidly advance their competitiveness in a large number of other industries. If Western companies do not adapt to that reality, they will probably be out of business in the foreseeable future.

AI is the next step of the industrial revolution

"Even though those changes are dramatic, we cannot stop them. AI is the next step of the industrial revolution."

While the previous steps addressed the automation of repetitive physical tasks, AI allows us to automate manual decision-making, a discipline in which humans naturally excel. AI's ability to do so, too, will significantly impact nearly every industry. From a business perspective, this will result in more efficient business processes and new services and products that improve humans' lives.

Prof Dr Glauner's PhD project is a concrete example of how AI can be used to improve output and customer experience. Funded by an Industrial Fellowship grant (AFR-PPP at the time), a collaboration between public research and industry, Glauner developed AI algorithms that detect non-technical losses (NTL) in power grids, which are critical infrastructure assets.

NTLs include, but are not limited to, electricity theft, broken or malfunctioning meters, and arranged false meter readings. In emerging markets, NTLs are a prime concern and often account for up to 40% of the total electricity distributed.

"The annual worldwide costs for utilities due to NTL are estimated to be around USD 100 billion. Reducing NTL in order to increase the reliability, revenue, and profit of power grids is therefore of vital interest to utilities and authorities. My thesis has resulted in appreciable results on real-world big data sets of millions of customers."
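The article does not describe Glauner's algorithms in detail, but one simple signal commonly used in NTL detection, a sustained drop in a customer's consumption relative to their own history, can be sketched as follows. The function name, thresholds, and readings are all hypothetical, not taken from his thesis.

```python
def flag_suspicious(readings, window=3, threshold=0.5):
    """Flag a meter if the mean of its last `window` monthly readings
    falls below `threshold` times the mean of its earlier readings.

    A sudden, sustained drop can indicate theft, a broken meter,
    or a falsified reading -- all forms of non-technical loss.
    """
    if len(readings) <= window:
        return False  # not enough history to compare against
    history = readings[:-window]
    recent = readings[-window:]
    hist_mean = sum(history) / len(history)
    recent_mean = sum(recent) / len(recent)
    return hist_mean > 0 and recent_mean < threshold * hist_mean

normal = [300, 310, 290, 305, 295, 300]   # stable monthly usage (kWh)
suspect = [300, 310, 290, 40, 35, 30]     # sudden sustained drop
```

In practice such hand-crafted features would be combined with many others and fed to a trained classifier, but even this toy rule illustrates why historical consumption data is central to the problem.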

AI and new industries

The opportunities AI presents for existing industries are manifold, if done right, and AI could pave the way for completely new industries as well: space exploration and space mining would hardly be developing so fast without AI. For example, there is a communication delay from the Earth to the Moon, which makes controlling an unmanned vehicle or machine from Earth challenging, to say the least. However, if the machine were able to navigate on its own and make the most basic of decisions, this communication gap would no longer be much of an obstacle.

Improve, not replace

AI undoubtedly represents huge opportunities for industry in particular, and has the potential to improve performance and output, as well as worker and customer satisfaction, to name only a few. However, it is imperative that the bodies in charge put ethical considerations and the good of society at the heart of their strategies. A balance must be found: the goal has to be to improve society and the lives of the people within it, not to replace them. The same goes for bias in AI: after all, what good can come from algorithms that build their assumptions on non-representative data?

More here:

Artificial Intelligence: Bias and the case for industry - Luxembourg Times