
Category Archives: Ai

Analytics and AI helps Experian help its customers – CIO

Posted: June 28, 2021 at 10:37 pm

For the past several years, Experian has been transforming its business with analytics and AI. Shri Santhanam, executive vice president and general manager of global analytics and AI at the consumer credit reporting company, says Experian's data transformation has focused on three pillars: internal modernization, creating analytics products and services, and driving commercial and business impact for customers.

"Despite the impact of the pandemic, we've actually managed to make good progress in the foundations of analytics and AI," Santhanam says. "The demand for analytics and AI has dramatically increased. There's interest and engagement in how data and analytics for clients can help us help them make better decisions in how they run their business."

Ascend Intelligence Services is a prime example of Experian's efforts to create analytics products that can revolutionize its clients' businesses. As a managed analytics service, Ascend provides lenders with AI-powered modeling and strategy development, management, and deployment. Experian data scientists build a custom machine learning (ML) credit risk model, optimize a decision strategy, and deploy the model in production for clients. The services include Ascend Intelligence Services Challenger, a collaborative model development service, and Ascend Intelligence Services Pulse, a proactive model monitoring and validation service.
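To make the workflow concrete, here is a minimal sketch of what building and scoring a credit risk model of this kind can look like. It is purely illustrative: the features, labels, and approval threshold are invented for the example and do not reflect Experian's actual pipeline.

```python
# Hypothetical sketch of a credit-risk modeling workflow: train a
# classifier on applicant data, then apply a decision strategy.
# All data and thresholds are made up for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Toy applicant features: e.g., income, utilization, delinquencies
X = rng.normal(size=(1000, 3))
# Toy label: default (1) or repaid (0), loosely tied to the features
y = (X[:, 1] - X[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# A decision strategy: approve when predicted default probability is low
approve = model.predict_proba(X_test)[:, 1] < 0.3
print(f"approval rate: {approve.mean():.0%}")
```

In a managed service, the same pattern would be wrapped in monitoring and validation so the model and decision strategy can be revisited as applicant behavior drifts.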

Midsize lender Atlas Credit recently won a CIO 100 Award in IT Excellence for its work with Experian Ascend Intelligence Services. Ascend helped the Texas-based lender double its credit approval rates while reducing credit losses by up to 20%.

Excerpt from:

Analytics and AI helps Experian help its customers - CIO

Posted in Ai | Comments Off on Analytics and AI helps Experian help its customers – CIO

The first WHO report on AI in healthcare is a mixed bag of horror and delight – The Next Web

Posted: at 10:37 pm

The World Health Organization today issued its first-ever report on the use of artificial intelligence in healthcare.

The report is 165 pages cover-to-cover and it provides a summary assessment of the current state of AI in healthcare while also laying out several opportunities and challenges.

Most of what the report covers boils down to six guiding principles for [AI's] design and use.

Per a WHO blog post, these include:

- Protecting human autonomy
- Promoting human well-being, human safety, and the public interest
- Ensuring transparency, explainability, and intelligibility
- Fostering responsibility and accountability
- Ensuring inclusiveness and equity
- Promoting AI that is responsive and sustainable

These bullet points make up the framework for the reports exploration of the current and potential benefits and dangers of using AI in healthcare.

The report devotes considerable attention to cutting through the hype and assessing the present capabilities of AI in the healthcare sector. And, according to the report, the most common use for AI in healthcare is as a diagnostic aid.

Per the report:

AI is being considered to support diagnosis in several ways, including in radiology and medical imaging. Such applications, while more widely used than other AI applications, are still relatively novel, and AI is not yet used routinely in clinical decision-making.

The WHO anticipates this will soon change.

Per the report, the WHO expects AI to improve nearly every aspect of healthcare, from diagnostic accuracy to record-keeping. And there's even hope it could lead to drastically improved outcomes for patients presenting with stroke, heart attack, or other illnesses where early diagnosis is crucial.

Furthermore, AI is a data-driven technology. The WHO believes the adoption of machine learning technologies in healthcare could help predict the spread of disease and possibly even prevent epidemics in the future.

It's obvious from the report that the WHO is optimistic about the future of AI in healthcare. However, the report also details numerous challenges and risks associated with the wide-scale implementation of AI technologies in the healthcare system.

The report recognizes efforts on behalf of numerous nations to codify the use of AI in healthcare, but it also notes that current policies and regulations aren't enough to protect patients and the public at large.

Specifically, the report outlines several areas where AI could make things worse. These include modern-day concerns, such as handing care of the elderly over to inhuman automated systems. And they also include future concerns: what happens when a human doctor disagrees with a black-box AI system? If we can't explain why an AI made a decision, can we defend its diagnosis when it matters?

And the report also spends a significant portion of its pages discussing the privacy implications for the full implementation of AI into healthcare.

Per the report:

Collection of data without the informed consent of individuals for the intended uses (commercial or otherwise) undermines the agency, dignity and human rights of those individuals; however, even informed consent may be insufficient to compensate for the power dissymmetry between the collectors of data and the individuals who are the sources.

In other words: Even when everything is transparent, how can anyone be sure patients are giving informed consent when it comes to their medical information? When you consider the circumstances many patients are in when a doctor asks them to consent to a procedure, it's hard to imagine a scenario where the intricacies of how artificial intelligence operates matter more than what their doctor is recommending.

You can read the entire WHO report here.

Read the original post:

The first WHO report on AI in healthcare is a mixed bag of horror and delight - The Next Web


The future starts with Industrial AI – MIT Technology Review

Posted: at 10:37 pm

"Domain expertise is the secret sauce that separates Industrial AI from more generic AI approaches. Industrial AI will guide innovation and efficiency improvements in capital-intensive industries for years to come," said Willie K Chan, CTO of AspenTech. Chan was one of the original members of the MIT ASPEN research program that later became AspenTech in 1981, now celebrating 40 years of innovation.

Incorporating that domain expertise gives Industrial AI applications a built-in understanding of the context, inner workings, and interdependencies of highly complex industrial processes and assets, and takes into account the design characteristics, capacity limits, and safety and regulatory guidelines crucial for real-world industrial operations.

More generic AI approaches may come up with specious correlations between industrial processes and equipment, generating inaccurate insights. Generic AI models are trained on large volumes of plant data that usually does not cover the full range of potential operations. That's because the plant might be working within a very narrow and limited range of conditions for safety or design reasons. Consequently, these generic AI models cannot be extrapolated to respond to market changes or business opportunities. This further exacerbates the productization hurdles around AI initiatives in the industrial sector.

By contrast, Industrial AI leverages domain expertise specific to industrial processes and real-world engineering based on first principles that account for the laws of physics and chemistry (e.g., mass balance, energy balance) as guardrails for mitigating risks and complying with all the necessary safety, operational, and environmental regulations. This makes for a safe, sustainable, and holistic decision-making process, producing comprehensive results and trusted insights over the long run.

Digitalization in industrial facilities is critical to achieving new levels of safety, sustainability, and profitabilityand Industrial AI is a key enabler for that transformation.

Talking about Industrial AI as a revolutionary paradigm is one thing; actually seeing what it can do in real-life industrial settings is another. Below are a few examples that demonstrate how capital-intensive industries can leverage Industrial AI to overcome digitalization barriers and drive greater productivity, efficiency, and reliability in their operations.

These use cases are by no means exhaustive, but just a few examples of how pervasive, innovative, and broadly applicable Industrial AI's capabilities can be, both for the industry and for laying the groundwork for the digital plant of the future.

Industrial organizations need to accelerate digital transformation to stay relevant, competitive, and capable of addressing market disruptors. The Self-Optimizing Plant represents the ultimate vision of that journey.

Industrial AI embeds domain-specific know-how, alongside the latest AI and machine-learning capabilities, into fit-for-purpose AI-enabled applications. This enables and accelerates the autonomous and semi-autonomous processes that run those operations, realizing the vision of the Self-Optimizing Plant.

A Self-Optimizing Plant is a self-adapting, self-learning, and self-sustaining set of industrial software technologies that work together to anticipate future conditions and act accordingly, adjusting operations within the digital enterprise. A combination of real-time data access and embedded Industrial AI applications empowers the Self-Optimizing Plant to constantly improve on itself, drawing on domain knowledge to optimize industrial processes, make easy-to-execute recommendations, and automate mission-critical workflows.

This promises numerous positive impacts on the business.

The Self-Optimizing Plant is the ultimate end goal of not just Industrial AI, but the industrial sector's digital transformation journey. By democratizing the application of industrial intelligence, the digital plant of the future drives greater levels of safety, sustainability, and profitability, and empowers the next generation of the digital workforce, future-proofing the business in volatile and complex market conditions. This is the real-world potential of Industrial AI.

To learn more about how Industrial AI is enabling the digital workforce of the future and creating the foundation for the Self-Optimizing Plant, visit http://www.aspentech.com/selfoptimizingplant, http://www.aspentech.com/accelerate, and http://www.aspentech.com/aiot.

This article was written by AspenTech. It was not produced by MIT Technology Reviews editorial staff.

Original post:

The future starts with Industrial AI - MIT Technology Review


A Company That Uses AI To Fight Malaria Just Won The IBM Watson AI XPrize Competition – Forbes

Posted: at 10:37 pm

Zzapp Malaria, a company that uses artificial intelligence (AI) to fight malaria, just won the grand prize in one of the toughest technology competitions to date. The competition is a joint venture between XPrize, the world's leader in designing and operating incentive competitions to solve humanity's grand challenges, and IBM Watson, IBM's flagship AI platform, culminating in a $3 million award for Zzapp.

Zzapp's mission is straightforward: use cutting-edge technology to eliminate malaria in an efficient and scalable manner. The technology behind the company's platform is described as a software system that supports the planning and implementation of malaria elimination operations. Zzapp uses artificial intelligence to identify malaria hotspots and optimize interventions for maximum impact. Zzapp's map-based mobile app conveys the AI strategies to field workers as simple instructions, ensuring thorough implementation.

Specifically, the company explains: "Malaria transmission takes place where water bodies and human populations converge: water bodies are necessary for mosquito larvae to develop, and humans act as the reservoir for the Plasmodium parasites responsible for malaria, and as a source of blood for mosquitoes [...] In collaboration with Zzapp, IBM Watson's AI and Data Science Elite Team has developed a weather analysis module that predicts the abundance of water bodies based on weather data, allowing Zzapp to better time interventions, and more accurately determine the resources required to implement them."
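To illustrate the general idea (not Zzapp's or IBM's actual system), here is a hypothetical sketch of a weather-driven module: fit a simple model linking recent rainfall to the number of standing water bodies, then use the prediction for resource planning. All data and coefficients are invented.

```python
# Hypothetical weather-analysis sketch: predict how many standing water
# bodies (potential mosquito breeding sites) to expect from rainfall,
# then size the field teams needed. Purely illustrative numbers.
import numpy as np

rng = np.random.default_rng(7)
rainfall_mm = rng.uniform(0, 200, 100)        # recent rainfall per site
# Toy ground truth: more rain, more standing water (plus noise)
water_bodies = 2 + 0.1 * rainfall_mm + rng.normal(0, 3, 100)

slope, intercept = np.polyfit(rainfall_mm, water_bodies, 1)

def expected_water_bodies(mm):
    """Predicted number of water bodies after `mm` of rainfall."""
    return intercept + slope * mm

# Resource planning: larviciding teams needed if each treats 5 sites
teams = np.ceil(expected_water_bodies(150) / 5)
print(teams)
```

The real module presumably uses far richer weather data, but the planning logic (predict abundance, then budget the intervention) follows the same shape.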

An older video on the company's YouTube channel provides more insight into the process:

[Photo caption] The world's largest mosquito net is unveiled, 18 April 2000, in Abuja, Nigeria. Malaria causes more than one million deaths around the world each year, more than 90 percent of them in Africa. (Photo: AFP via Getty Images)

Innovation in this space could not come at a better time. Malaria is a devastating disease. Per the Centers for Disease Control and Prevention (CDC), symptoms of malaria are extensive, entailing fever and flu-like illness, including shaking chills, headache, muscle aches, and tiredness. Nausea, vomiting, and diarrhea may also occur. Moreover, the CDC explains that if [malaria is] not promptly treated, the infection can become severe and may cause kidney failure, seizures, mental confusion, coma, and death.

The World Health Organization (WHO) reports jarring statistics regarding the widespread impact of the disease: In 2019, there were an estimated 229 million cases of malaria worldwide [...] The estimated number of malaria deaths stood at 409,000 in 2019. The WHO also states that the African Region carries a disproportionately high share of the global malaria burden, with children under 5 years of age being the most vulnerable group to the disease.

Indeed, initiatives such as Zzapp's effort to eradicate malaria at its source, before it can even spread in a community, could add incredible value to the fight against the disease. Additionally, leveraging AI systems to identify the targets to focus on is a relatively new concept, and may become a worthwhile effort if the technology proves to be viable.

Zzapp's victory in the competition is undoubtedly prestigious and deserves notable recognition. But perhaps equally, if not more, important, the victory signifies that the world is paying attention to a disease that is responsible for nearly half a million deaths annually. Indeed, there is still a long road ahead in the war against malaria; however, perhaps this small victory provides hope for a better course ahead.

The content of this article is not implied to be and should not be relied on or substituted for professional medical advice, diagnosis, or treatment by any means, and is not written or intended as such. This content is for information and news purposes only. Consult with a trained medical professional for medical advice.

Follow this link:

A Company That Uses AI To Fight Malaria Just Won The IBM Watson AI XPrize Competition - Forbes


Reinforcement learning could be the link between AI and human-level intelligence – The Next Web

Posted: at 10:37 pm

Last week, I wrote an analysis of Reward Is Enough, a paper by scientists at DeepMind. As the title suggests, the researchers hypothesize that the right reward is all you need to create the abilities associated with intelligence, such as perception, motor functions, and language.

This is in contrast with AI systems that try to replicate specific functions of natural intelligence, such as classifying images, navigating physical environments, or completing sentences.

The researchers go as far as suggesting that with well-defined reward, a complex environment, and the right reinforcement learning algorithm, we will be able to reach artificial general intelligence, the kind of problem-solving and cognitive abilities found in humans and, to a lesser degree, in animals.

The article and the paper triggered a heated debate on social media, with reactions going from full support of the idea to outright rejection. Of course, both sides make valid claims. But the truth lies somewhere in the middle. Natural evolution is proof that the reward hypothesis is scientifically valid. But implementing the pure reward approach to reach human-level intelligence has some very hefty requirements.

In this post, I'll try to disambiguate in simple terms where the line between theory and practice stands.


In their paper, the DeepMind scientists present the following hypothesis: Intelligence, and its associated abilities, can be understood as subserving the maximisation of reward by an agent acting in its environment.

Scientific evidence supports this claim.

Humans and animals owe their intelligence to a very simple law: natural selection. I'm not an expert on the topic, but I suggest reading The Blind Watchmaker by biologist Richard Dawkins, which provides a very accessible account of how evolution has led to all forms of life and intelligence on our planet.

In a nutshell, nature gives preference to lifeforms that are better fit to survive in their environments. Those that can withstand challenges posed by the environment (weather, scarcity of food, etc.) and other lifeforms (predators, viruses, etc.) will survive, reproduce, and pass on their genes to the next generation. Those that don't are eliminated.

According to Dawkins, "In nature, the usual selecting agent is direct, stark and simple. It is the grim reaper. Of course, the reasons for survival are anything but simple; that is why natural selection can build up animals and plants of such formidable complexity. But there is something very crude and simple about death itself. And nonrandom death is all it takes to select phenotypes, and hence the genes that they contain, in nature."

But how do different lifeforms emerge? Every newly born organism inherits the genes of its parent(s). But unlike the digital world, copying in organic life is not exact. Therefore, offspring often undergo mutations, small changes to their genes that can have a huge impact across generations. These mutations can have a simple effect, such as a small change in muscle texture or skin color. But they can also become the core for developing new organs (e.g., lungs, kidneys, eyes), or shedding old ones (e.g., tail, gills).

If these mutations help improve the chances of the organism's survival (e.g., better camouflage or faster speed), they will be preserved and passed on to future generations, where further mutations might reinforce them. For example, the first organism that developed the ability to parse light information had an enormous advantage over all the others that didn't, even though its ability to see was not comparable to that of animals and humans today. This advantage enabled it to better survive and reproduce. As its descendants reproduced, those whose mutations improved their sight outmatched and outlived their peers. Through thousands (or millions) of generations, these changes resulted in a complex organ such as the eye.

The simple mechanisms of mutation and natural selection have been enough to give rise to all the different lifeforms that we see on Earth, from bacteria to plants, fish, birds, amphibians, and mammals.
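The mutation-and-selection loop described above is simple enough to sketch in a few lines of code. In this toy version, individuals are bit strings, "fitness" is how well they match a fixed target environment, and only the fitter half reproduces (with mutation); everything here is a deliberately crude caricature of real evolution.

```python
# Toy illustration of mutation plus nonrandom "death": bit-string
# genomes evolve toward a target pattern purely through selection.
import random

random.seed(42)
TARGET = [1] * 20          # stand-in for a well-adapted phenotype
POP, GENS, MUT = 30, 60, 0.05

def fitness(genome):
    """Number of positions matching the target environment."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    """Copy a genome, flipping each bit with small probability."""
    return [1 - g if random.random() < MUT else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP // 2]          # the fitter half lives
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP - len(survivors))]

best = max(population, key=fitness)
print(fitness(best), "of", len(TARGET))
```

After a few dozen generations, the best genome is close to the target, even though no individual step was anything but random variation plus selective survival.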

The same self-reinforcing mechanism has also created the brain and its associated wonders. In her book Conscience: The Origins of Moral Intuition, philosopher Patricia Churchland explores how natural selection led to the development of the cortex, the main part of the brain that gives mammals the ability to learn from their environment. The evolution of the cortex has enabled mammals to develop social behavior and learn to live in herds, prides, troops, and tribes. In humans, the evolution of the cortex has given rise to complex cognitive faculties, the capacity to develop rich languages, and the ability to establish social norms.

Therefore, if you consider survival as the ultimate reward, the main hypothesis that DeepMind's scientists make is scientifically sound. However, when it comes to implementing this rule, things get very complicated.

In their paper, DeepMind's scientists claim that the reward hypothesis can be implemented with reinforcement learning algorithms, a branch of AI in which an agent gradually develops its behavior by interacting with its environment. A reinforcement learning agent starts by taking random actions. Based on how those actions align with the goals it is trying to achieve, the agent receives rewards. Across many episodes, the agent learns to develop sequences of actions that maximize its reward in its environment.
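The loop described above can be shown with a minimal tabular Q-learning sketch: a tiny corridor environment in which the agent starts out acting randomly and, episode by episode, learns the action sequence that maximizes reward. This is a textbook toy, not anything resembling DeepMind's systems.

```python
# Minimal tabular Q-learning: learn to walk right along a 5-state
# corridor, where only the last state yields a reward.
import random

random.seed(0)
N_STATES = 5               # states 0..4; reward only at state 4
ACTIONS = (-1, +1)         # step left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit, sometimes explore at random
        a = (random.choice(ACTIONS) if random.random() < eps
             else max(ACTIONS, key=lambda b: Q[(s, b)]))
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2

# After training, the greedy policy moves right from every state
policy = {s: max(ACTIONS, key=lambda b: Q[(s, b)]) for s in range(N_STATES - 1)}
print(policy)
```

Nothing told the agent that "right" is good; the preference emerged entirely from the reward signal, which is exactly the claim the paper scales up to far more complex environments.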

According to the DeepMind scientists, A sufficiently powerful and general reinforcement learning agent may ultimately give rise to intelligence and its associated abilities. In other words, if an agent can continually adjust its behavior so as to improve its cumulative reward, then any abilities that are repeatedly demanded by its environment must ultimately be produced in the agent's behavior.

In an online debate in December, computer scientist Richard Sutton, one of the paper's co-authors, said, "Reinforcement learning is the first computational theory of intelligence... In reinforcement learning, the goal is to maximize an arbitrary reward signal."

DeepMind has a lot of experience to prove this claim. They have already developed reinforcement learning agents that can outmatch humans in Go, chess, Atari, StarCraft, and other games. They have also developed reinforcement learning models to make progress on some of the most complex problems of science.

The scientists further wrote in their paper, "According to our hypothesis, general intelligence can instead be understood as, and implemented by, maximizing a singular reward in a single, complex environment" [emphasis mine].

This is where the hypothesis separates from practice. The keyword here is "complex." The environments that DeepMind (and its quasi-rival OpenAI) have so far explored with reinforcement learning are not nearly as complex as the physical world. And they still required the financial backing and vast computational resources of very wealthy tech companies. In some cases, the researchers had to dumb down the environments to speed up the training of their reinforcement learning models and cut down the costs. In others, they had to redesign the reward to make sure the RL agents did not get stuck in a wrong local optimum.

(It is worth noting that the scientists do acknowledge in their paper that they can't offer theoretical guarantees on the sample efficiency of reinforcement learning agents.)

Now, imagine what it would take to use reinforcement learning to replicate evolution and reach human-level intelligence. First, you would need a simulation of the world. But at what level would you simulate the world? My guess is that anything short of quantum scale would be inaccurate. And we don't have a fraction of the compute power needed to create quantum-scale simulations of the world.

Let's say we did have the compute power to create such a simulation. We could start at around 4 billion years ago, when the first life-forms emerged. You would need an exact representation of the state of Earth at the time, and we still don't have a definite theory of what that initial state was.

An alternative would be to create a shortcut and start from, say, 8 million years ago, when our monkey ancestors still lived on Earth. This would cut down the time of training, but we would have a much more complex initial state to start from. At that time, there were millions of different lifeforms on Earth, and they were closely interrelated. They evolved together. Taking any of them out of the equation could have a huge impact on the course of the simulation.

Therefore, you basically have two key problems: compute power and initial state. The further you go back in time, the more compute power you'll need to run the simulation. On the other hand, the further you move forward, the more complex your initial state will be. And since evolution has created all sorts of intelligent and non-intelligent life-forms, betting that we could reproduce the exact steps that led to human intelligence, without any guidance and only through reward, is a long shot.

Many will say that you don't need an exact simulation of the world, and that you only need to approximate the problem space in which your reinforcement learning agent will operate.

For example, in their paper, the scientists mention the example of a house-cleaning robot: In order for a kitchen robot to maximize cleanliness, it must presumably have abilities of perception (to differentiate clean and dirty utensils), knowledge (to understand utensils), motor control (to manipulate utensils), memory (to recall locations of utensils), language (to predict future mess from dialogue), and social intelligence (to encourage young children to make less mess). A behavior that maximises cleanliness must therefore yield all these abilities in service of that singular goal.

This statement is true, but it downplays the complexities of the environment. Kitchens were created by humans. For instance, the shape of drawer handles, doorknobs, floors, cupboards, walls, tables, and everything you see in a kitchen has been optimized for the sensorimotor functions of humans. Therefore, a robot that is to work in such an environment would need to develop sensorimotor skills similar to those of humans. You can create shortcuts, such as avoiding the complexities of bipedal walking or hands with fingers and joints. But then, there would be incongruencies between the robot and the humans who will be using the kitchens. Many scenarios that would be easy for a human to handle (walking over an overturned chair) would become prohibitive for the robot.

Also, other skills, such as language, would require even more shared infrastructure between the robot and the humans in its environment. Intelligent agents must be able to develop abstract mental models of each other to cooperate or compete in a shared environment. Language omits many important details, such as sensory experience, goals, and needs. We fill in the gaps with our intuitive and conscious knowledge of our interlocutors' mental states. We might make wrong assumptions, but those are the exceptions, not the norm.

And finally, developing a notion of cleanliness as a reward is complicated because it is so tightly linked to human knowledge, life, and goals. For example, removing every piece of food from the kitchen would certainly make it cleaner, but would the humans using the kitchen be happy about it?

A robot that has been optimized for cleanliness would have a hard time co-existing and cooperating with living beings that have been optimized for survival.

Here, you can take shortcuts again by creating hierarchical goals, equipping the robot and its reinforcement learning models with prior knowledge, and using human feedback to steer it in the right direction. This would help a lot in making it easier for the robot to understand and interact with humans and human-designed environments. But then you would be cheating on the reward-only approach. And the mere fact that your robot agent starts with predesigned limbs and image-capturing and sound-emitting devices is itself the integration of prior knowledge.

In theory, reward alone is enough for any kind of intelligence. But in practice, there's a trade-off between environment complexity, reward design, and agent design.

In the future, we might be able to achieve a level of compute power that will make it possible to reach general intelligence through pure reward and reinforcement learning. But for the time being, what works is hybrid approaches that involve learning and complex engineering of rewards and AI agent architectures.

This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech, and what we need to look out for. You can read the original article here.

Read the original here:

Reinforcement learning could be the link between AI and human-level intelligence - The Next Web


How AI is changing the nature of analytics – VentureBeat

Posted: at 10:37 pm


At its heart, artificial intelligence is an analytics tool. Its value comes from the ability to parse through enormous amounts of data, without direct human supervision, to identify patterns and anomalies that can then be put to use.

But since human-driven analytics have existed for centuries, long predating the modern computer age, how will this new generation of technology change the game? And how can organizations make sure they are getting their money's worth once this technology is pushed into production environments?

"The key element that AI brings to analytics is context," Oracle's Joey Fitts and MIT research fellow Tom Davenport recently wrote in the Harvard Business Review. Under traditional analytics, the analyst was rarely an expert in the system or process being analyzed. They knew analytics, not marketing or sales or data networking. Their ultimate recommendations often lacked the context that can only come from broad knowledge and experience.

In an AI-driven framework, however, an algorithm can be trained to understand the thing it is analyzing and can then incorporate far more data at a much faster pace to deliver highly contextualized results. Ultimately, this is expected to push these powerful analytics tools to the people who require them so the analytics experts can devote their time to what they do best: crafting the models needed to make AI analytics faster and more accurate.

This need for context is best illustrated when applied to a common enterprise function, such as marketing. Arguably one of the most data-intensive disciplines in modern business, marketing is often subject to competing interpretations of the truth depending on the context in which data is presented.

AI excels at predictive analytics, the ability to spot future trends based on past and current data, according to Mike Kaput, chief content officer at Marketing AI Institute. This capability, of course, is like gold to a marketing team. At the same time, AI delivers prescriptive analytics: the ability to make recommendations based on predictive analyses. In both cases, today's AI engines are capable of sifting through massive amounts of data to ensure these results are presented within the full context of all available information, and they can also refine their algorithms to improve themselves using their own past analyses.
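The predictive-to-prescriptive idea can be illustrated with a toy example: fit a trend to past monthly figures, project it forward, and turn the projection into a recommendation. The figures and the recommendation rule are invented, not Marketing AI Institute's methodology.

```python
# Toy predictive analytics: fit a linear trend to a year of monthly
# revenue and project the next quarter. All figures are invented.
import numpy as np

months = np.arange(12)
revenue = 100 + 5 * months + np.random.default_rng(1).normal(0, 2, 12)

slope, intercept = np.polyfit(months, revenue, 1)     # fitted trend
forecast = intercept + slope * np.arange(12, 15)      # next three months

# A crude "prescriptive" layer: recommend action based on the trend
recommendation = "scale up campaigns" if slope > 0 else "investigate decline"
print(np.round(forecast, 1), recommendation)
```

Real systems replace the straight line with far richer models and many more signals, but the pipeline shape (historical data in, forecast and recommendation out) is the same.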

This ability to learn is one of the key differences between AI and simple automation. An automated system may still be able to parse a lot of data, provided the data is structured properly and the system is designed to address the specific needs at hand, according to analytics firm Avora. For instance, a simple reporting tool will update itself with new information over time, but it won't be able to provide new insight into changing data unless someone builds a dashboard that allows it to do so.

Likewise, simple automation cannot answer general queries related to diminishing performance and other factors. This typically requires hours, if not days, of work by a data analyst, who more than likely will still only collate a limited amount of data. A properly trained AI engine, on the other hand, could produce answers to multiple questions within minutes.

Perhaps the best way to view AI's contribution to analytics is through one of the oldest analytical methods of all: the cost-benefit model. On the cost side, it requires a fairly sizeable upfront investment, particularly if you are building the underlying infrastructure from scratch. But this cost will amortize over time as output scales. On the benefit side, AI can crunch vastly more data than even an army of analysts could, and it can draw data from an untold number of sources to identify problems and/or opportunities that would otherwise remain hidden.
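The cost-benefit framing above reduces to simple break-even arithmetic. The figures in the sketch below are purely illustrative assumptions, not real pricing:

```python
# Back-of-the-envelope sketch of the cost-benefit model described above.
# All figures are hypothetical assumptions, not vendor pricing.
def breakeven_month(upfront_cost, monthly_benefit, monthly_run_cost=0.0):
    """Return the first month in which cumulative net benefit covers the
    upfront investment, or None if it never breaks even."""
    net_per_month = monthly_benefit - monthly_run_cost
    if net_per_month <= 0:
        return None
    month = 0
    cumulative = -upfront_cost
    while cumulative < 0:
        month += 1
        cumulative += net_per_month
    return month

# E.g. a $500k build yielding $60k/month in analyst time saved,
# less $10k/month in cloud spend, pays back in 10 months.
print(breakeven_month(500_000, 60_000, 10_000))  # → 10
```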

Ultimately, it will push analytics capabilities into the hands of knowledge workers who can best benefit from the insights tailored to their unique challenges, making the entire organization more efficient and productive.

More here:

How AI is changing the nature of analytics - VentureBeat

Posted in Ai | Comments Off on How AI is changing the nature of analytics – VentureBeat

New Intel XPU Innovations Target HPC and AI – Business Wire

Posted: at 10:37 pm

SANTA CLARA, Calif.--(BUSINESS WIRE)--At the 2021 International Supercomputing Conference (ISC), Intel is showcasing how the company is extending its lead in high performance computing (HPC) with a range of technology disclosures, partnerships and customer adoptions. Intel processors are the most widely deployed compute architecture in the world's supercomputers, enabling global medical discoveries and scientific breakthroughs. Intel is announcing advances in its Xeon processor for HPC and AI as well as innovations in memory, software, exascale-class storage, and networking technologies for a range of HPC use cases.

More: Intel Data Center News | Intel's HPC GM Trish Damkroger Keynotes 2021 ISC (Keynote Replay) | "Accelerating the Possibilities with HPC" (Keynote Presentation)

"To maximize HPC performance, we must leverage all the compute resources and technology advancements available to us," said Trish Damkroger, vice president and general manager of High Performance Computing at Intel. "Intel is the driving force behind the industry's move toward exascale computing, and the advancements we're delivering with our CPUs, XPUs, oneAPI toolkits, exascale-class DAOS storage, and high-speed networking are pushing us closer toward that realization."

Advancing HPC Performance Leadership

Earlier this year, Intel extended its leadership position in HPC with the launch of 3rd Gen Intel Xeon Scalable processors. The latest processor delivers up to 53% higher performance across a range of HPC workloads, including life sciences, financial services and manufacturing, as compared to the previous generation processor.

Compared to its closest x86 competitor, the 3rd Gen Intel Xeon Scalable processor delivers better performance across a range of popular HPC workloads. For example, when comparing a Xeon Scalable 8358 processor to an AMD EPYC 7543 processor, NAMD performs 62% better, LAMMPS performs 57% better, RELION performs 68% better, and Binomial Options performs 37% better. In addition, Monte Carlo simulations run more than two times faster, allowing financial firms to achieve pricing results in half the time. Xeon Scalable 8380 processors also outperform AMD EPYC 7763 processors on key AI workloads, with 50% better performance across 20 common benchmarks. HPC labs, supercomputing centers, universities and original equipment manufacturers who have adopted Intel's latest compute platform include Dell Technologies, HPE, Korea Meteorological Administration, Lenovo, Max Planck Computing and Data Facility, Oracle, Osaka University and the University of Tokyo.

Integration of High Bandwidth Memory within Next-Gen Intel Xeon Scalable Processors

Workloads such as modeling and simulation (e.g., computational fluid dynamics, climate and weather forecasting, quantum chromodynamics), artificial intelligence (e.g., deep learning training and inferencing), analytics (e.g., big data analytics), in-memory databases, storage and other workloads power humanity's scientific breakthroughs. The next generation of Intel Xeon Scalable processors (code-named Sapphire Rapids) will offer integrated High Bandwidth Memory (HBM), providing a dramatic boost in memory bandwidth and a significant performance improvement for HPC applications that run memory bandwidth-sensitive workloads. Users can power through workloads using High Bandwidth Memory alone or in combination with DDR5.

Customer momentum is strong for Sapphire Rapids processors with integrated HBM, with early leading wins such as the U.S. Department of Energy's Aurora supercomputer at Argonne National Laboratory and the Crossroads supercomputer at Los Alamos National Laboratory.

"Achieving results at exascale requires the rapid access and processing of massive amounts of data," said Rick Stevens, associate laboratory director of Computing, Environment and Life Sciences at Argonne National Laboratory. "Integrating high-bandwidth memory into Intel Xeon Scalable processors will significantly boost Aurora's memory bandwidth and enable us to leverage the power of artificial intelligence and data analytics to perform advanced simulations and 3D modeling."

Charlie Nakhleh, associate laboratory director for Weapons Physics at Los Alamos National Laboratory, said: "The Crossroads supercomputer at Los Alamos National Labs is designed to advance the study of complex physical systems for science and national security. Intel's next-generation Xeon processor Sapphire Rapids, coupled with High Bandwidth Memory, will significantly improve the performance of memory-intensive workloads in our Crossroads system. The [Sapphire Rapids with HBM] product accelerates the largest complex physics and engineering calculations, enabling us to complete major research and development responsibilities in global security, energy technologies and economic competitiveness."

The Sapphire Rapids-based platform will provide unique capabilities to accelerate HPC, including increased I/O bandwidth with PCI express 5.0 (compared to PCI express 4.0) and Compute Express Link (CXL) 1.1 support, enabling advanced use cases across compute, networking and storage.

In addition to memory and I/O advancements, Sapphire Rapids is optimized for HPC and artificial intelligence (AI) workloads, with a new built-in AI acceleration engine called Intel Advanced Matrix Extensions (AMX). Intel AMX is designed to deliver significant performance increase for deep learning inference and training. Customers already working with Sapphire Rapids include CINECA, Leibniz Supercomputing Centre (LRZ) and Argonne National Lab, as well as the Crossroads system teams at Los Alamos National Lab and Sandia National Lab.

Intel Xe-HPC GPU (Ponte Vecchio) Powered On

Earlier this year, Intel powered on its Xe-HPC-based GPU (code-named Ponte Vecchio) and is in the process of system validation. Ponte Vecchio is an Xe architecture-based GPU optimized for HPC and AI workloads. It will leverage Intel's Foveros 3D packaging technology to integrate multiple IPs in-package, including HBM and other intellectual property. The GPU is architected with compute, memory, and fabric to meet the evolving needs of the world's most advanced supercomputers, like Aurora. Ponte Vecchio will be available in an OCP Accelerator Module (OAM) form factor and subsystems, serving the scale-up and scale-out capabilities required for HPC applications.

Extending Intel Ethernet For HPC

At ISC 2021, Intel is also announcing its new High Performance Networking with Ethernet (HPN) solution, which extends Ethernet technology capabilities for smaller clusters in the HPC segment by using standard Intel Ethernet 800 Series Network Adapters and Controllers, switches based on Intel Tofino P4-programmable Ethernet switch ASICs and the Intel Ethernet Fabric suite software. HPN enables application performance comparable to InfiniBand at a lower cost while taking advantage of the ease of use offered by Ethernet.

Commercial Support for DAOS

Intel is introducing commercial support for DAOS (Distributed Asynchronous Object Storage), an open-source software-defined object store built to optimize data exchange across Intel HPC architectures. DAOS is at the foundation of the Intel exascale storage stack, previously announced by Argonne National Laboratory, and is being used by Intel customers such as LRZ and JINR (Joint Institute for Nuclear Research).

DAOS support is now available to partners as an L3 support offering, which enables partners to provide a complete turnkey storage solution by combining it with their services. In addition to Intel's own data center building blocks, early partners for this new commercial support include HPE, Lenovo, Supermicro, Brightskies, Croit, Nettrix, Quanta and RSC Group.

More information about Intels participation at ISC 2021, including a full list of talks and demos, can be found at https://hpcevents.intel.com.

About Intel

Intel (Nasdaq: INTC) is an industry leader, creating world-changing technology that enables global progress and enriches lives. Inspired by Moore's Law, we continuously work to advance the design and manufacturing of semiconductors to help address our customers' greatest challenges. By embedding intelligence in the cloud, network, edge and every kind of computing device, we unleash the potential of data to transform business and society for the better. To learn more about Intel's innovations, go to newsroom.intel.com and intel.com.

For performance claims, see [43, 47, 108] at http://www.intel.com/3gen-xeon-config. Results may vary.

Intel Corporation. Intel, the Intel logo and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

Follow this link:

New Intel XPU Innovations Target HPC and AI - Business Wire

Posted in Ai | Comments Off on New Intel XPU Innovations Target HPC and AI – Business Wire

British AI solution for breast cancer screening arrives in the UAE – Mobihealth News

Posted: at 10:37 pm

A UK-based applied science company, focused on supporting cancer diagnostics with machine learning, is bringing its award-winning solution to the United Arab Emirates (UAE).

Kheiron Medical Technologies has partnered with the UAE's Atlas Medical to launch Mia, its breast cancer screening solution that supports radiologists in reducing errors in cancer detection, in the region.

Launched in 2019, Mia, which stands for "mammography intelligent assessment," uses artificial intelligence (AI) to work as a second reader in the workflow. Should the radiologist (the first, human reader) and Mia disagree on a result, a second radiologist is brought in for a third opinion.
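The double-reading workflow described here is essentially an arbitration protocol, and can be sketched in a few lines. This is an illustration of the workflow as reported, not Kheiron's implementation; all names and the recall/no-recall framing are assumptions for the example.

```python
# Illustrative sketch of a double-reading screening workflow: the AI
# acts as the second reader, and any disagreement with the first
# (human) reader escalates the case to a second radiologist.
# Hypothetical names; not Kheiron's actual implementation.
def screening_decision(first_reader_recall, ai_recall, arbiter_recall=None):
    """Return 'recall' or 'no recall'; require an arbiter on disagreement."""
    if first_reader_recall == ai_recall:
        # Both readers agree, so their shared verdict stands.
        return "recall" if first_reader_recall else "no recall"
    if arbiter_recall is None:
        raise ValueError("readers disagree: a second radiologist must arbitrate")
    # The second radiologist's third opinion decides the case.
    return "recall" if arbiter_recall else "no recall"

print(screening_decision(True, True))         # both readers agree → "recall"
print(screening_decision(True, False, True))  # arbiter resolves disagreement
```

The design point is that the AI never overrides the human reader on its own: agreement fast-tracks the case, while disagreement always adds human review rather than removing it.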

"Our mission at Kheiron is to support breast screening professionals in the fight against breast cancer with proven and effective AI-enabled tools," said Alex Hamlow, Kheiron's Chief Commercial Officer. "We're excited that Mia is the first AI independent reader solution available for use within the breast screening community in the UAE. Based on its performance in the UK and Europe, Mia represents a major breakthrough in helping radiologists dramatically improve breast cancer detection and patient outcomes."

He continued: "According to the WHO's International Agency for Research on Cancer, breast cancer was the most prevalent of all cancers detected in the UAE in 2020, accounting for 38.8% of all new cancer cases detected in women. I'm excited that Mia can help both radiologists and the women they care for."

WHY IT MATTERS

There are several advantages to using AI technology in cancer screening, says the company.

Using AI technology for the second screening frees up clinicians to spend time with patients, reduces the pressure to find more radiologists, and has the potential to screen greater numbers of women more quickly.

It also prevents unnecessary biopsies.

According to a statement by Kheiron, Mia has learnt to read mammograms to the same level of detail as a consulting radiologist.

ON THE RECORD

"I am delighted that Kheiron Medical Technologies is bringing their breakthrough AI platform for breast screening, Mia, to the Gulf region, and that the UK's Department for International Trade played a role in making this happen," said Simon Penney, Her Majesty's Trade Commissioner for the Middle East. "Kheiron's technology brings pioneering AI to the frontline, freeing up clinicians' time and helping to save lives."

V. Kalyanasundaram, General Manager for Atlas Medical in Dubai and the Northern Emirates, added: We are looking forward to bringing the Mia solution to the breast screening community throughout the UAE. It has tremendous potential to transform breast screening for radiologists and for women.

By improving radiologist productivity and empowering breast screening professionals to detect potential malignancies more accurately and quickly, Mia ultimately will help save more lives in the fight against breast cancer.

In addition to the UAE, Mia is reportedly set to launch soon in Qatar and Oman, pending local requirements.

Visit link:

British AI solution for breast cancer screening arrives in the UAE - Mobihealth News

Posted in Ai | Comments Off on British AI solution for breast cancer screening arrives in the UAE – Mobihealth News

Futurism Dimensions to Reimagine the Future of e-Commerce With AI and Integrated Digital Marketing at the eCom World – PRNewswire

Posted: at 10:37 pm

PISCATAWAY, N.J., June 28, 2021 /PRNewswire/ -- Futurism is set to present its next-gen e-commerce platform at the eCom World. As a trusted digital transformation (DX) partner for 1,000-plus Fortune companies, Futurism aims to reinvent the entire e-commerce landscape with its flagship e-commerce platform, Dimensions.

"We are excited to be a part of the world's largest e-commerce event. As a digital transformation leader, it gives us immense pleasure to share space with some of the most amazing and successful technology leaders, digital transformation experts and DTC brands from across the world,"said Mr. Sheetal Pansare, CEO of Futurism Technologies Inc.

Dimensions and AI

"Artificial Intelligence (AI) is not the next big thing in the tech world. In fact, it is already one. Giants like Google, Amazon and Microsoft have invested heavily in AI in recent years to deliver personalized and super-enriching experiences to their users. What many retailers and e-commerce businesses fail to realize here is thatyou don't have to be an Amazon-size business to enjoy the benefits of AI and win at customer service and personalization,"added Mr. Pansare.

Dimensions can help bridge that gap, since the platform is armed with advanced AI capabilities to help deploy intelligent, human-like virtual agents or chatbots that use machine learning (ML) and NLP algorithms to come up with smart and personalized answers to customer concerns and queries.

Today, e-commerce is all about providing personalized experiences and smarter customer journeys. It is like walking into a store where everyone already knows what you want. Dimensions leverages AI to understand customer behavior and help create personalized recommendations and/or messages based on a user's history. This type of personal touch is the key to winning at e-commerce in today's digital-first age.
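History-based personalization of the kind described can be sketched in miniature. The Dimensions platform's internals are not public, so the toy recommender below is purely illustrative: it suggests unseen items from the categories a user browses most, which is the simplest form of the idea.

```python
# Toy sketch of history-based personalization (illustrative only; not
# the Dimensions platform). Recommends unseen catalog items from the
# categories that dominate the user's browsing history.
from collections import Counter

def recommend(history, catalog, k=2):
    """Suggest up to k unseen items from the user's most-browsed categories.

    history and catalog are lists of (item, category) pairs.
    """
    seen = {item for item, _ in history}
    top_categories = [c for c, _ in Counter(cat for _, cat in history).most_common()]
    picks = []
    for category in top_categories:
        for item, cat in catalog:
            if cat == category and item not in seen and item not in picks:
                picks.append(item)
                if len(picks) == k:
                    return picks
    return picks

history = [("trail shoes", "outdoor"), ("tent", "outdoor"), ("blender", "kitchen")]
catalog = [("headlamp", "outdoor"), ("stockpot", "kitchen"), ("sleeping bag", "outdoor")]
print(recommend(history, catalog))  # outdoor items first, since they dominate history
```

Real systems layer collaborative filtering, embeddings and real-time signals on top, but the core loop of inferring preference from history and ranking the catalog against it is the same.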

"We are excited to revolutionize the e-commerce landscape through our future-ready e-commerce platform. As a part of our digital transformation offering, we have reinforced Dimensions with out-of-the-box automation and AI capabilities that can help to predict inventory and automate critical processes like procurement, warehouse optimization, payments and support,"said Mr. Pansare.

About Futurism:

Futurism Technologies Inc. has evolved as a trusted digital transformation (DX) partner for more than 1,000 Fortune organizations spanning healthcare, retail, manufacturing, and various other verticals. Futurism provides DX services across the entire value chain, including e-commerce, digital infrastructure, business processes, digital customer engagement, and cybersecurity. We've spent nearly two decades leveraging the disruptive power of technology and digital tools, including AI, IoT, Cloud, Mobile, RPA, Blockchain, etc., to help our clients embrace their DX goals in a non-disruptive and secure way.

Learn more about Futurism Dimensions at https://www.futurismdimensions.com/.

Contact:

Chris Garner
Associate VP
Mobile: +1-732-377-3717
Email: [emailprotected]


SOURCE Futurism Technologies, Inc.

See more here:

Futurism Dimensions to Reimagine the Future of e-Commerce With AI and Integrated Digital Marketing at the eCom World - PRNewswire

Posted in Ai | Comments Off on Futurism Dimensions to Reimagine the Future of e-Commerce With AI and Integrated Digital Marketing at the eCom World – PRNewswire

With cloud and AI, IBM broadens 5G deals with Verizon and Telefonica – Yahoo Finance

Posted: at 10:37 pm

By Clara-Laeila Laudette and Supantha Mukherjee

BARCELONA (Reuters) - IBM will offer telecom operators Verizon and Telefonica new services ranging from running 5G over a cloud platform to using artificial intelligence, the U.S. technology company said on Monday.

Big technology players such as Microsoft and Amazon are vying for a share of 5G revenue by offering telecom operators next-generation software tools.

IBM, using technology it obtained from buying software firm Red Hat, will offer the telecom operators cloud services to run their networks and assist them in selling products tailored to customers. No financial terms were disclosed about the tie-ups, which broadened IBM's existing partnerships with the two firms.

A cloud platform uses software instead of physical equipment to perform network functions, helping telecom operators build 5G networks faster, reduce costs and sell customised services.

"It's a disruptive time in this particular market segment, telcos are trying to position themselves as the destination for services like augmented reality, machine learning and AI," Darell Jordan-Smith, vice president of Redhat, told Reuters.

On the AI front, IBM and Spain's Telefonica have created a virtual assistant that they say will remove friction points, such as long wait times, by automating the handling of frequently asked questions and tasks like billing.

"We see this as an existential moment for telco operators with 5G: architecturally, they're looking to gain more control on their platforms and rethink their network as a digital world rather than a structured physical model," said Steve Canepa, IBM's general manager for communications business.

(Reporting by Clara-Laeila Laudette and Supantha Mukherjee in Barcelona)

Visit link:

With cloud and AI, IBM broadens 5G deals with Verizon and Telefonica - Yahoo Finance

Posted in Ai | Comments Off on With cloud and AI, IBM broadens 5G deals with Verizon and Telefonica – Yahoo Finance
