
Category Archives: Ai

For smart use of health care AI, start with the right questions – American Medical Association

Posted: September 12, 2021 at 9:29 am

Computers can sometimes show a surprising lack of common sense. That's why asking the right questions, using the right data and guarding against the introduction of bias are key to making augmented intelligence (AI), often called artificial intelligence, a valuable decision-support tool.

"Your clinicians can program the protocol you want for the alerts and predictions you want," said Ben Maisano, chief digital and innovation officer for New Jersey's Atlantic Health System, an AMA Health System Program member.

"If we're trying to reduce hospital stays, or we're trying to understand if our accountable care organization is profitable, or if social determinants of health data help us better take care of someone or predict a risk for readmission, you've got to understand what problems you are trying to solve, map that to the outcome you want, and then go fill in the blanks," said Maisano, who is a co-founder of CareDox, a platform that connected schools, pediatric practices and families in 38 states.

Maisano spoke during a virtual meeting of the AMA Insight Network that covered how to get a health care AI program up and running, and how to use it properly.

The network aims to help AMA Health System Program members gain early access to innovative ideas, get feedback from their peers, network and learn about pilot opportunities.

"Don't start with, 'We want to use AI for radiology triage,' because you're starting in the middle," Maisano explained. "We take the approach of: What are our problems? And: Where are we well-positioned to execute?"

Asking the right questions is fundamental, said Edward Lee, MD, the executive vice president for information technology and chief information officer for the Permanente Federation, and associate executive director of the Permanente Medical Group, an AMA Health System Program member.

I think Ben said it really well actually: What is the problem that we're trying to solve? Dr. Lee said. That's the first question we need to ask and answer before you embark on any program.

Other must-ask questions include:

Another important point to remember, Dr. Lee said, was that AI is intended to enhance, assist, complement and augment human intelligence and not necessarily to replace human intelligence.

He outlined these three main buckets of opportunity for health care AI.

Computer vision, which includes any specialty that deals with digitized images, such as radiology, dermatology, ophthalmology and pathology, is considered fertile ground for health care AI.

Predictive analytics, which involves using hundreds or thousands of data points to understand the likelihood of a particular event occurring. It can be used to predict hospital readmissions, fall risks and emerging COVID-19 hot spots.
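
The article doesn't show any particular model, but the essence of predictive analytics (turning a handful of data points into an event probability) can be sketched with a tiny logistic model trained by gradient descent. The features, values, and labels below are invented for illustration, not clinical data:

```python
import math

def train_logistic(rows, labels, lr=0.1, epochs=2000):
    """Fit a tiny logistic-regression risk model by stochastic gradient descent."""
    w = [0.0] * len(rows[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted probability of the event
            err = p - y                       # gradient of the logistic loss
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def readmission_risk(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Invented data points: [prior admissions, length of stay in days] -> readmitted?
X = [[0, 2], [1, 3], [4, 9], [5, 7], [0, 1], [3, 8]]
y = [0, 0, 1, 1, 0, 1]
w, b = train_logistic(X, y)

high = readmission_risk(w, b, [5, 8])  # resembles the readmitted cases
low = readmission_risk(w, b, [0, 2])   # resembles the non-readmitted cases
```

Real systems use hundreds or thousands of such data points and far more careful validation; the point is only that the output is a probability a clinician can act on.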

Natural language processing, which involves interpreting unstructured data and can be used, for example, to search records for patients who have not had needed follow-up care.
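
The follow-up example lends itself to a sketch. This is not any vendor's actual system, just a minimal illustration of the idea, with invented note text and a keyword pattern standing in for real NLP:

```python
import re

# Illustrative only: the phrases and patient records are invented.
FOLLOW_UP = re.compile(r"\b(follow[- ]?up|repeat (scan|imaging)|recheck)\b",
                       re.IGNORECASE)

def needs_follow_up(note: str) -> bool:
    """Flag unstructured notes that recommend follow-up care."""
    return bool(FOLLOW_UP.search(note))

notes = {
    "patient_a": "Nodule noted; recommend follow-up CT in 6 months.",
    "patient_b": "Normal exam, no further action needed.",
    "patient_c": "Please recheck blood pressure at next visit.",
}
completed = {"patient_c"}  # patients who already had the recommended visit

flagged = sorted(p for p, note in notes.items()
                 if needs_follow_up(note) and p not in completed)
# flagged == ["patient_a"]
```

Production NLP handles negation, abbreviations and context far beyond a regular expression, but the workflow (scan free text, cross-check against completed care) is the same shape.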

Scale is also important, Dr. Lee said. He explained that a system needs to have a diverse and robust team processing a diverse and robust stream of data that is representative of the patient population it serves.

"You want a diverse group of people with diversity of thought, because if you do things in a narrow or tunnel-vision way, bias can be introduced much more easily," Dr. Lee said. "If you don't look for bias, you'll never find it."
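
One simple, concrete way to "look for bias" is to break a model's performance out by patient subgroup rather than reporting a single overall number. A minimal sketch, with invented predictions and group names:

```python
from collections import defaultdict

def per_group_accuracy(records):
    """records: (group, predicted, actual) triples; returns accuracy per group."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        hits[group] += int(predicted == actual)
    return {g: hits[g] / totals[g] for g in totals}

# Invented predictions for illustration: the model does well on one
# subgroup and poorly on another.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 0, 0),
]
acc = per_group_accuracy(records)
gap = max(acc.values()) - min(acc.values())  # a large gap is a red flag
```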

He described a New England Journal of Medicine study, Automated Identification of Adults at Risk for In-Hospital Clinical Deterioration, by researchers from Kaiser Permanente. They examined a health care AI program that alerted clinicians when it appeared a high-risk patient's condition was deteriorating.

The intervention group had a 16% lower mortality rate, lower rates of intensive-care-unit admission, and shorter hospital stays. Patients were also less likely to die without a palliative care referral.

The panelists agreed that developing an effective health care AI program takes time.

"You don't jump into the future," said Maisano. "You have one foot in the present and one foot in the future, and you've got to bring people along at their comfort level."

Learn more about the AMA's commitment to helping physicians harness health care AI in ways that safely and effectively improve patient care.

Go here to see the original:

For smart use of health care AI, start with the right questions - American Medical Association

Posted in Ai | Comments Off on For smart use of health care AI, start with the right questions – American Medical Association

Meet the AI that busts open classic songs to make new tunes – TechRadar

Posted: at 9:29 am

Whatever kind of music you listen to, the art of remixing is an integral part of popular music today. From its earliest roots in musique concrète and dancehall artists in '60s Jamaica to the latest Cardi B remix, repurposing and rearranging songs to create new material has long been a way for musicians to discover new and exciting sounds.

In the early days of electronic music production, music was remixed by means of physical tape manipulation, a process mastered by pioneering sound engineers like Delia Derbyshire, King Tubby and Lee Scratch Perry. And the process largely remained unchanged until the advent of digital music.

Now, remixing is on the verge of another big transformation, and AI company Audioshake is leading the charge. We spoke to Audioshake co-founder Jessica Powell about how the company is using a sophisticated algorithm to help music makers mine the songs of the past to create new material, and about potential future applications for the tech in soundtracking funny TikTok videos, advertising, and making virtual live music concerts sound great.

Speaking to TechRadar in between appearances at a conference in Italy, Powell told us how Audioshake's technology works.

"We use AI to break songs into their parts, which are known by producers as stems, and stems are relevant because there's already lots you can do with them, like in movies and commercials," she explained.

Working with these stems allows producers to manipulate individual elements of a song or soundtrack, for instance, lowering the volume of the vocal when a character on screen begins speaking. Stems are also used in everything from creating karaoke tracks, which cut out the lead vocal completely so that you can front your favorite band for three minutes, to the remixing of an Ed Sheeran song to a reggaeton beat.

And, as Powell explains, stems are being used in even more ways today. Spatial audio technologies like Dolby Atmos take individual parts of a track and place them in a 3D sphere, and when you're listening with the right speakers or a great soundbar, it sounds like the music is coming at you from all angles.

So, if stems are used so widely in the music industry and beyond, why is Audioshake even needed? Well, record labels don't always have access to a track's stems, and before the 1960s, most popular music was made using monophonic and two-track recording techniques. That means the individual parts of these songs (the vocals, the guitars, the drums) couldn't be separated.

That's where Audioshake comes in. Take any song, upload it to the company's database, and its algorithm analyses the track and splits it into any number of stems that you specify; all you have to do is select the instruments it should be listening out for.

We tried it for ourselves with David Bowie's Life on Mars. After selecting the approximate instruments we wanted the algorithm to listen out for (in this case, vocals, guitar, bass, and drums), it took all of 30 seconds for it to analyze the song and break it up into its constituent parts.
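
Audioshake's model is proprietary and far more sophisticated than anything shown here, but the underlying notion of pulling individual parts out of a mixture can be illustrated with a toy: two "instruments" as pure tones at different frequencies, whose levels are recovered from the mix by correlating against each frequency (all numbers are invented for the sketch):

```python
import math

SR = 8000  # sample rate in Hz
N = 800    # one tenth of a second of audio

def tone(freq, amp):
    """A pure sinusoid standing in for an 'instrument'."""
    return [amp * math.sin(2 * math.pi * freq * n / SR) for n in range(N)]

def amplitude_at(signal, freq):
    """Estimate the amplitude of the sinusoid at `freq` by correlation."""
    s = sum(x * math.sin(2 * math.pi * freq * n / SR) for n, x in enumerate(signal))
    c = sum(x * math.cos(2 * math.pi * freq * n / SR) for n, x in enumerate(signal))
    return 2 * math.hypot(s, c) / N

# A "mix" of a low bass tone and a higher vocal-range tone.
bass, vocal = tone(110, 0.8), tone(440, 0.5)
mix = [b + v for b, v in zip(bass, vocal)]

bass_amp = amplitude_at(mix, 110)   # recovers ~0.8
vocal_amp = amplitude_at(mix, 440)  # recovers ~0.5
```

Real instruments overlap heavily in frequency, which is exactly why learned source separation is needed; this toy only shows that a mixture carries recoverable information about its parts.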

From there you can hear each instrument separately: the drums, the droning bass notes, the iconic whining guitar solo, Rick Wakeman's flamboyant piano playing, or just Bowie's vocal track. And the speed at which Audioshake is able to do this is breathtaking.

"If you're a record label or music publisher, you can kind of create an instrumental on the fly," Powell explains. "You don't have to go into a DAW (Digital Audio Workstation) like Ableton or Pro Tools to reassemble the song to create the instrumental; it's just right here on demand."

So, how does it work? Well, the algorithm has been trained to recognize and isolate the different parts of a song. It's surprisingly accurate, especially when you consider that the algorithm isn't technically aware of the difference between, say, a cello and a low-frequency synth. There are areas that do trip it up, though.

Heavy autotune (Powell uses the example of artists like T-Pain) will be identified as a sound effect as opposed to a vocal stem. The algorithm can't yet learn from user feedback, so this is something that needs to be addressed by developers, but the fact that these stems can be separated at all is seriously impressive.

Sadly, Audioshake's technology isn't currently available to the humble bedroom producer. Right now, the company's clients are mainly rights holders like record labels or publishers, and while that might be disappointing to anyone who'd love to break apart an Abba classic ahead of the group's upcoming virtual residency in London, the tech is being utilized in some really interesting ways.

One song management company, Hipgnosis, which sees songs as investment opportunities as much as works of art, owns the rights to an enormous back catalogue of iconic songs by artists ranging from Fleetwood Mac to Shakira.


Using Audioshake, Hipgnosis is creating stems for these old songs and then giving them to its stable of songwriters to try to "reimagine those songs for the future, and introduce them to a new generation," as Powell puts it, adding: "You can imagine some of those beats in the hands of the right person that can do really cool things with them."

Owning the rights to these songs makes these things possible, and opening up the technology to the public could be a legal quagmire, with people using and disseminating artistic creations that don't belong to them. It's not just a legal issue, though; for Audioshake it's an ethical issue too, and Powell makes it clear that the technology should work for the artists, not against them.

She says the company "really wanted to make sure that we respected the artists' wishes. If they want to break open their songs and find these new ways to monetize them, we want to be there to help them do that. And if they're not cool with that, we're not going to be the ones helping someone to break open their work without permission."

"Take Van Gogh's Sunflowers," she adds. "We're not just going to go and pop out a sunflower if you don't want us to."

Traditional pop remixes are just the start, though. There are lots of potential applications for Audioshake that could be opened up in the future, and TikTok could be one of the more lucrative.

Giving TikTok creators the opportunity to work with stems to mash up tracks in entertaining ways could be an invaluable tool for a social media platform that's based on short snippets of audio and video.

There's also the potential to improve the sound quality of livestreamed music. When an artist livestreams one of their concerts on a platform like Instagram, unless they're able to use a direct feed from the sound desk, the listener is going to hear a whole load of crowd noise and distortion.

"Watch something on Instagram Live and you don't even stick around; you'd almost prefer to watch the music video because it's bad audio," says Powell. Using Audioshake (and with a small delay) you could feasibly turn down the crowd noise, bring the bass down, and bring the vocals up for a clearer audio experience.

Looking even further into the future, there's the potential to use the technology to produce adaptive music; that is, music that changes depending on your activities.

"This is more futuristic, but imagine you're walking down the street listening to Drake," says Powell. "And then you start running and that song transforms; it's still the Drake song, but it's now almost like a different genre, and that comes from working with the parts of the song, like increasing the intensity of the drumbeat as you exercise."
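
No such product exists yet; purely as a thought experiment, the mapping Powell describes (activity intensity driving per-stem volume) could be as simple as the sketch below, where the stem names and gain curves are entirely invented:

```python
def stem_gains(intensity: float) -> dict:
    """Map a 0..1 activity intensity to per-stem volume gains.

    Purely illustrative: stem names and curves are invented."""
    intensity = min(max(intensity, 0.0), 1.0)  # clamp out-of-range sensor values
    return {
        "drums": 0.6 + 0.4 * intensity,   # drive the beat harder as you speed up
        "bass":  0.7 + 0.3 * intensity,
        "vocals": 1.0,                    # keep the vocal steady
    }

walking = stem_gains(0.2)
running = stem_gains(0.9)
```

A playback engine would then apply these gains to the separated stems before summing them back into a single output.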

It sounds like adaptive music is a little way off, but we know that audio can already be manipulated based on your environment. Just look at adaptive noise-cancelling headphones like the Sony WH-1000XM4, which can turn the level of noise cancellation up as you enter noisy environments, and other headphone models have similar features that automatically adjust the volume of your music based on your surroundings. The XM4's Speak-to-Chat feature is another example, with the headphones listening out for the sound of your voice.

The applications for running headphones could go even further than this. With the Apple AirPods 3 rumored to have biometric sensors that will measure everything from your breathing rate to how accurately you can recreate a yoga pose, adaptive music could even be used to bolster your workouts when your headphones detect a drop-off in effort, and stem-mining technologies like Audioshake could make it easier for artists to monetize their music in this way.

While adaptive music is unlikely to reach our ears for a few years yet, the idea of breaking open songs in order to make them more interactive and to personalize them is just as exciting as the next generation of musicians mining the songs of the past to create new sounds. Here's hoping that one day, humble bedroom musicians will be able to mine these songs too, like plucking flowers from a Van Gogh vase.


AI Helps to Earlier Detect Brain Injury in Survivors of Cardiac Arrest – Polsky Center for Entrepreneurship and Innovation – Polsky Center for…

Posted: at 9:29 am

Published on Tuesday, September 7, 2021

The AI system improves prognostication for surviving patients with Hypoxic Ischemic Brain Injury (HIBI) after cardiac arrest by enabling earlier treatment. (Image: iStock/monsitj)

University of Chicago researchers have developed a patent-pending technique using deep learning, a form of artificial intelligence (AI), to better assess hypoxic-ischemic brain injury in survivors of cardiac arrest.

Over the past three decades, Maryellen Giger, A.N. Pritzker Distinguished Service Professor of Radiology, has been conducting research on computer-aided diagnosis, including computer vision, machine learning, and deep learning, in the areas of breast cancer, lung cancer, prostate cancer, lupus, and bone diseases.

She also is a cofounder of Quantitative Insights, which started through the 2010 New Venture Challenge at the Polsky Center. The company produced QuantX, which in 2017 became the first FDA-cleared machine-learning-driven system to aid in cancer diagnosis (CADx). In 2019, it was named one of TIME magazine's inventions of the year and was bought by Qlarity Imaging.

Backed by this wealth of knowledge, she is today applying her research to neuro-imaging in collaboration with Fernando Goldenberg, a professor of neurology and neurosurgery, as well as the co-director of the comprehensive stroke center and director of neuroscience critical care at UChicago Medicine. The research team also includes collaborators Jordan Fuhrman, a PhD student in Giger's lab in the Committee on Medical Physics and the Department of Radiology, and Ali Mansour, an assistant professor of neurology and neurosurgery with expertise in advanced clinical neuroimaging and machine learning.

The goal of this multi-department research was to see if machine-learning could help clinicians at the hospital better assess hypoxic-ischemic brain injury (HIBI), which can occur when the brain does not receive enough oxygen during cardiac arrest. The extent of this damage depends on several variables, including the baseline characteristics of the brain and its vascular supply, duration of oxygen deprivation, and cessation of blood flow.

"While the neurological injury that follows cardiac arrest is largely a function of HIBI, the process of determining a patient's projected long-term neurological function is a multifaceted endeavor that involves multiple clinical and diagnostic tools. In addition to the bedside clinical exam, head CT (HCT) is often the earliest and most readily available imaging tool," explained Goldenberg.

In their work, the researchers hypothesized that the progression of HIBI could be identified in scans completed on average within the first three hours after the heart resumes normal activity.

To test this, the team used machine learning, specifically a deep transfer learning approach (which Fuhrman had been using to assess COVID-19 in thoracic CTs), to predict from the first normal-appearing HCT scan whether or not HIBI would progress. The deep learning technique, for which a patent is pending, automatically assessed the first HCT scan to identify the progression of HIBI.

"This is important as currently there is no imaging-based method or analysis to identify early on whether or not a patient will exhibit HIBI, and while more data is needed to further confirm the efficacy of the AI-based method, the results to date are very promising," said Fuhrman.

"The findings in patients' first HCT may be too subtle to be picked up by the human eye," said Giger. "However, a computer looking at the complete image may be able to distinguish between those patients who will progress and eventually show evidence of HIBI and those who will not."

According to the researchers, the AI system can help in the process of prognostication in survivors of cardiac arrest by identifying patients who may differentially benefit from early interventions, a step toward precision medicine in this patient population. If prospectively validated, it could also allow for the neuroprognostic process to start sooner than the current standard timeline, said Mansour. Additionally, the AI algorithm is expected to be easily integrated into various commercially available image analysis software packages that are already deployed in clinical settings.

// Polsky Patented is a column highlighting research and inventions from University of Chicago faculty.


Use of AI in marketing: present and future – The Drum

Posted: at 9:29 am

You will work with the AI. The use of artificial intelligence is becoming increasingly commonplace in the marketing field. There are also a lot of buzzwords flying around with regard to AI: machine learning, Google AI, Skynet, etc. None of them explains the most critical aspects of AI in marketing or why you should get used to working with your cyber colleagues.

Looking at AI in the present, the picture is overall positive:

The global market value of AI is $93.53 billion and is expected to grow at a CAGR of 40% until 2028;

Marketing agencies and in-housing companies are raising their investments in AI;

The outlook on AI is overall positive, with AI expected to create more jobs.
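
Taking the figures above at face value, the compounding works out as follows (treating 2021 as the base year is an assumption on our part):

```python
base_value = 93.53       # $ billions, per the figure above
cagr = 0.40              # 40% compound annual growth rate
years = 2028 - 2021      # assuming 2021 as the base year

projected = base_value * (1 + cagr) ** years
# roughly $986 billion by 2028 under these assumptions
```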

Talking about AI is complicated. There is no one singular way to define the AI that we are using today. Rather, it's a list of types of artificial intelligence that are each good at some tasks. And not all of them are relevant to marketers.

Currently, we have these most commonly used AIs; some of them you have already heard about:

Machine learning (ML): the most popular, widely used type of AI.

Natural language processing (NLP): a type of AI that processes and interprets human language.

Expert systems (ES): an AI that is trained to store data in a single field and extract information based on inference rules.

A BIG disclaimer incoming: there are more types of AI in varying stages of development. These are only examples: the 3 most popular and most relevant to marketing. If you want to go down the rabbit hole and learn more about AI, here's a good place to start.

Machine learning in marketing. That's what really interests us and brings the highest benefits. We have covered what it's like now, in the present. ML needs a lot of data to test its algorithms against. And while we are already seeing a lot of positive impact on data analytics, digital advertising planning and other areas, there are future hurdles ML has to overcome.

AI use in creative marketing fields.

Creativity takes a human. Even though this article is quite technical and uses a lot of sources, it still took creativity to construct. Your social media post took inspiration from a picture you took, a video you filmed or a text you read. It still takes a human who instinctively understands context to produce creative work.

That doesn't mean that AI can't participate. In the world of journalism, news giants like the BBC, Forbes, The Washington Post, MSN and others are using some form of AI to help them get first drafts of stories that a human later fixes. AI here saves time by providing grounds to build on instead of a blank page.

Still, it's a long way to go until true creativity. While ML can extrapolate from existing information, it would take many more algorithmic connections to truly produce creative work. We're not there yet. The question also remains: do we need to go that far?

Increased AI use in data processing and how 3rd-party cookie removal will affect that.

Right now, the most popular uses of AI in marketing are all technical and largely numerical. Based on studies, there are 5 categories where the use of ML and NLP AIs is prevalent (and somewhat successful). Some examples I'm sure you're familiar with are:

Notice that 4 out of 5 categories rely heavily on user data. That's why we all have cookies helping algorithms watch our behavior online: so ML can learn about us and help adjust business strategies accordingly.

What happens when cookies no longer apply? It won't be quite the apocalypse that we tend to imagine, but it will get harder to track users. GDPR has already curbed individual user tracking significantly, and this trend is unlikely to stop. Google is set to introduce cohort tracking as opposed to personal tracking, but that story will unravel in 2022 at the earliest.

All we know now is that AI needs data to learn. It will still be learning for sure, but it might do so in different ways.

Move towards hyperautomation.

Hyperautomation is a term that describes the constructive and planned automation of as many business processes as possible. The use of AI is only one of many possible tools to attain the status of a hyperautomated company.

Why is this a hurdle? The biggest question is not the fear of employees losing their jobs. Earlier we saw that AI is in fact a job-creating force. The issue is training and readiness for working hand in hand with AIs.

70% of participants in the study done by Drift and the Marketing Artificial Intelligence Institute say they don't have the necessary knowledge or training to adopt AI, let alone fully automate their business processes.

Marketing employers will have to contend with this fact and train their employees, and themselves, if they want to keep up with the competition.

AIs are here to stay. Our cyber partners are helping us achieve productivity and keep up with the pace of modern digital marketing. Our greatest challenge right now is to prepare for them adequately. Keep up the creativity, but do it in a smart, organized and productive fashion, with AI helping you.


A massive regional gap is opening around AI – Axios

Posted: at 9:29 am

A handful of superstar U.S. metro areas are leading the way in AI, while much of the rest of the country is at risk of being left behind.

Why it matters: AI can enhance productivity and growth in multiple sectors, but as a technology that tends to centralize around a handful of talent hubs, it could also increase regional economic disparity across the country.

What's happening: In a new report released today, researchers at the Brookings Institution assessed the geographic distribution of AI talent, investment and research around the U.S.

The other side: More than half of the 261 U.S. metro areas surveyed by Brookings exhibit no significant AI activities at all.

What they're saying: "AI is at the stage where it is highly dependent on a super-specific talent base, and there's also a heavy need for massive computing power," says Mark Muro, policy director at Brookings' Metropolitan Policy Program and a co-author of the report.

What to watch: Muro notes that many of the AI early adopters benefited from federal investments in R&D that could potentially be spread more evenly around the country.

The bottom line: "The winner-takes-all dynamics of AI are strong," notes Muro, and pushing against them won't be easy.


Don’t Sleep On The Lawn, There’s An AI-Powered, Flamethrower-Wielding Robot About – Hackaday

Posted: at 9:29 am

You know how it goes: you're just hanging out in the yard, there aren't enough hours in the day, and weeding the lawn is just such a drag. Then an idea pops into your head. How about we attach a gas-powered flamethrower to a robot arm, drive it around on a tank-tracked robotic base, and have it operate autonomously with an AI brain? Yes, that sounds like a good idea. Let's do that. And so, [Dave Niewinski] did exactly that with his Ultimate Weed Killing Robot.

And you thought the robot overlords might take a more subtle approach and take over the world one coffee machine at a time? No, straight for the fully-autonomous flamethrower it is, then.

This build uses a Kinova Gen 3 six-axis arm, mounted on an Agile-X Robotics Bunker base. Control is via a Connect Tech Rudi-NX box, which contains an Nvidia Jetson Xavier NX edge AI computing engine. Wow, that was a mouthful!

Connectivity from the controller to the base is via CAN bus but, sadly, there's no mention of how the robot arm controller is hooked up. At least this particular model sports an effector-mounted camera system, which can feed straight into the Jetson, simplifying the build somewhat.

To start the software side of things, [Dave] took a video using his mobile phone while walking his lawn. Next he used Roboflow to highlight image stills containing weeds, which were in turn used to help train a vision AI system. The actual AI training was written in Python using Google Colaboratory, which is itself based on the awesome Jupyter Notebook (see also JupyterLab; if you haven't tried that yet, and if you do any data science at all, you'll kick yourself for not doing so!). Colaboratory would not be all that useful for this by itself, except that it gives you direct, free GPU access via the cloud, so you can use it for AI workloads without needing fancy (and currently hard-to-get) GPU hardware on your desk.
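
Before any training run, the labeled stills need to be split into training and validation sets. The sketch below is not from [Dave]'s code, just a generic, reproducible split with invented filenames and labels:

```python
import random

def split_dataset(items, val_fraction=0.2, seed=42):
    """Shuffle labeled items and carve off a validation set."""
    items = list(items)
    random.Random(seed).shuffle(items)  # seeded so the split is reproducible
    n_val = max(1, int(len(items) * val_fraction))
    return items[n_val:], items[:n_val]  # (train, val)

# Invented labels: True = weed present in the frame, False = clear lawn.
labeled = [(f"frame_{i:04d}.jpg", i % 3 == 0) for i in range(50)]
train, val = split_dataset(labeled)
```

Keeping the validation frames out of training is what lets you tell whether the model recognizes weeds or has merely memorized your lawn.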

Details of the hardware may be a little sparse, but at least the software required can be found on the WeedBot GitHub. It's not like most of us will have this exact hardware lying around anyway. For a more complete description of this terrifying contraption, check out the video after the break.


6-Year-Old Becomes the Youngest AI Programmer to Beat Boredom – Interesting Engineering

Posted: at 9:29 am

A six-year-old called Kautilya Katariya of Northampton, U.K., was recently granted a Guinness World Record for being the youngest Python programmer, reported an IBM blog.

According to the blog, Katariya began studying IBM course materials to understand computer programming and concepts of coding languages like Python in order to do something useful during COVID-19. By November 2020, he had completed five different courses in Python and AI from IBM, including Foundations of AI, Python for Data Science, and a course from the IBM Cognitive Class.

If that sounds impressive, it's because it is.

Analytics India Mag interviewed the young boy to find out what got him interested in programming in the first place. Katariya explained that he noticed that "everything pointed (...) that cool things are either run by a computer programmer or made using programming."

Katariya also added that his parents helped him on his path to becoming the world's youngest programmer by providing books and educational resources on programming and artificial intelligence. When he had used up all these resources, they gave him a laptop and an internet connection so he could further pursue his studies.

Katariya also had a special message for young people everywhere interested in programming: "I think computer programming is really fun and is similar to solving puzzles. If we think that we are just trying to solve puzzles, coding won't feel that difficult and you may start enjoying it."

In the future, Katariya hopes to learn and work in the cognitive computing field.

Katariya's parents were also interviewed by Analytics India Mag and had some good advice for parents everywhere. They suggested that parents consider coding as another mental exercise to develop logical thinking and problem-solving capabilities instead of viewing it as an additional subject to teach them.

If you are interested in programming, read our article on the best way to learn how to code.


RENCI Collaboration to Leverage AI and ML for DOE Workflows – HPCwire

Posted: at 9:29 am

Sept. 10, 2021 - The Department of Energy's (DOE) advanced Computational and Data Infrastructures (CDIs), such as supercomputers, edge systems at experimental facilities, massive data storage, and high-speed networks, are brought to bear to solve the nation's most pressing scientific problems, including assisting in astrophysics research, delivering new materials, designing new drugs, creating more efficient engines and turbines, and making more accurate and timely weather forecasts and climate change predictions.

Increasingly, computational science campaigns are leveraging distributed, heterogeneous scientific infrastructures that span multiple locations connected by high-performance networks, resulting in scientific data being pulled from instruments to computing, storage, and visualization facilities.

However, since these federated services infrastructures tend to be complex and managed by different organizations, domains, and communities, both the operators of the infrastructures and the scientists that use them have limited global visibility, which results in an incomplete understanding of the behavior of the entire set of resources that science workflows span.

"Although scientific workflow systems like Pegasus increase scientists' productivity to a great extent by managing and orchestrating computational campaigns, the intricate nature of the CDIs, including resource heterogeneity and the deployment of complex system software stacks, poses several challenges in predicting the behavior of the science workflows and in steering them past system and application anomalies," said Ewa Deelman, research professor of computer science and research director at the University of Southern California's Information Sciences Institute and lead principal investigator (PI). "Our new project, Poseidon, will provide an integrated platform consisting of algorithms, methods, tools, and services that will help DOE facility operators and scientists to address these challenges and improve the overall end-to-end science workflow."

Under a new DOE grant, Poseidon aims to advance the knowledge of how simulation and machine learning (ML) methodologies can be harnessed and amplified to improve the DOE's computational and data science.

Research institutions collaborating on Poseidon include the University of Southern California, the Argonne National Laboratory, the Lawrence Berkeley National Laboratory, and the Renaissance Computing Institute (RENCI) at the University of North Carolina at Chapel Hill.

Poseidon will add three important capabilities to current scientific workflow systems: (1) predicting the performance of complex workflows; (2) detecting and classifying infrastructure and workflow anomalies and explaining the sources of these anomalies; and (3) suggesting performance optimizations. To accomplish these tasks, Poseidon will explore the use of novel simulation, ML, and hybrid methods to predict, understand, and optimize the behavior of complex DOE science workflows on DOE CDIs.
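Capability (2), anomaly detection, can be pictured with a deliberately simple sketch. This is an illustration of the general idea only, not Poseidon's actual method: flag a workflow task whose duration deviates sharply from the recent history of similar tasks.

```python
import statistics

def flag_anomalies(durations, window=10, threshold=3.0):
    """Flag task durations that deviate sharply from the recent trend.

    A task is anomalous when it lies more than `threshold` standard
    deviations from the mean of the preceding `window` observations.
    """
    anomalies = []
    for i in range(window, len(durations)):
        recent = durations[i - window:i]
        mean = statistics.mean(recent)
        stdev = statistics.stdev(recent)
        if stdev > 0 and abs(durations[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies

# Steady ~60-second transfer tasks, with one stall at index 15.
timings = [60.0, 61.2, 59.8, 60.5, 60.1, 59.9, 60.3, 61.0, 60.4, 59.7,
           60.2, 60.6, 59.5, 60.8, 60.0, 210.0, 60.1, 60.3]
print(flag_anomalies(timings))  # the stalled task stands out
```

Poseidon's real models are far richer, combining simulation, ML, and expert knowledge, but even this rolling z-score captures the core notion: spotting a stalled task amid normal run-to-run variation.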

Poseidon will explore hybrid solutions in which data collected from DOE and NSF testbeds, as well as from an ML simulator, will be strategically fed into an ML training system.

"In addition to creating a more efficient timeline for researchers, we would like to provide CDI operators with the tools to detect, pinpoint, and efficiently address anomalies as they occur in the complex DOE facilities landscape," said Anirban Mandal, Poseidon co-PI and assistant director for network research and infrastructure at RENCI, University of North Carolina at Chapel Hill. "To detect anomalies, Poseidon will explore real-time ML models that sense and classify anomalies by leveraging underlying spatial and temporal correlations and expert knowledge, combine heterogeneous information sources, and generate real-time predictions."

RENCI will play a pivotal role in the Poseidon project. RENCI researchers Cong Wang and Komal Thareja will lead project efforts in data acquisition from the DOE CDI and NSF testbeds (FABRIC and Chameleon Cloud) and emulation of distributed facility models, enabling ML model training and validation on the testbeds and DOE CDI. Additionally, Poseidon co-PI Anirban Mandal will lead the project portion on performance guidance for optimizing workflows.

Successful Poseidon solutions will be incorporated into a prototype system with a dashboard that will be used for evaluation by DOE scientists and CDI operators. Poseidon will enable scientists working on the frontier of DOE science to efficiently and reliably run complex workflows on a broad spectrum of DOE resources and accelerate time to discovery.

Furthermore, Poseidon will develop ML methods that can self-learn corrective behaviors and optimize workflow performance, with a focus on explainability in its optimization methods.

Working together, the researchers behind Poseidon will break down the barriers between complex CDIs, accelerate the scientific discovery timeline, and transform the way that computational and data science are done.

Please visit theproject websitefor more information.

Source: RENCI

Excerpt from:

RENCI Collaboration to Leverage AI and ML for DOE Workflows - HPCwire

Posted in Ai | Comments Off on RENCI Collaboration to Leverage AI and ML for DOE Workflows – HPCwire

Artificial Intelligence and the Humanization of Medicine InsideSources – InsideSources

Posted: September 8, 2021 at 10:25 am

If you want to imagine the future of healthcare, you can do no better than to read cardiologist and bestselling author Eric Topol's trilogy on the subject: The Creative Destruction of Medicine, The Patient Will See You Now, and Deep Medicine.

Deep Medicine bears a paradoxical subtitle: How Artificial Intelligence Can Make Healthcare Human Again. The book describes the growing interaction of human and machine brains. Topol envisions a symbiosis, with people and machines working together to assist patients in ways that neither can do alone. In the process, healthcare providers will shed some of the mind-numbing rote tasks they endure today, giving them more time to focus on patients.

I recorded an interview with Topol in which we discuss his books. The podcast is titled "Healthcare's Reluctant Revolution" because one of Topol's themes is that healthcare is moving too slowly to integrate AI and machine learning (ML) into medicine, a sluggishness that diminishes the quality and quantity of available care.

The first of Topol's books, Creative Destruction, described how technology would transform medicine by digitizing data on individual human beings in great detail. In The Patient Will See You Now, he explored how this digital revolution can allow patients to take greater control over their own health and their own care. With this democratization of care, medicine's ancient paternalism could fade. (In 2017, Topol and I co-authored an essay, "Anatomy and Atrophy of Medical Paternalism.")

Deep Medicine is qualitatively different from the other two books. It has an almost-mystical quality. Intelligent machines engaging in AI and ML arrive at conclusions in ways even their programmers can barely comprehend, if at all. Topol gives a striking example.

Take retinal scans of a large number of people, the sort of scans that your optometrist or ophthalmologist takes. Now, show the scans to the top ophthalmologists in the world and ask for each scan, "Is this person a man or a woman?" The doctors will answer correctly approximately 50 percent of the time. In other words, they have no idea and could do just as well by tossing a coin. Now, run those same scans through a deep neural network (a type of AI/ML system). The machine will answer correctly around 97 percent of the time, for no known reason.
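The underlying phenomenon, a model latching onto a faint statistical signal that individual human inspection misses, can be demonstrated on synthetic data. The sketch below is purely illustrative: random feature vectors stand in for image-derived features, and the "hidden signal" is planted by hand; it is not retinal data or Topol's example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for image-derived features: 2,000 "scans", 50
# features each. One feature carries a faint class signal; inspecting
# any single sample by eye would show only noise.
n, d = 2000, 50
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, d))
X[:, 7] += 0.9 * (2 * y - 1)  # the planted, hidden signal

# Logistic regression fit by plain gradient descent.
w = np.zeros(d)
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / n

accuracy = np.mean(((X @ w) > 0) == y)
print(round(accuracy, 3))
```

With pure noise the model would hover near 50 percent; here it does far better because it aggregates weak evidence across thousands of examples, which is roughly what the retinal networks are presumed to be doing, at vastly greater scale.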

Topol explains how such technologies can improve care. Today, radiologists spend their days intuitively searching for patterns in x-rays, CT scans, and MRIs. In the future, much of the pattern-searching will be automated (and more accurate), and radiologists (who seldom interact with patients today) will have much greater contact with patients.

Today, dermatologists are relatively few in number, so much of the earlier stages of skin care are done by primary care physicians, who have less ability to determine, say, whether a mole is potentially cancerous. The result can be misdiagnosis, delayed diagnosis, and the unnecessary use of dermatologists' time. In the future, primary care doctors will likely screen patients using smart diagnostic tools, thereby wasting less of patients' and dermatologists' time and diagnosing more accurately.

In Deep Medicine, Topol tells the story of a newborn experiencing seizures that could lead to brain damage or death. Routine diagnostics and medications weren't helping. Then, a blood sample was sent to a genomics institute that combed through a vast amount of data in a short time and identified a rare genetic disorder that's treatable through dietary restrictions and vitamins. The child went home, seizure-free, in 36 hours.

Unfortunately, healthcare's adoption of such technologies is unduly slow. In our conversation, Topol noted that we have around 150 medical schools, some quite new, and yet they don't have any AI or genomics essentially in their curriculum.

Topol lists some of the hopes that observers invest in AI: machines outperforming doctors at all tasks, diagnosing the undiagnosable, treating the untreatable, seeing the unseeable on scans, predicting the unpredictable, classifying the unclassifiable, eliminating workflow inefficiencies, eliminating patient harm, curing cancer, and more.

A realistic sort of optimist, Topol writes: "Over time, AI will help propel us toward each of these objectives, but it's going to be a marathon without a finish line."

See original here:

Artificial Intelligence and the Humanization of Medicine InsideSources - InsideSources

Posted in Ai | Comments Off on Artificial Intelligence and the Humanization of Medicine InsideSources – InsideSources

Why AI & ML must be part of diversity initiatives – The Drum

Posted: at 10:25 am

Marketers are embracing diversity, but are they overlooking a critical opportunity? Merkle's Tracie Kambies discusses why many may be missing the mark when it comes to AI and ML, and what they must do about it.

Diversity, equity and inclusion (DEI) are more important than ever in our dynamic world, and marketers and their digital agencies have embraced DEI with enthusiasm and care. They are building their teams to be diverse, changing their brands to be inclusive, and shaping their messages to be just and ethical. DEI rightly must inform every part of the business, especially as more people and organizations realize its importance in our current moment in history. Rethinking our ethics around data privacy, personal information and fluid identities is already in motion. Marketers are on it.

But are they missing the mark when it comes to artificial intelligence (AI) and machine learning (ML)?

AI and ML are exciting tools for the modern marketer riding the bleeding edge of technology. AI and ML can be used to hyper-target customer segments, learn from ridiculously deep data sets, improve content, react to the behaviors of millions of consumers and predict how we learn, shop and buy. They are game changers. The best AI/ML experts have their hands full learning how to leverage the technology, keeping up with new developments and changing their business models to adapt to new applications. What hasn't always happened, or been done well, is considering the ethics of what they are building.

AI and ML have their own special challenges in encoding ethics into their artificial brains. The point of AI/ML in marketing is to create bias toward inciting consumer action, such as transacting. The models are built to learn on their own. The incentives are aligned toward marketing KPIs such as increasing sales or building loyalty and engagement. They are fed data sets filled with dimensions of past action, demographics, financials, channels and more. What they don't usually get is ethical instructions to guide their outputs. AI and ML, in their current forms, are ethically blind.

The ethically blind AI presents openings for dangerous outcomes. It may produce segments or targets that encode undesirable biases around race, gender, sexual preference, identity, age and a host of other dimensions, discrimination we wouldn't tolerate in other aspects of life. The ethically blind AI could reverse much of the positive impact we are achieving through our human-curated activities in our branding or our team-building practices. A modern, ethical organization simply can't afford to have a non-ethical actor so prominently directing the organization's marketing behavior.

While we don't know exactly what ethical AI/ML looks like, we can begin to rethink how we approach the discipline with an ethics-based mindset.

Firstly, we need to inject ethical thinking into our design of AI and ML. We need to be conscious of how ethics plays into our algorithms and examine the outputs for moral content. We need to bring a diverse and inclusive mindset to our AI teams, and the best way to ensure this is to build AI teams that are diverse and inclusive themselves.

We also need to change our incentives so that ethical behavior in AI/ML is encouraged, and so that competing incentives don't impede our ability to act ethically. Marketers and their agencies need to start asking themselves some exploratory questions:

Are our DEI objectives clearly accounted for in our AI and ML programs? Are our ethics part of our design process and governance?

Do we have a way to measure the ethical impact of our AI and ML outputs? Can we track them before they go to market?

Do marketers' incentives and KPIs need to be adjusted to accommodate ethical approaches to employing AI/ML?

Are we bringing a diverse and inclusive perspective to our AI and ML programs? More precisely, are our teams themselves diverse and inclusive in their composition?
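One concrete starting point for the measurement question above is a screening statistic on model outputs. The sketch below uses hypothetical group labels and decisions and applies the "four-fifths rule" long used in employment-discrimination screening; it is a blunt instrument, offered only as an illustration that ethical impact can be measured before outputs go to market.

```python
def selection_rates(outcomes):
    """Per-group rate at which a model selects (targets) consumers.

    `outcomes` maps a group label to a list of 0/1 model decisions.
    """
    return {g: sum(v) / len(v) for g, v in outcomes.items()}

def disparate_impact_ratio(outcomes, reference):
    """Ratio of each group's selection rate to the reference group's.

    The four-fifths rule treats ratios below 0.8 as a flag for review.
    """
    rates = selection_rates(outcomes)
    return {g: rates[g] / rates[reference] for g in rates}

# Hypothetical audit of a targeting model's decisions.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # 75% selected
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],   # 37.5% selected
}
ratios = disparate_impact_ratio(decisions, reference="group_a")
flagged = [g for g, r in ratios.items() if r < 0.8]
print(flagged)  # group_b falls below the four-fifths threshold
```

A real governance process would use richer fairness metrics and domain review, but even a check this small turns "are we ethical?" from a sentiment into a trackable KPI.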

So much of AI/ML is designed and performed by agencies and their holding companies, so it is essential for marketers and agencies to be leaders in bringing ethics to these disciplines. We in the industry take great pride in being innovators in this bright and brilliant field. We know it's not only the future but the now. We must embed our ethics and our deeply-held desires for justice within it now. It's our duty.

Tracie Kambies is global analytics leader, Merkle.

For more, sign up for The Drum's daily US newsletter here.

Follow this link:

Why AI & ML must be part of diversity initiatives - The Drum

Posted in Ai | Comments Off on Why AI & ML must be part of diversity initiatives – The Drum
