
Category Archives: Artificial Intelligence

Argentine project analyzing how data science and artificial intelligence can help prevent the outbreak of Covid-19 | Chosen from more than 150…

Posted: September 26, 2021 at 5:00 am

Can data science and artificial intelligence help prevent COVID-19 outbreaks? That is the focus of an Argentine research project, coordinated by the Interdisciplinary Center for Studies of Science, Technology and Innovation (CIECTI), which was selected from more than 150 proposals from around the world and will receive funding from Canada and Sweden.

The project is called Arphai (an English acronym referring to research on data science and artificial intelligence for epidemic prevention) and its goal is to develop tools, models and recommendations that help predict and manage epidemic events such as Covid-19, but are replicable for other viruses.

The initiative originated with CIECTI, a civil association created by the National University of Quilmes (UNQ) and the Latin American Faculty of Social Sciences (FLACSO Argentina), and was selected along with eight other proposals based in Africa, Latin America and Asia. Only two were selected in Latin America: Arphai in Argentina and another project in Colombia.

Based on this recognition, the project will be funded by Canada's International Development Research Centre (IDRC) and the Swedish International Development Cooperation Agency (Sida), under the Global South AI4COVID programme.

The project is coordinated by Ciecti and involves the Planning and Policy Secretariat of the Ministry of Science, Technology and Innovation and the National Information Systems Directorate of the Access to Health Secretariat of the Argentine Ministry of Health.

Also working on the initiative are researchers, technical teams from the public administration, and members of 19 institutions, including universities and research centers, across six Argentine provinces and the city of Buenos Aires.

The main goal is to develop technological tools based on artificial intelligence and data science that, applied to electronic health records (EHR), make it possible to anticipate and detect potential epidemic outbreaks and support preventive decision-making in public health regarding Covid-19.
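The outbreak-detection idea can be illustrated with a simple surveillance heuristic. The sketch below is not Arphai's actual method, which the article does not describe in detail; it merely flags any day whose EHR-derived case count exceeds the trailing baseline mean by three standard deviations, with all numbers synthetic.

```python
# Hypothetical sketch: flagging a potential outbreak from daily case counts
# aggregated out of electronic health records. The rule (count > baseline
# mean + 3 standard deviations over a trailing window) is a standard
# surveillance heuristic, not Arphai's published algorithm.
from statistics import mean, stdev

def outbreak_alert(daily_counts, baseline_days=7, z=3.0):
    """Return indices of days whose count exceeds baseline mean + z * sd."""
    alerts = []
    for i in range(baseline_days, len(daily_counts)):
        window = daily_counts[i - baseline_days:i]
        mu, sd = mean(window), stdev(window)
        if daily_counts[i] > mu + z * max(sd, 1.0):  # floor sd to avoid zero
            alerts.append(i)
    return alerts

counts = [4, 5, 3, 6, 4, 5, 4, 5, 21, 30]  # synthetic daily case counts
print(outbreak_alert(counts))               # the late spike should be flagged
```

In practice a surveillance system would also correct for weekday effects and reporting delays; this only shows the shape of the computation.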

Among the tasks carried out, progress was also made on a pilot project to implement the electronic health record designed by the Ministry of Health (the Integrated Health History, HSI) in the health networks of two municipalities on the outskirts of Buenos Aires, in order to synthesize what is learned and design a scale-up strategy at the national level.

Another goal is to prioritize the perspective of equity, particularly gender equity, a criterion expressed in efforts to mitigate bias in the prototypes developed (models, algorithms), in the analysis of and attention to the databases used, and in the composition of the teams: 60% of the project is made up of women, many of whom are in leadership positions.

Arphai operates under strict standards of confidentiality, protection and anonymity of data and is endorsed by the Ethics Committee of the National University of Quilmes (UNQ).


Artificial intelligence predicts the risk of recurrence for women with the most common breast cancer – EurekAlert

Posted: at 5:00 am

21-09-2021, New York, NY and Paris, France. The RACE AI study conducted by Gustave Roussy and the startup Owkin, as part of the AI for Health Challenge organized by the Ile-de-France Region in 2019, was presented as a proffered paper at ESMO (European Society of Medical Oncology). This study shows that, thanks to deep learning analysis applied to digitized pathology slides, artificial intelligence can classify patients with localized breast cancer as at high or low risk of metastatic relapse in the next five years. This AI could thus become an aid to therapeutic decision-making and spare low-risk women unnecessary chemotherapy and its impact on personal, professional and social lives. This is one of the first proofs of concept illustrating the power of an AI model for identifying parameters associated with relapse that the human brain could not detect.

With 59,000 new cases per year, breast cancer ranks first among cancers in women, clearly ahead of lung cancer and colorectal cancer. It is also the cancer that causes the greatest number of deaths in women, accounting for 14%[1] of female cancer deaths in 2018. 80%[1] of breast cancers are said to be hormone-sensitive or hormone-dependent. But these cancers are extremely heterogeneous, and about 20% of patients will relapse with distant metastasis.

RACE AI is a retrospective study that was conducted on a cohort of 1400 patients managed at Gustave-Roussy between 2005 and 2013 for localized hormone-sensitive (HR+, HER2-) breast cancer. These women were treated with surgery, radiotherapy, hormone therapy, and sometimes chemotherapy to reduce the risk of distant relapse.

Chemotherapy is not routinely administered because not all women will benefit from it due to a naturally favorable prognosis. The practitioner's choice is based on clinico-pathological criteria (age of the patient, size and aggressiveness of the tumor, lymph node invasion, etc.) and the decision to administer or not adjuvant chemotherapy varies between oncology centers. Genomic signatures exist today to help identify women who benefit from chemotherapy, but they are not recommended by the French National Authority for Health and are not reimbursed by the French National Health Insurance (although they are included on the RIHN reimbursement list), which makes their access and use heterogeneous in France.

Gustave Roussy and Owkin have taken up the challenge of proposing a new method that is simple, inexpensive and easy to use in all oncology centers as a therapeutic decision-making tool. Ultimately, the goal is to direct patients identified as being at high risk towards new innovative therapies and to avoid unnecessary chemotherapy for low-risk patients.

In the RACE AI study, Owkin's Data Scientists, guided by Gustave Roussy's research physicians, developed an AI model capable of reliably assessing the risk of relapse with an AUC of 81% to help the practitioner determine the benefit/risk balance of chemotherapy. This calculation is based on the patient's clinical data combined with the analysis of stained and digitized histological slides of the tumor. These slides, used daily in pathology departments by anatomo-pathologists, contain very rich and decisive information for the management of cancer. It is not necessary to develop a new technique or to equip a specific technical platform. The only essential equipment is a slide scanner, which is a common piece of equipment in laboratories. Like an office scanner that digitizes text, this scanner digitizes the morphological information present on the slide.
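The 81% AUC figure can be made concrete. The snippet below implements the standard rank-based definition of AUC, the probability that a randomly chosen relapsing patient is scored above a randomly chosen non-relapsing one; the scores and labels are synthetic, since the study's model and data are not public.

```python
# Illustrative only: rank-based AUC (Mann-Whitney U statistic divided by
# n_pos * n_neg), the same metric the study reports at 81%. The tiny
# label/score vectors below are synthetic.
def auc(labels, scores):
    """AUC = P(random positive is scored above a random negative)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Synthetic relapse-risk scores: higher score = higher predicted risk.
labels = [1, 1, 1, 0, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2, 0.1, 0.05]
print(round(auc(labels, scores), 2))  # 0.93: one positive is out-ranked once
```

An AUC of 0.5 would mean the model is no better than chance; 1.0 would mean perfect ranking of relapsing above non-relapsing patients.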

The results of this first study by the Owkin and Gustave Roussy teams open up strong prospects and next steps include prospectively validating the model on an independent cohort of patients treated outside Gustave Roussy. If the results are confirmed, through providing reliable information to clinicians, this AI tool will prove to be a valuable aid to therapeutic decisions.

[1] Institut national du cancer (France):

https://www.e-cancer.fr/Professionnels-de-sante/Les-chiffres-du-cancer-en-France/Epidemiologie-des-cancers/Les-cancers-les-plus-frequents/Cancer-du-sein

https://www.e-cancer.fr/Patients-et-proches/Les-cancers/Cancer-du-sein/Hormonotherapie

Source

ESMO 2021 Oral Session

Proffered paper: Translational research

Prediction of distant relapse in patients with invasive breast cancer from deep learning models applied to digital pathology slides

Presentation no. 1124O, Channel 5, 14:20-14:30, Sunday 19 September 2021

Speaker: Ingrid J. Garberis, Gustave Roussy

About Gustave Roussy

Classed as the leading European cancer centre and the fifth on the world stage, Gustave Roussy is a centre with comprehensive expertise devoted entirely to patients suffering from cancer. The Institute is a founding member of the Paris Saclay Cancer Cluster. It is a source of diagnostic and therapeutic advances. It caters for almost 50,000 patients per year and its approach is one that integrates research, patient care and teaching. It is specialized in the treatment of rare cancers and complex tumors and it treats all cancers in patients of any age. Its care is personalized and combines the most advanced medical methods with an appreciation of the patient's human requirements. In addition to the quality of treatment offered, the physical, psychological and social aspects of the patient's life are respected. 3,200 health professionals work on its two campuses: Villejuif and Chevilly-Larue. Gustave Roussy brings together the skills essential for the highest quality research in oncology: a quarter of patients treated are included in clinical trials.

For further information: http://www.gustaveroussy.fr/en, Twitter, Facebook, LinkedIn, Instagram

About Owkin

Owkin is a French-American startup that specialises in AI and federated learning for medical research. Owkin's mission is to connect the global healthcare industry through the safe and responsible use of data and application of artificial intelligence, for faster and more effective research. Owkin was founded in 2016 by Dr Thomas Clozel M.D., a clinical research doctor and former assistant professor in clinical hematology, and Dr Gilles Wainrib, Ph.D., a pioneer in the field of artificial intelligence in biology.

Owkin leverages life science and machine learning expertise to make drug development and clinical trial design more targeted and cost effective. Owkin applies its cutting-edge machine learning algorithms across a broad network of academic medical centers, creating dynamic models that not only predict disease evolution and treatment outcomes, but can also be used in clinical trials for enhanced analysis, high-value subgroup identification, development of novel biomarkers, and the creation of both synthetic control arms and surrogate endpoints. The end result? Better treatments for patients, developed faster, and at a lower cost.

Owkin has published several high-profile scientific achievements in top journals such as Nature Medicine, Nature Communications, Hepatology and presented results at conferences such as the American Society of Clinical Oncology.

For more information, please visit http://www.owkin.com, follow @OWKINscience on Twitter

Media contact: Talia Lliteras at Talia.Lliteras@owkin.com

Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.


Urgent action needed over artificial intelligence risks to human rights – UN News

Posted: September 16, 2021 at 5:48 am

Urgent action is needed, as it can take time to assess and address the serious risks this technology poses to human rights, warned the High Commissioner: "The higher the risk for human rights, the stricter the legal requirements for the use of AI technology should be."

Ms. Bachelet also called for AI applications that cannot be used in compliance with international human rights law to be banned. "Artificial intelligence can be a force for good, helping societies overcome some of the great challenges of our times. But AI technologies can have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people's human rights."

On Tuesday, the UN rights chief expressed concern about the "unprecedented level of surveillance across the globe by state and private actors", which she insisted was "incompatible" with human rights.

She was speaking at a Council of Europe hearing on the implications stemming from July's controversy over Pegasus spyware.

The Pegasus revelations were no surprise to many people, Ms. Bachelet told the Council of Europe's Committee on Legal Affairs and Human Rights, in reference to the widespread use of spyware commercialized by the NSO group, which affected thousands of people in 45 countries across four continents.

The High Commissioner's call came as her office, OHCHR, published a report that analyses how AI affects people's right to privacy and other rights, including the rights to health, education, freedom of movement, freedom of peaceful assembly and association, and freedom of expression.

The document includes an assessment of profiling, automated decision-making and other machine-learning technologies.

"The situation is dire," said Tim Engelhardt, Human Rights Officer, Rule of Law and Democracy Section, who was speaking at the launch of the report in Geneva on Wednesday.

"The situation has not improved over the years but has become worse," he said.

Whilst welcoming the European Union's agreement to strengthen the rules on control, and the growth of international voluntary commitments and accountability mechanisms, he warned: "We don't think we will have a solution in the coming year, but the first steps need to be taken now or many people in the world will pay a high price."

OHCHR Director of Thematic Engagement Peggy Hicks added to Mr. Engelhardt's warning, stating: "It's not about the risks in future, but the reality today. Without far-reaching shifts, the harms will multiply with scale and speed and we won't know the extent of the problem."

According to the report, States and businesses have often rushed to incorporate AI applications, failing to carry out due diligence. It states that there have been numerous cases of people being treated unjustly due to AI misuse, such as being denied social security benefits because of faulty AI tools, or arrested because of flawed facial recognition software.

The document details how AI systems rely on large data sets, with information about individuals collected, shared, merged and analysed in multiple and often opaque ways.

The data used to inform and guide AI systems can be faulty, discriminatory, out of date or irrelevant, it argues, adding that long-term storage of data also poses particular risks, as data could in the future be exploited in as yet unknown ways.

"Given the rapid and continuous growth of AI, filling the immense accountability gap in how data is collected, stored, shared and used is one of the most urgent human rights questions we face," Ms. Bachelet said.

The report also stated that serious questions should be raised about the inferences, predictions and monitoring by AI tools, including seeking insights into patterns of human behaviour.

It found that the biased datasets relied on by AI systems can lead to discriminatory decisions, which pose acute risks for already marginalized groups. "This is why there needs to be systematic assessment and monitoring of the effects of AI systems to identify and mitigate human rights risks," she added.

Biometric technologies are an increasingly go-to solution for States, international organizations and technology companies, and the report states they are an area where more human rights guidance is urgently needed.

These technologies, which include facial recognition, are increasingly used to identify people in real-time and from a distance, potentially allowing unlimited tracking of individuals.

The report reiterates calls for a moratorium on their use in public spaces, at least until authorities can demonstrate that there are no significant issues with accuracy or discriminatory impacts and that these AI systems comply with robust privacy and data protection standards.

The document also highlights a need for much greater transparency by companies and States in how they are developing and using AI.

"The complexity of the data environment, algorithms and models underlying the development and operation of AI systems, as well as intentional secrecy of government and private actors, are factors undermining meaningful ways for the public to understand the effects of AI systems on human rights and society," the report says.

"We cannot afford to continue playing catch-up regarding AI, allowing its use with limited or no boundaries or oversight, and dealing with the almost inevitable human rights consequences after the fact."

"The power of AI to serve people is undeniable, but so is AI's ability to feed human rights violations at an enormous scale with virtually no visibility. Action is needed now to put human rights guardrails on the use of AI, for the good of all of us," Ms. Bachelet stressed.


Elon is Right, AI is Hard: Five Pitfalls to Avoid in Artificial Intelligence | eWEEK – eWeek

Posted: at 5:48 am

During the recent Tesla AI Day event, Elon Musk said he discourages machine learning because it is really difficult: "Unless you have to use machine learning, don't do it."

Well, Musk may be right in his assessment, because machine learning is quite difficult to implement. Most companies want the benefits that artificial intelligence can bring to their business, but most don't have what it takes to get it up and running. As a result, as many as 85% of ML projects currently fail.

The takeaway from Musk's startling statement is that organizations can't treat AI, of which machine learning is a subset, like a part-time project. Many businesses make some important mistakes when trying to do AI. But it doesn't have to be this way. Below are five data points from Bin Zhao, Ph.D., Lead Data Scientist at Datatron, showing common mistakes in AI implementation.

Don't treat AI/ML development like traditional software development. Developing AI/ML models is a very different process from software development, but many organizations try to apply the traditional software development lifecycle to managing AI/ML models.

The machine learning development lifecycle (MLLC) takes much more time because of additional factors, including translating AI algorithms into compatible software code, unique infrastructure requirements, the need for frequent model iterations, and more. Compared to traditional programming, it can take more than five times as long. This means today's typical application release processes are simply not applicable.

This kind of tooling mistake introduces unnecessary delays and inefficiencies. In most IT situations, organizations can control the types of servers they buy, the software tools they use, the dependencies they build with, and so on.

Not so with AI/ML: organizations must allow their data scientists to use their preferred tools based on what they think will get the job done in the best way. Otherwise, they're likely to see all their data scientists leave.

DevOps is the union of software development and operations, with the goals of reducing solution delivery time and sustaining a good user experience through automation (e.g., CI/CD and monitoring). But DevOps experts don't know the nuances of working with ML models.

MLOps is a newer term describing how to apply DevOps principles to automate the building, testing and deployment of ML systems. The goal of MLOps is to unite ML application development and the operation of ML applications, making it easier for teams to deploy better models more often.
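As a minimal illustration of the MLOps idea, the sketch below shows the kind of automated promotion gate a CI/CD pipeline for ML might run after retraining. The function names, registry shape and threshold are invented for illustration; they are not any vendor's API.

```python
# Hypothetical MLOps promotion gate: after automated retraining, a candidate
# model is deployed only if it beats the production model on a held-out
# metric by a margin (so noise-level "improvements" don't trigger rollouts).
def should_promote(candidate_metric, production_metric, min_gain=0.01):
    """Promote only on a meaningful held-out improvement, not noise."""
    return candidate_metric >= production_metric + min_gain

def promote_if_better(registry, candidate_name, candidate_metric):
    """Update a toy in-memory model registry when the gate passes."""
    if should_promote(candidate_metric, registry["metric"]):
        registry["model"], registry["metric"] = candidate_name, candidate_metric
    return registry

registry = {"model": "v1", "metric": 0.81}
promote_if_better(registry, "v2", 0.84)   # clear improvement: promoted
promote_if_better(registry, "v3", 0.845)  # within the margin: v2 is kept
print(registry["model"])
```

A real pipeline would add data validation, model versioning and production monitoring around this gate; the gate itself is the piece that distinguishes MLOps from plain CI/CD.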

Data scientists need the right raw data for modeling, and they excel at uncovering data to build the best models to solve business challenges. However, that does not mean they are experts in all the intricacies of deploying models to work with existing applications and infrastructure. This causes friction between them, the engineering team and business leaders, resulting in low job satisfaction for data scientists.

Though highly skilled and trained, they must rely on others for deployment and production, which also means that they can't iterate rapidly. And since projects then shift to the engineering team, who don't have the ML skill set, it's easy for details to be missed, especially if the model is not making accurate predictions.

Academic AI research has historically focused on developing models and algorithms. Limited efforts have been devoted towards iterating and improving data sets for a specific business problem, operationalizing a machine learning model or monitoring models in production.

Building and deploying a machine learning model for solving a real world problem is much more than developing the algorithm itself.

Operationalizing ML models is hard but not impossible. Using a new model development lifecycle will streamline the process of model development and model production. It does this by helping data scientists, engineering and other involved teams make effective decisions in a timely manner. It will also help teams reduce production risks. A successful model governance tool can also help by standardizing processes, simplifying governance and significantly reducing risks.

About the Author:

Bin Zhao, Ph.D., Lead Data Scientist at Datatron


NHS Artificial Intelligence provider reports 160% growth, promising to transform healthcare with better data – PR Newswire UK

Posted: at 5:48 am

RwHealth's Data Science Platform combines artificial intelligence (AI) and machine learning to give healthcare providers the in-depth data to make better decisions and improve patient outcomes. By using RwHealth's analytical capabilities to make predictions, model treatment options, improve safety and increase efficiency, clinicians can deliver better, more widespread care.

One important capability driving the company's significant growth is its ability to anticipate hospital patient flows. Being able to model patient numbers has been vital during Covid, and the RwHealth platform has helped UK hospitals to anticipate demand, combat bed shortages and tackle worsening waiting list issues.
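As a toy illustration of the demand-forecasting capability described above (RwHealth's actual models are proprietary and certainly richer), the sketch below extrapolates next-day bed demand by fitting a linear trend to a short synthetic daily census with ordinary least squares.

```python
# Hedged sketch: forecast next-day bed demand from a recent daily census by
# fitting a linear trend (least squares on the day index). All numbers are
# synthetic; this only illustrates the kind of prediction involved.
def forecast_next(census):
    n = len(census)
    xs = range(n)
    x_mean = (n - 1) / 2
    y_mean = sum(census) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, census)) \
        / sum((x - x_mean) ** 2 for x in xs)
    return y_mean + slope * (n - x_mean)  # extrapolate one day ahead

beds = [100, 104, 108, 112, 116]  # synthetic 5-day occupied-bed counts
print(forecast_next(beds))         # a steady +4/day trend extrapolates to 120
```

Production systems would model seasonality, admissions mix and uncertainty intervals rather than a single straight line, but the planning use is the same: knowing tomorrow's likely demand before it arrives.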

RwHealth's success mirrors the wider growth of AI in healthcare, as stakeholders across the health ecosystem find new ways to increase efficiency, save money and deliver optimised clinical outcomes. In September 2020, NHSX, the organisation driving digital transformation in health and social care, announced a £250m investment into AI in UK healthcare.

Orlando Agrippa, CEO and Founder of RwHealth, said: "We've grown at an extraordinary rate, as healthcare providers realise how AI can improve patient outcomes, while helping to ease the wider pressures that the healthcare industry faces. It's important to tackle backlogs and bed capacity issues so that healthcare remains safe and steady as we attempt to recover post-Covid."

RwHealth client Andrew McLaren, Chief Medical Officer (interim) and Responsible Officer, adds: "RwHealth's platform enables us to solve bottlenecks before they become a problem. Faster treatment leads to better outcomes, so every moment the solution helps us save, no matter how small, has a tangible impact on patient care."

The protection of patient data is at the heart of the RwHealth proposition. While its primary customers are NHS Trusts and private healthcare organisations, the company does not hold any private patient information, nor any personally identifiable hospital data. With a dedicated Data Protection Officer, RwHealth uses suitable safeguards to protect all information from unauthorised access.

Today, RwHealth works with more than 70 UK and international providers, its AI technology having processed data from more than 10 million UK patients and 5.5 million across the Middle East and Australia. Founded in 2017, RwHealth is headquartered in London's Canary Wharf.

Photo - https://mma.prnewswire.com/media/1626850/Orlando_Agrippa.jpg

SOURCE RwHealth


New institute aims to unlock the secrets of corn using artificial intelligence – Agri-Pulse

Posted: at 5:48 am

Iowa State University researchers are growing two kinds of corn plants.

If you drive past the many fields near the university's campus in Ames, you can see row after row of the first. But the second exists in a location that hasn't been completely explored yet: cyberspace.

The researchers, part of the AI Institute for Resilient Agriculture, are using photos, sensor data and artificial intelligence to create digital twins of corn plants that, through analysis, can lead to a better understanding of their real-life counterparts. They hope the resulting software and techniques will lead to better management, improved breeding, and ultimately, smarter crops.

"We need to use lots of real-time, high-resolution data to make decisions," Patrick Schnable, an agronomy professor and director of Iowa State's Plant Sciences Institute, told Agri-Pulse. "Just collecting data for data's sake is not something that production ag wants. But data which is then linked to statistical models or other kinds of mathematical models that advise farmers on what to do has a lot of value."

The idea of machine learning systems that can improve or take over typical human tasks has been seeing increased attention over the past couple of years in many industries, including agriculture. In 2019, the National Science Foundation and several partner agencies, including the USDA, began establishing and funding AI institutes to research and advance artificial intelligence in fields like agriculture.

In their call for proposals, the organizations said AI could spur the next revolution in food and feed production.

"The Green Revolution of the 1960s greatly enhanced food production and resulted in positive impacts on food security, human health, employment, and overall quality of life for many," the solicitation said. "There were also unintended consequences on natural resource use, water and soil quality, and pest population expansion. An AI-based approach to agriculture can go much further by addressing whole food systems, inputs and outputs, internal and external consequences, and issues and challenges at micro, meso, and macro scales that include meeting policy requirements of ecosystem health."

Among the seven inaugural institutes established in 2020 were two focusing on agriculture: the AI Institute for Future Agricultural Resilience, Management and Sustainability at the University of Illinois at Urbana-Champaign, and the AI Institute for Next Generation Food Systems at the University of California, Davis. The 2021 lineup includes the AIIRA and the Institute for Agricultural AI for Transforming Workforce and Decision Support (AgAID) at Washington State University.

Lakshmi Attigala, a senior scientist and lab manager at Iowa State University, prepares a corn plant to be photographed.

The AIIRA, which received $20 million in funding from these governmental organizations, plans to pool the expertise of researchers at Iowa State, Carnegie Mellon University, the University of Arizona, New York University, George Mason University, the Iowa Soybean Association, the University of Nebraska-Lincoln and the University of Missouri to study the intersection of plant science, agronomics and AI.

The institute hopes to develop AI algorithms that can take all of the collectible data from a field, whether gathered by ground robots, drones or satellites, and analyze it to create tools farmers can use to improve the production of crops for resilience to the pressures brought about by climate change.

"This is a game-changer," Baskar Ganapathysubramanian, the director of the institute, told Agri-Pulse as he walked toward a nondescript white shed tucked between crop fields on the Iowa State University campus.

"Scouting is based on the visual," he said. "By using multimodal things, you can actually go beyond the visual and do early detection and early mitigation. That's not only sustainable, because you're going to use less of the chemicals needed, but also amazingly profitable."

Ganapathysubramanian opened the door to reveal a flurry of activity. Directly inside, genetics graduate student Yawei Li held a protractor up to a corn plant in various positions, trying to measure the angles of its leaves.
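Once a plant has been digitized, the protractor measurement described above reduces to vector arithmetic. A minimal sketch, with made-up coordinates:

```python
# Illustrative only: the leaf angle is the angle between a stalk direction
# vector and a leaf direction vector, recovered from digitized coordinates.
import math

def leaf_angle_deg(stalk_vec, leaf_vec):
    """Angle in degrees between two 3D direction vectors."""
    dot = sum(a * b for a, b in zip(stalk_vec, leaf_vec))
    norm = math.hypot(*stalk_vec) * math.hypot(*leaf_vec)
    return math.degrees(math.acos(dot / norm))

# A vertical stalk and a leaf rising diagonally in the x-z plane:
print(round(leaf_angle_deg((0, 0, 1), (1, 0, 1))))  # 45 degrees
```

Measured this way across thousands of digital twins, leaf angle becomes a phenotype that can be correlated with genotype at scale, which is exactly the kind of trait analysis the researchers describe.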

Across the room, Lakshmi Attigala, a senior scientist and lab manager, grabbed a fully headed corn plant from a gray tote and walked it over to the lab's makeshift photography studio, where a sheet of blue cloth hanging from the ceiling served as a backdrop.

She placed the corn plant in a small, rotating green vase ringed by light stands and adjusted its leaves, preparing it for a photo shoot. She gave it a unique number, 21-3N3125-1, which was printed on a piece of paper she attached to the front of it.

As the vase rotated, she used two cameras, one hanging from the ceiling and the other sitting atop a tripod in front of the corn plant, to take shots of the plant.

On the north side of the building, two researchers, senior staff member Zaki Jubery and graduate student Koushik Nagasubramanian, placed eight more corn plants in a ring surrounding a terrestrial laser scanner. The scanner sends out laser pulses to build a point cloud, capturing the exact dimensions of the plants based on the points where the lasers bounce off.
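A point cloud makes dimension measurements straightforward. The toy sketch below recovers a plant's extents from a handful of synthetic (x, y, z) points via an axis-aligned bounding box; real scans contain millions of points and would use dedicated point-cloud libraries.

```python
# Toy version of what the laser scanner's point cloud enables: recovering a
# plant's dimensions from (x, y, z) points. Coordinates are synthetic, in
# meters; real processing also segments the plant from soil and neighbors.
def plant_dimensions(points):
    """Return (width_x, depth_y, height_z) of the axis-aligned bounding box."""
    xs, ys, zs = zip(*points)
    return (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))

cloud = [(0.0, 0.1, 0.0), (0.4, 0.3, 1.8), (0.2, 0.0, 0.9), (0.1, 0.5, 1.2)]
print(plant_dimensions(cloud))  # the z-extent, 1.8, is the plant's height
```

The same bounding-box idea, applied per plant and per day, yields the growth curves that the AI models are later trained on.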


All three of these actions, while happening separately and in different parts of the room, feed data from the 80 corn plants scanned that day to a computer learning program that can study their features to learn what the plants look like. If cameras, lasers and sensors can collect enough data on corn plants, the software should be able to create near-identical models of them when fully developed.

"The idea is that we perfect something from here and then we do that on a higher scale in the field," said Nagasubramanian. "That's a more complicated thing if you have plants in the background and you have changing light intensities and clouds."

The institute, which collaborates with the Genomes to Fields Initiative to phenotype corn hybrid varieties across 162 environments in North America, also monitors a corn field lined with cameras mounted on poles. The solar-powered cameras sit above the corn plants and take photos every 15 minutes to watch each one develop over time.

The resulting data can be fed to AI programs to get a better understanding of how these plants grow and what genetic traits they share.

"Certainly it is going to help us understand, for example with the photography, what is the genetic control of leaf angle. And then that would allow us to develop varieties with different leaf angles more readily," Schnable said.

Schnable said it's too soon for the developing technology to be widely deployed in fields or used for breeding purposes, and that for now, the research funding is limited. But he believes private companies will use AI technology to develop their own products.

"These things do have significant impacts out there in the world," he said.

For more news, go to http://www.Agri-Pulse.com.

See the article here:

New institute aims to unlock the secrets of corn using artificial intelligence - Agri-Pulse

US must not only lead in artificial intelligence, but also in its ethical application | TheHill – The Hill

Posted: at 5:48 am

Artificial intelligence (AI) is sometimes referred to as a herald of the fourth industrial revolution. That revolution is already here. Whenever you say "Hey Siri" or glance at your phone to unlock it, you're using AI. Its current and potential applications are numerous, including medical diagnosis and predictive technologies that enhance user interactions.

As chairwoman of the U.S. House Committee on Science, Space, and Technology, I am particularly interested in the potential for AI to accelerate innovation and discovery across the science and engineering disciplines. Just last year, DeepMind announced that its AI system AlphaFold had solved a protein-folding challenge that had stumped biologists for half a century. It is clear that not only will AI technologies be integral to improving the lives of Americans, but they will also help determine America's standing in the world in the decades to come.

However, the vision of AI's role in humanity's future isn't all rosy. Increasingly autonomous devices and growing amounts of data will exacerbate traditional concerns, such as privacy and cybersecurity. Other potential dangers of AI have also arrived, appearing as patterns of algorithmic bias that often reflect our society's systemic racial and gender-based biases. We have seen discriminatory outcomes in AI systems that predict credit scores, health care risks, and recruitment potential. These are domains where we must mitigate the risk of bias both in our decision-making and in the tools we use to augment that decision-making.

Technological progress does not have to come at the expense of safety, security, fairness, or transparency. In fact, embedding our values into technological development is central to our economic competitiveness and national security. Our federal government has the responsibility to work with private industry to ensure that we are able to maximize the benefits of AI technology for society while simultaneously managing its emerging risks.

To this end, the Science Committee has engaged in efforts to promote trustworthy AI. Last year, one of our signature achievements was passing the bipartisan National Artificial Intelligence Initiative Act, which directs the Department of Commerce's National Institute of Standards and Technology (NIST) to develop a process for managing AI risks.

NIST may not be the most well-known government institution, but it has long conducted critical standard-setting and measurement research used by federal agencies and private industry. Over the past year, NIST has held a series of workshops examining topics like AI trustworthiness, bias, explainability, and evaluation. These workshops are aimed at helping industry professionals understand how to detect, catalogue, and ultimately prevent the harmful outcomes that erode public trust in AI technology.

Most recently, NIST has been working to construct a voluntary Risk Management Framework that is intended to support the development and deployment of safe and trustworthy AI. This framework will be important for informing the work of both public and private sector AI researchers as they pursue their game-changing research. NIST is soliciting public comments until Sept. 15, 2021, and will develop the framework in several iterations, allowing for continued input. Interested stakeholders should submit comments and/or participate in the ongoing processes at NIST.

We know that AI has the potential to benefit society and make the world a better place. In order for the U.S. to be a true global leader in this technology, we have to ensure that the AI we create does just that.

Eddie Bernice Johnson represents the 30th District of Texas and is chairwoman of the House Committee on Science, Space, and Technology.

View post:

US must not only lead in artificial intelligence, but also in its ethical application | TheHill - The Hill

Industry Voices: Not all automation is created equally for clinical documentation improvement - FierceHealthcare

Posted: at 5:48 am

Healthcare system survival pivots on many metrics, but the ability to generate revenue and to evidence high quality of care are two of the most essential.

At the center of both metrics is the clinical documentation process, where an accurate representation of every patient's clinical experience while in a provider's care must be recorded.

As simple as it may sound, achieving that accurate reflection of diagnoses, interventions and the clinical picture is anything but simple. Medicine is as much science as it is art, and complex definitions, levels of specificity and complex medical terminology mean that most hospitals struggle to document everything properly, leading to significant lost revenues and under-reporting on quality metrics.

Health systems have answered this challenge by standing up clinical documentation integrity (CDI) programs, staffed with clinicians. As more healthcare revenue is tied to achieving specific quality metrics, the role of CDI has become even more critical.

However, ensuring the integrity and completeness of documentation would require health systems to staff CDI teams with an enormous number of highly trained clinicians to review and correct documentation on every record, every day. The cost and complexity of such an operation would be prohibitive, and no health system has the resources to employ that many people, or even to find a supply of that many highly specialized staff.

As a result, many health systems are turning to software to support CDI with technology that scales clinical staff abilities and provides intelligent automation. Unfortunately, the challenge that many have run into is how to identify the right technology for their operation.

All the work CDI specialists perform requires clinical knowledge, the sort of knowledge that is gained only after decades of academic study and real work experience. Automating that work means the technology must mirror the same level of clinical thinking that any one of these specialists employs every day.

The challenge is immense. Emulating clinical thinking with software is among the loftiest goals of artificial intelligence in healthcare and requires the most sophisticated, cutting-edge technologies available, not to mention years of training. Even with the most advanced technology, AI has sometimes failed to impress the critics, as we've seen multiple reports call out the stumbles of more ambitious (but similarly conceived) efforts like IBM Watson.

But, while there are still areas for improvement, the truth is that AI still is making a significant impact across the healthcare landscapeand especially within CDI, where success is well documented.

While CDI is an excellent and proven use case for AI in healthcare, providers should understand that not all AI is the same. In fact, many legacy systems that deploy the wrong type of AI to CDI are unable to see all the gains possible with the correct deployment.

The key to leveraging AI in CDI is to use technology that can truly emulate the way clinicians think. It must read, digest, understand, and make statistical predictions on the entirety of the clinical record, much as a physician weighs all the evidence to assess, diagnose, and appropriately care for a patient.

That's where machine learning holds the key. Machine learning is, at its heart, a pattern-recognition engine: it digests a plethora of individual pieces of data, recognizes patterns, and then uses those patterns to make statistical predictions. Properly applied to clinical information, it is a very powerful technology. Fed over time with millions of patient encounters, machine learning begins to emulate the way clinicians think, automating numerous tasks or challenges that otherwise would only be solvable by a human. While it does not replace clinicians, it does reduce the burden on clinical staff, freeing more time for patient care.
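
The pattern-recognition loop described above (digest data, recognize a pattern, make a prediction) can be sketched in miniature. The toy nearest-centroid classifier below is a hypothetical stand-in, not any vendor's actual engine, and every feature and label in it is invented:

```python
# Toy pattern-recognition sketch: a nearest-centroid classifier "digests"
# features from past encounters, learns one average pattern per class, and
# then scores new records by proximity. All feature values are invented.
import math

def centroid(rows):
    """Average feature vector of a group of encounters."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def train(encounters, labels):
    """Learn one centroid per class (0 = clean record, 1 = likely gap)."""
    return {c: centroid([e for e, y in zip(encounters, labels) if y == c])
            for c in set(labels)}

def predict(model, encounter):
    """Assign the class whose learned pattern is closest."""
    return min(model, key=lambda c: math.dist(model[c], encounter))

# Features per encounter: [abnormal-lab score, medication count, note length]
past = [[0.9, 8, 120], [0.8, 7, 100], [0.1, 2, 400], [0.2, 1, 350]]
gaps = [1, 1, 0, 0]
model = train(past, gaps)
print(predict(model, [0.85, 6, 110]))  # -> 1 (resembles the "gap" pattern)
```

Scoring every record this way is also what makes prioritization possible: records whose predicted probability of a documentation gap is highest go to the top of a specialist's queue.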

Additionally, by automatically reviewing every patient record in real time every day, cases can be prioritized so a CDI specialist knows what to look at, rather than wasting time on records with no documentation irregularities. This type of machine learning interprets the clinical evidence, compares it to the existing documentation, and automatically highlights and prioritizes the cases with discrepancies.

Many legacy applications attempt to use another AI technology, natural language processing (NLP), to automate complex clinical tasks. While NLP has some useful applications for tasks like clinical narration, where its dictionary-like lookup function can suggest a better or more accurate word, NLP is only a partial solution for CDI.

For example, NLP can translate the clinician's narrative documentation into text understood by a computer. However, unless it is paired with a machine learning solution that simultaneously reads and emulates clinical decision-making (thus enabling a comparison between what was written and what the clinical evidence says), it is an inadequate solution to the core challenges in CDI.
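
That pairing can be illustrated with a deliberately crude sketch: simple keyword matching stands in for real NLP, and a precomputed list of evidence-supported conditions stands in for the machine learning side. Every condition and note below is invented:

```python
import re

# Toy sketch of the NLP + ML pairing. The "NLP" half extracts conditions
# mentioned in the narrative note; the "evidence" half lists conditions the
# clinical data supports (in practice, the output of a trained model).
# A discrepancy is anything the evidence supports but the note omits.
KNOWN_CONDITIONS = {"sepsis", "pneumonia", "anemia", "hyponatremia"}

def documented_conditions(note_text):
    """Crude keyword 'NLP': which known conditions does the note mention?"""
    words = set(re.findall(r"[a-z]+", note_text.lower()))
    return KNOWN_CONDITIONS & words

def discrepancies(note_text, evidence_supported):
    """Evidence-supported conditions missing from the documentation."""
    return evidence_supported - documented_conditions(note_text)

note = "Patient admitted with pneumonia; treated with IV antibiotics."
evidence = {"pneumonia", "sepsis"}   # e.g. flagged by a model from labs/vitals
print(discrepancies(note, evidence))  # -> {'sepsis'}
```

The point of the sketch is the division of labor: text extraction alone cannot say what *should* have been documented; that judgment has to come from a model of the clinical evidence.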

Additionally, rules-based solutions that rely on fixed rules or markers to automate clinical tasks fail entirely to emulate the way clinicians think. As a result, they cannot capture the many permutations in how clinical conditions present.

Robotic process automation (RPA) is another healthcare buzzword, often cited as a tool for handling basic, repeatable tasks. However, within the mid-revenue cycle (and thus CDI), nearly all tasks have a clinical element and require clinical understanding to complete. That means RPA is, by definition, not suited to more complex tasks that require higher-level thinking.

Instead, intelligent process automation (IPA) is the right solution: IPA applies machine learning to RPA to automate complex tasks that require human judgment (much like the work of CDI). Thus, to apply IPA in the revenue cycle, machine learning is critical; it is the only technology available today that specifically emulates clinical thinking and judgment.

As technology gets better at emulating a clinician's mind, increasingly powerful AI engines will soon be able to capture documentation and coding instantaneously. By accurately automating the documentation of clinical conditions directly into EMRs and identifying the final code set, the process will become even more efficient, with fewer translation errors.

Ultimately, that means smaller teams will be able to support the entire documentation process, which reduces costs for providers and stress on clinicians.

There is no doubt that managing a health system has become increasingly complex, and that's especially true for CDI teams, which must capture data accurately and efficiently. However, AI has become a critical tool that is truly making an impact in the mid-revenue cycle, and there is much more innovation to come in the next few years. But while we wait for that larger revolution, it's important that health systems implement a stable and efficient CDI program now, powered by the right technology.

William Chan is the co-founder and CEO of Iodine Software.

Excerpt from:

Industry VoicesNot all automation is created equally for clinical documentation improvement - FierceHealthcare

Artificial Intelligence: A New Portal to Promote Global Cooperation Launched with 8 International Organisations - Council of Europe

Posted: at 5:48 am

On 14 September 2021, eight international organisations joined forces to launch a new portal promoting global co-operation on artificial intelligence (AI). The portal is a one-stop shop for data, research findings and good practices in AI policy.

The objective of the portal is to help policymakers and the wider public navigate the international AI governance landscape. It provides access to the necessary tools and information, such as projects, research and reports to promote trustworthy and responsible AI that is aligned with human rights at global, national and local level.

Key partners in this joint effort include the Council of Europe, the European Commission, the European Union Agency for Fundamental Rights, the Inter-American Development Bank, the Organisation for Economic Co-operation and Development (OECD), the United Nations (UN), the United Nations Educational, Scientific and Cultural Organization (UNESCO), and the World Bank Group.

Access the website: https://globalpolicy.ai

Go here to see the original:

Artificial Intelligence: A New Portal to Promote Global Cooperation Launched with 8 International Organisations - Council of Europe

Yan Cui and Team Are Innovating Artificial Intelligence Approach to Address Biomedical Data Inequality – UTHSC News

Posted: at 5:48 am

Yan Cui, PhD, associate professor in the UTHSC Department of Genetics, Genomics, and Informatics, recently received a $1.7 million grant from the National Cancer Institute for a study titled "Algorithm-based prevention and reduction of cancer health disparity arising from data inequality."

Dr. Cui's project aims to prevent and reduce health disparities caused by ethnically biased data in cancer-related genomic and clinical omics studies. His objective is to establish a new machine learning paradigm for use with multiethnic clinical omics data.

For nearly 20 years, scientists have been using genome-wide association studies, known as GWAS, and clinical omics studies to detect the molecular basis of diseases. But statistics show that over 80% of the data used in GWAS come from people of predominantly European descent.

As artificial intelligence (AI) is increasingly applied to biomedical research and clinical decisions, this European-centric skew is set to exacerbate long-standing disparities in health. With less than 20% of genomic samples coming from people of non-European descent, underrepresented populations are at a severe disadvantage in data-driven, algorithm-based biomedical research and health care.

"Biomedical data disadvantage has become a significant health risk for the vast majority of the world's population," Dr. Cui said. "AI-powered precision medicine is set to be less precise for data-disadvantaged populations, including all the ethnic minority groups in the U.S. We are committed to addressing the health disparities arising from data inequality."

The project is innovative in the type of machine learning technique it will use. Multiethnic machine learning normally relies on mixture learning or independent learning schemes; Dr. Cui's project will instead use a transfer learning process.

Transfer learning works much the same way as human learning. When faced with a new task, instead of starting the learning process from scratch, the algorithm leverages patterns learned from solving a related task. This approach greatly reduces the resources and amount of data required for developing new models.
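
In miniature, the scheme looks like this: fit a model on a large source cohort, then continue training from those learned weights on the small target cohort instead of starting from zero. The toy one-feature logistic regression below is purely illustrative, with invented data; Dr. Cui's actual methods operate on large-scale omics data and are far more sophisticated:

```python
# Toy transfer learning: pretrain a 1-feature logistic regression on a large
# "source" cohort, then fine-tune the learned weights on a small "target"
# cohort instead of starting from zero. All numbers are invented.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit(xs, ys, w=0.0, b=0.0, lr=0.5, epochs=200):
    """Stochastic gradient descent on logistic loss, starting from (w, b)."""
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

# Large source cohort: disease risk rises with the feature value.
src_x = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]
src_y = [0,   0,   0,   1,   1,   1]
w0, b0 = fit(src_x, src_y)                      # pretraining

# Small target cohort: too few samples to train well from scratch, so we
# start from the pretrained weights and fine-tune briefly.
tgt_x = [0.25, 0.85]
tgt_y = [0, 1]
w1, b1 = fit(tgt_x, tgt_y, w=w0, b=b0, epochs=20)

print(sigmoid(w1 * 0.9 + b1) > 0.5)  # high-risk case still classified as 1
```

The design choice this illustrates is the one the article highlights: the fine-tuned model needs far less target-cohort data than a model trained from scratch, because the pretrained weights already encode the shared pattern.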

Using large-scale cancer clinical omics data and genotype-phenotype data, Dr. Cuis lab will examine how and to what extent transfer learning improves machine learning on data-disadvantaged cohorts. In tandem with this, the team aims to create an open resource system for unbiased multiethnic machine learning to prevent or reduce new health disparities.

Neil Hayes, MD, MPH, assistant dean for Cancer Research in the UTHSC College of Medicine and director of the UTHSC Center for Cancer Research, and Athena Starlard-Davenport, PhD, associate professor in the Department of Genetics, Genomics, and Informatics, are co-investigators on the grant. Yan Gao, PhD, a postdoctoral scholar working with Dr. Cui, is the machine learning expert on the team. A pilot study for this project, funded by the UT Center for Integrative and Translational Genomics and the UTHSC Office of Research, has been published in Nature Communications.

Follow this link:

Yan Cui and Team Are Innovating Artificial Intelligence Approach to Address Biomedical Data Inequality - UTHSC News
