
Category Archives: Artificial Intelligence

Artificial Intelligence Calculates Anti-Aging Properties Of Compounds – Bio-IT World

Posted: September 1, 2021 at 12:28 am

By Deborah Borfitz

August 31, 2021 | Artificial intelligence (AI) has been paired with one of the simplest of organisms, the nematode Caenorhabditis elegans, to enlighten the scientific community about the physical and chemical properties of drug compounds with anti-aging effects, according to Brendan Howlin, reader in computational chemistry at the University of Surrey (U.K.). The predictive power of the methodology has just been demonstrated using an established database of small molecules found to extend life in model organisms.

The 1,738 compounds in the DrugAge database were broadly separated into flavonoids (e.g., from fruits and vegetables), fatty acids (e.g., omega-3 fatty acids), and those with a carbon-oxygen bond (e.g., alcohol), all heavily tied to nutrition and lifestyle choices. Pharmaceuticals could be developed based on that nutraceutical knowledge, including the importance of the number of nitrogen atoms, says Howlin.

Unlike prior efforts using AI to identify compounds that slow the aging process, Howlin used machine learning to calculate the quantitative structure-activity relationship (QSAR) of molecules. The model was trained on 80% of the DrugAge compounds to learn which chemical properties were important; the remaining 20% served as a held-out test set to confirm it could identify compounds with those properties, he explains.
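A QSAR-style workflow of this kind can be sketched in a few lines. Everything below is invented for illustration: the descriptors (nitrogen-atom count, carbon-oxygen bond count, scaled molecular weight), the toy data, and the nearest-centroid classifier standing in for the study's actual model. The point is only the shape of the pipeline: featurize compounds, split train/test, fit, evaluate.

```python
# Toy QSAR pipeline: featurize, 80/20 split, fit, measure test accuracy.
# Descriptors, data, and classifier are all hypothetical stand-ins.
import random

def centroid(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def fit(X, y):
    # one centroid per class label (1 = extends lifespan, 0 = does not)
    return {label: centroid([x for x, t in zip(X, y) if t == label])
            for label in set(y)}

def predict(model, x):
    def sq_dist(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(model, key=lambda label: sq_dist(model[label]))

# Invented descriptors: [n_nitrogen_atoms, n_CO_bonds, mol_weight / 100]
data = [([2, 1, 3.0], 1), ([3, 2, 3.5], 1), ([2, 2, 2.9], 1),
        ([0, 0, 1.2], 0), ([0, 1, 1.0], 0), ([1, 0, 1.4], 0),
        ([3, 1, 3.2], 1), ([0, 0, 0.9], 0), ([2, 2, 3.1], 1),
        ([1, 0, 1.1], 0)]
random.seed(0)
random.shuffle(data)
split = int(0.8 * len(data))          # 80% train, 20% held-out test
train, test = data[:split], data[split:]
model = fit([x for x, _ in train], [y for _, y in train])
accuracy = sum(predict(model, x) == y for x, y in test) / len(test)
```

On this deliberately well-separated toy data the held-out accuracy is perfect; real QSAR datasets are far noisier, which is why the held-out evaluation step matters.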

As described in a recently published article in Scientific Reports (DOI: 10.1038/s41598-021-93070-6), the study builds on the work of another researcher (Diogo Barardo, University of Liverpool) who a few years ago built a random forest model to predict whether a compound would increase the lifespan of C. elegans based on data in the DrugAge database. His top-30 list of predictive molecular features referred to atom and bond counts as well as topological and partial charge properties of the substances.

The nematode is frequently used in age-related research because it has many of the organ systems present in more complex animals and has a short lifespan of 20 days, says Howlin. That makes it possible to conduct experiments that are not practical in either mice or humans.

Sideline Project

AI is now routinely employed by pharmaceutical companies in lieu of having hundreds of organic chemists testing every possible variation of every possible compound to see what works, says Howlin. In fact, AI is adding speed to virtually every stage of the drug discovery process by reducing repetitive, time-consuming tasks.

That breadth is represented by research underway at the University of Surrey, he continues, where AI-savvy scientists are helping to identify hits and leads, modify compounds to optimize their activity, predict how drugs are metabolized and affect the liver, and train the next generation of students in practical, real-world applications of machine learning algorithms.

Howlin has been actively involved in anti-aging drug design for many years now. He is one of the inventors of bi- and tri-aromatic compounds as NADPH oxidase 2 (Nox2) inhibitors, which are thought to have potential in treating a wide range of common, often age-related, diseases as well as aging itself.

NADPH oxidase is an enzyme made by the body to defend against bacterial infections, says Howlin. But if it doesn't turn off like it should, it produces oxidative stress that can damage the blood vessels and trigger diseases of aging.

The AI-based prediction model was a sideline project to see if the research team could provide industry with some drug discovery clues. Employing the latest version of the DrugAge database, it expands the number of identified molecules with anti-aging properties to 395 from the 229 previously identified by Barardo, while the volume of compounds that did not increase lifespan held steady at 1,163, Howlin reports.

Promising Leads

The study describes several compounds, including the flavonoids rutin and hesperidin (the predominant phenolic compound in orange extracts) and the organooxygen compounds lactose and sucrose, which were previously found to be longevity-promoting in experiments on C. elegans. Future work will need to consider dosage, since it can impact whether a substance is beneficial or detrimental, he notes.

In addition to rutin (abundant in many plants), further in vivo testing may be warranted for gamolenic acid (plentiful in evening primrose oil and black currant oil), lactulose (shown to effectively treat chronic constipation in elderly patients), and rifapentine (an antibiotic approved for the treatment of tuberculosis) based on the predictive exercise.

Moving forward, the machine learning model could be applied to any database to calculate the properties of different compounds, Howlin says. Many such databases are the property of pharmaceutical companies and could be tapped as a first step to improving human health by helping people age better.

University of Surrey researchers could also be supplementing their own aging research by finding new active compounds they can test alongside their experimental Nox2 inhibitors, he adds.

View original post here:

Artificial Intelligence Calculates Anti-Aging Properties Of Compounds - Bio-IT World

Posted in Artificial Intelligence | Comments Off on Artificial Intelligence Calculates Anti-Aging Properties Of Compounds – Bio-IT World

First Study on Artificial Intelligence-Based Chatbot for Anxiety & Depression in Spanish-Speaking University Students – Newswise

Posted: at 12:28 am

Newswise Palo Alto, CA -- A study conducted by researchers at Palo Alto University has shown artificial intelligence-based chatbots to be effective as a psychological intervention in Spanish-speaking university students. The study took place in Argentina and showed promising evidence for the usability and acceptability of the mental health chatbot, Tess. The findings were published by JMIR Publications, which is dedicated to advancing digital health and open science.

The study's objective was to evaluate the viability, acceptability, and potential impact of using Tess, a chatbot, for examining symptoms of depression and anxiety in Spanish-speaking university students. Chatbots are a novel delivery format that can expand mental health service offerings and facilitate early access for those in need. This represents an opportunity for addressing delays associated with access to treatment for depression and anxiety.

"While research conducted in the United States has reported decreased depressive and anxiety symptoms in college students, no studies have been performed on chatbots used for addressing mental health disorders in Spanish-speaking populations," said Eduardo Bunge, PhD, Director of the Children and Adolescent Psychotherapy and Technology (CAPT) Lab at Palo Alto University.

The study assesses the viability and acceptability of psychological interventions delivered through Tess to college students in Argentina for the most prevalent disorders in Argentina: anxiety (16.4%) and mood (12.3%) disorders. The average age for the onset of these conditions is 20 years. The Pan American Health Organization (PAHO) and the Argentinian Ministry of Health have highlighted the importance of optimizing health care services for individuals who are not receiving any form of psychological care.

Results

The initial sample consisted of 181 Argentinian college students aged 18 to 33. On average, 472 messages were exchanged, with 116 of the messages sent by the users in response to Tess. A higher number of messages exchanged with Tess was associated with positive feedback. No significant differences between the experimental and control groups were found from the baseline to week 8 for depressive and anxiety symptoms. However, significant intragroup differences demonstrated that the experimental group showed a significant decrease in anxiety symptoms; no such differences were observed for the control group. Further, no significant intragroup differences were found for depressive symptoms.

Conclusions

The students spent a considerable amount of time exchanging messages with Tess and positive feedback was associated with a higher number of messages exchanged. The initial results show promising evidence for the usability and acceptability of Tess in the Argentinian population. Research on chatbots is still in its initial stages and further research is needed.

About Palo Alto University

Palo Alto University (PAU), a private, non-profit university located in the heart of Northern California's Silicon Valley, is dedicated to addressing pressing and emerging issues in the fields of psychology and counseling that meet the needs of today's diverse society. PAU offers undergraduate and graduate programs that are led by faculty who make significant contributions in their field. Online, hybrid and residential program options are available. PAU was founded in 1975 as the Pacific Graduate School of Psychology and re-incorporated as Palo Alto University in August 2009. PAU is accredited by the Western Association of Schools and Colleges (WASC). PAU's doctoral programs are accredited by the American Psychological Association (APA) and its master's in counseling programs by the Council for Accreditation of Counseling & Related Educational Programs (CACREP).

Read more:

First Study on Artificial Intelligence-Based Chatbot for Anxiety & Depression in Spanish-Speaking University Students - Newswise


Which companies are leading the way for artificial intelligence in the technology sector? – Verdict

Posted: at 12:28 am

We aggregated thousands of records from GlobalData's proprietary jobs, deals, patents and company filings databases to identify the top companies in the area of artificial intelligence in the technology sector.

International Business Machines Corp and Microsoft Corp are leading the way for artificial intelligence investment among top technology companies, according to our analysis of a range of GlobalData data.

Artificial intelligence has become one of the key themes in the technology sector of late, with companies hiring for an increasing number of roles, making more deals, registering more patents and mentioning it more often in company filings.

These themes, of which artificial intelligence is one, are best thought of as any issue that keeps a CEO awake at night, and by tracking and combining them, it becomes possible to ascertain which companies are leading the way on specific issues and which are dragging their heels.

According to GlobalData analysis, International Business Machines Corp is one of the artificial intelligence leaders in a list of high-revenue companies in the technology industry, having advertised for 8,040 positions in artificial intelligence, made seven deals related to the field, filed 461 patents and mentioned artificial intelligence 10 times in company filings between January 2020 and June 2021.

Our analysis classified 15 companies as Most Valuable Players or MVPs due to their high number of new jobs, deals, patents and company filings mentions in the field of artificial intelligence. An additional four companies are classified as Market Leaders and none as Average Players. Two more companies are classified as Late Movers due to their relatively lower levels of jobs, deals, patents and company filings in artificial intelligence.

For the purpose of this analysis, we've ranked top companies in the technology sector on each of the four metrics relating to artificial intelligence: jobs, deals, patents and company filings. The best-performing companies, the ones ranked at the top across all or most metrics, were categorised as MVPs, while the worst performers, companies ranked at the bottom of most indicators, were classified as Late Movers.
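The scheme described above can be made concrete with a short sketch: rank companies on each metric separately, sum the ranks, and bucket the totals into tiers. The figures below mix numbers quoted in the article with invented ones (e.g., Microsoft's patent count, "LateCo"), and the tier thresholds are arbitrary illustration; GlobalData's actual scoring is proprietary.

```python
# Rank-sum tiering sketch. company -> (jobs, deals, patents, filings mentions)
# Values are a mix of article figures and invented placeholders.
metrics = {
    "Microsoft": (15092, 19, 300, 12),
    "IBM":       (8040, 7, 461, 10),
    "Dell":      (5323, 3, 120, 5),
    "Tencent":   (2000, 29, 200, 8),
    "LateCo":    (50, 0, 2, 1),
}

def rank_totals(metrics):
    companies = list(metrics)
    total = {c: 0 for c in companies}
    for i in range(4):  # rank on each of the four metrics separately
        ordered = sorted(companies, key=lambda c: metrics[c][i], reverse=True)
        for rank, c in enumerate(ordered, start=1):
            total[c] += rank      # lower summed rank = better overall
    return total

def tier(total_rank, n_companies):
    # thresholds are arbitrary cut-offs for illustration only
    if total_rank <= n_companies * 2:
        return "MVP"
    if total_rank <= n_companies * 3:
        return "Market Leader"
    return "Late Mover"

totals = rank_totals(metrics)
tiers = {c: tier(t, len(metrics)) for c, t in totals.items()}
```

Summing per-metric ranks rather than raw values keeps the four metrics comparable despite their very different scales (thousands of jobs vs. tens of deals).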

Microsoft Corp is spearheading the artificial intelligence hiring race, advertising for 15,092 new jobs between January 2020 and June 2021. The company reached peak hiring in March 2021, when it listed 1,495 new job ads related to artificial intelligence.

International Business Machines Corp followed Microsoft Corp as the second most proactive artificial intelligence employer, advertising for 8,040 new positions. Dell Technologies Inc was third with 5,323 new job listings.

When it comes to deals, Tencent Holdings Ltd leads with 29 new artificial intelligence deals announced from January 2020 to June 2021. The company was followed by Microsoft Corp with 19 deals and Apple Inc with nine.

GlobalData's Financial Deals Database covers hundreds of thousands of M&A contracts, private equity deals, venture finance deals, private placements, IPOs and partnerships, and it serves as an indicator of economic activity within a sector.

One of the most innovative technology companies in recent months was Samsung Electronics Co Ltd, having filed 1,271 patent applications related to artificial intelligence since the beginning of last year. It was followed by Intel Corp with 505 patents and International Business Machines Corp with 461.

GlobalData collects patent filings from 100+ countries and jurisdictions. These patents are then tagged according to the themes they relate to, including artificial intelligence, based on specific keywords and expert input. The patents are also assigned to a company to identify the most innovative players in a particular field.

Finally, artificial intelligence was a commonly mentioned theme in technology company filings. Google, Inc. mentioned artificial intelligence 12 times in its corporate reports between January 2020 and June 2021. Intel Corp filings mentioned it 12 times and Microsoft Corp mentioned it 12 times.

Read the rest here:

Which companies are leading the way for artificial intelligence in the technology sector? - Verdict


Artificial Intelligence approach helps to identify patients with heart failure that respond to beta-blocker treatment – University of Birmingham

Posted: at 12:28 am

Researchers at the University of Birmingham have developed a new way to identify which patients with heart failure will benefit from treatment with beta-blockers.

Heart failure is one of the most common heart conditions, with substantial impact on patient quality of life, and a major driver of hospital admissions and healthcare cost.

The study involved 15,669 patients with heart failure and reduced left ventricular ejection fraction (low function of the heart's main pumping chamber), 12,823 of whom were in normal heart rhythm and 2,837 of whom had atrial fibrillation (AF) - a heart rhythm condition commonly associated with heart failure that leads to worse outcomes.

Published in The Lancet, the study used a series of artificial intelligence (AI) techniques to deeply interrogate data from clinical trials.

The research showed that the AI approach could take account of different underlying health conditions for each patient, as well as the interactions of these conditions to isolate response to beta-blocker therapy. This worked in patients with normal heart rhythm, where doctors would normally expect beta-blockers to reduce the risk of death, as well as in patients with AF where previous work has found a lack of effectiveness. In normal heart rhythm, a cluster of patients was identified with reduced benefit from beta-blockers (combination of older age, less severe symptoms and lower heart rate than average). Conversely in patients with AF, the research found a cluster of patients who had a substantial reduction in death with beta-blockers (from 15% to 9% in younger patients with lower rates of prior heart attack but similar heart function to the average AF patient).

The research was led by the cardAIc group, a multi-disciplinary team of clinical and data scientists at the University of Birmingham and the University Hospitals Birmingham NHS Foundation Trust, aiming to integrate AI techniques to improve the care of cardiovascular patients. The study uses data collated and harmonized by the Beta-blockers in Heart Failure Collaborative Group, a global consortium dedicated to enhancing treatment for patients with heart failure.

First Author Dr Andreas Karwath, Rutherford Research Fellow at the University of Birmingham and member of the cardAIc group, added: "We hope these important research findings will be used to shape healthcare policy and improve treatment and outcomes for patients with heart failure."

Corresponding author Georgios Gkoutos, Professor of Clinical Bioinformatics at the University of Birmingham, Associate Director of Health Data Research Midlands and co-lead for the cardAIc group, said: "Although tested in our research in trials of beta-blockers, these novel AI approaches have clear potential across the spectrum of therapies in heart failure, and across other cardiovascular and non-cardiovascular conditions."

Corresponding author Dipak Kotecha, Professor & Consultant in Cardiology at the University of Birmingham, international lead for the Beta-blockers in Heart Failure Collaborative Group and co-lead for the cardAIc group, added: "Development of these new AI approaches is vital to improving the care we can give to our patients; in the future this could lead to personalised treatment for each individual patient, taking account of their particular health circumstances to improve their well-being."

The research used individual patient data from nine landmark trials in heart failure that randomly assigned patients to either beta-blockers or a placebo. The average age of study participants was 65 years, and 24% were women. The AI-based approach combined neural network-based variational autoencoders and hierarchical clustering within an objective framework, and with detailed assessment of robustness and validation across all the trials.
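The clustering half of that pipeline can be sketched in miniature. The 2-D "latent" patient coordinates and the average-linkage routine below are invented for illustration only; the study's actual method, a neural variational autoencoder feeding hierarchical clustering with robustness checks across nine trials, is far richer.

```python
# Toy average-linkage agglomerative clustering on invented 2-D "latent"
# patient features (in the study these would come from an autoencoder).
def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def agglomerate(points, k):
    # start with every point in its own cluster, repeatedly merge the
    # closest pair of clusters until only k clusters remain
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # average-linkage distance between clusters i and j
                d = sum(dist(a, b) for a in clusters[i] for b in clusters[j])
                d /= len(clusters[i]) * len(clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)
    return clusters

# Two invented groups of patients in a 2-D latent space
latent = [(0.1, 0.2), (0.2, 0.1), (0.0, 0.3),   # group A
          (2.1, 2.0), (2.2, 2.3), (1.9, 2.1)]   # group B
clusters = agglomerate(latent, k=2)
sizes = sorted(len(c) for c in clusters)
```

In the study, clusters like these (computed over many more dimensions) are then compared on treatment response, which is how subgroups with reduced or enhanced benefit from beta-blockers were isolated.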

The research was presented this week at the ESC Congress 2021, hosted by the European Society of Cardiology - a non-profit knowledge-based professional association that facilitates the improvement and harmonisation of standards of diagnosis and treatment of cardiovascular diseases.

Notes to Editors

Link:

Artificial Intelligence approach helps to identify patients with heart failure that respond to beta-blocker treatment - University of Birmingham


Industry Voices: Why the COVID-19 pandemic was a watershed moment for machine learning – FierceHealthcare

Posted: at 12:28 am

Times of crisis spark innovation and creativity, as evidenced in the way organizations have come together to innovate for the greater good during the COVID-19 pandemic.

Liquor distilleries started producing hand sanitizer, 3D printing companies made face shields and nasal swabs to meet massive demands, and auto companies shifted gears to make ventilators.

Machine learning (ML), computer systems that learn and adapt autonomously by using algorithms and statistical models to analyze and draw inferences from patterns in data to inform and automate processes, has also played an important role, supporting practically every aspect of healthcare. Amazon Web Services has supported customers as they enable remote patient care, develop predictive surge planning to help manage inpatient/ICU bed capacity, and tackle the unprecedented feat of developing a messenger ribonucleic acid (mRNA)-based COVID-19 vaccine in under a year.

We now have the opportunity to build on our lessons from the past year to apply ML to help address several underlying problems that plague the healthcare and life sciences communities.

Telehealth was on the rise before COVID-19, but it revealed its true potential during the pandemic. Telehealth is often viewed simply as patients and providers interacting online via video platforms but has proven capable of doing much more. Applying ML to telehealth provides a unique opportunity to innovate, scale and offer more personalized experiences for patients and ensure they have access to the resources and care they need, no matter where they're located.

ML-based telehealth tools such as patient service chatbots, call center interactions to better triage and direct patients to the information and care they require, and online self-service prescreenings are helping optimize patient experiences and streamline provider assessments and diagnostics.

RELATED:Global investment in telehealth, artificial intelligence hits a new high in Q1 2021

For example, GovChat, South Africa's largest citizen engagement platform, launched a COVID-19 chatbot in less than two weeks using an artificial intelligence (AI) service for building conversational interfaces into any application using voice and text. The chatbot provides health advice and recommendations on whether to get a test for COVID-19, information on the nearest COVID-19 testing facility, the ability to receive test results, and the option for citizens to report COVID-19 symptoms for themselves, their family members or other household members.

In addition, early in the COVID-19 crisis, New York City-based MetroPlusHealth identified approximately 85,000 at-risk individuals (e.g., comorbid heart or lung disease, or immunocompromised) who would require additional support services while sheltering in place. In order to engage and address the needs of this high-risk population, MetroPlusHealth developed ML-enabled solutions including an SMS-based chatbot that guides people through self-screening and registration processes, SMS notification campaigns to provide alerts and updated pandemic information, and a community-based organizations referral platform, called Now Pow, to connect each individual with the right resource to ensure their specific needs were met.

By providing an easy way for patients to access the care, recommendations, and support they need, ML has given providers the ability to innovate and scale their telehealth platforms to support diverse and continuously changing community needs. Agile, scalable, and accessible telehealth continues to be important as providers look for ways to reach and engage patients in hard-to-reach or rural areas and those with mobility issues. Organizations and policymakers globally need to make telehealth and easy access to care a priority now and going forward in order to close critical gaps in care.

Beyond the unprecedented shifts in the approach to engaging, supporting and treating patients, COVID-19 has dictated clear direction for the future of patient care: precision medicine.

Guidelines for patient care planning have shifted from statistically significant outcomes gathered from a general population to outcomes based on the individual. This gives clinicians the ability to understand what type of patient is most prone to have a disease, not just what sort of disease a specific patient has. Being able to predict the probability of contracting a disease far in advance of its onset is important to determining and initiating preventative, intervening, and corrective measures that can be tailored to each individual's characteristics.

RELATED:What's on the horizon for healthcare beyond COVID-19? Cerner, Epic and Meditech executives share their takes

One of the best examples of how ML is enabling precision medicine is biotech company Moderna's ability to accelerate every step of the process in developing an mRNA vaccine for COVID-19. Moderna began work on its vaccine the moment the novel coronavirus's genetic sequence was published. Within days, the company had finalized the sequence for its mRNA vaccine in partnership with the National Institutes of Health.

Moderna was able to begin manufacturing the first clinical-grade batch of the vaccine within two months of completing the sequencing, a process that historically has taken up to 10 years.

Personalized health isn't only about treating disease; it's about providing access to resources and information specific to a patient's needs. ML is playing a key role in curating content that can help to educate and support patients, caregivers and their families.

Breastcancer.org allows individuals with breast cancer to upload their pathology report to a private and secure personal account. The organization uses ML-based natural language processing to analyze and understand the report and create personalized information for the patient based on their specific pathology.

RELATED:Healthcare AI investment will shift to these 5 areas in the next 2 years: survey

For the last decade, organizations have focused on digitizing healthcare. Today, making sense of the data being captured will provide the biggest opportunity to transform care. Successful transformation will depend on enabling data to flow where it needs to be at the right time while ensuring that all data exchange is secure.

Interoperability is by far one of the most important topics in this discussion. Today, most healthcare data is stored in disparate formats (e.g., medical histories, physician notes and medical imaging reports), which makes extracting information challenging. ML models trained to support healthcare and life sciences organizations help solve this problem by automatically normalizing, indexing, structuring and analyzing data.
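A minimal sketch of the normalize-and-index step described above, with invented field names and records; real systems map many more source formats and typically use trained models rather than hand-written rules like these.

```python
# Hypothetical normalize-and-index step: records arrive in different
# shapes (notes, imaging reports); map each to a common schema, then
# build a per-patient index so all of a patient's data is in one place.
from collections import defaultdict

def normalize(record):
    # field names here are invented for illustration
    return {
        "patient_id": record.get("pid") or record.get("patient_id"),
        "kind": record.get("kind", "note"),
        "text": record.get("note") or record.get("report_text", ""),
    }

raw = [
    {"pid": "p1", "note": "shortness of breath"},
    {"patient_id": "p1", "kind": "imaging", "report_text": "chest x-ray clear"},
    {"pid": "p2", "note": "routine checkup"},
]

index = defaultdict(list)
for r in map(normalize, raw):
    index[r["patient_id"]].append(r)
```

Once records share a schema and an index, the downstream tasks the article mentions, comparing a patient against the population or surfacing relationships in the data, become straightforward queries rather than format-wrangling exercises.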

ML has the potential to bring data together in a way that creates a more complete view of a patient's medical history, making it easier for providers to understand relationships in the data and compare specific data to the rest of the population. Better data management and analysis leads to better insights, which lead to smarter decisions. The net result is increased operational efficiency for improved care delivery and management, and most importantly, improved patient experiences and health outcomes.

Looking ahead, imagine a time when pernicious medical conditions like cancer and diabetes can be treated with tailored medicines and care plans enabled by AI and ML. The pandemic was a turning point for how ML can be applied to tackle some of the toughest challenges in the healthcare industry, though we've only just scratched the surface of what it can accomplish.

Taha Kass-Hout is the director of machine learning for Amazon Web Services.

Follow this link:

Industry Voices: Why the COVID-19 pandemic was a watershed moment for machine learning - FierceHealthcare


The Need of A Real-World Artificial Intelligence in The Pandemic Era – BBN Times

Posted: at 12:28 am

The Covid-19 pandemic has accelerated the development of artificial intelligence across the globe.

Organizations are using artificial intelligence to increase the productivity of remote workers, enhance the virtual shopping experience, drive the digital transformation process and speed up the development of important drugs to end this ongoing pandemic.

Real artificial intelligence is creating value by making humans more efficient, not redundant.

There are several levels of knowledge, research, education, theory, practice, and technology:

Specialization: Narrow AI, Specialists, Scientists, Learned Ignoramus, which divides, specializes, and thinks in special categories.

Disciplinarity: Analytical science and traditionally fragmented disciplines.

Interdisciplinarity: It integrates information, data, techniques, tools, concepts, and/or theories from within two or more disciplines.

Interdisciplinarity is about the interactions between specialised fields and cooperation among special disciplines to solve a specific problem. It concerns the transfer of methods and concepts from one discipline to another, allowing research to spill over disciplinary boundaries, still staying within the framework of disciplinary research.

Transdisciplinarity: Synthetic science and technology and society, the ideas of a unified science and technology and human society, universal knowledge, synthesis and the integration of all knowledge, total convergence of knowledge, technology and people; Trans-AI = Narrow AI, ML, DL + Symbolic AI + Human Intelligence.

Transdisciplinarity is radically distinct from interdisciplinarity, multidisciplinarity and mono-disciplinarity.

Transdisciplinarity analyzes, synthesizes and harmonizes links between disciplines into a coordinated and coherent whole, a global system where all interdisciplinary boundaries dissolve.

It is about addressing the world's most pressing issues and seeing the world in a systemic, consistent, and holistic way at three levels:

(1) theoretical, (2) phenomenological, and (3) experimental (which is based on existing data in a diversity of fields, such as experimental science and technology, business, education, art, and literature).

Transdisciplinarity is a way of being radically distinct from interdisciplinarity, as well as multidisciplinarity and mono-disciplinarity.

Transdisciplinarity integrates the natural, social, and engineering sciences in a unifying context, a whole that is greater than the sum of its parts and transcends their traditional boundaries.

Transdisciplinarity connotes a research strategy that crosses many disciplinary boundaries to create a holistic approach.

Transdisciplinary research integrates information, data, concepts, theories, techniques, tools, technologies, people, organizations, policies, and environments, as all sides of the real-world problems.

Transdisciplinarity takes this integration of disciplines on the highest level. It is a holistic approach, placing these interactions in an integral system. It thus builds a total network of individual disciplines, with a view to understand the world in terms of integrity and unity and discovery.

Monodisciplinarity: It involves a single academic discipline. It refers to a single discipline or body of specialized knowledge.

Multidisciplinarity: It draws on knowledge from different disciplines but stays within their boundaries. In multidisciplinarity, two or more disciplines work together on a common problem, but without altering their disciplinary approaches or developing a common conceptual framework.

In the context of the unprecedented worldwide pandemic-enhanced crises, transdisciplinarity appears as an all-sustainable way of solving complex real-world problems, pursuing a general search for a unity of knowledge, or Real-World AI.

The Trans-AI paradigm means that the classic studies of Plato, Aristotle, Kant, Leibniz's Logic as Calculation and Boole's Logic as Algebra with modern ontological, scientific, mathematical and statistical research of reality/knowledge/intelligence/data formalization/computing/automation are a key to [Real] AI.

For example, the conception of AI was inherently implied in Aristotle's Analytics, Prior and Posterior, Metaphysics/Ontology and Categories.

Without the reality/category theory, as the mind theory for human minds, and prior data analytics, no deep AI/ML/DL classifiers with effective classification algorithms are possible, where classes are targets, labels, or categories. ML/DL predictive modeling is NOT just the task of approximating a mapping function (f) from input variables (X) to output variables (y). Therefore, it is widely recognized that the lack of reality with causality is the black hole of current machine learning systems.
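The "mapping function" framing the author pushes back on can itself be written down in a few lines: supervised learning, at its barest, fits f from examples of (X, y). The toy least-squares fit below (invented data) illustrates exactly that framing, and also its limit: the fitted f captures association in the sample, not the causal structure that produced it.

```python
# Barest form of the f: X -> y view: fit y = w*x + b by least squares.
# Data is invented, generated from y = 2x + 1 so the fit is recoverable.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
# closed-form least-squares slope and intercept
w = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
     / sum((x - mx) ** 2 for x in xs))
b = my - w * mx

def f(x):
    # the learned "mapping function" from input variable to output
    return w * x + b
```

The fit here recovers w = 2 and b = 1 perfectly, yet says nothing about why x produces y, which is the gap between curve-fitting and the causal, reality-grounded modeling the text argues for.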

The Trans-AI is about the real-world data ontology, causality, real intelligence, science, computer model, semantics and syntax and pragmatics, universal knowledge/data synthesis vs. expert knowledge/data analytics, thus enabling a comprehensive machine understanding of data points, elements, sets, patterns, and relationships.

Without comprehensive causal worlds models integrating disciplinary, inter-, multi-, and trans-disciplinary knowledge, there is no real-world AI. A holistic research strategy integrating worlds knowledge into a meaningful whole is the systematic way of building the General Human-AI Platform as an Integrative General-Purpose Technology.

The current disciplinary approach to AI/ML/DL and robotics is, at best or worst for humanity, ending up with superhuman, narrow, human-mimicking AI applications integrated into our smart networks, devices, processes and services.

Some, who see AI as limited to augmenting or substituting biological intelligence with machine intelligence, believe transdisciplinarity is a way to a human-level AI.

The mono-disciplinary narrow AI of machine deep learning is blooming today, bringing its stakeholders unprecedented profits. The five top-performing tech stocks in the market, namely Facebook, Amazon, Apple, Microsoft, and Alphabet's Google (FAAMG), represent the U.S.'s narrow AI technology leaders, whose products span machine learning and deep learning or data analytics cloud platforms, mobile and desktop systems, hosting services, online operations, and software products. The five FAAMG companies had a joint market capitalization of around $4.5 trillion a year ago, and now exceed $7.6 trillion, all being within the top 10 companies in the US. According to Gartner's modest predictions, the total narrow-AI-derived business value is forecast to reach $3.9 trillion in 2022.

The future superhuman narrow AI applications are already here, within our smart networks, devices, processes and services.

Specially designed automated intelligence outperforms humans in strategic games such as chess and Go, video gaming, self-driving mobility, stock trading, financial transactions, medical diagnosis, NLP, language translation, pattern/object/face recognition, manufacturing processes, etc.

And yet these are only narrow, fragmented AI/ML/DL applications designed for narrow human-like tasks and jobs, merely more efficient and effective than human labor, mental or menial.

The existential question is: when will robots/machines/computers emerge as a general-purpose Real-World AI?

But most people are still blind to the disruptive, fundamental force of AI technology and its critical impact on our future.

Our company is proud to announce that EIS Encyclopedic Intelligent Systems LTD has completed studying, modeling, and designing the Real-World AI as a Causal Machine Intelligence and Learning platform, trademarked as the Causal Artificial Superintelligence (CASI) GPT Platform, complementing human intelligence, collective and individual.

It is still only narrow Anthropomorphic and Anthropocentric AI/ML/DL fragmented applications designed for narrow human-like tasks and jobs. Many scientists are trying to move the field of AI beyond data analytics, predictions and pattern-matching towards machines that could solve real-world problems. Some people think it might be enough to take what we have and just grow the size of the dataset, the model sizes, computer speed, to just get a bigger brain (Yoshua Bengio, Conference on Neural Information Processing Systems, NeurIPS 2019).

Still, the existential question is open: what if robots/machines/computers were to outsmart humans in all special respects?

To address the moral and existential issues of disciplinary AI/ML/DL and robotics fragmentation, as in Europe's Responsible and Trustworthy AI, we have developed a Transdisciplinary Real AI model, one that does not compete with, but complements, human intelligence.

Transdisciplinary AI conferences are now emerging, but are still regarded as an interdisciplinary collection of academic research themes:

Transdisciplinary AI 2021 (TransAI 2021) is technically sponsored by the IEEE Computer Society.

Trans-AI aims to integrate disciplinary AIs, symbolic/logical or statistical/data-driven, such as ML algorithms (DL, ANNs), which are designed to substitute biological intelligence with machine intelligence.

Trans-AI is developed as a Man-Machine Global AI (GAI) Platform to integrate Human Intelligence with Narrow AI, ML, DL, Human-level AI, or Superhuman AI, all as Neural Information Processing Systems. It relies on fundamental scientific world knowledge, cybernetics, computer science, mathematics, statistics, data science, computing ontologies, robotics, psychology, linguistics, semantics, and philosophy.

The Trans-AI model is mapped as an interdependent, mutually reinforcing, transdisciplinary quadrivium of the world's knowledge depicted by the global knowledge graph (see the extended version).

The Trans-AI is a systematic, holistic and analytical means of obtaining knowledge about the world.

The Trans-AI is technologically designed as a Causal Machine Intelligence and Learning Platform, to serve as Artificial Intelligence for Everybody and Everything, AI4EE.

The Trans-AI technology could become the most disruptive general-purpose technology of the 21st century, given an effective ecosystem of innovative business, government, policy-makers, NGOs, international organizations, civil society, academia, media and the arts.

The Trans-AI, as a Human-AI Global Platform, is designed to extract knowledge from massive digital data, creating breakthroughs in all parts of human life, from government to industry to education to healthcare to global security.

It aims to process structured and unstructured digital data within unifying world-intelligence-data models and causal algorithms, shifting from supervised to self-supervised real learning. Making breakthroughs in these areas will be a matter of life or death for the future of humanity.

Why Trans-AI could be the disruptive discovery, innovation and unifying general-purpose technology and the best smart investment

The Trans-AI could be the most disruptive research and breakthrough discovery, innovation and technology, meeting the AI founding fathers' dream to make machines use language, form abstractions and concepts, Google's mission to organize the world's information and make it universally accessible and useful, and the best human ambitions for a unified knowledge of the world.

Among other disruptive changes, the Trans-AI enriches, updates and scales up the disciplinary AIs, as proposed by the EC's High-Level Expert Group on Artificial Intelligence:

Artificial intelligence (AI) refers to systems that display intelligent behaviour by analysing their environment and taking actions with some degree of autonomy to achieve specific goals. AI-based systems can be purely software-based, acting in the virtual world (e.g. voice assistants, image analysis software, search engines, speech and face recognition systems) or AI can be embedded in hardware devices (e.g. advanced robots, autonomous cars, drones or Internet of Things applications).

Humanity's greatest concern must be the current accelerated growth of Big Tech's narrow and weak AI of machine learning, ANNs and deep learning: a non-real AI versus a Real-World AI. It is fast emerging as narrow-minded automated superintelligences outperforming humans in narrow cognitive tasks, implemented as LAWs or military AI, ML/DL drones, killer robots, humanoid robots, self-driving transportation, smart manufacturing machines, RPAs, cyborgs, trading algorithms, smart government decision makers, recommendation engines, medical AI systems, etc.

The whole idea of Anthropomorphic and Anthropocentric AI (AAAI), whether narrow or general, aimed at simulating human intelligence, cognitive skills, capacities, capabilities, and functions, as well as intelligent behavior and actions, in computing machines, raises a number of undecidable social, moral, ethical and legal dilemmas.

The narrow and weak deep-learning AI programs classify tremendous amounts of data without any understanding of the world or the meaning of their inputs and outputs (e.g., a recommendation to treat, a risk score, or behaviour changes).

These consequences could be much worse than those of human cloning, which is prohibited in most countries; massive technological unemployment without any compensating effects is just the beginning of the end.

This is what great minds have forewarned humanity about regarding the possibilities and perils of AAAI, the mimicking of human learning and reasoning by machines and humanoid robots:

The development of full artificial intelligence could spell the end of the human race. It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded. Stephen Hawking told the BBC

I visualise a time when we will be to robots what dogs are to humans, and I'm rooting for the machines. Claude Shannon

I'm increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don't do something very foolish. I mean with artificial intelligence we're summoning the demon. Elon Musk warned at MIT's AeroAstro Centennial Symposium

All that we need is a radically new kind of AI: a Real and True MI, a Real-World AI, the Trans-AI, which is to simulate and understand, or compute, reality, causality, and mentality in digital reality machines.

This is becoming clear even to profit-seeking industrialists such as E. Musk, who understands that without Real-World AI no really intelligent machine is possible: Self-driving requires solving a major part of real-world AI, so it's an insanely hard problem, but Tesla is getting it done. AI Day will be great. Nothing has more degrees of freedom than reality.

The rise of real artificial intelligence will create new jobs and destroy others, improve healthcare, disrupt smart cities, and minimize the impact of the next pandemic. Despite the concerns about the dark side of artificial intelligence, we are still far away from super artificial intelligence.

The rest is here:

The Need of A Real-World Artificial Intelligence in The Pandemic Era - BBN Times

Posted in Artificial Intelligence | Comments Off on The Need of A Real-World Artificial Intelligence in The Pandemic Era – BBN Times

Why ethics is essential in the creation of artificial intelligence – IT Brief Australia

Posted: at 12:28 am

Article by ManageEngine director of research Ramprakash Ramamoorthy.

Artificial intelligence (AI) has long been a feature of modern technology and is becoming increasingly common in workplace technologies. According to ManageEngine's recent 2021 Digital Readiness Survey, more than 86% of organisations in Australia and New Zealand reported increasing their use of AI over the past two years.

But despite an increased uptake across organisations in the A/NZ region, only 25% said their confidence in the technology had significantly increased.

One possible reason for the lack of overall confidence in AI is the potential for unethical biases to work their way into developing AI technologies. While it may be true that nobody sets out to build an unethical AI model, it may only take a few cases for disproportionate or accidental weighting to be applied to certain data types over others, creating unintentional biases.

Demographic data, names, years of experience, known anomalies, and other types of personally identifiable information are the types of data that can skew AI and lead to biased decisions. In essence, if AI is not properly designed to work with data, or the data provided is not clean, this can lead to the AI model generating predictions that could raise ethical concerns.
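As a purely hypothetical sketch (the field names below are invented, not taken from the survey or any real system), one baseline precaution the paragraph above implies is stripping personally identifiable fields before a model ever trains on the records:

```python
# Hypothetical records; every field name here is invented for the example.
records = [
    {"name": "A. Smith", "age": 52, "postcode": "2000", "years_exp": 12, "score": 7},
    {"name": "B. Jones", "age": 29, "postcode": "3000", "years_exp": 4, "score": 9},
]

SENSITIVE = {"name", "age", "postcode"}  # fields that can encode bias

def strip_sensitive(record):
    """Drop personally identifiable fields before the model sees the data."""
    return {k: v for k, v in record.items() if k not in SENSITIVE}

features = [strip_sensitive(r) for r in records]
print(features)  # only task-relevant fields remain
```

Dropping columns is only a first step: the remaining fields can still act as proxies for the removed ones, which is why the article stresses fairness at every stage of development.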

The rising use of AI across industries subsequently increases the need for AI models that arent subject to unintentional biases, even if this occurs as a by-product of how the models are developed.

Fortunately, there are several ways developers can ensure their AI models are designed as fairly as possible to reduce the potential for unintentional biases. Two of the most effective steps developers can take are:

Adopting a fairness-first mindset

Embedding fairness into every stage of AI development is a crucial step to take when developing ethical AI models. However, fairness principles are not always uniformly applied and can differ depending on the intended use for AI models, creating a challenge for developers.

All AI models should have the same fairness principles at their core. Educating data scientists on the need to build AI models with a fairness-first mindset will lead to significant changes in how the models are designed.

Remaining involved

While one of the benefits of AI is its ability to reduce the pressure on human workers to spend time and energy on smaller, repetitive tasks, and many models are designed to make their own predictions, humans need to remain involved with AI at least in some capacity.

This needs to be factored in throughout the development phase of an AI model and its application within the workplace. In many cases, this may involve the use of shadow AI, where both humans and AI models work on the same task before comparing the results to identify the effectiveness of the AI model.
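A minimal sketch of the comparison step in such a shadow deployment (the decisions below are invented): the model runs on the same cases as the human reviewers, and only the agreement rate is examined before any automation is trusted.

```python
# Invented example: the model shadows human reviewers on the same five
# cases; its output is not acted on, only compared.
human_decisions = ["approve", "reject", "approve", "approve", "reject"]
model_decisions = ["approve", "reject", "reject", "approve", "reject"]

matches = sum(h == m for h, m in zip(human_decisions, model_decisions))
rate = matches / len(human_decisions)
print(f"human-model agreement: {rate:.0%}")  # prints "human-model agreement: 80%"
```

A low agreement rate flags the model for review; a high one builds the confidence the article says organisations currently lack.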

Alternatively, developers may choose to keep human workers within the operating model of the AI technology, particularly in cases where an AI model doesnt have enough experience, which will let them guide the AI.

The use of AI will likely only continue to increase as organisations across A/NZ, and the world, continue to digitally transform. As such, its becoming increasingly clear that AI developments will need to become even more reliable than they currently are to reduce the potential for unintentional biases and increase user confidence in the technology.

Go here to read the rest:

Why ethics is essential in the creation of artificial intelligence - IT Brief Australia


Emory students advance artificial intelligence with a bot that aims to serve humanity – SaportaReport

Posted: August 28, 2021 at 12:12 pm

A team of six Emory computer science students are helping to usher in a new era in artificial intelligence. They've developed a chatbot capable of making logical inferences that aims to hold deeper, more nuanced conversations with humans than have previously been possible. They've christened their chatbot Emora, because it sounds like a feminine version of Emory and is similar to a Hebrew word for an eloquent sage.

The team is now refining their new approach to conversational AI a logic-based framework for dialogue management that can be scaled to conduct real-life conversations. Their longer-term goal is to use Emora to assist first-year college students, helping them to navigate a new way of life, deal with day-to-day issues and guide them to proper human contacts and other resources when needed.

Eventually, they hope to further refine their chatbot developed during the era of COVID-19 with the philosophy Emora cares for you to assist people dealing with social isolation and other issues, including anxiety and depression.

The Emory team is headed by graduate students Sarah Finch and James Finch, along with faculty advisor Jinho Choi, associate professor in the Department of Computer Sciences. The team also includes graduate student Han He and undergraduates Sophy Huang, Daniil Huryn and Mack Hutsell. All the students are members of Choi's Natural Language Processing Research Laboratory.

We're taking advantage of established technology while introducing a new approach in how we combine and execute dialogue management so a computer can make logical inferences while conversing with a human, Sarah Finch says.

We believe that Emora represents a groundbreaking moment for conversational artificial intelligence, Choi adds. The experience that users have with our chatbot will be largely different than chatbots based on traditional, state-machine approaches to AI.

Last year, Choi and Sarah and James Finch headed a team of 14 Emory students that took first place in Amazon's Alexa Prize Socialbot Grand Challenge, winning $500,000 for their Emora chatbot. The annual Alexa Prize challenges university students to make breakthroughs in the design of chatbots, also known as socialbots: software apps that simplify interactions between humans and computers by allowing them to talk with one another.

This year, they developed a completely new version of Emora with the new team of six students.

They made the bold decision to start from scratch, instead of building on the state-machine platform they developed in 2020 for Emora. We realized there was an upper limit to how far we could push the quality of the system we developed last year, Sarah Finch says. We wanted to do something much more advanced, with the potential to transform the field of artificial intelligence.

They based the current Emora on three types of frameworks: core natural language processing technology, computational symbolic structures, and probabilistic reasoning for dialogue management.

They worked around the clock, making it into the Alexa Prize finals in June. They did not complete most of the new system, however, until just a few days before they had to submit Emora to the judges for the final round of the competition.

That gave the team no time to make finishing touches to the new system, work out the bugs, and flesh out the range of topics that it could deeply engage in with a human. While they did not win this year's Alexa Prize, the strategy led them to develop a system that holds more potential to open new doors of possibilities for AI.

In the run-up to the finals, users of Amazon's virtual assistant, known as Alexa, volunteered to test out the competing chatbots, which were not identified by their names or universities. A chatbot's success was gauged by user ratings.

The competition is extremely valuable because it gave us access to a high volume of people talking to our bot from all over the world, James Finch says. When we wanted to try something new, we didn't have to wait long to see whether it worked. We immediately got this deluge of feedback so that we could make any needed adjustments. One of the biggest things we learned is that what people really want to talk about is their personal experiences.

Sarah and James Finch, who married in 2019, are the ultimate computer power couple. They met at age 13 in a math class in their hometown of Grand Blanc, Michigan. They were dating by high school, bonding over a shared love of computer programming. As undergraduates at Michigan State University, they worked together on a joint passion for programming computers to speak more naturally with humans.

If we can create more flexible and robust dialogue capability in machines, Sarah Finch explains, a more natural, conversational interface could replace pointing, clicking and hours of learning a new software interface. Everyone would be on a more equal footing because using technology would become easier.

She hopes to pursue a career in enhancing computer dialogue capabilities with private industry after receiving her PhD.

James Finch is most passionate about the intellectual aspects of solving problems and is leaning towards a career in academia after receiving his PhD.

The Alexa Prize deadlines required the couple to work many 60-hour-plus weeks on developing Emora's framework, but they didn't consider it a grind. I've enjoyed every day, James Finch says. Doing this kind of dialogue research is our dream and we're living it. We are making something new that will hopefully be useful to the world.

They chose to come to Emory for graduate school because of Choi, an expert in natural language processing, and Eugene Agichtein, professor in the Department of Computer Science and an expert in information retrieval.

Emora was designed not just to answer questions, but as a social companion.

A caring chatbot was an essential requirement for Choi. At the end of every team meeting, he asks one member to say something about how the others have inspired them. When someone sees a bright side in us, and shares it with others, everyone sees that side and that makes it even brighter, he says.

Choi's enthusiasm is also infectious.

Growing up in Seoul, South Korea, he knew by the age of six that he wanted to design robots. I remember telling my mom that I wanted to make a robot that would do homework for me so I could play outside all day, he recalls. It has been my dream ever since. I later realized that it was not the physical robot, but the intelligence behind the robot that really attracted me.

The original Emora was built on a behavioral mathematical model similar to a flowchart and equipped with several natural language processing models. Depending on what people said to the chatbot, the machine made a choice about what path of a conversation to go down. While the system was good at chit chat, the longer a conversation went on, the more chances that the system would miss a social-linguistic nuance and the conversation would go off the rails, diverting from the logical thread.

This year, the Emory team designed Emora so that she could go beyond a script and make logical inferences. Rather than a flowchart, the new system breaks a conversation down into concepts and represents them using a symbolic graph. A logical inference engine allows Emora to connect the graph of an ongoing conversation into other symbolic graphs that represent a bank of knowledge and common sense. The longer the conversations continue, the more its ability to make logical inferences grows.
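The article does not publish Emora's implementation, but the idea it describes — conversation content stored as a symbolic graph, with an inference engine deriving new connections — can be sketched in miniature. The triples and the single rule below are invented for illustration:

```python
# Toy knowledge as (subject, relation, object) triples, the shape of a
# symbolic graph; both the facts and the rule are invented for this sketch.
facts = {
    ("user", "likes", "pizza"),
    ("pizza", "is_a", "food"),
}

def infer(facts):
    """One inference pass: if S likes X and X is_a Y, derive S likes Y."""
    derived = set(facts)
    for s, r, o in facts:
        if r != "likes":
            continue
        for s2, r2, o2 in facts:
            if s2 == o and r2 == "is_a":
                derived.add((s, "likes", o2))
    return derived

print(("user", "likes", "food") in infer(facts))  # prints "True"
```

Each conversational turn adds triples to the graph, so — as the paragraph above notes — the longer the conversation runs, the more facts the rules have to chain over.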

Sarah and James Finch worked on the engineering of the new Emora system, as well as designing logic structures and implementing related algorithms. Undergraduates Sophy Huang, Daniil Huryn and Mack Hutsell focused on developing dialogue content and conversational scripts for integrating within the chatbot. Graduate student Han He focused on structure parsing, including recent advances in the technology.

A computer cannot deal with ambiguity, it can only deal with structure, Han He explains. Our parser turns the grammar of a sentence into a graph, a structure like a tree, that describes what a chatbot user is saying to the computer.
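A hand-written miniature of such a tree (not output of their actual parser) shows how a sentence becomes nested head-dependent nodes that a program can traverse instead of a flat string:

```python
# A hand-written parse of "I like pizza" as a tree of head-dependent
# relations; real parsers emit richer structures, but the shape is similar.
parse = {
    "word": "like",  # the main verb is the root
    "children": [
        {"word": "I", "relation": "subject", "children": []},
        {"word": "pizza", "relation": "object", "children": []},
    ],
}

def collect_words(node):
    """Walk the tree depth-first and gather its words."""
    out = [node["word"]]
    for child in node["children"]:
        out.extend(collect_words(child))
    return out

print(collect_words(parse))  # prints "['like', 'I', 'pizza']"
```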

He is passionate about language. Growing up in a small city in central China, he studied Japanese with the goal of becoming a linguist. His family was low income so he taught himself computer programming and picked up odd programmer jobs to help support himself. In college, he found a new passion in the field of natural language processing, or using computers to process human language.

His linguistic background enhances his technological expertise. When you learn a foreign language, you get new insights into the role of grammar and word order, He says. And those insights can help you to develop better algorithms and programs to teach computers how to understand language. Unfortunately, many people working in natural language processing focus primarily on mathematics without realizing the importance of grammar.

After getting his master's at the University of Houston, He chose to come to Emory for a PhD to work with Choi, who also emphasizes linguistics in his approach to natural language processing. He hopes to make a career in using artificial intelligence as an educational tool that can help give low-income children an equal opportunity to learn.

A love of language also brought senior Mack Hutsell into the fold. A native of Houston, he came to Emory's Oxford College to study English literature. His second love is computer programming and coding. When Hutsell discovered the digital humanities, using computational methods to study literary texts, he decided on a double major in English and computer science.

I enjoy thinking about language, especially language in the context of computers, he says.

Choi's Natural Language Processing Lab and the Emora project were a natural fit for him.

Like the other undergraduates on the team, Hutsell did miscellaneous tasks for the project while also creating content that could be injected into Emora's real-world knowledge graph. On the topic of movies, for instance, he started with an IMDB dataset. The team had to combine concepts from possible conversations about the movie data in ways that would fit into the knowledge graph template and generate unique responses from the chatbot. Thinking about how to turn metadata and numbers into something that sounds human is a lot of fun, Hutsell says.
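A hypothetical sketch of that metadata-to-utterance step (the record and template below are invented; the real system works through Emora's knowledge-graph templates rather than a single format string):

```python
# Invented movie record in the style of IMDB metadata.
movie = {"title": "Inception", "year": 2010, "rating": 8.8}

def movie_comment(m):
    """Turn bare metadata into something that sounds conversational."""
    return (f"Oh, {m['title']}? The {m['year']} release "
            f"scored {m['rating']} with viewers, I hear.")

print(movie_comment(movie))
```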

Language was also a key draw for senior Danii Huryn. He was born in Belarus, moved to California with his family when he was four, and then returned to Belarus when he was 10, staying until he completed high school. He speaks English, Belarusian and Russian fluently and is studying German.

In Belarus, I helped translate at my church, he says. That got me thinking about how different languages work differently and that some are better at saying different things.

Huryn excelled in computer programming and astronomy in his studies in Belarus. His interests also include reading science fiction and playing video games. He began his Emory career on the Oxford campus, and eventually decided to major in computer science and minor in physics.

For the Emora project, he developed conversations about technology, including an AI component, and another on how people were adapting to life during the pandemic.

The experience was great, Huryn says. I helped develop features for the bot while I was taking a course in natural language processing. I could see how some of the things I was learning about were coming together into one package to actually work.

Team member Sophy Huang, also a senior, grew up in Shanghai and came to Emory planning to go down a pre-med track. She soon realized, however, that she did not have a strong enough interest in biology and decided on a dual major of applied mathematics and statistics and psychology. Working on the Emora project also taps into her passions for computer programming and developing applications that help people.

Psychology plays a big role in natural language processing, Huang says. Its really about investigating how people think, talk and interact and how those processes can be integrated into a computer.

Food was one of the topics Huang developed for Emora to discuss. The strategy was first to connect with users by showing understanding, she says.

For instance, if someone says pizza is their favorite food, Emora would acknowledge their interest and ask what it is about pizza that they like so much.

By continuously acknowledging and connecting with the user, asking for their opinions and perspectives and sharing her own, Emora shows that she understands and cares, Huang explains. That encourages them to become more engaged and involved in the conversation.

The Emora team members are still at work putting the finishing touches on their chatbot.

We created most of the system that has the capability to do logical thinking, essentially the brain for Emora, Choi says. The brain just doesn't know that much about the world right now and needs more information to make deeper inferences. You can think of it like a toddler. Now we're going to focus on teaching the brain so it will be on the level of an adult.

The team is confident that their system works and that they can complete full development and integration to launch beta testing sometime next spring.

Choi is most excited about the potential to use Emora to support first-year college students, answering questions about their day-to-day needs and directing them to the proper human staff or professor as appropriate. For larger issues, such as common conflicts that arise in group projects, Emora could also serve as a starting point by sharing how other students have overcome similar issues.

Choi also has a longer-term vision that the technology underlying Emora may one day be capable of assisting people dealing with loneliness, anxiety or depression. I don't believe that socialbots can ever replace humans as social companions, he says. But I do think there is potential for a socialbot to sympathize with someone who is feeling down, and to encourage them to get help from other people, so that they can get back to the cheerful life that they deserve.

Continued here:

Emory students advance artificial intelligence with a bot that aims to serve humanity - SaportaReport


Frontier Development Lab Transforms Space and Earth Science for NASA with Google Cloud Artificial Intelligence and Machine Learning Technology – SETI…

Posted: at 12:12 pm

August 26, 2021, Mountain View, Calif. Frontier Development Lab (FDL), in partnership with the SETI Institute, NASA and private sector partners including Google Cloud, is transforming space and Earth science through the application of industry-leading artificial intelligence (AI) and machine learning (ML) tools.

FDL tackles knowledge gaps in space science by pairing ML experts with researchers in physics, astronomy, astrobiology, planetary science, space medicine and Earth science. These researchers have utilized Google Cloud compute resources and expertise since 2018, specifically AI/ML technology, to address research challenges in areas like astronaut health, lunar exploration, exoplanets, heliophysics, climate change and disaster response.

With access to compute resources provided by Google Cloud, FDL has been able to accelerate the typical ML pipeline by more than 700 times over the last five years, facilitating new discoveries and improved understanding of our planet, solar system and the universe. Throughout this period, Google Cloud's Office of the CTO (OCTO) has provided ongoing strategic guidance to FDL researchers on how to optimize AI/ML and how to use compute resources most efficiently.

With Google Cloud's investment, recent FDL achievements include:

"Unfettered on-demand access to massive super-compute resources has transformed the FDL program, enabling researchers to address highly complex challenges across a wide range of science domains, advancing new knowledge, new discoveries and improved understandings in previously unimaginable timeframes, said Bill Diamond, president and CEO, SETI Institute.This program, and the extraordinary results it achieves, would not be possible without the resources generously provided by Google Cloud.

When I first met Bill Diamond and James Parr in 2017, they asked me a simple question: What could happen if we marry the best of Silicon Valley and the minds of NASA? said Scott Penberthy, director of Applied AI at Google Cloud. That was an irresistible challenge. We at Google Cloud simply shared some of our AI tricks and tools, one engineer to another, and they ran with it. I'm delighted to see what we've been able to accomplish together, and I am inspired for what we can achieve in the future. The possibilities are endless.

FDL leverages AI technologies to push the frontiers of science research and develop new tools to help solve some of humanity's biggest challenges. FDL teams are comprised of doctoral and post-doctoral researchers who use AI / ML to tackle ground-breaking challenges. Cloud-based super-computer resources mean that FDL teams achieve results in eight-week research sprints that would not be possible in even year-long programs with conventional compute capabilities.

High-performance computing is normally constrained due to the large amount of time, limited availability and cost of running AI experiments, said James Parr, director of FDL. You're always in a queue. Having a common platform to integrate unstructured data and train neural networks in the cloud allows our FDL researchers from different backgrounds to work together on hugely complex problems with enormous data requirements, no matter where they are located.

Better integrating science and ML is the founding rationale and future north star of FDL's partnership with Google Cloud. ML is particularly powerful for space science when paired with a physical understanding of a problem space. The gap between what we know so far and what we collect as data is an exciting frontier for discovery, and something AI/ML and cloud technology are poised to transform.

You can learn more about FDL's 2021 program here.

In addition to Google Cloud, FDL is supported by partners including Lockheed Martin, Intel, Luxembourg Space Agency, MIT Portugal, Lawrence Berkeley National Lab, USGS, Microsoft, NVIDIA, Mayo Clinic, Planet and IBM.

About the SETI Institute
Founded in 1984, the SETI Institute is a non-profit, multidisciplinary research and education organization whose mission is to lead humanity's quest to understand the origins and prevalence of life and intelligence in the universe and share that knowledge with the world. Our research encompasses the physical and biological sciences and leverages expertise in data analytics, machine learning and advanced signal detection technologies. The SETI Institute is a distinguished research partner for industry, academia and government agencies, including NASA and NSF.

Contact Information:
Rebecca McDonald
Director of Communications
SETI Institute
rmcdonald@SETI.org

Frontier Development Lab Transforms Space and Earth Science for NASA with Google Cloud Artificial Intelligence and Machine Learning Technology - SETI...


Embedding Gender in International Humanitarian Law: Is Artificial Intelligence Up to the Task? – Just Security

Posted: at 12:12 pm

During armed conflict, unequal power relations and structural disadvantages derived from gender dynamics are exacerbated. These dynamics have received increasing recognition over the last several decades, particularly in the context of sexual and gender-based violence in conflict, as exemplified in United Nations Security Council Resolution 1325 on Women, Peace, and Security. Though initiatives like this resolution are a positive step towards recognizing the discrimination and structural disadvantages women suffer during armed conflict, other aspects of armed conflict, including, notably, the use of artificial intelligence (AI) for targeting purposes, have remained resistant to insights related to gender. This is particularly problematic in the operational aspect of international humanitarian law (IHL), which contains the rules on targeting in armed conflict.

The Gender Dimensions of Distinction and Proportionality

Some gendered dimensions of the application of IHL have long been recognized, especially in the context of rape and other forms of sexual violence against women during armed conflict. A great deal of attention has accordingly been paid to ensuring accountability for crimes of sexual violence in times of armed conflict, while other aspects of conflict, such as the operational side of IHL, have remained overlooked.

In applying the principle of distinction, which requires distinguishing civilians from combatants (only the latter of whom may be the target of a lawful attack), gendered assumptions about who poses a threat have often played an important role. In modern warfare, often characterized by asymmetry and urban conflict, where combatants can blend in with the civilian population, some militaries and armed groups have struggled to reliably distinguish civilians. Due to gendered stereotypes of the expected behavior of women and men, gender has operated as a de facto qualified identity that supplements the category of "civilian." In practice this can mean that, for women to be targeted, IHL requirements are rigorously applied; yet in the case of young civilian males, the bar seems to be lower: gender considerations, coupled with other factors such as geographical location, expose them to a greater risk of being targeted.

An illustrative example of this application of the principle of distinction is in so-called "signature strikes," a subset of drone strikes adopted by the United States outside what it considers to be areas of active hostilities. Signature strikes target persons who are not on traditional battlefields without individually identifying them, but rather based only on "patterns of life." According to reports on these strikes, it is sufficient that the persons targeted fit into the category of military-aged males who live in regions where terrorists operate, and whose behavior is assessed to be similar enough to those of terrorists to mark them for death. However, as the organization Article 36 notes, due to the lack of transparency around the use of armed drones in signature strikes, it is difficult to determine in more detail what standards the U.S. government uses to classify certain individuals as legal targets. According to a New York Times report from May 2012, in counting casualties from armed drone strikes, the U.S. government reportedly recorded "all military-age males in a strike zone as combatants [...] unless there is explicit intelligence posthumously proving them innocent."

However, once a target is assessed as a valid military objective, the impact of gender is reversed in the proportionality assessment. The principle of proportionality requires ensuring that the anticipated harm to civilians and civilian objects is not excessive compared to the anticipated military advantage of an attack. But in weighing anticipated advantage against anticipated civilian harm, the calculated military advantage can include the expected reduction of the commander's own combatant casualties; in other words, the actual loss of civilian lives can be offset by the avoidance of prospective military casualties. This creates the de facto result that the lives of combatants, the vast majority of whom are men, are weighed as more important than those of civilians, who in a battlefield context are often disproportionately women. Taking these applications of IHL into account, we can conclude that a gendered dimension is present in the operational aspect of this branch of law.
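The offset just described can be made concrete with a toy calculation. All numbers below are hypothetical, and the crude numeric threshold is only a stand-in: the actual legal proportionality test is qualitative and contextual, not arithmetic.

```python
# Toy illustration (hypothetical values) of counting avoided friendly
# casualties as "military advantage" in a proportionality-style comparison.
expected_civilian_harm = 10    # anticipated civilian casualties
base_military_advantage = 6    # advantage excluding force protection
avoided_own_casualties = 8     # commander's own combatants spared

def proportionate(harm, advantage):
    # Crude stand-in for the legal test: harm must not be "excessive"
    # relative to advantage (the real test is qualitative, not arithmetic).
    return harm <= advantage

# Without the offset, the attack fails the toy test; counting the
# commander's spared combatants as advantage flips the outcome.
without_offset = proportionate(expected_civilian_harm, base_military_advantage)
with_offset = proportionate(expected_civilian_harm,
                            base_military_advantage + avoided_own_casualties)
print(without_offset, with_offset)
```

The point of the sketch is structural: whenever avoided friendly casualties enter the advantage side of the ledger, civilian harm that would otherwise be judged excessive can tip back into "proportionate."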

AI Application of IHL Principles

New technologies, particularly AI, have been increasingly deployed to assist commanders in their targeting decisions. Specifically, machine-learning algorithms are being used to process massive amounts of data to identify rules or patterns, drawing conclusions about individual pieces of information based on these patterns. In warfare, AI already supports targeting decisions in various forms. For instance, AI algorithms can estimate collateral damage, thereby helping commanders undertake the proportionality analysis. Likewise, some drones have been outfitted with AI to conduct image recognition and are currently being trained to scan urban environments to find hidden attackers; in other words, to distinguish between civilians and combatants as required by the principle of distinction.

Indeed, in modern warfare, the use of AI is expanding. For example, in March 2021 the National Security Commission on AI, a U.S. congressionally-mandated commission, released a report highlighting how, in the future, AI-enabled technologies are going to permeate every facet of warfighting. It also urged the Department of Defense to integrate AI into critical functions and existing systems in order to become an AI-ready force by 2025. As Neil Davison and Jonathan Horowitz note, as the use of AI grows, it is crucial to ensure that its development and deployment (especially when coupled with the use of autonomous weapons) complies with civilian protection.

Yet even if IHL principles can be translated faithfully into the programming of AI-assisted military technologies (a big and doubtful if), such translation will reproduce or even magnify the disparate, gendered impacts of IHL application identified previously. As the case of drones used to undertake signature strikes demonstrates, the integration of new technologies in warfare risks importing, and in the case of AI tech, potentially magnifying and cementing, the gendered injustices already embodied in the application of existing law.

Gendering Artificial Intelligence-Assisted Warfare

There are several reasons that AI may end up reifying and magnifying gender inequities. First, the algorithms are only as good as their inputs and those underlying data are problematic. To properly work, AI needs massive amounts of data. However, neither the collection nor selection of these data are neutral. In less deadly application domains, such as in mortgage loan decisions or predictive policing, there have been demonstrated instances of gender (and other) biases of both the programmers and the individuals tasked with classifying data samples, or even the data sets themselves (which often contain more data on white, male subjects).

Perhaps even more difficult to identify and correct than individuals' biases are instances of machine learning that replicate and reinforce historical patterns of injustice merely because those patterns appear, to the AI, to provide useful information rather than undesirable noise. As Noel Sharkey notes, "the societal push towards greater fairness and justice is being held back by historical values about poverty, gender and ethnicity that are ossified in big data." There is no reason to believe that bias in targeting data would be any different or any easier to find.

This means that historical human biases can and do lead to incomplete or unrepresentative training data. For example, a predictive algorithm used to apply the principle of distinction on the basis of target profiles, together with other intelligence, surveillance, and reconnaissance tools, will be gender biased if the input data equate military-aged men with combatants and disregard other factors. As the practice of signature drone strikes has demonstrated, automatically classifying men as combatants and women as vulnerable has led to mistakes in targeting. As the use of machine learning in targeting expands, these biases will be amplified if not corrected for, with each strike providing increasingly biased data.
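This feedback problem can be sketched in a few lines with synthetic data and a deliberately biased labelling rule. Everything here is illustrative: the population, the labelling rule, and the "model" (a simple conditional rate) are invented for the example and drawn from no real system.

```python
# Illustrative sketch: a model fit to labels produced by the biased rule
# "military-aged male = combatant" simply learns that rule back, regardless
# of ground truth. All data are synthetic.
import random

random.seed(0)

# Each record: (is_male, age, truly_combatant); 10% of everyone is a combatant.
population = [(random.random() < 0.5,
               random.randint(15, 70),
               random.random() < 0.1)
              for _ in range(10_000)]

# Biased labelling: anyone male and aged 18-50 is recorded as a combatant,
# ignoring the ground-truth field entirely.
def biased_label(is_male, age, _truth):
    return is_male and 18 <= age <= 50

training = [((is_male, age), biased_label(is_male, age, truth))
            for is_male, age, truth in population]

# "Training" a trivial model: estimate P(labelled combatant | is_male).
def rate(records, male):
    rows = [label for (m, _), label in records if m == male]
    return sum(rows) / len(rows)

p_male, p_female = rate(training, True), rate(training, False)
print(f"P(labelled combatant | male)   = {p_male:.2f}")
print(f"P(labelled combatant | female) = {p_female:.2f}")
```

By construction, no woman is ever labelled a combatant and most men are, so any learner fit to these labels inherits the shortcut; if its outputs then generate the next round of training data, the bias compounds.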

To mitigate this result, it is critical to ensure that the data collected are diverse, accurate, and disaggregated, and that algorithm designers reflect on how the principles of distinction and proportionality can be applied in gender-biased ways. High quality data collection means, among other things, ensuring that the data are disaggregated by gender; otherwise it will be impossible to learn what biases are operating behind the assumptions used, what works to counter those biases, and what does not.
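One concrete payoff of disaggregation can be shown with a toy evaluation over hand-made records (all field values hypothetical): the aggregate error rate looks moderate, while the per-group split reveals that civilians of one gender are wrongly flagged twice as often as the other.

```python
# Toy gender-disaggregated evaluation with synthetic records.
# Each record: (gender, predicted_combatant, actually_combatant)
records = [
    ("male",   True,  False), ("male",   True,  False), ("male",   True,  True),
    ("male",   False, False), ("female", False, False), ("female", False, True),
    ("female", False, False), ("female", True,  False),
]

def false_positive_rate(rows):
    # Share of true civilians (actual == False) wrongly flagged as combatants.
    flags_on_civilians = [pred for _, pred, actual in rows if not actual]
    return sum(flags_on_civilians) / len(flags_on_civilians)

overall = false_positive_rate(records)
by_group = {g: false_positive_rate([r for r in records if r[0] == g])
            for g in ("male", "female")}
print(f"overall FPR:   {overall:.2f}")
print(f"per-group FPR: {by_group}")
```

Without the per-group split, the single aggregate number gives no hint that errors fall disproportionately on one gender, which is exactly the information needed to diagnose and counter the bias.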

Ensuring high quality data also requires collecting more and different types of data, including data on women. In addition, because AI tools reflect the biases of those who build them, ensuring that female employees hold technical roles and that male employees are fully trained to understand gender and other biases is also crucial to mitigate data biases. Incorporating gender advisors would also be a positive step to ensure that the design of the algorithm, and the interpretation of what the algorithm recommends or suggests, considers gender biases and dynamics.

However, issues of data quality are subsidiary to larger questions about the possibility of translating IHL into code and, even if this translation is possible, the further difficulty of incorporating gender considerations into IHL code. Encoding gender considerations into AI is challenging to say the least, because gender is both a societal and individual construction. Likewise, the process of developing AI is not neutral, as it has both politics and ethics embedded, as demonstrated by documented incidents of AI encoding biases. Finally, the very rules and principles of modern IHL were drafted when structural discrimination against women was not acknowledged or was viewed as natural or beneficial. As a result, when considering how to translate IHL into code, it is essential to incorporate critical gender perspectives into the interpretation of the norms and laws related to armed conflict.

Gendering IHL: An Early Attempt and Work to be Done

An example of the kind of critical engagement with IHL that will be required is provided by the updated International Committee of the Red Cross (ICRC) Commentary on the Third Geneva Convention. Through the incorporation of particular considerations of gender-specific risks and needs (para. 1747), the updated commentary has reconsidered outdated baseline gender assumptions, such as the idea that women have non-combatant status by default, or that women must receive special consideration because they have less resilience, agency or capacity (para. 1682). This shift has demonstrated that it is not only desirable, but also possible to include a gender perspective in the interpretation of the rules of warfare. This shift also underscores the urgent need to revisit IHL targeting principles of distinction and proportionality to assess how their application impacts genders differently, so that any algorithms developed to execute IHL principles incorporate these insights from the start.

As a first cut at this reexamination, it is essential to reassert that principles of non-discrimination also apply to IHL, and must be incorporated into any algorithmic version of these rules. In particular, the principle of distinction allows commanders to lawfully target only those identified as combatants or those who directly participate in hostilities. Article 50 of Additional Protocol I to the Geneva Conventions defines civilians in a negative way, meaning that civilians are those who do not belong to the category of combatants; IHL makes no reference to gender as a signifier of identity for the purpose of assessing whether a given individual is a combatant. In this regard, being a military-aged male cannot be a shortcut to the identification of combatants. Men make up the category of civilians as well. As Maya Brehm notes, "there is scope for categorical targeting within a conduct of hostilities framework, but the principle of non-discrimination continues to apply in armed conflict. Adverse distinction based on race, sex, religion, national origin or similar criteria is prohibited."

Likewise, any attempt to translate the principle of proportionality into code must recognize and correct for the gendered impacts of current proportionality calculations. For example, across Syria between 2011 and 2016, 75 percent of the civilian women killed in conflict-related violence were killed by shelling or aerial bombardment. In contrast, 49 percent of civilian men killed in war-related violence were killed by shelling or aerial bombardment; men were more often killed by shooting. This suggests that particular tactics and weapons have disparate impacts on civilian populations that break down along gendered lines. The study's authors note that the evolving tactics used by Syrian, opposition, and international forces in the conflict contributed to a decrease in the proportion of casualties who were combatants, as the use of shelling and bombardment, two weapons shown to have high rates of civilian casualties (especially women and children), increased over time. They also note, however, that changing patterns of civilian and combatant behavior may partially explain the increasing rates of women compared to men among civilian casualties: "A possible contributor to increasing proportions of women and children among civilian deaths could be that numbers of civilian men in the population decreased over time as some took up arms to become combatants."

As currently understood, IHL does not require an analysis of the gendered impacts of, for example, the choice of aerial bombardment versus shooting. Yet this research suggests that selecting aerial bombardment as a tactic will result in more civilian women than men being killed (nearly 37 percent of women killed in the conflict versus 23 percent of men). Selecting shooting as a tactic produces opposite results, with 23 percent of civilian men killed by shooting compared to 13 percent of women. There is no right proportion of civilian men and women killed by a given tactic, but these disparities have profound, real-world consequences for civilian populations during and after conflict that are simply not considered under current rules of proportionality and distinction.

In this regard, although using force protection to limit one's own forces' casualties is not forbidden, such a strategy ought to consider the effect that this policy will have on the civilian population of the opposing side, including gendered impacts. Compiling data on how a certain means or method of warfare impacts the civilian population would enable commanders to make more informed decisions. Acknowledging that the effects of weapons in warfare are gendered is the first key step to be taken. In some cases, there has been progress in incorporating a gendered lens into positive IHL, as in the case of cluster munitions, where Article 5 of the convention banning these weapons notes that States shall provide gender-sensitive assistance to victims. But most of this analysis remains rudimentary and not clearly required. In the context of developing AI-assisted technologies, reflecting on the gendered impact of the algorithm is essential during AI development, acquisition, and application.

The process of encoding IHL principles of distinction and proportionality into AI systems provides a useful opportunity to revisit application of these principles with an eye toward interpretations that take into account modern gender perspectives both in terms of how such IHL principles are interpreted and how their application impacts men and women differently. As the recent update of the ICRC Commentary on the Third Geneva Convention illustrates, acknowledging and incorporating gender-specific needs in the interpretation and suggested application of the existing rules of warfare is not only possible, but also desirable.

Disclaimer: This post has been prepared as part of a research internship at the Erasmus University Rotterdam, funded by the European Union (EU) Non-Proliferation and Disarmament Consortium as part of a larger EU educational initiative aimed at building capacity in the next generation of scholars and practitioners in non-proliferation policy and programming. The views expressed in this post are those of the author and do not necessarily reflect those of the Erasmus University Rotterdam, the EU Non-Proliferation and Disarmament Consortium or other members of the network.

