Daily Archives: January 9, 2022

Artificial Intelligence (AI) – United States Department of …

Posted: January 9, 2022 at 5:13 pm

A global technology revolution is now underway. The world's leading powers are racing to develop and deploy new technologies like artificial intelligence and quantum computing that could shape everything about our lives, from where we get energy, to how we do our jobs, to how wars are fought. We want America to maintain our scientific and technological edge, because it's critical to thriving in the 21st-century economy.

Investments in AI have led to transformative advances now impacting our everyday lives, including mapping technologies, voice-assisted smart phones, handwriting recognition for mail delivery, financial trading, smart logistics, spam filtering, language translation, and more. AI advances are also providing great benefits to our social wellbeing in areas such as precision medicine, environmental sustainability, education, and public welfare.

The term "artificial intelligence" means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.

The Department of State focuses on AI because it is at the center of the global technological revolution; advances in AI technology present both great opportunities and challenges. The United States, along with our partners and allies, can both further our scientific and technological capabilities and promote democracy and human rights by working together to identify and seize the opportunities while meeting the challenges by promoting shared norms and agreements on the responsible use of AI.

Together with our allies and partners, the Department of State promotes an international policy environment and works to build partnerships that further our capabilities in AI technologies, protect our national and economic security, and promote our values. Accordingly, the Department engages in various bilateral and multilateral discussions to support responsible development, deployment, use, and governance of trustworthy AI technologies.

The Department provides policy guidance to implement trustworthy AI through the Organization for Economic Cooperation and Development (OECD) AI Policy Observatory, a platform established in February 2020 to facilitate dialogue between stakeholders and provide evidence-based policy analysis in the areas where AI has the most impact. The State Department provides leadership and support to the OECD Network of Experts on AI (ONE AI), which informs this analysis. The United States has 47 AI initiatives associated with the Observatory that help contribute to COVID-19 response, invest in workforce training, promote safety guidance for automated transportation technologies, and more.

The OECD's Recommendation on Artificial Intelligence is the backbone of the activities at the Global Partnership on Artificial Intelligence (GPAI) and the OECD AI Policy Observatory. In May 2019, the United States joined together with like-minded democracies of the world in adopting the OECD Recommendation on Artificial Intelligence, the first set of intergovernmental principles for trustworthy AI. The principles promote inclusive growth, human-centered values, transparency, safety and security, and accountability. The Recommendation also encourages national policies and international cooperation to invest in research and development and support the broader digital ecosystem for AI. The Department of State champions the principles as the benchmark for trustworthy AI, which helps governments design national legislation.

GPAI is a voluntary, multi-stakeholder initiative launched in June 2020 for the advancement of AI in a manner consistent with democratic values and human rights. GPAI's mandate is focused on project-oriented collaboration, which it supports through working groups looking at responsible AI, data governance, the future of work, and commercialization and innovation. As a founding member, the United States has played a critical role in guiding GPAI and ensuring it complements the work of the OECD.

In the context of military operations in armed conflict, the United States believes that international humanitarian law (IHL) provides a robust and appropriate framework for the regulation of all weapons, including those using autonomous functions provided by technologies such as AI. Building a better common understanding of the potential risks and benefits that are presented by weapons with autonomous functions, in particular their potential to strengthen compliance with IHL and mitigate risk of harm to civilians, should be the focus of international discussion. The United States supports the progress in this area made by the Convention on Certain Conventional Weapons, Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapon Systems (GGE on LAWS), which adopted by consensus 11 Guiding Principles on responsible development and use of LAWS in 2019. The State Department will continue to work with our colleagues at the Department of Defense to engage the international community within the LAWS GGE.

Learn more about what specific bureaus and offices are doing to support this policy issue:

The Global Engagement Center has developed a dedicated effort for the U.S. Government to identify, assess, test and implement technologies against the problems of foreign propaganda and disinformation, in cooperation with foreign partners, private industry and academia.

The Office of the Under Secretary for Management uses AI technologies within the Department of State to advance traditional diplomatic activities, applying machine learning to internal information technology and management consultant functions.

The Office of the Under Secretary of State for Economic Growth, Energy, and the Environment engages internationally to support the U.S. science and technology (S&T) enterprise through global AI research and development (R&D) partnerships, setting fair rules of the road for economic competition, advocating for U.S. companies, and enabling foreign policy and regulatory environments that benefit U.S. capabilities in AI.

The Office of the Under Secretary of State for Arms Control and International Security focuses on the security implications of AI, including potential applications in weapon systems, its impact on U.S. military interoperability with its allies and partners, its impact on stability, and export controls related to AI.

The Office of the Under Secretary for Civilian Security, Democracy, and Human Rights and its component bureaus and offices focus on issues related to AI and governance, human rights, including religious freedom, and law enforcement and crime, among others.

The Office of the Legal Adviser leads on issues relating to AI in weapon systems (LAWS), in particular at the Group of Governmental Experts on Lethal Autonomous Weapons Systems convened under the auspices of the Convention on Certain Conventional Weapons.

For more information on federal programs and policy on artificial intelligence, visit ai.gov.


Posted in Ai | Comments Off on Artificial Intelligence (AI) – United States Department of …

Artificial Intelligence – IBM

Posted: at 5:13 pm

These are technical guides for non-technical people. Whether you've found yourself in need of knowing AI or have always been curious to learn more, this will teach you enough to dive deeper into the vast and deep AI ocean. The purpose of these explanations is to succinctly break down complicated topics without relying on technical jargon.

Any system capable of simulating human intelligence and thought processes is said to have Artificial Intelligence (AI).

Science fiction has done a fantastic job of warning us about what's to come once machines are able to think as well as humans. Fortunately, the AIs often depicted in movies are far more advanced than what technology is capable of today (or any time soon, for that matter).

While these aspirational systems, called artificial general intelligence, are far off, there's a lot of deserved buzz around artificial narrow intelligence, or weak AI. Narrow AI focuses on one narrow task of human intelligence, and within it there are two branches: rule-based AI and example-based AI. The former involves giving the machine rules to follow, while the latter involves giving it examples to learn from.

AI takes many forms, like machine learning, computer vision, natural language processing, robotics, etc. Consequently, the term AI is increasingly used as shorthand to describe any machines that mimic our cognitive functions such as learning and problem solving.

The safest working definition is the study of making systems capable of simulating human intelligence and thought processes, which comes in many forms.

Note: This guide makes comparisons to human intelligence to make it easier to understand the basic concepts of machine cognition. Organic thought is obviously different from artificial thought, both technically and philosophically, but philosophy is not the purpose of this document. Our goal is to provide the most digestible and practical explanation of AI.

AI can achieve higher quality outcomes faster than humanly possible.

Today, AI is most often used to recognize patterns, make predictions, and provide insights previously out of reach due to the sheer amount of available data. It's able to do this because, unlike traditional computer technologies, AI is able to learn from examples as opposed to being explicitly programmed to execute specific instructions.

These systems are meant to augment our own intelligence and maximize our confidence. In a growing number of fields, AI is serving as a companion for professionals to enhance performance and reduce the time required to become an expert. It will aid in the pursuit of knowledge, to further our expertise, and to improve the human condition.

AI is a powerful toolbox that has many applications in domains far and wide. The types of problems that the AI toolbox is best equipped to solve can be split into six core intents, as described on IBM's Watson site:

Some of the most common tasks AI performs and their corresponding subfields include:

It depends on several factors. Each of the AI tasks mentioned has its own unique implementation, but they can be boiled down to roughly two approaches: specifying the rules that solve the problem versus giving the machine examples so it can find the pattern on its own.

The rules-based approach uses algorithms: sequences of unambiguous instructions used by computers to solve problems. An algorithm tells a computer precisely what steps to take to solve a problem or reach a goal. The chosen algorithm(s) determine how the AI will think about surfacing insights to address your problem space. Different algorithms have different goals, strengths, and weaknesses. Choosing the right fit depends on your desired outcome and the nuances of the process.

[Figure: Algorithm for repairing a broken lamp]

[Figure: Algorithm for troubleshooting a non-functioning lamp]
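The lamp flowcharts above can be sketched as a rules-based program. This is a hypothetical illustration in Python (the function, its parameters and the repair steps are invented, not taken from IBM's guide): every branch is an unambiguous instruction written out in advance by a human.

```python
def troubleshoot_lamp(plugged_in: bool, bulb_burned_out: bool) -> str:
    """A rules-based 'algorithm': explicit steps the machine follows verbatim."""
    if not plugged_in:           # rule 1: check the power first
        return "plug in the lamp"
    if bulb_burned_out:          # rule 2: then check the bulb
        return "replace the bulb"
    return "repair the lamp"     # rule 3: anything else needs a repair

print(troubleshoot_lamp(plugged_in=False, bulb_burned_out=False))  # plug in the lamp
```

The machine never learns here; if a case the programmer didn't anticipate comes up, the program simply cannot handle it.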

The examples-based approach uses data to create models.

This data can take many forms: music, videos, weather conditions, user profiles, system logs, etc. Models are the result of training an AI on data to find patterns. This is akin to studying before a big exam: you started with little to no understanding, so you ingested a bunch of study material so that you could go out into the world ready to apply your new knowledge. This way of problem solving is largely made possible by AI's subfield, machine learning.

This is helpful in cases where specifying rigid rules (i.e., writing algorithms) is hard or impractical, e.g., in stock trading, identifying cancer, predicting which video a user wants to see next, etc.
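By contrast, the examples-based approach can be sketched in a few lines: hand the machine labelled examples and let it find the pattern itself. Below is a deliberately tiny nearest-neighbour "model" in plain Python; the study-hours data is invented purely for illustration:

```python
def train(examples):
    """'Training' at its most trivial: the model memorises the labelled examples."""
    return list(examples)

def predict(model, x):
    """Label a new input with the label of the closest remembered example."""
    nearest = min(model, key=lambda example: abs(example[0] - x))
    return nearest[1]

# Invented data: (hours studied, exam outcome)
model = train([(1, "fail"), (2, "fail"), (8, "pass"), (9, "pass")])
print(predict(model, 7))  # prints "pass": 7 hours is closest to the passing examples
```

Real machine learning replaces "memorise everything" with compressed statistical representations, but the workflow is the same: data in, model out, predictions from the model.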

Some helpful people at MIT created a flowchart that guides you through whether or not the thing you're looking at is, in fact, AI.



Leveraging AI to relieve physician stress in turbulent times – Healthcare IT News

Posted: at 5:13 pm

During an overnight shift, a critical care physician treats a pulmonary embolism patient. Several days later, he is asked why he didn't note in the chart that the patient also suffered from acute respiratory failure. His response: "I was busy saving a patient's life at 3 a.m., taking care of all the things that need to be done, such as oxygen intubation and positive pressure ventilation. While I was aware of the patient's breathing problems, I was not really worried about missing a diagnosis like hypoxemia in my clinical notes."

With such scenarios considered typical, it's not surprising that more than 40% of U.S. physicians are burned out and that bureaucratic tasks top the list of factors contributing to this unrelenting stress, according to a Medscape report.[1] In addition to taking an emotional toll, burnout negatively affects finances, with each case of burnout costing healthcare organizations (HCOs) between $500,000 and $1 million, according to the American Medical Association.[2]

"Stress has become a serious issue for physicians in recent years," said Robert Budman, MD, CMIO, Nuance Communications. "First, physicians have to navigate how to get their clinical work done in a busy day. On top of that, there are administrative burdens placed on them by the government, insurance plans and employers. And then, there's simply the crush that they are feeling with their workload being exacerbated by COVID-19."

Getting down to root causes

HCOs can no longer turn a deaf ear to the problem. "Having a way to deal with stress and promoting wellness is every organization's responsibility," he noted. While HCOs can help workers short-term by hosting massage sessions and ice cream socials, they can address the root causes of stress in the long run by zeroing in on operations and workflow.

With some artificial intelligence (AI) technologies, HCOs can alleviate worker stress by anticipating nursing or ED staffing needs or predicting the turnover of beds. "These AI technologies help with the nuts and bolts of running a healthcare business," Budman said.

HCOs can also use AI to help address clinical documentation frustrations. Physicians often fail to note all secondary diagnoses, chronic conditions and comorbidities while treating patients, simply because the time required to comprehensively document at the point of care is overwhelming.

Missed diagnoses, however, can result in decreased reimbursement or set off a stressful series of events. For example, when a diagnosis is missed, physicians are often queried several days or weeks later and then forced to go back into the EMR and review what happened in a particular case to update the clinical documentation.

CAPD provides substantial support

A workflow-integrated, AI-driven computer-assisted physician documentation (CAPD) system enables physicians to focus on patient care by providing unobtrusive guidance as to what information needs to be included to ensure clinical documentation integrity.

With this AI technology, it's possible to address the documentation edicts emanating from governmental bodies and insurance companies, Budman said. Physicians simply receive AI-generated advice that enables them to produce clinical notes at the point of care very quickly, avoiding the stress involved with having to do so downstream.

In addition, AI-driven CAPD makes it possible to capture the severity of illness, ensuring that providers receive the right reimbursement for all inpatient and outpatient care delivered. "If a doctor sees a patient for diabetes and congestive heart failure and forgets the patient's chronic renal failure or severe debilitating arthritis, that is going to affect the accuracy of the medical record and the billing," Budman pointed out.

"Having clinically relevant expert advice right there at their fingertips and being able to add it to the note in less than 30 seconds can help physicians produce the documentation needed to receive proper reimbursement, while also improving workflow and alleviating stress," he said. "Eliminating extra tasks like reviewing queries in the in-basket or dealing with a full email inbox days later is always a win for providers."

To learn more about how AI can reduce physician burnout through automation, click here for information.

About Nuance Communications, Inc.

Nuance Communications (Nuance) is a technology pioneer with market leadership in conversational AI and ambient intelligence. A full-service partner trusted by 77 percent of U.S. hospitals and 85 percent of the Fortune 100 companies worldwide, Nuance creates intuitive solutions that amplify people's ability to help others.

[1]. Kane, L. 2021. Death by 1,000 cuts: Medscape National Physician Burnout & Suicide Report 2021. Jan. 22. https://www.medscape.com/slideshow/2021-lifestyle-burnout-6013456.

[2]. Berg, S. 2018. How much is physician burnout costing your organization? American Medical Association. Oct. 11. https://www.ama-assn.org/practice-management/physician-health/how-much-physician-burnout-costing-your-organization.



Are we witnessing the dawn of post-theory science? – The Guardian

Posted: at 5:13 pm

Isaac Newton apocryphally discovered his second law (the one about gravity) after an apple fell on his head. Much experimentation and data analysis later, he realised there was a fundamental relationship between force, mass and acceleration. He formulated a theory to describe that relationship, one that could be expressed as an equation, F = ma, and used it to predict the behaviour of objects other than apples. His predictions turned out to be right (if not always precise enough for those who came later).

Contrast how science is increasingly done today. Facebook's machine learning tools predict your preferences better than any psychologist. AlphaFold, a program built by DeepMind, has produced the most accurate predictions yet of protein structures based on the amino acids they contain. Both are completely silent on why they work: why you prefer this or that information; why this sequence generates that structure.

You can't lift a curtain and peer into the mechanism. They offer up no explanation, no set of rules for converting this into that; no theory, in a word. They just work, and do so well. We witness the social effects of Facebook's predictions daily. AlphaFold has yet to make its impact felt, but many are convinced it will change medicine.

Somewhere between Newton and Mark Zuckerberg, theory took a back seat. In 2008, Chris Anderson, the then editor-in-chief of Wired magazine, predicted its demise. So much data had accumulated, he argued, and computers were already so much better than us at finding relationships within it, that our theories were being exposed for what they were: oversimplifications of reality. Soon, the old scientific method (hypothesise, predict, test) would be relegated to the dustbin of history. We'd stop looking for the causes of things and be satisfied with correlations.

With the benefit of hindsight, we can say that what Anderson saw is true (he wasn't alone). The complexity that this wealth of data has revealed to us cannot be captured by theory as traditionally understood. "We have leapfrogged over our ability to even write the theories that are going to be useful for description," says computational neuroscientist Peter Dayan, director of the Max Planck Institute for Biological Cybernetics in Tübingen, Germany. "We don't even know what they would look like."

But Anderson's prediction of the end of theory looks to have been premature, or maybe his thesis was itself an oversimplification. There are several reasons why theory refuses to die, despite the successes of such theory-free prediction engines as Facebook and AlphaFold. All are illuminating, because they force us to ask: what's the best way to acquire knowledge, and where does science go from here?

The first reason is that we've realised that artificial intelligences (AIs), particularly a form of machine learning called neural networks, which learn from data without having to be fed explicit instructions, are themselves fallible. Think of the prejudice that has been documented in Google's search engine and Amazon's hiring tools.

The second is that humans turn out to be deeply uncomfortable with theory-free science. We don't like dealing with a black box: we want to know why.

And third, there may still be plenty of theory of the traditional kind (that is, graspable by humans) that usefully explains much but has yet to be uncovered.

So theory isn't dead yet, but it is changing, perhaps beyond recognition. "The theories that make sense when you have huge amounts of data look quite different from those that make sense when you have small amounts," says Tom Griffiths, a psychologist at Princeton University.

Griffiths has been using neural nets to help him improve on existing theories in his domain, which is human decision-making. A popular theory of how people make decisions when economic risk is involved is prospect theory, which was formulated by behavioural economists Daniel Kahneman and Amos Tversky in the 1970s (it later won Kahneman a Nobel prize). The idea at its core is that people are sometimes, but not always, rational.

In Science last June, Griffiths's group described how they trained a neural net on a vast dataset of decisions people took in 10,000 risky choice scenarios, then compared how accurately it predicted further decisions with respect to prospect theory. They found that prospect theory did pretty well, but the neural net showed its worth in highlighting where the theory broke down, that is, where its predictions failed.

These counter-examples were highly informative, Griffiths says, because they revealed more of the complexity that exists in real life. For example, humans are constantly weighing up probabilities based on incoming information, as prospect theory describes. But when there are too many competing probabilities for the brain to compute, they might switch to a different strategy, being guided by a rule of thumb, say, and a stockbroker's rule of thumb might not be the same as that of a teenage bitcoin trader, since it is drawn from different experiences.

"We're basically using the machine learning system to identify those cases where we're seeing something that's inconsistent with our theory," Griffiths says. The bigger the dataset, the more inconsistencies the AI learns. The end result is not a theory in the traditional sense of a precise claim about how people make decisions, but a set of claims that is subject to certain constraints. A way to picture it might be as a branching tree of "if... then"-type rules, which is difficult to describe mathematically, let alone in words.
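The workflow Griffiths describes can be caricatured in a few lines of Python: score a theory's predictions against a trained model's predictions and keep the cases where they disagree. Everything below (the risk values and both stand-in "models") is invented to illustrate the idea, not taken from the Science paper:

```python
def theory_predicts(risk):
    """Stand-in for a hand-built theory: reject any gamble riskier than 0.5."""
    return "reject" if risk > 0.5 else "accept"

def model_predicts(risk):
    """Stand-in for a trained neural net that captured an extra wrinkle in the data."""
    if risk > 0.9:          # an exception the simple theory misses
        return "accept"
    return "reject" if risk > 0.5 else "accept"

cases = [0.1, 0.4, 0.6, 0.95]
counter_examples = [c for c in cases if theory_predicts(c) != model_predicts(c)]
print(counter_examples)  # prints [0.95]: the case worth a theorist's attention
```

The disagreements, not the agreements, are the scientific payload: each one marks a place where the theory needs a constraint or an exception.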

What the Princeton psychologists are discovering is still just about explainable, by extension from existing theories. But as they reveal more and more complexity, it will become less so the logical culmination of that process being the theory-free predictive engines embodied by Facebook or AlphaFold.

Some scientists are comfortable with that, even eager for it. When voice recognition software pioneer Frederick Jelinek said, "Every time I fire a linguist, the performance of the speech recogniser goes up," he meant that theory was holding back progress, and that was in the 1980s.

Or take protein structures. A protein's function is largely determined by its structure, so if you want to design a drug that blocks or enhances a given protein's action, you need to know its structure. AlphaFold was trained on structures that were derived experimentally, using techniques such as X-ray crystallography, and at the moment its predictions are considered more reliable for proteins where there is some experimental data available than for those where there is none. But its reliability is improving all the time, says Janet Thornton, former director of the EMBL European Bioinformatics Institute (EMBL-EBI) near Cambridge, and it isn't the lack of a theory that will stop drug designers using it. "What AlphaFold does is also discovery," she says, "and it will only improve our understanding of life and therapeutics."

Others are distinctly less comfortable with where science is heading. Critics point out, for example, that neural nets can throw up spurious correlations, especially if the datasets they are trained on are small. And all datasets are biased, because scientists don't collect data evenly or neutrally, but always with certain hypotheses or assumptions in mind, assumptions that worked their way damagingly into Google's and Amazon's AIs. As philosopher of science Sabina Leonelli of the University of Exeter explains: "The data landscape we're using is incredibly skewed."

But while these problems certainly exist, Dayan doesn't think they're insurmountable. He points out that humans are biased too and, unlike AIs, in ways that are very hard to interrogate or correct. Ultimately, if a theory produces less reliable predictions than an AI, it will be hard to argue that the machine is the more biased of the two.

A tougher obstacle to the new science may be our human need to explain the world to talk in terms of cause and effect. In 2019, neuroscientists Bingni Brunton and Michael Beyeler of the University of Washington, Seattle, wrote that this need for interpretability may have prevented scientists from making novel insights about the brain, of the kind that only emerges from large datasets. But they also sympathised. If those insights are to be translated into useful things such as drugs and devices, they wrote, it is imperative that computational models yield insights that are explainable to, and trusted by, clinicians, end-users and industry.

Explainable AI, which addresses how to bridge the interpretability gap, has become a hot topic. But that gap is only set to widen and we might instead be faced with a trade-off: how much predictability are we willing to give up for interpretability?

Sumit Chopra, an AI scientist who thinks about the application of machine learning to healthcare at New York University, gives the example of an MRI image. It takes a lot of raw data (and hence scanning time) to produce such an image, which isn't necessarily the best use of that data if your goal is to accurately detect, say, cancer. You could train an AI to identify what smaller portion of the raw data is sufficient to produce an accurate diagnosis, as validated by other methods, and indeed Chopra's group has done so. But radiologists and patients remain wedded to the image. "We humans are more comfortable with a 2D image that our eyes can interpret," he says.

The final objection to post-theory science is that there is likely to be useful old-style theory (that is, generalisations extracted from discrete examples) that remains to be discovered, and only humans can do that because it requires intuition. In other words, it requires a kind of instinctive homing in on those properties of the examples that are relevant to the general rule. One reason we consider Newton brilliant is that in order to come up with his second law he had to ignore some data. He had to imagine, for example, that things were falling in a vacuum, free of the interfering effects of air resistance.

In Nature last month, mathematician Christian Stump, of Ruhr University Bochum in Germany, called this intuitive step "the core of the creative process". But the reason he was writing about it was to say that, for the first time, an AI had pulled it off: DeepMind had built a machine-learning program that had prompted mathematicians towards new insights, new generalisations, in the mathematics of knots.

In 2022, therefore, there is almost no stage of the scientific process where AI hasn't left its footprint. And the more we draw it into our quest for knowledge, the more it changes that quest. We'll have to learn to live with that, but we can reassure ourselves about one thing: we're still asking the questions. As Pablo Picasso put it in the 1960s, "computers are useless. They can only give you answers."



How AI Can Enable and Support Both Caregivers and Patients – Entrepreneur

Posted: at 5:13 pm

Opinions expressed by Entrepreneur contributors are their own.

Caregivers in healthcare deal with a considerable amount of complexity: In addition to a high volume of patients, they need to participate in and/or coordinate communication between team members and make sure all information is up to date. Their work is fast-paced, and situations can change in seconds.

One way to make these efforts more manageable is by using artificial intelligence. In healthcare, as in all fields, the job of AI is not to replace humans, but rather to perform repetitive, tedious and time-consuming tasks so that people don't have to, freeing time for tasks that require a personal touch. Human judgment should remain the ultimate decision-maker.

Algorithms and software can help caregivers make predictions, analyze data and simplify processes. In my experience, if one is looking at a list of 50 repetitive tasks, AI can eliminate 45 of them, handing people extra hours for the five most pivotal. Personal care is scarce and valuable, but essential: the more technology can free up this time, the more focus can be on those precious tasks that technology alone can't handle.

Related: The Future of Healthcare Is in the Cloud

These prioritization benefits move down to patients. Efficient use of AI can reduce the costs of healthcare and the time required for treatment, not least because when routines are made more efficient, procedures can be completed faster, which ideally leads to lower expenditures.

AI also supports caregivers in making higher-quality decisions. For these professionals, it can be hard to find a starting point in interpreting data. In MRI imaging, for example, looking through thousands of images is inherently time-consuming and can lead to information being overlooked or misinterpreted. Artificial intelligence can help save time by bringing up the most relevant images, making care more efficient and accurate.

Algorithms can also be used for predicting: Software can take the current state of a situation, learn from patterns and make projections, which can be deeply useful. At GE, we use machine learning to forecast census for 14 days at the hospitals we serve, and look at every bed, unit and service in the process. This allows us to make accurate guesses as to conditions for each unit, over each hour, and for two weeks. Such forecasts can predict which parts of a facility will become hotspots, and teams can then determine which caregivers to transfer to each. They also help hospitals accept transfer patients more efficiently: If they receive a call asking whether they can accept an admission in two days, caregivers can give a confident answer, with forecasts in front of them.

Related: When Next-Generation Caregivers Meet New Technology

Were still in the beginning stages of AI software applied in healthcare, and it needs to be fine-tuned, but users also need to make sure theyre employing the technology correctly.

It's up to them to put software in context and use it in a way that's helpful. AI isn't a crutch to be relied upon, but a tool to be wielded. A nurse's job isn't to sit there all day looking at forecasts, and a staffing coordinator doesn't wait all day for staffing forecasts. Whether algorithms are applied to worker allocation or radiology, they must be used in context in order to be helpful.

Think of these systems as akin to software in a phone, which likely includes a compass. When you're looking at the compass directly, it's of marginal use, but when integrated into a map-navigation app, it's incredibly helpful. There's an algorithm, and then there's the larger app it's contained in. The same goes for AI in healthcare: it has to be used in the right context to reach its full potential.

Link:

How AI Can Enable and Support Both Caregivers and Patients - Entrepreneur


Observability: How AI will enhance the world of monitoring and management – VentureBeat

Posted: at 5:13 pm


The more the enterprise transitions from a mere digital organization to a fully intelligent one, the more data executives will come to realize that traditional monitoring and management of complex systems and processes are not enough.

What's needed is a new, more expansive form of oversight, which has lately come to be known as data observability.

The distinction between observability and monitoring is subtle but significant. As VentureBeat writer John Paul Titlow explained in a recent piece, monitoring allows technicians to view past and current data environments according to predefined metrics or logs. Observability, on the other hand, provides insight into why systems are changing over time, and may detect conditions that have not previously been considered. In short, monitoring tells you what is happening, while observability tells you why it's happening.
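The contrast can be sketched in a few lines. This is an illustrative toy, not code from any vendor mentioned here: `monitor` checks a predefined threshold, while `observe` flags values that are statistically unusual relative to the system's own history, even when no fixed limit has been crossed.

```python
# Monitoring vs. observability in miniature (illustrative only).
import statistics

def monitor(value, threshold=90.0):
    """Classic monitoring: alert when a predefined metric limit is crossed."""
    return value > threshold

def observe(value, history):
    """Observability-style check: alert when a value deviates strongly
    from the historical distribution, with no predefined threshold."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(value - mean) > 3 * stdev

history = [50, 52, 49, 51, 50, 48, 53, 50]   # normal CPU% readings for this host
# A 70% reading is well under the fixed threshold but anomalous here:
within_threshold = monitor(70.0)             # no alert from monitoring
anomalous = observe(70.0, history)           # flagged by the anomaly check
```

The point of the sketch is the asymmetry: monitoring answers "did we cross the line we drew in advance?", while the second check can surface conditions nobody thought to define a line for.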

To fully embrace observability, the enterprise must engage it in three different ways. First, AI must fully permeate IT operations, since this is the only way to rapidly and reliably detect patterns and identify root causes of impaired performance. Second, data must be standardized across the ecosystem to avoid mismatch, duplication, and other factors that can skew results. And finally, observability must shift into the cloud, since that is where much of the enterprise data environment is transitioning as well.

Observability is based on control theory, according to Richard Whitehead, chief evangelist at observability-platform developer Moogsoft. The idea is that with enough quality data at their disposal, AI-empowered technicians can observe how one system reacts to another, or at the very least infer the state of a system based on its inputs and outputs.

The problem is that observability is viewed in different contexts between, say, DevOps and IT. While IT has worked fairly well by linking application performance monitoring (APM) with infrastructure performance monitoring (IPM), emerging DevOps models, with their rapid change rates, are chafing under the slow pace of data ingestion. By unleashing AI on granular data feeds, however, both IT and DevOps will be able to quickly discern the hidden patterns that characterize quickly evolving data environments.

This means observability is one of the central functions in emerging AIOps and MLOps platforms that promise to push data systems and applications management into hyperdrive. New Relic recently updated its New Relic One observability application to incorporate MLOps tools that enable self-retraining as soon as alerts are received. This should be particularly handy for ML and AI training, since these models tend to deteriorate over time. Data observability helps account for changing real-world conditions that affect critical metrics like skew and staleness of data, as well as overall model precision and performance, regardless of whether these changes take place in seconds or over days, weeks, or years.
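New Relic's actual APIs aren't shown in the article. As a hedged sketch of the underlying idea, a drift check can compare live feature statistics against training-time statistics and flag when retraining is warranted; the function name and thresholds below are hypothetical.

```python
# Hypothetical sketch of the data-drift check that can trigger retraining.
import statistics

def drift_detected(train_sample, live_sample, z_limit=3.0):
    """Flag drift when the live feature mean moves more than z_limit
    standard errors away from the training-time mean."""
    mu = statistics.mean(train_sample)
    sd = statistics.stdev(train_sample)
    live_mu = statistics.mean(live_sample)
    stderr = sd / len(live_sample) ** 0.5
    return abs(live_mu - mu) > z_limit * stderr

train = [10.0, 11.0, 9.0, 10.5, 9.5, 10.0, 10.2, 9.8]   # feature at training time
stable = [10.1, 9.9, 10.0, 10.3]                         # live data, unchanged world
shifted = [14.0, 15.2, 14.8, 15.0]                       # real-world conditions moved

retrain_needed = drift_detected(train, shifted)   # drift: kick off retraining
still_fresh = not drift_detected(train, stable)   # no drift: model still valid
```

Production platforms track many such statistics per feature (plus prediction skew and latency), but each check reduces to the same comparison of live data against a training-time baseline.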

Over the next few years, it is reasonable to expect AI and observability to usher in a new era of hyperautomation, according to Douglas Toombs, Gartner's vice president of research. In an interview with RT Insights, he noted that a fully realized AIOps environment is key to Gartner's long-predicted Just-in-Time Infrastructure, in which datacenter, colocation, edge, and other resources can be compiled in response to business needs within a cohesive but broadly distributed data ecosystem.

In a way, observability is AI transforming the parameters of monitoring and management in the same way it changes other aspects of the digital enterprise: by making it more inclusive, more intuitive, and more self-operational. Whether the task is charting consumer trends, predicting the weather, or overseeing the flow of data, AI's job is to provide granular insight into complex systems and chart courses of action based on those analyses, some of which it can implement on its own and some of which must be approved by an administrator.

Observability, then, is yet another way in which AI will take on the mundane tasks that humans do today, creating not just a faster and more responsive data environment, but one that is far more attuned to the real environments it is attempting to interpret digitally.

Here is the original post:

Observability: How AI will enhance the world of monitoring and management - VentureBeat


Living on the Edge (AI) – The Times of India Blog


We are living in a hyperconnected world, where every device is connected and generating data at an unprecedented rate. Whether we look at smartwatches, smartphones, smart cars, smart factories, smart homes, or smart cities, the enormous data generated is collected at the source and processed, and smart decisions are required to be executed instantly. This becomes possible when two powerful technologies, edge computing and artificial intelligence (AI), come together. So, to borrow a line from the acclaimed rock band Aerosmith, in these interesting times we are living on the edge.

Edge AI is the amalgamation of two incredible technologies: edge computing, which is all about bringing computation and data closer to end users to improve efficiency, and AI, which provides data-driven intelligence. In the digital world, machines learn from large amounts of data collected over a period of time, much like human minds, amassing knowledge and learning through real-world experience. The more data we use to train our models, the more advanced and intelligent these machines become. Once trained in central storage (e.g., the cloud), the models can be deployed at the edge to make quicker decisions. Furthermore, Edge AI devices are capable of running deep learning models and complex algorithms on their own.
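The train-centrally, deploy-at-the-edge pattern described above can be sketched with a toy linear model. Real pipelines use frameworks such as TensorFlow Lite or ONNX Runtime; every function name here is a hypothetical stand-in for illustration only.

```python
# Toy sketch: train in the "cloud", shrink the model, infer at the "edge".

def train_in_cloud(samples):
    """'Cloud' side: fit y = w*x by least squares on pooled data."""
    sxy = sum(x * y for x, y in samples)
    sxx = sum(x * x for x, _ in samples)
    return sxy / sxx                      # single learned weight

def quantize(w, scale=100):
    """Shrink the model for a constrained edge device (integer weights)."""
    return round(w * scale)

def edge_infer(qw, x, scale=100):
    """Edge side: instant local decision, no round trip to the cloud."""
    return qw * x / scale

data = [(1, 2.0), (2, 4.1), (3, 5.9), (4, 8.0)]   # y is roughly 2x
qw = quantize(train_in_cloud(data))               # deploy this small artifact
prediction = edge_infer(qw, 10)                   # runs entirely on-device
```

The design point is the split: the expensive learning step happens where data and compute are pooled, while the deployed artifact is small and fast enough to answer locally.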

Why Edge AI is important in a connected ecosystem:

With advancements in technology, the democratization of the internet, and advanced mobile networks like 4G/5G, the entire world is becoming hyperconnected, and the cloud is playing a key role in providing planet-scale infrastructure (compute, storage, and network). However, the fallacies of distributed computing cannot be disregarded. Hence, Edge AI is no longer optional and now, more than ever, it must be brought to the forefront:

Edge AI in Practice (most relevant archetypes):

Side Effects of Edge AI:

Any emerging technology takes time to scale the maturity curve and has its side effects as well, and Edge AI is no exception. There are a few challenges and concerns that should be addressed, such as:

To conclude, harnessing Edge AI's true potential will unleash a great opportunity for numerous industries to thrive, and digital transformation is just the tip of the iceberg for a larger prism of digital innovation. Moreover, the ongoing pandemic is forcing every industry to evolve and develop smart solutions, enhance remote capabilities (working, asset maintenance, learning, assistance, etc.), platform automation, and much more. The cloud continues to be the platform to leverage, and just as the cloud has created a significant impact in today's world, so will Edge AI in the future. It will make the cloud a fascinating place to be.


Here is the original post:

Living on the Edge (AI) - The Times of India Blog


How the AI Revolution Impacted Chess (1/2) – Chessbase News


The wave of neural network engines that AlphaZero inspired has impacted chess preparation, opening theory, and middlegame concepts. We can see this impact most clearly at the elite level, because top grandmasters prepare openings and get ideas by working with modern engines. For instance, Carlsen cited AlphaZero as a source of inspiration for his remarkable play in 2019.

Neural network engines like AlphaZero learn from experience by developing patterns through numerous games against themselves (known as self-play reinforcement learning) and understanding which ideas work well in different types of positions. This pattern recognition ability makes them especially strong in openings and strategic middlegames, where long-term factors must be assessed accurately. In these areas of chess, their experience allows them to steer the game towards positions that provide relatively high probabilities of winning.
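AlphaZero's real training combines deep networks with Monte Carlo tree search; the following is only a schematic of the self-play loop in miniature, with an invented two-choice "game" standing in for chess. All names and probabilities are assumptions for illustration.

```python
# Schematic self-play loop: the engine plays itself, observes outcomes,
# and updates its estimates of which plan wins more often.
import random

random.seed(0)
values = {"aggressive": 0.5, "solid": 0.5}   # prior estimates of winning chances
counts = {"aggressive": 0, "solid": 0}

def play_game(move):
    """Stand-in for a full game of self-play: in this toy world the
    'aggressive' plan happens to win more often."""
    return 1.0 if random.random() < (0.7 if move == "aggressive" else 0.4) else 0.0

for _ in range(2000):                        # self-play training games
    if random.random() < 0.2:                # occasionally explore other lines
        move = random.choice(list(values))
    else:                                    # otherwise exploit current knowledge
        move = max(values, key=values.get)
    result = play_game(move)                 # observe the outcome...
    counts[move] += 1
    values[move] += (result - values[move]) / counts[move]  # ...and update the estimate

preferred = max(values, key=values.get)      # the pattern learned through experience
```

The real algorithm learns positional evaluations and move probabilities rather than a two-entry table, but the feedback loop is the same: play, observe, update, repeat.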

A table of four selected engines is provided below.

Chess Engines

Stockfish 8 (Classical): Relies on hard-wired rules and brute-force calculation of variations.

AlphaZero (Neural network): DeepMind's revolutionary AI engine used self-play reinforcement learning to train a neural network.

Leela Chess Zero (Lc0) (Neural network): Launched in 2018 as an open-source project to follow in the footsteps of AlphaZero.

Stockfish 12 and newer versions (Hybrid): Utilizes classical searching algorithms as well as a neural network.

The hybrid Stockfish engine aims to get the best of both types of AI: the calculation speed of classical engines and the strategic understanding of neural networks. Practice has shown that this approach is a very effective one because it consistently evaluates all types of positions accurately, from strategic middlegames to messy complications.
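The division of labor in a hybrid engine can be sketched abstractly: a classical alpha-beta search does the calculating, and a pluggable evaluation function scores the positions at the leaves (standing in here for the neural network; Stockfish's real NNUE is far more elaborate). The tiny game tree below is invented for illustration.

```python
# Classical alpha-beta search with a pluggable leaf evaluator.
def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    if depth == 0 or node not in children:
        return evaluate(node)            # "neural network" scores the leaf
    if maximizing:
        best = float("-inf")
        for child in children[node]:
            best = max(best, alphabeta(child, depth - 1, alpha, beta,
                                       False, children, evaluate))
            alpha = max(alpha, best)
            if beta <= alpha:
                break                    # brute-force side: prune refuted lines
        return best
    best = float("inf")
    for child in children[node]:
        best = min(best, alphabeta(child, depth - 1, alpha, beta,
                                   True, children, evaluate))
        beta = min(beta, best)
        if beta <= alpha:
            break
    return best

# Invented two-ply "position" tree with leaf scores from the evaluator.
children = {"start": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
leaf_scores = {"a1": 3, "a2": 5, "b1": -2, "b2": 9}

best_score = alphabeta("start", 2, float("-inf"), float("inf"),
                       True, children, leaf_scores.get)
```

Swapping a hand-written evaluation for a trained network leaves the search untouched, which is what lets a hybrid engine combine calculation speed with learned strategic judgment.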

These two articles introduce a few concepts that the newer (i.e., neural network and hybrid) engines have influenced. Please note that the game annotations are based on work I did for my book, The AI Revolution in Chess, where I analyzed the impact of AI engines.

Clash of Styles

One of the biggest differences in understanding between older and newer engines can be found in strategic middlegames which involve long-term improvements by one side. As shown in many of the AlphaZero vs. Stockfish games, the older engines sometimes fail to see dangers due to their limited foresight. Relying solely on move-by-move calculation is not always enough to solve problems against the strongest opponents, because neural network engines excel at slowly building up pressure, making small improvements to optimize their winning chances, before gradually preparing the decisive breakthrough.

In the following game, the older engines believe that the opening outcome is quite satisfactory for Black, while the newer ones strongly disagree. Grischuk sides with the opinion of the neural network engines and understands that White's long-term initiative is both practically and objectively extremely difficult for Black to handle.

Opening Developments

Perhaps the most popularized idea of the neural network engines is the h-pawn advance, where White pushes h4-h5-h6 (or Black pushes h5-h4-h3) to cramp the opponent's kingside by taking away some key squares. The idea itself is not at all new, but the newer engines have a much greater appreciation for it than the older ones. This has led to many new ideas in openings such as the Grünfeld, where the fianchettoed bishop on g7 can be targeted by an h-pawn attack. Tying back to the theme of long-term improvements, neural network engines understand the problems that such an advance creates for the opponent in the long run.

Our next game surveys a cutting-edge approach against the Grünfeld. Its sharp rise in popularity from 2019 onwards coincides with the widespread use of neural network engines at the top level.

The clash of chess styles between classical and neural network AI is fascinating to analyze. Many examples on this topic can be found in the famous AlphaZero vs. Stockfish games and in openings where the engines disagree on the evaluation, such as the Grischuk vs. Nakamura game. Their disagreement has led to major advancements in all popular openings, as old lines are revised and new lines supported by modern engines are introduced into high-level practice.

Part 2 will examine another AI-inspired opening and the modern battle between two players armed with ideas from neural network engines.

The rest is here:

How the AI Revolution Impacted Chess (1/2) - Chessbase News


Flagging Down A Roaming AI Self-Driving Car Robo-Taxi Might Not Be In The Cards – Forbes


Hailing a ride in an era of AI-based self-driving cars.

Not so long ago, it seemed that the hailing of a cab required long arms and a capacity to wave frantically to catch the eye of the taxi driver.

You would stand at a curb and keep your cool while taxi after taxi seemed to entirely ignore your frantic motions. It was hard to discern why the cabs weren't pulling over to pick you up. They were showing as empty and therefore ought to have been avidly seeking a potential fare. Sometimes you would consider that perhaps they didn't like the particular manner in which you waved your arms.

Maybe they thought you were overly excited, and this was a worrisome sign to the taxi driver. Or they didn't like the look of your seemingly crude summoning tactic for getting a pickup. You see, there were lots of other potential cab seekers that had a more subtle approach. Some all-knowing people would nonchalantly do a single wave, and that was all it took to get a cab to pull over. Others would merely nod their head or make a quick tip of their hat, as though these were secret signals in a private baseball game between a catcher and a pitcher.

Things could get really competitive at certain times of the day.

If it was rush hour, then all bets were off. There were tons of people fervently attempting to get cabs, all of them at the same time and all across the whole city. You pretty much had to hope for the randomness of the world to come to your aid. When a taxi perchance dropped off a rider at the very spot that you were standing, this gave you the top rights to commandeer the taxi and proclaim that it was yours for the taking.

Many movies and TV shows used to feature a gag whereby a taxi comes up to pick someone up, and then someone else darts into the cab instead. This was more than just a bit of humor. It happened. Quite often. Unless you took to heart the ever-present notion that possession is nine-tenths of the law, a nanosecond of delay getting into a taxi could mean that an interloper would grab it and you would be left standing high and dry.

I remember getting into a cab at the airport and when I gave the hotel address for my stay, the cabbie gave me the most utterly disgusting of glances. He then explained that the hotel was less than a two-minute drive from the airport. His fare would be peanuts. Meanwhile, he had waited in an enormously long cab line while at the airport, and after dropping me at the hotel he would once again have to sit idly in that same darned line. In short, he emphatically told me that I just cost him nearly two hours of his available cab time, for pretty much nothing at all as a fare.

He pleaded with me to get out. The rules for the cabbies at the airport did not allow them to kick out a rider; it had to be the rider who decided to back out of a ride. He told me that he had a family and needed to support them. Get another cab, he exhorted. Just don't force him to give the dinky ride; he even suggested I could simply walk from the airport and enjoy the fresh air, averting the need for a taxi altogether.

Anyway, the point is that even if you believe you had managed to snag a taxi, there was still a chance that it might get loose from your grip. Either the cab driver would not want you, or someone else might try to intervene and take your cab, sometimes offering a whale of a story.

I recall one time that I had just gotten into a hailed cab and a bystander tapped on the window. The person explained that they had been waiting for twenty minutes to get a cab. They had noticed me standing there too, though I apparently had only been waiting about ten minutes. The explanation turned into a morality play that I ought to voluntarily give up the cab since this other person had waited longer than me. It made no difference that the cabbie stopped in front of me. It made no difference that we were both humans. The key was that I had gone outside of my fair turn and had cheated this other waiting rider.

How about that?

On another occasion, I luckily hailed a cab and a person came running up to the vehicle. They offered me ten bucks if I would hand the taxi over to them. They were in a hurry and didn't want to wait for a taxi. The logic was that time is money, as we all know, and so this potential rider was willing to pay me for giving up my cab and presumably giving up my waiting time. An interesting proposition. The taxi driver entered into the dialogue and pointed out that the ten dollars ought to go to the driver, or at least a cut of it ought to.

Hailing a cab while in the rain or snow was the worst.

There you are, standing out in the raw elements. The wind nearly blowing you over. Rain pouring sheets of water onto your head, or perhaps onto your umbrella or raincoat. If it was snow, you stood in the icy cold and kept moving your feet to keep the circulation going. An additional problem was that there seemed to be fewer cabs cruising around and thus the wait time was totally elongated.

In todays world, there is a lot less handwaving hailing going on.

In lieu of wantonly hailing a ride, you usually pull up an app on your smartphone and use a ridesharing network or even a cab-hailing network to get yourself a ride. No need to stand around and try to spot an available roaming vehicle. The computer systems do all that work for you. This is known as e-hailing.

On a digital map displayed on your smartphone screen, you'll see various dots or tiny emoji cars that are moving around in your area. One of them will usually be chosen for you by the computer system, based on factors such as how close the roaming vehicle is, where you want to go, the type of vehicle preference you have, and so on. All you then need to do is wait for the arrival of the assigned vehicle.
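No specific network's matching logic is described here; a hypothetical sketch of the simplest version, picking the nearest available vehicle that satisfies the rider's preference, might look like this (all names and fields invented, and real dispatch systems weigh many more factors such as destination and surge conditions):

```python
# Hypothetical sketch of the e-hailing matching step.
import math

def assign_vehicle(rider_pos, vehicles, preferred_type=None):
    """vehicles: list of dicts with 'id', 'pos' (x, y), and 'type'.
    Returns the id of the closest vehicle matching the preference."""
    candidates = [v for v in vehicles
                  if preferred_type is None or v["type"] == preferred_type]
    if not candidates:
        return None                       # nothing suitable nearby
    return min(candidates,
               key=lambda v: math.dist(rider_pos, v["pos"]))["id"]

fleet = [
    {"id": "cab-1", "pos": (0.0, 5.0), "type": "sedan"},
    {"id": "cab-2", "pos": (1.0, 1.0), "type": "van"},
    {"id": "cab-3", "pos": (2.0, 2.0), "type": "sedan"},
]
nearest = assign_vehicle((0.0, 0.0), fleet)                 # closest overall
nearest_sedan = assign_vehicle((0.0, 0.0), fleet, "sedan")  # filtered by preference
```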

No need to wave at anyone or anything.

That being said, upon the arrival of your assigned vehicle, sometimes you do need to wave or make a motion to ensure that the driver sees you. The map is oftentimes not precisely able to indicate where the passenger is standing. Plus, there might be a multitude of people waiting for lifts, perhaps having all gotten out of a theatre at the same time and now seeking rides home.

There is no doubt that merely requesting a ride online is a lot smoother than having to play the roulette-wheel game of hailing a prospective ride on the street.

Besides the ease of no longer needing to make those waving motions, you also now have a somewhat ironclad guarantee that you will get a ride. In the case of standing around and hailing, you never really knew how long it might take or whether you would ever land a ride. That was the terrible uncertainty of it all. This could be especially on your mind if you were perchance caught in a bad part of town or in rotten weather. Your mind would be frantically praying for an available ride to come along.

Another nifty aspect about using an app to hail a ride is that you know beforehand the nature of the vehicle and the driver. You are usually presented with some info about the car that is coming to pick you up. There is also the name of the driver and their rating. This helps you to know whether the driver is presumably any good at providing rides.

When you hailed a cab at random, it was a wildcard as to what type of driver you might get. Some drivers were cautious and went relatively slowly, taking turns with great aplomb. Other drivers were like racecar drivers, zipping along. They wanted to get you to your destination as fast as possible, meaning that they then could seek to find their next paying fare that much sooner. More fares in a day were the mantra for making any money at this game.

Those that have never hailed a ride via the standing-outside-and-waving method are at times aghast when they discover that this approach still exists. Many believe it was only something that happened during the times of the dinosaurs, and since dinosaurs are extinct, they assume the traditional method of hailing a ride is certainly extinct too.

Well, sit down and prepare yourself for a bit of a shock, conventional hailing still happens.

There are though additional twists and turns.

In many locales, there are byzantine rules about which cabs or taxis can provide those impromptu derived rides. Depending upon various conditions, it could be that only e-hailing is legally allowed, per time of day or where you are in a city or town. Anyone trying to do the street hailing has to be brazen to think that it will work since there are fewer and fewer chances of this being feasible.

Some sneaky riders will try to maximize their chances of getting a ride quickly by doing both the e-hailing and the stand-around techniques in unison.

They pull up the app for an e-hail and see what the wait time is like. They simultaneously stand out in the street and start waving at any seeming potential rides. If the wait time seems long on the e-hail, they will temporarily book it and then wait until the last allowed moment to drop it (before incurring any fees for doing so). During that interval, they will be stridently attempting to catch a ride via the waving method. Whichever approach strikes gold first is the winner in that momentary contest.

As they say, all's fair in love and war.

Since we have been discussing cars and taxis, it makes indubitable sense to consider that the future of such vehicles will consist of self-driving cars. Be aware that there isn't a human driver involved in a true self-driving car. True self-driving cars are driven via an AI driving system. There isn't a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle.

For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.

Here's an intriguing question that is worth pondering: once self-driving cars are acting as robo-taxis and cruising around on our streets to do so, will you be able to hail one by hand, or only via e-hailing?

Before jumping into the details, I'd like to further clarify what is meant when referring to true self-driving cars.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own, without any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don't yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won't be markedly different than driving conventional vehicles, so there's not much new per se to cover about them on this topic (though, as you'll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, the public needs to be forewarned about a disturbing aspect that's been arising lately: despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And Hailing A Robo-Taxi

For Level 4 and Level 5 true self-driving vehicles, there won't be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One aspect to immediately discuss entails the fact that the AI involved in today's AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.

Why this added emphasis about the AI not being sentient?

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to todays AI, despite the undeniable and inarguable fact that no such AI exists as yet.

With that clarification, you can envision that the AI driving system won't natively somehow know about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.

Let's dive into the myriad aspects that come into play on this topic.

Assume, for the sake of discussion, that self-driving cars will eventually be sufficiently able to drive around and at least achieve Level 4 capabilities (this means that there is a defined ODD, or Operational Design Domain, within which the autonomous vehicle is capable of driving).

There is a hefty debate about whether individuals will be able to own and operate self-driving cars or whether only large companies will be able to do so. Part of the logic is that a self-driving car will need to be kept in tiptop shape, or else the AI driving system will not be able to safely provide rides. The assumption is that a company responsible for operating a fleet of self-driving cars is more likely to maintain and keep up the autonomous vehicles than individual owners might be.

I generally disagree with that contention and argue that we will indeed have individual ownership of self-driving cars, for various reasons that I have articulated at this link here.

Putting that whole brouhaha to the side, I think we can generally all agree that there will be self-driving cars offered on a ridesharing or ride-hailing basis, regardless of who the owner is. When you go to use an app to request a ridesharing lift, the odds are that the app will present you with one of two options: you can select a human-driven car, or you can select a self-driving car. Some people will relish using a self-driving car, while others will eschew it and prefer instead to use a human-driven ridesharing car.

Each to their own preference.

Most pundits agree that self-driving cars will be on the go for much of their available traversal time. The nice thing about a self-driving car is that the AI driving system doesn't need any rest, lunch breaks, or even bathroom breaks. The expectation is that self-driving cars will be able to drive around 24x7, except for times when they need to refuel or need some maintenance or fixing up.

For an entity that owns a self-driving car, there is the opportunity to potentially make big bucks from this always-on-the-move capability (and without the labor costs of a human driver). For example, I assert that a person could own a self-driving car, have it take them to the office for a normal workday, and, while they are at work, make the self-driving car available on a ridesharing basis. The person then has the self-driving car take them home after work, and for the rest of the night the self-driving car continues making money by providing more lifts. In short, their self-driving car makes money for them when they otherwise don't need it.

Without getting mired into any messy arguments, the emphasis is that a self-driving car can be a ridesharing or ride-hailing vehicle and provide rides to those making such a request. That seems abundantly clear-cut and inarguable.

The question we are considering herein is the matter of how to request a self-driving car for those that are seeking a lift.

We can already assume that the most likely approach consists of app-based e-hailing.

Either the company operating the self-driving car will provide a dedicated app for this purpose, or it might list the self-driving car on some existing ridesharing network. If it lists via a network, the odds are that a cut of the fare is bound to be required (i.e., a split between the operator of the self-driving car and the network operator). Ergo, the chances are that the operator of the self-driving car would prefer that people use the specialized app and not have to split any fees.

It is a tradeoff of course, as to whether the dedicated app will ensure enough use of the self-driving car versus being listed on a ridesharing network.

Will a self-driving car be occupied at all times while on a ridesharing basis with a passenger inside the vehicle?

Nope.

There will be times during which the self-driving car will be absent of a passenger. It could be that the self-driving car is delivering a package, so there isn't a person inside the autonomous vehicle. Other times, the self-driving car might be making its way to a requested lift and is empty until it reaches the person seeking a ride.

One other possibility is that there aren't any ride requests at the moment, and so the conundrum of what to do with the self-driving car logistically arises. Do you opt to park the self-driving car at some locale and have it wait for a requested ride? That might not be as advantageous as having the self-driving car roam around, in which case it might be in a better place when a request occurs.

The operator of a self-driving car will need to make this balancing-act decision. In some instances, it might be better to park the self-driving car, while in other instances it is more prudent to keep it underway. A variety of factors come into play.

All told, we can seemingly agree that there will be times at which self-driving cars will be roaming empty of any passengers and awaiting a request for a ride. I've suggested that this might become quite prevalent; see my analysis at this link here.

We are now at the moment of truth.

Should a self-driving car that is acting in this robo-taxi manner be able to pick up passengers that might undertake a traditional hailing gesture, or will self-driving cars only be summoned via e-hailing?

My claim is that we potentially could have self-driving cars programmed to handle the streetwise hailing approach.

This probably will not occur at first, though. The mainstay will be the e-hailing avenue. Once that has become firmly established, I believe we will see some self-driving cars adjusted to be responsive to street-level hailing. This will primarily be due to competitive forces that require self-driving car operators to increasingly find ways to outshine their competition.

Now, it could be that the human drivers take the same stance too.

In other words, if self-driving cars start to become commonplace on ridesharing networks, the question naturally comes up about how human drivers will remain competitive. Assuming that a self-driving car is less expensive to use and that it won't have the human foibles of driving, the logical progression is that riders will aim to select a self-driving car over a human-driven car (all else being equal, as it were). A means for a human driver to remain competitive would be to offer something that the self-driving cars aren't offering, namely the traditional street hailing approach.

Lets dig briefly into the complications of having a self-driving car attempt to perform the conventional hailing method.

As mentioned earlier, a person seeking a ride is customarily expected to make a motion that will serve to relatively definitively indicate that they are seeking a ride. This usually consists of waving an arm, along with perhaps looking directly at the targeted cab or taxi, and possibly pointing at the cab too. All of this is intended to catch the attention of the human driver.

Self-driving cars will be outfitted with a variety of sensors, including video cameras, radar, LIDAR, ultrasonic units, thermal imaging, and so on. Via the use of techniques such as Machine Learning (ML) and Deep Learning (DL), the data from those sensors is computationally analyzed and scanned for various patterns.

In theory, image processing of the video cameras' live streams could be used to try to detect a person who seems to be hailing the self-driving car. It would be easiest if the person had some special token or signal known for this purpose, such as a special flag or even just a specific gesture. But this might be a bit much for people to keep with them or have to know, so we'll assume that the traditional waving motion is the preferred method.

Admittedly, a person could be simply waving at a friend across the street, or perhaps swatting at a buzzing bee. It will be hard to discern with absolute certainty that the person is hailing the self-driving car. You could make the same case for human taxi drivers too, namely that they do not know for sure that a person is doing a hailing action. The context of the moment and the movements of the potential rider have to be carefully combined to reach such a conclusion.

Okay, so trying to spot a person who is doing a streetwise hailing will be somewhat computationally tough, but not insurmountable. There will be instances of an AI driving system skipping past the person due to failing to detect that a hailing activity was underway. There will also be instances of mistakenly coming to the person to provide a ride when they were not genuinely in the act of hailing a ride.
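One common way to reduce both kinds of error is to require a sustained signal rather than a single suspicious frame. Here is a hedged sketch of that idea: the per-frame scores stand in for the output of some trained gesture-recognition model (a hypothetical component, not a real product), and the threshold and streak length are illustrative assumptions.

```python
# Illustrative sketch: turning noisy per-frame gesture scores into a
# hailing decision. The scores would come from a trained gesture-recognition
# model (not shown); the threshold and streak values are assumptions.

def detect_hailing(frame_scores: list,
                   threshold: float = 0.8,
                   min_consecutive: int = 3) -> bool:
    """Declare a hail only if several consecutive frames score high.

    Requiring a sustained streak filters out one-off motions, such as
    waving at a friend across the street or swatting at a bee.
    """
    streak = 0
    for score in frame_scores:
        streak = streak + 1 if score >= threshold else 0
        if streak >= min_consecutive:
            return True
    return False
```

Tightening the threshold reduces false pickups but increases the odds of driving past a genuine hailer; loosening it does the opposite. That tradeoff mirrors the skipped-person versus mistaken-stop errors described above.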

We can also assume some dolts will just for kicks decide to falsely attempt a hailing to see what the self-driving car will do.

In the case of a human taxi driver, the driver would likely be irked at the jokester and deliver a rather stern talking-to (or worse). One supposes that the AI driving system could send the video to a remote agent for review, and if the trickster is seen to have been playing false games, perhaps there would be some means of legitimately issuing a ticket or something along those lines (unfortunately, that could be a slippery slope too).

Conclusion

Flagging down a self-driving car that is being operated as a robo-taxi is not likely in the cards for the near-term, but certainly can be envisioned for the future.

This is going to be tricky to program.

Nonetheless, it is possible.

We will likely initially have disgruntled reports of situations in which the AI driving system went right past someone and ignored them. Similar to how there have been concerns about human taxi drivers who cherry-pick whom they will pick up, we would need to test and validate that the AI driving systems do not have any built-in patterns of bias (see my column for coverage of this and other AI Ethics issues).

There won't be much cause, out of the gate, to have self-driving cars operate in this manner. The easiest approach entails doing e-hailing. Given that the AI developers already have their hands full as they aim to just get self-driving cars to safely go from point A to point B, the notion of including a conventional street-hailing capability is ostensibly considered an edge or corner case. Those edge or corner cases are ranked as low priority and construed as outside the core of what needs to be developed.

Besides hand waving, perhaps we can program the AI driving system to detect a ride-hailing gesture such as a quick wink of the eye. Imagine though how confusing that might be when the self-driving car is going down a crowded street of pedestrians.

I know, maybe we can use mind-reading instead. If a person merely thinks about needing a lift, the AI driving system can make use of that type of hailing. As you likely know, the desire for mind-reading computers is right up there with the aspiration for autonomous vehicles (see my coverage).

Just don't read whatever else is in our minds, and stick with the earnest and singular desire of hailing a ride.

Read more here:

Flagging Down A Roaming AI Self-Driving Car Robo-Taxi Might Not Be In The Cards - Forbes


The United Nations: Empowering the UN agencies with ‘AI for Good’ Series – Analytics Insight

Posted: at 5:13 pm

The United Nations is utilizing artificial intelligence to improve the performance of its agencies

Recent progress in artificial intelligence has been immense and exponential. The technology is making its way out of research labs and into everyday life, promising to help us tackle humanity's greatest challenges. As the UN specialized agency for information and communication technologies, ITU believes in the power of AI for good and has organized the AI for Good series since 2017. The 2018 AI for Good Global Summit brought together AI innovators and public and private-sector decision-makers, including more than 30 UN agencies, to generate AI strategies and support projects to accelerate progress towards the UN Sustainable Development Goals (SDGs).

The International Telecommunication Union (ITU) is the United Nations specialized agency for information and communication technologies and has become one of the key UN platforms for exploring the impact of AI. ITU has stated that it will provide a neutral platform for government, industry, and academia to build a common understanding of the capabilities of emerging AI technologies and the consequent needs for technical standardization and policy guidance. To that end, ITU, in partnership with sister UN agencies, organizes the annual AI for Good Global Summit, the leading UN platform for international dialogue on AI.

The UN family has a critical role to play in balancing technological progress with social progress. ITU remains committed to continuing to work closely with sister UN agencies and all other stakeholders to build a common understanding of the capabilities of emerging AI technologies.

Along with this development, the United Nations announced the opening of a Centre on Artificial Intelligence and Robotics in the Netherlands to monitor developments in AI and robotics, with the support of the Government of the Netherlands and the City of The Hague. The office will help focus expertise on AI throughout the UN in a single agency, organized under the UN Interregional Crime and Justice Research Institute (UNICRI). UNICRI launched its program on AI and Robotics in 2015.

An innovative artificial intelligence (AI) tool that will make it easier for countries to measure the contributions of nature to their economic prosperity and wellbeing was launched in April 2021 by the United Nations and the Basque Centre for Climate Change (BC3). Developed by the Statistics Division of the United Nations Department of Economic and Social Affairs (UN DESA), the UN Environment Programme (UNEP), and BC3, the new tool can vastly accelerate the implementation of the ground-breaking new standard for valuing the contributions of nature that was adopted by the UN Statistical Commission the previous month. The tool uses AI via the Artificial Intelligence for Environment and Sustainability (ARIES) platform to support countries as they apply the new international standard for natural capital accounting, the System of Environmental-Economic Accounting (SEEA) Ecosystem Accounting.

In November 2021, the United Nations adopted a historic text defining the common values and principles needed to ensure the healthy development of artificial intelligence. The agreement was adopted at the 41st session of the UNESCO General Conference, showing renewed cooperation on the ethics of artificial intelligence. The text approaches AI ethics as a systematic normative reflection, based on a holistic and evolving framework of interdependent values, principles, and actions that can guide societies in dealing responsibly with the known and unknown impacts of AI technologies on human beings, societies, and the environment, and offers them a basis to accept or reject AI technologies.


Read more from the original source:

The United Nations: Empowering the UN agencies with 'AI for Good' Series - Analytics Insight
