
Category Archives: Artificial Intelligence

Will Artificial Intelligence Imperil Nuclear Deterrence? – War on the Rocks

Posted: September 23, 2019 at 7:44 pm

Nuclear weapons and artificial intelligence are two technologies that have scared the living daylights out of people for a long time. These fears have been most vividly expressed through imaginative novels, films, and television shows. Nuclear terror gave us Nevil Shute's On the Beach, Kurt Vonnegut's Cat's Cradle, Judith Merril's Shadow on the Hearth, Nicholas Meyer's The Day After, and more recently Jeffrey Lewis's 2020 Commission Report. Anxieties about artificial intelligence begat Jack Williamson's With Folded Hands, William Gibson's Neuromancer, Alex Garland's Ex Machina, and Jonathan Nolan and Lisa Joy's Westworld. Combine these fears and you might get something like Sarah Connor's playground dream sequence in Terminator 2, resulting in the "desert of the real" that Morpheus presents to Neo in The Matrix.

While strategists have generally offered more sober explorations of the future relationship between AI and nuclear weapons, some of the most widely received musings on the issue, including a recent call for an AI-enabled "dead hand" to update America's aging nuclear command, control, and communications infrastructure, tend to obscure more than they illuminate due to an insufficient understanding of the technologies involved. An appreciation for technical detail, however, is necessary to arrive at realistic assessments of any new technology, and particularly consequential where nuclear weapons are concerned. Some have warned that advances in AI could erode the fundamental logic of nuclear deterrence by enabling counter-force attacks against heretofore concealed and mobile nuclear forces. Such secure second-strike forces are considered the backbone of effective nuclear deterrence by assuring retaliation. Were they to become vulnerable to preemption, nuclear weapons would lose their deterrent value.

We, however, view this concern as overstated. Because of AI's inherent limitations, splendid counter-force will remain out of reach. While emerging technologies and nuclear force postures might interact to alter the dynamics of strategic competition, AI in itself will not diminish the deterrent value of today's nuclear forces.

Understanding the Stability Concern

The exponential growth of sensors and data sources across all warfighting domains has analysts today facing an overabundance of information. The Defense Department's Project Maven was born out of this realization in 2017. With the help of AI, then-Deputy Secretary of Defense Robert Work sought to "reduce the human factors burden of [full-motion video] analysis, increase actionable intelligence, and enhance military decision-making" in support of the counter-ISIL campaign. Hans Vreeland, a former Marine artillery officer involved in the campaign, recently explained the potential of AI in facilitating targeted strikes for counterinsurgency operations, arguing that AI "should be recognized and leveraged as a force multiplier, enabling U.S. forces to do more at higher operational tempo with fewer resources and less uncertainty." Such a magic bullet would surely be welcome as a great boon to any commander's arsenal.

Yet, some strategists warn that the same AI-infused capabilities that allow for more prompt and precise strikes against time-critical conventional targets could also undermine deterrence stability and increase the risk of nuclear use. Specifically, AI-driven improvements to intelligence, surveillance, and reconnaissance would threaten the survivability of heretofore secure second-strike nuclear forces by providing technologically advanced nations with the ability to find, identify, track, and destroy their adversaries' mobile and concealed launch platforms. Transporter-erector launchers and ballistic missile submarines, traditionally used by nuclear powers to enhance the survivability of their deterrent forces, would be at greater risk. A country that acquired such an exquisite counter-force capability could not only hope to limit damage in case of a spiraling nuclear crisis but also negate its adversaries' nuclear deterrence in one swift blow. Such an ability would undermine the nuclear deterrence calculus whereby the costs of imminent nuclear retaliation far outweigh any conceivable gains from aggression.

These expectations are exaggerated. During the 1991 Gulf War, U.S.-led coalition forces struggled to find, fix, and finish Iraqi Scud launchers despite overwhelming air and information superiority. Elusive, time-critical targets still seem to present a problem today. Facing a nuclear-armed adversary, such poor performance would prove disastrous. The prospect of just one enemy warhead surviving would give pause to any decisionmaker contemplating a preemptive counter-force strike. This, after all, is why nuclear weapons are such powerful deterrents, and why states that possess them go to great lengths to protect these assets. While some worry that AI could achieve near-perfect performance and thereby enable an effective counter-force capability, inherent technological limitations will prevent it from doing so for the foreseeable future. AI may bring modest improvements in certain areas, but it cannot fundamentally alter the calculus that underpins deterrence by punishment.

Enduring Obstacles

The limitations AI faces are twofold: poor data and the inability of even state-of-the-art AI to make up for poor data. Misguided beliefs about what AI can and cannot accomplish further impede realistic assessments.

The data used for training and operationalizing automated image-recognition algorithms suffers from multiple shortcomings. Training an AI to recognize objects of interest among other objects requires prelabeled datasets with both positive and negative examples. While pictures of commercial trucks are abundant, far fewer ground-truth pictures of mobile missile launchers are available. Beyond the fact that the available ground-truth pictures may not represent all launcher models, this data imbalance is itself consequential. To increase its accuracy with training data that includes fewer launchers than images of other vehicles, the AI would be incentivized to produce false negatives by misclassifying mobile launchers as non-launcher vehicles. Synthetic (e.g., manually warped) variations of missile-launcher images could be included to identify launchers that would otherwise go undetected. This would increase the number of false positives, however, because trucks that resemble the synthetic launchers would now be misclassified.
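
The imbalance problem described above can be illustrated in the abstract. The sketch below uses invented, synthetic features and an invented one-in-a-hundred class ratio (nothing here reflects real imagery): an off-the-shelf classifier trained on such data maximizes accuracy by defaulting to the majority class, producing false negatives, while re-weighting the rare class recovers more of it at the cost of false positives.

```python
# Minimal sketch of class imbalance (synthetic, invented data; illustration only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
n = 10_000
X = rng.normal(size=(n, 20))               # stand-in "image" features
y = (rng.random(n) < 0.01).astype(int)     # rare positive class: roughly 1% "launchers"
X[y == 1] += 0.3                           # positives are only weakly separable

# Plain training: accuracy is high because the model rarely predicts the rare class,
# i.e., it produces false negatives (launchers classified as ordinary vehicles).
plain = LogisticRegression(max_iter=1000).fit(X, y)
print(confusion_matrix(y, plain.predict(X)))

# Re-weighting the rare class catches more positives but misclassifies more ordinary
# vehicles as launchers, i.e., it trades false negatives for false positives.
balanced = LogisticRegression(max_iter=1000, class_weight="balanced").fit(X, y)
print(confusion_matrix(y, balanced.predict(X)))
```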

Moreover, images are a poor representation of reality. Whereas humans can infer the function of an object from its external characteristics, AI still struggles to do so. This is not so much an issue where an object's form is meant to inform about its function, as in handwriting or speech recognition. But a vehicle's structure does not necessarily reveal its function, a problem for an AI tasked with differentiating between vehicles that carry and launch nuclear-armed ballistic missiles and those that do not. Pixelated, two-dimensional images are not only a poor representation of a vehicle's function, but also of the three-dimensional object itself. Even though resolution can be increased and a three-dimensional representation constructed from images taken from different angles, this introduces the curse of dimensionality. With greater resolution and dimensional complexity, the number of discernable features increases, requiring exponentially more memory and running time for an AI to learn and analyze. AI's inability to discard unimportant features further makes similar pictures seem increasingly dissimilar, and vice versa.
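
As a rough, back-of-the-envelope illustration of that growth (the resolutions below are arbitrary), the raw feature count of a two-dimensional image scales with the square of its side length, and a volumetric representation with its cube:

```python
# Raw feature counts at a few illustrative resolutions (figures are arbitrary examples).
for side in (64, 256, 1024):
    print(f"{side:>5} px/side -> {side**2:>13,} pixels (2-D) | {side**3:>17,} voxels (3-D)")
```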

Could clever, high-powered AI compensate for these data deficiencies? Machine-learning theory suggests not. When designing algorithms, AI researchers face trade-offs. Data describing real-world problems, particularly those that pertain to human interactions, are always incomplete and imperfect. Accordingly, researchers must specify which patterns an AI is to learn. Intuitively it might seem reasonable for an algorithm to learn all patterns present in a particular data set, but many of these patterns will represent random events and noise or be the product of selection bias. Such an AI could also fail catastrophically when encountering new data. In turn, if an algorithm learns only the strongest patterns, it may perform poorly, although not catastrophically, on any one image. Consequently, attempts to improve an AI's performance by reducing bias generally increase variance, and vice versa. Additionally, while any tool can serve as a hammer, few will do a very good job at hammering. Likewise, no one algorithm can outperform all others on all possible problem sets. Neural networks are not universally better than decision trees, for example. Because there is an infinite number of design choices, there is no way to identify the best possible algorithm. And with new data, a heretofore near-perfect algorithm might no longer be the best choice. Invariably, some error is irreducible.
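
A minimal sketch of that bias-variance trade-off, using synthetic data and an arbitrary choice of polynomial models (the "true" pattern, noise level, and degrees are invented for illustration): a very rigid model underfits, while a very flexible one fits the noise in the training set and does worse on new data.

```python
# Bias-variance trade-off on synthetic data (all quantities invented for illustration).
import numpy as np
from numpy.polynomial import polynomial as P

rng = np.random.default_rng(1)
true_fn = lambda x: np.sin(3 * x)                       # the "pattern" to be learned
x_train = np.sort(rng.uniform(-1, 1, 30))
y_train = true_fn(x_train) + rng.normal(0, 0.3, x_train.size)
x_test = np.sort(rng.uniform(-1, 1, 200))
y_test = true_fn(x_test) + rng.normal(0, 0.3, x_test.size)

for degree in (1, 3, 15):                               # rigid, reasonable, very flexible
    coefs = P.polyfit(x_train, y_train, degree)
    train_mse = np.mean((P.polyval(x_train, coefs) - y_train) ** 2)
    test_mse = np.mean((P.polyval(x_test, coefs) - y_test) ** 2)
    print(f"degree {degree:>2}: train MSE {train_mse:.3f} | test MSE {test_mse:.3f}")
```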

Nevertheless, tailoring improves AI performance. Regarding image recognition, intimate knowledge of the object to be detected allows for greater specification, yielding higher accuracy. On the counter-force problem, however, a priori knowledge is not easily obtained; it is likely to be neither clean nor concise. As discussed above, because function cannot be fully represented in an image, it cannot be fully learned by the AI. Moreover, like most military affairs, counter-force is a contested and dynamic problem. Adversaries will attempt to conceal their mobile-missile launchers or change their design to fool AI-enabled ISR capabilities. They could also try to poison AI training data to induce misclassification. This is particularly problematic because of the one-off nature of a counter-force strike, which prevents validating AI performance with real-world experience. Simulations can only get AI so far.

When it comes to AI, near-perfect performance is tied inextricably to operating in environments that are predictable, even controlled. The counter-force challenge is anything but. Facing such a complex and dynamic problem set, AI would be constrained to lower levels of confidence. Sensor platforms would provide an abundance of imagery and modern precision-guided munitions could be expected to eliminate designated targets, but automated image recognition could not guarantee the detection of all relevant targets.

The Pitfalls of a Faulty Paradigm

Poor data and technological constraints limit AI's impact on the fundamental logic of nuclear deterrence, as well as on other problem sets requiring near-perfect levels of confidence. So why has the fuzzy buzz not given way to a more measured debate about AI's specific merits and limitations?

The military-technological innovations of the past derived their power principally from the largely familiar and relatively intuitive physical world. Once the mechanics of aviation and satellite communication were understood, they were easily scaled up to enable the awesome capabilities militaries have at their disposal today. What many fail to appreciate, however, is how fundamentally differently the world of AI operates and what enduring obstacles it contains. This unfamiliarity with the rules of the computational world sustains the application of an ill-fitting innovation paradigm to AI.

As discussed above, when problems grow more complex, AI's time and resource demands increase exponentially. The traveling salesman problem provides a simple illustration: Given a list of cities and the distances between each pair of cities, what is the shortest possible route a salesman can take that visits each city and returns to the origin city? A desktop computer can answer this question for ten cities (and 3,628,800 possible routes) in mere seconds. With just 60 cities, the number of possible routes exceeds the number of atoms in the known universe (roughly 10^80). Once the list gets up to 120 destinations, a supercomputer with as many processors as there are atoms in the universe, each of them capable of testing a trillion routes per second, would have to run longer than the age of the universe to solve the problem. Thus, in contrast to technological innovations rooted in the physical world, there is often no straightforward way to scale up AI solutions.
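
The arithmetic behind that illustration is easy to reproduce. Counting every ordering of the cities, as the paragraph above does, gives n! candidate routes; fixing the start city and ignoring direction only divides this by 2n and does not change the picture:

```python
# Factorial growth of brute-force route counts for the traveling salesman problem.
from math import factorial

for n in (10, 20, 60, 120):
    routes = factorial(n)          # every ordering of n cities
    print(f"{n:>3} cities -> {routes:.3e} possible routes")
```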

Moreover, machine intelligence is very different from human intelligence. When confronted with impressive AI results, some tend to associate machine performance with human-level intelligence without acknowledging that these results were obtained in narrowly defined problem sets. Unlike humans, AI lacks the capacity for conjecture and criticism needed to deal flexibly with unfamiliar information. It also remains incapable of learning rich, higher-level concepts from few reference points, so it cannot easily transfer knowledge from one area to another. Rather, there is a high likelihood of catastrophic failure when AI is exposed to a new environment.

Understanding AIs Actual Impact on Deterrence and Stability

What should we make of the real advantages AI promises and the real limitations it will remain constrained by? As Work, Vreeland, and others have persuasively argued, AI could generate significant advantages in a variety of contexts. While the stakes are high in all military operations, nuclear weapons are particularly consequential. But because AI cannot reach near-perfect levels of confidence in dynamic environments, it is unlikely to solve the counter-force problem and imperil nuclear deterrence.

What is less clear at this time is how AI, specifically automated image recognition, will interact with other emerging technologies, doctrinal innovations, and changes in the international security environment. AI could arguably enhance nations' confidence in their nuclear early warning systems and lessen pressures for early nuclear use in a conflict, for example, or improve verification for arms control and nonproliferation.

On the other hand, situations might arise in which an imperfect but marginally AI-improved counter-force capability would be considered good enough to order a strike against an adversary's nuclear forces, especially when paired with overconfidence in homeland missile defense. States with relatively small and vulnerable arsenals, in particular, would find it hard to regard as credible any assurances that AI would not be used to target their nuclear weapons. Their efforts to hedge against improving counter-force capabilities might include posture adjustments, such as pre-delegating launch authority or co-locating operational warheads with missile units, which could increase first-strike instability and heighten the risk of deliberate, inadvertent, and accidental nuclear use. Accordingly, future instabilities will be a product less of the independent effects of AI than of the perennial credibility problems associated with deterrence and reassurance in a world of ever-evolving capabilities.

Conclusion

As new technologies bring new forms of strategic competition, the policy debate must become better informed about technical matters. There is no better illustration of this requirement than in the debate about AI, where a fundamental misunderstanding of technical matters underpins a serious misjudgment of the impact of AI on stability. While faulty paradigms sustain misplaced expectations about AIs impact, poor data and technological constraints curtail its effect on the fundamental logic of nuclear deterrence. The high demands of counter-force and the inability of AI to provide optimal solutions for extremely complex problems will remain irreconcilable for the foreseeable future.

Rafael Loss (@_RafaelLoss) works at the Center for Global Security Research at Lawrence Livermore National Laboratory. He was a Fulbright fellow at the Fletcher School of Law and Diplomacy at Tufts University and recently participated in the Center for Strategic and International Studies' Nuclear Scholars Initiative.

Joseph Johnson is a Ph.D. candidate in computer science at Brigham Young University. His research focuses on novel applications of game theory and network theory in order to enhance wargaming. He deployed to Iraq with the Army National Guard in 2003 and worked at the Center for Global Security Research at Lawrence Livermore National Laboratory.

This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. The views and opinions expressed herein do not necessarily state or reflect those of Lawrence Livermore National Security, LLC., the United States government, or any other organization. LLNL-TR-779058.

Image: U.S. Air Force (Photo by Senior Airman Thomas Barley)


Risks associated with Artificial Intelligence worrying – Down To Earth Magazine

Posted: at 7:44 pm

And yet the concerns have not cast their shadow over India since AI research is still in its infancy in the country

Artificial Intelligence, or AI, is the new digital frontier that will transform the way the world works and lives. Profoundly so. At a basic level of understanding, AI is the theory and development of computer systems that can perform tasks that normally require human intelligence, such as visual perception, speech recognition and even decision-making.

Its gradual development in the half century since 1956, when the term was first used, gave us no hint of the extraordinary leaps in technology that would occur in the last decade and a half.

A research study by WIPO (the World Intellectual Property Organization) underlines this phenomenon with its findings: since the 1950s, innovators and researchers have published more than 1.6 million AI-related scientific publications and filed patent applications for nearly 340,000 inventions, with most of this activity occurring since 2012.

Machine learning, finds the WIPO study, is the dominant AI technique, found in 40 per cent of all the AI-related patents it has studied. This trend has grown at an average rate of 28 per cent every year from 2013 onwards.

More data, increased connectedness and greater computer power have facilitated the new breakthroughs and the AI patent boom. As to which sectors are changing rapidly, the study shows it is primarily telecommunications, transportation and life or medical sciences. These account for 42 per cent of AI-related patents filed so far.

In short, super intelligence, which most of us believed was science fiction and a development far into the future, now appears imminent. That's why there is so much concern over the risks associated with AI, from the greats of science like Stephen Hawking to technology giants such as Steve Wozniak and Elon Musk.

Of a piece is the unexpected caution being shown by the US Patent and Trademark Office. It has sought public comments on a range of AI-related concerns, many of which are centred on the diminishing role of humans in AI breakthroughs.

Among the questions it has posed are: "What are the different ways that a natural person can contribute to the conception of an AI invention and be eligible to be a named inventor?" and "Should an entity other than a natural person, or company to which a natural person assigns an invention, be able to own a patent on the AI invention?"

The dilemma for patent offices which have not addressed this worry is whether existing patent laws on inventions need to be revised to take into account inventions where an entity (computers) other than a natural person has contributed greatly to its conception.

Such esoteric concerns have not cast their shadow over India, understandably so since AI research is still in its infancy here. The Global AI Talent Report 2018 finds that India is a bit player in this critical area where, predictably, the US and China are at the forefront. Of the 22,000 PhD-educated researchers worldwide working on AI, fewer than 50 are focused seriously on AI in India.

A NITI Aayog strategy paper on AI offers little hope because of the low intensity of research, which is hobbled by a lack of expertise, personnel, skilling opportunities and enabling data ecosystems. For momentous developments, watch the Chinese and American space.

(This article was first published in Down To Earth's print edition dated September 16-30, 2019)


Artificial Intelligence Improving How We Diagnose Cancer – Technology Networks

Posted: at 7:44 pm

The journal Metabolism: Clinical and Experimental notes in a recent review that the use of artificial intelligence (AI) in medicine has come to cover topics as broad as informatics and the application of nanorobots for drug delivery. AI has come a long way from its humble beginnings. With the advanced development of AI systems and machine learning, more significant medical applications for the technology are emerging. According to Cloudwedge, FocalNet, an AI system recently developed by researchers at UCLA, can aid radiologists and oncology specialists in diagnosing prostate cancer.

According to UK Cancer Research Magazine, over 17 million cancer cases were diagnosed across the globe throughout 2018. The same research suggests there will be 27.5 million new cancer cases diagnosed each year by 2040.

Although these recent statistics seem discouraging, a comparison of diagnosis and treatment data shows that patient outcomes have improved significantly compared to a few decades ago: in the 1970s, less than a quarter of people suffering from cancer survived. Today, thanks to progress in the field, survival rates have significantly improved. AI is a part of that progress.

As early as 1988, The Annals of Internal Medicine mentioned that conventional computer-aided diagnoses were limited, and to overcome the shortfalls, researchers turned to artificial intelligence. However, because of the limited technology available at the time, the system had to be manually trained by medical personnel, and it's likely that this training only incorporated the personal experience of a handful of doctors. Despite these limitations, this set the stage for the use of neural networks in today's medical field.

These neural networks are the most basic form of artificial intelligence. Machine learning is the branch of AI that is focused on teaching machines to be better at tasks iteratively. By developing algorithms that can help systems automatically determine where they were right and where they were wrong, the system could theoretically learn generations' worth of data in a short space of time. Despite the theoretical soundness of the technique, and the use of complex algorithms that can recognize behaviors and patterns, AI technology has only recently been able to offer the human-like insight and determinations required for it to excel in the medical field.

Nature reports that the New York Genome Center relies on a unique piece of software for screening its patients for glioblastoma: an artificial intelligence system developed by IBM called Watson. Watson gained fame in 2011 thanks to its excellent performance in a televised game show, but the AI is now being put to work in the diagnostic field. However, the system still needs more data to be trained to function appropriately, and as yet, AI isn't able to teach itself what is correct and what isn't. The goal for IBM's Watson is to be able to read patient files and then access the relevant information needed to give the most accurate diagnosis and treatment plan.

While it has the ability to understand the meaning of language and can develop on its own via machine learning, Watson still has a way to go before it can be introduced into the real world as an effective assistant. But even today, AI has shown its potential in some specialized medical tasks, with human help. According to a recent Northwestern University study, AI can outperform radiologists at cancer screening, especially in patients with lung cancer. The results show that using AI cut false positives by 11%. The medical field might not be so far away from having its own well-trained AI delivering proper diagnoses. It all depends on how fast AI technology advances and how quickly it can learn to diagnose like a human physician.


Microsoft chief Brad Smith warns that killer robots are ‘unstoppable’ – AI News

Posted: at 7:44 pm

Microsoft chief Brad Smith issued a warning over the weekend that killer robots are "unstoppable" and that a new digital Geneva Convention is required.

Most sci-fi fans will think of Terminator when they hear of killer robots. In the classic film series, a rogue military AI called Skynet gained self-awareness after spreading to millions of servers around the world. Concluding that humans would attempt to shut it down, Skynet sought to exterminate all of mankind in the interest of self-preservation.

While it was once just a popcorn flick, Terminator now offers a dire warning of what could be if precautions are not taken.

As with most technologies, AI will find itself increasingly used for military applications. The ultimate goal for general artificial intelligence is to self-learn. Combine both, and Skynet no longer seems the wild dramatisation that it once did.

Speaking to The Telegraph, Smith seems to agree. Smith points towards developments in the US, China, UK, Russia, Israel, South Korea, and others, who are all developing autonomous weapon systems.

Wars could one day be fought on battlefields entirely with robots, a scenario that has many pros and cons. On the one hand, it reduces the risk to human troops. On the other, it makes declaring war easier and runs the risk of machines going awry.

Many technologists have likened the race to militarise AI to the nuclear arms race. In a pursuit to be the first and best, dangerous risks may be taken.

There's still no clear answer as to which entity is responsible for deaths or injuries caused by an autonomous machine: the manufacturer, the developer, or an overseer. This has also been a subject of much debate in regards to how insurance will work with driverless cars.

With military applications, many technologists have called for AI never to make a combat decision on its own, especially one that would result in fatalities. While AI can make recommendations, a final decision must be made by a human.

The story of Russian lieutenant colonel Stanislav Petrov in 1983 offers a warning of how a machine without human oversight may cause unimaginable devastation.

Petrov's computers reported that an intercontinental missile had been launched by the US towards the Soviet Union. The Soviet Union's strategy in such a scenario was an immediate and compulsory nuclear counter-attack against the US. Petrov trusted his instinct that the computer was incorrect and decided against launching a nuclear missile, and he was right.

Had the decision in 1983 whether to deploy a nuclear missile been made solely by the computer, one would have been launched and met with retaliatory launches from the US and its allies.

Smith wants to see a new digital Geneva Convention in order to bring world powers together in agreement over acceptable norms when it comes to AI. "The safety of civilians is at risk today. We need more urgent action, and we need it in the form of a digital Geneva Convention, rules that will protect civilians and soldiers."

Many companies, along with thousands of Google employees following backlash over a Pentagon contract to develop AI tech for drones, have pledged not to develop AI technologies for harmful use.

Smith has launched a new book called Tools and Weapons. At the launch, Smith also called for stricter rules over the use of facial recognition technology. "There needs to be a new law in this space, we need regulation in the world of facial recognition in order to protect against potential abuse."

Last month, a report from Dutch NGO PAX said leading tech firms are putting the world at risk of killer AI. Microsoft, along with Amazon, was ranked among the highest risk. Microsoft itself warned investors back in February that its AI offerings could damage the company's reputation.

"Why are companies like Microsoft and Amazon not denying that they're currently developing these highly controversial weapons, which could decide to kill people without direct human involvement?" said Frank Slijper, lead author of PAX's report.

A global campaign simply titled Campaign To Stop Killer Robots now includes 113 NGOs across 57 countries and has doubled in size over the past year.


DoD Growth In Artificial Intelligence: The Frontline Of A New Age In Defense – Breaking Defense

Posted: at 7:44 pm

The Pentagon is figuring out ways to harness artificial intelligence (AI) for advantages as far-flung as battlespace autonomy, intelligence analysis, record tracking, predictive maintenance and military medicine. AI is a key growth investment area for DoD, with nearly $1 billion allocated in the 2020 budget. The Defense Department's Joint Artificial Intelligence Center (JAIC) will see its budget double to over $208 million, with significant increases likely in 2021 and beyond.

JAIC seeks to coordinate all military service and defense agency artificial intelligence activity over a $15 million benchmark. The military is currently seeking to integrate AI into weapon systems development, augment human operators with AI-driven robotic maneuver on the battlefield and enhance the precision of military fires.

The rapid advancement and proliferation of new technologies is changing the character ofwar.

To prevent the erosion of the U.S. competitive military advantage, DOD is investing in newtechnologies to compete, deter, and if necessary, fight and win the wars of the future.

White House Fiscal Year 2020 Federal Budget

DoD's investment in AI is crucial to its continuing military advantage, ensuring the U.S. military does not lag behind rival world powers. Breaking Defense has prepared a special E-Book on artificial intelligence in defense, covering the promise, cautionary points and future development.

Download the special Breaking Defense E-Book, Artificial Intelligence: The Frontline of a New Age in Defense. It's free, and provides ideas and insights on the emergence of AI as a key factor in national security.


Artificial intelligence could help to translate critical Earth observation data Earth.com – Earth.com

Posted: at 7:44 pm

According to a new report from the European Space Agency (ESA), artificial intelligence and machine learning may be the key to accurately extracting and processing satellite data. For humans, it can be very challenging to locate the most relevant information in these massive datasets, which are transmitted from over 700 Earth observation satellites.

The need for reliable information about Earth's climate system is more urgent now than ever before. For example, the ESA Climate Change Initiative (CCI) provides critical feedback to the UN Framework Convention on Climate Change. Teams of scientists have been formed to produce precise details on specific environmental processes.

The datasets used for the CCI include 21 essential climate variables, such as greenhouse gas concentrations, sea-level rise, and the state of the world's polar ice sheets. These records, which cover four decades, are the foundation for the global climate models used to predict future changes.

Dr. Carsten Brockmann, who works on the CCI Ocean Colour science team, believes artificial intelligence has the power to address pressing challenges that are faced by climate researchers.

In machine learning, computer algorithms are trained to split, sort, and transform data. This can dramatically improve detection rates in Earth observation, as these algorithms can automatically make statistical connections within datasets for classification, prediction, or pattern discovery.
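
As a purely illustrative sketch of that kind of supervised classification (this is not the CCI teams' actual processing chain; the features, labels, and model choice are invented stand-ins for, say, per-pixel reflectances in a few spectral bands):

```python
# Toy supervised classification on made-up "satellite pixel" features (illustration only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_pixels = 5_000
bands = rng.normal(size=(n_pixels, 6))                        # six spectral bands per pixel
labels = (bands[:, 1] + 0.5 * bands[:, 4] > 0.8).astype(int)  # pretend class, e.g. cloud vs. clear

X_train, X_test, y_train, y_test = train_test_split(bands, labels, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```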

"Connections between different variables in a dataset are caused by the underlying physics or chemistry, but if you tried to invert the mathematics, often too much is unknown, and so unsolvable," said Dr. Brockmann. "For humans it's often hard to find connections or make predictions from these complex and nonlinear climate data."

Scientists involved in the CCI Aerosol project need to pinpoint changes in reflected sunlight caused by the presence of dust, smoke, and pollution in the atmosphere. Project leader Thomas Popp wants to use artificial intelligence to retrieve additional aerosol parameters from several sensors at once.

"I want to combine several different satellite instruments and do one retrieval. This would mean gathering aerosol measurements across the visible, thermal and ultraviolet spectral range, from sensors with different viewing angles," said Popp. He said that approaching this as one big data problem could make these data automatically fit together and be consistent.

"Explainable artificial intelligence is another evolving area that could help unveil the physics or chemistry behind the data," said Dr. Brockmann.

"In artificial intelligence, computer algorithms learn to deal with an input dataset to generate an output, but we don't understand the hidden layers and connections in neural networks: the so-called black box."

"We can't see what's inside this black box, and even if we could, it wouldn't tell us anything. In explainable artificial intelligence, techniques are being developed to shine a light into this black box to understand the physical connections."

By Chrissy Sexton, Earth.com Staff Writer

Image Credit: Shutterstock/NicoElNino


Artificial Intelligence (AI) creates new possibilities for personalisation this year – Gulf News

Posted: at 7:44 pm

Representational image. Image Credit: Pixabay

New Delhi: Artificial Intelligence (AI) and cross-industry collaborations are creating new avenues for data collection and offering personalised services to users this year, according to a report.

Among other technology trends that are picking up this year are the convergence of the smart home and healthcare, autonomous vehicles coming for last-mile delivery and data becoming a hot-button geopolitical issue, according to the report titled "14 Trends Shaping Tech" from CB Insights.

"As a more tech-savvy generation ages up, we'll see the smart home begin acting as a kind of in-home health aide, monitoring senior citizens' health and well being. We'll see logistics players experiment with finally moving beyond a human driver," said the report.

"And we'll see cross-industry collaborations, whether via ancestry-informed Spotify playlists or limited edition Fortnite game skins," it added.

In September 2018, Spotify partnered with Ancestry.com to utilise DNA data to create unique playlists for individuals.

Playlists reflect music linked to different ethnicities and regions. A person with ancestral roots in Bengaluru, for example, might see Carnatic violinists and Kannada film songs on their playlists.

DNA data is also informing how we eat. GenoPalate, for example, collects DNA info through saliva samples and analyses physiological components like an individual's ability to absorb certain vitamins or how fast they can metabolize nutrients.

From there, it matches this information to nutrition analyses that it has conducted on a wide range of food and suggests a personalised diet. It also sells its own meal kits that use this information to map out menus.

"We'll also see technology brands expand beyond their core products and turn themselves into a lifestyle," said the report.

For example, as electric vehicle users need to wait for their batteries to charge for anywhere from 30 minutes to two hours, the makers of these vehicles are trying to turn this idle time into an asset.

China's NioHouse couples charging stations with a host of activities. At the NioHouse, a user can visit the library, drop children off at daycare, co-work, and even visit a nap pod to rest while charging.

Nio has also partnered with fashion designer Hussein Chalayan to launch and sell a fashion line, Nio Extreme.

Tech companies today are also attempting to bridge the gap between academia and the career market.

Companies like the Lambda School and Flatiron School offer courses to train students on exactly the skills they will need to get a job, said the report.

These apprenticeships mostly focus on tech skills like computer science and coding. Training comes with the explicit goal of employment and students only need to pay their tuition once they have landed a job that pays them above a certain range.

Investors are also betting on the rise of digital goods. While these goods cannot be owned in the physical world, they come with clout, and offer personalisation and in-game experiences to otherwise one-size-fits-all characters, the research showed.


Artificial intelligence being used in schools to detect self-harm and bullying – Sky News

Posted: at 7:44 pm

One of England's biggest academy chains is testing pupils' mental health using an AI (artificial intelligence) tool which can predict self-harm, drug abuse and eating disorders, Sky News can reveal.

A leading technology think tank has called the move "concerning", saying "mission creep" could mean the test is used to stream pupils and limit their educational potential.

The Academies Enterprise Trust has joined private schools such as Repton and St Paul's in using the tool, which tracks the mental health of students across an entire school and suggests interventions for teachers.

This month, 50,000 schoolchildren at 150 schools will take the online psychological test, called AS Tracking, including 10,000 Academies Enterprise Trust pupils.

Teachers say use of the tool is "snowballing" as it offers a way to ease the pressure on teenagers struggling to deal with social media scrutiny and academic stress.

The test, which is taken twice a year, asks students to imagine a space they feel comfortable in, then poses a series of abstract questions, such as "how easy is it for somebody to come into your space?"

The child can then respond by clicking a button on a scale that runs from "very easy" to "very difficult".

Dr Simon Walker, a cognitive scientist who conducted studies with 10,000 students in order to develop AS Tracking, says this allows teachers to hear pupils' "hidden voice" - in contrast to traditional surveys, which tend to ask more direct questions.

"A 13-year-old girl or boy isn't going to tell a teacher whether they're feeling popular or thinking about self harm, so getting reliable information is very difficult," he says.

Once a child has finished the questionnaire, the results are sent to STEER, the company behind AS Tracking, which compares the data with its psychological model, then flags students which need attention in its teacher dashboard.

"Our tool highlights those particular children who are struggling at this particular phase of their development and it points the teachers to how that child is thinking," says STEER co-founder Dr Jo Walker.


Neil Woods led part of the Academies Enterprise Trust pilot of AS Tracking, in Tendring Technology College in Essex. He says that since introducing it the college has seen a 20% decrease in self harm.

"We've had a number of students where this has really significantly helped," he says.

"There is a mental health crisis, we know that. This tool is not going to solve it, but it's going to help us identify those students who may need the support."

The AS Tracking dashboard labels children red, amber or green according to their level of mental wellbeing. STEER, which provides training in the use of its tool, say this is necessary in order to make the complex data accessible for teachers.
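
STEER's psychological model is proprietary, so the following is a purely hypothetical sketch of the last step described above: turning a per-pupil score into a red, amber or green flag for a teacher dashboard. The score range, cut-offs, and pupil names are invented for illustration.

```python
# Hypothetical red/amber/green flagging of per-pupil scores (invented thresholds).
def rag_flag(score: float, amber_cutoff: float = 0.4, red_cutoff: float = 0.7) -> str:
    """Map a 0-1 risk score to a traffic-light label for a teacher dashboard."""
    if score >= red_cutoff:
        return "red"
    if score >= amber_cutoff:
        return "amber"
    return "green"

for pupil, score in {"pupil_A": 0.12, "pupil_B": 0.55, "pupil_C": 0.81}.items():
    print(pupil, "->", rag_flag(score))
```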

However, technology experts warn that such rankings could be misused.

"With these types of technologies there is a concern that they are implemented for one reason and later used for other reasons," said Carly Kind, director of the Ada Lovelace Institute.

"There is the scope for mission creep, where somebody in a school says this would be a great tool to sort children into different classrooms, or decide which students should go on to university and which shouldn't."

AS Tracking costs a school with 1,200 pupils up to £25,500 a year. According to STEER's own figures, the psychological biases it tests for are linked to risks of self-harm, bullying and not coping with pressure in 82% of cases.

Once pupils have finished at school, they get their AS Tracking data in an app which they can use to see their own progress.

The National Education Union cautiously welcomed AS Tracking's growth.

"Exploring new ways for students to ask for help might be valuable, but aren't a substitute for giving teachers time to know their students and maintain supportive relationships," deputy general secretary Amanda Brown told Sky News.

Mr Wood, who also oversees art and music therapy at Tendring Technology College, agreed. "It's the wraparound interventions that you give to students that are important," he said.

"It's not just that we are looking at the data in one context, we are looking at their academic profile, we're looking at their pupil voice, we're looking at what parents are actually saying to us and AS Tracking is just another part of the puzzle."


The impact on infrastructure once artificial intelligence shifts into top gear – Engineers Journal

Posted: at 7:44 pm

Arup's Tim Chapman points the way forward for people planning a career, which is likely to last at least 45 years, and through which they will encounter unfathomable change


It is very easy to become fearful when reading press report after press report about the impact of artificial intelligence (AI) on our civilisation: all the jobs to be lost, and what will become of us? Or our children?

First, it is worthwhile being clear about what AI actually is. Definitions can all too easily become conflated with the latest scary film portraying robots with high intelligence and even human emotions. Great films like I, Robot and Ex Machina are wonderful stories and do show what may ultimately happen, but that level of technology is many decades away, if it ever happens.

What is more insidious is the progressive augmentation we get from ever more adept systems. Standard computer programming has been around for decades; what is new(ish) is the propensity for computers to teach themselves how to spot patterns and progressively improve.

And generally these algorithms are good for us, such as the ones that learn how to detect potentially cancerous moles on skin: initially learning from the best doctors and thereby becoming better than any of them, then learning from historic photos with a precise diagnosis of which ones did actually turn cancerous later.

This is an application of machine learning, or artificial intelligence, in action; the whizzy, clever robots which can do anything are called artificial general intelligence (AGI).

So then, what is intelligence? Einstein wrote, "The true sign of intelligence is not knowledge but imagination," while Socrates wrote, "I know that I am intelligent, because I know that I know nothing." It is unlikely that your HP laptop is anywhere near any thoughts that profound.

Consider the many types of intelligence humans display:

1. Musical-rhythmic
2. Visual-spatial
3. Verbal-linguistic
4. Logical-mathematical: understanding the principles of a causal system
5. Bodily-kinaesthetic: sports, dance, acting, making things
6. Interpersonal: social skills
7. Intrapersonal: self-reflection and self-awareness
8. Naturalistic: nurturing information to natural surroundings
9. Existential: spiritual

Currently, AI is making progress in only a small portion of these areas, fortunately. In the field of original composition, AI is making some progress in art and music writing, but mainly by averaging much prior human art.

Cloudpainter won the 2018 Robot Art Prize with a decidedly confused pastiche, and the Portrait of Edmond Belamy was exhibited at Christie's with an asking price of 7,000 to 9,000. It was made from an amalgamation of 15,000 portraits from the 14th to the 20th centuries, so is far from original.

Every year the firm Gartner comes up with its hype curve for new technologies, plotting the progress of each from an "innovation trigger" through a "peak of inflated expectations" towards a "trough of disillusionment", eventually into a "slope of enlightenment" and hopefully reaching a "plateau of productivity".

Various AI technologies can be found throughout all of these zones, with AGI at the most undeveloped end.

It is worth putting AI into the context of world trends, which can combine to either thwart or reinforce existential threats. AI can thus be seen as either a saviour or a reinforcer of risks to humanity's future, like global warming, resource depletion, destruction of our environment and deteriorating global order, alongside more usual threats like disease pandemics, which we thought we had cured but which antibiotic resistance could allow to return.

Another interesting facet of this trend towards computer-assisted process improvement and ever more expert systems is: where does it leave the current human experts?

The professions derive their exalted position in society from the pact made at the time of the medieval guilds and it has been unchallenged until now.

Now various professions are being dumbed down by the invasion of expert systems, initially amplifying and improving expert opinions, but eventually supplanting them, apart from a small number of more complex cases.

This could easily lead to a reduction in status and salary for adherents to those professions. The recent 737 Max crashes illustrate the perils of uncontrolled trust in AI systems, but also show the zeal with which such systems are intruding into activities that we consider to be human controlled. Will truck and train drivers be needed in the long term?

AI systems can also disrupt industries in other ways by overturning standard business models. Hence Uber is the world's biggest taxi company but owns no taxis.

Facebook is the world's biggest media content provider but provides none of the content itself; it is just a platform. Many industries are ripe for revolution in ways we can't yet imagine.

And these changes now happen quickly. It used to be that a disrupted industry had time to react to change but now it can occur in months.

This backdrop can be applied to any industry including that for infrastructure provision. In parallel, we are getting sharper about how we provide infrastructure nowadays. It is no longer the domain of nerdish engineers working in a vacuum plotting lines on maps with less consideration for the communities that will host it than they should have done.

We are much more aware of the special needs of the society for whom we provide infrastructure and which will pay for it though taxes or user charges.

We recognise that it is the outcome from the infrastructure that really matters rather than the assets themselves, and we also know that the successful operation of assets is at least as noble an activity as designing new ones.

AI is intruding into all of these worlds too, and in some ways the expert systems are starting to obviate the need for high technical skills.

Equally, data analytics on users of infrastructure are providing us with fascinating insights about how it can work, enabling us to design much better infrastructure that is ever more useful to the communities whose standard of living depends on its successful operation.

It is interesting to muse about whether there will be limits to the levels of intrusiveness that computers will be allowed to reach in our society. While they have the power to render many services quicker and thereby cheaper, what will happen to all the displaced humans?

Initially, those most at risk of being displaced from the workforce are those with the lowest skills, drivers being an obvious example. What will the people who currently drive taxis and trucks do if that opening is no longer available to them? Will there be other jobs that allow them to support their families?

Presently, it seems that governments are becoming weaker and are less capable of taming the concerted global actions of the big tech organisations.

Nor, it seems, can they tame the ambition of those large corporations to impose new technologies on us, making our lives potentially easier but all the time minimising the tax bills that sustain our society and enable us to make our civilisation generous to everybody. Will governments eventually exert a higher level of control, or will the big tech firms continue to run wild?

Before we get too worried it is worth reflecting on what computers are good for, and not so good at. We know that they are very, very good at:

- Tasks/processes (if programmed well)
- Ordered memory (if designed well)

But not so good at:

- Curiosity
- Obscure disordered memory
- Radical rethinking
- Strategies
- Different situations
- Non-routine tasks

Hence an AI expert firm might thrive for two years, but would be incapable of dealing with the changes in our world, not least in the advance of technology.

It is worthwhile reflecting on the various levels at which AI might hit:

- Industry: profound, inexorable change
- Firm: inexorable too, with winners and losers
- Person: think of yourself or your children; there will be winners and losers too, therefore we all need to make the right personal choices, staying ahead of the sorts of technology that could make us redundant. This makes it very difficult to plan for a 45-year career
- Society: which depends on how nimble governments are and whether they stay ahead of the global tech firms

So when AI finally hits the world of infrastructure creation and operation, it is fair to say that:

- Industry will become far more efficient and agile
- And also potentially more responsive to society
- And hopefully less impactful on society in terms of pollution, which can be more efficiently minimised
- Hopefully making construction cheaper to build, with lower user charges, so more affordable
- But blander too
- With fewer people employed
- And fewer experts needed, so fewer peak salaries (some will still be needed though!)

A critical reflection is what happens to those who have no other place to go?

Author: Tim Chapman CEng FICE FIEI FREng, director and leader of the Infrastructure London Group, Arup.


Artificial Intelligence Takes On Earthquake Prediction – Quanta Magazine

Posted: at 7:44 pm

When the Los Alamos researchers probed those inner workings of their algorithm, what they learned surprised them. The statistical feature the algorithm leaned on most heavily for its predictions was unrelated to the precursor events just before a laboratory quake. Rather, it was the variance, a measure of how the signal fluctuates about the mean, and it was broadcast throughout the stick-slip cycle, not just in the moments immediately before failure. The variance would start off small and then gradually climb during the run-up to a quake, presumably as the grains between the blocks increasingly jostled one another under the mounting shear stress. Just by knowing this variance, the algorithm could make a decent guess at when a slip would occur; information about precursor events helped refine those guesses.

The finding had big potential implications. For decades, would-be earthquake prognosticators had keyed in on foreshocks and other isolated seismic events. The Los Alamos result suggested that everyone had been looking in the wrong place, and that the key to prediction lay instead in the more subtle information broadcast during the relatively calm periods between the big seismic events.

To be sure, sliding blocks don't begin to capture the chemical, thermal and morphological complexity of true geological faults. To show that machine learning could predict real earthquakes, Johnson needed to test it out on a real fault. What better place to do that, he figured, than in the Pacific Northwest?

Most if not all of the places on Earth that can experience a magnitude 9 earthquake are subduction zones, where one tectonic plate dives beneath another. A subduction zone just east of Japan was responsible for the Tohoku earthquake and the subsequent tsunami that devastated the country's coastline in 2011. One day, the Cascadia subduction zone, where the Juan de Fuca plate dives beneath the North American plate, will similarly devastate Puget Sound, Vancouver Island and the surrounding Pacific Northwest.

The Cascadia subduction zone stretches along roughly 1,000 kilometers of the Pacific coastline from Cape Mendocino in Northern California to Vancouver Island. The last time it breached, in January 1700, it begot a magnitude 9 temblor and a tsunami that reached the coast of Japan. Geological records suggest that throughout the Holocene, the fault has produced such megaquakes roughly once every half-millennium, give or take a few hundred years. Statistically speaking, the next big one is due any century now.

That's one reason seismologists have paid such close attention to the region's slow slip earthquakes. The slow slips in the lower reaches of a subduction-zone fault are thought to transmit small amounts of stress to the brittle crust above, where fast, catastrophic quakes occur. With each slow slip in the Puget Sound-Vancouver Island area, the chances of a Pacific Northwest megaquake ratchet up ever so slightly. Indeed, a slow slip was observed in Japan in the month leading up to the Tohoku quake.

For Johnson, however, there's another reason to pay attention to slow slip earthquakes: They produce lots and lots of data. For comparison, there have been no major fast earthquakes on the stretch of fault between Puget Sound and Vancouver Island in the past 12 years. In the same time span, the fault has produced a dozen slow slips, each one recorded in a detailed seismic catalog.

That seismic catalog is the real-world counterpart to the acoustic recordings from Johnson's laboratory earthquake experiment. Just as they did with the acoustic recordings, Johnson and his co-workers chopped the seismic data into small segments, characterizing each segment with a suite of statistical features. They then fed that training data, along with information about the timing of past slow slip events, to their machine learning algorithm.

After being trained on data from 2007 to 2013, the algorithm was able to make predictions about slow slips that occurred between 2013 and 2018, based on the data logged in the months before each event. The key feature was the seismic energy, a quantity closely related to the variance of the acoustic signal in the laboratory experiments. Like the variance, the seismic energy climbed in a characteristic fashion in the run-up to each slow slip.
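
A hedged sketch of the general workflow described above, not the Los Alamos group's actual code: the continuous record is chopped into windows, each window is reduced to a few statistical features (such as its variance and energy), and a regressor is trained to estimate the time remaining until the next event. The synthetic signal and countdown labels below are stand-ins.

```python
# Sketch: rolling-window statistical features feeding a regressor (synthetic stand-in data).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(7)
signal = rng.normal(size=120_000)                       # stand-in for a continuous seismic record
time_to_event = np.linspace(10.0, 0.0, signal.size)     # toy label: countdown to the next slip

window = 1_000
features, labels = [], []
for start in range(0, signal.size - window, window):
    seg = signal[start:start + window]
    features.append([seg.var(), np.sum(seg ** 2), seg.max() - seg.min()])  # variance, energy, range
    labels.append(time_to_event[start + window - 1])

model = GradientBoostingRegressor().fit(features, labels)
print("predicted time to next event for the last window:", model.predict([features[-1]])[0])
```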

The Cascadia forecasts weren't quite as accurate as the ones for laboratory quakes. The correlation coefficients characterizing how well the predictions fit observations were substantially lower in the new results than they were in the laboratory study. Still, the algorithm was able to predict all but one of the five slow slips that occurred between 2013 and 2018, pinpointing the start times, Johnson says, to within a matter of days. (A slow slip that occurred in August 2019 wasn't included in the study.)

For de Hoop, the big takeaway is that machine learning techniques "have given us a corridor, an entry into searching in data to look for things that we have never identified or seen before." But he cautions that there's more work to be done: "An important step has been taken, an extremely important step. But it is like a tiny little step in the right direction."

The goal of earthquake forecasting has never been to predict slow slips. Rather, it's to predict sudden, catastrophic quakes that pose danger to life and limb. For the machine learning approach, this presents a seeming paradox: The biggest earthquakes, the ones that seismologists would most like to be able to foretell, are also the rarest. How will a machine learning algorithm ever get enough training data to predict them with confidence?

The Los Alamos group is betting that their algorithms won't actually need to train on catastrophic earthquakes to predict them. Recent studies suggest that the seismic patterns before small earthquakes are statistically similar to those of their larger counterparts, and on any given day, dozens of small earthquakes may occur on a single fault. A computer trained on thousands of those small temblors might be versatile enough to predict the big ones. Machine learning algorithms might also be able to train on computer simulations of fast earthquakes that could one day serve as proxies for real data.

But even so, scientists will confront this sobering truth: Although the physical processes that drive a fault to the brink of an earthquake may be predictable, the actual triggering of a quake, the growth of a small seismic disturbance into full-blown fault rupture, is believed by most scientists to contain at least an element of randomness. Assuming that's so, no matter how well machines are trained, they may never be able to predict earthquakes as well as scientists predict other natural disasters.

"We don't know what forecasting in regards to timing means yet," Johnson said. "Would it be like a hurricane? No, I don't think so."

In the best-case scenario, predictions of big earthquakes will probably have time bounds of weeks, months or years. Such forecasts probably couldn't be used, say, to coordinate a mass evacuation on the eve of a temblor. But they could increase public preparedness, help public officials target their efforts to retrofit unsafe buildings, and otherwise mitigate hazards of catastrophic earthquakes.

Johnson sees that as a goal worth striving for. Ever the realist, however, he knows it will take time. "I'm not saying we're going to predict earthquakes in my lifetime," he said, "but we're going to make a hell of a lot of progress."

This article was reprinted on Wired.com.
