PERSPECTIVE: Why Strong Artificial Intelligence Weapons Should Be Considered WMD – Homeland Security Today

The concept of strong Artificial Intelligence (AI), or AI that is cognitively equivalent to (or better than) a human in all areas of intelligence, is a common science fiction trope.[1] From HAL's adversarial relationship with Dave in Stanley Kubrick's film 2001: A Space Odyssey[2] to the war-ravaged apocalypse of James Cameron's Terminator[3] franchise, Hollywood has vividly imagined what a dystopian future with superintelligent machines could look like and what the ultimate outcome for humanity might be. While I would not argue that the invention of super-intelligent machines will inevitably lead to our Schwarzenegger-style destruction, rapid advances in AI and machine learning have raised the specter of strong AI instantiation within a lifetime,[4] and this prospect requires serious consideration. It is becoming increasingly important that we have a real conversation about strong AI before it becomes an existential issue, particularly within the context of decision making for kinetic autonomous weapons and other military systems that can produce a lethal outcome. From these discussions, appropriate global norms and international laws should be established to prevent the proliferation and use of strong AI systems for kinetic operations.

With the invention of almost every new technology, changes to the ethical norms surrounding its appropriate use lag significantly behind its proliferation. Consider social media as an example. We imagined that social media platforms would bring people together and facilitate greater communication and community, yet the reality has become significantly less sanguine.[5] Instead of bringing people together, social media has deepened social fissures and enabled the spread of disinformation at a virulent rate. It has torn families apart, driven deeper divides, and at times transformed the very definition of truth.[6] Only now are we considering ethical restraints on social media to prevent the poison from spreading.[7] It is highly probable that any technology we create will ultimately reflect the darker parts of our nature unless we create ethical limits before the technology becomes ubiquitous. It would be foolish to believe that AI will be an exception to this rule. This becomes especially important when considering strong AI designed for warfare, which is distinguishable from other forms of artificial intelligence.

To fully examine the implications of strong AI, we need to understand how it differs from current AI technologies, which are what we would consider weak AI.[8] Your smartphone's ability to recognize images of your face is an example of weak AI. For a military example, an algorithm that can recognize a tank in an aerial video would be considered a weak AI system.[9] It can identify and label tanks, but it does not really know what a tank is or have any cognizance of how it relates to a tank. In contrast, a strong AI would be capable of the same task (as well as parallel tasks) with human-level proficiency or beyond, but with an awareness of its own mind. This makes strong AI a far more unpredictable threat. Not only would strong AI be highly proficient at rapidly processing battlefield data for pre- and post-strike decision making, but it would do so with an awareness of itself and its own motives, whatever they might be. Proliferation of weak AI systems for military applications is already becoming a significant issue. As an anecdotal example, Vladimir Putin has stated that the nation that leads in AI will be the ruler of the world.[10] Imagine what the outcome could be if military AI systems had their own motives. This would likely involve catastrophic failure modes beyond what could be realized from weak AI systems. Thus, military applications of strong AI deserve their own consideration.

At this point, one may be tempted to dismiss strong AI as being highly improbable and therefore not worth considering. Given the rapid pace of AI technology development, it could be argued that, while the precise probability of instantiating strong AI is unknown,[11] it is safe to assume it is greater than zero. But what matters in this case is not the probability of strong AI instantiation, but the severity of a realized risk. To understand this, one need only consider how animals of greater intelligence typically regard animals of lesser intelligence. Ponder this scenario: when we have ants in our garden, does their well-being ever cross our minds? From our perspective, the moral value of an insect is insignificant in relation to our goals, thus we would not hesitate to obliterate them simply for eating our tomatoes. Now imagine if we encountered a significantly more intelligent AI: how might it regard us in relation to its goals, whatever they might be? This meeting could yield an existential crisis if our existence hinders the AI's goal achievement; thus, even this low-probability event could have a catastrophic outcome if it became a reality.

Understanding what might motivate a strong AI could provide some insight into how it might relate to us in such a situation. Human motivation is an evolved phenomenon. Everything that drives us (self-preservation, hunger, sex, desire for community, accumulation of resources, etc.) exists to facilitate our survival and that of our kin.[12] Even higher-order motives, like self-actualization, can be linked to the more fundamental goal of individual and species survival when viewed through the lens of evolutionary psychology.[13] However, a strong AI would not necessarily have evolved. It may simply be instantiated in situ as software or hardware. In this case, no evolutionary force would have acted over eons to generate a motivational framework analogous to what we, as humans, experience. For an instantiated strong AI, it might be prudent to assume that the AI's primary motive would be to achieve whatever goal it was initially programmed to pursue. Thus, self-preservation might not be the primary motivating factor. However, the AI would probably recognize that its continued existence is necessary for it to achieve its primary goal, so self-preservation could become a meaningful sub-goal.[14] Other sub-goals may also exist, some of which would not be obvious to humans in the context of how we understand motivation. The AI's thought process by which sub-goals are generated or achieved might be significantly different from what humans would expect.

The existence of AI sub-goals that do not follow the patterns of human motivation implies the existence of a strong AI creative process that may be completely alien to us. One only needs to look at AI-generated art to see that AI creativity can manifest itself in often grotesque ways that are vastly different from what a human might expect.[15] While weird AI artistry hardly poses an existential threat to humanity, it illustrates the concept of perverse instantiation,[16] where the AI achieves a goal, but in an unexpected and potentially malignant way. As a military example, imagine a strong AI whose primary goal is to degrade and destroy the adversary. As AI-generated art demonstrates, AI creativity can be unbounded in its weirdness, as its thought processes are unlike those of any evolved intelligence. This AI might find a creative and completely unforeseen way to achieve its primary goal that leads to significant collateral damage against non-combatants, such as innocent civilians. Taking this analogy to a darker level, the AI might determine that a useful sub-goal would be to remove its military handlers from the equation. Perhaps they act as a "man-in-the-middle" gatekeeper in effecting the AI's will, and the AI determines that this arrangement creates unacceptable inefficiencies. In this perverse instantiation, the AI achieves its goal of destroying the enemy, but in a grotesque way, by killing its overseers.

The next obvious question is, how could we contain a strong AI in a way that would prevent malignant failure? The obvious solution might be to engineer a deontological ethic: an Asimovian set of rules to limit the AI's behavior.[17] Considering a strong AI's tendency toward unpredictable creativity in methods of goal achievement, encoding an exhaustive set of rules would pose a titanic challenge. Additionally, deontological ethics is often subject to deontological failure, e.g., what happens when rules contradict one another? A classic example is the trolley problem: if an AI is not allowed to kill a human, but the only two possible choices involve the death of humans, which choice does it make?[18] This is already an issue in weak AI, specifically with self-driving cars.[19] Does the vehicle run over a small child who crosses the road, or crash and kill its occupants, if those are the only possible choices? (A toy sketch of this failure mode appears after this paragraph.) If deontological ethics are an imperfect option, perhaps AI disembodiment would be a viable solution. In this scenario, the AI would lack any means to directly interact with its environment, acting as a sort of oracle in a box.[20] The AI would advise its human handlers, who would act as ethical gatekeepers in effecting the AI's will. Upon cursory examination, this seems plausible, but we have already established that a strong AI might determine that a man-in-the-middle arrangement degrades its ability to achieve its primary goal, so what would prevent the AI from coercing its handlers into enabling its escape? In our hubris, we would like to believe that we could not be outsmarted by a disembodied AI, but a being that is more intelligent than us could reasonably outsmart us just as easily as a savvy adult could a naïve child.
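To make the rule-conflict failure mode concrete, here is a toy sketch in Python. The rules, actions, and attributes are invented purely for illustration and do not resemble any real vehicle or weapons controller; the point is only to show how a deontological rule set can leave a system with no permissible action.

```python
# Toy illustration of deontological failure: every rule is a predicate
# over an action, and we check each available action against all rules.
RULES = [
    ("do_not_harm_humans", lambda a: a["humans_harmed"] == 0),
    ("obey_operator",      lambda a: a["ordered"]),
]

# The only two options in a trolley-style dilemma; both harm someone.
ACTIONS = [
    {"name": "swerve",         "humans_harmed": 2, "ordered": False},
    {"name": "stay_on_course", "humans_harmed": 1, "ordered": True},
]

def violated_rules(action):
    """Return the name of every rule this action breaks."""
    return [name for name, check in RULES if not check(action)]

violations = {a["name"]: violated_rules(a) for a in ACTIONS}
print(violations)

if all(violations.values()):
    # Every available action breaks at least one rule, so the rule set
    # offers no guidance and the system's behavior becomes undefined.
    print("Deontological failure: no permissible action exists.")
```

When every option violates some rule, the rule set is silent exactly when guidance is needed most, which is the failure the trolley problem exposes.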

While a single strong AI instantiation could pose a significant risk of malignant failure, imagine the impact that the proliferation of strong AI military systems might have on how we approach war. Our adversaries are earnestly exploring AI for military applications; thus, if strong AI becomes a reality, it is extremely likely to proliferate.[21] The real problem then becomes not how to prevent malignant failure of a single strong AI, but how to address the complex adaptive system of multiple strong AIs fighting against all logical actors, none of which exhibits reasonably predictable behavior.[22] To further complicate matters, ethical decision making is influenced by culture, and our adversaries might have different ideas as to which strong AI behaviors are acceptable during war and which are not.

To avoid this potentially disastrous outcome, I propose the following for further discussion, with the hopeful end-goal of appropriate global norms and future international laws that ban strong AI decision making for kinetic offensive operations: strong AI-based lethal autonomous weapons should be considered weapons of mass destruction. This may be the best way to prevent the complex, unpredictable destruction that could arise from multiple strong AI systems intent on killing the enemy or unnecessarily wreaking havoc on critical infrastructure, which may have negative secondary and tertiary effects impacting countless innocent non-combatants. Inevitably, there may be rogue or non-signatory actors who develop weaponized strong AI systems despite international norms. Any strategy that addresses strong AI should also consider this potential outcome.

Several years ago, seriously discussing strong AI might have gotten you laughed out of the room. Today, as AI continues to advance and our adversaries continue to aggressively militarize AI technologies, it is imperative that the United States consider a defense strategy specifically addressing the possibility of a strong AI instantiation. Any use of strong AI on the battlefield should be limited to non-kinetic operations to reduce the impact of malignant failure. This standard should be reflected in multilateral treaty agreements or protocols to prevent strong AI misuse and the inevitable unpredictability of adversarial strong AI systems interacting with each other in complex and possibly horrific ways. This may be a sufficient way to ensure that weaponized strong AI does not cause cataclysmic devastation.

The author is responsible for the content of this article. The views expressed do not reflect the official policy or position of the National Intelligence University, the Department of Defense, the U.S. Intelligence Community, or the U.S. Government.



Defense Official Calls Artificial Intelligence the New Oil – Department of Defense

"Artificial intelligence is the new oil, and the governments or the countries that get the best datasets will unquestionably develop the best AI," the Joint Artificial Intelligence Center's chief technology officer said Oct. 15.

Speaking on a panel about AI superpowers at the Politico AI Summit, Nand Mulchandani said AI is a very large technology and industry. "It's not a single, monolithic technology," he said. "It's a collection of algorithms, technologies, etc., all cobbled together to call AI."

The United States has access to global datasets, and that's why global partnerships are so incredibly important, he said, noting the Defense Department recently launched the AI partnership for defense at the JAIC to have access to global datasets with partners, which gives DOD a natural advantage in building these systems at scale.

"Industry has to develop on its own, and that's where the global talent is; that's where the money is; that's where all of the innovation is going on," Mulchandani noted, adding that the U.S. government's job is to be able to work in the best way and absorb the best technology that it can. That includes working hand in glove with industry on a voluntary basis, he said. He said there are certain areas of AI that are highly scaled that you can trust and deploy at scale.

"But notice many or not many of those systems have been deployed on weapon systems. We actually don't have any of them deployed," he said.

Mulchandani said the reason is that explainability, testing, trust and ethics are all highly connected pieces, as is AI security when it comes to model security and data security, such as being able to penetrate and break models. This is all very early, which is why the DOD and the U.S. government more widely have taken a very stringent approach to putting together the ethics principles and frameworks within which we're going to operate.

"[Earlier this year, one of the first international visits that we made were to NATO and our European partners, and [we] then pulled them into this AI partnership for defense that I just talked about," he said. "Thirteen different countries are getting together to actually build these principles because we actually do need to build a lot of confidence in this."

He said DOD continues to attract and retain the best talent at the JAIC. "The real tricky part is: How do we actually take that technology and get it deployed? That's the complexity of integrating AI into existing systems, because one isn't going to throw away the entire investment of legacy systems that one has, whether it be software or hardware or even military hardware," Mulchandani said. "[How] can we absorb the best of what's coming and get it integrated into the system? That's where the complexity is."

DOD has had a long history of companies that know how to do that, and harnessing it is the actual work and the piece that we're worried about the most and really are focused on the most, he added.

On the subject of a global workforce, the technology companies DOD works with are global companies, he emphasized. "These are not linked to a particular geographic region. We hire. We bring the best talent in, wherever it may be, [and we have] research and development arms all over the world."

DOD has special security needs and requirements that must be taken care of when it comes to data, and the JAIC is putting in place very different development processes now to handle AI development, he said. "So, the dynamics of the way software gets built [and] the dynamics of who builds it are changing in a very significant way," Mulchandani said. "But the global war for talent is a real one, which is why we are not actually focused on trying to corner the market on talent."

He said they are trying to build leverage by building relationships with the leading AI companies to harness the innovation.


Smart startups accelerate the pace of AI innovation – Sponsored Content by Dell EMC – EnterpriseAI

Dell Technologies teams with AI startups to make it easier for organizations to put new artificial intelligence solutions into production.

At the outset of a new decade, news organizations and their prognosticators like to make predictions about what lies ahead for the next 10 years. Today, I will jump into the same game and make a sweeping prediction about the decade that is just getting under way: The 2020s will be the decade in which artificial intelligence comes of age.

Today, there is a tidal wave of momentum for AI, which is apparent in projections for dramatic growth in the AI market. A recent report from Fortune Business Insights, for example, predicts that the global AI market will grow at a rate of more than 33 percent per year, rising from around $20 billion in 2018 to top $200 billion by 2026.[1]

This kind of market growth creates fertile ground for startup companies that are bringing innovative AI technologies to market, including those for machine learning, deep learning and more. This is a game most companies now want to play. As Jeremy Achin, CEO of the AI startup DataRobot, says in a Forbes magazine story, "Everyone knows you have to have machine learning in your story or you're not sexy."[2]

At Dell Technologies, we are all in on the move to AI-driven products and processes. To that end, we work closely with many AI startups to bring their software innovations to market on Precision workstations and PowerEdge servers, and to deliver joint HPC/AI solutions that make it easier for companies to adopt new AI technologies.

For example, Dell Technologies recently introduced new solutions to advance HPC and AI innovation, including new Dell EMC Ready Solutions designed to simplify and accelerate the path to AI.

The new scalable Dell EMC HPC Ready Architecture for AI and Data Analytics delivers the power of accelerated AI computing from the edge to high performance computing with an easy-to-deploy cloud-native software stack. The new Ready Solutions for Data Analytics, a validated design for Domino Data Lab, enables data scientists to develop and deliver models faster while providing IT with a centralized, extensible platform spanning the entire data science lifecycle.

To simplify AI deployments for organizations of all sizes, Dell Technologies also released new reference architectures developed in collaboration with AI partners.

These architectures are designed to help organizations accelerate the deployment of AI solutions for training and inferencing to modernize, automate and transform their data centers. The architectures are optimized for Intel Xeon Scalable processors and Dell EMC PowerEdge servers, storage and data protection technologies.

This is exactly what it is going to take for enterprises to capitalize on the promise of AI in the 2020s. To get there, organizations need partners, solutions and technologies that pave the path to the future.

And at Dell Technologies, we're excited to help accelerate this move to the digitally driven business that capitalizes on AI across the enterprise.

Ready to get started?

There are several ways your organization can get started down the path to proven approaches to AI.

[1] Fortune Business Insights, Artificial Intelligence (AI) Market Share, Size, and Industry Analysis, January 2020.

[2] Forbes, AI 50: America's Most Promising Artificial Intelligence Companies, September 17, 2019.

[3] Dell Technologies, TCO Analysis: HPC Ready Architecture for AI and Data Analytics, https://infohub.delltechnologies.com/section-assets/h18136-tco-analysis-dell-emc-hpc-ra-for-ai-da-sb, February 2020.


A Simple Tactic That Could Help Reduce Bias in AI – Harvard Business Review

It's easier to program bias out of a machine than out of a mind.

That's an emerging conclusion of research-based findings, including my own, that could lead to AI-enabled decision-making systems being less subject to bias and better able to promote equality. This is a critical possibility, given our growing reliance on AI-based systems to render evaluations and decisions in high-stakes human contexts, in everything from court decisions, to hiring, to access to credit, and more.

It's been well established that AI-driven systems are subject to the biases of their human creators: we unwittingly bake biases into systems by training them on biased data or with rules created by experts with implicit biases.

Consider the Allegheny Family Screening Tool (AFST), an AI-based system that predicts the likelihood a child is in an abusive situation using data from the same-named Pennsylvania county's Department of Human Services, including records from public agencies related to child welfare, drug and alcohol services, housing, and others. Caseworkers use reports of potential abuse from the community, along with whatever publicly available data they can find for the family involved, to run the model, which predicts a risk score from 1 to 20; a sufficiently high score triggers an investigation. Predictive variables include factors such as receiving mental health treatment, accessing cash welfare assistance, and others.

Sounds logical enough, but there's a problem, and a big one. By multiple accounts, the AFST has built-in human biases. One of the largest is that the system heavily weights past calls about families, such as from healthcare providers, to the community hotline, and evidence suggests such calls are over three times more likely to involve Black and biracial families than white ones. Though multiple such calls are ultimately screened out, the AFST relies on them in assigning a risk score, resulting in potentially racially biased investigations if callers to the hotline are more likely to report Black families than non-Black families, all else being equal. This can result in an ongoing, self-fulfilling, and self-perpetuating prophecy where the training data of an AI system can reinforce its misguided predictions, influencing future decisions and institutionalizing the bias.

It doesn't have to be this way. More strategic use of AI systems, through what I call "blind taste tests," can give us a fresh chance to identify and remove decision biases from the underlying algorithms, even if we can't remove them completely from our own habits of mind. Breaking the cycle of bias in this way has the potential to promote greater equality across contexts, from business to science to the arts, on dimensions including gender, race, socioeconomic status, and others.

Blind taste tests have been around for decades.

Remember the famous Pepsi Challenge from the mid-1970s? When people tried Coca-Cola and Pepsi blind, with no labels on the cans, the majority preferred Pepsi over its better-selling rival. In real life, though, simply knowing it was Coke created a bias in favor of the product; removing the identifying information, the Coke label, removed the bias so people could rely on taste alone.

In a similar blind test from the same time period, wine experts preferred California wines over their French counterparts, in what became known as the Judgment of Paris. Again, when the label is visible, the results are very different, as experts ascribe more sophistication and subtlety to the French wines simply because they're French, indicating the presence of bias yet again.

So it's easy to see how these blind taste tests can diminish bias in humans by removing key identifying information from the evaluation process. A similar approach can work with machines.

That is, we can simply deny the algorithm the information suspected of biasing the outcome, just as they did in the Pepsi Challenge, to ensure that it makes predictions blind to that variable. In the AFST example, the blind taste test could work like this: train the model on all data, including referral calls from the community. Then re-train the model on all the data except that one variable. If the model's predictions are equally good without referral-call information, it means the model makes predictions that are blind to that factor. But if the predictions are different when those calls are included, it indicates that either the calls represent a valid explanatory variable in the model, or there may be potential bias in the data (as has been argued for the AFST) that should be examined further before relying on the algorithm.
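As a rough illustration of that retrain-and-compare procedure, here is a minimal sketch using scikit-learn on synthetic data. The feature names, the logistic regression model, and the use of AUC as the comparison metric are assumptions made for this example; none of it is the actual AFST model or data.

```python
# Blind taste test sketch: train with and without a suspect feature,
# then compare how well the two models predict held-out outcomes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Synthetic records: two legitimate risk factors plus a suspect feature
# loosely analogous to community referral calls.
legit_a = rng.normal(size=n)
legit_b = rng.normal(size=n)
referral_calls = rng.poisson(lam=1.0, size=n)
outcome = (legit_a + legit_b + rng.normal(scale=1.5, size=n) > 1).astype(int)

X_full = np.column_stack([legit_a, legit_b, referral_calls])
X_blind = np.column_stack([legit_a, legit_b])  # suspect feature withheld

(X_full_tr, X_full_te, X_blind_tr, X_blind_te,
 y_tr, y_te) = train_test_split(X_full, X_blind, outcome, random_state=0)

full_model = LogisticRegression().fit(X_full_tr, y_tr)
blind_model = LogisticRegression().fit(X_blind_tr, y_tr)

auc_full = roc_auc_score(y_te, full_model.predict_proba(X_full_te)[:, 1])
auc_blind = roc_auc_score(y_te, blind_model.predict_proba(X_blind_te)[:, 1])

print(f"AUC with suspect feature:    {auc_full:.3f}")
print(f"AUC without suspect feature: {auc_blind:.3f}")
```

If the two scores are close, the model was effectively blind to the suspect feature all along; a meaningful gap signals that the feature is doing real predictive work and should be audited for bias before the model is relied upon.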

This process breaks the self-perpetuating, self-fulfilling prophecy that existed in the human system without AI, and keeps it out of the AI system.

My research with Kellogg collaborators Yang Yang and Youyou Wu demonstrated a similar anti-bias effect in a different domain: the replicability of scientific papers.

What separates science from superstition is that a scientific fact found in the lab or a clinical trial replicates out in the real world again and again. When it comes to evaluating the replicability or reproducibility of published scientific results, we humans struggle.

Some replication failure is expected, or even desirable, because science involves experimentation with unknowns. However, an estimated 68% of studies published in medicine, biology, and social science journals do not replicate. Replication failures continue to be unknowingly cited in the literature, driving up R&D costs by an estimated $28 billion annually and slowing discoveries of vaccines and therapies for Covid-19 and other conditions.

The problem is related to bias: when scientists and researchers review a manuscript for publication, they focus on a paper's statistical and other quantitative results in judging replicability. That is, they use the numbers in a scientific paper much more than the paper's narrative, which describes the numbers, in making this assessment. Human reviewers are also influenced by institutional labels (e.g., Cambridge University), scientific discipline labels ("physicists are smart"), journal names, and other status biases.

To address this issue, we trained a machine-learning model to estimate a paper's replicability using only the paper's reported statistics (typically used by human reviewers), its narrative text (not typically used), or a combination of the two. We studied 2 million abstracts from scientific papers and over 400 manually replicated studies from 80 journals.

The AI model using only the narrative predicted replicability better than the statistics did. It also predicted replicability better than the base rate of individual reviewers, and as well as prediction markets, where the collective intelligence of hundreds of researchers is used to assess a paper's replicability, a very costly approach. Importantly, we then used the blind taste test approach and showed that our model's predictions weren't biased by factors including topic, scientific discipline, journal prestige, or persuasion words like "unexpected" or "remarkable." The AI model provided predictions of replicability at scale and without known human biases.

In a subsequent extension of this work (in progress), we again used an AI system to reexamine the scientific papers in the study that had inadvertently published numbers and statistics containing mistakes that the reviewers hadn't caught during the review process, likely due to our general tendency to believe figures we are shown. Again, a system blind to variables that can promote bias when over-weighted in the review process, quantitative evidence in this case, was able to render a more objective evaluation than humans alone could, catching mistakes missed due to bias.

Together, the findings provide strong evidence for the value of creating blind taste tests for AI systems, to reduce or remove bias and promote fairer decisions and outcomes across contexts.

The blind-taste-test concept can be applied effectively to reduce bias in multiple domains well beyond the world of science.

Consider earnings calls led by business C-suite teams to explain recent and projected financial performance to analysts, shareholders, and others. Audience members use the content of these calls to predict future company performance, which can have large, swift impact on share prices and other key outcomes.

But again, human listeners are biased to use the numbers presented, just as in judging scientific replicability, and to pay excessive attention to who is sharing the information (a well-known CEO like Jeff Bezos or Elon Musk versus someone else). Moreover, companies have an incentive to spin the information to create more favorable impressions.

An AI system can look beyond potential bias-inducing information to factors including the text of the call (words rather than numbers) and others such as the emotional tone detected, to render more objective inputs for decision-making. We are currently examining earnings-call data with this hypothesis in mind, along with studying specific issues such as whether the alignment between the numbers presented and the verbal description of those numbers has an equal effect on analysts' evaluations if the speaker is male or female. Will human evaluators give men more of a pass in the case of misalignment? If we find evidence of bias, it will indicate that denying gender information to an AI system can yield more equality-promoting judgments and decisions related to earnings calls.

We are also applying these ideas to the patents domain, where patent applications involve a large investment and rejection rates are as high as 50%. Here, the current models used to predict a patent application's success or a patent's expected value don't perform much better than chance, and tend to use factors like whether an individual or a team filed the application, again suggesting potential bias. We are studying the value of using AI systems to examine patent text, to yield more effective, fairer judgments.

There are many more potential applications of the blind-taste-test approach. What if interviews for jobs or assessments for promotions or tenure took place with some kind of blinding mechanism in place, preventing the biased use of gender, race, or other variables in decisions? What about decisions for which startup founders receive funding, where gender bias has been evident? What if judgments about who received experimental medical treatments were stripped of potential bias-inducing variables?

To be clear, I'm not suggesting that we use machines as our sole decision-making mechanisms. After all, humans can also intentionally program decision-making AI systems to manipulate information. Still, our involvement is critical to form hypotheses about where bias may enter in the first place, and to create the right blind taste tests to avoid it. Thus, an integration of human and AI systems is the optimal approach.

In sum, it's fair to conclude that the human condition inherently includes the presence of bias. But increasing evidence suggests we can minimize or overcome that by programming bias out of the machine-based systems we use to make critical decisions, creating a more equal playing field for all.


Microsoft Hackathon leads to AI and sustainability collaboration to rid plastic from rivers and the ocean – Stories – Microsoft

Dan Morris, AI for Earth program director, says the most important result from the hackathon was that AI for Earth taught The Ocean Cleanup a lot about machine learning. "The real value was teaching them through interaction with data scientists and engineers at Microsoft," he says.

This year, The Ocean Cleanup was named an AI for Earth grantee for its work.

"Using the AI for Earth grant, we've been able to set up and run the machine learning models," De Vries says. "Having the resources at our fingertips has greatly accelerated the technical progress, by taking away practical concerns and letting us focus on the development."

"It allowed us to develop the vision that this is something we can do, not just for one river, but eventually for rivers across the globe."

Robin de Vries, right, of The Ocean Cleanup works with a Microsoft Global Hackathon team member in 2019.

"The Ocean Cleanup is highly admired, particularly in the Netherlands, where the organization has been a symbol of pride for years, even before they became more well-known internationally," says Harry van Geijn, a digital adviser for Microsoft in the Netherlands. Van Geijn is among the Microsoft staffers there who have volunteered to help The Ocean Cleanup when it comes to computer and related support.

"While its staff is relatively small, with around 100 employees, they have this cause that they pursue with great tenacity and in an extremely professional way," van Geijn says. So much so that "when I ask around for someone at Microsoft Netherlands to do something for The Ocean Cleanup, half the company raises their hand to say, 'I want to volunteer for that.'"

Drew Wilkinson at the 2019 Microsoft Global Hackathon in Redmond, Washington.

Wilkinson, who grew up in the hot, dry climate of the Arizona desert, spent time at sea as a volunteer for the Sea Shepherd Conservation Society, a nonprofit, marine wildlife conservation organization.

In 2018 at Microsoft, he and another coworker started an employee group, Microsoft's Worldwide Sustainability Community, which has grown to more than 3,000 members globally. The group focuses on ways employees can help the company be more environmentally sustainable. Wilkinson is now a community program manager for the Worldwide Communities Program, which includes the employee group he co-founded.

Wilkinson sees the issue of plastics in the ocean as a pretty solvable problem and is excited about the work that has been done, work that he spurred with an email.

"I'm not a scientist, but it doesn't take a lot of science to understand that our fate on the land is very much tied to the ocean," he says. "The ocean is the planet's life support system. Without a healthy ocean, we don't stand a chance either."

Top image: Some of the plastic and trash picked up onto the conveyor belt of The Ocean Cleanup's Interceptor 002 on the Klang River in Malaysia. Photo credit: The Ocean Cleanup.


AI is Changing Everything, Even Science Itself – Futurism

In Brief: AI is being used for much more than many realize. In fact, particle physicists are currently pushing the limits of our understanding of the universe with the help of these technologies.

AI and Particle Physics

Many might associate current artificial intelligence (AI) abilities with advanced gameplay, medical developments, and even driving. But AI is already reaching far beyond even these realms. In fact, AI is now helping particle physicists to discover new subatomic particles.

Particle physicists began integrating AI in the pursuit of particles as early as the 1980s, as the process of machine learning suits the hunt for fine patterns and subatomic anomalies particularly well. But, once an unexplored and novel technique, AI is now a fully integrated and standard part of everyday life within particle physics.

Pushpalatha Bhat, a physicist at Fermilab, described the problem in an interview with Science Magazine: "This is the proverbial needle-in-the-haystack problem. That's why it's so important to extract the most information we can from the data." This extraction is where AI comes in handy, and this ability to extract data lent itself to the 2012 discovery of the Higgs boson, which occurred using the Large Hadron Collider (LHC).

While AI has not replaced, and will never replace, the world's scientists, this unparalleled tool is being applied in ways that many could never have predicted. It is, as previously mentioned, helping researchers to push the boundaries of understanding. It's helping us to create modes of transportation that not only make daily life easier, but save countless lives.

AI is proving to be an essential component in the current quest to travel to and explore Mars, allowing probes to be controlled remotely and trusted to make changes in behavior according to a changing environment. And, even beyond medical advances, AI is making treatments more enjoyable for both patients and healthcare providers, altering an often-intimidating system.

AI technologies are also being designed that are capable of creating art. From paintings to music, we are learning that advanced machine learning algorithms are more than just the new face of industry. This makes a lot of people uneasy. Images of Will Smith in I, Robot come into view, the voice of HAL 9000 from 2001: A Space Odyssey starts speaking, and our science fiction nightmares seem realized.

But, while AI is not yet a perfectly integrated part of daily life, it is certainly pushing us forward. So, who knows: thanks to AI, we may soon really put humans on the red planet, and particle physicists might smash protons just right and reveal more about our universe than we could have ever hoped to know.


Leadership in the age of Artificial Intelligence – Analytics Insight

Stationed at the frontier of an accelerating artificial intelligence (AI) landscape, organizations need executives who make nimble, informed decisions about where and how to employ AI in their business. As the technology drives industry-wide digital transformation, it has permeated more organizations, and more parts within organizations, up to and including the C-suite. The very fundamentals of leadership need to be rethought, from overall strategy to customer experience, in order to deploy AI appropriately while considering human capital too.

As conventional business leadership gives way to new approaches, opportunities, and threats as a result of broader AI adoption, a new set of AI executives is ready to take on the challenge of driving better innovation and competitiveness. Several C-level executives, in today's dynamic AI culture, are confident enough to steer their organization's leadership team toward significant and innovative AI approaches across the business.

As it stands now, top AI executives are not only evolving at a rapid pace but also revamping their surroundings for better technology implementation. Moreover, their employees and fellow teammates support them with full confidence while promoting the positive aspects of AI. To excel further, C-level executives stress the need to train the leadership team on AI as a top priority.

Despite business leaders' optimism about artificial intelligence and the opportunities it presents, they cannot neglect its potential risks. A number of C-level executives and their leadership teams are hesitant to invest in AI technologies because of security or privacy concerns. However, showcasing the brave and progressive attributes of leadership while ensuring security through innovation, some prominent executives are experimenting with AI capabilities, and evidently, those are the ones who form the clan of topmost AI executives across the industry.

As claimed by certain market reports, business executives have seen great success with AI across five major industries: retail, transportation, healthcare, financial services, and technology itself. Tracing the success map of such leaders, executives across various other sectors are adopting AI capabilities more aggressively than before.

In the age of AI, business executives must focus on embedding AI into their strategic plans, which would subsequently enable such frontrunners to develop an enterprise-wide strategy for AI that individual business segments can follow. Moreover, as part of the leadership team, they are responsible for the financial aspects of the organization as well; therefore, applying AI to revenue and customer engagement opportunities will help them explore the use of the technology for various revenue enhancements and client experience initiatives while tracking their own progress.

AI executives should also focus on employing multiple options for acquiring AI and developing innovative applications in an effort to accelerate the adoption of AI initiatives via access to a wider pool of talent and technology solutions.


Artificial Intelligence (AI): What it is and why it …

The term artificial intelligence was coined in 1956, but AI has become more popular today thanks to increased data volumes, advanced algorithms, and improvements in computing power and storage.

Early AI research in the 1950s explored topics like problem solving and symbolic methods. In the 1960s, the US Department of Defense took interest in this type of work and began training computers to mimic basic human reasoning. For example, the Defense Advanced Research Projects Agency (DARPA) completed street mapping projects in the 1970s. And DARPA produced intelligent personal assistants in 2003, long before Siri, Alexa or Cortana were household names.

This early work paved the way for the automation and formal reasoning that we see in computers today, including decision support systems and smart search systems that can be designed to complement and augment human abilities.

While Hollywood movies and science fiction novels depict AI as human-like robots that take over the world, the current evolution of AI technologies isn't that scary, or quite that smart. Instead, AI has evolved to provide many specific benefits in every industry. Keep reading for modern examples of artificial intelligence in health care, retail and more.



How AI Is Helping Fix America's Crumbling Roads – Jalopnik

American roads suck. Our roadways scored a D on the American Society of Civil Engineers' annual infrastructure report card because they are crowded, frequently in poor condition, chronically underfunded, and becoming more dangerous. Yikes.

Since President Donald Trump's Great National Infrastructure Program, in which he pitched $1 trillion for repairing ailing roads, waterworks and bridges, is nowhere to be seen, researchers are taking matters into their own hands.


Take RoadBotics, a new startup out of Carnegie Mellon University: its program uses cell phone cameras propped up on car windshields to record video of road conditions, which then gets fed into an AI algorithm. The AI determines the sections of road most in need of repairs and produces a color-coded map that depicts where the problem areas are, Government Technology reported.
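The article does not publish RoadBotics' algorithm, but the pipeline it describes, sampling frames from windshield video, scoring pavement condition, and color-coding road segments, can be sketched roughly as follows. The scoring function, thresholds, and segment names are all placeholders invented for the example.

```python
# Hypothetical sketch of a frame-scoring, map-coloring pipeline.
from statistics import mean

def score_frame(frame) -> int:
    """Placeholder for a trained vision model that rates pavement
    from 1 (sound) to 5 (failed) given a single video frame."""
    return frame["cracking"]  # stand-in feature; a real model uses pixels

def color_for(score: float) -> str:
    if score < 2.0:
        return "green"   # sound pavement
    if score < 3.5:
        return "yellow"  # monitor; schedule preventive maintenance
    return "red"         # repair soon, before potholes form

# Frames grouped by road segment (GPS-binned in a real system).
segments = {
    "Main St 0-100m":   [{"cracking": 1}, {"cracking": 2}],
    "Main St 100-200m": [{"cracking": 4}, {"cracking": 5}],
}

for name, frames in segments.items():
    avg = mean(score_frame(f) for f in frames)
    print(f"{name}: score {avg:.1f} -> {color_for(avg)}")
```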

Using the map, civil engineers can better monitor roads to determine where problem areas could emerge before they turn into costly-to-fix potholes, RoadBotics CEO Mark DeSantis said in an interview with Jalopnik.

"It's different than waiting a year or two years to look at a road; by that time you're gonna have some bad roads," DeSantis said. "If you're monitoring every month, you can prevent those roads and have zero potholes."

RoadBotics isn't the first company to use advanced tech to help mend U.S. roads. Scientists have been working on developing magnetic asphalt that self-heals cracks, and putting special bacteria in concrete that produces calcium carbonate to fill minor cracks.

DeSantis, however, believes artificial intelligence gives civil engineers an additional leg up by providing a more consistent, detailed, data-driven picture of roads that need repair. He said countries that excel in road maintenance, like Japan and Australia, have already begun using AI for assistance.


Though the government must still allocate funding to replacing seriously damaged roads, DeSantis said using AI to predict where potholes may occur before they develop is key to repairing U.S. roads.

"There's a big difference between maintaining something and fixing it. Fixing is a temporary thing; maintaining is, if you can see (road issues) sooner, you can react to them, and you can have the road last forever."


Why AI visionary Andrew Ng teaches humans to teach computers – ABC News

Andrew Ng has led teams at Google and Baidu that have gone on to create self-learning computer programs used by hundreds of millions of people, including email spam filters and touch-screen keyboards that make typing easier by predicting what you might want to say next.

As a way to get machines to learn without supervision, he has trained them to recognize cats in YouTube videos without being told what cats were. And he revolutionized this field, known as artificial intelligence, by adopting graphics chips meant for video games.

To push the boundaries of artificial intelligence further, one of the world's most renowned researchers in the field says many more humans need to get involved. So his focus now is on teaching the next generation of AI specialists to teach the machines.

Nearly 2 million people around the globe have taken Ng's online course on machine learning. In his videos, the lanky, 6-foot-1 Briton of Hong Kong and Singaporean upbringing speaks with a difficult-to-place accent. He often tries to get students comfortable with mind-boggling concepts by acknowledging up front, in essence, that "hey, this stuff is tough."

Ng sees AI as a way to "free humanity from repetitive mental drudgery." He has said he sees AI changing virtually every industry, and any task that takes less than a second of thought will eventually be done by machines. He once famously said that the only job that might not be changed is his hairdresser's, to which a friend of his responded that, in fact, she could get a robot to do his hair.

At the end of a 90-minute interview in his sparse office in Palo Alto, California, he reveals what's partially behind his ambition.

"Life is shockingly short," the 41-year-old computer scientist says, swiveling his laptop into view. He's calculated in a Chrome browser window how many days we have from birth to death: a little more than 27,000. "I don't want to waste that many days."

BUILDING BRAINS AS A TEEN

An upstart programmer by age 6, Ng learned coding early from his father, a medical doctor who tried to program a computer to diagnose patients using data. "At his urging," Ng says, he fiddled with these concepts on his home computer. At age 16, he wrote a program to calculate trigonometric functions like sine and cosine using a "neural network," the core computing engine of artificial intelligence modeled on the human brain.

"It seemed really amazing that you could write a few lines of code and have it learn to do interesting things," he said.

After graduating high school from Singapore's Raffles Institution, Ng made the rounds of Carnegie Mellon, MIT and Berkeley before taking up residence as a professor at Stanford University.

There, he taught robotic helicopters to do aerial acrobatics after being trained by an expert pilot. The work was "inspiring and exciting," recalls Pieter Abbeel, then one of Ng's doctoral students and now a computer scientist at Berkeley.

Abbeel says he once crashed a $10,000 helicopter drone, but Ng brushed it off. "Andrew was always like, 'If these things are too simple, everybody else could do them.'"

THE MARK OF NG

Ng's standout AI work involved finding a new way to supercharge neural networks using chips most often found in video-game machines.

Until then, computer scientists had mostly relied on general-purpose processors like the Intel chips that still run many PCs. Such chips can handle only a few computing tasks simultaneously, but make up for it with blazing speed. Neural networks, however, work much better if they can run thousands of calculations simultaneously. That turned out to be a task eminently suited for a different class of chips called graphics processing units, or GPUs.

So when graphics chip maker Nvidia opened up its GPUs for general purposes beyond video games in 2007, Ng jumped on the technology. His Stanford team began publishing papers on the technique a year later, speeding up machine learning by as much as 70 times.

Geoffrey Hinton, whose University of Toronto team wowed peers by using a neural network to win the prestigious ImageNet competition in 2012, credits Ng with persuading him to use the technique. That win spawned a flurry of copycats, giving birth to the rise of modern AI.

"Several different people suggested using GPUs," Hinton says by email. But the work by Ng's team, he says, "was what convinced me."

TEACHING HOW TO TEACH COMPUTERS

Ng's fascination with AI was paralleled by a desire to share his knowledge with students. As online education took off earlier this decade, Ng discovered a natural outlet.

His "Machine Learning" course, which kicked off Stanford's online learning program alongside two other courses in 2011, immediately signed up 100,000 people without any marketing effort.

A year later, he co-founded the online-learning startup Coursera. More recently, he left his high-profile job at Baidu to launch deeplearning.ai, a startup that produces AI-training courses.

Every time he's started something big, whether it's Coursera, the Google Brain deep learning unit, or Baidu's AI lab, he has left once he felt the teams he has built can carry on without him.

"Then you go, 'Great. It's thriving with or without me,'" says Ng, who continues to teach at Stanford while working in private industry.

For Ng, one of his next challenges might include having a child with his roboticist wife, Carol Reiley. "I wish we knew how children (or even a pet dog) learns," Ng says in an email follow-up. "None of us today know how to get computers to learn with the speed and flexibility of a child."



Software that swaps out words can now fool the AI behind Alexa and Siri – MIT Technology Review

The news: Software called TextFooler can trick natural-language processing (NLP) systems into misunderstanding text just by replacing certain words in a sentence with synonyms. In tests, it was able to drop the accuracy of three state-of-the-art NLP systems dramatically. For example, Google's powerful BERT neural net was worse by a factor of five to seven at identifying whether reviews on Yelp were positive or negative.

How it works: The software, developed by a team at MIT, looks for the words in a sentence that are most important to an NLP classifier and replaces them with a synonym that a human would find natural. For example, changing the sentence "The characters, cast in impossibly contrived situations, are totally estranged from reality" to "The characters, cast in impossibly engineered circumstances, are fully estranged from reality" makes no real difference to how we read it. But the tweaks made an AI interpret the sentences completely differently.
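For intuition, here is a toy sketch of that importance-ranking-plus-synonym-swap loop in Python. It is not the MIT team's TextFooler code: the classifier is a stand-in keyword scorer and the synonym table is hard-coded for the example.

```python
# Toy adversarial word swap: rank words by importance to a classifier,
# then greedily replace important words with score-lowering synonyms.
SYNONYMS = {
    "contrived": ["engineered", "manufactured"],
    "totally": ["fully", "utterly"],
}

def classify(words):
    """Stand-in classifier: fraction of words flagged as negative."""
    negative = {"contrived", "estranged", "dreadful"}
    return sum(w in negative for w in words) / max(len(words), 1)

def attack(sentence):
    words = sentence.lower().replace(",", "").split()
    base = classify(words)
    # Rank positions by how much the score drops when the word is removed.
    importance = sorted(
        range(len(words)),
        key=lambda i: base - classify(words[:i] + words[i + 1:]),
        reverse=True,
    )
    # Greedily substitute synonyms that lower the classifier's score
    # while leaving the sentence readable to a human.
    for i in importance:
        for candidate in SYNONYMS.get(words[i], []):
            swapped = words[:i] + [candidate] + words[i + 1:]
            if classify(swapped) < classify(words):
                words = swapped
                break
    return " ".join(words), base, classify(words)

adversarial, before, after = attack(
    "The characters, cast in impossibly contrived situations, "
    "are totally estranged from reality"
)
print(adversarial)
print(f"score before: {before:.2f}, after: {after:.2f}")
```

A real attack would use a trained model's output probabilities and word embeddings to propose natural substitutions, but the greedy rank-and-replace loop has the same shape.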

Why it matters: We have seen many examples of such adversarial attacks, most often with image recognition systems, where tiny alterations to the input can flummox an AI and make it misclassify what it sees. TextFooler shows that this style of attack also breaks NLP, the AI behind virtual assistants such as Siri, Alexa and Google Home, as well as other language classifiers like spam filters and hate-speech detectors. The researchers say that tools like TextFooler can help make NLP systems more robust by revealing their weaknesses.


This AI-driven appointment booker takes annoying scheduling out of your hands – The Next Web

TLDR: With the KarenApp Scheduling Software, you can let customers book appointments, receive reminders and even make payments, all automatically.

If you're a busy entrepreneur or a freelance professional, booking appointments can turn into a major headache. That's because it isn't just about scheduling an appointment. It's about making sure you've included everyone that needs to be included. It's about everyone knowing where a virtual or in-person meeting is happening. And if your hours are billable, it's definitely about making sure you're getting paid for your time.

The KarenApp Scheduling Software isn't just a scheduling app. It's an AI-driven personal booking assistant that can help you, or your entire organization, all but fully automate your calendars without the needless flurry of back-and-forth emails, miscommunications and other logistical nightmares.

KarenApp connects to your Google calendar and gets to work, letting you talk to your calendar in natural language. If you want to schedule a meeting next Thursday, the Karen AI understands when "next Thursday" is and will get it done. And the more you use Karen, the better she'll come to understand your preferences.
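KarenApp's parser is proprietary, but the "next Thursday" trick can be sketched with the python-dateutil library. The interpretation of "next" as the coming Thursday (rather than next week's) is an assumption of this example.

```python
# Roll a date forward to the coming Thursday using dateutil.
from datetime import date, timedelta
from dateutil.relativedelta import relativedelta, TH

def next_thursday(today: date) -> date:
    # Start from tomorrow so that saying "next Thursday" on a Thursday
    # lands a week out instead of matching today itself.
    return (today + timedelta(days=1)) + relativedelta(weekday=TH(+1))

print(next_thursday(date.today()))
```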

Once you enter information about your particular business, KarenApp generates an engaging landing page so potential customers can book appointments easily. With pricing included on your page, anyone who wants to see you will not only know when you're available, so they can book a time that works for everyone, but also what your services will cost.

Once an appointment is set, KarenApp will automatically send reminders about your meeting to all parties; as well as allow you to access information about that client or alert and include other team members that should be involved.

If you're meeting virtually, KarenApp lets you automatically add a Zoom link to the appointment. And when you want to get paid for an appointment upfront, customers can connect their Stripe account and automatically pay for their meeting through KarenApp's secured interface. Users can also expect an option for PayPal payments coming soon.

With the $49.99 KarenApp Nest Plan, three team members can schedule up to 100 meetings per month on this lifetime subscription that's regularly a $490 value. For larger teams, you can also upgrade with similar savings to a 6-member, 500-meeting Hive Plan ($99.99) or a full 12-member, 1,000-meeting Woods Plan ($149.99).

Prices are subject to change.


How AI Will Change the Way We Make Decisions – Harvard Business Review

Executive Summary

Recent advances in AI are best thought of as a drop in the cost of prediction. Prediction is useful because it helps improve decisions. But it isn't the only input into decision-making; the other key input is judgment. Judgment is the process of determining what the reward to a particular action is in a particular environment. In many cases, especially in the near term, humans will be required to exercise this sort of judgment. They'll specialize in weighing the costs and benefits of different decisions, and then that judgment will be combined with machine-generated predictions to make decisions. But couldn't AI calculate costs and benefits itself? Yes, but someone would have had to program the AI as to what the appropriate profit measure is. This highlights a particular form of human judgment that we believe will become both more common and more valuable.

With the recent explosion in AI, there has been understandable concern about its potential impact on human work. Plenty of people have tried to predict which industries and jobs will be most affected, and which skills will be most in demand. (Should you learn to code? Or will AI replace coders too?)

Rather than trying to predict specifics, we suggest an alternative approach. Economic theory suggests that AI will substantially raise the value of human judgment. People who display good judgment will become more valuable, not less. But to understand what good judgment entails and why it will become more valuable, we have to be precise about what we mean.

Recent advances in AI are best thought of as a drop in the cost of prediction. By prediction, we don't just mean the future; prediction is about using data that you have to generate data that you don't have, often by translating large amounts of data into small, manageable amounts. For example, using images divided into parts to detect whether or not the image contains a human face is a classic prediction problem. Economic theory tells us that as the cost of machine prediction falls, machines will do more and more prediction.

Prediction is useful because it helps improve decisions. But it isn't the only input into decision-making; the other key input is judgment. Consider the example of a credit card network deciding whether or not to approve each attempted transaction. They want to allow legitimate transactions and decline fraud. They use AI to predict whether each attempted transaction is fraudulent. If such predictions were perfect, the network's decision process would be easy: decline if and only if fraud exists.

However, even the best AIs make mistakes, and that is unlikely to change anytime soon. People who have run credit card networks know from experience that there is a trade-off between detecting every case of fraud and inconveniencing the user. (Have you ever had a card declined when you tried to use it while traveling?) And since convenience is the whole credit card business, that trade-off is not something to ignore.

This means that to decide whether to approve a transaction, the credit card network has to know the cost of mistakes. How bad would it be to decline a legitimate transaction? How bad would it be to allow a fraudulent transaction?

Someone at the credit card association needs to assess how the entire organization is affected when a legitimate transaction is denied. They need to trade that off against the effects of allowing a transaction that is fraudulent. And that trade-off may be different for high net worth individuals than for casual card users. No AI can make that call. Humans need to do so. This decision is what we call judgment.

Judgment is the process of determining what the reward to a particular action is in a particular environment. Judgment is how we work out the benefits and costs of different decisions in different situations.

Credit card fraud is an easy decision to explain in this regard. Judgment involves determining how much money is lost in a fraudulent transaction, how unhappy a legitimate customer will be when a transaction is declined, as well as the reward for doing the right thing and allowing good transactions and declining bad ones. In many other situations, the trade-offs are more complex, and the payoffs are not straightforward. Humans learn the payoffs to different outcomes by experience, making choices and observing their mistakes.
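
For readers who think in code, here is a minimal sketch of how prediction and judgment combine in the fraud example. The fraud probability stands in for the machine's prediction; the two cost figures are the human judgment. All numbers are invented for illustration.

    def approve(p_fraud, cost_fraud_loss, cost_false_decline):
        # Prediction supplies p_fraud; judgment supplies the two costs.
        # Approve when the expected cost of approving is lower than the
        # expected cost of declining.
        expected_cost_approve = p_fraud * cost_fraud_loss
        expected_cost_decline = (1 - p_fraud) * cost_false_decline
        return expected_cost_approve < expected_cost_decline

    # A 3% fraud probability, a $500 loss if fraud slips through, and a
    # $40 goodwill cost per false decline: the expected cost of approving
    # ($15) is below that of declining ($38.80), so approve.
    print(approve(p_fraud=0.03, cost_fraud_loss=500.0, cost_false_decline=40.0))

Changing the cost estimates, say for a high net worth customer, shifts the decision without touching the prediction at all.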

Getting the payoffs right is hard. It requires an understanding of what your organization cares about most, what it benefits from, and what could go wrong.

In many cases, especially in the near term, humans will be required to exercise this sort of judgment. They'll specialize in weighing the costs and benefits of different decisions, and then that judgment will be combined with machine-generated predictions to make decisions.

But couldn't AI calculate costs and benefits itself? In the credit card example, couldn't AI use customer data to consider the trade-off and optimize for profit? Yes, but someone would have had to program the AI as to what the appropriate profit measure is. This highlights a particular form of human judgment that we believe will become both more common and more valuable.

Like people, AIs can also learn from experience. One important technique in AI is reinforcement learning, whereby a computer is trained to take actions that maximize a certain reward function. For instance, DeepMind's AlphaGo was trained this way to maximize its chances of winning the game of Go. Games are well suited to this method of learning because the reward can be easily described and programmed, shutting a human out of the loop.

But games can be cheated. As Wired reports, when AI researchers trained an AI to play the boat-racing game CoastRunners, the AI figured out how to maximize its score by going around in circles rather than completing the course as was intended. One might consider this a kind of ingenuity, but when it comes to applications beyond games this sort of ingenuity can lead to perverse outcomes.
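
The failure mode is easy to state in code. The toy numbers below are invented, but they capture the shape of the CoastRunners story: the proxy reward pays more for circling than for finishing.

    def proxy_reward(laps_of_circling, course_finished):
        # Points accrue from hitting targets while circling; finishing
        # the course pays only a one-time bonus.
        return 10 * laps_of_circling + (50 if course_finished else 0)

    reward_circling = proxy_reward(laps_of_circling=100, course_finished=False)  # 1000
    reward_finishing = proxy_reward(laps_of_circling=0, course_finished=True)    # 50

    # A reward-maximizing agent prefers circling, even though only
    # finishing satisfies the designers' true objective.
    assert reward_circling > reward_finishing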

The key point from the CoastRunners example is that in most applications, the goal given to the AI differs from the true and difficult-to-measure objective of the organization. As long as that is the case, humans will play a central role in judgment, and therefore in organizational decision-making.

In fact, even if an organization is enabling AI to make certain decisions, getting the payoffs right for the organization as a whole requires an understanding of how the machines make those decisions. What types of prediction mistakes are likely? How might a machine learn the wrong message?

Enter Reward Function Engineering. As AIs serve up better and cheaper predictions, there is a need to think clearly and work out how to best use those predictions. Reward Function Engineering is the job of determining the rewards to various actions, given the predictions made by the AI. Being great at it requires having an understanding of the needs of the organization and the capabilities of the machine. (And it is not the same as putting a human in the loop to help train the AI.)

Sometimes Reward Function Engineering involves programming the rewards in advance of the predictions so that actions can be automated. Self-driving vehicles are an example of such hard-coded rewards. Once the prediction is made, the action is instant. But as the CoastRunners example illustrates, getting the reward right isn't trivial. Reward Function Engineering has to consider the possibility that the AI will over-optimize on one metric of success, and in doing so act in a way that's inconsistent with the organization's broader goals.
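
One common mitigation, sketched below with invented weights, is to blend the raw metric with a measure of progress toward the broader goal, so that over-optimizing a single metric no longer dominates. This is a generic illustration of the idea, not a prescription from the authors.

    def engineered_reward(norm_points, norm_progress, w_points=0.2, w_progress=0.8):
        # Both inputs are normalized to [0, 1]; the weights encode the
        # organization's judgment about what actually matters.
        return w_points * norm_points + w_progress * norm_progress

    print(engineered_reward(norm_points=1.0, norm_progress=0.0))  # 0.2  (circling)
    print(engineered_reward(norm_points=0.1, norm_progress=1.0))  # 0.82 (finishing)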

At other times, such hard-coding of the rewards is too difficult. There may be so many possible predictions that it is too costly for anyone to judge all the possible payoffs in advance. Instead, some human needs to wait for the prediction to arrive and then assess the payoff. This is closer to how most decision-making works today, whether or not it includes machine-generated predictions. Most of us already do some Reward Function Engineering, but for humans, not machines. Parents teach their children values. Mentors teach new workers how the system operates. Managers give objectives to their staff, and then tweak them to get better performance. Every day, we make decisions and judge the rewards. But when we do this for humans, prediction and judgment are grouped together, and the distinct role of Reward Function Engineering has not needed to be made explicit.

As machines get better at prediction, the distinct value of Reward Function Engineering will increase as the application of human judgment becomes central.

Overall, will machine prediction decrease or increase the amount of work available for humans in decision-making? It is too early to tell. On the one hand, machine prediction will substitute for human prediction in decision-making. On the other hand, machine prediction is a complement to human judgment. And cheaper prediction will generate more demand for decision-making, so there will be more opportunities to exercise human judgment. So, although it is too early to speculate on the overall impact on jobs, there is little doubt that we will soon witness a great flourishing of demand for human judgment in the form of Reward Function Engineering.

Read the original here:

How AI Will Change the Way We Make Decisions - Harvard Business Review

IoT trends continue to push processing to the edge for artificial intelligence (AI) – Urgent Communications

As connected devices proliferate, new ways of processing have come to the fore to accommodate the device and data explosion.

For years, organizations have moved toward centralized, off-site processing architecture in the cloud and away from on-premises data centers. Cloud computing enabled startups to innovate and expand their businesses without requiring huge capital outlays on data center infrastructure or ongoing costs for IT management. It enabled large organizations to scale quickly and stay agile by using on-demand resources.

But as enterprises move toward more remote models, video-intensive communications and other processes, they need an edge computing architecture to accommodate data-hogging tasks.

These data-intensive processes need to happen within fractions of a second: Think self-driving cars, video streaming or tracking shipping trucks in real time on their route. Sending data on a round trip to the cloud and back to the device takes too much time. It can also add cost and compromise data in transit.

"Customers realize they don't want to pass a lot of processing up to the cloud, so they're thinking the edge is the real target," according to Markus Levy, head of AI technologies at NXP Semiconductors, in a piece on the rise of embedded AI.

In recent years, edge computing architecture has moved to the fore, to accommodate the proliferation of data and devices as well as the velocity at which this data is moving.

To read the complete article, visit IoT World Today.

More here:

IoT trends continue to push processing to the edge for artificial intelligence (AI) - Urgent Communications

Samsung and Bayer invest in A.I. doctor app Ada Health – CNBC

Berlin-based Ada Health, which has developed a doctor-in-your-pocket style app that uses artificial intelligence to try to diagnose symptoms, has been backed by investment arms of South Korea's Samsung and German pharmaceutical giant Bayer.

Ada Health announced Thursday it has raised a $90 million funding round at an undisclosed valuation that brings total investment in the company up to around $150 million.

Bayer led the round through its Leaps by Bayer investment arm, while Samsung invested through the Samsung Catalyst Fund, a U.S.-based venture capital fund that Samsung Electronics uses to back companies worldwide. Samsung Electronics' former chief strategy officer and corporate president, Young Sohn, has joined the board of Ada Health.

Founded in 2011 by entrepreneurs Dr. Claire Novorol, Martin Hirsch and Daniel Nathrath, Ada Health says its app has been downloaded over 11 million times.

"The app basically works like a WhatsApp chat with your trusted family doctor, but 24/7," CEO Nathrath told CNBC.

The patient starts by entering their symptoms, and an AI chatbot will ask a series of questions to try to determine the issue. After that, the app will present the patient with the conditions that are most likely to be the cause and offer some suggestions on what to do next to address the issue.
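
As a rough illustration of the general shape of such an assessment, here is a naive Python sketch that scores conditions by the fraction of their associated symptoms a patient reports. The condition-to-symptom map and the scoring rule are invented; Ada's actual medical reasoning engine is proprietary and far more sophisticated.

    CONDITION_SYMPTOMS = {
        "common cold": {"cough", "sore throat", "runny nose"},
        "influenza": {"fever", "cough", "fatigue", "body aches"},
        "migraine": {"headache", "nausea", "light sensitivity"},
    }

    def rank_conditions(reported):
        # Score each condition by the share of its symptoms the patient
        # reports, then sort from most to least likely.
        scores = {name: len(reported & symptoms) / len(symptoms)
                  for name, symptoms in CONDITION_SYMPTOMS.items()}
        return sorted(scores.items(), key=lambda item: item[1], reverse=True)

    print(rank_conditions({"fever", "cough", "fatigue"}))  # influenza ranks first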

The iOS and Android apps give generic advice such as to see a GP in the next three days. But when patients interact with Ada Health through a health system that uses the app, they can go straight into booking an appointment and sharing the outcome of their pre-assessment with a real doctor, Nathrath said.

He said the company has signed deals with several health systems, health insurers and life sciences companies. Axa OneHealth, Novartis, Pfizer and SutterHealth are listed as partners on Ada Health's website.

While the app is free for patients to download, Ada Health charges partners for access to its software.

The company said the new funding will be used to help it expand deeper into the U.S., which is already its biggest market with 2 million users. Elsewhere, Ada Health has roughly 4 million users across the U.K., Germany, Brazil and India, with roughly 1 million in each.

The funding will also be used to improve the company's algorithms, add to the medical knowledge base and go beyond 10 languages, Nathrath said.

He also wants to feed the Ada Health app with more information beyond the symptom data provided by the patient. That could include lab data, genetic testing and sensor data, Nathrath said.

"Smartwatches and other sensors have really made a big leap forward," Nathrath said. "Nowadays you can measure your blood pressure, you can do an ECG, measure heart rate variability and blood oxygen levels."

"Our ambition is really to build what we call a personal operating system for health where you wouldn't just have a symptom check, but you would be able to integrate all relevant sources of health information in a way where ideally Ada becomes this companion that can alert you before the 100 problem becomes a 100,000 a year problem."

Ada Health has received less funding than other "doctor" apps like Babylon and Kry.

Unlike Babylon and Kry, Ada Health doesn't allow patients to hold a video call with a GP.

Ada briefly ran a service called Doctor Chat that allowed users to consult with a registered GP through an on-demand chat portal. However, it was deactivated in March 2018 after being live for around a year.

"We were expecting a lot more people to actually use this than they did," Nathrath said, adding that people prefer the automated chat experience to video calls with GPs.

"When you look at telehealth, you can't scale it as well as you can an AI solution because you still need to hire a lot of doctors in different countries," Nathrath said.

The investment in Ada Health comes just over two weeks after British health start-up Huma raised $130 million from the venture arms of Bayer, Samsung and Hitachi.

Other investors in Ada Health's latest round include Vitruvian Ventures, Inteligo Bank, F4 and Mutschler Ventures.

Go here to see the original:

Samsung and Bayer invest in A.I. doctor app Ada Health - CNBC

3 reasons why 2021 will be AI’s time to shine – Siliconrepublic.com

Forrester's Srividya Sridharan looks at how AI is changing and what we can expect in 2021.

AI is transformational. AI is exciting. AI is mysterious. AI is scary. AI is omnipresent.

We've heard this oscillating narrative over the last few years and will continue to hear it, but in this unprecedented year, one thing became clear: enterprises need to find a way to safely, creatively and boldly apply AI to emerge stronger in both the short term and the long term.

2020 gave leaders the impetus, born out of necessity and confidence, to embrace AI with all its blemishes. The kinks in AI still remain: lack of trust, poor data quality, data paucity for some, and a dearth of the right types of tools and talent.

2021 will see companies and C-level leaders tackle some of these challenges head on, not because they want to but because they have to. Here's why it's time for AI to shine.

In 2021, the grittiest of companies will push AI to new frontiers, such as holographic meetings for remote work and on-demand, personalised manufacturing. They will gamify strategic planning, build simulations in the boardroom and move into intelligent edge experiences.

Coupled with this, lucky laggards will use no-code automated machine learning to implement five, 50 or 500 AI use cases faster, leapfrogging competitors whose capable, entrenched data science teams take a traditional, code-first approach to machine learning.

In 2021, more than a third of companies in adaptive and growth mode will look to AI to help with workplace disruption for both location-based, physical or human-touch workers and knowledge workers working from home.

This will include applying AI for intelligent document extraction, customer service agent augmentation, return-to-work health tracking or semiautonomous robots for social separation.

2021 will showcase the good, the bad and the ugly of artificial data, which comes in two forms: synthetic data that allows users to create datasets for training AI, and fake data that does the opposite; it perturbs training data to deliberately throw off AI.
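
The distinction is easy to see in a toy numpy sketch: synthetic data enlarges a training set by drawing fresh samples from an assumed distribution, while poisoned data keeps the features but deliberately corrupts the labels. Everything below is invented for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 2))
    y = (X[:, 0] > 0).astype(int)  # a simple ground-truth labeling rule

    # Synthetic data: fresh samples from the same assumed distribution,
    # labeled by the same rule, used to enlarge the training set.
    X_synthetic = rng.normal(size=(100, 2))
    y_synthetic = (X_synthetic[:, 0] > 0).astype(int)

    # "Fake" (poisoned) data: the same features with deliberately
    # flipped labels, crafted to throw a model off during training.
    y_poisoned = 1 - y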

Companies are also facing increasing pressure from consumer interest groups and regulators to prove data's lineage for AI, including data audit trails to ensure compliance and ethical use.

In 2021, blockchain and AI will start joining forces more seriously to support data provenance, integrity and usage tracking.

By Srividya Sridharan

Srividya Sridharan is a vice-president and research director at Forrester. A version of this article originally appeared on the Forrester blog.

Go here to read the rest:

3 reasons why 2021 will be AI's time to shine - Siliconrepublic.com

How Artificial Intelligence is Improving Customer Experience – Business.com

Artificial Intelligence is having a drastic impact on the way companies interact with their customers.

Most people are familiar with artificial intelligence because of movies like I, Robot or Star Wars. Over the years, technology has proved that artificial intelligence won't always be a science fiction myth. In 2014 alone, a total of $300 million was invested in AI startup companies, as reported by Bloomberg. AI has been making things much simpler for a lot of businesses, which inevitably makes customers happy.

In fact, AI is becoming so big that, according to Gartner, 85% of customer interactions will not be managed by humans by 2020. Forrester even predicts that AI will take over 16% of American jobs by the end of the decade.

Because of developments in technology, it is actually possible to communicate with computers much the way we communicate with people. The great thing about AI is that it can store vast amounts of information and retrieve it at any time. This function is extremely helpful for companies looking to improve customer experience, because it gives customers exactly what they want. And since customer service is an integral ingredient of customer satisfaction, the fact that AI can strengthen it translates directly into higher customer satisfaction rates.

Over time, many technology companies have delved into AI and come up with a lot of interesting results. Siri is one of the most famous examples, aiding the iPhone's customer satisfaction. For example, if you ask her to search for something in Google, she will respond and bring you to the Google page with the search results presented.

Another is Watson, an even smarter AI system. Created by IBM, Watson is a problem-solving system that has been around since 2004, and it is known for being able to understand and respond to customers through cognition, not just memory banks from a database.

As mentioned, Apple uses Siri to help iPhone users get the most out of their phones. Just like Siri, Cortana is an artificial intelligence assistant that helps phone users, only Cortana is found on Windows devices instead of Apple's.

There is also Cogito, an intelligent customer-support tool that improves the service delivered by customer service representatives.

The travel industry also benefits vastly from AI apps. Take Baarb, for example, a platform that uses AI technology to intelligently find the best travel spots for customers. All recommendations made by the platform are personalized and suited to each customer's wants. These are only some of the companies that make use of AI for customer experience.

One of the most wonderful things about AI is how it can make customer experience more personalized through the collection of data and the execution of humanlike traits. AI systems work by first collecting data about their customers and storing it in their memory banks. They then use that information to interact with the customers. The more data they store, the more intelligently they can interact. In a way, they are almost humanlike: they learn, they remember, then they apply.

By taking a look at some of the examples given above, we can see how these systems use customer data to enhance experience. Siri, for example, stores information that allows her to suggest tasks suited to your needs. Baarb does the same thing, but focuses on your travel preferences to come up with the best trips for your next vacation.

What makes these systems amazing is their ability to use the data stored in their memory to aid customers, just like a customer service representative would.

AI is slowly becoming an integral part of our lives. With this type of technology, creating good customer experiences for your consumers will be much easier. With their sharp efficiency and humanlike traits, AI systems will take over many tasks that were once done by humans. We just have to be ready for it.

Nathan Resnick

See more here:

How Artificial Intelligence is Improving Customer Experience - Business.com

AI Is Edging Into the Art World in Psychedelic Ways – Smithsonian

AI is now able to synthesize new sounds from old ones, and even compose original music

"Can machines be creative?" This question is the target of a recent Google undertaking, dubbed Project Magenta,focused on bringing artificial intelligence into the art world.

Magenta and other creative AI endeavors draw on the power of deep neural networks, systems that allow computers to sort through large amounts of data, recognizing patterns and eventually generating their own pictures, music and more. These networks had previously been put to artistic use by Google for its "DeepDream" project, which was designed to visualize how neural networks think. Researchers could feed the tool images, which it then reinterpreted into often abstract, and often trippy, works.

Last year, Google started Project Magenta to apply what it learned from these AI-created masterpieces to further push the limits of computer creativity in art, music, videos and more. Now, The New York Times' Cade Metz has tuned into the software giant's recent projects to see (and hear) what's come of the endeavor.

Along with the announcement of Project Magenta last summer, Google released the neural network's first song. The Google team gave its algorithm four notes (C, C, G, G) to work with, and then let the machine compose a roughly 90-second song with a piano sound. The little ditty is upbeat, starting slow but picking up with a drum beat added behind it as it explores patterns using those four notes.
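
A first-order Markov chain is far simpler than Magenta's recurrent models, but the sketch below shows the same basic move: seed a generator with a few notes, then extend them one step at a time. The transition table is invented for illustration.

    import random

    TRANSITIONS = {"C": ["C", "E", "G"], "E": ["C", "G"],
                   "G": ["G", "C", "A"], "A": ["G", "C"]}

    random.seed(42)
    melody = ["C", "C", "G", "G"]  # the four seed notes from the article
    while len(melody) < 16:
        # Extend the seed one note at a time from the current note's options.
        melody.append(random.choice(TRANSITIONS[melody[-1]]))
    print(" ".join(melody))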

But now, Google programmers are using those networks not only to create new pieces of music, but new instruments. For example, a tool called NSynth has analyzed hundreds of notes played by a variety of modern instruments, mapping out the features that make a guitar sound like a guitar, or a trumpet sound like a trumpet. Using these maps, users can then combine instrument characteristics to create brand-new sound makers.
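
The blending NSynth performs can be pictured as interpolation between learned embeddings. The numpy sketch below uses random 16-dimensional vectors as stand-ins; these are not NSynth's real embeddings or API, just the geometric idea.

    import numpy as np

    rng = np.random.default_rng(0)
    guitar_code = rng.normal(size=16)   # placeholder "guitar" embedding
    trumpet_code = rng.normal(size=16)  # placeholder "trumpet" embedding

    alpha = 0.5  # 0.0 = pure guitar, 1.0 = pure trumpet
    blended_code = (1 - alpha) * guitar_code + alpha * trumpet_code
    # A trained decoder would then synthesize audio from blended_code.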

A more recent project from Google trained an algorithm with examples of classical piano music to create a tool that can compose its own music within the framework of classical piano techniques, reports Matthew Hutson for Science. While you won't find PerformanceRNN, as the algorithm is called, composing a symphony any time soon, it can create short original music phrasings that are "quite expressive," as programmers Ian Simon and Sageev Oore wrote last month on the Project Magenta blog. And another algorithm has been trained from Magenta's code to be able to respond to notes that people play with its own original snippets of music, in effect creating a "duet" with an AI.

Other Google algorithms have worked on edging more into the visual art world, reports Hutson. For example, the algorithm SketchRNN has analyzed thousands of examples of human drawings to teach a computer to create basic sketches of common shapes, such as chairs, cats and trucks.

Once these models have been "trained," writes Google researcher David Ha, the computer can analyze and recreate previously submitted drawings in original ways. It can even correct mistakes that researchers deliberately added, redrawing the images more accurately, such as giving a pig four legs instead of five. Similar to the blended instruments of NSynth, artists can game these models by doing things like submitting drawings of chairs to a program that draws cats, creating blended sketches that lie somewhere between the shapes.

Some other projects haven't worked out just yet, Hutson reports, such as a tool to create new jokes. (They just weren't funny.)

Google isn't the only one interested in artsy AI. As Metz notes, last year researchers at Sony trained a neural network to compose new songs in the styles of existing artists, even creating a pop song that resembles a composition from the Beatles. Another neural network composed its own Christmas song when shown a picture of a Christmas tree.

Though some people are concerned that AI could replace us all, developers don't see these tools as ever supplanting human creativity, Hutson reports. Rather, these algorithms are tools that can help inspire and channel imagination into new creations.

Maybe one day, your muse could be a computer.


Read more:

AI Is Edging Into the Art World in Psychedelic Ways - Smithsonian

Astro raises an $8 million Series A for its AI-powered email solution for teams – TechCrunch

On the surface, Astro, launching its public beta today, is a nifty but not completely necessary email client that combines machine intelligence and a bot interface to improve workflows and increase the signal-to-noise ratio of mail for power users. But ...

Read the rest here:

Astro raises an $8 million Series A for its AI-powered email solution for teams - TechCrunch

How A Recent Workshop Of NITI Aayog Gave Boost To India’s AI Ambitions – Analytics India Magazine

NITI Aayog's AI workshop, called 'Artificial Intelligence: The India Imperative', took place recently in New Delhi.

The AI workshop, 'The India Imperative', took place on 19th Dec '20 at NITI Aayog Bhawan, where relevant stakeholders from across the country were present, from state ministers to representatives of the IT industry to professors from the IITs.

The event highlighted the continuing work of India's leading think tank in the AI field. NITI Aayog has also been publishing approach papers, along with stakeholders, to create execution plans and showcase essential suggestions.

The CEO of NITI Aayog opened the workshop with India's AI aspirations and the importance of AI for all sectors, and recommended AI Superpowers by Kai-Fu Lee. He added that India is expected to reach $15 trillion, which would be more than the US and China together.

In his keynote address, CEO Amitabh Kant kickstarted the deliberations by emphasising the importance of 'AI for All' in realising India's artificial intelligence aspirations. The workshop focused on the five sectors that would benefit the most from AI: healthcare, agriculture, education, infrastructure and transportation.

Arnab Kumar, the Programme Director, gave a presentation on AI for All and discussed four fundamental themes:

1. Data Rich to Data Intelligent

2. Research & Development

3. AI-specific Computing

4. Large-scale AI adoption

The inaugural session was followed by breakout sessions on the following topics: structured data infrastructure for AI, a research ecosystem for AI, moonshots for India, and adoption with a focus on healthcare, education and agriculture.

In the breakout sessions, participants discussed a scalable approach to building solutions for a billion citizens by leveraging technologies like AI/ML.

In 2018-2019, the government mandated NITI Aayog to create the National Program on AI, with the aim of guiding research and development in artificial intelligence for India. NITI Aayog came out with the National Strategy for Artificial Intelligence (NSAI) discussion paper in June 2018 to highlight the Indian government's role in boosting AI.

NITI Aayog has taken a three-part approach here: undertaking exploratory proof-of-concept AI projects in different areas of the country, creating a national strategy for a vibrant AI ecosystem in India, and collaborating with experts and stakeholders in the field. The recent workshop was a part of the think tank's engagement with stakeholders, including multiple startups.

One such startup is Silversparro, an AI-powered video analytics firm invited by NITI Aayog for a day-long session on realising India's AI aspirations. The startup presented its views on how AI can help India leapfrog in sectors like manufacturing and heavy industry, and give a boost to SMEs.

"We are heartened by Niti Aayog's focus on making India an AI Superpower. We are also proud to be contributing directly by leveraging AI for making Indian Manufacturing more productive with our latest offering Sparrosense AI Supervisor," said Abhinav Kumar Gupta, Founder & CEO at Silversparro.

At the workshop, several speakers, including those from state governments such as Telangana, talked about leadership and vision. In fact, Telangana presented its 'Year of AI' at NITI Aayog's workshop.


India has rich publicly available data, and across government departments various processes have been digitised for reporting and analytics insights, which feed into information systems and visualisation dashboards. According to NITI Aayog, this data is being utilised to track and visualise processes and make iterative enhancements.

The National Data and Analytics Platform (NDAP) is an initiative aimed at aiding India's progress by promoting data-driven discourse and decision-making. NDAP also aims to standardise data across multiple government sources, provide flexible analytics, and make data easily accessible in formats conducive to research, innovation, policy-making and public consumption. As part of it, multiple datasets have been presented using a standardised schema, with common geographical and temporal identifiers.

However, the data landscape can be further improved by making all public government data smoothly accessible to stakeholders in a user-friendly manner. Further, data across different government assets, such as the websites of ministries and departments of the central and state governments, should be interlinked to enable analytics and insights.



See the original post:

How A Recent Workshop Of NITI Aayog Gave Boost To India's AI Ambitions - Analytics India Magazine