AI Could Revolutionize War as Much as Nukes – WIRED

In 1899, the world's most powerful nations signed a treaty at The Hague that banned military use of aircraft, fearing the emerging technology's destructive power. Five years later the moratorium was allowed to expire, and before long aircraft were helping to enable the slaughter of World War I. "Some technologies are so powerful as to be irresistible," says Greg Allen, a fellow at the Center for a New American Security, a non-partisan Washington, DC, think tank. "Militaries around the world have essentially come to the same conclusion with respect to artificial intelligence."

Allen is coauthor of a new 132-page report on the effect of artificial intelligence on national security. One of its conclusions is that the impact of technologies such as autonomous robots on war and international relations could rival that of nuclear weapons. The report was produced by Harvard's Belfer Center for Science and International Affairs, at the request of IARPA, the research agency of the Office of the Director of National Intelligence. It lays out why technologies like drones with bird-like agility, robot hackers, and software that generates photo-real fake video are on track to make the American military and its rivals much more powerful.

New technologies like those can be expected to bring with them a series of excruciating moral, political, and diplomatic choices for America and other nations. Building up a new breed of military equipment using artificial intelligence is one thing; deciding what uses of this new power are acceptable is another. The report recommends that the US start considering what uses of AI in war should be restricted using international treaties.

The US military has been funding, testing, and deploying various shades of machine intelligence for a long time. In 2001, Congress even mandated that one-third of ground combat vehicles should be uncrewed by 2015, a target that has been missed. But the Harvard report argues that recent, rapid progress in artificial intelligence that has invigorated companies such as Google and Amazon is poised to bring an unprecedented surge in military innovation. "Even if all progress in basic AI research and development were to stop, we would still have five or 10 years of applied research," Allen says.

In the near term, America's strong public and private investment in AI should give it new ways to cement its position as the world's leading military power, the Harvard report says. For example, nimbler, more intelligent ground and aerial robots that can support or work alongside troops would build on the edge in drones and uncrewed ground vehicles that has been crucial to the US in Iraq and Afghanistan. That should mean any given mission requires fewer human soldiers, if any at all.

The report also says that the US should soon be able to significantly expand its powers of attack and defense in cyberwar by automating work like probing and targeting enemy networks or crafting fake information. Last summer, to test automation in cyberwar, Darpa staged a contest in which seven bots attacked each other while also patching their own flaws.

As time goes on, improvements in AI and related technology may also shake up the balance of international power by making it easier for smaller nations and organizations to threaten big powers like the US. Nuclear weapons may be easier than ever to build, but they still require resources, technologies, and expertise in relatively short supply. Code and digital data tend to get cheap, or end up spreading around for free, fast. Machine learning has become widely used, and image and facial recognition now crop up in science fair projects.

The Harvard report warns that the commoditization of technologies such as drone delivery and autonomous passenger vehicles could turn them into powerful tools of asymmetric warfare. ISIS has already started using consumer quadcopters to drop grenades on opposing forces. Similarly, techniques developed to automate cyberwar can probably be expected to find their way into the vibrant black market in hacking tools and services.

You could be forgiven for starting to sweat at the thought of nation-states fielding armies of robots that decide for themselves whether to kill. Some people who have helped build up machine learning and artificial intelligence already are. More than 3,000 researchers, scientists, and executives from companies including Microsoft and Google signed a 2015 letter to the Obama administration asking for a ban on autonomous weapons. "I think most people would be very uncomfortable with the idea that you would launch a fully autonomous system that would decide when and if to kill someone," says Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence and a signatory to the 2015 letter. But he concedes it might take just one country deciding to field killer robots to get others to change their minds about autonomous weapons. "Perhaps a more realistic scenario is that countries do have them, and abide by a strict treaty on their use," he says. In 2012, the Department of Defense set a temporary policy requiring a human to be involved in decisions to use lethal force; it was updated to be permanent in May this year.

The Harvard report recommends that the National Security Council, DoD, and State Department start studying now what internationally agreed-on limits ought to be imposed on AI. Miles Brundage, who researches the impacts of AI on society at the University of Oxford, says there's reason to think that AI diplomacy can be effective, if countries can avoid getting trapped in the idea that the technology is a race in which there will be one winner. "One concern is that if we put such a high premium on being first, then things like safety and ethics will go by the wayside," he says. "We saw in the various historical arms races that collaboration and dialog can pay dividends."

Indeed, the fact that there are only a handful of nuclear states in the world is proof that very powerful military technologies are not always irresistible. "Nuclear weapons have proven that states have the ability to say, 'I don't even want to have this technology,'" Allen says. Still, the many potential uses of AI in national security suggest that the self-restraint of the US, its allies, and adversaries is set to get quite a workout.

UPDATE 12:50 pm ET 07/19/17: An earlier version of this story incorrectly said the Department of Defense's directive on autonomous weapons was due to expire this year.

An IBM AI Invented Perfumes That’ll Soon Sell in 4,000 Stores

Eau de AI

We already knew that IBM’s artificial intelligence (AI) Watson was a “Jeopardy” champion that dabbled in cancer diagnosis and life insurance.

Now, according to a new story in Vox, a cousin of Watson has accomplished an even finer task: creating perfumes. Soon, these AI-invented perfumes will go on sale in about 4,000 locations in Brazil.

Smell Curve

Fragrances have traditionally been created by sought-after perfumers. But Symrise, a German fragrance company with clients including Avon and Coty, recently struck a deal with IBM to see how its AI tools could be used to modernize the process.

IBM created an algorithm — its name is Philyra, according to Datanami — that studies fragrance formulas and customer data, then spits out new perfumes inspired by those training materials.

Nose Warmer

The first results of the collaboration are two scents, and the AI-invented perfumes will soon go on sale at O Boticário, a prominent Brazilian beauty chain. Spokespeople declined to confirm to Vox precisely which fragrances were invented by the AI.

It’s a sure sign, though, that AI creations are starting to leak out into the world of consumer products — and next time you sniff a sampler, it’s possible that it wasn’t formulated by a human at all.

READ MORE: Is AI the Future of Perfume? IBM Is Betting on It [Vox]

More on IBM AI: IBM's New AI Can Predict Psychosis in Your Speech

What Salesforce Einstein teaches us about enterprise AI – VentureBeat

Every business has customers. Every customer needs care. That's why CRM is so critical to enterprises, but between incomplete data and clunky workflows, sales and marketing operations at most companies are less than optimal.

At the same time, companies that aren't Google or Facebook don't have billion-dollar R&D budgets to build out AI teams that can automate away human inefficiencies. Even companies with the right technical talent don't have the petabytes of data that the tech titans use to train cutting-edge neural network models.

Salesforce hopes to plug this AI knowledge gap with Einstein. According to Chief Scientist Richard Socher, Einstein is an AI layer, not a standalone product, that infuses AI features and capabilities across all of the Salesforce Clouds.

The 150,000+ companies that already use Salesforce should be able to simply flip a switch and deploy AI capabilities across their organization. Organizations with data science and machine learning teams of their own can extend the base functionality through predictive APIs such as Predictive Vision and Predictive Sentiment Services, which allow companies to understand how their products feature in images and video and how consumers feel about them.

The improvements are already palpable. According to Socher, Salesforce Marketing Cloud's predictive audiences feature helps marketers home in on high-value outreach as well as re-engage users who might be in danger of unsubscribing. The technology has led to an average 25% lift in clicks and opens. Customers of Salesforce's Sales Cloud have seen a 300% increase in conversions from leads to opportunities with predictive lead scoring, while customers of Commerce Cloud have seen a 7-15% increase in revenue per site visitor.

Achieving these results has not been cheap. Salesforce's machine learning and AI buying spree includes RelateIQ ($390 million), BeyondCore ($110 million), and PredictionIO ($58 million), as well as deep learning specialist MetaMind, of which Socher was previously founder and CEO/CTO. Marc Benioff spent over $4 billion in 2016 to acquire the right talent and tech.

Even with all the right money and the right people, rolling out AI for enterprises is fraught with peril due to competition and high expectations. Gartner analyst Todd Berkowitz pointed out that Einstein's capabilities were not nearly as sophisticated as standalone solutions on the market. Other critics say the technology is at least a year and a half from being fully baked.

Infer is one of those aforementioned standalone solutions offering predictive analytics for sales and marketing, putting it in direct competition with Salesforce. In a detailed article about the current AI hype, CEO Vik Singh argues that big companies like Salesforce are making machine learning feel like AWS infrastructure, which won't result in sticky adoption. "Machine learning is not like AWS, which you can just spin up and magically connect to some system," Singh adds.

Chief Scientist Socher acknowledges that challenges exist, but believes they are surmountable.

Communication is at the core of CRM, but while computers have surpassed humans in many key computer vision tasks, natural language processing (NLP) and natural language understanding (NLU) approaches fall short of being performant in high-stakes enterprise environments.

The problem with most neural network approaches is that they train models on a single task and a single data type to solve a narrow problem. "Conversation, on the other hand, requires different types of functionality. You have to be able to understand social cues and the visual world, reason logically, and retrieve facts. Even the motor cortex appears to be relevant for language understanding," explains Socher. "You cannot get to intelligent NLP without tackling multi-task approaches."

That's why the Salesforce AI Research team is innovating on a joint many-task learning approach that leverages transfer learning, where a neural network applies knowledge of one domain to other domains. In theory, understanding linguistic morphology should also accelerate understanding of semantics and syntax.

In practice, Socher and his deep learning research team have been able to achieve state-of-the-art results on academic benchmark tests for named entity recognition (can you identify key objects, locations, and persons?) and semantic similarity (can you identify words and phrases that are synonyms?). Their approach can solve five NLP tasks (chunking, dependency parsing, semantic relatedness, textual entailment, and part-of-speech tagging) at once, and it also builds in a character model to handle incomplete, misspelled, or unknown words.
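The article does not spell out the architecture, but the core idea of joint many-task learning, one shared encoder feeding several task-specific heads so that gradients from every task refine the same underlying representation, can be sketched roughly as follows. This is a toy PyTorch illustration, not Salesforce's actual model; the task names, label sizes, and toy batch are invented for the example.

```python
# Minimal multi-task sketch: a shared text encoder with one output head per task.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

    def forward(self, token_ids):
        embedded = self.embed(token_ids)       # (batch, seq_len, embed_dim)
        _, (hidden, _) = self.lstm(embedded)   # keep the final hidden state
        return hidden[-1]                      # (batch, hidden_dim)

class MultiTaskModel(nn.Module):
    def __init__(self, task_sizes, hidden_dim=256):
        super().__init__()
        self.encoder = SharedEncoder(hidden_dim=hidden_dim)
        # One linear head per task; every task shares the encoder underneath.
        self.heads = nn.ModuleDict(
            {task: nn.Linear(hidden_dim, n) for task, n in task_sizes.items()}
        )

    def forward(self, token_ids, task):
        return self.heads[task](self.encoder(token_ids))

# Hypothetical label counts for two of the tasks mentioned in the article.
model = MultiTaskModel({"textual_entailment": 3, "semantic_relatedness": 5})
tokens = torch.randint(0, 10000, (4, 12))           # a toy batch of 4 sequences
logits = model(tokens, task="textual_entailment")
print(logits.shape)                                 # torch.Size([4, 3])
```

In a real setting each task would supply its own labeled data and loss, and training would alternate among them so the shared encoder benefits from all five tasks at once.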

Socher believes that AI researchers will achieve transfer learning capabilities in more comprehensive ways in 2017 and that speech recognition will be embedded in many more aspects of our lives. "Right now consumers are used to asking Siri about the weather tomorrow, but we want to enable people to ask natural questions about their own unique data," he says.

For Salesforce Einstein, Socher is building a comprehensive Q&A system on top of multi-task learning models. To learn more about Salesforce's vision for AI, you can hear Socher speak at the upcoming AI By The Bay conference in San Francisco (VentureBeat discount code VB20 for 20% off).

Solving difficult research problems is only step one. "What's surprising is that you may have solved a critical research problem, but operationalizing your work for customers requires so much more engineering work and talented coordination across the company," Socher reveals.

"Salesforce has hundreds of thousands of customers, each with their own analyses and data," he explains. "You have to solve the problem at a meta level and abstract away all the complexity of how you do it for each customer. At the same time, people want to modify and customize the functionality to predict anything they want."

Socher identifies three key phases of enterprise AI rollout: data, algorithms, and workflows. Data happens to be the first and biggest hurdle for many companies to clear. In theory, companies have the right data, but then you find the data is distributed across too many places, doesn't have the right legal structure, is unlabeled, or is simply not accessible.

Hiring top talent is also non-trivial, as computer scientists like to say. Different types of AI problems come with different levels of complexity. While some AI applications are simpler, challenges with unstructured data such as text and vision mean the experts who can handle them are rare and in demand.

The most challenging piece is the last part: workflows. What's the point of fancy AI research if nobody uses your work? Socher emphasizes that you have to be very careful to think about how to empower users and customers with your AI features. This is very complex but also very specific: workflow integration for sales processes is very different from that for self-driving cars.

Until we invent AI that invents AI, iterating on our data, research, and operations is a never-ending job for us humans. "Einstein will never be fully complete. You can always improve workflows and make them more efficient," Socher concludes.

This article appeared originally at Topbots.

Mariya Yao is the Head of R&D at Topbots, a site devoted to chatbots and AI.

AI will boost global GDP by nearly $16 trillion by 2030, with much of the gains in China – Quartz

Much has already been made about how artificial intelligence is going to transform our lives, ranging from visions of the future in which robots make humans obsolete to utopias in which technology solves intractable problems and frees up people to pursue their passions. Consultancy firm PwC ran the numbers and came up with a relatively rosy scenario with regard to the impact AI will have on the global economy. By 2030, global GDP could increase by 14%, or $15.7 trillion, because of AI, the firm said in a report today (pdf).

Almost half of these economic gains will accrue to China, where AI is projected to give the economy a 26% boost over the next 13 years, the equivalent of an extra $7 trillion in GDP. North America can expect a 14.5% increase in GDP, worth $3.7 trillion.

According to PwC, North America will get the biggest economic boost in the next few years, as its consumers and industries are more ready to incorporate AI. By the mid-2020s, however, China will rise to the top. Since China's economy is heavy on manufacturing, there is a lot of potential for technology to boost productivity, but it will take time for new technology and the necessary expertise to come up to speed. When this happens, AI-enabled technologies will be exported from China to North America. What's more, Chinese firms tend to re-invest more capital than North American and European ones, further boosting business growth in AI.

PwC's study defines four types of AI: automated intelligence, which performs tasks on its own; assisted intelligence, which helps people perform tasks faster and better; augmented intelligence, which helps people make better decisions; and autonomous intelligence, which automates decision-making entirely. The firm's forecasts isolate the potential growth from AI, keeping all other factors in the economy equal.

A large part of the forecast GDP gains, $6.6 trillion, is expected to come from increased labor productivity, with businesses automating processes or using AI to assist their existing workforce. This suggests PwC believes AI will generate a productivity boost that's bigger than previous technological breakthroughs; despite recent advancements, global productivity growth is very low, and economists are puzzled about how to get out of this trap.

The rest of the projected economic growth would come from increased consumer demand for personalized and AI-enhanced products and services. The sectors that have the most to gain on this front are health care, financial services, and the auto industry.
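Taken together, the two channels account for the headline number. As a quick back-of-the-envelope check (the consumption-side figure below is simply the remainder implied by the report's totals, not a number quoted above):

```python
# Rough split of PwC's projected $15.7 trillion global GDP uplift by 2030.
total_gain = 15.7            # trillion US$, headline figure
productivity_gain = 6.6      # trillion US$, from automation and AI-assisted labor
consumption_gain = total_gain - productivity_gain   # remainder, ~9.1 trillion US$

print(f"Productivity share: {productivity_gain / total_gain:.0%}")   # ~42%
print(f"Consumption share:  {consumption_gain / total_gain:.0%}")    # ~58%
```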

Notably, there is no panic in the report about AI leading to excessive job losses; in previous reports, PwC has already tried to debunk some of the scarier forecasts on that front. Instead, the researchers recommend that businesses prepare for a hybrid workforce in which humans and AI work side by side. In harmony? TBD.

Nepal should gamble on AI – The Phnom Penh Post

A statue of Beethoven by German artist Ottmar Hoerl stands in front of a piano during a presentation of part of the completion of Beethoven's 10th Symphony, created using artificial intelligence, at the Telekom headquarters in Bonn, western Germany. AFP

Artificial intelligence (AI) is an essential part of the fourth industrial revolution, and Nepal can still catch up to global developments.

The last decade has seen two significant events: Nepal promulgated a new constitution after decades of instability and is now aiming for prosperity, and AI saw a resurgence through deep learning, impacting a wide variety of fields. Though unrelated, one can help the other: AI can help Nepal in its quest for development and prosperity.

AI was conceptualised during the 1950s and has seen various phases since. The concept caught the public's imagination, and hundreds, if not thousands, of movies and novels were created based on a similar idea of a machine's intelligence being on par with humans.

But human intelligence is a very complex phenomenon and is diverse in its abilities, from rationalisation to recognising a person's face. Even the seemingly simple task of recognising faces, when captured at different camera angles, was considered a difficult challenge for AI as late as the first decade of this century.

However, thanks to better algorithms, greater computation capabilities, and loads of data from the magic of the internet and social media giants, current AI systems are now capable of performing such facial recognition tasks better than humans. Some other exciting landmarks include surpassing trained medical doctors in diagnosing diseases from X-ray images and a self-taught AI beating professionals at the strategy board game Go. Although AI may still be far from general human intelligence, these examples should be more than enough to start paying attention to the technology.

The current leap in AI is now considered an essential ingredient of the fourth industrial revolution. The first industrial revolution, driven by the steam engine, started in Great Britain during the late 1700s, quickly expanded to other European countries and America, and led to rapid economic prosperity. This further opened floodgates of innovation and wealth creation, leading to the second and third industrial revolutions. A case study of this could be the relationship between Nokia and Finland.

Both were faring poorly in economic terms in the late 1980s. But both the company and the country gambled on GSM technology, which later went on to become the world's dominant network standard. In the single decade that followed, Finland achieved unprecedented economic growth, with Nokia accounting for more than 70 per cent of the Helsinki stock exchange's market capitalisation. That decade transformed Finland into one of the most specialised countries in information and communication technology, despite its having gone through its most severe economic crisis since World War II.

The gamble involved not just the motivation to support new technology, but a substantial investment through the Finnish Funding Agency for Technology and Innovation into Nokia's research and development projects. This funding was later returned in the form of colossal tax revenue, employment opportunities, and further demand for skilled human resources. All of this resulted in an ecosystem with a better educational system and entrepreneurial opportunities.

Owing to the years of political turmoil and instability, Nepal missed out on these past industrial revolutions. But overlooking the current one might leave us far behind.

Global AI phenomenon

A recent study of the global AI phenomenon has shown that developed countries have invested heavily in talent and the associated market and have already started to see a direct contribution from AI to their economies. Some African countries are making sure that they are not left behind, with proper investment in AI research and development. AI growth in Africa has seen applications in agriculture and healthcare. Google, positioning itself as an AI-first company, has caught this trend in Africa and opened its first African AI lab in Accra, Ghana.

So is Nepal too late to this party? Perhaps. But Nepal still has a chance of catching up. Instead of scattering its focus and the available resources, the country now needs to narrow its investments into AI and technology. It will all start with the central government drawing up a concrete plan for AI development for the upcoming decade.

Similar policies have already been released by many other countries, including Nepal's neighbours India and China. It is unfortunate that China's AI strategy, reported at the 19th Party Congress by Chinese President Xi Jinping in 2017, received close to no attention in Nepal compared to the Belt and Road Initiative (BRI) announced in 2013.

An essential component of such a strategic plan should be enhancing Nepal's academic institutions. Fortunately, any such programme from the government could be facilitated by recent initiatives like Naami Nepal, an AI research organisation, or NIC Nepal, an innovation centre started by Mahabir Pun.

Moreover, thanks to the private sector, Nepal has also begun to see AI-based companies like Fusemachines and Paaila Technology that are attempting to close the gap. It has now become necessary to leverage AI for inclusive economic growth to fulfil our dreams of prosperity.

THE KATHMANDU POST/ASIA NEWS NETWORK

All Organizations Developing AI Must Be Regulated, Warns Elon Musk – Analytics Insight

As artificial intelligence (AI) has developed over the past few years, Tesla's Elon Musk has repeatedly voiced serious concerns and warnings about its negative effects. Now the Tesla and SpaceX CEO is once again sounding a warning note: he recently tweeted that "all organizations developing advanced AI should be regulated, including Tesla."

Musk was responding to a new MIT Technology Review profile of OpenAI, an organization founded in 2015 by Musk along with Sam Altman, Ilya Sutskever, Greg Brockman, Wojciech Zaremba, and John Schulman. OpenAI was initially formed as a non-profit backed by US$1 billion in funding from its initial investors, with the aim of pursuing open research into advanced AI and ensuring it was developed in the interest of benefiting society, rather than leaving it in the hands of a small and narrowly interested few (i.e., for-profit technology companies).

He also responded to a tweet posted back in July about how OpenAI originally billed itself as a nonprofit but is now seeking to license its closed technology. Musk, who was one of the company's founders but is no longer part of it, said that there were reasonable concerns.

Musk later exited the company, reportedly due to disagreements about its direction.

Back in April, Musk said during an interview at the World Artificial Intelligence Conference in Shanghai that computers would eventually surpass us in every single way.

"The first thing we should assume is we are very dumb," Musk said. "We can definitely make things smarter than ourselves."

As examples, Musk pointed to computer programs that can beat chess champions, as well as technology from Neuralink, his own brain-interface company, which may eventually be able to help people boost their cognitive abilities in some spheres.

AI is being criticized by others besides Musk, however. Digital rights groups and the American Civil Liberties Union (ACLU) have called for either a complete ban on, or more transparency in, AI technology such as facial recognition software. Even Google's CEO, Sundar Pichai, has warned of the dangers of AI, calling for more regulation of the technology.

The Tesla and SpaceX CEO has been outspoken about the potential dangers of AI before. During a talk sponsored by South by Southwest in Austin in 2018, Musk talked about the dangers of artificial intelligence.

Moreover, he tweeted in 2014 that AI could be more dangerous than nukes, and told an audience at an MIT Aeronautics and Astronautics symposium that year that AI was our biggest existential threat and that humanity needs to be extremely careful. "With artificial intelligence, we are summoning the demon," he said. "In all those stories where there's the guy with the pentagram and the holy water, it's like, yeah, he's sure he can control the demon. Didn't work out."

However, not all of his Big Tech contemporaries agree. Facebook's chief AI scientist Yann LeCun described his call for prompt AI regulation as "nuts," while Mark Zuckerberg said his comments on the risks of the tech were "pretty irresponsible." Musk responded by saying the Facebook founder's understanding of the subject is limited.

Smriti is a Content Analyst at Analytics Insight, where she writes tech and business articles. She adores books, crafts, creative work, movies, and music.

Global AI in Healthcare Diagnosis Market 2020-2027 – AI in Future Epidemic Outbreaks Prediction and Response – ResearchAndMarkets.com – Business Wire

DUBLIN--(BUSINESS WIRE)--The "Artificial Intelligence in Healthcare Diagnosis Market Forecast to 2027 - COVID-19 Impact and Global Analysis by Diagnostic Tool; Application; End User; Service; and Geography" report has been added to ResearchAndMarkets.com's offering.

The global artificial intelligence (AI) in healthcare diagnosis market was valued at US$ 3,639.02 million in 2019 and is projected to reach US$ 66,811.97 million by 2027; it is expected to grow at a CAGR of 44% during 2020-2027.
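Those headline figures are internally consistent: compounding the 2019 base to the 2027 forecast over eight years reproduces roughly the stated growth rate, as the quick check below shows (values taken from the figures above).

```python
# Back out the implied CAGR from the reported 2019 and 2027 market sizes.
start_value = 3_639.02    # US$ million, 2019
end_value = 66_811.97     # US$ million, 2027 forecast
years = 2027 - 2019       # eight compounding periods

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # ~43.9%, in line with the ~44% cited
```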

The growth of the market is mainly attributed to factors such as the rising adoption of AI in disease identification and diagnosis, and increasing investments in AI healthcare startups. However, the lack of a skilled workforce and ambiguity in regulatory guidelines for medical software are the factors hindering the growth of the market.

Artificial intelligence in healthcare is one of the most significant technological advancements in medicine so far. The involvement of multiple startups in the development of AI-driven imaging and diagnostic solutions is a major factor contributing to the growth of the market. China, the US, and the UK are emerging as popular hubs for healthcare innovation.

Additionally, the British government has announced the establishment of a National Artificial Intelligence Lab that would collaborate with the country's universities and technology companies to conduct research on cancer, dementia, and heart disease. UK-based startups have benefited from the government's robust library of patient data, as British citizens share their anonymous healthcare data with the British National Health Service. As a result, the number of artificial intelligence startups in the healthcare sector has grown significantly in the past few years, and the trend is expected to continue in the coming years.

Based on diagnostic tool, the global artificial intelligence in healthcare diagnosis market is segmented into medical imaging tool, automated detection system, and others. The medical imaging tool segment held the largest share of the market in 2019, and the market for automated detection system is expected to grow at the highest CAGR during the forecast period.

Based on application, the global artificial intelligence in healthcare diagnosis market is segmented into eye care, oncology, radiology, cardiovascular, and others. The oncology segment held the largest share of the market in 2019, and the radiology segment is expected to register the highest CAGR during the forecast period.

Based on service, the global artificial intelligence in healthcare diagnosis market is segmented into tele-consultation, tele-monitoring, and others. The tele-consultation segment held the largest share of the market in 2019; however, the tele-monitoring segment is expected to register the highest CAGR during the forecast period.

Based on end user, the global artificial intelligence in healthcare diagnosis market is segmented into hospital and clinic, diagnostic laboratory, and home care. The hospital and clinic segment held the largest share of the market in 2019 and is expected to register the highest CAGR during the forecast period.

Key Topics Covered

1. Introduction

1.1 Scope of the Study

1.2 Report Guidance

1.3 Market Segmentation

1.3.1 Artificial Intelligence in Healthcare Diagnosis Market - By Diagnostic Tool

1.3.2 Artificial Intelligence in Healthcare Diagnosis Market - By Application

1.3.3 Artificial Intelligence in Healthcare Diagnosis Market - By Service

1.3.4 Artificial Intelligence in Healthcare Diagnosis Market - By End User

1.3.5 Global Artificial Intelligence in Healthcare Diagnosis Market - By Geography

2. Artificial Intelligence in Healthcare Diagnosis Market - Key Takeaways

3. Research Methodology

3.1 Coverage

3.2 Secondary Research

3.3 Primary Research

4. Artificial Intelligence in Healthcare Diagnosis Market - Market Landscape

4.1 Overview

4.2 PEST Analysis

4.2.1 North America - PEST Analysis

4.2.2 Europe - PEST Analysis

4.2.3 Asia-Pacific - PEST Analysis

4.2.4 Middle East & Africa - PEST Analysis

4.2.5 South & Central America

4.3 Expert Opinion

5. Artificial Intelligence in Healthcare Diagnosis Market - Key Market Dynamics

5.1 Market Drivers

5.1.1 Rising Adoption of Artificial Intelligence (AI) in Disease Identification and Diagnosis

5.1.2 Increasing Investment in AI Healthcare Start-ups

5.2 Market Restraints

5.2.1 Lack of skilled AI Workforce and Ambiguous Regulatory Guidelines for Medical Software

5.3 Market Opportunities

5.3.1 Increasing Potential in Emerging Economies

5.4 Future Trends

5.4.1 AI in Epidemic Outbreak Prediction and Response

5.5 Impact Analysis

6. Artificial Intelligence in Healthcare Diagnosis Market - Global Analysis

6.1 Global Artificial Intelligence in Healthcare Diagnosis Market Revenue Forecast and Analysis

6.2 Global Artificial Intelligence in Healthcare Diagnosis Market, By Geography - Forecast and Analysis

6.3 Market Positioning of Key Players

7. Artificial Intelligence in Healthcare Diagnosis Market - By Diagnostic Tool

7.1 Overview

7.2 Artificial Intelligence in Healthcare Diagnosis Market Revenue Share, by Diagnostic Tool (2019 and 2027)

7.3 Medical Imaging Tool

7.4 Automated Detection System

7.5 Others

8. Artificial Intelligence in Healthcare Diagnosis Market Analysis, By Application

8.1 Overview

8.2 Artificial Intelligence in Healthcare Diagnosis Market Revenue Share, by Application (2019 and 2027)

8.3 Eye Care

8.4 Oncology

8.5 Radiology

8.6 Cardiovascular

8.7 Others

9. Artificial Intelligence in Healthcare Diagnosis Market - By End-User

9.1 Overview

9.2 Artificial Intelligence in Healthcare Diagnosis Market, by End-User, 2019 and 2027 (%)

9.3 Hospital and Clinic

9.4 Diagnostic Laboratory

9.5 Home Care

10. Artificial Intelligence in Healthcare Diagnosis Market - By Service

10.1 Overview

10.2 Artificial Intelligence in Healthcare Diagnosis Market, by Service, 2019 and 2027 (%)

10.3 Tele-Consultation

10.4 Tele-Monitoring

10.5 Others

11. Artificial Intelligence in Healthcare Diagnosis Market - Geographic Analysis

11.1 North America: Artificial Intelligence in Healthcare Diagnosis Market

11.2 Europe: Artificial Intelligence in Healthcare Diagnosis Market

11.3 Asia-Pacific: Artificial Intelligence in Healthcare Diagnosis Market

11.4 Middle East and Africa: Artificial Intelligence in Healthcare Diagnosis Market

11.5 South & Central America: Artificial Intelligence in Healthcare Diagnosis Market

12. Impact of COVID-19 Pandemic on Global Artificial Intelligence in Healthcare Diagnosis Market

12.1 North America: Impact Assessment of COVID-19 Pandemic

12.2 Europe: Impact Assessment of COVID-19 Pandemic

12.3 Asia-Pacific: Impact Assessment of COVID-19 Pandemic

12.4 Rest of the World: Impact Assessment of COVID-19 Pandemic

13. Artificial Intelligence in Healthcare Diagnosis Market - Industry Landscape

13.1 Overview

13.2 Growth Strategies Done by the Companies in the Market, (%)

13.3 Organic Developments

13.4 Inorganic Developments

14. Company Profiles

14.1 General Electric Company

14.2 Aidoc

14.3 Arterys Inc.

14.4 icometrix

14.5 IDx Technologies Inc.

14.6 MaxQ AI Ltd.

14.7 Caption Health, Inc.

14.8 Zebra Medical Vision, Inc.

14.9 Siemens Healthineers AG

$2.5 Million NIH Grant Will Support AI Approach to Study and Predict Excessive Drinking | | SBU News – Stony Brook News

A unique data-driven scientific approach to studying and predicting excessive drinking using social media and mobile-phone data has won Andrew Schwartz, assistant professor in the Department of Computer Science, and his team a $2.5M award from the National Institutes of Health (NIH). Their research will develop an innovative approach utilizing data from texting and social media as well as mobile phone apps to better understand how unhealthy drinking manifests in daily life, and push the envelope for the ability of artificial intelligence (AI) to predict human behaviors.

The cross-disciplinary team will build AI models that predict future drinking as well as future effects of drinking on an individual's mood among service industry workers. The award will be distributed over four years in collaboration with Richard Rosenthal, MD, Director of Addiction Psychiatry at Stony Brook Medicine, and Christine DeLorenzo, Associate Professor in the Departments of Biomedical Engineering and Psychiatry, along with a team at the University of Pennsylvania.

"Andy is blazing the trail in advancing AI tools for tackling major health challenges," said Fotis Sotiropoulos, dean of the College of Engineering and Applied Sciences. "His work is an ingenious approach using data-science tools, smartphones, and social media postings to identify early signs of alcohol abuse and alcoholism and guide interventions. This is AI-driven discovery and innovation at its very best!"

With the aid of the team's methods, social media content can be collected and analyzed faster and more cheaply, and it presents an unscripted, less biased psychological assessment. Traditional ecological studies of unhealthy drinking are done via costly and time-consuming phone interviews, which can also be subject to poor data quality and biases. Schwartz and his team will instead apply their novel AI-based approach to social media text content. Samir Das, Chair of the Department of Computer Science, said: "Andrew has very successfully applied large-scale data and text analysis techniques to important and timely human health and well-being applications, with very impactful results."

"We now know analysis of everyday language can cover a wide array of daily factors affecting individual health, but their use over timespans is limited. The methods we will develop in this project should enable real-time study of how health plays out in each individual's own words and bring about the possibility of personalized mental health care," said Schwartz.

The technology will be developed with a focus on a population of frontline restaurant workers (bartenders and servers), a group that has among the highest rates of heavy alcohol use of all professions. This unhealthy drinking (defined as more than seven drinks a week for women and 14 for men, according to the National Institute on Alcohol Abuse and Alcoholism) creates the potential for extensive negative consequences related to work performance, relationships, and physical and psychological health. For example, the team will look at the effect of empathy, as measured through language. Psychologists on the team note that empathy can be both health-promoting (beneficial) and health-threatening (depleting). Distinguishing beneficial versus depleting empathy is an example of where AI can capture something difficult to get at through questions. It's also a dimension of human psychology suspected to play a role in the stress on servers and bartenders, since they often listen to customers' problems and provide advice, which could have a negative effect on them.
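The grant announcement does not describe the team's models, but the general approach of scoring everyday language for drinking risk can be illustrated with a deliberately tiny sketch. Everything below, including the example posts, labels, and model choice, is hypothetical and far simpler than what the researchers will actually build.

```python
# Toy sketch: learn a mapping from short everyday-language posts to a
# self-reported heavy-drinking label, then score a new post automatically.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "long double shift again, need a drink the second I clock out",
    "another rough night behind the bar, still wired at 3am",
    "quiet weekend, went hiking and caught up on sleep",
    "meal prepped and made it to the gym before my shift",
]
labels = [1, 1, 0, 0]   # 1 = heavy drinking reported in the following week

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

new_post = ["closing shift tonight, counting down to that first pour"]
print(model.predict_proba(new_post)[0, 1])   # estimated risk score for the new post
```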

The team's research will include development of:

Schwartz's research has also focused on the impact of social media in predicting mental and physical health issues. He is also using Twitter to study COVID-19, and before that he focused on depression and social media posts.

About the Researcher: Andrew Schwartz is an Assistant Professor in the Department of Computer Science and a faculty member of the Institute for AI-Driven Discovery and Innovation. His research utilizes natural language processing and machine learning techniques to focus on large-scale language analysis for health and social sciences.

Dan Olawski

4 tips for transforming your customer communications with AI – VentureBeat

Machine learning and artificial intelligence (AI) are not just tools for streamlining customer engagement. They represent an opportunity for companies to completely rethink how they build context around each individual, ultimately creating a better experience and a more loyal customer.

By tapping into the potential of these new technologies, brands can cost-effectively enable and empower sophisticated, relevant types of two-way communications with existing and potential customers. These technologies also change how brands can use digital channels to reimagine their essential communications such as bills, statements, tax documents, and other important information. Traditionally, these touchpoints were static, generic mailings created to address the widest audience. They lacked personalization and relevance. They were informational, but not engaging.

An important brand message no longer needs to be a common document that looks the same to everyone who receives it. Personalized essential communications bring additional value to the customer. Brands can offer more than a sum-total utility or service bill by including a customer's usage compared to the previous billing period or year and providing tips on how to cut down on use and cost. But this is just the tip of the iceberg.

With some strategic planning and investment, businesses can tap into machine learning and AI to transform customer communications by following these four steps:

Consumer expectations for how they engage with brands continue to evolve. Customers now demand that every piece of communication is tailored to their individual needs and preferences. These technologies enable brands to meet and exceed these constantly rising expectations.

It's no surprise that first on the list of Gartner's top 10 strategic technology trends for 2017 is AI and advanced machine learning. These solutions can sift through, analyze, and respond to volumes of data at a speed no number of humans can rival. As these technologies continue to advance, they have the power to benefit both companies and the customers they serve.

How will you embrace these new technological innovations to catapult your business into the next year and beyond?

Rob Krugman is the Chief Data Officer at Broadridge, a customer communication and data analytics company.

When robots learn to lie, then we worry about AI – The Australian Financial Review

Beware the hyperbole surrounding artificial intelligence and how far it has progressed.

Great claims are being made for artificial intelligence, or AI, these days.

Amazon's Alexa, Google's assistant, Apple's Siri, Microsoft's Cortana: these are all cited as examples of AI. Yet speech recognition is hardly new: we have seen steady improvements in commercial software like Dragon for 20 years.

Recently we have seen a series of claims that AI, with new breakthroughs like "deep learning", could displace 2 million or more Australian workers from their jobs by 2030.

Similar claims have been made before.

I was fortunate to discuss AI with a philosopher, Julius Kovesi, in the 1970s as I led the team that eventually developed sheep-shearing robots. With great insight, he argued that robots, in essence, were built on similar principles to common toilet cisterns and were nothing more than simple automatons.

"Show me a robot that deliberately tells you a lie to manipulate your behaviour, and then I will accept you have artificial intelligence!" he exclaimed.

That's the last thing we wanted in a sheep-shearing robot, of course.

To understand future prospects, it's helpful to see AI as just another way of programming digital computers. That's all it is, for the time being.

We have been learning to live with computers for many decades. Gradually, we are all becoming more dependent on them and they are getting easier to use. Smartphones are a good example.

Our jobs have changed as a result, and will continue to change.

Smartphones can also disrupt sleep and social lives, but so can many other things too. Therefore, claims that we are now at "a convergence" where AI is going to fundamentally change everything are hard to accept.

We have seen several surges in AI hyperbole. In the 1960s, machine translation of natural language was "just two or three years away". And we still have a long way to go with that one. In the late 1970s and early 1980s, many believed forecasts that 95 per cent of factory jobs would be eliminated by the mid-1990s. And we still have a long way to go with that one too. The "dot com, dot gone" boom of 2001 saw another surge. Disappointment followed each time as claims faded in the light of reality. And it will happen again.

Self-driving cars will soon be on our streets, thanks to decades of painstaking advances in sensor technology, computer hardware and software engineering. They will drive rather slowly at first, but will steadily improve with time. You can call this AI if you like, but it does not change anything fundamental.

The real casualty in all this hysteria is our appreciation of human intelligences ... plural. For artificial intelligence has only replicated performances like masterful game playing and mathematical theorem proving, or even legal and medical deduction. These are performances we associate with intelligent people.

Consider performances easily mastered by people we think of as the least intelligent, like figuring out what is and is not safe to sit on, or telling jokes. Cognitive scientists are still struggling to comprehend how we could begin to replicate these performances.

Even animal intelligence defies us, as we realised when MIT scientists perfected an artificial dog's nose sensitive enough to detect TNT vapour from buried landmines. When tested in a real minefield, this device detected TNT everywhere and the readings appeared to be unrelated to the actual locations of the mines. Yet trained mine detection dogs could locate the mines in a matter of minutes.

To appreciate this in a more familiar setting, imagine a party in a crowded room. One person lights up a cigarette and, to avoid being ostracised, keeps it hidden in an ashtray under a chair. Everyone in the room soon smells the cigarette smoke but no one can sense where it's coming from. Yet a trained dog would find it in seconds.

There is speculation that quantum computers might one day provide a real breakthrough in AI. At the moment, however, experiments with quantum computers are at much the same stage as Alan Turing was when he started tinkering with relays in the 1920s. There's still a long way to go before we will know whether these machines will tell deliberate lies.

In the meantime it might be worth asking whether the current surge of interest in AI is being promoted by companies like Google and Facebook in a deliberate attempt to seduce investors. Then again, it might just be another instance of self-deception group-think.

James Trevelyan is emeritus professor in the School of Mechanical and Chemical Engineering at the University of Western Australia.

Microsofts CTO explains how AI can help health care in the US right now – The Verge

This week for our Vergecast interview series, Verge editor-in-chief Nilay Patel chats with Microsoft chief technology officer Kevin Scott about his new book, Reprogramming the American Dream: From Rural America to Silicon Valley – Making AI Serve Us All.

Scott's book tackles how artificial intelligence and machine learning can help rural America in a grounded way, from employment to education to public health. In one chapter, Scott focuses on how AI can assist with health care and diagnostic issues, a prominent concern in the US today, especially during the COVID-19 pandemic.

In the interview, Scott refocuses the solutions he describes in the book around the current crisis, specifically the supercomputers Microsoft has been using to train natural language processing models, which are now being used to search for vaccine targets and therapies for the novel coronavirus.

Below is a lightly edited excerpt of the conversation.

So let's talk about health care, because it's something you do focus on in the book. It's a particularly poignant time to talk about health care. How do you see AI helping broadly with health care and then more specifically with the current crisis?

I think there are a couple of things going on.

One I think is a trend that I wrote about in the book and that is just getting more obvious every day, which is that we need to do more. That particular thing is that if our objective as a society is to get higher-quality, lower-cost health care to every human being who needs it, I think the only way that you can accomplish all three of those goals simultaneously is if you use some form of technological disruption.

And I think AI can be exactly that thing. And you're already seeing an enormous amount of progress on the AI-powered diagnostics front. And just going into the crisis that we're in right now, one of the interesting things that a bunch of folks are doing (I think I read a story about the Chan Zuckerberg Initiative doing this) is the idea that if you have ubiquitous biometric sensing, like you've got a smartwatch or a fitness band or maybe something even more complicated that can sort of read off your heart-tick data, that can look at your body temperature, that can measure the oxygen saturation in your blood, that can basically get a biometric readout of how your body's performing. And it's sort of capturing that information over time. We can build diagnostic models that can look at those data and determine whether or not you're about to get sick and sort of predict with reasonable accuracy what's going on and what you should do about it.
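What Scott describes here, a model that reads periodic wearable readouts and flags early signs of illness, can be illustrated with a very small sketch. The data, feature choices, and numbers below are synthetic and purely illustrative; this is not Microsoft's system or any validated clinical model.

```python
# Toy illustration: classify "healthy" vs. "early signs of illness" from
# synthetic wearable readouts (resting heart rate, temperature, blood oxygen).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Columns: [resting_heart_rate_bpm, body_temp_c, spo2_percent]
healthy = rng.normal([62, 36.7, 98], [5, 0.2, 1.0], size=(500, 3))
unwell = rng.normal([78, 37.8, 94], [6, 0.5, 2.0], size=(500, 3))
X = np.vstack([healthy, unwell])
y = np.array([0] * 500 + [1] * 500)   # 1 = early signs of illness

model = LogisticRegression(max_iter=1000).fit(X, y)

# Score today's readout from a hypothetical smartwatch.
todays_reading = np.array([[75, 37.6, 95]])
risk = model.predict_proba(todays_reading)[0, 1]
print(f"Estimated risk of oncoming illness: {risk:.0%}")
```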

Like you can't have a cardiologist following you around all day long. There aren't enough cardiologists in the world even to give you a good cardiological exam at your annual checkup.

I think this isn't a far-fetched thing. There is a path forward here for deploying this stuff on a broader scale. And it will absolutely lower the cost of health care and help make it more widely available. So that's one bucket of things. The other bucket of things is just some mind-blowing science that gets enabled when you intersect AI with the leading-edge stuff that people are doing in the biosciences.

Give me an example.

So, two things that we have done relatively recently at Microsoft.

One is one of the big problems in biology that we've had, which immunologists have been studying for years and years and years: whether or not you could take a readout of your immune system by looking at the distribution of the types of T-cells that are active in your body. And from that profile, determine what illnesses your body may be actively dealing with. What is it prepared to deal with? Like what might you have recently had?

And that has been a hard problem to figure out because, basically, you're trying to build something called a T-cell receptor antigen map. And now, with our sequencing technology, we have the ability to get the profile so you can sort of see what your immune system is doing. But we have not yet figured out how to build that mapping of the immune system profile to diseases.

Except we're partnering with this company called Adaptive that is doing really great work with us, like bolting machine learning onto this problem to try to figure out what the mapping actually looks like. We are rushing right now a serologic test, like a blood test, that we hope will be able to tell you whether or not you have had a COVID-19 infection.

So I think it's mostly going to be useful for understanding the spread of the disease. I don't think it's going to be as good a diagnostic test as a nasal swab and one of the sequence-based tests that are getting pushed out there. But it's really interesting. And the implications are not just for COVID-19, but if you are able to better understand that immune system profile, the therapeutic benefits of that are just absolutely enormous. We've been trying to figure this out for decades.

The other thing that we're doing is, when you're thinking about SARS-CoV-2, which is the virus that causes COVID-19 that is raging through the world right now, we have never in human history had a better understanding of a virus and how it attacks the body. And we've never had a better set of tools for precision-engineering potential therapies and vaccines for this thing. And part of that engineering process is using a combination of simulation and machine learning and these cutting-edge techniques of the biosciences in a way where you're sort of leveraging all three at the same time.

So we've got this work that we're doing with a partner right now where I have taken a set of supercomputing clusters that we have been using to train natural language processing deep neural networks at massive scale. And those clusters are now being used to search for vaccine targets and therapies for SARS-CoV-2.

We're one among a huge number of people who are very quickly searching for both therapies and potential vaccines. There are reasons to be hopeful, but we've got a way to go.

But it's just unbelievable to me to see how these techniques are coming together. And one of the things that I'm hopeful about, as we deal with this current crisis and think about what we might be able to do on the other side of it, is it could very well be that this is the thing that triggers a revolution in the biological sciences and investment in innovation that has the same sort of decades-long effect that the industrialization push around World War II had in the 40s, which basically built our entire modern world.

Yeah, that's what I keep coming back to, this idea that this is a reset on a scale that very few people living today have ever experienced.

And you said out of World War II, a lot of basic technology was invented, deployed, refined. And now we kind of get to layer in things like AI in a way that is, quite frankly, remarkable. I do think, I mean, it sounds like we're going to have to accept that Cortana might be a little worse at natural language processing while you search for the protein surfaces. But I think it's a trade most people make.

[Laughs] I think that's the right trade-off.

Artificial Intelligence Isn’t an Arms Race With China, and the United States Shouldn’t Treat It Like One – Foreign Policy

At the last Democratic presidential debate, the technologist candidate Andrew Yang emphatically declared that "we're in the process of potentially losing the AI arms race to China right now." As evidence, he cited Beijing's access to vast amounts of data and its substantial investment in research and development for artificial intelligence. Yang and others, most notably the National Security Commission on Artificial Intelligence, which released its interim report to Congress last month, are right about China's current strengths in developing AI and the serious concerns this should raise in the United States. But framing advances in the field as an arms race is both wrong and counterproductive. Instead, while being clear-eyed about China's aggressive pursuit of AI for military use and human rights-abusing technological surveillance, the United States and China must find their way to dialogue and cooperation on AI. A practical, nuanced mix of competition and cooperation would better serve U.S. interests than an arms race approach.

AI is one of the great collective Rorschach tests of our times. Like any topic that captures the popular imagination but is poorly understood, it soaks up the zeitgeist like a sponge.

It's no surprise, then, that as the idea of great-power competition has re-engulfed the halls of power, AI has gotten caught up in the race narrative. China, Americans are told, is barreling ahead on AI, so much so that the United States will soon be lagging far behind. Like the fears that surrounded Japan's economic rise in the 1980s or the Soviet Union in the 1950s and 1960s, anxieties around technological dominance are really proxies for U.S. insecurity about its own economic, military, and political prowess.

Yet as technology, AI does not naturally lend itself to this framework and is not a strategic weapon.Despite claims that AI will change nearly everything about warfare, and notwithstanding its ultimate potential, for the foreseeable future AI will likely only incrementally improve existing platforms, unmanned systems such as drones, and battlefield awareness. Ensuring that the United States outpaces its rivals and adversaries in the military and intelligence applications of AI is important and worth the investment. But such applications are just one element of AI development and should not dominate the United States entire approach.

The arms race framework raises the question of what one is racing toward. Machine learning, the AI subfield of greatest recent promise, is a vast toolbox of capabilities and statistical methods, a bundle of technologies that do everything from recognizing objects in images to generating symphonies. It is far from clear what exactly would constitute "winning" in AI, or even "being better" at a national level.

The National Security Commission is absolutely right that developments in AI cannot be separated from the emerging strategic competition with China and developments in the broader geopolitical landscape. U.S. leadership in AI is imperative. Leading, however, does not mean winning. Maintaining superiority in the field of AI is necessary but not sufficient. True global leadership requires proactively shaping the rules and norms for AI applications, ensuring that the benefits of AI are distributed worldwide, broadly and equitably, and stabilizing great-power competition that could lead to catastrophic conflict.

That requires U.S. cooperation with friends and even rivals such as China. Here, we believe that important aspects of the National Security Commission on AI's recent report have gotten too little attention.

First, as the commission notes, official U.S. dialogue with China and Russia on the use of AI in nuclear command and control, AI's military applications, and AI safety could enhance strategic stability, like arms control talks during the Cold War. Second, collaboration on AI applications by Chinese and American researchers, engineers, and companies, as well as bilateral dialogue on rules and standards for AI development, could help buffer the competitive elements of an increasingly tense U.S.-Chinese relationship.

Finally, there is a much higher bar to sharing core AI inputs such as data and software, and to building AI for shared global challenges, if the United States sees AI as an arms race. Although commercial and military applications for AI are increasing, applications for societal good (addressing climate change, improving disaster response, boosting resilience, preventing the emergence of pandemics, managing armed conflict, and assisting in human development) are lagging. These would benefit from multilateral collaboration and investment, led by the United States and China.

The AI arms race narrative makes for great headlines, but the unbridled U.S.-Chinese competition it implies risks pushing the United States and the world down a dangerous path. Washington and Beijing should recognize the fallacy of a generalized AI arms race in which there are no winners. Instead, both should lead by leveraging the technology to spur dialogue between them and foster practical collaboration to counter the many forces driving them apart, benefiting the whole world in the process.

Read the original here:

Artificial Intelligence Isn't an Arms Race With China, and the United States Shouldn't Treat It Like One - Foreign Policy

Puma, PVH Corp. on AI and Forecasting, Merchandising in the COVID-19 Era – WWD

Artificial intelligence has come to mean more to retailers than just chatbots and recommendation engines. AI has found its place as a critical tool, particularly for companies managing a fleet of stores.

That's certainly the case for Puma and PVH Corp., which have both come to rely on the technology in the critical months of the coronavirus pandemic.

Katie Darling, Puma's vice president of merchandising, and Kate Nadolny, senior vice president of business strategy and innovation at PVH, weighed in on how their companies used AI to improve forecasting and merchandising during a tough year that has vexed brick-and-mortar retail like no other.

Nadolny explained to host Prashant Agrawal, of Impact Analytics, that when PVH Corp. began its journey with AI forecasting a couple of years ago, the company wasn't entirely sure about what it would entail.

"We identified the clear need and opportunity for us to be smarter about how we're making our forecasting and prediction decisions," she said. "But we weren't really sure about what the tools and capabilities were that we needed."

As the parent company of Van Heusen, Tommy Hilfiger, Calvin Klein, Izod, Geoffrey Beene and more, PVH had more to weigh than a chain of single-branded stores. It was dealing with a multibrand portfolio, with different target customers and locations, bringing an extra layer of complexity. But it's the sort of challenge that AI meets head on.

The company first looked to brainstorm with tech partners over areas like assortment and allocation, but COVID-19 quickly changed the priorities.

Suddenly, Nadolny said, PVH realized that it needed to make swift decisions, as its stores contended with varying rules that meant some stores closed while others remained open, or swung between the two as infection rates changed. Meanwhile, the rules of retail were being rewritten as consumer behaviors morphed.

This was the period when stores were exploring curbside pickup and retailers that never before offered appointment shopping suddenly raced to meet new customer expectations. And such services may not work equally well in all areas, particularly in regions hard hit by the economic downturn, or perhaps work best for certain product categories or customer segments, which can vary by store.

Nadolny explained that when it comes to knowing where and how to shift inventories or change pricing in real time, AI is simply faster at crunching the data and pulling insights than humans are.

Darling agreed. She discovered at Puma that granular planning, down to the store level, across multiple doors is impossible without artificial intelligence. Additionally, "it can find patterns you're not looking for," she explained, especially when compared to the way people dig through spreadsheets.

Manual analysis is not only slower, but also less efficient at spotting critical insights.

If one product sells out, what's the next best item in stock that can fill the gap? If a certain item performs well, but what's really leading sales are the smaller sizes in that particular style or stockkeeping unit, could a human staffer pinpoint that? In a normal year, such questions would point to missed opportunities, but in 2020, those insights can determine survival.

"The idea of using artificial intelligence to help us make smarter decisions, whether it be at a category level, a collection level or even down to a size level, is really important," said Darling.

But before AI can be really useful in the retail setting, or in any setting, the fundamentals need to be in place. Nadolny pointed out that AI initiatives need to start out with good data, which was one of PVH's biggest early challenges.

As the saying goes, "Garbage in, garbage out," she said. "So how do you make sure that your data is right? That attributes are right, that the information that we have is correct and aligned? So while we have quite a bit of data that's very, very useful for us, it's not always in the same place, in the same structure, in the same format."

"As we start to move towards being able to better utilize these types of tools, internally we're spending a lot of time focused on the clarity of our data governance and data structure, so we can therefore take that information and utilize that appropriately in the tool," she added.

The human element also remains important, Nadolny noted, in that staff should have appropriate training on how to best use the tools for the business.

The process could be a challenge for the humans, Nadolny acknowledged. It can even feel like a loss of control, but ideally they'll come to see and appreciate the tech. "The machine can really learn more quickly and adapt to what's happening in the space more so than we can in our Excel-based toolset that we have today," she said.

Read more here:

Puma, PVH Corp. on AI and Forecasting, Merchandising in the COVID-19 Era - WWD

Go Beyond Artificial Intelligence: Why Your Business Needs Augmented Intelligence – Forbes


The nasal test for Covid-19 requires a nurse to insert a 6-inch long swab deep into your nasal passages. The nurse inserts this long-handled swab into both of your nostrils and moves it around for 15 seconds.

Now, imagine that your nurse is a robot.

A few months ago, a nasal swab robot was developed by Brain Navi, a Taiwanese startup. The company's intent was to minimize the spread of infection by reducing staff-patient contact. So, here we have a robot autonomously navigating the probe down into your throat, and carefully avoiding channels that lead up to the eyes.

The robot is supposed to be safe. But many patients would, understandably, be terrified.

Unfortunately, enterprise applications of artificial intelligence (AI) are often no less misguided. Today, AI has picked up remarkable capabilities. It's better than humans at tasks such as voice and image recognition, across disciplines from audio transcription to games.

But does this mean we should simply hand over the reins to machines and sit back? Not quite.

You need humans to make your AI solutions more effective, acceptable, and humane for your users. That's when they will be adopted and deliver ROI for your organization. When AI and humans combine forces, the whole can be greater than the sum of its parts.

This is called augmented intelligence.

Here are 4 reasons why you need augmented intelligence to transform your business:

A large computer manufacturer wanted to find out what made its customers happy. Gramener, a company providing data science solutions, analyzed tens of thousands of comments from the client's biannual voice-of-customer (VoC) survey. A key step in this text analytics process was to find out what the customers were talking about. Were they worried about billing or after-sales service?

The team used AI language models to classify comments into the right categories. The algorithm delivered an average accuracy of over 90%, but the business users weren't happy. While the algorithm aced most categories, there were a few where it stumbled, at around 60% accuracy. This led to poor decisions in those areas.

Algorithms perform best when they are trained on large volumes of data, with a representative variety of scenarios. The low-accuracy categories in this project had neither. The project team experimented by bringing in humans to handle those categories where the model's confidence was low.

With little manual effort, the overall solution accuracy shot up. This delivered an improvement of 2 percentage points in the client's Net Promoter Score.
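
That routing pattern is simple to express in code. Below is a minimal sketch of confidence-based escalation, assuming a classifier that exposes per-category probabilities; the categories, threshold, and example comments are illustrative, not Gramener's actual pipeline.

```python
# Minimal sketch of confidence-based routing for survey-comment classification.
# The categories, threshold, and comments are illustrative assumptions, not the
# actual pipeline described in the article.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.80  # below this, a human reviews the comment


@dataclass
class Routed:
    comment: str
    label: str
    confidence: float
    needs_human_review: bool


def route_comment(comment, class_probabilities):
    """Accept the model's label when it is confident; otherwise flag for a human."""
    label, confidence = max(class_probabilities.items(), key=lambda kv: kv[1])
    return Routed(comment, label, confidence,
                  needs_human_review=confidence < CONFIDENCE_THRESHOLD)


# Example: one confident prediction, one that falls back to manual handling.
print(route_comment("Invoice arrived twice",
                    {"billing": 0.95, "after-sales service": 0.03, "delivery": 0.02}))
print(route_comment("Not happy with the visit",
                    {"billing": 0.35, "after-sales service": 0.40, "delivery": 0.25}))
```

The point of the design is that the model still handles the easy majority of comments, while only the uncertain minority costs human time.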

Algorithms detect online fraud by studying factors such as consumer behavior and historical shopping patterns. They learn from past examples to identify whats normal and whats not. With the onset of the pandemic, these algorithms started failing.

In today's new normal, consumers have gone remote. They spend more time online, and their spending patterns have shifted in unexpected ways. Suddenly, everything these algorithms have learned has become irrelevant. Covid-19 threw them a curveball.

Algorithms work well only in scenarios that they are trained for. In completely new situations, humans must step in. Organizations that have kept humans in the loop can quickly transition control to them in such situations. Humans can keep systems running smoothly by ensuring that they are resilient in the face of change.

Meanwhile, algorithms can go back to the classroom to unlearn, relearn, and come back a little smarter. For example, a recent NIST study found that the use of face masks is breaking facial recognition algorithms, such as the ones used at border crossings. Most systems had error rates of up to 50%, calling for manual intervention. The algorithms are being retrained to use the areas visible around the eyes.

On March 18, 2018, Elaine Herzberg was walking her bike across Mill Avenue. It was around 10 p.m. in Tempe, Arizona. She crossed several lanes of traffic before being struck by a Volvo.

But this wasn't just any Volvo. It was a self-driving car being tested by Uber.

The car had been trained to expect pedestrians at crosswalks. But Herzberg was crossing in the middle of the road, so the AI failed to detect her.

This tragic incident was the first pedestrian death caused by a self-driving car. It raised several questions. When AI makes a mistake, who should be held responsible? Is it the carmaker (Volvo), the AI system maker (Uber), the car driver (Rafaela Vasquez), or the pedestrian (Elaine Herzberg)?

Occasionally, high-precision algorithms will falter, even in familiar scenarios. Rather than roll back the advances made in automation, we must make efforts to improve accountability. Last month, the European Commission published recommendations from an independent expert report for self-driving cars.

The experts call for identifying the ownership of all parties and for devising ways to attribute responsibility across scenarios. The report recommends an improvement of human-machine interactions so that AI and drivers can communicate better and understand each other's limitations.

Will Siri, Alexa or Google Assistant discriminate against you? Earlier this year, researchers at Stanford University attempted to answer this question by studying the top voice recognition systems in the world. They found that these popular devices had more trouble understanding Black people than white people. They misidentified 35 percent of words spoken by Black users, but only 19 percent for white users.

Bias is a thorny issue in AI. But we must remember that algorithms are only as good as the data used to train them. Our world is anything but perfect. When algorithms learn from our data, they mimic these imperfections and magnify the bias. There is ongoing research in AI to improve fairness and ethics. However, no amount of model engineering will make algorithms perfect.

In the real world, if we are serious about fighting bias, we use our judgment. We make rules more inclusive and adopt measures to amplify suppressed voices. The same approach is needed in AI solutions. Design human intervention to check and address potential scenarios of discrimination. Use human judgment to fight a machine's learned bias.

Rethink Design

We often measure progress in AI by comparing AI's abilities to those of humans.

While that's a useful benchmarking exercise, it's a mistake to use this approach when designing AI solutions. Organizations often pit AI against humans. This doesn't do justice to either one. It leads to suboptimal performance, brittle solutions, untrustworthy applications and unfair decisions.

Augmented intelligence combines the strengths of humans with those of AI. It combines the speed, logic and consistency of machines with the common sense, emotional intelligence and empathy of humans.

To achieve augmented intelligence, you need humans in the loop. This must be planned upfront. Merely adding new processes or responsibilities to an existing technology solution leads to poor results. You must (re)design the solution workflow, and decide which areas are best handled by algorithms. You should define whether humans must make decisions or review decisions made by a machine.

Building augmented intelligence is an ongoing journey. With evolution in machine capabilities and changes in users' comfort and trust levels, you must continuously improve the design.

This will make AI-driven systems that do invasive medical procedures or that make high-stakes financial decisions more compassionate and trustworthy for your users.

Disclosure: I co-founded the company, Gramener, that's mentioned in one of the examples in this article.

More here:

Go Beyond Artificial Intelligence: Why Your Business Needs Augmented Intelligence - Forbes

R&D Special Focus: Robotics/AI – R & D Magazine

Robotics and artificial intelligence (A.I.) were once considered fantasies of the future.

Today, both technologies are being incorporated into many elements of everyday life, with applications popping up in everything from healthcare and education to communication and transportation.

In July, R&D Magazine took a deeper dive into this breakthrough area of research.

We kicked off our coverage by speaking to several experts about where the field of robotics is going in "Robotics Industry Has Big Future as Applications Grow."

Susan Teele of the Advanced Robotics for Manufacturing Institute and Bob Doyle of the Robotics Industries Association discussed the impact that robots will have on the workforce and what technological advancements are needed for them to truly flourish.

We expanded on that idea in Creating Robots That Are More Like Humans, which features a research group at Northeastern University focused on creating software that makes robots more autonomous, so eventually they are able to perform tasks on their own with little human supervision or intervention.

The group's leader, Taskin Padir, told R&D Magazine how reliable robots with human-like dexterity and improved autonomy could take over jobs that are dangerous or difficult for humans to perform.

Robots can also be used to reduce danger. Our article "Creator of Suicidal Robot Explains How Robot Security Could Prevent 'The Next Sandy Hook'" focused on the robotic security company Knightscope, which made headlines recently for a humorous mishap involving one of its robots falling into a fountain.

However, the real story is the true mission behind Knightscope.

The company was created by a former police officer who was deeply impacted by the Sandy Hook Elementary School shooting. Knightscope's robots now serve as intelligence-gathering tools, which law enforcement officials can utilize during, as well as after, an emergency to better understand what is going on, de-escalate a dangerous situation, and potentially help capture or gather evidence against the perpetrator of the crime.

We wrapped up our robotics coverage with "Robotic Teachers Can Adjust Style Based on Student Success," which focuses on the development of socially assistive robotics, a new field of robotics that focuses on assisting users through social rather than physical interaction. A research group at Yale University is designing these robots to work with children, including those with challenges such as autism, hearing impairment, or a first language other than English.

A.I. Advancements

Our A.I. coverage kicked off with Why Canada is Becoming a Hub for A.I. Research, which highlighted the significant commitment to A.I. research and development our neighbor to the north is making.

The Vector Institute, which received an estimated $150 million investment from both the Canadian government and Canadian businesses, is one example of that commitment.

The independent not-for-profit institution based in Ontario seeks to build and sustain A.I.-based innovation, growth and productivity in Canada by focusing on the transformative potential of deep learning and machine learning.

We also looked at the impact of A.I. in the healthcare space. One article, "Startup Uses A.I. to Streamline Drug Discovery Process," features an interview with the CEO of Exscientia, which is using A.I.-fueled programs in conjunction with experienced drug developers to implement a rapid design-make-test cycle. This essentially ascertains how certain molecules will behave and then predicts how likely they are to become useful drugs.

Another startup, Potbotics, is using A.I. to comb through the different strains of medical marijuana to find the right one for a specific ailment with its app PotBot. Once a medical cannabis recommendation is calculated, the app helps patients find their recommended cannabis at a nearby dispensary or set up an appointment with a licensed medical cannabis clinic. We featured the company in our article "PotBot Uses A.I. to Match Medical Marijuana Users to Best Strain."

The use of A.I. to create autonomous vehicles is another area that is rapidly growing. In our article Algorithm Improves Energy Efficiency of Autonomous Underwater Vehicles, we focused on researchers from Oregon State University, who developed a new algorithm to direct autonomous underwater vehicles to ride the ocean currents when traveling from point to point.

Improving the A.I. of the vehicles extends their battery life by decreasing the amount of battery power wasted through inefficient trajectory cuts.

Also, we took a deep dive into Toyota's plan for A.I., featuring an exclusive interview with Jim Adler, the managing director of the Japanese car company's new venture fund, Toyota A.I. Ventures, in our article "How Toyota's New Venture Fund Will Tackle A.I. Investments."

The venture fund will use an initial fund of $100 million to collaborate with entrepreneurs from all over the world, in an effort to improve the quality of human life through A.I.

Toyota A.I. Ventures will work with startups at an early stage and offer a founder-friendly environment that wont impact their ability to work with other investors. They will also offer assistance with technology and product expertise to validate that the product being built is for the right market, and give these entrepreneurs access to Toyotas global network of affiliates and partners to ensure a successful market launch.

Next Month's Special Focus

In August, R&D Magazine will continue its special focus series, this time highlighting the many applications of virtual reality. The technology has expanded significantly outside of the video gaming world, and is now being used across multiple disciplines.

Continue reading here:

R&D Special Focus: Robotics/AI - R & D Magazine

AI Can Tell if You’re Depressed by Reading Your Facebook Posts

Dr. Facebook

It shouldn’t come as much of a surprise, but Facebook knows a lot about you.

And while the information it collects about you isn’t exactly in the safest of hands, it could give mental health care professionals a huge leg up in predicting your future mental well-being — if you’re willing to hand over your login information.

In research published this week in Proceedings of the National Academy of Sciences, scientists at the University of Pennsylvania described how they were able to determine whether a particular Facebook user is likely to become depressed in the future, simply by analyzing their status updates.

Robot Psychologists

Wired reports that the researchers used machine learning algorithms to analyze almost half a million Facebook posts — spanning a period of seven years — by 684 willing patients at a Philadelphia emergency ward.

“We’re increasingly understanding that what people do online is a form of behavior we can read with machine learning algorithms, the same way we can read any other kind of data in the world,” UPenn psychologist Johannes Eichstaedt told the magazine.

The algorithms looked for markers of depression in the patients’ posts, and found that depressed individuals used more first-person language — a finding in line with many previous studies. The algorithm got so good at catching those markers that it could predict if a Facebook user was depressed up to three months prior to a formal diagnosis by a health care professional.
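
The study's full model is far richer than this, but the core idea of turning posts into linguistic features, such as the rate of first-person language, and fitting a classifier on them can be sketched roughly as follows; the feature set, toy posts, and labels below are illustrative assumptions, not the paper's data or method.

```python
# Rough sketch of the idea behind the study: derive a simple linguistic feature
# (here, the rate of first-person singular pronouns) from posts and fit a
# classifier. The feature, toy posts, and labels are illustrative only.
import re
from sklearn.linear_model import LogisticRegression

FIRST_PERSON = {"i", "me", "my", "mine", "myself"}


def first_person_rate(post):
    words = re.findall(r"[a-z']+", post.lower())
    return sum(w in FIRST_PERSON for w in words) / max(len(words), 1)


# Toy training data: 1 = later diagnosed with depression, 0 = not (invented examples).
posts = [
    "I can't sleep and I feel like it's all my fault",
    "I keep asking myself why I even bother anymore",
    "Great hike with the crew this weekend, photos soon",
    "The new cafe downtown has amazing coffee",
]
labels = [1, 1, 0, 0]

X = [[first_person_rate(p)] for p in posts]
model = LogisticRegression().fit(X, labels)

new_post = "I just feel so tired of everything lately"
risk = model.predict_proba([[first_person_rate(new_post)]])[0][1]
print(f"estimated risk score: {risk:.2f}")
```

A real system would use many more linguistic markers and far more data, but the pipeline shape, text to features to probability, is the same.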

The Human Touch

For now, take these results with a grain of salt — this won’t ever be a substitute for human psychologists, because there are far too many variables. But social media information — along with heart rates or sleep data collected by fitness trackers — could be a powerful tool to catch mental health problems early on.

That is, if we’re willing to share that kind of information with them in the first place.

Read More: Your Facebook Posts Can Reveal If You’re Depressed [Wired]

More on mental health and social media: Instagram is Trying to Make Users Feel Better Without Scaring Them Off

See the article here:

AI Can Tell if You’re Depressed by Reading Your Facebook Posts

Google to set up AI research lab in Bengaluru – The Hindu

U.S.-headquartered Google on Thursday announced the setting up of a research lab in Bengaluru that will work on advancing artificial intelligence-related research with an aim to solve problems in sectors such as healthcare, agriculture and education.

"...we announced Google Research India, a new AI research team in Bangalore that will focus on advancing computer science and applying AI research to solve big problems in healthcare, agriculture, education and more," Sundar Pichai, CEO of Google, tweeted.

Caesar Sengupta, vice-president, Next Billion Users Initiative and Payments at Google, added that this team would focus on advancing fundamental computer science and AI research by building a strong team and partnering with the research community across the country. It will also be applying this research to tackle problems in fields such as healthcare, agriculture, and education.

The new lab will be a part of and support Google's global network of researchers. "We're also exploring the potential for partnering with India's scientific research community and academic institutions to help train top talent and support collaborative programmes, tools and resources," Jay Yagnik, vice-president and Google Fellow, Google AI, said.

Google Pay

The technology giant announced a host of additions to its UPI-powered digital payments app Google Pay, which the company said had grown more than three times in the last 12 months to 67 million monthly active users, driving transactions worth over $110 billion on an annualised basis.

To start with, the company has introduced the Spot Platform within Google Pay, which will enable merchants to create new experiences that bridge the offline and online worlds. Ambarish Kenghe, director, product management, Google Pay, said: "A Spot is a digital front for a business that is created, branded and hosted by them, and powered by Google Pay. Users can discover a Spot online or at a physical location, and transact with the merchant easily and securely within the Google Pay app."

Users will now also be able to search for entry-level jobs that could not be easily discovered online via the application.

Google will also roll out tokenized cards, which will enable users to make payments using debit and credit cards without using the actual card number.

View post:

Google to set up AI research lab in Bengaluru - The Hindu

This startup is building AI to bet on soccer games – The Verge

Listen to Andreas Koukorinis, founder of UK sports betting company Stratagem, and you'd be forgiven for thinking that soccer games are some of the most predictable events on Earth. "They're short duration, repeatable, with fixed rules," Koukorinis tells The Verge. "So if you observe 100,000 games, there are patterns there you can take out."

The mission of Koukorinis' company is simple: find these patterns and make money off them. Stratagem does this either by selling the data it collects to professional gamblers and bookmakers, or by keeping it and making its own wagers. To fund these wagers, the firm is raising money for a £25 million ($32 million) sports betting fund that it's positioning as an investment alternative to traditional hedge funds. In other words, Stratagem hopes rich people will give Stratagem their money. The company will gamble with it using its proprietary data, and, if all goes to plan, everyone ends up just that little bit richer.

It's a familiar story, but Stratagem is adding a little something extra to sweeten the pot: artificial intelligence.

At the moment, the company uses teams of human analysts spread out around the globe to report back on the various sporting leagues it bets on. This information is combined with detailed data about the odds available from various bookmakers to give Stratagem an edge over the average punter. But, in the future, it wants computers to do the analysis for it. It already uses machine learning to analyze some of its data (working out the best time to place a bet, for example), but it's also developing AI tools that can analyze sporting events in real time, drawing out data that will help predict which team will win.

Stratagem is using deep neural networks to achieve this task, the same technology that's enchanted Silicon Valley's biggest firms. It's a good fit, since this is a tool that's well suited to analyzing vast pots of data. As Koukorinis points out, when analyzing sports there's a hell of a lot of data to learn from. The company's software is currently absorbing thousands of hours of sporting fixtures to teach it patterns of failure and success, and the end goal is to create an AI that can watch a half-dozen different sporting events simultaneously on live TV, extracting insights as it does.

Stratagem's AI identifies players to make a 2D map of the game

At the moment, though, Stratagem is starting small. It's focusing on just a few sports (soccer, basketball, and tennis) and a few metrics (like goal chances in soccer). At the company's London offices, home to around 30 employees including ex-bankers and programmers, we're shown the fledgling neural nets for soccer games in action. On-screen, the output is similar to what you might see from the live feed of a self-driving car. But instead of the computer highlighting stop signs and pedestrians as it scans the road ahead, it's drawing a box around Zlatan Ibrahimović as he charges at the goal, dragging defenders in his wake.

Stratagem's AI makes its calculations watching a standard broadcast feed of the match. (Pro: it's readily accessible. Con: it has to learn not to analyze the replays.) It tracks the ball and the players, identifying which team they're on based on the color of their kits. The lines of the pitch are also highlighted, and all this data is transformed into a 2D map of the whole game. From this viewpoint, the software studies matches like an armchair general: it identifies what it thinks are goal-scoring chances, or the moments where the configuration of players looks right for someone to take a shot and score.
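
The article doesn't describe Stratagem's pipeline in detail, but a standard way to turn detections in a broadcast frame into a top-down pitch map is a planar homography estimated from known pitch landmarks. The OpenCV sketch below illustrates the idea; every coordinate in it is invented for illustration.

```python
# Sketch of projecting player detections from a broadcast frame onto a 2D pitch
# map using a planar homography. A generic technique, not Stratagem's code;
# all pixel and pitch coordinates below are made up.
import numpy as np
import cv2

# Four pitch landmarks located in the frame (pixels) and their known positions
# on the pitch plane (metres).
frame_pts = np.float32([[120, 540], [1180, 560], [900, 210], [340, 200]])
pitch_pts = np.float32([[0, 0], [52.5, 0], [52.5, 34], [0, 34]])

H = cv2.getPerspectiveTransform(frame_pts, pitch_pts)

# Player detections (e.g. bounding-box foot points) in frame coordinates.
players_px = np.float32([[[640, 400]], [[820, 330]]])
players_pitch = cv2.perspectiveTransform(players_px, H)

print(players_pitch.reshape(-1, 2))  # each row: (x, y) in metres on the pitch map
```

A real system would fit the mapping from automatically detected pitch lines and refresh it continuously as the broadcast camera pans and zooms.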

"Football is such a low-scoring game that you need to focus on these sorts of metrics to make predictions," says Koukorinis. "If there's a shot on target from 30 yards with 11 people in front of the striker and that ends in a goal, yes, it looks spectacular on TV, but it's not exciting for us. Because if you repeat it 100 times the outcomes won't be the same. But if you have Lionel Messi running down the pitch and he's one-on-one with the goalie, the conversion rate on that is 80 percent. We look at what created that situation. We try to take the randomness out, and look at how good the teams are at what they're trying to do, which is generate goal-scoring opportunities."
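
The arithmetic behind that reasoning is just expected goals over repeated chances. In the toy comparison below, the 80 percent one-on-one conversion rate is Koukorinis' own figure, while the 3 percent rate for the crowded 30-yard shot is an assumed stand-in.

```python
# Toy expected-goals arithmetic behind the quote above. The 80% one-on-one
# conversion rate is the figure Koukorinis cites; the 3% rate for a crowded
# 30-yard shot is an assumed illustration.
def expected_goals(conversion_rate, repetitions=100):
    """Goals you'd expect if the same chance were repeated `repetitions` times."""
    return conversion_rate * repetitions

print(expected_goals(0.80))  # one-on-one with the keeper: ~80 goals per 100 chances
print(expected_goals(0.03))  # 30-yard shot through a packed defence: ~3 goals per 100
```

Counting the situations that create high-conversion chances, rather than the occasional spectacular goal, is what lets repeatable signal dominate luck.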

Whether or not counting goal-scoring opportunities is the best way to rank teams is difficult to say. Stratagem says it's a metric that's popular with professional gamblers, but they and the company weigh it with a lot of other factors before deciding how to bet. Stratagem also notes that the opportunities identified by its AI don't consistently line up with those spotted by humans. Right now, the computer gets it correct about 50 percent of the time. Despite this, the company says its current betting models (which it develops for soccer, but also basketball and tennis) are right more than enough of the time for it to make a steady return, though it won't share precise figures.

A team of 65 analysts collect data around the world

At the moment, Stratagem generates most of its data about goal-scoring opportunities and other metrics the old-fashioned way: using a team of 65 human analysts who write detailed match reports. The company's AI would automate some of this process and speed it up significantly. (Each match report takes about three hours to write.) Some forms of data-gathering would still rely on humans, however.

A key task for the company's agents is finding out a team's starting lineup before it's formally announced. (This is a major driver of pre-game betting odds, says Koukorinis, and knowing in advance helps you beat the market.) Acquiring this sort of information isn't easy. It means finding sources at a club, building up a relationship, and knowing the right people to call on match day. Chatbots just aren't up to the job yet.

Machine vision, though, is really just one element of Stratagem's AI business plan. It already applies machine learning to more mundane facets of betting, like working out the best time to place a bet in any particular market. In this regard, what the company is doing is no different from many other hedge funds, which for decades have been using machine learning to come up with new ways to trade. Most funds blend human analysis with computer expertise, but at least one is run completely by decisions generated by artificial intelligence.

However, simply adding more computers to the mix isn't always a recipe for success. There's data showing that if you want to make the most out of your money, it's better to just invest in the top-performing stocks of the S&P 500 rather than sign up for an AI hedge fund. That's not the best sign that Stratagem's sports-betting fund will offer good returns, especially when such funds are already controversial.

In 2012, a sports-betting fund set up by UK firm Centaur Holdings collapsed just two years after it launched. It lost $2.5 million after promising investors returns of 15 to 20 percent. To critics, operations like this are just borrowing the trappings of traditional funds to make gambling look more like investing.

"I don't doubt it's great fun... but don't qualify it with the term investment."

David Stevenson, director of finance research company AltFi, told The Verge that there's nothing essentially wrong with these funds, but they need to be thought of as their own category. "I don't particularly doubt it's great fun [to invest in one] if you like sports and a bit of betting," said Stevenson. "But don't qualify it with the term investment, because investment, by its nature, has to be something you can predict over the long run."

Stevenson also notes that the AI hedge funds that are successful, those that torture the math within an inch of its life to eke out small but predictable profits, tend not to seek outside investment at all. They prefer keeping the money to themselves. "I treat most things that combine the acronym AI and the word investing with an enormous dessert spoon of salt," he said.

Whether or not Stratagem's AI can deliver insights that make sporting events as predictable as the tides remains to be seen, but the company's investment in artificial intelligence does have other uses. For starters, it can attract investors and customers looking for an edge in the world of gambling. It can also automate work that's currently done by the company's human employees and make it cheaper. As with other businesses that are using AI, it's these smaller gains that might prove to be most reliable. After all, small, reliable gains make for a good investment.

Link:

This startup is building AI to bet on soccer games - The Verge

How Much Dark Matter in the Universe? AI May Have the Answer – Technology Networks

Understanding how our universe came to be what it is today, and what its final destiny will be, is one of the biggest challenges in science. The awe-inspiring display of countless stars on a clear night gives us some idea of the magnitude of the problem, and yet that is only part of the story. The deeper riddle lies in what we cannot see, at least not directly: dark matter and dark energy. With dark matter pulling the universe together and dark energy causing it to expand faster, cosmologists need to know exactly how much of those two components is out there in order to refine their models.

At ETH Zurich, scientists from the Department of Physics and the Department of Computer Science have now joined forces to improve on standard methods for estimating the dark matter content of the universe through artificial intelligence. They used cutting-edge machine learning algorithms for cosmological data analysis that have a lot in common with those used for facial recognition by Facebook and other social media. Their results have recently been published in the scientific journal Physical Review D.

While there are no faces to be recognized in pictures taken of the night sky, cosmologists still look for something rather similar, as Tomasz Kacprzak, a researcher in the group of Alexandre Refregier at the Institute of Particle Physics and Astrophysics, explains: "Facebook uses its algorithms to find eyes, mouths or ears in images; we use ours to look for the tell-tale signs of dark matter and dark energy." As dark matter cannot be seen directly in telescope images, physicists rely on the fact that all matter - including the dark variety - slightly bends the path of light rays arriving at the Earth from distant galaxies. This effect, known as "weak gravitational lensing", distorts the images of those galaxies very subtly, much like far-away objects appear blurred on a hot day as light passes through layers of air at different temperatures.

Cosmologists can use that distortion to work backwards and create mass maps of the sky showing where dark matter is located. Next, they compare those dark matter maps to theoretical predictions in order to find which cosmological model most closely matches the data. Traditionally, this is done using human-designed statistics such as so-called correlation functions that describe how different parts of the maps are related to each other. Such statistics, however, are limited as to how well they can find complex patterns in the matter maps.

"In our recent work, we have used a completely new methodology", says Alexandre Refregier. "Instead of inventing the appropriate statistical analysis ourselves, we let computers do the job." This is where Aurelien Lucchi and his colleagues from the Data Analytics Lab at the Department of Computer Science come in. Together with Janis Fluri, a PhD student in Refregier's group and lead author of the study, they used machine learning algorithms called deep artificial neural networks and taught them to extract the largest possible amount of information from the dark matter maps.

In a first step, the scientists trained the neural networks by feeding them computer-generated data that simulates the universe. That way, they knew what the correct answer for a given cosmological parameter - for instance, the ratio between the total amount of dark matter and dark energy - should be for each simulated dark matter map. By repeatedly analysing the dark matter maps, the neural network taught itself to look for the right kind of features in them and to extract more and more of the desired information. In the Facebook analogy, it got better at distinguishing random oval shapes from eyes or mouths.
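
The study's networks and simulation suite are far larger, but the shape of the approach, a convolutional network regressing a cosmological parameter from simulated mass maps before being applied to real ones, can be sketched as follows; the architecture, map size, and random "simulations" are illustrative stand-ins, not the paper's setup.

```python
# Minimal sketch of regressing a cosmological parameter from simulated dark
# matter mass maps with a convolutional network. Architecture, map size, and
# the random stand-in "simulations" are illustrative, not the study's setup.
import numpy as np
import tensorflow as tf

MAP_SIZE = 64  # pixels per side of each toy mass map


def fake_simulations(n):
    """Stand-in for lensing simulations: random maps with known parameter labels."""
    params = np.random.uniform(0.1, 0.5, size=(n, 1)).astype("float32")  # e.g. a matter density
    maps = np.random.normal(loc=params[:, :, None], scale=1.0,
                            size=(n, MAP_SIZE, MAP_SIZE)).astype("float32")
    return maps[..., None], params


model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(MAP_SIZE, MAP_SIZE, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1),  # regress the cosmological parameter
])
model.compile(optimizer="adam", loss="mse")

x_train, y_train = fake_simulations(512)
model.fit(x_train, y_train, epochs=3, batch_size=32, verbose=0)

# After training on simulations with known answers, the network would be
# applied to observed mass maps (in the study, KiDS-450) to infer the parameter.
x_new, _ = fake_simulations(4)
print(model.predict(x_new, verbose=0).ravel())
```

The key design point is that the labels come from simulations, where the true cosmology is known, so the network can later be trusted to read the same signatures out of observed maps.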

The results of that training were encouraging: the neural networks came up with values that were 30% more accurate than those obtained by traditional methods based on human-made statistical analysis. For cosmologists, that is a huge improvement as reaching the same accuracy by increasing the number of telescope images would require twice as much observation time - which is expensive.

Finally, the scientists used their fully trained neural network to analyse actual dark matter maps from the KiDS-450 dataset. "This is the first time such machine learning tools have been used in this context," says Fluri, "and we found that the deep artificial neural network enables us to extract more information from the data than previous approaches. We believe that this usage of machine learning in cosmology will have many future applications."

As a next step, he and his colleagues are planning to apply their method to bigger image sets such as the Dark Energy Survey. Also, more cosmological parameters and refinements such as details about the nature of dark energy will be fed to the neural networks.

Reference: Fluri J, Kacprzak T, Lucchi A, Refregier A, Amara A, Hofmann T, Schneider A: Cosmological constraints with deep learning from KiDS-450 weak lensing maps. Physical Review D. 100: 063514, doi: 10.1103/PhysRevD.100.063514

This article has been republished from the following materials. Note: material may have been edited for length and content. For further information, please contact the cited source.

See the original post:

How Much Dark Matter in the Universe? AI May Have the Answer - Technology Networks

AI learns to write its own code by stealing from other programs – New Scientist

Set a machine to program a machine


By Matt Reynolds

OUT of the way, human, I've got this covered. A machine learning system has gained the ability to write its own code.

Created by researchers at Microsoft and the University of Cambridge, the system, called DeepCoder, solved basic challenges of the kind set by programming competitions. This kind of approach could make it much easier for people to build simple programs without knowing how to write code.

"All of a sudden people could be so much more productive," says Armando Solar-Lezama at the Massachusetts Institute of Technology, who was not involved in the work. "They could build systems that it [would be] impossible to build before."

Ultimately, the approach could allow non-coders to simply describe an idea for a program and let the system build it, says Marc Brockschmidt, one of DeepCoders creators at Microsoft Research in Cambridge, UK.

DeepCoder uses a technique called program synthesis: creating new programs by piecing together lines of code taken from existing software, just like a programmer might. Given a list of inputs and outputs for each code fragment, DeepCoder learned which pieces of code were needed to achieve the desired result overall.

It could allow non-coders to simply describe an idea for a program and let the system build it

One advantage of letting an AI loose in this way is that it can search more thoroughly and widely than a human coder, so it could piece together source code in a way humans may not have thought of. What's more, DeepCoder uses machine learning to scour databases of source code and sort the fragments according to its view of their probable usefulness.

All this makes the system much faster than its predecessors. DeepCoder created working programs in fractions of a second, whereas older systems take minutes to trial many different combinations of lines of code before piecing together something that can do the job. And because DeepCoder learns which combinations of source code work and which ones dont as it goes along, it improves every time it tries a new problem.
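
DeepCoder's domain-specific language and neural ranking model are more elaborate than this, but the core loop, scoring candidate primitives and then enumerating compositions until one reproduces every input/output example, can be sketched as follows; the primitives and "learned" usefulness scores are invented for illustration.

```python
# Toy sketch of search-based program synthesis in the DeepCoder spirit: try
# compositions of primitives, most promising first, until one maps every
# example input to its output. The primitives and scores below are invented.
from itertools import permutations

PRIMITIVES = {
    "sort":     sorted,
    "reverse":  lambda xs: list(reversed(xs)),
    "double":   lambda xs: [2 * x for x in xs],
    "drop_neg": lambda xs: [x for x in xs if x >= 0],
}

# Stand-in for a neural net's prediction of how likely each primitive is useful.
predicted_usefulness = {"sort": 0.9, "double": 0.7, "reverse": 0.2, "drop_neg": 0.1}


def synthesize(examples, max_length=2):
    """Return the first pipeline of primitives consistent with all examples."""
    ranked = sorted(PRIMITIVES, key=predicted_usefulness.get, reverse=True)
    for length in range(1, max_length + 1):
        for pipeline in permutations(ranked, length):
            if all(_run(pipeline, inputs) == expected for inputs, expected in examples):
                return pipeline
    return None


def _run(pipeline, value):
    for name in pipeline:
        value = PRIMITIVES[name](value)
    return value


# Input/output examples describing "sort the list, then double every element".
examples = [([3, 1, 2], [2, 4, 6]), ([5, -1], [-2, 10])]
print(synthesize(examples))  # -> ('sort', 'double')
```

Ranking the primitives before searching is what keeps the enumeration tractable: likely building blocks are tried first, so consistent programs tend to be found early rather than after an exhaustive sweep.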

The technology could have many applications. In 2015, researchers at MIT created a program that automatically fixed software bugs by replacing faulty lines of code with working lines from other programs. Brockschmidt says that future versions could make it very easy to build routine programs that scrape information from websites, or automatically categorise Facebook photos, for example, without human coders having to lift a finger.

"The potential for automation that this kind of technology offers could really signify an enormous [reduction] in the amount of effort it takes to develop code," says Solar-Lezama.

But he doesn't think these systems will put programmers out of a job. With program synthesis automating some of the most tedious parts of programming, he says, coders will be able to devote their time to more sophisticated work.

At the moment, DeepCoder is only capable of solving programming challenges that involve around five lines of code. But in the right coding language, a few lines are all that's needed for fairly complicated programs.

"Generating a really big piece of code in one shot is hard, and potentially unrealistic," says Solar-Lezama. "But really big pieces of code are built by putting together lots of little pieces of code."

This article appeared in print under the headline "Computers are learning to code for themselves"


See the rest here:

AI learns to write its own code by stealing from other programs - New Scientist