Artificial Intelligence of Things: AIoT Market by Technology and Solutions 2020 – 2025 – PRNewswire

NEW YORK, Aug. 18, 2020 /PRNewswire/ --

Overview: This AIoT market report provides an analysis of technologies, leading companies, and solutions. It also provides quantitative analysis, including market sizing and forecasts for AIoT infrastructure, services, and specific solutions for the period 2020 through 2025, along with an assessment of the impact of 5G upon AIoT (and vice versa), of blockchain, and of specific solutions such as Data as a Service, Decisions as a Service, and the market for AIoT in smart cities.

Read the full report: https://www.reportlinker.com/p05951233/?utm_source=PRN

While it is no secret that AI is rapidly becoming integrated into many aspects of ICT, many do not understand the full extent to which it will transform communications, applications, content, and commerce. For example, the use of AI for decision making in IoT and data analytics will be crucial for efficient and effective smart city solutions.

The convergence of AI and Internet of Things (IoT) technologies and solutions (AIoT) is leading to "thinking" networks and systems that are becoming increasingly capable of solving a wide range of problems across a diverse number of industry verticals. AI adds value to IoT through machine learning and improved decision making. IoT adds value to AI through connectivity, signaling, and data exchange.

AIoT is just beginning to become part of the ICT lexicon, as the possibilities for AI adding value to IoT are limited only by the imagination. With AIoT, AI is embedded into infrastructure components, such as programs, chipsets and edge computing, all interconnected with IoT networks. APIs are then used to extend interoperability between components at the device level, software level and platform level. These units will focus primarily on optimizing system and network operations as well as extracting value from data.

While early AIoT solutions are rather monolithic, it is anticipated that AIoT integration within businesses and industries will ultimately lead to more sophisticated and valuable inter-business and cross-industry solutions. These solutions will focus primarily upon optimizing system and network operations as well as extracting value from industry data through dramatically improved analytics and decision-making processes. Six key areas that the analyst sees within the scope of AIoT solutions are: Data Services, Asset Management, Immersive Applications, Process Improvement, Next Gen UI and UX, and Industrial Automation.

Many industry verticals will be transformed through AI integration with enterprise, industrial, and consumer product and service ecosystems. AIoT is destined to become an integral component of business operations, including supply chains, sales and marketing processes, and product and service delivery and support models.

From the perspective of the analyst, we see AIoT evolving to become more commonplace as a standard feature from big analytics companies in terms of digital transformation for the connected enterprise. This will be realized in infrastructure, software, and SaaS managed service offerings. More specifically, we see 2020 as a key year for IoT data-as-a-service offerings to become AI-enabled decisions-as-a-service solutions, customized on a per-industry and per-company basis. Certain data-driven verticals, such as the utility and energy services industries, will lead the way.

As IoT networks proliferate throughout every major industry vertical, there will be an increasingly large amount of unstructured machine data. The growing amount of human-oriented and machine generated data will drive substantial opportunities for AI support of unstructured data analytics solutions. Data generated from IoT supported systems will become extremely valuable, both for internal corporate needs as well as for many customer-facing functions such as product life-cycle management.

The use of AI for decision making in IoT and data analytics will be crucial for efficient and effective decision making, especially in the area of streaming data and real-time analytics associated with edge computing networks. Real-time data will be a key value proposition for all use cases, segments, and solutions. The ability to capture streaming data, determine valuable attributes, and make decisions in real-time will add an entirely new dimension to service logic.

In many cases, the data itself, and the actionable information derived from it, will be the service. AIoT infrastructure and services will therefore be leveraged to achieve more efficient IoT operations, improve human-machine interactions, and enhance data management and analytics, creating a foundation for IoT Data as a Service (IoTDaaS) and AI-based Decisions as a Service.

The fastest-growing 5G AIoT applications involve private networks. Accordingly, the 5G NR market for private wireless in industrial automation will reach $4B by 2025. Some of the largest market opportunities will be in AIoT IoTDaaS solutions. The analyst sees machine learning in edge computing as the key to realizing the full potential of IoT analytics.

Target Audience: AI companies, IoT companies, robotics companies, semiconductor vendors, data management vendors, industrial automation companies, and governments and R&D organizations.

Select Report Findings: The global AIoT market will reach $65.9B by 2025, growing at a 39.1% CAGR. The global market for IoT data-as-a-service solutions will reach $8.2B by 2025. The AI-enabled edge device market will be the fastest growing segment within AIoT. AIoT automates data processing systems, converting raw IoT data into useful information. Today's AIoT solutions are the precursor to next generation AI Decision as a Service (AIDaaS).

Companies in Report: AB Electrolux, ABB Ltd., AIBrian Inc., Alibaba, Alluvium, Amazon Inc., Analog Devices, Apple Inc., ARM Limited, Arundo Analytics, Atmel Corporation, Ayla Networks Inc., Baidu, Brighterion Inc., Buddy, C3 IoT, Canvass Analytics, Cisco, CloudMinds, Cumulocity GmBH, Cypress Semiconductor Corp, Digital Reasoning Systems Inc., DT42, Echelon Corporation, Enea AB, Express Logic Inc., Facebook Inc., Falkonry, Fujitsu Ltd., Gemalto N.V., General Electric, General Vision Inc., Google, Gopher Protocol, Graphcore, H2O.ai, Haier Group Corporation, Helium Systems, Hewlett Packard Enterprise, Huawei Technologies, IBM Corp., Infineon Technologies AG, Innodisk, Intel Corporation, Interactor, Juniper Networks, Losant IoT, Micron Technology, Microsoft Corp., Nokia Corporation, Nvidia, Oracle Corporation, Pepper, PTC Corporation, Qualcomm, Robert Bosch GmbH, Salesforce Inc., SAS, Sharp, ShiftPixy, Siemens AG, SK Telecom, SoftBank Robotics, SpaceX, SparkCognition, STMicroelectronics, Symantec Corporation, Tellmeplus, Tencent, Tend.ai, Terminus, Tesla, Texas Instruments, Thethings.io, Tuya Smart, Uptake, Veros Systems, Whirlpool Corporation, Wind River Systems, Xiaomi Technology

Read the full report: https://www.reportlinker.com/p05951233/?utm_source=PRN

About Reportlinker
ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.

Contact Clare: [emailprotected] US: (339)-368-6001 Intl: +1 339-368-6001

SOURCE Reportlinker

http://www.reportlinker.com

Tesla hires AI expert to help lead team in charge of self-driving software – MarketWatch

Tesla Inc. has hired a Stanford University computer scientist specializing in artificial intelligence and deep learning to lead its efforts around driverless cars.

Andrej Karpathy, previously a research scientist at OpenAI, was named director of AI and Autopilot Vision, reporting directly to Chief Executive Elon Musk, a Tesla spokesperson said.

Karpathy is one of the world's leading experts in computer vision and deep learning, the spokesperson said. He will work closely with Jim Keller, who is responsible for Autopilot hardware and software.

Autopilot is Tesla's suite of advanced driver assistance systems, which relies on an onboard Nvidia Corp. supercomputer and the company's software to make sense of data from numerous sensors in and around Tesla vehicles.

Several Silicon Valley companies, from titans such as Apple Inc. to startups, as well as traditional car makers, software companies, and others elsewhere, are vying to make driverless cars a common sight on roads.

Apple's CEO Tim Cook recently confirmed the company's efforts around what he called autonomous systems, and called driverless cars "the mother of all AI projects."

See also: We still don't know what Apple is up to with driverless cars

Musk is co-chair of OpenAI, a nonprofit focused on AI research and on a path to safe artificial general intelligence.

The hire comes as Tesla's lead of Autopilot software, Chris Lattner, earlier this week announced he was leaving the company after six months on the job.

Lattner worked for more than a decade at Apple Inc.

The Tesla spokesperson said Lattner "just wasn't the right fit for Tesla, and we've decided to make a change."

The company is weeks away from starting production of the Model 3, the $35,000 all-electric sedan it hopes to sell to the masses. Tesla is expected to sell the car by the end of the year.

Musk has said that several new vehicles, including a compact SUV and an electric commercial freight truck, are coming to Tesla's lineup as the company aims to produce vehicles at a rate of half a million a year by the end of 2018.

The stock has gained more than 74% so far this year, after a string of record highs in the past two months. That contrasts with gains of 9% for the S&P 500 index.

IBM's Arin Bhowmick explains why AI trust is hard to achieve in the enterprise – VentureBeat

While appreciation of the potential impact AI can have on business processes has been building for some time, progress has not been nearly as quick as many initial forecasts led organizations to expect.

Arin Bhowmick, chief design officer for IBM, explained to VentureBeat what needs to be done to achieve the level of AI explainability that will be required to take AI to the next level in the enterprise.

This interview has been edited for clarity and brevity.

VentureBeat: It seems a lot of organizations are still not trustful of AI. Do you think that's improving?

Arin Bhowmick: I do think it's improved or is getting better. But we still have a long way to go. We haven't historically been able to bake in trust and fairness and explainable AI into the products and experiences. From an IBM standpoint, we are trying to create reliable technology that can augment [but] not really replace human decision-making. We feel that trust is essential to the adoption. It allows organizations to understand and explain recommendations and outcomes.

What we are essentially trying to do is akin to a nutritional label. We're looking to have a similar kind of transparency in AI systems. There is still some hesitation in adoption of AI because of a lack of trust. Roughly 80-85% of the professionals from different organizations that took part in an IBM survey said their organization has been pretty negatively impacted by problems such as bias, especially in the data. I would say 80% or more agree that consumers are more likely to choose services from a company that offers transparency and an ethical framework for how its AI models are built.

VentureBeat: As an AI model runs, it can generate different results as the algorithms learn more about the data. How much does that lack of consistency impact trust?

Bhowmick: The AI model used to do the prediction is as good as the data. It's not just models. It's about what it does and the insight it provides at that point in time that develops trust. Does it tell the user why the recommendation is made or is significant, how it came up with the recommendations, and how confident it is? AI tends to be a black box. The trick around developing trust is to unravel the black box.

VentureBeat: How do we achieve that level of AI explainability?

Bhowmick: It's hard. Sometimes it's hard to even judge the root cause of a prediction and insight. It depends on how the model was constructed. Explainability is also hard because when it is provided to the end user, it's full of technical mumbo jumbo. It's not in the voice and tone that the user actually understands.

Sometimes explainability is also a little bit about the why, rather than the what. Giving an example of explainability in the context of the tasks that the user is doing is really, really hard. Unless the developers who are creating these AI-based [and] infused systems actually follow the business process, the context is not going to be there.

VentureBeat: How do we even measure this?

Bhowmick: There is a fairness score and a bias score. There is a concept of model accuracy. Most tools that are available do not provide a realistic score of the element of bias. Obviously, the higher the bias, the worse your model is. Its pretty clear to us that a lot of the source of the bias happens to be in the data and the assumptions that are used to create the model.

What we tried to do is we baked in a little bit of bias detection and explainability into the tooling itself. It will look at the profile of the data and match it against other items and other AI models. We'll be able to tell you that what you're trying to produce already has built-in bias, and here's what you can do to fix it.
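
To give a concrete sense of what a "bias score" in such tooling might compute, here is a hedged Python sketch of two standard group-fairness checks; the function names, example data, and the 80% rule threshold are common conventions used for illustration, not IBM's implementation.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Difference in positive-prediction rates between two groups (labelled 0 and 1)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 1].mean() - y_pred[group == 0].mean()

def disparate_impact_ratio(y_pred, group):
    """Ratio of positive-prediction rates; the common '80% rule' flags values below 0.8."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 1].mean() / y_pred[group == 0].mean()

# Hypothetical example: a model approves 60% of group 0 but only 30% of group 1.
pred  = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0,   # group 1 predictions
         1, 1, 1, 1, 1, 1, 0, 0, 0, 0]   # group 0 predictions
group = [1] * 10 + [0] * 10
print(demographic_parity_difference(pred, group))  # -0.3
print(disparate_impact_ratio(pred, group))         # 0.5 -> flagged by the 80% rule
```

A real toolchain would also report the data slices and features driving the gap, but even these two numbers are enough to surface the kind of built-in bias Bhowmick describes.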

VentureBeat: That then becomes part of the user experience?

Bhowmick: Yes, and that's very, very important. Whatever bias feeds into the system has huge ramifications. We are creating ethical design practices across the company. We have developed specific design thinking exercises and workshops. We run workshops to make sure that we are considering ethics at the very beginning of our business process planning and design cycle. We're also using AI to improve AI. If we can build in sort of bias and explainable AI checkpoints along the way, inherently we will scale better. That's sort of the game plan here.

VentureBeat: Will every application going forward have an AI model embedded within it?

Bhowmick: It's not about the application, it's about whether there are things within that application that AI can help with. If the answer is yes, most applications will have infused AI in them. It will be unlikely that applications will not have AI.

VentureBeat: Will most organizations embed AI engines in their applications or simply involve external AI capabilities via an application programming interface (API)?

Bhowmick: Both will be true. I think the API would be good for people who are getting started. But as the level of AI maturity increases, there will be more information that is specific to a problem statement that is specific to an audience. For that, they will likely have to build custom AI models. They might leverage APIs and other tooling, but to have an application that really understands the user and really gets at the crux of the problem, I think it's important that it's built in-house.

VentureBeat: Overall, what's your best AI advice to organizations?

Bhowmick: I still find that our level of awareness of what is AI and what it can do, and how it can help us, is not high. When we talk to customers, all of them want to go into AI. But when you ask them what are the use cases, they sometimes are not able to articulate that.

I think adoption is somewhat lagging because of people's understanding and acceptance of AI. But there's enough information on AI principles to read up on. As you develop an understanding, then look into tooling. It really comes down to awareness.

I think we're in the hype cycle. Some industries are ahead, but if I could give one piece of advice to everyone, it would be don't force-fit AI. Make sure you design AI in your system in a way that makes sense for the problem you're trying to solve.

Elon Musk and AI experts urge UN to ban artificial intelligence in weapons – Los Angeles Times

Tesla and SpaceX chief Elon Musk has joined dozens of CEOs of artificial intelligence companies in signing an open letter urging the United Nations to ban the use of AI in weapons before the technology gets out of hand.

The letter was published Monday, the same day the U.N.'s Group of Governmental Experts on Lethal Autonomous Weapons Systems was to discuss ways to protect civilians from the misuse of automated weapons. That meeting, however, has been postponed until November.

"Lethal autonomous weapons threaten to become the third revolution in warfare," read the letter, which was also signed by the chief executives of companies such as Cafe X Technologies (which built the autonomous barista) and PlusOne Robotics (whose robots automate manual labor). "Once this Pandora's box is opened, it will be hard to close. Therefore we implore the High Contracting Parties to find a way to protect us all from these dangers."

The letter's sentiments echo those in another open letter that Musk, along with more than 3,000 AI and robotics researchers, plus others such as Stephen Hawking and Steve Wozniak, signed nearly two years ago. In the 2015 letter, the signatories warned of the dangers of artificial intelligence in weapons, which could be used in assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.

Many nations are already familiar with drone warfare, in which human-piloted drones are deployed in lieu of putting soldiers on site. Lower costs, as well as the fact that they don't risk the lives of military personnel, have contributed to their rising popularity. Future capabilities for unmanned aerial vehicles could include autonomous take-offs and landings, while underwater drones could eventually roam the seas for weeks or months to collect data to send back to human crews on land or on ships.

Automated weapons would take things a step further, removing human intervention entirely, and potentially improving efficiency. But it could also open a whole new can of worms, according to the 2015 letter, lowering the threshold for going to battle and creating a global arms race in which lethal technology can be mass-produced, deployed, hacked and misused.

For example, the letter says, there could be armed quadcopters that search for and eliminate people who meet pre-defined criteria.

"Artificial intelligence technology has reached a point where the deployment of such systems is practically, if not legally, feasible within years, not decades, and the stakes are high," the 2015 letter read. "It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc."

Philip Finnegan, director of corporate analysis at the Teal Group, said there has been no appetite in the U.S. military for removing the human decision-maker from the equation and allowing robots to target foes autonomously.

"The U.S. military has stressed it's not interested," he said.

Musk has long been wary of the proliferation of artificial intelligence, warning of its potential dangers as far back as 2014, when he drew a comparison between the future of AI and the film "The Terminator." Musk is also a sponsor of OpenAI, a nonprofit he co-founded with entrepreneurs such as Peter Thiel and Reid Hoffman to research and build safe artificial intelligence "whose benefits are as widely and evenly distributed as possible."

Earlier this year, Musk unveiled details about his new venture Neuralink, a California company that plans to develop a device that can be implanted into the brain and help people who have certain brain injuries, such as strokes. The device would enable a person's brain to connect wirelessly with the cloud, as well as with computers and with other brains that have the implant.

The end goal of the device, Musk said, is to fight potentially dangerous applications of artificial intelligence.

"We're going to have the choice of either being left behind and being effectively useless, or like a pet (you know, like a house cat or something), or eventually figuring out some way to be symbiotic and merge with AI," Musk said in a story on the website Wait But Why.

Musk's views of the risks of artificial intelligence have clashed with those of Facebook's Mark Zuckerberg as well as others researching artificial intelligence. Last month, Zuckerberg called Musk's warnings overblown and described himself as optimistic.

Musk shot back by saying Zuckerberg's understanding of the subject was limited.

Times staff writer Samantha Masunaga contributed to this report.

tracey.lien@latimes.com

Twitter: @traceylien

UPDATES:

2:20 p.m.: This article was updated to include comment from an analyst.

Noon: This article was updated to include information about Neuralink.

This article was originally published at 9:45 a.m.

Artificial Intelligence Can’t Deal With Chaos, But Teaching It Physics Could Help – ScienceAlert

While artificial intelligence systems continue to make huge strides forward, they're still not particularly good at dealing with chaos or unpredictability. Now researchers think they have found a way to fix this, by teaching AI about physics.

To be more specific, teaching them about the Hamiltonian function, which gives the AI information about the entirety of a dynamic system: all the energy contained within it, both kinetic and potential.

Neural networks, a complex, carefully weighted type of AI designed to loosely mimic the human brain, then have a 'bigger picture' view of what's happening, and that could open up possibilities for getting AI to tackle harder and harder problems.

"The Hamiltonian is really the special sauce that gives neural networks the ability to learn order and chaos," says physicist John Lindner, from North Carolina State University.

"With the Hamiltonian, the neural network understands underlying dynamics in a way that a conventional network cannot. This is a first step toward physics-savvy neural networks that could help us solve hard problems."

The researchers compare the introduction of the Hamiltonian function to a swinging pendulum: it's giving AI information about how fast the pendulum is swinging and its path of travel, rather than just showing AI a snapshot of the pendulum at one point in time.

If neural networks understand the Hamiltonian flow (so, in this analogy, where the pendulum is, where it might be going, and the energy it has), then they are better able to manage the introduction of chaos into order, the new study found.

Not only that, but they can also be built to be more efficient: better able to forecast dynamic, unpredictable outcomes without huge numbers of extra neural nodes. It helps AI to quickly get a more complete understanding of how the world actually works.
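
For readers who want to see the idea in code, here is a minimal sketch of a Hamiltonian neural network, assuming a PyTorch implementation and an idealized pendulum as training data; the architecture and dataset here are illustrative assumptions, not the published model. The network learns a single scalar energy function H(q, p), and the dynamics are read off via Hamilton's equations rather than predicted directly.

```python
import torch
import torch.nn as nn

class HamiltonianNet(nn.Module):
    """Learns a scalar H(q, p); Hamilton's equations then give the dynamics."""
    def __init__(self):
        super().__init__()
        self.H = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))

    def forward(self, q, p):
        qp = torch.stack([q, p], dim=-1).requires_grad_(True)
        H = self.H(qp).sum()
        dH = torch.autograd.grad(H, qp, create_graph=True)[0]
        # Hamilton's equations: dq/dt = dH/dp, dp/dt = -dH/dq
        return dH[..., 1], -dH[..., 0]

# Toy training data: pendulum states and their true time derivatives, derived from
# H = p^2/2 + (1 - cos q). The network never sees H, only the derivatives.
q = torch.empty(1024).uniform_(-3.0, 3.0)
p = torch.empty(1024).uniform_(-2.0, 2.0)
dq_true, dp_true = p, -torch.sin(q)

model = HamiltonianNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    dq_pred, dp_pred = model(q, p)
    loss = ((dq_pred - dq_true) ** 2 + (dp_pred - dp_true) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
print(float(loss))  # should fall well below the variance of the targets
```

Because the learned quantity is the energy function itself, conservation structure is baked into every prediction, which is the 'bigger picture' view the researchers describe.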

A representation of the Hamiltonian flow, with rainbow colours coding a fourth dimension. (North Carolina State University)

To test their newly improved AI neural network, the researchers put it up against a commonly used benchmark called the Hénon-Heiles model, initially created to model the movement of a star around a sun.

The Hamiltonian neural network successfully passed the test, correctly predicting the dynamics of the system in states of order and of chaos.

This improved AI could be used in all kinds of areas, from diagnosing medical conditions to piloting autonomous drones.

We've already seen AI simulate space, diagnose medical problems, upgrade movies and develop new drugs, and the technology is, relatively speaking, just getting started; there's lots more on the way. These new findings should help with that.

"If chaos is a nonlinear 'super power', enabling deterministic dynamics to be practically unpredictable, then the Hamiltonian is a neural network 'secret sauce', a special ingredient that enables learning and forecasting order and chaos," write the researchers in their published paper.

The research has been published in Physical Review E.

Purdue researchers uncover blind spots at the intersection of AI and neuroscience – Purdue News Service

Findings debunk dozens of prominent published papers claiming to read minds with EEG

WEST LAFAYETTE, Ind. - Is it possible to read a person's mind by analyzing the electric signals from the brain? The answer may be much more complex than most people think.

Purdue University researchers working at the intersection of artificial intelligence and neuroscience say a prominent dataset used to try to answer this question is confounded, and therefore many eye-popping findings that were based on this dataset and received high-profile recognition are false after all.

The Purdue team performed extensive tests over more than one year on the dataset, which looked at the brain activity of individuals taking part in a study where they looked at a series of images. Each individual wore a cap with dozens of electrodes while they viewed the images.

The Purdue team's work is published in IEEE Transactions on Pattern Analysis and Machine Intelligence. The team received funding from the National Science Foundation.

"This measurement technique, known as electroencephalography or EEG, can provide information about brain activity that could, in principle, be used to read minds," said Jeffrey Mark Siskind, professor of electrical and computer engineering in Purdue's College of Engineering. "The problem is that they used EEG in a way that the dataset itself was contaminated. The study was conducted without randomizing the order of images, so the researchers were able to tell what image was being seen just by reading the timing and order information contained in EEG, instead of solving the real problem of decoding visual perception from the brain waves."

The Purdue researchers originally began questioning the dataset when they could not obtain similar outcomes from their own tests. That's when they started analyzing the previous results and determined that a lack of randomization contaminated the dataset.

"This is one of the challenges of working in cross-disciplinary research areas," said Hari Bharadwaj, an assistant professor with a joint appointment in Purdue's College of Engineering and College of Health and Human Sciences. "Important scientific questions often demand cross-disciplinary work. The catch is that, sometimes, researchers trained in one field are not aware of the common pitfalls that can occur when applying their ideas to another. In this case, the prior work seems to have suffered from a disconnect between AI/machine-learning scientists, and pitfalls that are well-known to neuroscientists."

The Purdue team reviewed publications that used the dataset for tasks such as object classification, transfer learning and generation of images depicting human perception and thought using brain-derived representations measured through electroencephalograms (EEGs).

"The question of whether someone can read another person's mind through electric brain activity is very valid," said Ronnie Wilbur, a professor with a joint appointment in Purdue's College of Health and Human Sciences and College of Liberal Arts. "Our research shows that a better approach is needed."

Siskind is a well-known Purdue innovator and has worked on multiple patented technologies with the Purdue Research Foundation Office of Technology Commercialization. For more information on licensing and other opportunities with Purdue technologies, contact OTC at otcip@prf.org.

About Purdue Research Foundation Office of Technology Commercialization

The Purdue Research Foundation Office of Technology Commercialization operates one of the most comprehensive technology transfer programs among leading research universities in the U.S. Services provided by this office support the economic development initiatives of Purdue University and benefit the university's academic activities through commercializing, licensing and protecting Purdue intellectual property. The office recently moved into the Convergence Center for Innovation and Collaboration in Discovery Park District, adjacent to the Purdue campus. In fiscal year 2020, the office reported 148 deals finalized with 225 technologies signed, 408 disclosures received and 180 issued U.S. patents. The office is managed by the Purdue Research Foundation, which received the 2019 Innovation and Economic Prosperity Universities Award for Place from the Association of Public and Land-grant Universities. In 2020, IPWatchdog Institute ranked Purdue third nationally in startup creation and in the top 20 for patents. The Purdue Research Foundation is a private, nonprofit foundation created to advance the mission of Purdue University. Contact otcip@prf.org for more information.

About Purdue University

Purdue University is a top public research institution developing practical solutions to today's toughest challenges. Ranked the No. 5 Most Innovative University in the United States by U.S. News & World Report, Purdue delivers world-changing research and out-of-this-world discovery. Committed to hands-on and online, real-world learning, Purdue offers a transformative education to all. Committed to affordability and accessibility, Purdue has frozen tuition and most fees at 2012-13 levels, enabling more students than ever to graduate debt-free. See how Purdue never stops in the persistent pursuit of the next giant leap at purdue.edu.

Writer: Chris Adam, cladam@prf.org
Sources: Jeffrey Siskind, qobi@purdue.edu

Hari Bharadwaj, hbharadw@purdue.edu

Ronnie Wilbur, wilbur@purdue.edu

ABSTRACT

The Perils and Pitfalls of Block Design for EEG Classification Experiments

Ren Li, Jared S. Johansen, Hamad Ahmed, Thomas V. Ilyevsky, Ronnie B. Wilbur, Hari M. Bharadwaj and Jeffrey Mark Siskind

A recent paper claims to classify brain processing evoked in subjects watching ImageNet stimuli as measured with EEG and to employ a representation derived from this processing to construct a novel object classifier. That paper, together with a series of subsequent papers, claims to achieve successful results on a wide variety of computer-vision tasks, including object classification, transfer learning, and generation of images depicting human perception and thought using brain-derived representations measured through EEG. Our novel experiments and analyses demonstrate that their results crucially depend on the block design that they employ, where all stimuli of a given class are presented together, and fail with a rapid-event design, where stimuli of different classes are randomly intermixed. The block design leads to classification of arbitrary brain states based on block-level temporal correlations that are known to exist in all EEG data, rather than stimulus-related activity. Because every trial in their test sets comes from the same block as many trials in the corresponding training sets, their block design thus leads to classifying arbitrary temporal artifacts of the data instead of stimulus-related activity. This invalidates all subsequent analyses performed on this data in multiple published papers and calls into question all of the reported results. We further show that a novel object classifier constructed with a random codebook performs as well as or better than a novel object classifier constructed with the representation extracted from EEG data, suggesting that the performance of their classifier constructed with a representation extracted from EEG data does not benefit from the brain-derived representation. Together, our results illustrate the far-reaching implications of the temporal autocorrelations that exist in all neuroimaging data for classification experiments. Further, our results calibrate the underlying difficulty of the tasks involved and caution against overly optimistic, but incorrect, claims to the contrary.
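
To make the block-design confound concrete, the following is a small, hedged simulation (our own illustration, not the paper's code or data): the simulated trials contain no stimulus information at all, only noise plus a slowly drifting baseline, yet a classifier evaluated under a block design scores far above chance, while a randomized (rapid-event) ordering brings it back toward chance.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_classes, trials_per_class, n_channels = 5, 40, 16

def simulate(block_design: bool):
    """Fake EEG features: pure noise plus a slow autocorrelated drift, no stimulus signal."""
    labels = np.repeat(np.arange(n_classes), trials_per_class)
    order = labels if block_design else rng.permutation(labels)
    X, drift = [], np.zeros(n_channels)
    for _ in order:
        drift += 0.2 * rng.standard_normal(n_channels)     # block-level temporal correlation
        X.append(drift + rng.standard_normal(n_channels))  # trial = drift + noise
    return np.array(X), order

for block in (True, False):
    X, y = simulate(block)
    # Random train/test split: under the block design, every test trial shares a block
    # (and hence a drift state) with training trials of the same class.
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)
    acc = LogisticRegression(max_iter=1000).fit(Xtr, ytr).score(Xte, yte)
    print(f"block_design={block}: accuracy={acc:.2f} (chance={1/n_classes:.2f})")
```

The fix described in the abstract is exactly the second condition: intermix stimuli of different classes so that temporal artifacts no longer predict the label.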

Human-centered redistricting automation in the age of AI – Science Magazine

Redistricting, the constitutionally mandated, decennial redrawing of electoral district boundaries, can distort representative democracy. An adept map drawer can elicit a wide range of election outcomes just by regrouping voters (see the figure). When there are thousands of precincts, the number of possible partitions is astronomical, giving rise to enormous potential manipulation. Recent technological advances have enabled new computational redistricting algorithms, deployable on supercomputers, that can explore trillions of possible electoral maps without human intervention. This leaves us to wonder if Supreme Court Justice Elena Kagan was prescient when she lamented, "(t)he 2010 redistricting cycle produced some of the worst partisan gerrymanders on record. The technology will only get better, so the 2020 cycle will only get worse" (Gill v. Whitford). Given the irresistible urge of biased politicians to use computers to draw gerrymanders and the capability of computers to autonomously produce maps, perhaps we should just let the machines take over. The North Carolina Senate recently moved in this direction when it used a state lottery machine to choose from among 1000 computer-drawn maps. However, improving the process and, more importantly, the outcomes results not from developing technology but from our ability to understand its potential and to manage its (mis)use.

It has taken many years to develop the computing hardware, derive the theoretical basis, and implement the algorithms that automate map creation (both generating enormous numbers of maps and uniformly sampling them) (14). Yet these innovations have been easy compared with the very difficult problem of ensuring fair political representation for a richly diverse society. Redistricting is a complex sociopolitical issue for which the role of science and the advances in computing are nonobvious. Accordingly, we must not allow a fascination with technological methods to obscure a fundamental truth: The most important decisions in devising an electoral map are grounded in philosophical or political judgments about which the technology is irrelevant. It is nonsensical to completely transform a debate over philosophical values into a mathematical exercise.

As technology advances, computers are able to digest progressively larger quantities of data per time unit. Yet more computation is not equivalent to more fairness. More computation fuels an increased capacity for identifying patterns within data. But more computation has no relationship with the moral and ethical standards of an evolving and developing society. Neither computation nor even an equitable process guarantees a fair outcome.

The way forward is for people to work collaboratively with machines to produce results not otherwise possible. To do this, we must capitalize on the strengths and minimize the weaknesses of both artificial intelligence (AI) and human intelligence. Ensuring representational fairness requires metacognition that integrates creative and benevolent compromises. Humans have the advantage over machines in metacognition. Machines have the advantage in producing large numbers of rote computations. Although machines produce information, humans must infuse values to make judgments about how this information should be used (5).

Markedly different outcomes can emerge when six Republicans and six Democrats in these 12 geographic units are grouped into four districts. A 50-50 party split can be turned into a 3:1 advantage for either party. When redistricting a state with thousands of precincts, the potential for political manipulation is enormous.

Accordingly, machines can be tasked with the menial aspects of cognition: the meticulous exploration of the astronomical number of ways in which a state can be partitioned. This helps us classify and understand the range of possibilities and the interplay of competing interests. Machines enhance and inform intelligent decision-making by helping us navigate the unfathomably large and complex informational landscape. Left to their own devices, humans have shown themselves to be unable to resist the temptation to chart biased paths through that terrain.

The ideal redistricting process begins with humans articulating the initial criteria for the construction of a fair electoral map (e.g., population equality, compactness measures, constraints on breaking political subdivisions, and representation thresholds). Here, the concerns of many different communities of interest should be solicited and considered. Note that this starting point already requires critical human interaction and considerable deliberation. Determining what data to use, and how, is not automatable (e.g., citizen voting age versus voting age population, relevant past elections, and how to forecast future vote choices). Partisan measures (e.g., mean-median difference, competitiveness, likely seat outcome, and efficiency gap) as well as vote prediction models, which are often contentious in court, should be transparently specified.
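
As one concrete illustration of the partisan measures named above, here is a hedged Python sketch (our own example, not the authors' code) of two of them, the mean-median difference and the efficiency gap, applied to the 3:1 scenario described in the figure.

```python
def mean_median_difference(share_a):
    """Party A's median district vote share minus its mean share; a large gap is a
    symptom of packing and cracking."""
    n = len(share_a)
    s = sorted(share_a)
    median = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    return median - sum(share_a) / n

def efficiency_gap(votes_a, votes_b):
    """Wasted votes: all votes cast for the losing side, plus winning-side votes
    beyond the 50% threshold. Positive values mean the map disadvantages party A."""
    wasted_a = wasted_b = total = 0
    for a, b in zip(votes_a, votes_b):
        total += a + b
        threshold = (a + b) / 2
        if a > b:
            wasted_a += a - threshold
            wasted_b += b
        else:
            wasted_b += b - threshold
            wasted_a += a
    return (wasted_a - wasted_b) / total

# The figure's scenario: 6 A-voters and 6 B-voters in 12 units, drawn into four
# districts so that A wins three of them 2-1 and is packed 0-3 in the fourth.
votes_a = [2, 2, 2, 0]
votes_b = [1, 1, 1, 3]
print(efficiency_gap(votes_a, votes_b))   # -0.25: the map wastes far more of B's votes
print(mean_median_difference([a / (a + b) for a, b in zip(votes_a, votes_b)]))
```

Numbers like these are the inputs to the iterative human deliberation described next; the metrics themselves cannot say which trade-off is fair.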

Once we have settled on the inputs to the algorithm, the computational analysis produces a large sample of redistricting plans that satisfy these principles. Trade-offs usually arise (e.g., adhering to compactness rules might require splitting jagged cities). Humans must make value-laden judgments about these trade-offs, often through contentious debate.

The process would then iterate. After some contemplation, we may decide, perhaps, on two, not three, majority-minority districts so that a particular town is kept together. These refined goals could then be specified for another computational analysis round with further deliberation to follow. Sometimes a Pareto improvement principle applies, with the algorithm assigned to ascertain whether, for example, city splits or minority representation can be maintained or improved even as one raises the overall level of compliance with other factors such as compactness. In such a process, computers assist by clarifying the feasibility of various trade-offs, but they do not supplant the human value judgments that are necessary for adjusting these plans to make them humanly rational. Neglecting the essential human role is to substitute machine irrationality for human bias.

Automation in redistricting is not a substitute for human intelligence and effort; its role is to augment human capabilities by regulating nefarious intent with increased transparency, and by bolstering productivity by efficiently parsing and synthesizing data to improve the informational basis for human decision-making. Redistricting automation does not replace human labor; it improves it. The critical goal for AI in governance is to design successful processes for human-machine collaboration. This process must inhibit the ill effects from sole reliance on humans as well as overreliance on machines. Human-machine collaboration is key, and transparency is essential.

The most promising institutional route in the near term for adopting this human-machine line-drawing process is through independent redistricting commissions (IRCs) that replace politicians with a balanced set of partisan citizen commissioners. IRCs are a relatively new concept and exist in only some states. They have varied designs. In eight states, a commission has primary responsibility for drawing the congressional plan. In six, they are only advisory to the legislature. In two states, they have no role unless the legislature fails to enact a plan. IRCs also vary in the number of commissioners, partisan affiliation, how the pool of applicants is created, and who selects the final members.

The lack of a blueprint for an IRC allows each to set its own rules, paving the way for new approaches. Although no best practices have yet emerged for these new institutions, we can glean some lessons from past efforts about how to integrate technology into a partisan balanced deliberation process. For example, Mexico's process integrated algorithms but struggled with transparency, and the North Carolina Senate relied heavily on a randomness component. Both offer lessons and help us refine our understanding of how to keep bias from creeping into the process.

Once these structural decisions are made, we must still contend with the fact that devising electoral maps is an intricate process, and IRCs generally lack the expertise that politicians and their staffs have cultivated from decades of experience. In addition, as the bitter partisanship of the 2011 Arizona citizen commission demonstrated, without a method to assess the fairness of proposals, IRCs can easily deadlock or devolve into lengthy litigation battles (6). New technological tools can aid IRCs in fulfilling their mandate by compensating for this experience deficiency as well as providing a way to benchmark fairness conceptualizations.

To maintain public confidence in their processes, IRCs would need to specify the criteria that guide the computational algorithm and implement the iterative process in a transparent manner. Open deliberation is crucial. For instance, once the range of maps is known to produce, say, a seven-to-eight likely split in Democrat-to-Republican seats 35% of the time, an eight-to-seven likely Democrat-to-Republican split 40% of the time, and something outside these two choices 25% of the time, how does an IRC choose between these partisan splits? Do they favor a split that produces more compact districts? How do they weigh the interests of racial minorities versus partisan considerations?

Regardless of what technology may be developed, in many states, the majority party of the state legislature assumes the primary role in creating a redistricting plan and, with rare exceptions, enjoys wide latitude in constructing district lines. There is neither a requirement nor an incentive for these self-interested actors to consent to a new process or to relinquish any of their constitutionally granted control over redistricting.

All the same, technological innovation can still have benefits by ameliorating informational imbalance. Consider redistricting Ohio's 16 congressional seats. A computational analysis might reveal that, given some set of prearranged criteria (e.g., equal population across districts, compact shapes, a minority district, and keeping particular communities of interest together), the number of Republican congressional seats usually ends up being 9 out of 16, and almost never more than 11. Although the politicians could still then introduce a map with 12 Republican seats, they would now have to weigh the potential public backlash from presenting electoral districts that are believed, a priori, to be overtly and excessively partisan. In this way, the information that is made more broadly known through technological innovation induces a new pressure point on the system whereby reform might occur.

Although politicians might not welcome the changes that technology brings, they cannot prevent the ushering in of a new informational era. States are constitutionally granted the right to enact maps as they wish, but their processes in the emerging digital age are more easily monitored and assessed. Whereas before, politicians exploited an information advantage, scientific advances can decrease this disparity and subject the process to increased scrutiny.

Although science has the potential to loosen the grip that partisanship has held over the redistricting process, we must ensure that the science behind redistricting does not, itself, become partisanship's latest victim. Scientific research is never easy, but it is especially vulnerable in redistricting where the technical details are intricate and the outcomes are overtly political.

We must be wary of consecrating research aimed at promoting a particular outcome or believing that a scientist's credentials absolve partisan tendencies. In redistricting, it may seem obvious to some that the majority party has abused its power, but validating research that supports that conclusion because of a bias toward such a preconceived outcome would not improve societal governance. Instead, use of faulty scientific tests as a basis for invalidating electoral maps allows bad actors to later overturn good maps with the same faulty tests, ultimately destroying our ability to legally distinguish good from bad. Validating maps using partisan preferences under the guise of science is more dangerous than partisanship itself.

The courts must also contend with the inconvenient fact that although their judgments may rely on scientific research, scientific progress is necessarily and excruciatingly slow. This highlights a fundamental incompatibility between the precedential nature of the law and the unrelenting need for high-quality science to take time to ponder, digest, and deliberate. Because of the precedential nature of legal decision-making, enshrining underdeveloped ideas has harmful path-dependent effects. Hence, peer review by the relevant scientific community, although far from perfect, is clearly necessary. For redistricting, technical scientific communities as well as the social scientific and legal communities are all relevant and central, with none taking over the role of another.

The relationship of technology with the goals of democracy must not be underappreciated, or overappreciated. Technological progress can never be stopped, but we must carefully manage its impact so that it leads to improved societal outcomes. The indispensable ingredient for success will be how humans design and oversee the processes we use for managing technological innovation.

Acknowledgments: W.K.T.C. has been an expert witness for A. Philip Randolph Institute v. Householder, Agre et al. v. Wolf et al., and The League of Women Voters of Pennsylvania et al. v. The Commonwealth of Pennsylvania et al.

Facebook's Red Team Hacks Its Own AI Programs – WIRED

In 2018, Canton organized a risk-a-thon in which people from across Facebook spent three days competing to find the most striking way to trip up those systems. Some teams found weaknesses that Canton says convinced him the company needed to make its AI systems more robust.

One team at the contest showed that using different languages within a post could befuddle Facebook's automated hate-speech filters. A second discovered the attack used in early 2019 to spread porn on Instagram, but it wasn't considered an immediate priority to fix at the time. "We forecast the future," Canton says. "That inspired me that this should be my day job."

In the past year, Canton's team has probed Facebook's moderation systems. It also began working with another research team inside the company that has built a simulated version of Facebook called WW that can be used as a virtual playground to safely study bad behavior. One project is examining the circulation of posts offering goods banned on the social network, such as recreational drugs.

The red team's weightiest project aims to better understand deepfakes, imagery generated using AI that looks like it was captured with a camera. The results show that preventing AI trickery isn't easy.

Deepfake technology is becoming easier to access and has been used for targeted harassment. When Canton's group formed last year, researchers had begun to publish ideas for how to automatically filter out deepfakes. But he found some results suspicious. "There was no way to measure progress," he says. "Some people were reporting 99 percent accuracy, and we were like, 'That is not true.'"

Facebook's AI red team launched a project called the Deepfake Detection Challenge to spur advances in detecting AI-generated videos. It paid 4,000 actors to star in videos featuring a variety of genders, skin tones, and ages. After Facebook engineers turned some of the clips into deepfakes by swapping people's faces around, developers were challenged to create software that could spot the simulacra.

The results, released last month, show that the best algorithm could spot deepfakes not in Facebook's collection only 65 percent of the time. That suggests Facebook isn't likely to be able to reliably detect deepfakes soon. "It's a really hard problem, and it's not solved," Canton says.

Canton's team is now examining the robustness of Facebook's misinformation detectors and political ad classifiers. "We're trying to think very broadly about the pressing problems in the upcoming elections," he says.

Most companies using AI in their business don't have to worry as Facebook does about being accused of skewing a presidential election. But Ram Shankar Siva Kumar, who works on AI security at Microsoft, says they should still worry about people messing with their AI models. He contributed to a paper published in March that found 22 of 25 companies queried did not secure their AI systems at all. "The bulk of security analysts are still wrapping their head around machine learning," he says. "Phishing and malware on the box is still their main thing."

Last fall Microsoft released documentation on AI security developed in partnership with Harvard that the company uses internally to guide its security teams. It discusses threats such as model stealing, where an attacker sends repeated queries to an AI service and uses the responses to build a copy that behaves similarly. That stolen copy can either be put to work directly or used to discover flaws that allow attackers to manipulate the original, paid service.
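
As a rough illustration of the model-stealing threat described above (a hedged sketch with made-up models and data, not Microsoft's documentation or any real service's API), an attacker only needs query access to train a look-alike:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Stand-in "victim": imagine this is a paid prediction service reachable only via queries.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
victim = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0).fit(X, y)

# Attacker step 1: send synthetic queries to the service and record its answers.
queries = np.random.default_rng(1).normal(size=(5000, 20))
responses = victim.predict(queries)

# Attacker step 2: train a local surrogate on the query/response pairs alone.
surrogate = DecisionTreeClassifier(random_state=0).fit(queries, responses)

# The surrogate can now be probed offline for weaknesses, or used in place of the
# paid service; measure how often it agrees with the victim on fresh inputs.
X_fresh, _ = make_classification(n_samples=500, n_features=20, random_state=2)
agreement = (surrogate.predict(X_fresh) == victim.predict(X_fresh)).mean()
print(f"surrogate matches the victim on {agreement:.0%} of fresh inputs")
```

The fidelity of such a copy depends on how well the attacker's queries cover the input space, which is why rate limiting and query auditing show up as defenses in this kind of guidance.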

Battista Biggio, a professor at the University of Cagliari who has been publishing studies on how to trick machine-learning systems for more than a decade, says the tech industry needs to start automating AI security checks.

Companies use batteries of preprogrammed tests to check for bugs in conventional software before it is deployed. Biggio says improving the security of AI systems in use will require similar tools, potentially building on attacks he and others have demonstrated in academic research.

That could help address the gap Kumar highlights between the numbers of deployed machine-learning algorithms and the workforce of people knowledgeable about their potential vulnerabilities. However, Biggio says biological intelligence will still be needed, since adversaries will keep inventing new tricks. "The human in the loop is still going to be an important component," he says.

8 biggest AI trends of 2020, according to experts – The Next Web

Artificial intelligence is one of the fastest moving and least predictable industries. Just think about all the things that were inconceivable a few years back: deepfakes, AI-powered machine translation, bots that can master the most complicated games, etc.

But it never hurts to try our chances at predicting the future of AI. We asked scientists and AI thought leaders about what they think will happen in the AI space in the year to come. Here's what you need to know.

As Jeroen Tas, Philips' Chief Innovation & Strategy Officer, told TNW: "AI's main impact in 2020 will be transforming healthcare workflows to the benefit of patients and healthcare professionals alike, while at the same time reducing costs. Its ability to acquire data in real-time from multiple hospital information flows (electronic health records, emergency department admissions, equipment utilization, staffing levels, etc.) and to interpret and analyze it in meaningful ways will enable a wide range of efficiency and care enhancing capabilities."

This will come in the form of optimized scheduling, automated reporting, and automatic initialization of equipment settings, Tas explained, which will be customized to an individual clinician's way of working and an individual patient's condition: features that improve the patient and staff experience, result in better outcomes, and contribute to lower costs.

"There is tremendous waste in many healthcare systems related to complex administration processes, lack of preventative care, and over- and under-diagnosis and treatment. These are areas where AI could really start to make a difference," Tas told TNW. "Further out, one of the most promising applications of AI will be in the area of Command Centers, which will optimize patient flow and resource allocation."

Philips is a key player in the development of necessary AI-enabled apps seamlessly being integrated into existing healthcare workflows. Currently, one in every two researchers at Philips worldwide works with data science and AI, pioneering new ways to apply this tech to revolutionizing healthcare.

For example, Tas explained how combining AI with expert clinical and domain knowledge will begin to speed up routine and simple yes/no diagnoses, not replacing clinicians but freeing up more time for them to focus on the difficult, often complex, decisions surrounding an individual patient's care: "AI-enabled systems will track, predict, and support the allocation of patient acuity and availability of medical staff, ICU beds, operating rooms, and diagnostic and therapeutic equipment."

"2020 will be the year of AI trustability," Karthik Ramakrishnan, Head of Advisory and AI Enablement at Element AI, told TNW. "2019 saw the emergence of early principles for AI ethics and risk management, and there have been early attempts at operationalizing these principles in toolkits and other research approaches. The concept of explainability (being able to explain the forces behind AI-based decisions) is also becoming increasingly well known."

There has certainly been a growing focus on AI ethics in 2019. Early in the year, the European Commission published a set of seven guidelines for developing ethical AI. In October, Element AI, which was co-founded by Yoshua Bengio, one of the pioneers of deep learning, partnered with the Mozilla Foundation to create data trusts and push for the ethical use of AI. Big tech companies such as Microsoft and Google have also taken steps toward making their AI development conformant to ethical norms.

The growing interest in ethical AI comes after some visible failures around trust and AI in the marketplace, Ramakrishnan reminded us, such as the Apple Pay rollout, or the recent surge in interest regarding the Cambridge Analytica scandal.

"In 2020, enterprises will pay closer attention to AI trust whether they're ready to or not. Expect to see VCs pay attention, too, with new startups emerging to help with solutions," Ramakrishnan said.

"We'll see a rise of data synthesis methodologies to combat data challenges in AI," Rana el Kaliouby, CEO and co-founder of Affectiva, told TNW. "Deep learning techniques are data-hungry, meaning that AI algorithms built on deep learning can only work accurately when they're trained and validated on massive amounts of data. But companies developing AI often find it challenging getting access to the right kinds of data, and the necessary volumes of data."

"Many researchers in the AI space are beginning to test and use emerging data synthesis methodologies to overcome the limitations of real-world data available to them. With these methodologies, companies can take data that has already been collected and synthesize it to create new data," el Kaliouby said.

"Take the automotive industry, for example. There's a lot of interest in understanding what's happening with people inside of a vehicle as the industry works to develop advanced driver safety features and to personalize the transportation experience. However, it's difficult, expensive, and time-consuming to collect real-world driver data. Data synthesis is helping address that: for example, if you have a video of me driving in my car, you can use that data to create new scenarios, i.e., to simulate me turning my head, or wearing a hat or sunglasses," el Kaliouby added.

Thanks to advances in areas such as generative adversarial networks (GANs), many areas of AI research can now synthesize their own training data. Data synthesis, however, doesn't eliminate the need for collecting real-world data, el Kaliouby reminds: "[Real data] will always be critical to the development of accurate AI algorithms. However, [data synthesis] can augment those data sets."
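To make the idea concrete, below is a minimal, illustrative sketch of GAN-style data synthesis in Python with PyTorch. It is not Affectiva's pipeline or any system named in this article; the one-dimensional "real" data set, network sizes, and training settings are placeholder assumptions chosen only to show how a generator can learn to produce new samples that resemble collected data.

```python
# A toy GAN that learns to imitate a small 1-D "real" data set, illustrating
# data synthesis in miniature. All sizes and values here are placeholders.
import torch
import torch.nn as nn

real_data = torch.randn(1000, 1) * 0.5 + 2.0   # stand-in for collected real-world data

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))   # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Train the discriminator to separate real samples from generated ones.
    real = real_data[torch.randint(0, real_data.size(0), (64,))]
    fake = G(torch.randn(64, 8)).detach()
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + loss_fn(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator to produce samples the discriminator accepts as real.
    fake = G(torch.randn(64, 8))
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

synthetic = G(torch.randn(500, 8)).detach()   # new samples to augment the real set
```

In practice the generator and discriminator would operate on images or sensor sequences rather than single numbers, and synthesized samples would be validated before being added to any training set.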

"Neural network architectures will continue to grow in size and depth and produce more accurate results, becoming better at mimicking human performance on tasks that involve data analysis," Kate Saenko, Associate Professor at the Department of Computer Science at Boston University, told TNW. "At the same time, methods for improving the efficiency of neural networks will also improve, and we will see more real-time and power-efficient networks running on small devices."

Saenko predicts that neural generation methods such as deepfakes will also continue to improve and create ever more realistic manipulations of text, photos, videos, audio, and other multimedia that are undetectable to humans. The creation and detection of deepfakes has already become a cat-and-mouse chase.

As AI enters more and more fields, new issues and concerns will arise. There will be more scrutiny of the reliability and bias behind these AI methods as they become more widely deployed in society, for example, more local governments considering a ban on AI-powered surveillance because of privacy and fairness concerns, Saenko said.

Saenko, who is also the director of BU's Computer Vision and Learning Group, has a long history in researching visual AI algorithms. In 2018, she helped develop RISE, a method for scrutinizing the decisions made by computer vision algorithms.

"In 2020, expect to see significant new innovations in the area of what IBM calls 'AI for AI': using AI to help automate the steps and processes involved in the life cycle of creating, deploying, managing, and operating AI models to help scale AI more widely into the enterprise," said Sriram Raghavan, VP of IBM Research AI.

Automating AI has become a growing area of research and development in the past few years. One example is Google's AutoML, a tool that simplifies the process of creating machine learning models and makes the technology accessible to a wider audience. Earlier this year, IBM launched AutoAI, a platform for automating data preparation, model development, feature engineering, and hyperparameter optimization.
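As a rough, hedged illustration of one slice of that lifecycle automation, the sketch below uses scikit-learn's GridSearchCV to search hyperparameters automatically instead of tuning them by hand. It is a deliberately simplified stand-in: tools like AutoML and AutoAI automate much more (data preparation, feature engineering, model selection), and the data set and parameter grid here are arbitrary choices made only for the example.

```python
# Automated hyperparameter search as a small stand-in for AutoML-style tooling.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_breast_cancer(return_X_y=True)   # example data set, chosen arbitrarily

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100, 200], "max_depth": [4, 8, None]},
    cv=5,                                     # 5-fold cross-validation per candidate
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```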

"In addition, we will begin to see more examples of the use of neurosymbolic AI, which combines statistical data-driven approaches with powerful knowledge representation and reasoning techniques to yield more explainable and robust AI that can learn from less data," Raghavan told TNW.

An example is the Neurosymbolic Concept Learner (NSCL), a hybrid AI model developed by researchers at IBM and MIT. NSCL combines classical rule-based AI and neural networks and shows promise in solving some of the endemic problems of current AI models, including large data requirements and a lack of explainability.

"2020 will be the year that the manufacturing industry embraces AI to modernize the production line," said Massimiliano Versace, CEO and co-founder of Neurala. "For the manufacturing industry, one of the biggest challenges is quality control. Product managers are struggling to inspect each individual product and component while also meeting deadlines for massive orders."

Versace believes that AI, integrated as part of existing workflows, will be able to augment and address this challenge: "In the same way that the power drill changed the way we use screwdrivers, AI will augment existing processes in the manufacturing industry by reducing the burden of mundane and potentially dangerous tasks, freeing up workers' time to focus on innovative product development that will push the industry forward."

"Manufacturers will move towards the edge," Versace adds. With AI and data becoming centralized, manufacturers are forced to pay massive fees to top cloud providers to access the data that keeps their systems up and running. The challenges of cloud-based AI have spurred a slate of innovations toward creating edge AI: software and hardware that can run AI algorithms without needing a link to the cloud.

"New routes to training AI that can be deployed and refined at the edge will become more prevalent. As we move into the new year, more and more manufacturers will begin to turn to the edge to generate data, minimize latency problems and reduce massive cloud fees. By running AI where it is needed (at the edge), manufacturers can maintain ownership of their data," Versace told TNW.

AI will remain a top national military and economic security issue in 2020 and beyond, said Ishan Manaktala, CEO of Symphony AyasdiAI. Already, governments are investing heavily in AI as a possible next competitive front. China has invested over $140 billion, while the UK, France, and the rest of Europe have plowed more than $25 billion into AI programs. The U.S., starting late, spent roughly $2 billion on AI in 2019 and will spend more than $4 billion in 2020.

Manaktala added, "But experts urge more investment, warning that the U.S. is still behind." A recent National Security Commission on Artificial Intelligence (NSCAI) report noted that China is likely to overtake U.S. research and development spending in the next decade. The NSCAI outlined five points in its preliminary report: invest in AI R&D, apply AI to national security missions, train and recruit AI talent, protect U.S. technology advantages, and marshal global coordination.

"We predict drug discovery will be vastly improved in 2020 as manual visual processes are automated, because visual AI will be able to monitor and detect cellular drug interactions on a massive scale," Emrah Gultekin, CEO at Chooch, told TNW. "Currently, years are wasted in clinical trials because drug researchers are taking notes, then entering those notes in spreadsheets and submitting them to the FDA for approval. Instead, highly accurate analysis driven by AI can lead to radically faster drug discoveries."

Drug development is a tedious process that can take up to 12 years and involve the collective efforts of thousands of researchers. The costs of developing new drugs can easily exceed $1 billion. But there's hope that AI algorithms can speed up the process of experimentation and data gathering in drug discovery.

"Additionally, cell counting is a massive problem in biological research, not just in drug discovery. People are hunched over microscopes or sitting in front of screens with clickers in their hands counting cells. There are expensive machines that attempt to count, inaccurately. But visual AI platforms can perform this task in seconds, with 99% accuracy," Gultekin added.

This post is brought to you by Philips.

Excerpt from:

8 biggest AI trends of 2020, according to experts - The Next Web

Facebook to use artificial intelligence in bid to improve renewable energy storage – CNBC

Facebook and Carnegie Mellon University have announced they are trying to use artificial intelligence (AI) to find new "electrocatalysts" that can help to store electricity generated by renewable energy sources.

Electrocatalysts can be used to convert excess solar and wind power into other fuels, such as hydrogen and ethanol, that are easier to store. However, today's electrocatalysts are rare and expensive, with platinum being a good example, and finding new ones hasn't been easy as there are billions of ways that elements can be combined to make them.

Researchers in the catalysis community can currently test tens of thousands of potential catalysts a year, but Facebook and Carnegie Mellon believe they can increase the number to millions, or even billions, of catalysts with the help of AI.

The social media giant and the university on Wednesday released some of their own AI software "models" that can help to find new catalysts but they want other scientists to have a go as well.

To support these scientists, Facebook and Carnegie Mellon have released a data set with information on potential catalysts that scientists can use to create new pieces of software.

Facebook said the "Open Catalyst 2020" data set required 70 million hours of compute time to produce. The data set includes "relaxation" calculations for a million possible catalysts as well as supplemental calculations.

Relaxations, a widely used measurement in catalysis, are calculated to see if a particular combination of elements will make a good catalyst.

Each relaxation calculation, which simulates how atoms from different elements will interact, takes scientists around eight hours on average to work out, but Facebook says AI software can potentially do the same calculations in under a second.

If you study catalysis, "that's going to dramatically change how you do your work and how you do your research," said Larry Zitnick, a research scientist at Facebook AI Research, on a call ahead of the announcement.

In recent years, tech giants like Facebook and Google have attempted to use AI to speed up scientific calculations and observations across multiple fields.

For example, DeepMind, an AI lab owned by Google parent Alphabet, developed AI software capable of spotting tumors in mammograms faster and more accurately than human researchers.

View original post here:

Facebook to use artificial intelligence in bid to improve renewable energy storage - CNBC

The rise of AI in medicine – Varsity Online

During the coronavirus pandemic, it's unlikely that AI doctors would work at all: the depth of moral decisions that need to be made simply can't be accommodated by a program. (Image credit: Vidal Balielo Jr.)

By now, it's almost old news that artificial intelligence (AI) will have a transformative role in medicine. Algorithms have the potential to work tirelessly, at faster rates and now with potentially greater accuracy than clinicians.

In 2016, it was predicted that machine learning would displace much of the work of radiologists and anatomical pathologists. In the same year, a University of Toronto professor controversially announced that we should "stop training radiologists now." But is it really the beginning of the end for some medical specialties?

"AI excels in pattern identification" in determining pathologies that look certain ways, according to Elliot Fishman, a radiology and oncology professor at Johns Hopkins University and a key proponent of AI integration into medicine. Ultimately, specialties that rely heavily on visual pattern recognition, notably radiology, pathology, and dermatology, are those believed to be at the greatest risk. With the advent of virtual primary care services, such as Babylon, General Practice may also have to adapt in the future.

Pattern recognition functions

In January of this year, an article in Nature reported that AI systems outperformed doctors in breast cancer detection. The study was carried out by an international team, including researchers from Google Health and Imperial College London, on mammograms obtained from almost 29,000 women. Screening mammography currently plays a critical role in early breast cancer detection, ensuring early initiation of treatment and yielding improved patient prognoses. False negatives are a significant problem in mammography. The study found that use of AI was associated with an absolute reduction in false negatives of 9.4% in the USA and 2.7% in the UK. Similarly, use of the AI system led to a reduction in false positives of 5.7% in the USA and 1.2% in the UK. The study suggested that the AI outperformed each of the six radiologists individually, and was equivalent to the double-reading system of two doctors currently used in the UK. These developments have already had perceptible consequences in practice: algorithms can eliminate the need for a second radiologist when interpreting mammograms. However, critically, one radiologist remains responsible for the diagnosis.

AI can also be deployed to predict the cognitive decline that leads to Alzheimer's disease... allowing early intervention and treatment

Earlier studies have also yielded similar results: a 2017 study published in Nature examined the use of algorithms in dermatology. The study, from Stanford University, involved an algorithm developed by computer scientists using an initial database of 130,000 skin disease images. When compared to the success rates of 21 dermatologists, the algorithm was almost equally successful. Likewise, in a study conducted by the European Society for Medical Oncology, it was found that AI exceeded the performance of 58 international dermatologists. A system reliant on a form of machine learning known as Deep Learning Convolutional Neural Network (CNN) missed fewer melanomas (the most lethal form of skin cancer), and misdiagnosed benign moles (or nevi) as malignant less often than the group of dermatologists.

Further applications in medicine

However, the prospects of AI technology extend beyond the clear applications in cancer diagnosis and radiology: recent studies have also demonstrated that AI may be able to detect genetic diseases in infants by rapid whole-genome sequencing and interpretation. Considering that time is critical in treating gravely ill children, such automated techniques can be crucial in diagnosing children who are suspected of having genetic diseases.

In addition, AI can also be deployed to predict the cognitive decline that leads to Alzheimer's disease. Such computational models can be highly valuable at the individual level, allowing early intervention and treatment planning. FDA approval has also been granted to a number of companies for such technologies; these include Imagen's OsteoDetect, an algorithm intended to aid wrist fracture detection. In addition, algorithms may have functions in other specialties such as anaesthesiology, in monitoring and responding to physiological signs.

Limitations of AI

Despite the benefits that AI integration into clinical practice can provide, the technology is not without limitations. Machine learning algorithms are highly dependent on the quality and quantity of the data input, typically requiring millions of observations to function at suitable levels. Biases in data collection can heavily impact performance; for instance, racial or gender representation in the original data set can lead to differences in diagnostic abilities of the system for different groups, consequently leading to disparities in patient outcomes. Considering that certain pathologies, including melanoma, present differently between races and with different incidences, this can often lead to both later diagnoses and poorer outcomes for racial minorities, as found in a number of studies. Volunteer bias of the data collected is also a pertinent consideration; for example, although lactate concentration is a good predictor of death, this is not routinely measured in healthy individuals.

Considering the magnitude of what is at stake raises the question of whether it is appropriate to rely solely on machines without any human input.

Other key problems which may arise include how algorithms overfit predictions based on random errors in the data, resulting in unstable estimates which vary between data samples. In addition, clinicians may take a more cautious approach when making a diagnosis. It may therefore appear that a human underperforms compared to an algorithm, since their actions may yield a lower accuracy in tumour identification; however, this cautious approach could lead to fewer critical cases being missed.

Ultimately, the tendency for humans to favour propositions given by automated systems over non-automated ones, known as automation bias, may exacerbate these problems.

Attempts to replace GPs with AI have been unsuccessful

The success of AI integration into clinical practice crucially depends on the receptiveness of patients. Babylon, a start-up company based in the UK, was developed to give medical advice to patients using chat services. Although Babylon has been referred to in the UK media as "the biggest disruption in medical practice in years" and a "game-changer", as quoted on Babylon's website, it is questionable how successful the service has been so far: Babylon has been slow in recruiting patients, and this month it came under fire for data breaches. The fact that patients lose access to their regular GP if they sign up to Babylon is perhaps a key contributing factor in Babylon's slow take-off. It appears, then, that human contact is highly valued by patients after all, at least for some medical specialties.

Potential effect of COVID-19

The COVID-19 pandemic, with its requirements for social distancing, could potentially accelerate the use of AI. COVID-related restrictions could change patients' perception of remote medical consultations, paving the way for increased receptiveness to primary healthcare apps, including Babylon. The pandemic has also highlighted the inadequacies in fast internet access throughout the country. This may encourage increased government investment in broadband infrastructure, which may, in turn, facilitate broader penetration of AI technology. The increased pressure on the NHS may also encourage greater use of algorithms to delegate menial tasks, as is already seen in specialties such as radiology.

The future

AI will likely become an indispensable tool in clinical medicine, facilitating the work of professionals by automating mundane, albeit essential, tasks. By reducing the medical workload, this could allow healthcare professionals to dedicate greater efforts to other aspects of their work, including patient interaction. As emphasised by the President of the Royal College of Radiologists, radiologists could instead focus more of their time on interventional radiology and on managing more complex cases. Innovation may aid clinicians and augment their decision-making capabilities to improve their efficiency and diagnostic accuracy; however, it remains doubtful whether technology can fully replace these roles. After all, the magnitude of what is at stake, human life, raises the question of whether it is appropriate to rely solely on machines without any human input. Therefore, it remains likely that human involvement will need to continue across medical specialties, although this may be in a reduced or adapted form.


Originally posted here:

The rise of AI in medicine - Varsity Online

Combining AI and Analog Forecasting to Predict Extreme Weather – Eos

The future of extreme weather prediction may lie in modernizing a piece of technology from the past. Researchers recently developed a new technique to augment an old-fashioned weather forecasting method with the power of deep learning, a subset of artificial intelligence (AI). Once the deep learning system is fully trained, it is able to predict extreme weather events like heat waves and cold spells with ~80% accuracy up to 5 days beforehand.

"This is a very inexpensive way of predicting extreme events at least a few days ahead of time," said Ashesh Chattopadhyay, a mechanical engineering graduate student at Rice University in Houston and lead author on the project.

The project began when Pedram Hassanzadeh, an assistant professor of mechanical engineering at Rice, realized that extreme weather events like heat waves and cold spells usually arise from very unusual atmospheric circulation patterns that could potentially be taught to a pattern recognition computer program.

"And then we realized that this was how weather prediction used to be done," said Hassanzadeh, who was the senior author on the study, recently published in the Journal of Advances in Modeling Earth Systems.

Analog forecasting, as the technique is called, operates on the straightforward principle of making predictions by comparing current weather patterns to similar patterns (or analogs) from the past. Historically, it had a key role in weather prediction (in fact, it was crucial for planning the D-Day Normandy invasion of 1944) but is hindered by challenges in finding enough useful analogs from past weather catalogs. With the rise of computer-based numerical weather prediction (NWP) in the 1950s, analog forecasting was rendered nigh obsolete.

Today, NWP is the gold standard used for day-to-day weather prediction. However, it is computationally expensive and has not been completely reliable in predicting extreme weather events. By contrast, the new method of combining AI deep learning with analog forecasting is relatively inexpensive. "While weather models need to be run on supercomputers, you can run this pretty much on a computer, like even my laptop," said Chattopadhyay.

The researchers trained a novel deep learning pattern recognition program called capsule neural networks (or CapsNets) to identify patterns of atmospheric circulation in the days leading up to an extreme weather event in North America, either a heat wave or a cold spell. To do this, the researchers first used a different unsupervised computer algorithm to identify four different regions, or clusters, within North America where extreme weather events could occur, plus a cluster for no extreme weather. CapsNet was then trained with a data set of hundreds of maps depicting atmospheric circulation patterns paired with the subsequent extreme weather event clusters occurring days later. CapsNet was able to teach itself to predict whether a particular circulation pattern would lead to an extreme weather event and which of the four geographic regions it would occur in.
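As a rough sketch of the shape of that training setup (and not the authors' actual CapsNet code), the snippet below trains an ordinary convolutional classifier to map circulation-pattern maps to one of five labels: four geographic clusters with an extreme event, plus "no event". The array sizes, network layers, and random placeholder data are assumptions made only for illustration.

```python
# Placeholder classifier: circulation-pattern maps -> one of five event labels.
import torch
import torch.nn as nn

maps = torch.randn(200, 1, 64, 64)      # stand-in circulation-pattern maps
labels = torch.randint(0, 5, (200,))    # stand-in cluster labels (4 regions + none)

model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 5),         # five output classes
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    logits = model(maps)                # predict event cluster for each map
    loss = loss_fn(logits, labels)
    opt.zero_grad(); loss.backward(); opt.step()
```

The real study replaces this plain convolutional network with capsule networks and trains on reanalysis-style atmospheric data rather than random arrays.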

When CapsNet was trained using temperature information in addition to atmospheric circulation patterns, its accuracy for predicting extreme winter weather ranged from 82% one day in advance to 76.7% five days in advance. In summer, its accuracy was 79.3% one day in advance and 75.8% five days in advance.

CapsNet was more accurate in its predictions compared to the more commonly known convolutional neural network (CNN), a class of deep learning that many scientists in the environment and weather research community are trying to use, Hassanzadeh said. CNNs run into what's called a Picasso problem: they can robustly detect features within an image (e.g., the eyes and nose) but not their relative positions or orientations (e.g., the networks do not care where those eyes and nose are located on a face). CapsNet, on the other hand, tracks those relative positions and orientation information, which may be key for more accurate predictions.

"One point of our paper is that capsule neural networks (CapsNets) are much better tools for our problems," said Hassanzadeh.

In addition, the researchers found that CapsNets did not require as much data for training; feeding it fewer samples did not decrease its prediction accuracy like it did for the convolutional neural network. CapsNets could thus potentially address difficulties in obtaining enough high-quality data to effectively use deep learning.

"This paper brings analog forecasting back to life, but using deep learning," said Redouane Lguensat, a postdoctoral researcher at the University of Grenoble's Institute of Environmental Geosciences in France. Lguensat, who was not involved in the study, worked on analog forecasting for his graduate research, but in the classical way, without deep learning algorithms. To his knowledge, this study is the first time that CapsNets have been applied to weather-related problems like forecasting.

Though this study was a proof of concept, the overarching goal is to augment (but not replace) current numerical weather prediction systems by giving early warnings about which regions may have extreme weather events in the future, said Chattopadhyay. "Next, we want to go beyond 5 days to 10 days, 15 days, maybe a month, possibly subseasonal scales."

Hassanzadeh is optimistic about the potential for deep learning programs to help scientists understand what features and mechanisms in atmospheric dynamics lead to extreme weather events in the first place. He also hopes this work encourages people from the computer science community, from the atmosphere dynamics community, and from the weather forecasting community to work more closely together.

"I think this is an interesting area for the application of deep learning," he said.

Richard J. Sima (@richardsima), Science Writer

Read more:

Combining AI and Analog Forecasting to Predict Extreme Weather - Eos

Element AI raises $151M on a $600-700M valuation to help companies build and run AI solutions – TechCrunch

While tech giants like Google and Amazon build and invest in a multitude of artificial intelligence applications to grow their businesses, a startup has raised a big round of funding to help those that are not technology businesses by nature also jump into the AI fray.

Element AI, the very well-funded, well-connected Canadian startup that has built an AI systems integrator of sorts to help other companies develop and implement artificial intelligence solutions (an Accenture for machine learning, neural network-based solutions, computer vision applications and so on), is today announcing a further 200 million Canadian dollars ($151.3 million) in funding, money that it plans to use to commercialise more of its products, as well as to continue working on R&D, specifically on new AI solutions.

"Operationalising AI is currently the industry's toughest challenge, and few companies have been successful at taking proofs-of-concept out of the lab, embedding them strategically in their operations, and delivering actual business impact," said Element AI CEO Jean-François (JF) Gagné in a statement. "We are proud to be working with our new partners, who understand this challenge well, and to leverage each other's expertise in taking AI solutions to market."

The company did not disclose its valuation in the short statement announcing the funding, nor has it ever talked about it publicly, but PitchBook notes that as of its previous funding round of $102 million back in 2017, it had a post-money valuation of $300 million, a figure a source close to the company confirmed to me. From what I understand, the valuation now is between $600 million and $700 million, a mark of how Element AI has grown, which is especially interesting considering how quiet it has been.

The funding is being led by Caisse de dépôt et placement du Québec (CDPQ), along with participation from McKinsey & Company and its advanced analytics company QuantumBlack; and the Québec government. Previous investors DCVC (Data Collective), Hanwha Asset Management, BDC (Business Development Bank of Canada), Real Ventures and others also participated, with the total raised to date now at C$340 million ($257 million). Other strategic investors in the company have included Microsoft, Nvidia and Intel.

Element AI was started under an interesting premise that goes something like this: AI is the next major transformational shift, not just in computing, but in how businesses operate. But not every business is a technology business by DNA, and that creates a digital divide of sorts between the companies that can identify a problem that can be fixed by AI (and build or invest in the technology to do that) and those that cannot.

Element AI opened for business from the start as a kind of AI shop for the latter kinds of enterprises, to help them identify areas where they could build AI solutions to work better, and then build and implement those solutions. Today it offers products in insurance, financial services, manufacturing, logistics and retail a list that is likely to get longer and deeper with this latest funding.

One catch about Element AI is that the company has not been very forthcoming about its customer list up to now those that have been named as partners include Bank of Canada and Gore Mutual, but there is a very notable absence of case studies or reference customers on its site.

However, from what we understand, this is more a by-product of the companies (both Element AI and its customers) wishing to keep involvement quiet for competitive and other reasons; and in fact there are apparently a number of large enterprises that are building and deploying long-term products working with the startup. We have also been told big investors in this latest round (specifically McKinsey) are bringing in customers of their own by way of this deal, expanding that list. Total bookings are a significant double-digit million figure at the moment.

"With this transaction, we are investing capital and expertise alongside partners who are ideally suited to transform Element AI into a company with a commercial focus that anticipates and creates AI products to address clients' needs," said Charles Émond, EVP and Head of Québec Investments and Global Strategic Planning at la Caisse, in a statement. CDPQ launched an AI Fund this year, and this investment is coming out of that fund to help export more of the AI tech and IP that has been incubated and developed in the region. "Through this fund, la Caisse wants to actively contribute to build and strengthen Québec's global presence in artificial intelligence."

Management consultancies like McKinsey would be obvious competitors to Element AI, but in fact, they are turning out to be customer pipelines, as traditional system integrators also often lack the deeper expertise needed in newer areas of computing. (And that's even considering that McKinsey itself has been investing in building its own capabilities, for example through its acquisition of the analytics firm QuantumBlack.)

"For McKinsey, this investment is all about helping our clients to further unlock the potential of AI and Machine Learning to improve business performance," said Patrick Lahaie, senior partner and Montreal managing partner for McKinsey & Company, in a statement. "We look forward to collaborating closely with the talented team at Element AI in Canada and globally in our shared objective to turn cutting-edge thinking and technology into AI assets which will transform a wide range of industries and sectors. This investment fits into McKinsey's long-term AI strategy, including the 2015 acquisition of QuantumBlack, which has grown substantially since then and will spearhead the collaboration with Element AI on behalf of our Firm."

See the article here:

Element AI raises $151M on a $600-700M valuation to help companies build and run AI solutions - TechCrunch

Which Military Has the Edge in the A.I. Arms Race? – OZY

The race to be the new Big Brother is powered by robotics. Can the U.S. keep up with its rivals?

Think of artificial intelligence, and the mind often goes to industrial robots and benign surveillance systems. Increasingly, though, these are steppingstones for Big Brother to enhance capabilities in domestic security and international military warfare.

China has co-opted a controversial big data policing program into law enforcement, both for racial profiling of its Uighur minority population and for broader citizen surveillance through facial recognition. Wuhan has an entirely AI-staffed police station. But experts say China's artificial intelligence research is also being adapted for unconventional military warfare in the country's bid to dominate the field over the next decade.

AI could form an important pillar of the new cold war brewing between the U.S. and China over trade, technology and geopolitical influence. From the U.S. to Russia and American allies like Israel, military researchers are embedding AI into cybersecurity initiatives and robotic systems that provide remote surgical support. Theyre using it for combat simulation and data processing.

By 2030, experts say, a third of the combat capacity of Russia, America's archrival, is expected to be driven by artificial intelligence, including AI-guided missiles with the ability to change target midflight. Israel has adopted a networked sensor-to-shooter system to aid the Israel Defense Forces in remotely patrolling the many contentious regions under their control. Other countries, including the U.K., Brazil, Australia, South Korea and Iran, are also investing in research into AI-powered weapons, tanks and other armed platforms.

We owe it to the American people and our men and women in uniform to adopt A.I. principles that reflect our nation's values of a free and open society.

Lt. Gen. Jack Shanahan, director, Joint Artificial Intelligence Center

All of this has prompted the U.S. to accelerate its own AI research to preserve its status as the world's sole military superpower. Department of Defense researchers are building a robotic submarine system that will detect underwater mines and other anti-submarine enemy action without putting the lives of American soldiers at risk. Where the U.S. is trying to market itself as different is when it comes to the targets of its military AI capabilities.

"We owe it to the American people and our men and women in uniform to adopt AI principles that reflect our nation's values of a free and open society," Lt. Gen. Jack Shanahan, director of the Joint Artificial Intelligence Center, told journalists in February. "This runs in stark contrast to Russia and China, whose use of AI tech for military purposes raises serious concern about human rights, ethics and international norms."

Not everyone is buying that. In April 2018, more than 3,000 Google employees urged CEO Sundar Pichai in an open letter to discontinue working on the Pentagon's Project Maven. They argued that the tech giant should not be in the business of war because "we cannot outsource the moral responsibility of our technologies to third parties." Pichai and Google pulled out of the program.

Meanwhile, this May, Australian Prime Minister Scott Morrison received a prototype of a jet-powered drone from Boeing Australia to flank and protect its manned combat aircraft. Brazil and India have set up panels for their militaries to work with cutting-edge labs on developing artificial intelligence. Britain's Ministry of Defense has launched its own AI lab, as has the South Korean army, which has also used a sentry robot in the Demilitarized Zone along the border with North Korea.

Fenced in by U.S. sanctions, the Iranian government is funding the country's AI tech. An international conference on robotics in Tehran two years ago discussed, among other things, using drones. Researchers in the country have proposed a new Ministry of Artificial Intelligence. And last October, the Iranian military released images of miniature robots that can slide under enemy tanks.

The U.S. knows it can't let up. The DOD spent $7.4 billion on AI, cloud computing and big data in 2017, and requested $927 million for AI alone in 2020.

"That's just a fraction, though, of the commercial spending by companies on research into autonomous systems development," says Mary Cummings, director of the Humans and Autonomy Laboratory at Duke University and one of America's first female naval fighter pilots. "The future of AI in military systems is directly tied to the ability of engineers to design autonomous systems that can demonstrate the ability to act and react on their own with the level of sophistication that humans bring through their knowledge and reasoning," she explains.

China is one step ahead of the U.S. there. While the DOD's Defense Advanced Research Projects Agency has struggled to develop a drone that can transport troops, Cummings says, China is believed to have designed a commercial drone that can transport passengers. "The tech space is clearly leading the charge, and the military is playing catch-up," says Toby Walsh, a professor of artificial intelligence at the University of New South Wales in Sydney.

For the U.S., it's a race against time to commandeer influence in this new global arms race. Win or lose, it's a high-stakes game to be the new Big Brother.

View original post here:

Which Military Has the Edge in the A.I. Arms Race? - OZY

AI could help design better drugs that don’t clash with other medication – MIT Technology Review

A new system that can predict, from chemical structures, how a proposed drug will interact with other medications could help prevent adverse drug interactions, one of the leading causes of patient death.

Why it matters: According to the FDA, serious adverse drug interactions could kill more than 100,000 hospitalized people in the US every year. But traditional ways of avoiding such interactions during drug development require expensive and laborious physical testing and clinical trials to catalogue all of a proposed drug's possible chemical interactions with existing ones.

How it works: The system takes in two different drugs and generates a prediction for how or whether they will interact. To get there, the researchers first developed a new way to represent the 3D chemical structures of drugs in a character format that could be read by a neural network. The drug melatonin, for example, is represented by CC(=O)NCCC1=CNc2c1cc(OC)cc2, while morphine is represented by CN1CCC23C4OC5=C(O)C=CC(CC1C2C=CC4O)=C35.

They then translated a database of known drug interactions into this format and trained a neural network. The resulting system predicts the probability that two drugs will have an adverse interaction and shows the particular parts of the molecule that contributed to that prediction.
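A minimal, hypothetical sketch of that general idea follows; it is not IQVIA's model. It encodes each drug's structure string character by character, embeds the characters, and scores a drug pair with a small network that outputs an interaction probability. The vocabulary, padding length, and layer sizes are arbitrary choices for the example, and the untrained model's output is meaningless.

```python
# Toy pairwise interaction scorer over character-encoded structure strings.
import torch
import torch.nn as nn

melatonin = "CC(=O)NCCC1=CNc2c1cc(OC)cc2"
morphine  = "CN1CCC23C4OC5=C(O)C=CC(CC1C2C=CC4O)=C35"

vocab = sorted(set(melatonin + morphine))          # toy character vocabulary
char_to_idx = {c: i for i, c in enumerate(vocab)}

def encode(smiles, max_len=48):
    """Map a structure string to a fixed-length tensor of character indices."""
    idx = [char_to_idx[c] for c in smiles][:max_len]
    idx += [0] * (max_len - len(idx))              # pad short strings
    return torch.tensor(idx)

class PairModel(nn.Module):
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.head = nn.Sequential(nn.Linear(2 * dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, a, b):
        # Average-pool character embeddings for each drug, then score the pair.
        ea = self.embed(a).mean(dim=-2)
        eb = self.embed(b).mean(dim=-2)
        return torch.sigmoid(self.head(torch.cat([ea, eb], dim=-1)))

model = PairModel(len(vocab))
prob = model(encode(melatonin), encode(morphine))  # untrained, so roughly 0.5
print(float(prob))
```

Trained on a database of known interactions, a model of this general shape would be fitted so that the output probability tracks observed adverse interactions; the published system also highlights which parts of each molecule drove the prediction, which this sketch does not attempt.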

The results: When the researchers tested their system on two common drug interaction data sets, it performed better than state-of-the-art results from existing AI systems. The paper, which was led by researchers at health information technology company IQVIA, is being presented at the Association for the Advancement of Artificial Intelligence (AAAI) conference later this week.

Co-pilot: The new techniques for analyzing chemical data could have many other applications, including drug and material design. "There's just an awful lot of the modern world that depends on chemistry," says David Cox, the IBM director of the MIT-IBM Watson AI Lab, a member of which coauthored the paper. "There's tremendous potential for AI to be a copilot for us, augmenting our ability to reason about chemical interactions, properties, and qualities."

Correction: An earlier version of this article misstated David Cox's title as director. He is the IBM director.


Excerpt from:

AI could help design better drugs that don't clash with other medication - MIT Technology Review

China fosters innovative AI research through competition – TNW

The Chinese government has been unequivocal in its goal to become the global AI leader. Three of the largest tech companies in China (Sougou, China's second-biggest search giant; Sinovation Ventures; and internet firm Bytedance) have joined forces to bring that plan to fruition. The companies pooled funds and data to create the AI Challenger contest.

The contest features a prize pool of 2 million yuan, or approximately $300,000. Entrants will presumably be judged on innovation in the field of AI research. Contestants will eventually have access to a data-set featuring 300,000 image-based objects and more than 10 million text-based data entries.

Researchers participating in the program essentially have access to enough data to jump-start any machine-learning project. AI is taught by feeding it data and allowing algorithms to create conclusions and patterns, with the goal being a system that gets better over time. According to China Daily, Wang Xiaochuan, chief executive of Sougou, said:

In the US, professors and researchers would complain about falling [behind] big corporations due to the lack of data. Here we hope to set up a longstanding contest and cultivate talents by providing them with a huge data pool.

China represents one of the top three nations for AI research, alongside the US and India. At this point experts might be arguing over whether we should fear killer robots or not, but it's clear that there's no stopping the full-scale implementation of AI into our lives.

It's apparent that China intends to set the pace, so the question for everyone else is: will second place be good enough in the AI race?

Internet companies launch AI contest with largest open data pool on China Daily


Link:

China fosters innovative AI research through competition - TNW

This AI Generates Photos Using Only Text Captions as a Guide – PetaPixel

Researchers at the Allen Institute for Artificial Intelligence (AI2) have created a machine learning algorithm that can produce images using only text captions as its guide. The results are somewhat terrifying but if you can look past the nightmare fuel, this creation represents an important step forward in the study of AI and imaging.

Unlike some of the genuinely mind-blowing machine learning algorithms we've shared in the past (see here, here, and here), this creation is more of a proof-of-concept experiment. The idea was to take a well-established computer vision model that can caption photos based on what it sees in the image, and reverse it: producing an AI that can generate images from captions, instead of the other way around.

This is a fascinating area of study and, as MIT Technology Review points out, it shows in real terms how limited these computer vision algorithms really are. Even a small child can do both of these things readily: describe an image in words, or conjure a mental picture of an image based on those words. Yet when the Allen Institute researchers tried to generate a photo from a text caption using a model called LXMERT, it generated nonsense in return.

So they set out to modify LXMERT and created X-LXMERT. And while the results that X-LXMERT generates from a text caption aren't exactly coherent, they're not nonsense either; the general idea is usually there. Here are some example images created by the researchers using various captions:

And here are a few examples we generated by plugging various captions into a live demo they created using their model:

The above are all based on captions provided by the researchers, and most of them seem to at least contain the major concepts in each description. However, when we tried to create totally new captions based on more esoteric concepts like "photographer," "photography studio," or even the word "camera," the results fell apart:

While the results from and limitations of X-LXMERT probably don't inspire either awe or fear of the impending AI revolution, the groundbreaking masking technique that the researchers developed is an important first step in teaching an AI to fill in the blanks that any text description inherently leaves out.

This will eventually lead to better image recognition and computer vision, which can only help improve tasks that actually matter to the readers of this site. In other words: the better a computer is at understanding what you mean when you describe an image or image editing task, the more complex the tasks it will be able to perform on that image.

To learn more about this creation or see some more creepy AI-generated images, read the full research paper here or check out an interactive live demo of the model at this link.

(via DPReview)

Originally posted here:

This AI Generates Photos Using Only Text Captions as a Guide - PetaPixel

A US Air Force pilot is taking on AI in a virtual dogfight here’s how to watch it – The Next Web

An AI-controlled fighter jet will battle a US Air Force pilot in a simulated dogfight next week, and you can watch the action online.

The clash is the culmination of DARPA's AlphaDogfight competition, which the Pentagon's mad science wing launched to increase trust in AI-assisted combat. DARPA hopes this will raise support for using algorithms in simpler aerial operations, so pilots can focus on more challenging tasks, such as organizing teams of unmanned aircraft across the battlespace.

The three-day event was scheduled to take place in-person in Las Vegas from August 18-20, but the COVID-19 pandemic led DARPA to move the event online.

Before the teams take on the Air Force on August 20, the eight finalists will test their algorithms against five enemy AIs developed by Johns Hopkins Applied Physics Laboratory. Their mission is to recognize and exploit the weaknesses and mistakes of their rivals, and maneuver to a position of control beyond the enemy's weapon employment zone.


The next day, the teams will compete against each other in a round-robin tournament. The top four will then enter a single-elimination tournament for the AlphaDogfight Trials Championship. Finally, the winner will take on a US Air Force fighter pilot flying a virtual reality F-16 simulator, to test whether their system can vanquish the military's elite.

The contest aims to develop a base of AI developers for DARPA's Air Combat Evolution (ACE) program, which is trying to further automate aerial combat. Dogfighting is expected to be a rare part of this in the future, but the duels will provide evidence that AI can handle a high-end fight.

"Regardless of whether the human or machine wins the final dogfight, the AlphaDogfight Trials is all about increasing trust in AI," said Colonel Dan "Animal" Javorsek, program manager in DARPA's Strategic Technology Office. "If the champion AI earns the respect of an F-16 pilot, we'll have come one step closer to achieving effective human-machine teaming in air combat, which is the goal of the ACE program."

You can watch the battles unfold by signing up online. Registration closes on August 17 for US citizens, and on August 11 for everyone else. If you're not a US citizen, you'll also need to submit one of DARPA's visit request forms.


Original post:

A US Air Force pilot is taking on AI in a virtual dogfight here's how to watch it - The Next Web

How AI and ML Applications Will Benefit from Vector Processing – EnterpriseAI

As expected, artificial intelligence (AI) and machine learning (ML) applications are already having an impact on society. Many industries that we tap into daily, such as banking, financial services and insurance (BFSI), and digitized health care, can benefit from AI and ML applications to help them optimize mission-critical operations and execute functions in real time.

The BFSI sector is an early adopter of AI and ML capabilities. Natural language processing (NLP) is being implemented for personally identifiable information (PII) privacy compliance, chatbots and sentiment analysis, for example, mining social media data for underwriting and credit scoring, as well as investment research. Predictive analytics assess which assets will yield the highest returns. Other AI and ML applications include digitizing paper documents and searching through massive document databases. Additionally, anomaly detection and prescriptive analytics are becoming critical tools for the cybersecurity sector of BFSI for fraud detection and anti-money laundering (AML) [1].

Scientists searching for solutions to the COVID-19 pandemic rely heavily on data acquisition, processing and management in health care applications. They are turning to AI, ML and NLP to track and contain the coronavirus, as well as to gain a more comprehensive understanding of the disease. Applications for AI and ML include medical research for developing a vaccine, tracking the spread of the disease, evaluating the effects of COVID-19 interventions, using natural language processing of social media to understand the impact on society, and more [2].

Processing a Data Avalanche

The fuel for BFSI applications like fraud detection, AML applications and chatbots, or health applications such as tracking the COVID-19 pandemic, is decision support systems (DSSs) containing vast amounts of structured and unstructured data. Overall, experts predict that by 2025, 79 trillion GB of data will have been generated globally [3]. This avalanche of data is making it difficult for scalar-based high-performance computers to run data mining (DM) over a DSS effectively and efficiently for its intended applications. More powerful accelerator cards, such as vector processing engines supported by optimized middleware, are proving able to efficiently process enterprise data lakes to populate and update data warehouses, from which meaningful insights can be presented to the intended decision makers.

Resurgence of Vector Processors

There is currently a resurgence in vector processing, which, due to the cost, was previously reserved for the most powerful supercomputers in the world. Vector processing architectures are evolving to provide supercomputer performance in a smaller, less expensive form factor using less power, and they are beginning to outpace scalar processing for mainstream AI and ML applications. This is leading to their implementation as the primary compute engine in high performance computing applications, freeing up scalar processors for other mission critical processing roles.

Vector processing has unique advantages over scalar processing when operating on certain types of large datasets. In fact, a vector processor can be more than 100 times faster than a scalar processor, especially when operating on the large amounts of statistical data and attribute values typical for ML applications, such as sparse matrix operations.

While both scalar and vector processors rely on instruction pipelining, a vector processor pipelines not only the instructions but also the data, which reduces the number of fetch then decode steps, in turn reducing the number of cycles for decoding. To illustrate this, consider the simple operation shown in Figure 1, in which two groups of 10 numbers are added together. Using a standard programming language, this is performed by writing a loop that sequentially takes each pair of numbers and adds them together (Figure 1a).

Figure 1: Executing the task defined above, the scalar processor (a) must perform more steps than the vector processor (b).

When performed by a vector processor, this task requires only two address translations, and fetch and decode is performed only once (Figure 1b), rather than the 10 times required by a scalar processor (Figure 1a). And because the vector processor's code is smaller, memory is used more efficiently. Modern vector processors also allow different types of operations to be performed simultaneously, further increasing efficiency.
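The contrast can be shown in a few lines of Python, using NumPy's array operations as a software stand-in for hardware vector execution (an analogy, not a description of an actual vector engine): the scalar style performs one addition per loop iteration, while the vector style expresses the whole addition as a single operation.

```python
# Scalar-style loop versus a single vectorized operation over whole arrays.
import numpy as np

a = np.arange(10, dtype=np.float64)
b = np.arange(10, dtype=np.float64)

# Scalar-style: one add per loop iteration, as in Figure 1a.
c_scalar = np.empty_like(a)
for i in range(len(a)):
    c_scalar[i] = a[i] + b[i]

# Vector-style: the whole addition is expressed as one operation, as in Figure 1b.
c_vector = a + b

assert np.array_equal(c_scalar, c_vector)
```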

To bring vector processing capabilities into applications less esoteric than scientific ones, it is possible to combine vector processors with scalar CPUs to produce a vector parallel computer. This system comprises a scalar host processor, a vector host running Linux, and one or more vector processor accelerator cards (or vector engines), creating a heterogeneous compute server that is ideal for broad AI and ML workloads and data analytics applications. In this scenario, the primary computational components are the vector engines, rather than the host processor. These vector engines also have self-contained memory subsystems for increased system efficiency, rather than relying on the host processor's direct memory access (DMA) to route packets of data through the accelerator card's I/O pins.

Software Matters

Processors perform only as well as the compilers and software instructions that are delivered to them. Ideally, they should be based on industry-standard programming languages such as C/C++. For AI and ML application development, there are several frameworks available, with more emerging. A well-designed vector engine compiler should support both industry-standard programming languages and open-source AI and ML frameworks such as TensorFlow and PyTorch. A similar approach should be taken for database management and data analytics, using proven frameworks such as Apache Spark and Scikit-Learn. This software strategy allows for seamless migration of legacy code to vector engine accelerator cards. Additionally, by using the message passing interface (MPI) to implement distributed processing, the configuration and initialization become transparent to the user.
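As a small illustration of the MPI pattern mentioned above, the sketch below uses mpi4py (assumed installed, launched with a command such as `mpirun -n 4 python script.py`) to scatter chunks of an array across ranks, compute a partial result on each, and gather the results at the root. It is a generic distributed-processing pattern, not NEC-specific code.

```python
# Scatter data across MPI ranks, compute locally, gather the partial results.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    data = np.arange(size * 4, dtype=np.float64)  # toy data set on the root rank
    chunks = np.split(data, size)                 # one chunk per rank
else:
    chunks = None

chunk = comm.scatter(chunks, root=0)   # each rank receives its chunk
partial = chunk.sum()                  # local computation on the chunk
totals = comm.gather(partial, root=0)  # collect partial results at the root

if rank == 0:
    print("total:", sum(totals))
```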

Conclusion

AI and ML are driving the future of computing and will continue to permeate more applications and services. Many of these deployments will be implemented in smaller server clusters, perhaps even a single chassis. Accomplishing such a feat requires revisiting the entire spectrum of AI technologies and heterogeneous computing. The vector processor, with advanced pipelining, is a technology that proved itself long ago. Vector processing paired with middleware optimized for parallel pipelining is lowering the entry barriers for new AI and ML applications, and is set to tackle challenges, both today and in the future, that were once within reach only of the hyperscale cloud providers.

References

About the Author

Robbert Emery is responsible for commercializing NEC Corporation's advanced technologies in HPC and AI/ML platform solutions. His role includes discovering and lowering the entry point and initial investment for enterprises to realize the benefits of big data analytics in their operations. Robbert has developed a career of over 20 years in the ICT industry's emerging technologies, including mobile network communications, embedded technologies and high-volume manufacturing. Prior to joining NEC's technology commercialization accelerator, NEC X Inc., in Palo Alto, California, Robbert led the product and business plan for an embedded solutions company that resulted in a leadership position, in terms of both volume and revenue. He has an MBA from SJSU's Lucas College and Graduate School of Business, as well as a bachelor's degree in electrical engineering from California Polytechnic State University.

Related

Read more here:

How AI and ML Applications Will Benefit from Vector Processing - EnterpriseAI

DGWorld Brings the Future of AI and Digitization with WIZO – Business Wire

DUBAI, United Arab Emirates--(BUSINESS WIRE)--DGWorld, the leading AI and Digitalization Company, has launched the new version of its humanoid robot at the Ai Everything Summer Conference held on July 16, 2020 at the Dubai World Trade Center.

WIZO serves a multi-functional purpose in pushing beyond the boundaries and limitations of a service robot with big data analysis & management, centralized control system, safe & secure database, multimodal interaction, flexible movement, facial recognition, speech engine, proprietary SLAM technology for autonomous navigation, and automatic docking & recharging.

Eng. Bilal Al-Zoubi, Founder and CEO at DGWorld, said: "DGWorld aims to create sustainable solutions by utilizing the power of AI. The new and improved WIZO will make a positive impact on the way business is done and optimize the human experience. By delegating standard tasks to the robot, employees can focus on human qualities that allow them to excel and flourish."

Hardware and software features can be added to the humanoid robot to meet business needs: temperature checking, a payment system, and integration with mobile apps, to name a few. The flexibility in customization makes WIZO versatile in a wide variety of industries, such as Retail, Healthcare, Education, Transportation, Hospitality, Entertainment, and Government services.

"As life is slowly going back to normal with the current pandemic, some of the precautions taken are here to stay. Limiting human contact will limit the spread of any virus. Deploying WIZO at entry check-points, reception desks, or to take care of specific tasks will result in work efficiency and, more importantly, help people stay safe. We all know robots are part of the future. The future has already started," Eng. Bilal Al-Zoubi continued.

This is the third version of humanoid robots that DGWorld has developed. The first two versions were 3D printed and completely manufactured by the company including all hardware and software. With the latest WIZO, DGWorld wanted to reduce costs, save manufacturing time and increase the quality, therefore, the company chose a reliable vendor to work with and enable the intelligent features.

For more information:

Website http://www.dgworld.com

YouTube youtube.com/dgworld

Facebook facebook.com/DGWorlds/

Instagram instagram.com/dgworlds/

LinkedIn linkedin.com/company/digiroboticstechnologies

Twitter twitter.com/DGWorld3

*Source: AETOSWire

Continue reading here:

DGWorld Brings the Future of AI and Digitization with WIZO - Business Wire