How AI and mosquito sex parties can save the world – VentureBeat


Diptera.ai has raised a $3 million seed round to fight mosquitoes with mosquitoes and AI-based sex sorting.

Jerusalem-based Diptera.ai has figured out a way to use AI to fight the growing threat of mosquitoes, which are spreading malaria and viruses like Zika, dengue, and yellow fever. While the method for fighting mosquitoes has been around for decades, AI can take it to a new level and democratize what was otherwise a very costly and localized abatement effort.

We'll get to the sex parties in a bit.

Diptera.ai is using computer vision and eco-friendly technology to make it easier to control mosquito populations through the sterile insect technique, which releases sterilized male mosquitoes to mate with female mosquitoes, Diptera.ai CEO Vic Levitin said in an interview with VentureBeat.

"We think we can disrupt the $100 billion pest control market," Levitin said, noting that many other pest control methods are toxic to both humans and the environment.

Above: Mosquito larvae.

Image Credit: Diptera.ai

The company could help mitigate the death toll from mosquitoes. More than just a nuisance, they are the deadliest creatures on Earth, killing more than 700,000 people a year and infecting hundreds of millions more with diseases. A recent book, The Mosquito by Timothy Winegard, cites estimates that mosquitoes have killed 52 billion people, nearly half of the humans who have ever lived.

Diptera.ai's technology works for a host of insects, including household and agricultural pests. The company is starting with mosquitoes, a rapidly growing problem with no effective solution to date. Due to climate change, by 2050 half of the world's population (including the U.S. and Europe) will be living among disease-spreading mosquitoes.

With its technology in the testing stage now, Diptera.ai plans to offer an affordable subscription service for what it calls a highly effective and eco-friendly biological pest control method. Most pest control methods are based on insecticides that are toxic to both humans and the environment. Despite its high effectiveness, sterilization has thus far been limited to a handful of pests because of the prohibitive costs of implementing it.

Standard control methods are losing effectiveness as mosquitoes rapidly become resistant to existing pesticides. Moreover, public opinion and regulation limit the use of toxic insecticides. As a result, people increasingly find themselves unable to enjoy the outdoors without being at risk from emerging and potentially devastating diseases.

Above: Ariel Livne, CTO of Diptera.ai, at a lab in Israel.

Image Credit: Diptera.ai

Levitin believes his company can stop mosquitoes by the billions, mainly by releasing sterile males to mate with females. "We create mosquito sex parties," he said.

Trust Ventures led the funding round, with participation from existing investors IndieBio and Fresh.fund, as well as new investors.

Diptera.ai was started by Ariel Livne, Elly Ordan, and Levitin. In October 2020, the team graduated from the IndieBio Accelerator, and it now has 10 employees. The seed round should enable the company to finish its pilot, which could grow into a product launch.

"We've raised enough money to prove the concept," Levitin said.

At some point, the Environmental Protection Agency will likely have to approve the Diptera.ai solution.

Above: Elly Ordan of Diptera.ai inspects mosquito larvae.

Image Credit: Diptera.ai

The sterile insect technique (SIT) is a biological pest control method in which mostly government-run entities release overwhelming numbers of sterile male insects into the wild. These sterile males mate with female mosquitoes, which are the only mosquitoes that bite humans and animals. The female mosquitoes only mate once in their lifetimes, but they each lay hundreds of eggs. If they can be tricked into mating with sterile males, then they won't create offspring.

"The sterile insect technique is the most effective," Levitin said. "Mosquitoes mate once as females in their lives. If they mate with sterile males, then it suppresses the population."
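
The arithmetic behind that suppression is simple enough to sketch in code. The toy model below is purely illustrative (the population sizes, egg counts, and survival rate are invented, not Diptera.ai's figures): each female mates once, her mate is drawn at random from the wild and sterile males present, and only matings with fertile males produce offspring.

```python
# Toy model of sterile insect technique (SIT) suppression.
# Illustrative only: all parameters are invented, not Diptera.ai's.

def simulate_sit(wild_adults=10_000, sterile_per_gen=50_000,
                 eggs_per_female=100, survival=0.01, generations=6):
    females = males = wild_adults / 2
    for gen in range(1, generations + 1):
        # A female's mate is fertile with probability wild / (wild + sterile).
        fertile_fraction = males / (males + sterile_per_gen)
        offspring = females * fertile_fraction * eggs_per_female * survival
        females = males = offspring / 2        # assume a 50/50 sex ratio
        print(f"generation {gen}: ~{int(2 * females):,} wild adults")

simulate_sit()
```

Because the sterile releases stay constant while the wild population shrinks, the fertile fraction collapses each generation, which is the suppression effect Levitin describes.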

This technique has been used in the U.S. to control the spread of the Mediterranean fruit fly, with billions being released into the wild each month. But it is expensive due to high production and distribution costs and is often limited to localized control efforts.

The technique dates to the 1950s, when it was pioneered in Russia and the U.S., and it has since been used to control the tsetse fly in Africa.

In 2018, the Debug project saw Google's Verily unit release millions of sterile mosquitoes into the area of Fresno, California, resulting in a temporary 93% suppression of the population during mosquito season, which runs from around March through October.

Above: Vic Levitin (center) is cofounder and CEO of Diptera.ai.

Image Credit: Diptera.ai

Diptera.ai's market research suggests its solution is 20 times less expensive than existing SIT methods.

For most insects, the bottleneck for SIT is sex separation. Currently, mosquitoes are sex-sorted late in their development, when they are fragile and have a remaining lifespan of only a few days. "Shipping them is impractical," Levitin said.

Normally, implementing SIT requires building and maintaining a local mosquito factory near every release site. Diptera.ai combines computer vision, deep biology, and automation to sex-sort mosquitoes (and other insects) at the larval stage, which was previously considered impossible. This allows for centralized mass production of sterile male mosquitoes that can then be shipped to end customers for release.

"We can sex-sort them at the larva stage," said Levitin. "Larvae used to be considered asexual. Nobody tried to sex-sort them. This is where we are innovative. We can tell the sex when they are larvae. That's two weeks before they become adults. So we can produce them in mass production and then ship them across the country. This gives us economies of scale where we can offer it as a service."
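
Diptera.ai has not published how its sorting system works, but the task Levitin describes, deciding from an image whether a larva is male or female, has the shape of a standard binary image classifier. The PyTorch sketch below is a hypothetical stand-in under that assumption; the architecture, image size, and labels are invented for illustration.

```python
# Hypothetical binary larva-sex classifier (Diptera.ai's actual model
# and data are not public; this is a generic stand-in).
import torch
import torch.nn as nn

model = nn.Sequential(                       # tiny CNN for 64x64 grayscale crops
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),              # two logits: male, female
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch standing in for labeled larva images (0 = male, 1 = female).
images = torch.randn(8, 1, 64, 64)
labels = torch.randint(0, 2, (8,))

optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.3f}")
```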

Mosquitoes exist as larvae for a lot longer than they live as adults. If you can identify the males and females at this stage, then there is a lot more time to ship them to the right place in the country, and then the whole U.S. could be served by a mass-production factory that churns out sterilized mosquitoes by the billions.

Once it separates out the males, Diptera.ai sterilizes them with radiation, using what amounts to a microwave oven built for sterilization. The device is about the size of a pizza oven, and it's not dangerous to humans, Levitin said.

Most of the mosquitoes in the U.S. are of the Asian tiger variety (Aedes albopictus), and these mosquitoes don't travel far, making it easier to take down populations with localized efforts. By contrast, mosquitoes in Africa can fly long distances, and that makes it harder to control the population, Levitin said.

"Just like the cloud disrupted the computing industry with affordable, on-demand computing power, Diptera.ai disrupts pest control with an affordable SIT-as-a-service," Levitin said. "Instead of building and maintaining insect production factories, customers will subscribe to our service to receive shipments of sterile males ready for release."

Diptera.ai's service should make eradication affordable for luxury resorts, residential complexes, or even homeowners. It has to be a subscription because the mosquitoes will come back, year after year, if you don't take them out regularly.

"It's like the Mafia," Levitin said. "You are paying protection money to us."

By the way, this is the second Israeli startup that I've seen take up the fight against mosquitoes. Bzigo uses computer vision to find where a mosquito lands in your home, then it shines a laser on it so you can zap the mosquito yourself. No matter how much Diptera.ai succeeds, I imagine there will always be a need for Bzigo's product.


Taser bought two computer vision AI companies – Engadget

The Axon AI group will include about 20 programmers and engineers. They'll be tasked with developing AI capabilities specifically for public safety and law enforcement. The backbone of the Axon AI platform comes from Dextro Inc., whose computer-vision and deep learning system can search the visual contents of a video feed in real time. Technology from the Fossil Group, which Taser also acquired, will support Dextro's search capability by "improving the accuracy, efficiency and speed of processing images and video," according to the company's press release.

The AI platform is the latest addition to Taser's Axon ecosystem, which includes everything from body and dash cameras to evidence and interview logging. Altogether, the Axon system handles 5.2 petabytes of data from more than half of the nation's major city police departments.

With the new AI system in place, law enforcement could finally get a handle on all that footage. "Axon AI will greatly reduce the time spent preparing videos for public information requests or court submission," Taser CEO Rick Smith said in a statement. "This will lay the foundation for a future system where records are seamlessly recorded by sensors rather than arduously written by police officers overburdened by paperwork."


What Would an AI Doomsday Actually Look Like? – Futurism

Imagining AI's Doomsday

Artificial intelligence (AI) is going to transform the world, but whether it will be a force for good or evil is still subject to debate. To that end, a team of experts gathered for Arizona State University's (ASU) Envisioning and Addressing Adverse AI Outcomes workshop to talk about the worst-case scenarios we could face if AI veers toward becoming a serious threat to humanity.

"There is huge potential for AI to transform so many aspects of our society in so many ways. At the same time, there are rough edges and potential downsides, like any technology," says AI scientist Eric Horvitz.

As an optimistic supporter of everything AI has to offer, Horvitz has a very positive outlook on the future of AI. But he's also pragmatic enough to recognize that for the technology to consistently advance and move forward, it has to earn the public's trust. For that to happen, all possible concerns surrounding the technology have to be discussed.

That conversation, specifically, was what the workshop hoped to tackle. Forty scientists, cyber-security experts, and policymakers were divided into two teams to hash out the numerous ways AI could cause trouble for the world. The red team was tasked with imagining all the cataclysmic scenarios AI could incite, and the blue team was asked to devise solutions to defend against such attacks.

These situations had to be realistic rather than purely hypothetical, anchored in what's possible given our current technology and what we expect to come from AI over the next few decades.

Among the scenarios described were automated cyber attacks (wherein a cyber weapon is intelligent enough to hide itself after an attack and prevent all efforts to destroy it), stock markets being manipulated by machines, self-driving technology failing to recognize critical road signs, and AI being used to rig or sway elections.

Not all scenarios were given sufficient solutions, either, illustrating just how unprepared we are at present to face the worst possible situations AI could bring. For example, in the case of intelligent, automated cyber attacks, it would apparently be quite easy for attackers to use unsuspecting internet gamers to cover their tracks, using something like an online game to obscure the attacks themselves.

As entertaining as it may be to think up all of these wild doomsday scenarios, it's actually a deliberate first step toward real conversations and awareness about the threat AI could pose. John Launchbury, of the U.S. Defense Advanced Research Projects Agency, hopes it will lead to concrete agreements on rules of engagement for cyber war, automated weapons, and robot troops.

The purpose of the workshop, after all, isn't to incite fear, but to realistically anticipate the various ways the technology can be misused and, hopefully, get a head start on defending ourselves against it.


Beyond Limits to Expand Industrial AI in Energy with NVIDIA – GlobeNewswire

LOS ANGELES, Dec. 16, 2020 (GLOBE NEWSWIRE) -- Beyond Limits, an industrial and enterprise-grade AI technology company built for the most demanding sectors, today announced it is working with NVIDIA to advance its initiative for bringing digital transformation to the energy sector.

Beyond Limits will collaborate with NVIDIA experts on joint go-to-market strategies for Beyond Limits products and solutions in the energy sector. The company will also take advantage of NVIDIA technical support and GPU-optimized AI software such as containers, models and application frameworks from the NVIDIA NGC catalog to improve the performance and efficiency of its software development cycle.

"AI has the potential to make a major impact on problems facing the heart of the global energy business, but the technology requires high levels of computing power to operate on the level and scale required by many of today's global producers," said AJ Abdallat, CEO of Beyond Limits. "That's why we're so excited to collaborate with NVIDIA, a leading provider of AI computing platforms. With NVIDIA technology support and expertise, Beyond Limits is better positioned to offer faster, more intelligent and efficient AI-based solutions for maximizing energy production and profitability."

Breakthroughs in novel high-performance AI solutions are projected to have significant impacts throughout the energy industry. One key challenge facing the upstream oil and gas sector is the resource requirement for optimizing well deployments, especially when data on a region's geological properties is highly uncertain. To overcome this problem, Beyond Limits developed a novel deep reinforcement learning (DRL) framework trained using NVIDIA A100 Tensor Core GPUs, capable of running 167,000 complex scenario simulations in 36 hours. Following initial tests, the DRL framework yielded a 208% increase in net present value (NPV) by predicting and recommending well placements, based on the number of actions explored and the expected financial return from reservoir production over time.
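
Beyond Limits has not released the framework itself, so the sketch below is only a bandit-style simplification of the loop the announcement describes: an agent repeatedly tries candidate well placements, scores each one with a (here invented) stand-in for the reservoir simulation's expected financial return, and keeps running value estimates to converge on a recommended placement.

```python
# Bandit-style simplification of the well-placement search described above
# (illustrative only; Beyond Limits' DRL framework, simulator, and reward
# model are not public, and this invented NPV function stands in for them).
import random

GRID = 10                                    # 10x10 grid of candidate sites

def simulated_npv(site):
    """Invented stand-in for a reservoir simulation scoring one placement."""
    x, y = site
    return -((x - 7) ** 2 + (y - 3) ** 2) + random.gauss(0, 2.0)

values, counts = {}, {}
for episode in range(5_000):
    if not values or random.random() < 0.1:  # epsilon-greedy exploration
        site = (random.randrange(GRID), random.randrange(GRID))
    else:
        site = max(values, key=values.get)   # exploit the best estimate so far
    reward = simulated_npv(site)
    counts[site] = counts.get(site, 0) + 1
    old = values.get(site, 0.0)
    values[site] = old + (reward - old) / counts[site]  # incremental mean

print("recommended placement:", max(values, key=values.get))
```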

"The NVIDIA A100 offers the performance and reliability required to meet the demands of the modern-day energy sector," said Marc Spieler, global energy director at NVIDIA. "The ability to process hundreds of thousands of AI simulations in real time provides the insight required for Beyond Limits to develop scalable applications that advance energy technologies."

Beyond Limits' Cognitive AI applies human-like reasoning to solve problems, combining encoded human knowledge with machine learning techniques and allowing systems to adapt and continue to operate even when data is in short supply or uncertain. As a result, Beyond Limits customers are able to elevate operational insights, improve operating conditions, enhance performance at every level, and ultimately increase profits. For more information, please visit https://www.beyond.ai/solutions/beyond-energy.

About Beyond Limits

Beyond Limits is an industrial and enterprise-grade artificial intelligence company built for the most demanding sectors including energy, utilities, and healthcare.

Beyond traditional artificial intelligence, Beyond Limits' unique Cognitive AI technology combines numeric techniques like machine learning with knowledge-based reasoning to produce actionable intelligence. Customers implement Beyond Limits AI to boost operational insights, improve operating conditions, enhance performance at every level, and ultimately increase profits.

Founded in 2014, Beyond Limits leverages a significant investment portfolio of advanced technology developed at Caltech's Jet Propulsion Laboratory for NASA space missions. The company was recently honored by CB Insights on its 2020 AI 100 list of the most innovative artificial intelligence startups, and by Frost & Sullivan with its North American Technology Innovation Award.

For more information, please visit http://www.beyond.ai.


How soft law is used in AI governance – Brookings Institution

As an emerging technology, artificial intelligence is pushing regulatory and social boundaries in every corner of the globe. The pace of these changes will stress the ability of public governing institutions at all levels to respond effectively. Their traditional toolkit, the creation or modification of regulations (also known as hard law), requires ample time and bureaucratic procedure to function properly. As a result, governments are unable to swiftly address the issues created by AI. An alternative for managing these effects is soft law, defined as a program that creates substantial expectations that are not directly enforceable by government. As soft law grows in popularity as a tool to govern AI systems, it is imperative that organizations gain a better understanding of current deployments and best practices, a goal we aim to facilitate with the launch of a new database documenting these tools.

The governance of emerging technologies has relied on soft law for decades. Entities such as governments, private sector firms, and non-governmental organizations have all attempted to address emerging technology issues through principles, guidelines, recommendations, private standards, and best practices, among other instruments. Compared to their hard law counterparts, soft law programs are more flexible and adaptable, and any organization can create or adopt one. Once created, programs can be adapted to reactively or proactively address new conditions. Moreover, they are not legally tied to specific jurisdictions, so they can easily apply internationally. Soft law can serve a variety of objectives: it can complement or substitute for hard law, operate as a main governance tool, or serve as a back-up option. For all these reasons, soft law has become the most common form of AI governance.

The main weakness of soft law governance tools is their lack of enforcement. In place of enforcement mechanisms, the proper implementation of soft law relies on aligning the incentives of a program's stakeholders. Unless these incentives are clearly defined and well understood, the effectiveness and credibility of soft law will be questioned. To prevent the creation of soft law programs incapable of managing the risks of AI, it is important that stakeholders consider the inclusion of implementation mechanisms and appropriate incentives.

As AI methods and applications have proliferated, so too have soft law governance mechanisms to oversee them. To build on efforts to document soft law AI governance, the Center for Law, Science and Innovation at Arizona State University is launching a database with the largest compilation to date of soft law programs governing this technology. The data, available here, offer organizations and individuals interested in the soft law governance of AI a reference library to compare and contrast original initiatives or draw inspiration for the creation of new ones.

Using a scoping review, the project identified 634 AI soft law programs published between 2001 and 2019 and labeled them using up to 107 variables and themes. Our data revealed several interesting trends. Among them, we found that AI soft law is a relatively recent phenomenon, with about 90% of programs created between 2017 and 2019. In terms of origin, higher-income countries and regions, such as the United States, the United Kingdom, and Europe, were the most likely hosts for the creation of these instruments.

In the process of identifying stakeholders responsible for generating AI soft law, we found that government institutions have a prominent role in employing these programs. Specifically, more than a third (36%) were created by the public sector, which is evidence that usage of these tools is not confined to the private sector and that they can act as a complement to traditional hard law in guiding AI governance. Multi-stakeholder alliances involving government, the private sector, and non-profits followed with a 21% share of the programs, and non-profit/private sector alliances with 12%.

We also looked at soft law's reliance on the alignment of incentives for implementation. Because government cannot levy a fee or penalty through these programs, stakeholders have to voluntarily agree to participate. Considering this, about 30% of programs in the database publicly mention enforcement or implementation mechanisms. We analyzed these measures and found that they can be divided into four quadrants: internal vs. external, and levers vs. roles. The first dimension captures the location of the resources necessary for a mechanism's operation: whether it relies on those within an organization or on external third parties. Levers are the toolkit of actions or mechanisms (e.g., committees, indicators, commitments, and internal procedures) that an organization can employ to implement or enforce a program. Roles, the counterpart, describe how individuals, the most important resource of any organization, are arranged to execute that toolkit.

Finally, in addition to identifying each program's characteristics, we labeled the text of the programs. This was done by creating 15 thematic categories divided into 78 sub-themes that touch on a wide variety of issues and make it possible to scrutinize how organizations interpret different aspects of AI. The three most labeled themes were education and displacement of labor, transparency and explainability, and ethics. Similarly, the most prevalent sub-themes were general transparency, general mentions of discrimination and bias, and AI literacy.

As AI proliferates and its governance challenges grow, soft law will become an increasingly important part of this technology's governance toolkit. An empirical understanding of the strengths and weaknesses of AI soft law will therefore be crucial for policymakers, technology companies, and civil society as they grapple with how to govern AI in a way that best harnesses its benefits while managing its risks.

By creating the largest compilation of AI soft law programs, our aim is to provide a critical resource for policymakers in all sectors focused on responding to AI governance challenges. Its intent is to aid decision-makers in their pursuit of balancing the advantages and disadvantages of this tool, and to facilitate a deeper understanding of how and when soft law works best. To that end, we hope that the AI soft law database's initial findings can suggest mechanisms for improving the effectiveness and credibility of AI soft law, or even catalyze the creation of new kinds of soft law altogether. After all, the future of AI governance, and by extension AI soft law, is too important not to get right.

Carlos Ignacio Gutierrez is a governance of artificial intelligence fellow at Arizona State University. He completed his Ph.D. in Policy Analysis at the Pardee RAND Graduate School.

Gary Marchant is Regents Professor and Faculty Director of the Center for Law, Science & Innovation, Arizona State University.


AI Won’t Change Companies Without Great UX – Harvard Business Review

Executive Summary

As with the adoption of all technology, user experience trumps technical refinement. Many organizations implementing AI initiatives are making a mistake by focusing on smarter algorithms over compelling use cases. Use cases in which people's jobs become simpler and more productive are essential to AI workplace adoption. Focusing on clearer, crisper use cases means better and more productive relationships between machines and humans. This article offers five use case categories (assistant, guide, consultant, colleague, boss) that emerge when companies choose AI-empowered people and processes over autonomous systems. Each describes how intelligent entities work together to get the job done, and how, depending on the process, AI makes the human element matter even more.

As artificial intelligence algorithms infiltrate the enterprise, organizational learning matters as much as machine learning. How should smart management teams maximize the economic value of smarter systems?

Business process redesign and better training are important, but better use cases, those real-world tasks and interactions that determine everyday business outcomes, offer the biggest payoffs. Privileging smarter algorithms over thoughtful use cases is the most pernicious mistake I see in current enterprise AI initiatives. Something's wrong when optimizing process technologies takes precedence over how work actually gets done.

Unless we're actually automating a process (that is, taking humans out of the loop), AI algorithms should make people's jobs simpler, easier, and more productive. Identifying use cases where AI adds as much value to people's performance as to process efficiencies is essential to successful enterprise adoption. By contrast, companies committed to giving smart machines greater autonomy and control focus on governance and decision rights.

Strategically speaking, a brilliant data-driven algorithm typically matters less than thoughtful UX design. Thoughtful UX designs can better train machine learning systems to become even smarter. The most effective data scientists I know learn from use-case and UX-driven insights. At one industrial controls company, for example, the data scientists discovered that users of one of their smart systems informally used a dataset to help prioritize customer responses. That unexpected use case led to a retraining of the original algorithm.

Focusing on clearer, cleaner use cases means better and more productive relationships between AI and its humans. The division of labor becomes a source of design inspiration and exploration. The quest for better outcomes shifts from training smarter algorithms to figuring out how the use case should evolve. That drives machine learning and organizational learning alike.

Five dominant use case categories emerge when organizations pick AI-empowered people and processes over autonomous systems. Unsurprisingly, these categories describe how intelligent entities work together to get the job done and highlight that a personal touch still matters. Depending on the person, process, and desired outcome, AI can make the human element matter more.

Assistants

Alexa, Siri, and Cortana already embody real-world use cases for AI assistantship. In Amazon's felicitous phrasing, assistants have skills enabling them to perform moderately complex tasks. Whether mediated by voice or chatbot, simple and straightforward interfaces make assistants fast and easy to use. Their effectiveness is predicated as much on people knowing exactly what they need as on algorithmic sophistication. As digital assistants become smarter and more knowledgeable, their task range and repertoire expand. The most effective assistants learn to prompt their users with timely questions and key words to improve both interactions and outcomes.

Guide

Where assistants perform requested tasks, guides help users navigate task complexity to achieve desired outcomes. Using Waze to drive through cross-town traffic troubled by construction is one example; using an augmented-reality tool to diagnose and repair a mobile device or HVAC system would be another. Guides digitally show and tell their humans what their next steps should be and, should missteps occur, suggest alternate paths to success. Guides are smart software sherpas whose domain expertise is dedicated to getting their users to desired destinations.

Consultant

In contrast to guides, consultants go well beyond navigation and destination expertise. AI consultants span use cases where workers need either just-in-time expertise or bespoke advice to solve problems. Consultants, like their human counterparts, offer options and explanations, as well as reasons and rationales. A software development project manager needs to evaluate scheduling trade-offs; AI consultants ask questions and elicit information that allows specific next-step recommendations. AI consultants can include relevant links, project histories, and reports for context. More sophisticated consultants offer strategic advice to complement their tactical recommendations.

Consultants customize their functional knowledge (scheduling, budgeting, resource allocation, procurement, purchasing, graphic design, etc.) to their human clients' use case needs. They are robo-advisers dispassionately dispensing their domain expertise.

Colleague

A colleague is like a consultant but with a data-driven and analytic grasp of the local situation. That is, a colleague's domain expertise is the organization itself. Colleagues have access to the relevant workplace analytics, enterprise budgets, schedules, plans, priorities, and presentations needed to offer organizational advice. Colleague use cases revolve around the advice managers and workers need to work more efficiently and effectively in the enterprise. An AI colleague might recommend referencing or attaching a presentation in an email, which project leaders to ask for advice, what budget template is appropriate for a requisition, or which client contacts need an early warning. Colleagues are more collaborator than tool; they offer data-driven organizational insight and awareness. Like their human counterparts, they serve as sounding boards that help clarify communications, aspirations, and risk.

Boss

Where colleagues and consultants advise, bosses direct. Boss AI tells its humans what to do next. Boss use cases eliminate options, choices, and ambiguity in favor of dictates, decrees, and directives to be obeyed: start doing this; stop doing that; change this schedule; shrink that budget; send this memo to your team.

Boss AI is designed for obedience and compliance; the human in the loop must yield to the algorithm in the system. Boss AI represents the slippery slope to autonomy, the workplace counterpart to an autopilot taking over an airplane cockpit or an automotive collision-avoidance system slamming on the brakes. Specific use cases and circumstances trigger human subordination to software. But bossware's true test is human: if humans aren't sanctioned or fired for disobedience, then the software really isn't a boss.

As the last example illustrates, these distinct categories can swiftly blur into each other. It's easy to conceive of scenarios and use cases where guides become assistants, assistants situationally escalate into colleagues, and consultants transform into bosses. But the fundamental differences and distinctions these five categories present should inject real rigor and discipline into imagining their futures.

Trust is implicit in all five categories. Do workers trust their assistants to do what they've been told, or guides to get them where they want to go? Do managers trust the competence of bossware, or that their colleagues won't betray them? Trust and transparency issues persist regardless of how smart AI software becomes, and they grow even more important as the reasons for decisions become overwhelmingly complex and sophisticated. One risk is that these artificial intelligences evolve, or devolve, into frenemies: software that is simultaneously friend and rival to its human complement. Consequently, use cases become essential to identifying what kinds of interfaces and interactions facilitate human/machine trust.

Use cases may prove vital to empowering smart human/smart machine productivity. But reality suggests their ultimate value may come from how thoughtfully they accelerate the organization's advance toward greater automation and autonomy. The true organizational impact and influence of these categories may be that they prove to be the best way for humans to train their successors.


Stressed on the job? An AI teammate may know how to help – MIT News

Humans have been teaming up with machines throughout history to achieve goals, be it by using simple machines to move materials or complex machines to travel in space. But advances in artificial intelligence today bring possibilities for even more sophisticated teamwork: true human-machine teams that cooperate to solve complex problems.

Much of the development of these human-machine teams focuses on the machine, tackling the technology challenges of training AI algorithms to perform their role in a mission effectively. But less focus, MIT Lincoln Laboratory researchers say, has been given to the human side of the team. What if the machine works perfectly, but the human is struggling?

"In the area of human-machine teaming, we often think about the technology for example, how do we monitor it, understand it, make sure it's working right. But teamwork is a two-way street, and these considerations aren't happening both ways. What we're doing is looking at the flip side, where the machine is monitoring and enhancing the other side the human," says Michael Pietrucha, a tactical systems specialist at the laboratory.

Pietrucha is among a team of laboratory researchers that aims to develop AI systems that can sense when a person's cognitive fatigue is interfering with their performance. The system would then suggest interventions, or even take action in dire scenarios, to help the individual recover or to prevent harm.

"Throughout history, we see human error leading to mishaps, missed opportunities, and sometimes disastrous consequences," says Megan Blackwell, former deputy lead of internally funded biological science and technology research at the laboratory. "Today, neuromonitoring is becoming more specific and portable. We envision using technology to monitor for fatigue or cognitive overload. Is this person attending to too much? Will they run out of gas, so to speak? If you can monitor the human, you could intervene before something bad happens."

This vision has its roots in decades-long research at the laboratory in using technology to "read" a person's cognitive or emotional state. By collecting biometric data such as video and audio recordings of a person speaking and processing these data with advanced AI algorithms, researchers have uncovered biomarkers of various psychological and neurobehavioral conditions. These biomarkers have been used to train models that can accurately estimate the level of a person's depression, for example.

In this work, the team will apply their biomarker research to AI that can analyze an individual's cognitive state, encapsulating how fatigued, stressed, or overloaded a person is feeling. The system will use biomarkers derived from physiological data such as vocal and facial recordings, heart rate, EEG and optical indications of brain activity, and eye movement to gain these insights.

The first step will be to build a cognitive model of an individual. "The cognitive model will integrate the physiological inputs and monitor the inputs to see how they change as a person performs particular fatiguing tasks," says Thomas Quatieri, who leads several neurobehavioral biomarker research efforts at the laboratory. "Through this process, the system can establish patterns of activity and learn a person's baseline cognitive state involving basic task-related functions needed to avoid injury or undesirable outcomes, such as auditory and visual attention and response time."

Once this individualized baseline is established, the system can start to recognize deviations from normal and predict if those deviations will lead to mistakes or poor performance.
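
The laboratory has not published the cognitive model itself, but the baseline-then-deviation idea can be illustrated with simple statistics: fit per-feature baselines during a calibration period, then flag readings whose z-scores drift past a threshold. The features, readings, and threshold in the sketch below are invented for illustration.

```python
# Minimal baseline-and-deviation sketch for physiological monitoring
# (illustrative; Lincoln Laboratory's actual cognitive model is not public).
import statistics

def fit_baseline(calibration):
    """calibration: {feature: readings collected during a rested baseline}."""
    return {f: (statistics.mean(v), statistics.stdev(v))
            for f, v in calibration.items()}

def flag_deviations(sample, baseline, threshold=2.5):
    """Return features whose z-score against baseline exceeds the threshold."""
    flagged = {}
    for feature, value in sample.items():
        mean, sd = baseline[feature]
        z = (value - mean) / sd
        if abs(z) > threshold:
            flagged[feature] = round(z, 1)
    return flagged

baseline = fit_baseline({
    "heart_rate":  [62, 65, 63, 66, 64],      # beats per minute
    "reaction_ms": [310, 290, 305, 298, 302], # response time on a probe task
})
print(flag_deviations({"heart_rate": 91, "reaction_ms": 420}, baseline))
```

A real system would, as Streilein notes, also have to predict whether a flagged deviation will actually degrade task performance rather than intervening on every excursion.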

"Building a model is hard. You know you got it right when it predicts performance," says William Streilein, principal staff in the Lincoln Lab's Homeland Protection and Air Traffic Control Division. "We've done well if the system can identify a deviation, and then actually predict that the deviation is going to interfere with the person's performance on a task. Humans are complex; we compensate naturally to stress or fatigue. What's important is building a system that can predict when that deviation won't be compensated for, and to only intervene then."

The possibilities for interventions are wide-ranging. On one end of the spectrum are minor adjustments a human can make to restore performance: drink coffee, change the lighting, get fresh air. Other interventions could suggest a shift change or the transfer of a task to a machine or another teammate. Another possibility is transcranial direct current stimulation, a performance-restoring technique that uses electrodes to stimulate parts of the brain and has been shown to be more effective than caffeine in countering fatigue, with fewer side effects.

On the other end of the spectrum, the machine might take actions necessary to ensure the survival of the human team member when the human is incapable of doing so. For example, an AI teammate could make the "ejection decision" for a fighter pilot who has lost consciousness or the physical ability to eject themselves. Pietrucha, a retired colonel in the U.S. Air Force who has had many flight hours as a fighter/attack aviator, sees the promise of such a system that "goes beyond the mere analysis of flight parameters and includes analysis of the cognitive state of the aircrew, intervening only when the aircrew can't or won't," he says.

Determining the most helpful intervention, and its effectiveness, depends on a number of factors related to the task at hand, dosage of the intervention, and even a user's demographic background. "There's a lot of work to be done still in understanding the effects of different interventions and validating their safety," Streilein says. "Eventually, we want to introduce personalized cognitive interventions and assess their effectiveness on mission performance."

Beyond its use in combat aviation, the technology could benefit other demanding or dangerous jobs, such as those related to air traffic control, combat operations, disaster response, or emergency medicine. "There are scenarios where combat medics are vastly outnumbered, are in taxing situations, and are every bit as tired as everyone else. Having this kind of over-the-shoulder help, something to help monitor their mental status and fatigue, could help prevent medical errors or even alert others to their level of fatigue," Blackwell says.

Today, the team is pursuing sponsorship to help develop the technology further. The coming year will be focused on collecting data to train their algorithms. The first subjects will be intelligence analysts, outfitted with sensors as they play a serious game that simulates the demands of their job. "Intelligence analysts are often overwhelmed by data and could benefit from this type of system," Streilein says. "The fact that they usually do their job in a 'normal' room environment, on a computer, allows us to easily instrument them to collect physiological data and start training."

"We'll be working on a basis set of capabilities in the near term," Quatieri says, "but an ultimate goal would be to leverage those capabilities so that, while the system is still individualized, it could be a more turnkey capability that could be deployed widely, similar to how Siri, for example, is universal but adapts quickly to an individual." In the long view, the team sees the promise of a universal background model that could represent anyone and be adapted for a specific use.

Such a capability may be key to advancing human-machine teams of the future. As AI progresses to achieve more human-like capabilities, while being immune from the human condition of mental stress, it's possible that humans may present the greatest risk to mission success. An AI teammate may know just how to lift their partner up.


Building up its AI operations, GSK opens a $13M London hub with plans to woo talent now trekking to Silicon Valley – Endpoints News

Continuing its efforts to ramp up global AI operations, GlaxoSmithKline has opened a £10 million ($13 million-plus) research base in King's Cross, London.

The AI hotspot is already home to Google's DeepMind and the Francis Crick and Alan Turing research institutes. GSK said it hopes to tap into the huge London tech talent pool and attract candidates who might otherwise head to Silicon Valley.

"It's a vibrant ecosystem that has everything from outstanding medicine as well as being a big tech corridor. DeepMind is there. Google is there. It's near the Crick Institute, and of course modern computing was born, basically, with Alan Turing and the Turing Institute," GSK R&D president Hal Barron said at a London Tech Week fireside chat. "So we are quite convinced that both the talent and the ecosystem will enable us to build a very vibrant hub in London, getting the top talent, the best thinkers and people to be able to interact with us in GSK to take technology and help us turn it into medicines."

The company believes AI has the power to vastly improve its drug discovery process. It claims that genetically validated drugs are twice as likely to be successful. And GSK has lots of genetic data to work with. The new workspace, located in the Stanley Building, has already lured in 30 scientists, 10 of whom are in the company's AI fellow program.

In fact, many biotechs are now turning to AI, which they believe can speed up successful development by analyzing hundreds of genes at once or rapidly screening billions of molecules.

"GSK is focused on finding better medicines and vaccines, not just better products, but finding them in better ways, so we are using functional genomics, human genetics and artificial intelligence and machine learning," the company said in a statement.

It also has AI researchers based in San Francisco and Boston, and aims to reach 100 AI-focused employees by mid-2021. "Our goal is to have the best and brightest people in the world to join us," Barron said.

"In AI, we are scouring the planet for the best people. These folks are very rare to find. Competition is high and there aren't a large number of them," Tony Wood, GSK's SVP of medicinal science and technology, told The Guardian in December.

The new London hub has the capacity for 60 to 80 staff members. Now all that's left to do is fill it.

Continued here:

Building up its AI operations, GSK opens a $13M London hub with plans to woo talent now trekking to Silicon Valley - Endpoints News

How the Army plans to revolutionize tanks with artificial intelligence – C4ISRNet

Even as the U.S. Army attempts to integrate cutting edge technologies into its operations, many of its platforms remain fundamentally in the 20th century.

Take tanks, for example.

The way tank crews operate their machines has gone essentially unchanged over the last 40 years. At a time when the military is enamored with robotics, artificial intelligence, and next-generation networks, operating a tank still relies entirely on manual inputs from highly trained operators.

"Currently, tank crews use a very manual process to detect, identify and engage targets," explained Abrams master gunner Sgt. 1st Class Dustin Harris. "Tank commanders and gunners are manually slewing, trying to detect targets using their sensors. Once they come across a target they have to manually select the ammunition that they're going to use to service that target, lase the target to get an accurate range to it, and a few other factors."

The process has to be repeated for each target.

"That can take time," he added. "Everything is done manually still."

On the 21st century battlefield, it's an anachronism.


"Army senior leaders recognize that the way the crews in the tank operate is largely analogous to how these things were done 30, 45 years ago," said Richard Nabors, acting principal deputy for systems and modeling at the DEVCOM C5ISR Center.

"These senior leaders, many of them with extensive technical expertise, recognized that there were opportunities to improve the way that these crews operate," he added. "So they challenged the Combat Capabilities Development Command, the Armaments Center and the C5ISR Center to look at the problem."

On Oct. 28, the Army invited reporters to Aberdeen Proving Ground to see their solution: the Advanced Targeting and Lethality Aided System, or ATLAS.

ATLAS uses advanced sensors, machine learning algorithms and a new touchscreen display to automate the process of finding and firing targets, allowing crews to respond to threats faster than ever before.

"The assistance that we're providing to the soldiers will speed up those engagement times [and] allow them to execute multiple targets in the same time that they currently take to execute a single target," said Dawne Deaver, C5ISR project lead for ATLAS.

At first glance, the ATLAS prototype the Army had set up looked like something out of a Star Wars film, albeit with treads and not easily harpooned legs. The system was installed on a mishmash of platforms: a sleek black General Dynamics Griffin I chassis with the Army's Advanced Lethality and Accuracy System for Medium Caliber (ALAS-MC) auto-loading 50mm turret stacked on top.

And mounted on top of the turret was a small, round aided target recognition (AiTR) sensor, a mid-wave infrared imaging sensor to be more exact. Constantly rotating to scan the battlefield, the sensor almost had a life of its own, not unlike an R2 unit on the back of an X-Wing.

Trailing behind the tank and connected via a series of long black cables was a black M113. For this demonstration, the crew station was located inside the M113, not the tank itself. Cavernous compared to the inside of an Abrams tank, the M113 had three short seats lined up. At the forward-most seat were a touchscreen display and a video game-like controller for operating the tank, while further back computer monitors displayed ATLAS' internal processes.

Of course, ATLAS isn't the tank itself, or even the M113 connected to it. The chassis served as a surrogate for a future tank, fighting vehicle, or even a retrofit of current vehicles, while the turret is a separate program being developed by the Armaments Center. The M113 is not really meant to be involved at all; the Army decided to remotely locate the crew station inside it out of safety concerns for a live fire demonstration expected to take place in the coming weeks. ATLAS, Army officials reminded observers again and again, is agnostic to the chassis or turret it's installed on.

So if ATLAS isn't the tank, what is it?

Roughly speaking, ATLAS is the mounted sensor collecting data, the machine learning algorithm processing that data, and the display/controller that the crew uses to operate the tank.

Here's how it works:

ATLAS starts with the optical sensor mounted on top of the tank. Once activated, the sensor continuously scans the battlefield, feeding that data into a machine learning algorithm that automatically detects threats.

Images of those threats are then sent to a new touchscreen display, the graphical user interface for the tank's intelligent fire control system. The images are lined up vertically on the left side of the screen, with the main part of the display showing what the gun is currently aimed at. Around the edges are a number of different controls for selecting ammunition, fire type, camera settings, and more.

When the operator simply touches one of the targets on the left, the tank automatically swivels its gun, training its sights on the dead center of the selected object. As it does so, the fire control system automatically recommends the appropriate ammo and setting, such as burst or single shot, to respond with, though the user can adjust these as needed.

So with the target in its sights and the weapon selected, the operator has a choice: approve the AI's recommendations and pull the trigger, adjust the settings before responding, or disengage. The entire process, from target detection to the pull of the trigger, can take just seconds. Once the target is destroyed, the operator can simply touch the screen to select the next target picked up by ATLAS.
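
In software terms, the workflow just described is a detect, recommend, confirm pipeline with a human decision gating every shot. The sketch below is a hypothetical rendering of that control flow, not the Army's actual fire control code; the types, recommendation rules, and names are all invented.

```python
# Hypothetical detect/recommend/confirm flow in the style of ATLAS
# (illustrative only; not the Army's actual fire control software).
from dataclasses import dataclass

@dataclass
class Threat:
    kind: str          # e.g. "armor" or "infantry", from the detector
    range_m: float     # range reported by the sensor

def recommend(threat):
    """Stand-in for the fire control system's ammo/fire-mode suggestion."""
    if threat.kind == "armor":
        return {"ammo": "armor-piercing", "mode": "single"}
    return {"ammo": "high-explosive", "mode": "burst"}

def engage_queue(threats, operator_decision):
    for threat in threats:                   # targets queued by the detector
        plan = recommend(threat)
        decision = operator_decision(threat, plan)  # the human always decides
        if decision == "fire":
            print(f"firing {plan['ammo']} ({plan['mode']}) at {threat.kind}")
        elif decision == "adjust":
            print(f"operator overrides recommendation for {threat.kind}")
        else:
            print(f"operator disengages from {threat.kind}")

engage_queue([Threat("armor", 1200.0), Threat("infantry", 600.0)],
             lambda threat, plan: "fire")
```

The essential design choice, reflected in the loop above, is that the algorithm never fires on its own: it only queues targets and suggestions for the human operator.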

In automating what are now manual tasks, the aim of ATLAS is to reduce end-to-end engagement times. Army officials declined to characterize how much faster ATLAS is than a traditional tank crew. However, a demo video shown at Aberdeen Proving Ground claimed ATLAS allows the operator to engage three targets in the time it now takes to just engage one.

ATLAS is essentially a marriage between technologies developed by the Army's C5ISR Center and the Armaments Center.

"We are integrating, experimenting and prototyping with technology from the C5ISR Center, things like advanced EO/IR targeting sensors and aided target recognition algorithms. We're taking those technology products and integrating them with intelligent fire control systems from the Armaments Center to explore efficiencies between those technologies that can basically buy back time for tank crews," explained Ground Combat Systems Division deputy director Jami Davis.

Starting in August, the Army began bringing in small groups of tank operators to test out the new system, mostly using a new virtual reality setup that replicates the ATLAS display and controller. By gathering soldier feedback early, the Army hopes it can improve the system quickly and make it ready for fielding that much faster. Already, the Army has brought in 40 soldiers. More soldier touchpoints and a live fire demonstration are anticipated to help the Army mature the product.

In some ways, ATLAS replicates in miniature the AI capabilities demonstrated at Project Convergence, the Army's new campaign of learning, designed to integrate new sensor, AI, and network capabilities to transform the battlefield. In September, the Army hauled many of its most advanced technologies to the desert at Yuma Proving Ground, then tried to connect them in new ways. In short, at Project Convergence the Army tried to create an environment where it could connect any sensor to the best shooter.

The Army demonstrated two types of AI at Project Convergence. First were the automatic target recognition AIs: machine learning algorithms that processed the massive amount of data picked up by the Army's sensors to detect and identify threats on the battlefield, producing targeting data for weapon systems to utilize.

The second type of AI was used for fire control and is represented by FIRES Synchronization to Optimize Responses in Multi-Domain Operations, or FIRESTORM. Taking in the targeting data from the other AI systems, FIRESTORM automatically looks at the weapons at the Army's disposal and recommends the best one to respond to any given threat.

While ATLAS does not yet have the networking components that tied Project Convergence together across domains, it essentially performs those same two tasks: its AI automatically detects threats and recommends the best response to the human operators. Although the full ATLAS system wasn't hauled out to Project Convergence this year, the Army was able to bring the virtual prototyping setup to Yuma Proving Ground, and there is hope that ATLAS itself could be involved next year.

To be clear: ATLAS is not meant to replace tank crews. It's meant to make their jobs easier and, in the process, much faster. Even if ATLAS is widely adopted, crews will still need to be trained for manual operations in case the system breaks down. And they'll still need to rely on their training to verify the algorithm's recommendations.

"We can assist the soldier and reduce the number of manual tasks that they have to do while still retaining the soldiers' ability to always override the system, to always make the final decision of whether or not the target is a threat, whether or not the firing solution is correct, and that they can make that decision to pull the trigger and engage targets," explained Deaver.


Physicists Teach AI to Identify Exotic States of Matter | WIRED – WIRED


Put a tray of water in the freezer. For a while, it's liquid. And then, boom, the molecules stack into little hexagons, and you've got ice. Pour supercold liquid nitrogen onto a wafer of yttrium barium copper oxide, and suddenly electricity flows through the compound with less resistance than beer down a college student's throat. You've got a superconductor.

Those drastic alterations in physical properties are called phase transitions, and physicists love them. It's as if they could spot the exact instant Dr. Jekyll morphs into Mr. Hyde. If they could just figure out exactly how the upstanding doctor's body metabolized the secret formula, maybe physicists could understand how it turns him evil. Or make more Mr. Hydes.

A human physicist might never have the neural wetware to see a phase transition, but now computers can. In two papers published in Nature Physics today, two independent groups of physicists, one based at Canada's Perimeter Institute, the other at the Swiss Federal Institute of Technology in Zurich, show that they can train neural networks to look at snapshots of just hundreds of atoms and figure out what phase of matter they're in.

And it works pretty much like Facebook's auto-tags. "We kind of repurposed the technology they use for image recognition," says physicist Juan Carrasquilla, who co-authored the Canadian paper and now works for quantum computing company D-Wave.

Of course, facial recognition, water turning to ice, and Jekylls turning to Hydes aren't really the scientists' bag. They want to use artificial intelligence to understand fringey phenomena with potential commercial applications, like why some materials become superconductors only near absolute zero but others transition at a balmy -150 degrees Celsius. "The high-temperature superconductors that might be useful for technology, we actually understand them very poorly," says physicist Sebastian Huber, who co-wrote the Swiss paper.

They also want to better understand exotic phases of matter called topological states, in which quantum particles act even weirder than usual. (The physicists who discovered these new phases nabbed the Nobel Prize last October.) Quantum particles like photons or atoms change their physical states relatively easily, but topological states are sturdy. That means they might be useful for building data storage for quantum computers, if you were a company like, say, Microsoft.

The research was about more than identifying phases; it was about understanding transitions. The Canadian group trained their computer to find the temperature at which a phase transition occurred to 0.3 percent accuracy. The Swiss group showed an even trickier move, because they got their neural network to understand something without training it ahead of time. Typically in machine learning, you give the neural network a goal: Figure out what a dog looks like. "You train the network with 100,000 pictures," Huber says. "Whenever a dog is in one, you tell it. Whenever there isn't, you tell it."
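
Neither paper's code is shown here, but the general recipe can be sketched end to end on the 2D Ising model, a standard testbed for this kind of study: train a classifier on snapshots sampled well below and well above the known critical temperature, then estimate the transition where the predicted probability of the ordered phase crosses 0.5. Where the papers fed raw configurations to neural networks, this sketch uses a single hand-picked feature (absolute magnetization) for brevity:

```python
# Minimal sketch of the approach, not the papers' code: sample 2D Ising
# configurations with Metropolis Monte Carlo, train on clearly ordered vs.
# clearly disordered snapshots, then scan temperatures for the 0.5 crossing.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
L = 12  # lattice side; each snapshot is an L x L grid of +/-1 spins

def sample_ising(T, sweeps=60):
    """Equilibrate one configuration at temperature T; return |magnetization|."""
    s = rng.choice([-1, 1], size=(L, L))
    for _ in range(sweeps * L * L):
        i, j = rng.integers(L, size=2)
        nb = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
        dE = 2 * s[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] *= -1
    return abs(s.mean())  # hand-picked feature standing in for a neural net

# Label T=1.5 snapshots "ordered" (1) and T=3.5 snapshots "disordered" (0).
X = [[sample_ising(1.5)] for _ in range(20)] + [[sample_ising(3.5)] for _ in range(20)]
y = [1] * 20 + [0] * 20
clf = LogisticRegression().fit(X, y)

# The 0.5 crossing approximates the critical temperature, which is exactly
# 2 / ln(1 + sqrt(2)) ~ 2.27 for the infinite 2D lattice.
for T in np.arange(1.8, 3.01, 0.2):
    p = np.mean([clf.predict_proba([[sample_ising(T)]])[0, 1] for _ in range(5)])
    print(f"T={T:.1f}  P(ordered)={p:.2f}")
```

On a small lattice the crossing lands near, not exactly at, the textbook value; the papers' neural networks learned equivalent order-parameter-like features directly from the raw spins.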

But the physicists didn't tell their network about phase transitions at all: They just showed the network collections of particles. The phases were different enough that the computer could identify each one. That's a level of skill acquisition that Huber thinks will eventually allow neural networks to discover entirely new phases of matter.

These new successes aren't just academic. In the hunt for stronger, cheaper, or otherwise better materials, researchers have been using machine learning for a while. In 2004, a collaboration that included NASA and GE developed a strong, durable alloy for aircraft engines using neural networks by simulating the materials before troubleshooting them in the lab. And machine learning is way faster than, say, simulating the properties of a material on a supercomputer.

Still, the phase transition simulations that the physicists studied were simple compared to the real world. Before these speculative materials end up in your new gadgets, the physicists will need to figure out how to make neural networks parse 10²³ particles at a time: not just hundreds, but 100 sextillion. But Carrasquilla already wants to show real experimental data to his neural network, to see if it can find phase changes. The computer of the future might be smart enough to tag your grandma's face in photos, and discover the next wonder material.

Read this article:

Physicists Teach AI to Identify Exotic States of Matter | WIRED - WIRED

Google Has Started Adding Imagination to Its DeepMind AI – Futurism

Researchers have started developing artificial intelligence with imagination: AI that can reason through decisions and make plans for the future, without being bound by human instructions.

Another way to put it would be imagining the consequences of actions before taking them, something we take for granted but which is much harder for robots to do.

The team working at Google-owned lab DeepMind says this ability is going to be crucial in developing AI algorithms for the future, allowing systems to better adapt to changing conditions that they haven't been specifically programmed for. Insert your usual fears of a robot uprising here.

"When placing a glass on the edge of a table, for example, we will likely pause to consider how stable it is and whether it might fall," explain the researchers in a blog post. "On the basis of that imagined consequence we might readjust the glass to prevent it from falling and breaking."

"If our algorithms are to develop equally sophisticated behaviours, they too must have the capability to imagine and reason about the future. Beyond that they must be able to construct a plan using this knowledge."

We've already seen a version of this forward planning in the Go victories that DeepMind's bots have scored over human opponents recently, as the AI works out the future outcomes that will result from its current actions.

The rules of the real world are much more varied and complex than the rules of Go, though, which is why the team has been working on a system that operates on another level.

To do this, the researchers combined several existing AI approaches, including reinforcement learning (learning through trial and error) and deep learning (learning through processing vast amounts of data in a similar way to the human brain).

What they ended up with is a system that mixes trial and error with simulation capabilities, so bots can learn about their environment, then think before they act.
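
DeepMind's imagination-augmented agents are considerably more elaborate, but the core loop (simulate each candidate action with an internal model before committing to one) can be shown in a toy example. Everything below, the one-dimensional world, rewards, and rollout depth, is invented for illustration:

```python
# Toy illustration of "imagining before acting", not DeepMind's architecture:
# the agent rolls an internal model forward for each candidate action and
# picks the action whose imagined outcomes score best on average.
import random

ACTIONS = (-1, +1)          # step left or step right
GOAL, CLIFF = 5, -2         # reach GOAL; falling to CLIFF ends badly

def model(state, action):
    """The agent's internal model of how the world responds (here: perfect)."""
    return state + action

def reward(state):
    if state >= GOAL:
        return 10.0
    if state <= CLIFF:
        return -10.0
    return -0.1             # small per-step cost encourages short paths

def imagine(state, first_action, depth=4):
    """Imagine taking first_action, then a few random follow-up actions."""
    total, s = 0.0, state
    for a in [first_action] + random.choices(ACTIONS, k=depth - 1):
        s = model(s, a)
        total += reward(s)
        if s >= GOAL or s <= CLIFF:
            break           # the imagined episode ends here
    return total

def act(state, n_rollouts=20):
    """Pick the action whose imagined rollouts score best on average."""
    avg = {a: sum(imagine(state, a) for _ in range(n_rollouts)) / n_rollouts
           for a in ACTIONS}
    return max(avg, key=avg.get)

s = 0
while CLIFF < s < GOAL:     # imagine, act, repeat until the episode ends
    s = model(s, act(s))
print("reached the", "goal" if s >= GOAL else "cliff")
```

The real agents learn the environment model from experience rather than being handed a perfect one, which is precisely what lets them cope with imperfect simulations.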

One of the ways they tested the new algorithms was with a 1980s video game called Sokoban, in which players have to push crates around to solve puzzles. Some moves can make the level unsolvable, so advanced planning is needed, and the AI wasn't given the rules of the game beforehand.

The researchers found their new imaginative AI solved 85 percent of the levels it was given, compared with 60 percent for AI agents using older approaches.

"The imagination-augmented agents outperform the imagination-less baselines considerably," say the researchers. "They learn with less experience and are able to deal with the imperfections in modelling the environment."

The team noted a number of improvements in the new bots: they could handle gaps in their knowledge better, they were better at picking out useful information for their simulations, and they could learn different strategies to make plans with.

It's not just advance planning; it's advance planning with extra creativity, so potential future actions can be combined together or mixed up in different ways in order to identify the most promising routes forward.

Despite the success of DeepMind's testing, it's still early days for the technology, and these games are still a long way from representing the complexity of the real world. Still, it's a promising start in developing AI that won't put a glass of water on a table if it's likely to spill over, plus all kinds of other, more useful scenarios.

"Further analysis and consideration is required to provide scalable solutions to rich model-based agents that can use their imaginations to reason about and plan for the future," conclude the researchers.

You can read the two papers published to the pre-print website arXiv.org here and here.

Read this article:

Google Has Started Adding Imagination to Its DeepMind AI - Futurism

Delving Into the Weaponization of AI – Infosecurity Magazine

Digital transformation continues to multiply the potential attack surface exponentially, bringing new opportunities for the cyber-criminal community. In addition to their expanding arsenal of sophisticated malware and zero-day threats, AI and machine learning are new tools being added to their toolbox. To the surprise of almost no one, AI is being weaponized by cyber adversaries.

Leveraging AI and automation enables bad actors to commit more attacks at a faster rate, which means security teams will have to quicken their own pace to keep up. Adding fuel to the fire, this is happening in real time, and we're seeing rapid development, so there is little time for deciding whether to deploy your own AI countermeasures.

AI offers cyber actors more bang for the buck

Just like their victims, cyber actors are subject to economic realities: zero-day threats can cost upwards of six figures to identify and exploit, and developing new threats and malware takes time and can be expensive, as can renting Malware-as-a-Service tools off the dark web. Like anyone else, they are looking to get the most bang for their buck; that means getting the most ROI with the least amount of overhead expenditure, including money, time, and effort, while maximizing the efficiency and efficacy of the tools they're using.

Using AI and ML enables cyber-criminals to create malware that can seek out vulnerabilities on its own and then autonomously determine which payloads will be the most successful, all without exposing itself through constant communication back to its C2 server.

We have already seen multi-vector attacks combined with advanced persistent threats (APTs) or an array of payloads. AI accelerates the effectiveness of these tools by autonomously learning about targeted systems so attacks can be laser-focused rather than taking the usual slower, scattershot approach that can alert a victim that they are under attack.

AI reduces time to breach

We can all expect attacks to become faster than ever before, especially as technologies such as 5G connections are added to networks. 5G also enables edge devices to communicate faster, creating ad hoc networks that are harder to secure and easier to exploit. This can lead to swarm-based attacks where individual elements perform a specific function as part of a larger, coordinated attack.

When you incorporate AI into a network of connected devices that can communicate at 5G speeds, you create a scenario where those devices can not only launch an attack on their own, but customize that attack at digital speeds based on what they learn during the attack process.

With swarm technology, intelligent swarms of bots can share information and learn from each other in real time. By incorporating self-learning technologies, cyber-criminals can create attacks capable of quickly assessing vulnerabilities and then applying methods to counter efforts to stop them.

AI-based cyber-attacks will be more affordable

Traditional cyber weapons are complex for humans to build. Because of this, they can sell for a lot of money on the dark web. With AI in place, bad actors will be able to build weapons far more quickly, in greater quantity, and with more flexibility than ever before.

This will decrease their black market value, while at the same time, these AI-based weapons will be more plentiful and readily available to a greater number of people. In the age-old battle of quality versus quantity, threat actors will no longer need to choose: quantity will increase while quality will improve as well.

AI is AI's greatest enemy

Solutions that use AI-based strategies are the only effective defense against AI-enhanced attack strategies. However, AI takes time, often years, and specialized skills to develop and train; it is far more than the specialized scripts many vendors label as AI. Because not everyone understands what goes into a legitimate AI solution, enterprises looking to fight fire with fire can be left in a quandary as to which solutions they should select.

This decision is critical, as future cyber battles may evolve into Flash Wars where interactions between defensive and adversarial AI systems become so fast that the attack/defense cycle is over in microseconds. Like traditional stock traders trying to compete against systems that can bid for stocks using algorithms and AI/ML models, network security professionals do not want to have to compete without having the right tools in place.

Preparing now for the coming challenges

Swarm-based network attacks are still likely a couple of years away, but the impact of AI-enhanced threats is right around the corner. Enterprises need to start preparing now for this reality, and it starts with basic cybersecurity hygiene. This is about more than just having a patching and updating program in place; it also includes having proper security architectures and segmentation in place to reduce a company's attack surface and prevent hackers from gaining access to the wider system.

Collaboration is another key component in combating the weaponization of AI. Security solutions need to be able to see and share threat intelligence, and participate in a unified and coordinated response to a detected threat, even across different network ecosystems such as multi-cloud environments.

Deception is another important tool to add to your arsenal, one that will increase in importance as attacks become faster and more sophisticated. It's essentially counterintelligence: deploying decoys across the network to lure in attackers and unmask them, because they're unable to tell which assets are real and which are fake.

AI gives security teams the upper hand in the cyber arms race

As threat actors gain faster and more intelligent attack resources, security teams will have to respond with even greater speed and intelligence. Humans alone cannot respond to these coming threats, and neither can the traditional security solutions they have in place. Instead, defensive strategies will have to incorporate advanced automation technology, including ML and AI.
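
What that automation looks like varies by vendor, but one common machine-learning building block on the defensive side is unsupervised anomaly detection over network telemetry. A minimal sketch, with invented features and synthetic data:

```python
# Minimal sketch of ML-assisted defense: flag network flows that look nothing
# like the baseline. Feature names and all numbers here are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Features per flow: [bytes_sent, duration_s, distinct_ports_touched]
normal = np.column_stack([
    rng.normal(5_000, 1_500, 500),   # typical transfer sizes
    rng.normal(30, 10, 500),         # typical session lengths
    rng.poisson(2, 500),             # few ports touched per flow
])
scanner = np.array([[800, 2, 250]])  # fast, tiny flows hitting many ports

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(scanner))  # -1 flags the port-scan-like flow as anomalous
```

A production system would feed a detector like this continuously and wire its verdicts into automated containment, which is where the speed advantage over manual triage comes from.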

Ultimately, enterprises have far more resources available to them than cyber-criminals do. Teams that can incorporate technologies like machine learning and AI into their cyber defenses will be able to build the quintessential security system that will not only enable them to survive but, for the first time ever, gain the upper hand in the escalating cyber war.

Go here to see the original:

Delving Into the Weaponization of AI - Infosecurity Magazine

What investment trends reveal about the global AI landscape – Brookings Institution

"We aren't what we were in the '50s and '60s and '70s," former Secretary of Defense Ash Carter recently reflected. "In those days, all technology of consequence for protecting our people, and all technology of any consequence at all, came from the United States and came from within the walls of government. Those days are irrevocably lost. To get that technology now, I've got to go outside the Pentagon no matter what," Carter added.

The former Pentagon chief may be overstating the case, but when it comes to artificial intelligence, there's no doubt that the private sector is in command. Around the world, nations and their governments rely on private companies to build their AI software, furnish their AI talent, and produce the AI advances that underpin economic and military competitiveness. The United States is no exception.

With Big Tech's titans and endless machine-learning startups racing ahead on AI, it's easy to imagine that the public sector has little to contribute. But the federal government's choices on R&D policy, immigration, antitrust, and government contracting could spell the difference between growth and stagnation for America's AI industry in the coming years. Meanwhile, as AI booms in other countries, diplomacy and trade policy can help the United States and its private sector take greatest advantage of advances abroad, and protective measures against industrial espionage and unfair competition can help keep America ahead of its adversaries.

Smart policy starts with situational awareness. To achieve the outcomes they intend and avoid unwanted distortions and side effects in the market, American policymakers need to understand where commercial AI activity takes place, who funds it and carries it out, which real-world problems AI companies are trying to solve, and how these facets are changing over time. Our latest research focuses on venture capital, private equity, and M&A deals from 2015 through 2019, a period of rapid growth and differentiation for the global AI industry.

Although the COVID-19 pandemic has since disrupted the market, with implications for AI that are still unfolding, studying this period helps us understand the foundations of today's AI sector, and where it may be headed.

America leads, but doesn't dominate

Contrary to narratives that Beijing is outpacing Washington in this field, the United States remains the leading destination for global AI investments. China is making meaningful investments in AI, but in a diverse, global playing field it is one player among many.

As of the end of 2019, the United States had the world's largest investment market in privately held AI companies, including startups as well as large companies that aren't traded on stock exchanges. We estimate AI companies attracted nearly $40 billion globally in disclosed investment in 2019 alone, as shown in Figure 1. American companies attracted the lion's share of that investment: $25.2 billion in disclosed value (64% of the global total) across 1,412 transactions. (These disclosed totals significantly understate U.S. and global investment, since many deals and deal values are undisclosed, so total transaction values were probably much higher.)
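
The quoted U.S. share is just the disclosed American value divided by the disclosed global total. A quick sanity check, using $39.6 billion as a stand-in for the article's "nearly $40 billion":

```python
# Sanity-checking the quoted 64% share. The global figure is an assumption
# consistent with the article's "nearly $40 billion"; the U.S. figure is quoted.
us_disclosed_b = 25.2
global_disclosed_b = 39.6
print(f"{us_disclosed_b / global_disclosed_b:.0%}")  # -> 64%
```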

Around the world, private-market AI investment grew tremendously from 2015 to 2019, especially outside China. Notwithstanding occasional claims in the media that China is outstripping U.S. investment in AI, we find that Chinese investment levels in fact continue to lag behind the United States. Consistent with broader trends in China's tech sector, the Chinese AI market saw a dramatic boom from 2015 to 2017, prompting many of those media claims. But over the following two years investment sharply declined, resulting in little net growth in the annual level of investment from 2015 to 2019.

Figure 1: Total disclosed value of equity investments in privately held AI companies, by target region

Although America's nearest rival for AI supremacy may not have taken the lead, our data suggest the United States shouldn't grow complacent. America's AI companies remain ahead in overall transaction value, but they account for a steadily shrinking percentage of global transactions. And by our estimates, investment outside the United States and China is quickly expanding, with Israel, India, Japan, Singapore, and many European countries growing faster than their larger competitors by some or all metrics.

Figure 2: Investment activity and growth in the top 10 target countries (ranked by disclosed value)

Chinese investors play a meaningful but limited role

China's investments abroad are attracting mounting scrutiny, but in the American AI investment market, Chinese investors are relatively minor players. In 2019, we estimate that disclosed Chinese investors participated in 2% of investments into American AI companies, down from a peak of only 5% in 2016. As Figure 3 makes clear, the Chinese investors in our dataset generally seem to invest in Chinese AI companies instead.

Figure 3: Investment events with at least one Chinese investor participant, by target region

There was also little evidence in our data that disclosed Chinese investors seek out especially sensitive companies or technologies, such as defense-related AI, when they invest outside China. That said, our data are limited; some Chinese investors may be undisclosed or operate through foreign subsidiaries that obscure their interests. And aggregate trends are of course only one part of the picture. Some China-based investors clearly invest abroad in order to extract security-sensitive information or technology. These efforts deserve scrutiny. But overall, it seems that disclosed Chinese investors, and any bad actors among them, are a relatively small piece of a larger and more diverse AI investment market.

Few AI companies focus on public-sector needs

When it comes to specific applications, we found that most AI companies are focused on transportation, business services, or general-purpose applications. There are some differences across borders: Compared to the rest of the world, investment into Chinese AI companies is concentrated in transportation, security and biometrics (including facial recognition), and arts and leisure, while in the United States and other countries, companies focused on business uses, general-purpose applications, and medicine and life sciences attract more capital.

Across all countries, though, relatively few private-market investments seem to be flowing to companies that focus squarely on military and government AI applications. Even the related category of security and biometrics is relatively small, though materially larger in China. Governments can and do adapt commercial AI tools for their own purposes, but for the time being, relatively few AI startups seem to be working and raising funds with public-sector clients in mind, especially outside China.

Figure 4: Regional investment targets by application area

The bottom line on global AI

The world's AI landscape is changing fast, and a plethora of unpredictable geopolitical factors, from U.S.-China decoupling to COVID-related disruptions, counsel against confident claims about where the global AI landscape is headed next. Still, our estimates of investment around the world point to fundamental, longer-term trends unlikely to vanish anytime soon. These trends have important implications for policy:

Go here to read the rest:

What investment trends reveal about the global AI landscape - Brookings Institution

This AI Will Tell You Who The Next Great Football Player Will Be – Interesting Engineering

Computer scientists at Loughborough University have engineered artificial intelligence (AI) algorithms that can analyze football (that's soccer, for you fellow Americans) players' abilities on the field. Dr. Baihua Li, the project lead, says the novel technology could revolutionize the sport by effectively enabling teams to properly identify the right talent to recruit.

Currently, player performance analysis is a long and labor-intensive process that sees an individual watch many video recordings of a player's performances. This process is time-consuming and could be faulty, as it relies on human judgment, which is often influenced by bias.

Although some automated technologies exist today, they are only able to track players on the pitch. To resolve this issue, Li and her team developed a hybrid system where human data entry can be supplemented by camera-based automated methods.

The team has made use of the latest advances in computer vision, deep learning, and AI to achieve three outcomes:

1. Detecting body pose and limbs to identify actions

2. Tracking players to get individual performance data

3. Camera stitching: using two low-cost, consumer-grade cameras (such as GoPros), each recording half of the football field, to produce a full picture (a minimal sketch of this step follows below)
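
As an illustration of the third step, OpenCV ships a high-level stitcher that performs this kind of two-view merge. This is only a sketch of the general technique, not Loughborough's actual pipeline, and the file names are placeholders:

```python
# Minimal two-view stitching sketch with OpenCV's high-level Stitcher.
# The two views must overlap somewhat for feature matching to succeed.
import cv2

left = cv2.imread("gopro_left_half.jpg")    # hypothetical frame, left half
right = cv2.imread("gopro_right_half.jpg")  # hypothetical frame, right half

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch([left, right])

if status == cv2.Stitcher_OK:
    cv2.imwrite("full_pitch.jpg", panorama)  # one full-field view
else:
    print("stitching failed with status", status)
```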

Li believes her new system will aid in getting the data needed for accurate player performance analysis and talent identification. There is also the potential to adapt the technology to be used in other sports.

"Performance data and match analysis in football is an essential part of the sport and can have a huge impact on the player and team performance.

"The developed technology will allow a much greater objective interpretation of the game as it highlights the skills of players and team cooperation.

"This innovation will have a positive impact on the football industry and further advance sports technology while providing value to the players, coaches, and recruiters that use the data," Li concluded.

Here is the original post:

This AI Will Tell You Who The Next Great Football Player Will Be - Interesting Engineering

Column: AI 'magic' helps fill in the missing pixels – RiverTowns

We've all seen it on police procedurals. They are trying to find a criminal, but all they have are grainy images of the suspect, so they just "sharpen" the image and, like magic, a photo of a person becomes sharp enough to identify them.

I said it's like magic because it is. There's currently no possible way to take an image that is grainy and make it sharper while also having it be accurate.

The reason a photo is grainy is that there's not enough information, and there are several ways you can end up with one. There might not have been enough light to clearly make out all aspects of the photo. The camera that took it might not have had very high resolution. The photo may have been taken using digital zoom, or the photo itself may have been downscaled from the original.

When you use the camera on your phone, you're taking a photo that will be saved with a certain number of pixels. Most phones these days save 12-megapixel (that's 12 million pixels) photos.

Phones with more than one lens have the ability to do both optical and digital zoom. With optical zoom, the camera switches to a different lens: the optics change to zoom further in. You can also see this happen on a camera with an expensive lens, where the lens itself extends further out to zoom in on the subject. Digital zoom is different: it simply crops the photo you're about to take.

I know that 12 million pixels sounds like a lot, but it doesn't take much digital zoom to lose half of the pixels that would be used for the photo. Think of it like cropping a photo and then blowing the smaller cropped version back up to the original size. You now have a lot fewer pixels to fill the same amount of space. That's why when you zoom in really far with digital zoom, the photo ends up looking rather grainy; there just aren't enough pixels available anymore to make the image appear sharp.
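
That crop-then-enlarge effect is easy to demonstrate. A minimal sketch with Pillow, using a placeholder file name:

```python
# Why digital zoom looks grainy: crop a region (throwing pixels away) and
# stretch it back to the original size. The input file name is a placeholder.
from PIL import Image

photo = Image.open("photo_4000x3000.jpg")   # a 12 MP image: 4000 x 3000
w, h = photo.size

# "2x digital zoom": keep only the central quarter of the pixels...
crop = photo.crop((w // 4, h // 4, 3 * w // 4, 3 * h // 4))

# ...then spread those remaining pixels back over the full frame. Each
# surviving pixel now covers four pixels' worth of space, hence the grain.
zoomed = crop.resize((w, h))
zoomed.save("digital_zoom_2x.jpg")
```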

Researchers have decided to try to tackle this problem, but the results aren't what you might think (or hope). Researchers at Duke University have used artificial intelligence to take grainy photos and turn them into photos of a realistic person. The results are impressive: they took photos so grainy that the eyes were represented by 2 pixels (two squares that are slightly different colors) and turned them into a realistic person. It's important to note that this is just a representation of a realistic person, not necessarily the exact person from the grainy photo.

But the less grainy the photo, the more accurate the AI gets. The authors themselves took their own portrait and downscaled it (though not as badly as the previous example) and then used their AI to recreate an image that has a passing resemblance to the original photo.

That last part is important, because no one is going to realistically try to figure out what a person looks like from a photo so grainy that the eyes are represented by two squares that differ slightly in color. But slightly grainy photos that can be sharpened to bring back something that looks like the original are useful for lots of people. If you took a photo of a loved one with an old phone that didn't have a good camera, there's a chance you could sharpen it with this AI.

As the world of photo AI improves, my hope is that we'll have apps available to us that we can use to make any photo we take (or took a long time ago) look better.

See the rest here:

Column: AI 'magic' helps fill in the missing pixels - RiverTowns

Evil AI: These are the 20 most dangerous crimes that artificial intelligence will create – ZDNet

From targeted phishing campaigns to new stalking methods: there are plenty of ways that artificial intelligence could be used to cause harm if it fell into the wrong hands. A team of researchers decided to rank the potential criminal applications that AI will have in the next 15 years, starting with those we should worry the most about. At the top of the list of most serious threats? Deepfakes.

"By using fake audio and video to impersonate another person, the technology can cause various types of harm," said the researchers. The threats range from discrediting public figures in order to influence public opinion, to extorting funds by impersonating someone's child or relatives over a video call.

The ranking was put together after scientists from University College London (UCL) compiled a list of 20 AI-enabled crimes based on academic papers, news and popular culture, and got a few dozen experts to discuss the severity of each threat during a two-day seminar.

The participants were asked to rank the list in order of concern, based on four criteria: the harm a crime could cause, the potential for criminal profit or gain, how easily it could be carried out, and how difficult it would be to stop.
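
The study's ordering came from expert judgment, not a formula, but purely as an illustration, here is one way scores on those four criteria could be combined into a sortable concern value (the threat names and numbers are hypothetical):

```python
# Illustration only: the UCL ranking came from expert deliberation, not a
# scoring function. Threats and all 1-5 scores below are invented.
threats = {
    # name: (harm, criminal gain, ease of execution, difficulty to stop)
    "deepfakes":          (5, 4, 4, 5),
    "driverless_weapons": (5, 2, 3, 3),
    "burglar_bots":       (2, 2, 3, 1),
}

def concern(scores, weights=(1.0, 1.0, 1.0, 1.0)):
    """Weighted sum over the four criteria; equal weights by default."""
    return sum(s * w for s, w in zip(scores, weights))

for name, scores in sorted(threats.items(), key=lambda kv: -concern(kv[1])):
    print(f"{name:18} concern={concern(scores):.1f}")
```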

Although deepfakes might in principle sound less worrying than, say, killer robots, the technology is capable of causing a lot of harm very easily, and is hard to detect and stop. Relative to other AI-enabled tools, therefore, the experts established that deepfakes are the most serious threat out there.

There are already examples of fake content undermining democracy in some countries: in the US, for example, a doctored video of House Speaker Nancy Pelosi, in which she appeared inebriated, picked up more than 2.5 million views on Facebook last year.

UK organization Future Advocacy similarly used AI to create a fake video during the 2019 general election, which showed Boris Johnson and Jeremy Corbyn endorsing each other for prime minister. Although the video was not malicious, it stressed the potential of deepfakes to impact national politics.

The UCL researchers said that as deepfakes get more sophisticated and credible, they will only get harder to defeat. While some algorithms are already successfully identifying deepfakes online, there are many uncontrolled routes for modified material to spread. Eventually, warned the researchers, this will lead to widespread distrust of audio and visual content.

Five other applications of AI also made it to the "highly worrying" category. With autonomous cars just around the corner, driverless vehicles were identified as a realistic delivery mechanism for explosives, or even as weapons of terror in their own right. Equally achievable is the use of AI to author fake news: the technology already exists, stressed the report, and the societal impact of propaganda shouldn't be under-estimated.

Also keeping AI experts up at night are applications that will be so pervasive that defeating them will be near impossible. This is the case for AI-infused phishing attacks, for example, which will be perpetrated via crafty messages that will be impossible to distinguish from reality. Another example is large-scale blackmail, enabled by AI's potential to harvest large personal datasets and information from social media.

Finally, participants pointed to the multiplication of AI systems used for key applications like public safety or financial transactions and to the many opportunities for attack they represent. Disrupting such AI-controlled systems, for criminal or terror motives, could result in widespread power failures, breakdown of food logistics, and overall country-wide chaos.

UCL's researchers labelled some of the other crimes that could be perpetrated with the help of AI as only "moderately concerning". Among them feature the sale of fraudulent "snake-oil" AI for popular services like lie detection or security screening, or increasingly sophisticated learning-based cyberattacks, in which AI could easily probe the weaknesses of many systems.

Several of the crimes cited could arguably be seen as a reason for high concern. For example, the misuse of military robots, or the deliberate manipulation of databases to introduce bias, were both cited as only moderately worrying.

The researchers argued, however, that such applications seem too difficult to push at scale in current times, or could be easily managed, and therefore do not represent as imminent a danger.

At the bottom of the threat hierarchy, the researchers listed some "low-concern" applications: the petty crime of AI, if you will. On top of fake reviews or fake art, the report also mentions burglar bots, small devices that could sneak into homes through letterboxes or cat flaps to relay information to a third party.

Burglar bots might sound creepy, but they could be easily defeated (in fact, they could pretty much be stopped by a letterbox cage) and they couldn't scale. As such, the researchers don't expect that they will cause huge trouble anytime soon. The real danger, according to the report, lies in criminal applications of AI that could be easily shared and repeated once they are developed.

UCL's Matthew Caldwell, first author of the report, said: "Unlike many traditional crimes, crimes in the digital realm can be easily shared, repeated, and even sold, allowing criminal techniques to be marketed and for crime to be provided as a service. This means criminals may be able to outsource the more challenging aspects of their AI-based crime."

The marketisation of AI-enabled crime, therefore, might be just around the corner. Caldwell and his team anticipate the advent of "Crime as a Service" (CaaS), which would work hand-in-hand with Denial of Service (DoS) attacks.

And some of these crimes will have deeper ramifications than others. Here is the complete ranking of AI-enabled crimes to look out for, as compiled by UCL's researchers:

Here is the original post:

Evil AI: These are the 20 most dangerous crimes that artificial intelligence will create - ZDNet

Forget the SciFi-like doomsday myths – AI can benefit society – City A.M.

Aversions towards new technologies have been common throughout history, and artificial intelligence (AI) is no different.

Earlier this month, Elon Musk called AI "a fundamental risk to human civilisation".

The Tesla founder and renowned technologist was joining the ranks of those who think that the rise of AI could bring about a dystopian science fiction scenario: machines taking our jobs and out-of-control robots making their own rules.

Although most of the potential benefits are yet to be realised, this year AI has truly become a mainstream topic. Every industry is exploring ways in which it can improve traditional practices, from autonomous cars to algorithmic trading. Despite this, conversations seem to be unduly focused on fear about how these developments will impact our society.

As a result, we risk allowing this fear-based narrative to overshadow public opinion. We need to ensure there is balanced discussion of the risks and benefits, including the positive impact AI could have on both the labour market and society.

If implemented responsibly and proactively, there is huge scope for AI to revolutionise the way in which we live and work, without bringing about a SciFi-like doomsday.

A recent PwC report estimated that AI will add up to $15.7 trillion to the world economy by 2030. We are talking about a huge range of fields here, many of which are not what people immediately visualise when they think about robots in the workplace.

Few would argue that using intelligent machines to increase yields on understaffed farms is a bad thing. Moreover, these robots will have to be built and maintained, and the additional crops create more human work further down the production line so AI will facilitate the creation of new jobs that would not have otherwise existed.

As the co-founder of a healthtech startup, I believe AI has huge potential to improve society if used to provide tailored information and support to both patients and doctors. AI in healthcare doesn't necessarily mean fully robotic doctors. At Ada we combine deep medical knowledge with smart reasoning, to guide people towards the right information and support them in choosing what to do next.

We also link people with doctors for remote advice and care, with those interactions used to further train and improve the automated support.

By assisting healthcare professionals, AI can accelerate and improve the industry, enabling the provision of a better and more personalised service. For example, within the NHS, AI could help relieve pressure-points and carry out time-consuming, repetitive tasks, leaving doctors free to spend more time with patients and focus on prevention.

Humans should remain in charge of the decision-making process, but machines will deliver insights to help inform those decisions.

With technology enabling faster data analysis, smarter decision-making, and a decrease in time spent on straightforward tasks, clinicians will have the breathing space to focus more fully on the elements of care where they can uniquely deliver value.

They can provide a more personalised, attentive and thorough service, delivering empathy and reassurance, and harnessing technology to push the boundaries of medical knowledge.

It is important we develop strategies around AI that put people first. Policymakers and regulators have an obligation to make sure that any new technology is used responsibly, and that intelligent machines are implemented in a way that maximises the benefits to society.

The realities of AI are far less frightening than some of the headlines might lead us to believe. Harnessing the full potential of AI will enable us to improve industries on an entirely new level. While it is always important to assess and manage risk when implementing any new technology, we should look beyond the myths and fears, and base decisions on the real-world benefits that AI can deliver.

Go here to see the original:

Forget the SciFi-like doomsday myths – AI can benefit society - City A.M.

56% of marketers think AI will negatively impact branding in 2020, study says – Marketing Dive

Dive Brief:

As the use of AI expands into a growing array of marketing functions, Bynder's study suggests marketers are concerned with how the technology will impact creativity and branding. Brand building is a top priority for marketers in 2020 following a period when many turned their focus to driving short-term performance lifts.

However, marketers' concerns over automation do not seem to be impacting investments, as most are still ramping up their tech stack and partnerships with martech companies.

"Marketing organizations readily adopted technology for analytics, digital channels and other functions that clearly benefit from automation, said Andrew Hally, SVP of global marketing at Bynder, in a statement. "The challenge ahead is to harness emerging technologies like AI to maintain creative excellence while satisfying business demand for growing volumes and faster delivery."

The Bynder report follows a December study by the Advertising Research Foundation that highlighted how different approaches to data cause tension on marketing teams. That report revealed that the way researchers and creatives or strategists approach research and data is preventing creative efforts from reaching their full potential. Only 65% of creatives and strategists believe research and data are important for the creative process, while 84% of researchers found them to be key, according to the report. These varying perspectives illustrate that technology can cause issues among marketing teams, despite being foundational to modern-day marketing.

Read more here:

56% of marketers think AI will negatively impact branding in 2020, study says - Marketing Dive

Here are 3 ways AI could help you commute safely after COVID-19 – World Economic Forum

As cities around the world are emerging from lockdowns to stop the spread of COVID-19, public transport companies are facing new challenges. They will have to avoid overcrowding buses and trains to reduce the risk of coronavirus transmission, while ensuring that overall passenger numbers are high enough to sustain the system. Meanwhile, commuters are tentatively returning to public transport, but will only embrace it widely if they see it as a safe, fast and convenient way of reaching their destinations.

Here are three ways cutting-edge technologies such as artificial intelligence can help us all travel at ease, by crunching huge amounts of data, devising optimal schedules and journeys and adapting them to the rapidly evolving situation:

Overcrowding is known to pose a major transmission risk for COVID-19 and other diseases. If countries want to avoid a second wave of infections, one approach is to flatten the typical morning and afternoon peaks in passenger numbers. At least in the medium term, rush hour is likely to be replaced by an even spread of passengers over the course of the day.

Countries around the world are already implementing or preparing for staggered work shifts and school schedules. This helps prevent overcrowding in offices and classrooms, and also spreads out commutes. For public transport companies, this can mean putting on relatively frequent trains and buses all day long, rather than running back-to-back transport during rush hour and more infrequent service during quieter times.

Social distancing and public transport

Image: Optibus

An all-day service has several benefits. It reduces passenger density and facilitates social distancing. It also means drivers are less likely to have to split their shifts and work during busy mornings and afternoons, with sometimes inconvenient off-time in between. Passengers benefit, too, because they can rely on fairly frequent trains and buses all day long.

However, this new model also comes with challenges. Any changes in COVID-19 infection numbers, including local outbreaks and surges, are likely to affect ridership demand. This can happen so quickly that transport providers won't have much time to prepare and adjust. The only way to deal with the uncertainty is to have the flexibility and technological capability to react within days or even hours, and implement the kinds of schedule changes that would have previously taken months of preparation.

This is where artificial intelligence comes in. Transport planning involves vast amounts of data. The number of drivers on duty, the level of passenger demand, the number of available buses and trains, as well as rules such as the maximum hours drivers can work between breaks, and the length of each break, are just some of the many different factors that need to be taken into account.

With the help of algorithms, transit officials can easily create different scenarios based on changes in any of these factors. They can enter changes to the routes and travel times and see the schedule update automatically. They can also handily compare the costs and revenues of the different scenarios. This allows them to respond quickly to broader events that will affect peoples movement, be it lockdowns or staggered shifts. It also means they can quickly put on extra trains and buses if needed.
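
As a toy illustration of that kind of scenario comparison (all figures below are invented):

```python
# Toy scenario comparison with made-up numbers: a rush-hour-heavy timetable
# vs. an all-day even spread that lowers crowding per trip.
COST_PER_TRIP = 120.0        # driver time + fuel + wear (hypothetical)
FARE = 2.5                   # revenue per passenger (hypothetical)

scenarios = {
    # name: (trips per day, expected passengers per trip)
    "rush_hour_heavy": (400, 45),   # fewer, fuller trips
    "all_day_even":    (520, 30),   # more, emptier trips for distancing
}

for name, (trips, pax) in scenarios.items():
    cost = trips * COST_PER_TRIP
    revenue = trips * pax * FARE
    print(f"{name:16} cost={cost:9.0f}  revenue={revenue:9.0f}  avg load={pax}")
```

Real planning software layers driver rules, vehicle availability, and route timings on top of this kind of comparison, which is exactly why the combinatorics quickly outgrow manual analysis.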

Mobility systems must be resilient, safe, inclusive, responsive, and sustainable. This is why #WeAllMove, a mobility service match-making platform, was launched in April 2020 by Wunder Mobility in partnership with the World Economic Forum COVID Action Platform. The platform highlights the importance of leveraging multi-stakeholder collaboration across governments, providers, commuters and more.

#WeAllMove consolidates information about a variety of mobility options available in any city, from mode share, to ride share and transit. The independent platform, co-hosted by mobility providers operating globally, will integrate private, public and joint mobility services into a single search and output engine, ensuring a better new mobility normal can be forged, regardless of the crisis ahead.

Since its launch in April 2020, it has grown to include 130 mobility service providers offering tailored services in over 300 cities and 40 countries. By bringing public and private stakeholders together, the platform can ensure business continuity for an array of mobility providers, and help secure jobs and services that depend on mobility.

Smart contingency planning

Artificial intelligence (AI) can help us solve seemingly intractable transport problems. Take this example from our own business, which provides advanced technology to public transportation agencies and operators in various countries.

One of our customers wanted to add 5% more trips to their schedule in order to spread passenger numbers over more journeys, and reduce the risk of coronavirus transmission. However, they had 14% fewer drivers to hand. It seemed like an impossible problem. And yet, our AI-driven software found a way of adding more trips with fewer drivers. It did so by optimizing the schedule and extending the average shift by just 45 minutes, while adding in any necessary breaks to adhere to labour and safety regulations. If the driver shortage becomes less severe, transportation providers can easily change their preferences, and the platform would automatically restore the length of the shift.

AI-powered systems also allow operators to quickly provide extra buses to ensure social distancing. Putting on an extra bus may sound simple, but it requires contingency planning, in the form of complex algorithms that can rapidly create alternative scenarios. These enable agencies and operators to figure out which part of the transportation network needs to change to ensure that extra vehicles and drivers are available when needed.

Take a tiny family-owned operator that owns five buses and offers only 50 trips a day. This already results in over 1 billion potential vehicle combinations. Even with a medium-sized transit provider, the numbers become so large that they can't be analysed by humans alone. In big transit-friendly cities like London, some 9,700 buses make 2.2 billion trips a year. Algorithms can sift through all those combinations and choose the optimal solutions within minutes or even seconds.
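
As a back-of-envelope check on that scale: even the crudest count, assigning each of 50 daily trips to one of five buses while ignoring timing, breaks, and depot rules, gives 5 to the power of 50 possibilities, comfortably over 1 billion:

```python
# Crude scale check: one bus choice per trip, 50 trips, 5 buses.
combos = 5 ** 50
print(combos)               # about 8.9e34
print(combos > 10 ** 9)     # True: "over 1 billion" is an understatement
```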

Image: Optibus

Monitor the impact of altered service

Transport providers have to ensure that all those living along certain routes can still get to their destinations even if schedules are changed at short notice.

With data-driven planning systems, transport officials can enter demographic data from public sources, such as income levels in specific areas. They can then look at a map that overlays the suggested routes with this data. This allows officials to quickly see the potential impact of any changes for residents, including those who may not have the means to use private forms of transport. Such mapping is a fast and simple way of making public transit as inclusive as possible.

All told, there are many different ways social distancing may affect public transit as we encounter the changes that accompany a gradual exit from lockdown. We may not know exactly what transit demand will look like in the coming months, but principles like cooperating with local governments and private institutions to flatten peak times, creating and comparing multiple transit scheduling scenarios, and monitoring the impact that any service changes will have on residents can help transportation providers navigate the unknown roads ahead.

Read the original:

Here are 3 ways AI could help you commute safely after COVID-19 - World Economic Forum

Everseen lands $10 million so AI can keep an eye on a $45 billion problem – VentureBeat

It is an issue that the retail industry doesn't talk about in great detail, but shrinkage (loss of inventory at checkout through non-scans and other errors) costs $45.2 billion annually.

Today, Everseen, an AI software company founded in Cork, Ireland that detects non-scans at checkout, has announced $10 million in funding to help solve this problem through the use of AI and camera technology. The funds will be used to expand operations, including a newly established New York City office that serves as the headquarters for the company's U.S. operations.

Everseen's AI technology is currently being used by five of the world's 10 largest retailers. It integrates with security cameras that sit over both staffed registers and self-checkout machines and automatically detects non-scans. When a product is left unscanned, Everseen sends an alert (a notification that includes an image of the non-scanned item) to the retail store's security teams via smartwatch, tablet, or any other mobile device. Whether it was caused by a simple mistake, a scanning issue (which happens at self-checkouts all the time), or an attempt at theft, the alert allows staff to deal with the problem quickly.

That is important, because traditional methods of dealing with shrinkage focus on assessing past activities rather than targeting losses in real time.

"Historically, the only way retailers could understand what was happening at checkout was by retroactively data mining their point of sale data and trying to parse through it to create a narrative around their loss and how it influences overall gross margin," Alan O'Herlihy, CEO and founder at Everseen, told me. "This approach is a waste of time and leaves too much room for error. It also doesn't prevent theft and human error, and definitely doesn't give retailers information that would inform other operational issues that they don't realize track back to error/loss at checkout."

So how is AI powering this application?

"Our AI software integrates directly into stores' existing cameras and, from day one, understands the DNA of a transaction, meaning that it also knows when it's seeing something that isn't a transaction," O'Herlihy said. "The AI becomes smarter over time, learning the specifics of each individual store it's in (spatial layout, shopping patterns, etc.) and being able to identify inconsistencies specific to that store's checkout space. The AI does this by taking the CCTV video data (25 frames per second) and POS stream data (time-stamped to the millisecond) and classifying each action and event that occurs at the checkout position. Then our algorithms get to work, detecting nearly everything."
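
Everseen's pipeline is proprietary, so the following is only a plausible sketch of one sub-step it implies: pairing each classified video frame with the nearest millisecond-stamped POS event, here done with pandas. All column names and data are invented:

```python
# Hypothetical sketch: align frame-level action classifications with POS
# events by timestamp; a handled item with no beep nearby is a suspect.
import pandas as pd

frames = pd.DataFrame({
    "ts": pd.to_datetime(["2021-06-01 10:00:00.040",
                          "2021-06-01 10:00:02.000",
                          "2021-06-01 10:00:05.120"]),
    "visual_action": ["item_over_scanner", "item_into_bag", "item_over_scanner"],
})
pos = pd.DataFrame({
    "ts": pd.to_datetime(["2021-06-01 10:00:00.051",
                          "2021-06-01 10:00:05.130"]),
    "pos_event": ["scan_beep", "scan_beep"],
})

# For each classified frame, attach the POS event within 100 ms, if any.
matched = pd.merge_asof(frames, pos, on="ts",
                        direction="nearest", tolerance=pd.Timedelta("100ms"))

# Frames showing an item handled with no matching beep are non-scan suspects.
suspects = matched[matched["pos_event"].isna()]
print(suspects[["ts", "visual_action"]])
```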

That understanding goes beyond real-time alerts to give retailers actionable data for the future.

"Now that retailers have access to POS data that's paired with information about loss incidents, they're able to determine all the various sources of loss and how they affect their operations as a whole," O'Herlihy said.

Of course, AI in retail applications has been dominated by chatbots and smart mirrors. Why has Everseen focused on using AI technology for customer and cashier errors?

"I think one of the reasons that chatbots are getting a lot of attention in retail at the moment is because people can relate to them," O'Herlihy said. "They're an accessible, customer-facing application that isn't so far off from other familiar day-to-day interactions they have. I've seen products that let you walk around a shop and talk to your phone and ask it questions. We think this is cool, but it's not going to stop products from walking out of retailers' stores."

So while futuristic AI is at the core of the solution, the problem is deeply personal to O'Herlihy and is rooted in traditional retail issues.

"I grew up the son of a grocer in Ireland and spent my whole youth working in the family store, so my personal story ties into why Everseen chose to use its AI technology to address theft and customer/cashier error," O'Herlihy said. "Our store experienced so much loss over the years that we even joke that one of our cashiers built a house with the money they stole. But the methods of tracking the loss were so cumbersome that it just became something you accepted, not something you ever actually solved."

The funds won't just be used for expansion into the US. Everseen will also focus on the future of retail and what can be done to make the consumer experience better, while eliminating shrinkage.

"AI and machine learning are the rocket fuel that is enabling us to disrupt the retail industry," O'Herlihy said. "Our next step is to eliminate physical checkouts altogether, beginning with the introduction of checkout-free shopping (similar to Amazon Go) for major retailers worldwide. We have a patent pending for a virtual manager system that will enable our checkout-free shopping technology (0Line), and will make it a reality for retailers. Because we're retailer-agnostic, without conflicting consumer interests (like Amazon or Facebook, or Google, for that matter), we believe we're in a strong position to lead the retail world into this new phase."

0Line will provide retailers with video cameras and sensors. It will then integrate with inventory, POS, and customer data. That allows the system to identify products as consumers remove items from shelves and to automatically charge them upon leaving the store, along with sending an itemized receipt. It sounds like the perfect retail solution, but it isn't without issues.

"The goal is to eliminate legacy technology such as self-checkouts," O'Herlihy said. "It has been a real challenge to integrate our technology with antiquated technology like this."

With the current goal of beating the $45.2 billion shrinkage problem and future intentions to change retail forever, Everseen is applying AI to retail environments in ways that go beyond the current crop of chatbots and online-to-offline applications, and that's refreshing to see.

More here:

Everseen lands $10 million so AI can keep an eye on a $45 billion problem - VentureBeat