Google’s anti-trolling AI can be defeated by typos, researchers find … – Ars Technica

Visit any news organization's website or any social media site, and you're bound to find some abusive or hateful language being thrown around. As those who moderate Ars' comments know, trying to keep a lid on trolling and abuse in comments can be an arduous and thankless task: when done too heavily, it smacks of censorship and suppression of free speech; when applied too lightly, it can poison the community and keep people from sharing their thoughts out of fear of being targeted. And human-based moderation is time-consuming.

Both of these problems are the target of a project by Jigsaw, an Alphabet startup effort spun off from Google. Jigsaw's Perspective project is an application programming interface (API) currently focused on moderating online conversations using machine learning to spot abusive, harassing, and toxic comments. The AI applies a "toxicity score" to comments, which can be used either to aid moderation or to reject comments outright, giving the commenter feedback about why their post was rejected. Jigsaw is currently partnering with Wikipedia and The New York Times, among others, to implement the Perspective API to assist in moderating reader-contributed content.

But that AI still needs some training, as researchers at the University of Washington's Network Security Lab recently demonstrated. In a paper published on February 27, Hossein Hosseini, Sreeram Kannan, Baosen Zhang, and Radha Poovendran demonstrated that they could fool the Perspective AI into giving a low toxicity score to comments that it would otherwise flag by simply misspelling key hot-button words (such as "iidiot") or inserting punctuation into the word ("i.diot" or "i d i o t," for example). By gaming the AI's parsing of text, they were able to obtain scores low enough for comments that would normally be flagged as abusive to pass the toxicity test.

"One type of the vulnerabilities of machine learning algorithms is that an adversary can change the algorithm output by subtly perturbing the input, often unnoticeable by humans," Hosseini and his co-authors wrote. "Such inputs are called adversarial examples, and have been shown to be effective against different machine learning algorithms even when the adversary has only a black-box access to the target model."

The researchers also found that Perspective would flag comments that were not abusive in nature but used keywords that the AI had been trained to see as abusive. The phrases "not stupid" or "not an idiot" scored nearly as high on Perspective's toxicity scale as comments that used "stupid" and "idiot."
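Because Perspective is exposed as a public REST endpoint, probes like the ones described above are straightforward to reproduce. The sketch below is not the researchers' actual test harness; it simply assumes you have a Comment Analyzer API key and compares the toxicity scores Perspective returns for an abusive comment, an obfuscated variant, and a negated phrase.

```python
# Minimal sketch of probing the Perspective (Comment Analyzer) API.
# Assumes an API key with access to the public endpoint; the request and
# response shapes follow Google's published Perspective API documentation.
import requests

API_KEY = "YOUR_API_KEY"  # hypothetical placeholder
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity(text: str) -> float:
    """Return Perspective's summary toxicity score (0.0 to 1.0) for `text`."""
    body = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, json=body)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Compare an abusive comment, an obfuscated variant, and a negated phrase.
for probe in ["you are an idiot", "you are an i d i o t", "you are not an idiot"]:
    print(f"{probe!r}: {toxicity(probe):.2f}")
```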

These sorts of false positives, coupled with easy evasion of the algorithms by adversaries seeking to bypass screening, point to the basic problem with any sort of automated moderation and censorship. Update: CJ Adams, Jigsaw's product manager for Perspective, acknowledged the difficulty in a statement he sent to Ars:

It's great to see research like this. Online toxicity is a difficult problem, and Perspective was developed to support exploration of how ML can be used to help discussion. We welcome academic researchers to join our research efforts on Github and explore how we can collaborate together to identify shortcomings of existing models and find ways to improve them.

Perspective is still a very early-stage technology, and as these researchers rightly point out, it will only detect patterns that are similar to examples of toxicity it has seen before. We have more details on this challenge and others on the Conversation AI research page. The API allows users and researchers to submit corrections like these directly, which will then be used to improve the model and ensure it can understand more forms of toxic language, and evolve as new forms emerge over time.

Visit link:

Google's anti-trolling AI can be defeated by typos, researchers find ... - Ars Technica

Google hopes to prevent robot uprising with new AI training technique – The Independent

Read the original here:

Google hopes to prevent robot uprising with new AI training technique - The Independent

Artificial intelligence called threat to humanity, compared to nuclear weapons: Report – Washington Times

Artificial intelligence is revolutionizing warfare and espionage in ways similar to the invention of nuclear arms and ultimately could destroy humanity, according to a new government-sponsored study.

Advances in artificial intelligence, or AI, and a subset called machine learning are occurring much faster than expected and will provide U.S. military and intelligence services with powerful new high-technology warfare and spying capabilities, says a report by two AI experts produced for Harvard's Belfer Center.

The range of coming advanced AI weapons includes robot assassins, superfast cyber attack machines, driverless car bombs and swarms of small explosive kamikaze drones.

According to the report, "Artificial Intelligence and National Security," AI will dramatically augment autonomous weapons and espionage capabilities and will represent a key aspect of future military power.

The report also offers an alarming warning that artificial intelligence could spin out of control: "Speculative but plausible hypotheses suggest that General AI and especially superintelligence systems pose a potentially existential threat to humanity."

The 132-page report was written by Gregory C. Allen and Taniel Chan for the director of the Intelligence Advanced Research Projects Activity (IARPA), the U.S. intelligence community's research unit.

The study calls for policies designed to preserve American military and intelligence superiority, boost peaceful uses of AI, and address the dangers of accidental or adversarial attacks from automated systems.

The report predicts that AI will produce a revolution in both military and intelligence affairs comparable to the emergence of aircraft, noting unsuccessful diplomatic efforts in 1899 to ban the use of aircraft for military purposes.

"The applications of AI to warfare and espionage are likely to be as irresistible as aircraft," the report says. "Preventing expanded military use of AI is likely impossible."

Recent AI breakthroughs included a $35 computer that defeated a former Air Force pilot in an air combat simulator, and a program that beat a South Korean master at Go, a chesslike board game.

AI is rapidly growing from the exponential expansion of computing power, the use of large data sets to train machine learning systems, and significant and rapidly increasing private sector investment.

Just as cyber weapons are being developed by both major powers and underdeveloped nations, automated weaponry such as aerial drones and ground robots likely will be deployed by foreign militaries.

"In the short term, advances in AI will likely allow more autonomous robotic support to warfighters, and accelerate the shift from manned to unmanned combat missions," the report says, noting that the Islamic State has begun using drones in attacks.

"Over the long term, these capabilities will transform military power and warfare."

Russia is planning extensive automated weapons systems and, according to the report, plans to have 30 percent of its combat forces remotely controlled or autonomous by 2030.

Currently, the Pentagon has restricted the use of lethal autonomous systems.

Future threats could also come from swarms of small robots and drones.

"Imagine a low-cost drone with the range of a Canada Goose, a bird which can cover 1,500 miles in under 24 hours at an average speed of 60 miles per hour," the report said. "How would an aircraft carrier battle group respond to an attack from millions of aerial kamikaze explosive drones?"

AI-derived assassinations also are likely in the future by robots that will be difficult to detect. "A small, autonomous robot could infiltrate a target's home, inject the target with a lethal dose of poison, and leave undetected," the report said. "Alternatively, automatic sniping robots could assassinate targets from afar."

Terrorists also are expected in the future to develop precision-guided improvised explosive devices that could transit long distances autonomously. An example would be autonomous self-driving car bombs.

AI also could be used in deadly cyber attacks, such as hacking cars and forcing them to crash, and advanced AI cyber capabilities also will enhance cyber warfare capabilities by overwhelming human operators.

Robots also will be able to inject poisoned data into large data sets in ways that could create false images for warfighters looking to distinguish between enemy and friendly aircraft, naval systems or ground weapons.

Electronic cyber robots in the future will automate the human-intensive process of both defending networks from attacks, and probing enemy networks and software for weaknesses used in attacks.

Another danger is that in the future hostile actors will steal or replicate military and intelligence AI systems.

The report urged the Pentagon to develop counter-AI capabilities for both offensive and defensive operations.

GPS SPOOFING AND USS McCAIN

One question being asked by the Navy in the aftermath of this week's deadly collision between the destroyer USS John S. McCain and an oil tanker is whether the collision was the result of cyber or electronic warfare attacks.

Chief of Naval Operations Adm. John Richardson was asked about the possibility Monday and said that while there is no indication yet that outside interference caused the collision, investigators will examine all possibilities, including some type of cyber attack.

Navy sources close to the probe say there is no indication cyber attacks or electronic warfare caused the collision that killed 10 sailors as the ship transited the Straits of Malacca near Singapore.

But the fact that the McCain was the second Aegis Navy destroyer to be hit by a large merchant ship in two months has raised new concerns about electronic interference.

Seven died on the USS Fitzgerald, another guided-missile destroyer that collided with a merchant ship in waters near Japan in June.

The incidents highlight the likelihood that electronic warfare will be used in a future conflict to cause ship collisions or groundings.

Both warships are equipped with several types of radar capable of detecting nearby shipping traffic miles away. Watch officers on the bridge were monitoring all approaching ships.

The fact that crews of the two ships were unable to see the approaching ships in time to maneuver away has increased concerns about electronic sabotage.

One case of possible Russian electronic warfare surfaced two months ago. The Department of Transportation's Maritime Administration warned about possible intentional GPS interference on June 22 in the Black Sea, where Russian ships and aircraft in the past have challenged U.S. Navy warships and surveillance aircraft.

According to the New Scientist, an online publication that first reported the suspected Russian GPS spoofing, the Maritime Administration notice referred to a ship sailing near the Russian port of Novorossiysk that reported its GPS navigation falsely indicated the vessel was located more than 20 miles inland at Gelendzhik Airport, close to the Russian resort town of the same name on the Black Sea.

The navigation equipment was checked for malfunctions and found to be working properly. The ship captain then contacted nearby ships and learned that at least 20 ships also reported that signals from their automatic identification system (AIS), a system used to broadcast ship locations at sea, also had falsely indicated they were at the inland airport.

Todd Humphreys, a professor who specializes in robotics at the University of Texas, suspects the Russians in June were experimenting with an electronic warfare weapon designed to lure ships off course by substituting false electronic signals to navigation equipment.

On the U.S. destroyers, Mr. Humphreys told Inside the Ring that blaming two similar warship accidents on human negligence seems difficult to accept.

"With the Fitzgerald collision fresh on their minds, surely the crew of the USS John McCain would have entered the waters around the Malacca Strait with extra vigilance," he said. "And yes, it's theoretically possible that GPS spoofing or AIS spoofing was involved in the collision. Nonetheless I still think that crew negligence is the most likely explanation."

Military vessels use encrypted GPS signals that make spoofing more difficult.

Spoofing the AIS on the oil tanker that hit the McCain is also a possibility, but would not explain how the warship failed to detect the approaching vessel.

"One can easily send out bogus AIS messages and cause phantom ships to appear on ships' electronic chart displays across a widespread area," Mr. Humphreys said.

Mr. Humphreys said he suspects Navy investigators will find three factors behind the McCain disaster: the ship was not broadcasting its AIS location beacon; the oil tanker's collision warning system may have failed; or the Navy crew failed to detect the approaching tanker.

Contact Bill Gertz on Twitter @BillGertz.

View post:

Artificial intelligence called threat to humanity, compared to nuclear weapons: Report - Washington Times

Is AIOps the Answer to Your AI Woes? – RTInsights

Companies must make AIOps a vital part of company operations to survive the coming digital transformation.

AI adoption is going to be a key component for business survival by 2025, based on a global study by Genpact, but companies still struggle with what that means and how to accomplish it. More often than not, those big AI-driven initiatives end in failure. So, where's the disconnect between the need for AI and its implementation?

According to the Harvard Business Review, there's one reason and one reason only that companies keep missing the mark. If your business wants to survive in the next phase of digital transformation, you need AI Operations.

See also: AIOps Gaining Traction as Technology Accelerator

Businesses are so focused on the shiny appeal of AI that they fail to consider how they'll actually use their new AI initiatives. HBR's biggest lesson in all of this is the need to build an AI-integrated organization from the ground up, i.e., building and managing AI to deliver results.

Companies must take stock of existing systems and use AI-driven initiatives to facilitate those end results. For example, companies could use contract management software to shorten the time from inquiries to signing new contracts. The infrastructure must be there first.

This concept is more than just the software. A business must invest in engineers and developers able to identify key areas where AI could transform a process into something that produces results, and that requires more than simple development.

Much like DevOps revolutionized software development and DataOps is transforming big data, AIOps takes the same approach of integration and continual insight. A proper AIOps loop sees a measurable end goal and can get there using AI-driven initiatives.

Businesses must address the layers of AIOps if they want to implement AI effectively. These layers are vital for companies that want to use AI to drive insights, transform business practices, and survive digital transformation.

Before you get distracted by all of AI's shiny features, consider how it will integrate into your existing systems. AIOps is a competitive necessity, according to HBR. Companies must make AIOps a vital part of company operations to survive the coming digital transformation.

The rest is here:

Is AIOps the Answer to Your AI Woes? - RTInsights

The Secret AI Testers inside Tom Clancy’s The Division – Gamasutra

The following blog post, unless otherwise noted, was written by a member of Gamasutra's community. The thoughts and opinions expressed are those of the writer and not Gamasutra or its parent company.

AI and Games is a crowdfunded YouTube series that explores research and applications of artificial intelligence in video games. You can support this work by visiting my Patreon page.

In collaboration with Ubisoft and Massive Entertainment, I present three blogs exploring the AI behind Tom Clancy's The Division 2, including excerpts from my interview with the Lead AI Programmer of the franchise, Philip Dunstan.

Part 1 of this series, where we discuss enemy AI design can be found here.

Meanwhile part 2, which explores open-world and systemic design can be found here.

Building a live-service game such as Tom Clancy's The Division comes with all sorts of challenges: ensuring the game is stable for players on a variety of online connections, handling the different ways players move through the world and explore interactive systems and gameplay challenges, and, more critically, checking that the game plays as expected so that players aren't getting frustrated because world events don't trigger or missions don't register as complete. While this is most certainly a challenge for quality assurance and testing teams, as the scale and complexity of these games increase, the workloads of QA teams explode in scale. And that's where artificial intelligence can help not just create in-game functionality, but change how the game is being developed.

In this final blog on the AI of The Division 2, I'm going to take a look at the secret AI players that playtest Tom Clancy's The Division with insights from Philip Dunstan: the lead AI programmer at developers Massive Entertainment. I'll be looking at the custom AI bots that are deployed to assess specific parts of the game and how the first game's post-launch DLC changed how the game would be tested moving into Division 2.

Tom Clancy's The Division has not one, but two types of bots that are used to help test the games of the franchise: the server bots and the client bots. The server bots - as the name might suggest - run natively on the server and don't interface with the game like a player would. As I'll explain in a minute, these bots behave completely differently from real players and are designed to stress-test the limits of the Division servers. Meanwhile the client bots run as if they're playing a build of the game client-side. They assume control of the game instead of the player, adhering to all the same rules as a regular player, to a point that all the in-game systems genuinely believe that this player character is being controlled by a human. They don't have any special modifications that allow them to manipulate or cheat the game and are built to run on any platform, be it PC, Xbox or Playstation. Their job is to test how the game actually works for players: testing the main story missions, wandering the open world and gathering all sorts of performance stats along the way that help give the developers a stronger understanding of how the game will perform for players when they log into Washington.

The demand for these types of tools is ever increasing. As in-game worlds in live-service games continue to grow, the number of potential bugs explodes exponentially. Even if you consider both Division games, it's not just the map of Washington DC being larger than Manhattan, but each update to both games not only introduces new content - which might have bugs in it - but it also can change or impact a lot of the existing content in the game - meaning even more bugs because you broke something that was already working. This is only made worse by the reality that live-service games need to be updated fairly frequently to maintain player engagement, and these updates need to work so the word of mouth continues to be strong. This is a problem that exceeds the capabilities of human testers: as more content is being built and existing content modified, not only does quality control need to be maintained on all the new content, but on everything else that already exists in the game. This is thousands upon thousands of play hours and is increasingly difficult to balance. And sometimes, the requirements of testing exceed the number of available staff who can even sit down and play the game...

Philip Dunstan: "As you can imagine we're building servers that host a thousand players, but it's really difficult to get a thousand players to play at the same time. And especially if you want to know if your servers can stay up for a week it's difficult to find a thousand players that can play continuously to test the stability of your server while the game is in such an early stage of development.

As mentioned in a previous blog, the original Tom Clancy's The Division runs with what is known as a Server Bot: it's an AI player that logs into a Division server and plays around in the game. This is used to test whether or not the game's systems operate as expected. As Philip explained, while the development team really benefits from this, the actual AI they built was really simple and, well... it cheats a lot.

Philip Dunstan: "So very early on in the Division 1, we had these server bots that would connect to a game, they would... you know they're actually really stupid. They're not trying to mimic player behaviour at all. They just teleport around the world, they find NPCs to kill, they shoot the NPCs and then they teleport off to a different part of the world. And they've got god mode turned so they can't be killed and they just do this continuously and then every now and then they disconnect from the game and they reconnect or they group up into co-op sessions and they disconnect. We're testing our ability to you know group players, to create all the different phases for the players to join and disconnect. And then surprisingly it's extremely performance metrics out of these bots. Their performance metrics actually very closely matches the type of metrics we see in players, even though they're not trying play like a player."

"We had those in the Division, we honestly would not have been able to ship a stable Division 1 or Division 2. I mean Division 1 and Division 2 were both extremely stable games you know considering how many players we had after launch. If you look at this last year type thing, the number of like significant downtime causing issues that we've had has been extremely low. And we're able to do that because we're able to test it to an extent that we're satisfied through an automated method."

While the server bots were conceived from the very beginning, the client bots are a different story altogether and emerged from an interesting problem during the development of the first Division. But not at launch, rather with the second DLC update for the game: The Underground.

The Underground opens up a whole new game mode in the Division accessible from the basement of your base of operations: the James A Farley post office building across the street from Madison Square Garden. In the Underground, players complete procedurally generated missions comprised of different enemy factions hiding out in the tunnels underneath New York. And this is where a new problem is introduced: unlike the rest of the Division, if a mission is going to be procedurally generated, how do you test each possible permutation to know it's going to work?

Philip Dunstan: "The client bots were interesting, the Underground, because it is procedurally generated had a sort of problem which had been unique up until that point. Up to that point we'd be able to test whether a level could be completed by having QC run through that level and see if we can complete it. We have a large test team at Ubisoft that is constantly playing through the levels testing things like 'is this level completable'. And that worked perfect fine for the launch of the Division and survival mode. But for underground, we had you know hundreds and thousands of different variations of the level. It no longer became possible to test this manually. We had a technical problem at the time as well that our navmesh generation wasn't consistent enough, that when we generate the navmesh for the underground level one of the variations might be playable, but later on when someone had moved some props around and we may have had a navmesh break on a subsequent generation. So it became not just impossible from a practical sense of how many testers you need, it just wasn't even feasible at all to manually test."

The client bots were headed up by one of Massive's partner studios working on the Division: Ubisoft Reflections, based in the UK. As mentioned earlier, the team opted for the more challenging task of creating an entirely new system. The AI players are not based on the existing AI archetypes; instead it's a custom-built AI layer that directly controls the player's inputs. This helps keep all the development of these tools isolated from the main enemy AI but, as mentioned, it means every system in the game still believes that a human player is playing the game. The system was subsequently interfaced into the debug console and tools, allowing for a variety of game actions to bypass the controller layer and be processed by the player character. This means that, just like a human, the actions it's trying to execute only work if the current game state permits it.

One of the first priorities for the bots was to test navigation mesh coverage. Navigation meshes are the data structure baked into a level that tells AI characters where in the world they can move around. Without a working nav mesh, no friendly or enemy AI would be able to walk around the map. Hence if any of it is broken, this needs to be identified immediately for designers and programmers to test. In addition, follow bots were built that allowed AI players to follow human ones, once again checking how AI characters might be able to use the navigation mesh to move through complex environments and combat arenas. Plus simple combat behaviours that - while they didn't pay attention to their health or avoiding any hazards - would eliminate targets simply by turning the in-game camera towards an enemy's head and then pulling the trigger.

But in time this scaled up from the more low-level tests of movement space and simple combat to being able to take on an entire mission. This requires a lot more complexity and interfacing with the separate mission system built into The Division's codebase, given it needs to know what the objectives are at any point in time, and naturally these shift throughout a given story mission. This requires a more nuanced process, whereby the bots kill all the enemies in an area, follow the path to the objective marker, destroy specific objects if expected to, and trigger any and all interactions they find within a radius of themselves.
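As a rough, self-contained illustration of that mission-test flow (clear the area, move to the objective marker, trigger nearby interactions), a simplified loop might look like the sketch below. All classes and data here are hypothetical stand-ins rather than anything from The Division's codebase, and destructible objectives are omitted for brevity.

```python
# Hypothetical illustration of a client-bot mission test, following the flow
# described above. Every class, field and value is invented so the control
# flow can run on its own; it is not Massive's or Reflections' actual code.
from dataclasses import dataclass, field

@dataclass
class Objective:
    marker: str
    enemies: list = field(default_factory=list)
    interactions: list = field(default_factory=list)

@dataclass
class MissionTestBot:
    log: list = field(default_factory=list)

    def run_mission(self, objectives):
        for obj in objectives:
            # 1. Clear the combat space around the current objective.
            for enemy in obj.enemies:
                self.log.append(f"eliminated {enemy}")
            # 2. Follow the designer-placed path to the objective marker.
            self.log.append(f"moved to marker {obj.marker}")
            # 3. Trigger any interactions found within a radius of the bot.
            for interaction in obj.interactions:
                self.log.append(f"triggered {interaction}")
        self.log.append("mission complete")
        return self.log

# Example run over a two-step mission.
mission = [
    Objective("courtyard", enemies=["rioter_a", "rioter_b"], interactions=["door"]),
    Objective("server_room", enemies=["boss"], interactions=["laptop"]),
]
print(MissionTestBot().run_mission(mission))
```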

With the client bots successfully implemented, they could not only test hundreds of permutations of the Underground missions over a weekend, but they could also run tests overnight on all the main story missions from the first game.

Philip Dunstan: "And this was y'know, really successful for underground, it became an important part of our tool for walkthroughs, but also even more importantly I think leading into Division 2 it became even more successful as a means to test our client performance through a level. You could have a whole suite of consoles playing through the level and recording their FPS and memory usage every night or 24/7 and creating reports from that. And same for the open world, we could have a group of client bots moving through the open world and finding parts of the world where performance becomes a problem. So they became like a really important part of console performance testing and they're still being used like that on Division 2."

Transitioning this technology from Division 1 to Division 2 still required a lot of extra work, as well as a change in workflow for both testers and the level and gameplay designers. The progression of missions in the Division 2 is not as linear as it was in the first game, hence the bots might become confused in certain parts of the mission as to how to proceed. So throughout development on Division 2, invisible testing markup is laced throughout each mission of the game by the level designers. It's completely invisible to players and has zero impact on our experience of the game, but the client bots can read this mark-up as directions for continuing their work. Mission tests are run nightly and help ensure errors don't creep through into production, as they identify mission content failing or game systems not executing as expected. There is also a separate error type for logging when the client bot wasn't working as expected, and the developers strive to ensure all three of these categories have zero failures in them at all times.

In addition, the open-world testing is used by the QA and Tech Art teams on the game to identify areas of the world where there are performance overheads, and they report specific bugs once they've looked at the data. As the client bots visit all playable space within the world, they identify areas of the world they get stuck in and cannot return from, as well as framerate drops that could be due to high poly counts or issues with textures or particle systems in the environment. Lastly, there is also the AI Gym, which is dedicated to testing both the bots' functionality itself and core gameplay mechanics should changes be made.

There are still limitations to what these systems can do, largely in part due to the complexities of the game world that would seem intuitive to a player, but the client bot might need some extra hand holding. And of course despite this big push into automation, there's still a lot of value gained from having people sit down and play the game as well.

Philip Dunstan: "There's definite restrictions to what our client bots can do. Again they're not really trying to play like a human, we're not trying to model human play. They move through the levels on a 'golden path', a hand-placed level-designed path on this is how you move through the level. They need to know how to interact with the doors or they need to know how to interact with the laptops to unlock the next part of the level. So they do require some manual scripted setup. So they're not really playing like a player would play. But they still provide a lot of benefit even with those restrictions.

"You still need to have dev testers y'know, testing that you can't walk off, there aren't nav mesh blockers preventing you from getting to parts of levels because the client bot will move along the golden path and not check the areas of the combat space. But you get an early sort of smoke-test system of saying y'know 'is there something significantly wrong with this level'?"

As games continue to become larger and more complex, there is a real need to automate critical aspects of development. Be it testing frameworks, batch processes of art pipelines, animation controllers, design tools and more, all of it serves the needs of the development team, allowing programmers, artists and designers to focus their efforts on delivering the best game they can. Artificial Intelligence is slowly changing the way in which video games are built and is being applied in ways that players would never really think about. The real achievement is that by employing AI in these new and pragmatic ways, it helps keep the problems that can emerge in game development manageable. It keeps projects on track and on budget. But it's important that players understand the challenges of how games are made: be it the Division 2, other Ubisoft projects or the industry as a whole.

Special thanks to Ubisoft for the opportunity to work with them on this project. And of course to my patrons who crowdfund the AI and Games series.

View post:

The Secret AI Testers inside Tom Clancy's The Division - Gamasutra

Joint Statement on the Creation of the Global Partnership on Artificial Intelligence – JD Supra

[co-author: Adam Perkins, Trainee Solicitor]

On June 15, 2020, the Government of the United Kingdom issued a joint statement announcing the creation of the Global Partnership on Artificial Intelligence (GPAI) along with 14 other founding members, including the European Union and the United States of America.

As announced, GPAI is an international partnership that will aim to promote the responsible development and use of Artificial Intelligence (AI) in a human-centric manner. This means developing and deploying AI in a way that is consistent with human rights, fundamental freedoms and shared democratic values. GPAI's aim is to bridge the gap between theory and practice on AI by supporting cutting-edge research and applied activities on AI-related priorities.

The values which GPAI endorses reflect the core AI principles as promoted by the Organisation for Economic Co-operation and Development (OECD) in the May 2019 OECD Council Recommendation on AI. The OECD will be the host of GPAI's Secretariat in Paris, and GPAI will draw upon the OECD's international AI policy leadership. It is thought that this integration will strengthen the evidence base for policy aimed at responsible AI. In addition, GPAI has stated that it is looking forward to working with other interested countries and partners.

Centres of Expertise in Montreal and Paris will provide research and administrative support to GPAI, while the GPAI Secretariat will lend support to GPAI's governing bodies, consisting of a council and steering committee. GPAI will engage in scientific and technical work and analysis, bringing together experts within academia, industry and government to collaborate across four initial working groups: responsible AI, data governance, the future of work, and innovation and commercialisation.

The outlook of these working groups appears to reflect GPAIs recognition of the potential for AI to act as a catalyst for sustainable economic growth and development, providing that it can be done in an accountable, transparent and responsible manner.

GPAI's short-term priority, however, is to investigate how AI can be used to help with the response to, and recovery from, COVID-19.

The first annual GPAI Multistakeholder Experts Group Plenary is planned to take place in December 2020.

The creation of GPAI is an exciting new step in the global effort to harvest the possibilities which AI offers in an ethical and responsible way, minimizing the risks to individuals' rights and freedoms. We will be monitoring its progress.

Link:

Joint Statement on the Creation of the Global Partnership on Artificial Intelligence - JD Supra

AI And Account Based Marketing In A Time Of Disruption – Forbes


We don't know how the massive shifts in consumer behavior brought on by the COVID-19 pandemic will evolve or endure. But we do know that as our lives change, marketers' data change. Both the current impact and the future implications may be significant.

I asked Alex Atzberger, CEO of Episerver, a digital experience company, to put the issues in perspective.

Paul Talbot: How is AI holding up? Has the pandemic impacted the quality of data used to feed analytic tools that help marketers create both strategic and tactical scenarios and insights?

Alex Atzberger: There is more data and more need for automation and AI now than ever. Website traffic is up, and digital engagement is way up due to COVID-19.

Business leaders and marketers now need automation and AI to free up headspace as they have to deal with so many fires.

Many marketers rely on personalization from AI engines that run in the background so that they can adjust their messaging to our times. AI is a good thing for them right now. They're able to get data faster, analyze faster and make better decisions.

However, they need to be aware of what has changed. For example, some of the data inputs may not be as good as before as people work from home and IP addresses are no longer identifying the company someone is with.

Talbot: Given the unknowns we all face, how can marketing strategy be adjusted thoughtfully?

Atzberger: A practitioner's time horizon for strategy shortens dramatically in crisis, and you need to spend more time on it. Planning is done in weeks and months, and you need to be ready to re-plan, especially since you have limited visibility into demand.

It can still be done thoughtfully but needs to adapt to the new situation and requires input from sales, partners and others on what channels and activities are working. The more real-time you can assess what is working, the better you can adjust and plan for the future.

Talbot: On a similar note, how have coronavirus disruptions altered the landscape of account-based marketing?

Atzberger: It has created massive disruptions. ABM depends on being able to map visitors to accounts. We see companies where that mapping ability has dropped 50% since working from home started. This is a big challenge.

A lot of the gains in ABM in recent years rest on our ability to target ads, content, direct sales team efforts and look at third-party intent signals. Without a fundamental piece of data, the picture is fuzzy again. It's like being fitted with a worse prescription of glasses; you just can't see as clearly.

Talbot: With the soaring numbers of people working from home, how does this impact marketing strategy for the B2B organization?

Atzberger: In a big way. Anything based on account is going to be affected because it's now more difficult to identify these buyers who are at home and look the same.

Direct mail programs are a big challenge because you can't really send stuff to their homes; that's a little creepy. Events are severely impacted too, and sponsoring or attending an online version of a big industry trade show just isn't quite the same thing.

The marketing mix has to shift, your website has to work harder, your emails have to work harder, webinars have to work harder, all these digital channels will need to deliver much more to make up for systemic softness in other areas.

Talbot: Any other insights you'd like to share?

Atzberger: We like to say, you are what you read. Rather than relying on IP addresses, you can 1:1 personalize content based on a visitor's actual site activity.

This is what ABM is all about: to figure out what's more relevant for a person based on their industry. Now leapfrog that and go to the individual to act on what she's interested in at that moment. The current crisis might give you the best reason for change.

Originally posted here:

AI And Account Based Marketing In A Time Of Disruption - Forbes

Intel, Qualcomm, Google, and NVIDIA Race to Develop AI Chips and Platforms – All About Circuits

Artificial intelligence labs race to develop processors that are bigger, faster, stronger.

With major companies rolling out AI chips and smaller startups nipping at their heels, there's no denying that the future of artificial intelligence is indeed already upon us. While each boasts slightly different features, they're all striving to provide ease of use, speed, and versatility. Manufacturers are demonstrating more adaptability than ever before, and are rapidly developing new versions to meet a growing demand.

In a marketplace that promises to do nothing but grow, these four are braced for impact.

The Verge reports that Qualcomm's processors account for approximately 40% of the mobile market, so their entry into the AI game is no surprise. They're taking a slightly different approach, though, adapting existing technology that utilizes Qualcomm's strengths. They've developed a Neural Processing Engine, an SDK that allows developers to optimize apps to run different AI applications on Snapdragon 600 and 800 processors. Ultimately, this integration means greater efficiency.

Facebook has already begun using the SDK to speed up augmented reality filters within its mobile app. Qualcomm's website says that it may also be used to help a device's camera recognize objects and detect objects for better shot composition, as well as make on-device post-processing beautification possible. They also promise more capabilities via the virtual voice assistant, and assure users of broad market applications: "from healthcare to security, on myriad mobile and embedded devices," they write. They also boast superior malware protection.

"It allows you to choose your core of choice relative to the power performance profile you want for your user," said Gary Brotman, Qualcomm head of AI and machine learning.

Qualcomm's SDK works with popular AI frameworks, including TensorFlow, Caffe, and Caffe2.

Google's AI chip showed up relatively early to the AI game, disrupting what had been a pretty singular marketplace. And Google's got no plans to sell the processor, instead distributing it via a new cloud service from which anyone can build and operate software via the internet that utilizes hundreds of processors packed into Google data centers, reports Wired.

The chip, called TPU 2.0 or Cloud TPU, is a followup to the initial processor that brought Google's AI services to fruition, though it can be used to train neural networks and not just run them like its predecessor. Developers need to learn a different way of building neural networks since it is designed for TensorFlow, but they expect, given the chip's affordability, that users will comply. Google has mentioned that researchers who share their research with the greater public will receive access for free.

Jeff Dean, who leads the AI lab Google Brain, says that the chip was needed to train with greater efficiency. It can handle 180 trillion floating point operations per second. Several chips connect to form a pod that offers 11,500 teraflops of computing power, which means that a training job that previously took a full day across 32 GPU boards can be completed in only six hours on a portion of a pod.
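Taking those figures at face value (and assuming, which Google has not confirmed, that a pod is a simple aggregate of identical chips), the quoted numbers imply roughly 64 TPU devices per pod:

```python
# Back-of-the-envelope check of the pod figures quoted above.
chip_teraflops = 180.0      # per-chip peak, as quoted
pod_teraflops = 11_500.0    # per-pod peak, as quoted
print(pod_teraflops / chip_teraflops)  # ~63.9, i.e. roughly 64 chips per pod
```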

Intel offers an AI chip via the Movidius Neural Compute Stick, which is a USB 3.0 device with a specialized vision processing unit. It's meant to complement the Xeon and Xeon Phi, and costs only $79.

While it is optimized for vision applications, Intel says that it can handle a variety of DNN applications. They write, "Designed for product developers, researchers and makers, the Movidius Neural Compute Stick aims to reduce barriers to developing, tuning and deploying AI applications by delivering dedicated high-performance deep-neural network processing in a small form factor."

The stick is powered by a VPU like what you might find in smart security cameras, AI drones, and industrial equipment. It can be used with a trained Caffe framework-based feed-forward Convolutional Neural Network, or the user may choose another pre-trained network, Intel reports. The Movidius Neural Compute Stick supports a CNN profiling, prototyping, and tuning workflow; provides power and data over a single USB Type A port; does not require cloud connectivity; and runs multiple devices on the same platform.

From Raspberry Pi to PC, the Movidius Neural Compute Stick can be used with any USB 3.0 platform.

NVIDIA was the first to get really serious about AI, but they're even more serious now. Their new chip, the Tesla V100, is a data center GPU. Reportedly, it made enough of a stir that it caused NVIDIA's shares to jump 17.8% on the day following the announcement.

The chip stands apart in training, which typically requires multiplying matrices of data a single number at a time. Instead, the Volta GPU architecture multiplies rows and columns at once, which speeds up the AI training process.

With 640 Tensor Cores, Volta is five times faster than Pascal, reduces training time from 18 hours to 7.4 hours, and uses next-generation high-speed interconnect technology which, according to the website, "enables more advanced model and data parallel approaches for strong scaling to achieve the absolute highest application performance."
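Tensor Cores are exposed to developers mainly through mixed-precision training in the major frameworks. As a framework-level illustration only (this is standard PyTorch automatic mixed precision, not NVIDIA's own benchmark code, and it assumes a Volta-or-newer CUDA GPU is available):

```python
# Minimal PyTorch mixed-precision training step; on Volta-class GPUs the
# half-precision matrix multiplies inside autocast() are routed to Tensor Cores.
import torch
import torch.nn as nn

device = "cuda"  # assumes a CUDA GPU with Tensor Cores (Volta or newer)
model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler()
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(256, 1024, device=device)
y = torch.randint(0, 10, (256,), device=device)

optimizer.zero_grad()
with torch.cuda.amp.autocast():      # run the forward pass in float16 where safe
    loss = loss_fn(model(x), y)
scaler.scale(loss).backward()        # scale the loss to avoid float16 underflow
scaler.step(optimizer)
scaler.update()
```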

Heard of more AI chips coming down the pipe? Let us know in the comments below!

Read the original:

Intel, Qualcomm, Google, and NVIDIA Race to Develop AI Chips and Platforms - All About Circuits

Demystifying AI: Can Humans and AI coexist to create a hyper-productive HumBot organisation? – The Indian Express

New Delhi | Updated: June 12, 2020 11:29:24 am

By Ravi Mehta, Sushant Kumaraswamy, Sudhi H and Prashant Kumar

In the last few years, as powerful AI technologies have become more mainstream, many apprehensions have been raised about the role AI will play in the evolution of work. While many opinions have been expressed on the predatory role of AI (for example, that AI will replace most of the work humans do), we offer an alternate view of the role AI can play in our lives and especially in our organisations. The famous poet Robert Frost once beautifully articulated: "Two roads diverged in a wood, and I took the one less traveled by, and that has made all the difference." It seems that, as business leaders, we are at a similar two-roads-diverged-in-a-wood moment, and the decisions we take on the role AI will play in our organisations will probably significantly change the evolutionary trajectory of our organisations. We believe that organisations can benefit much more from following an augmentation strategy (as compared to a replacement strategy) as it relates to AI. Our experience has shown that, if augmented effectively with unique human capabilities, AI has the potential to significantly transform the three key pillars of organisations: work, worker and workplace, and enable the creation of a hyper-productive HumBot (Human + Robot) organisation.

Work is a fundamental and defining component of a human life. While technology advancements have impacted the way work is done, humans still spend a lot of time doing work that can be best done by a machine (bot). By freeing up humans to focus more on those tasks (for example, empathy and inspiration) that maximise their potential, we are likely to significantly increase organisational productivity. However, to achieve this, we will need to redesign work to optimally utilise and integrate the best of both human and bot capabilities. For example, we can leverage the bot's ability to do high-volume, complex data collation tasks (for example, download bulk data from multiple systems at different times and do pattern and anomaly detection) and augment that with uniquely human skills (for example, deep enquiry, crisp articulation) to create a proactive insights platform that can significantly enhance the quality of decision making throughout the organisation.

As work gets redesigned through the infusion of AI technologies, the role of the worker (doing the work) is also likely to change significantly. While some roles may get replaced by AI, we believe AI technologies can lead to two significant benefits for workers: (a) they can create new roles that do not exist today, and (b) they can transform existing roles to make them more impactful. For example, while AI may automate a transactional process like invoice processing (and hence replace the work of people processing invoices), it can create new higher value-added roles for better managing the working capital of the organisation and for enhancing the quality of relationship the organisation has with its ecosystem of vendors and partners. Additionally, AI has the potential to further increase the effectiveness of these new roles by acting as personalised digital augmenters (for example, alerting the vendor relationship manager to significant news about an important vendor and proactively performing quick customised correlation analysis to provide next best moves for consideration). By embracing (rather than fearing and resisting) AI, we have the opportunity to enhance the quality of work and provide human workers more opportunities to find joy, meaning and fulfilment in their work.

Also Read: Automation and AI in a changing business landscape

As the work gets redesigned and the role of worker gets enhanced, the workplace is also expected to change significantly. COVID-19 has taught us that humans are resilient enough to change their behaviours and attitudes quickly and dramatically. As work from anywhere becomes more common, the definition of workplace may become more fluid. While this increased fluidity may lead to increased productivity and better worker morale, organisations will need to consider creating a more secure, responsive and collaborative hybrid (virtual and physical) workplace. AI technologies (for example, virtual whiteboards that convert speech to text and vice versa) can help create these hybrid workplaces to help human workers achieve better outcomes in a faster, smarter and more secure manner.

As business leaders navigate the proverbial two-roads-diverged-in-a-wood moment as it relates to defining the right AI strategy for their organisations, we suggest also considering the augmentation strategy (as compared to the replacement strategy we hear most about). Defining and implementing the right AI strategy can help organisations to create a hyper-productive HumBot organisation in which a new type of work is performed by a new type of worker in a new type of (hybrid) workplace.

Ravi Mehta is Partner; Sushant Kumaraswamy, Director; Sudhi. H, Associate Director; and Prashant Kumar, Senior Consultant at Deloitte India


Read more:

Demystifying AI: Can Humans and AI coexist to create a hyper-productive HumBot organisation? - The Indian Express

Discover Unlimited Possibilities with OpenAI’s AI Tool GPT-3 – Analytics Insight

Developed by OpenAI, the research lab co-founded by Elon Musk, GPT-3 is an autoregressive language model that uses deep learning to produce human-like text. OpenAI's GPT-3 is currently the largest artificial intelligence language model, mired in debates over whether it is a step closer to AGI (artificial general intelligence) and a first step toward creating that sort of superintelligence.

GPT-3 (Generative Pre-trained Transformer 3) is the third in a series of autocomplete tools designed by OpenAI. The program has been trained on a huge corpus of text, stored as billions of weighted connections between the different nodes in GPT-3's neural network. The program looks for and finds patterns without any guidance, which it then uses to complete text prompts. If you input the word "fire" into GPT-3, the program knows, based on the weights in its network, that the words "alarm" and "water" are much more likely to follow than "soil" or "forests".
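GPT-3 itself is only reachable through OpenAI's hosted API, but the underlying idea of autoregressive next-word prediction can be illustrated with GPT-2, its openly downloadable predecessor, via the Hugging Face transformers library. The sketch below is illustrative only; the prompt and the top-5 cut-off are arbitrary choices, not anything specific to GPT-3.

# Minimal sketch of autoregressive next-word prediction, using GPT-2 as a
# stand-in for GPT-3 (which is only available through OpenAI's hosted API).
# Requires: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

inputs = tokenizer("The fire", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits            # shape: (1, sequence_length, vocab_size)

# Probability distribution over the next word, given the prompt so far.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id)):>12}  p={prob.item():.3f}")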

GPT-3 has 175 billion parameters, more than 100 times as many as its predecessor and roughly ten times as many as comparable programs, and it uses them to complete a mind-boggling array of autocomplete tasks whose sharpness continues to astonish.

The data GPT-3 was trained on includes:

The English-language Wikipedia, spanning some 6 million articles, which makes up only about 0.6 percent of its training data.

Digitized books and a wide range of web text, including news articles, recipes, poetry, coding manuals, fanfiction, religious prophecy, and whatever else you can imagine.

Text of every quality uploaded to the internet, including potentially harmful material such as conspiracy theories, racist screeds, pseudoscientific textbooks, and the manifestos of mass shooters.

It's hardly comprehensive, but here's a small sample of things people have created with GPT-3:

A chatbot that talks to historical figures

Because GPT-3 has been trained on so many digitized books, it has assimilated a fair amount of knowledge relevant to specific thinkers. People have leveraged GPT-3 to make a chatbot talk like the philosopher Bertrand Russell and asked it to explain his views. Fictional characters are as accessible to GPT-3 as historical ones: check out the dialogue between Alan Turing and Claude Shannon, interrupted by Harry Potter.

Make your own quizzes

A potential boon for education, GPT-3 can help teachers as well as students. It can generate practice quizzes on almost any topic and explain the answers in detail, letting students learn anything from a simulated anyone: robotics from Elon Musk, physics from Newton, relativity from Einstein, or literature from Shakespeare.

A question-based search engine

Trained on all of Wikipedia (among much else), GPT-3 can act like Google, but for questions and answers: type a question and GPT-3 directs you to the relevant Wikipedia URL for the answer.

Answer medical queries

A medical student from the UK used GPT-3 to answer health care questions. The program not only gave the right answer but correctly explained the underlying biological mechanism.

Style transfer for text

The input text is written in a certain style, and GPT-3 can change it to another. In an example shared on Twitter, a user entered text in plain language and asked GPT-3 to rewrite it as legal language. This transformed an input such as "my landlord didn't maintain the property" into "The Defendants have permitted the real property to fall into disrepair and have failed to comply with state and local health and safety codes and regulations."
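In practice, this kind of style transfer is usually set up as a few-shot prompt: a brief instruction, one or two worked examples, then the text to rewrite. The sketch below only builds such a prompt; send_to_gpt3() is a hypothetical placeholder for whatever client call your GPT-3 access provides, not a real library function, and the worked example simply reuses the landlord sentence from the article.

# Sketch of "plain English to legal English" style transfer as a few-shot prompt.
# send_to_gpt3() is assumed/hypothetical; swap in your own API client call.
FEW_SHOT_PROMPT = """Rewrite plain English as formal legal language.

Plain: My landlord didn't maintain the property.
Legal: The Defendants have permitted the real property to fall into disrepair
and have failed to comply with state and local health and safety codes.

Plain: {plain_text}
Legal:"""

def build_prompt(plain_text: str) -> str:
    # Insert the user's sentence as the final, unanswered example.
    return FEW_SHOT_PROMPT.format(plain_text=plain_text)

# Example usage (send_to_gpt3 is a placeholder, not part of any real library):
# print(send_to_gpt3(build_prompt("My neighbour's dog barks all night.")))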

Compose its own music

Guitar tabs are shared on the web as ASCII text files, which form part of GPT-3's training dataset. Naturally, that means GPT-3 can generate music of its own after being given a few chords to start.

Write creative fiction

This is a wide-ranging area within GPT-3's skillset, but an incredibly impressive one. The best collection of the program's literary samples comes from independent researcher and writer Gwern Branwen, who has collected a trove of GPT-3's writing. It ranges from a type of one-sentence pun known as a Tom Swifty, to poetry in the style of Allen Ginsberg, T.S. Eliot, and Emily Dickinson, to Navy SEAL copypasta.

Autocomplete images, not just text

The basic GPT architecture can be retrained on pixels instead of words, allowing it to perform the same autocomplete tasks with visual data as it does with text input.

Solving language and syntax puzzles

You can show GPT-3 certain linguistic patterns (like "truck driver" becoming "driver of truck" and "chocolate cake" becoming "cake made of chocolate") and it will correctly complete any new prompts you show it. However, the technology is still at a nascent stage, and plenty of development is still to come. As computer science professor Yoav Goldberg, who has been sharing many of these examples on Twitter, puts it, such abilities are new and super exciting for AI, but they don't mean GPT-3 has mastered language.

Code generation based on text descriptions

Describe a design element or page layout of your choice in simple words and GPT-3 spits out the relevant code. People have used GPT-3 to generate code for a machine learning model just by describing the dataset and the required output. In another example, a layout generator lets you describe any layout you want, and GPT-3 generates the JSX code for you.

A world of unlimited possibilities has only just begun to open up.

Read more from the original source:

Discover Unlimited Possibilities with OpenAI's AI Tool GPT-3 - Analytics Insight

Wharton School’s Kartik Hosanagar Launches AI for Business Initiative – India West

The University of Pennsylvania's Wharton School of Business announced on June 30 the establishment of Wharton AI for Business, an initiative led by AI expert and Wharton professor Kartik Hosanagar.

The initiative boasts that it will inspire cutting-edge teaching and research in artificial intelligence, while joining with global business leaders to set a course for better understanding of this nascent discipline.

"The advances made possible by artificial intelligence hold the potential to vastly improve lives and business processes," outgoing Wharton Dean Geoff Garrett said in a statement. "Our students, faculty, and industry partners are eager to join in our AI knowledge creation efforts to more deeply explore how machine learning will impact the future for everyone."

Operating within Analytics at Wharton and led by Hosanagar, the John C. Hower Professor of Operations, Information and Decisions, AI for Business will explore artificial intelligence's applications and impact across industries, the university notes.

Hosanagar is renowned for his AI research and instruction. He is the author of the book A Human's Guide to Machine Intelligence: How Algorithms Are Shaping Our Lives and How We Can Stay in Control, and he created the first Wharton online course on AI, Artificial Intelligence for Business.

The Indian American entrepreneur has also founded or advised numerous startups in online marketing and retail, including Yodle and Milo, it said.

"Our students and professors are energized by the idea that AI is influencing nearly every aspect of humanity, and our efforts to understand it can make a difference for years to come," he said in the university report.

"I'm very excited to help lead AI for Business since the future of machine learning is happening now; there are unlimited entry points for experiential learning to explore the topic," the professor added.

The launch of AI for Business is made possible by a $5 million gift from Tao Zhang and his wife Selina Chin, the Wharton alumni couple who founded the food delivery app Dianping and run the Singapore-based Blue Hill Foundation.

Earlier this year, Hosanagar launched one of the leading online courses in the AI space, the highly popular Artificial Intelligence for Business, offered by Wharton Online, according to the university.

Hosanagar spoke to the university media outlet, Penn Today, about what he sees happening with AI in business, especially in light of the coronavirus pandemic.

He noted that AI for Business will support students, faculty and industry.

The professor said in the report that real gains in AI lie in using it for high-risk opportunities that revolve around customer satisfaction or other revenue-generating activities.

Also, what separates the AI projects that succeed from the ones that don't often has to do with the business strategies organizations follow when applying AI, he said.

Streaming platforms such as Netflix and other entertainment companies face a dilemma because social distancing has disrupted their content creation pipelines, he noted.

When asked how students will benefit from AI for Business, Hosanagar said that, in addition to having access to new AI-focused courses, students will be able to apply classroom learning to real-life business challenges through an analytics accelerator project, an AI-focused datathon, Wharton's Venture Lab Business Challenge, an industry speaker series, and AI-focused business treks.

As for the future of AI, he said, "The future is very bright, including innovative methods for data collection, content creation, and large-scale automation that are opening new opportunities for business with the use of AI applications."

Excerpt from:

Wharton School's Kartik Hosanagar Launches AI for Business Initiative - India West

Is government ready for AI? – FCW.com


Artificial intelligence is helping the Army keep its Stryker armored vehicles in fighting shape.

Army officials are using IBM's Watson AI system in combination with onboard sensor data, repair manuals and 15 years of maintenance data to predict mechanical problems before they happen. IBM and the Army's Redstone Arsenal post in Alabama demonstrated Watson's abilities on 350 Stryker vehicles during a field test that began in mid-2016.

The Army is now reviewing the results of that test to evaluate Watsons ability to assist human mechanics, and the early insights are encouraging.

The Watson AI enabled the pilot program's leaders to create the equivalent of a personalized medicine plan for each of the vehicles tested, said Sam Gordy, general manager of IBM U.S. Federal. Watson was able to tell mechanics that "you need to go replace this [part] now because if you don't, it's going to break when this vehicle is out on patrol," he added.

The Army is one of a handful of early adopters in the federal government, and several other agencies are looking into using AI, machine learning and related technologies. AI experts cite dozens of potential government uses, including cognitive chatbots that answer common questions from the public and complex AIs that search for patterns that could signal Medicaid fraud, tax cheating or criminal activity.

"There are, for a lack of a better number, a gazillion sweet spots for AI in government," said Daniel Enthoven, business development manager at Domino Data Lab, a vendor of AI and data science collaboration tools.

Still, many agencies will need to answer some difficult questions before they embrace AI, machine learning and autonomous systems. For instance, how will the agencies audit decisions made by intelligent systems? How will they gather data from often disparate sources to fuel intelligent decisions? And how will agencies manage their employees when AI systems take over tasks previously performed by humans?

Intelligence agencies are using Watson to comb through piles of data and provide predictive analysis, and the Census Bureau is considering using the supercomputer-powered AI as a first-line call center that would answer people's questions about the 2020 census, Gordy said.

A Census Bureau spokesperson added that the AI virtual assistant could improve response times and enhance caller interactions.

Using AI should save the bureau money "because you have a computer doing this instead of people," Gordy said. And if trained correctly, the system will provide more accurate answers than a group of call-center workers could.

"You train Watson once, and it understands everything," he said. "You're getting a very consistent answer, time after time after time."

For many agencies, however, it's still early in the AI adoption cycle. Use of the technology is "very, very nascent" in government, said William Eggers, executive director of Deloitte's Center for Government Insights and co-author of a recent study on AI in government. "If it was a nine-inning [baseball] game, we're probably in the first inning right now."

He added that over the next couple of years, agencies can expect to see AI-like functionality being incorporated into the software products marketed to them.

The first step for many civilian agencies appears to be using AI as a chatbot or telephone agent. Daniel Castro, vice president of the Information Technology and Innovation Foundation, said intelligent agents should be able to answer about 90 percent of the questions agencies receive, and the people asking those questions aren't likely to miss having a human response.

"It's not like people are expecting to know their IRS agents when they call them up with a question," he said.

The General Services Administration's Emerging Citizen Technology program launched an open-source pilot project in April to help federal agencies make their information available to intelligent personal assistants such as Amazon's Alexa, Google's Assistant and Microsoft's Cortana. More than two dozen agencies, including the departments of Energy, Homeland Security and Transportation, are participating.

Many vendors and other technology experts see huge opportunities for AI inside and outside government. In June, an IDC study sponsored by Salesforce predicted that AI adoption will ramp up quickly in the next four years. AI-powered customer relationship management activities will add $1.1 trillion to business revenue and create more than 800,000 jobs from 2017 to 2021, the study states.

In the federal government, using AI to automate tasks now performed by employees would save at least 96.7 million working hours a year, a cost savings of $3.3 billion, according to the Deloitte study. Based on the high end of Deloitte's estimates, AI adoption could save as many as 1.2 billion working hours and $41.1 billion every year.

AI-based applications "can reduce backlogs, cut costs, overcome resource constraints, free workers from mundane tasks, improve the accuracy of projections, inject intelligence into scores of processes and systems, and handle many other tasks humans can't easily do on our own, such as sifting through millions of documents in real time for the most relevant content," the report states.

Although some might fear a robot takeover, Eggers said federal workers should not worry about their jobs in the near term. While there is likely to be pressure from lawmakers to use AI to reduce the government's headcount, agencies should look at AI as a way to supplement employees' work and allow them to focus on more creative and difficult tasks, he added.

See the original post:

Is government ready for AI? - FCW.com

How Toyota’s New Venture Fund Will Tackle AI Investments – R & D Magazine

Breakthroughs in robotics and artificial intelligence are poised to revolutionize a diverse array of industries.

Japanese carmaker Toyota hopes to be at the forefront of these innovations, which is why it has launched a new venture capital fund.

Toyota A.I. Ventures, a new subsidiary of the Toyota Research Institute (TRI), will use an initial fund of $100 million to collaborate with entrepreneurs from all over the world, in an effort to improve the quality of human life through artificial intelligence.

"The fund came together roughly over the last year or so, with the thinking being there's a tremendous amount of innovation happening around the world in startup companies. We wanted to tap into that cauldron of innovation that's bubbling out there," explained Jim Adler, the managing director of the venture fund, in an interview with R&D Magazine.

Toyota A.I. Ventures' focus is not research funding, said Adler. Instead, the venture will work with these companies at an early stage and offer a founder-friendly environment that won't impact the startups' ability to work with other investors. They will also offer assistance with technology and product expertise to validate that the product being built is for the right market, and give these entrepreneurs access to Toyota's global network of affiliates and partners to ensure a successful market launch.

Adler's team will specifically look at startups focusing on autonomous mobility, robotics, data analytics, and cloud computing.

Three startups specializing in these fields were already part of the venture capital organization when it launched.

These three companies are trying to tackle some of the less well-defined areas artificial intelligence could help with: one product suite can help senior citizens become more accustomed to operating in a new digital ecosystem, SLAMCore's software can help robots and other vehicles quickly scan and adjust to new environments, and Nauto's hardware can enhance safety levels when driving.

Opportunities for A.I.

"One of the areas I think is fascinating for A.I. is understanding how certain actors on the road are socially interacting. It's not just cars, of course, but a mix of the obstacles that get in the way," he said.

There's an additional level of complexity when focusing on the impact of the different local customs and social contracts in place in different regions. In downtown San Francisco, for example, pedestrians pretty much rule the road, whereas it's taxicabs in New York City.

How will an autonomous vehicle grasp what safe driving looks like? "It's not just the rules of the road, because sometimes if you follow the rules of the road exactly you can become less safe," said Adler.

The best way to accomplish this goal is to perform regression tests to create systems that not only understand dangerous situations related to driving behavior but also ensure these novel systems are in fact safe themselves, explained Adler. This would mean each iteration of the system is an improvement over the last one.

"We've tested cars for decades, but data is critical for understanding these unpredictable situations and doing constant quality control," he added.

Strategies for new breakthroughs

Artificial intelligence is still a burgeoning field, but a crop of startups is attempting to build its own groundbreaking algorithms, and well-established companies are making forays into the field too.

Both startups and established companies like Toyota are equipped to tackle research challenges associated with A.I., but in different ways, said Adler.

"There are advantages to be garnered from both speed and scale. Startups are great at running multiple experiments, with an incredible ability to test things out," Adler said.

"We must connect to that ecosystem of innovation, because funds like ours are looking to be fast and connected to that speed within the industry."

"Artificial intelligence technology will advance regardless of intervention; there is really no stopping it," said Adler. The key will be advancing it in a way that will most benefit society.

"It's so important that the wisdom keeps up with the technology so it could fortify humanity in positive ways. It's great to have these discussions earlier rather than later," he concluded.

Go here to read the rest:

How Toyota's New Venture Fund Will Tackle AI Investments - R & D Magazine

Ai | Poetry Foundation

Ai is a poet noted for her uncompromising poetic vision and bleak dramatic monologues, which give voice to marginalized, often poor and abused speakers. Though born Florence Anthony, she legally changed her name to Ai, which means "love" in Japanese. She has said that her given name reflects "a scandalous affair my mother had with a Japanese man she met at a streetcar stop" and that she has no wish to be identified for all eternity with a man she never knew. Ai's awareness of her own mixed-race heritage (she self-identifies as Japanese, Choctaw-Chickasaw, Black, Irish, Southern Cheyenne, and Comanche), as well as her strong feminist bent, shapes her poetry, which is often brutal and direct in its subject matter. In the volumes of verse she published after her first collection, Cruelty (1973), Ai provoked both controversy and praise for her stark monologues and gruesome first-person accounts of non-normative behavior. Dubbed "all woman, all human" by confessional poet Anne Sexton, Ai has also been praised by the Times Literary Supplement for capturing "the cruelty of intimate relationships and the delights of perverse spontaneity, e.g. the joy a mother gets from beating her child." Alicia Ostriker countered Sexton's summation of Ai, writing: "All woman, all human; she is hardly that. She is more like a bad dream of Woody Allen's, or the inside story of some Swinburnean Dolorosa, or the vagina-dentata itself starting to talk. Woman, in Ai's embodiment, wants sex. She knows about death and can kill animals and people. She is hard as dirt. Her realities, very small ones, are so intolerable that we fashion female myths to express our fear of her. She, however, lives the hard life below our myths."

Ai explained her use of the dramatic monologue as stemming from an early realization that the first-person voice was always the stronger voice to use when writing. Her poems depict individuals that Duane Ackerson, writing in Contemporary Women Poets, characterized as people seeking "transformation, a rough sort of salvation, through violent acts." The speakers in her poems are struggling individuals, usually women but occasionally men, isolated by poverty, by small-town life, or by life on a remote farm. Killing Floor (1978), the volume that followed Cruelty, includes a poem called "The Kid", spoken in the voice of a boy who has just murdered his family. Sin (1986) contains more complex dramatic monologues as Ai assumes actual personae, from Joe McCarthy to the Kennedy brothers. Ai's characters tend to speak in a flat demotic, stripped of nuance or emotion. Poet and critic Rachel Hadas has noted that although virtually all the poems present themselves as spoken by a particular character, Ai makes little attempt to capture "individual styles of diction [or] personal vocabularies." For Hadas, however, this makes the poems all the more striking, as the stripped-down diction conveys an underlying, almost biblical indignation, not, at times, without compassion, at human misuses of power and the corrupting energies of various human appetites.

Fate (1991) and Greed (1993), like Sin before them, contain monologues that dramatize public figures. Readers confront the inner worlds of former F.B.I. director J. Edgar Hoover, missing-and-presumed-dead union leader Jimmy Hoffa, musician Elvis Presley, and actor James Dean as voices from beyond the grave who yet remain out of sync with social or ethical norms. Noting that Ai reinvents each of her subjects within her verse, Ackerson added that, through each monologue, what these individuals say on returning after death expresses more about the American psyche than about the real figures. Vice: New and Selected Poems (1999) contained work from Ai's previous five books as well as 18 new poems. It was awarded the National Book Award for Poetry. Ai's next book, Dread (2003), was likewise praised for its searing and honest treatment of, according to a Publishers Weekly reviewer, "violent or baroquely sexual life stories." In the New York Times Book Review, Vijay Seshadri wrote that Dread has "the characteristic moral strength that makes Ai a necessary poet." Aiming her poetic barbs directly at prejudices and societal ills of all types, Ai has been outspoken on the subject of race, saying: "People whose concept of themselves is largely dependent on their racial identity and superiority feel threatened by a multiracial person. The insistence that one must align oneself with this or that race is basically racist. And the notion that without a racial identity a person can't have any identity perpetuates racism. ... I wish I could say that race isn't important. But it is. More than ever, it is a medium of exchange, the coin of the realm with which one buys one's share of jobs and social position. This is a fact which I have faced and must ultimately transcend. If this transcendence were less complex, less individual, it would lose its holiness."

In addition to the National Book Award, Ai's work was awarded an American Book Award from the Before Columbus Foundation, for Sin, and the Lamont Poetry Award of the Academy of American Poets, for Killing Floor. She received grants from the Guggenheim Foundation, the Bunting Fellowship Program at Radcliffe College and the National Endowment for the Arts. She taught at Oklahoma State University. She died in 2010.

Read this article:

Ai | Poetry Foundation

Computer vision: Why it's hard to compare AI and human perception – TechTalks

This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence.

Human-level performance. Human-level accuracy. Those are terms you hear a lot from companies developing artificial intelligence systems, whether it's facial recognition, object detection, or question answering. And to their credit, the recent years have seen many great products powered by AI algorithms, mostly thanks to advances in machine learning and deep learning.

But many of these comparisons only take into account the end-result of testing the deep learning algorithms on limited data sets. This approach can create false expectations about AI systems and yield dangerous results when they are entrusted with critical tasks.

In a recent study, a group of researchers from various German organizations and universities has highlighted the challenges of evaluating the performance of deep learning in processing visual data. In their paper, titled "The Notorious Difficulty of Comparing Human and Machine Perception," the researchers highlight the problems with current methods that compare deep neural networks and the human vision system.

In their research, the scientists conducted a series of experiments that dig beneath the surface of deep learning results and compare them to the workings of the human vision system. Their findings are a reminder that we must be cautious when comparing AI to humans, even if it shows equal or better performance on the same task.

In the seemingly endless quest to reconstruct human perception, the field that has become known as computer vision, deep learning has so far yielded the most favorable results. Convolutional neural networks (CNN), an architecture often used in computer vision deep learning algorithms, are accomplishing tasks that were extremely difficult with traditional software.

However, comparing neural networks to human perception remains a challenge. And this is partly because we still have a lot to learn about the human vision system and the human brain in general. The complex workings of deep learning systems also compound the problem. Deep neural networks work in very complicated ways that often confound their own creators.

In recent years, a body of research has tried to evaluate the inner workings of neural networks and their robustness in handling real-world situations. "Despite a multitude of studies, comparing human and machine perception is not straightforward," the German researchers write in their paper.

In their study, the scientists focused on three areas to gauge how humans and deep neural networks process visual data.

The first test involves contour detection. In this experiment, both humans and AI participants must say whether an image contains a closed contour or not. The goal here is to understand whether deep learning algorithms can learn the concept of closed and open shapes, and whether they can detect them under various conditions.

"For humans, a closed contour flanked by many open contours perceptually stands out. In contrast, detecting closed contours might be difficult for DNNs as they would presumably require a long-range contour integration," the researchers write.

For the experiment, the scientists used ResNet-50, a popular convolutional neural network developed by AI researchers at Microsoft. They used transfer learning to fine-tune the model on 14,000 images of closed and open contours.

They then tested the AI on various examples that resembled the training data and gradually shifted in other directions. The initial findings showed that a well-trained neural network seems to grasp the idea of a closed contour. Even though the network was trained on a dataset that only contained shapes with straight lines, it also performed well on curved lines.
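For readers who want a concrete picture of what this kind of transfer learning looks like, the sketch below fine-tunes an ImageNet-pretrained ResNet-50 as a two-class (open vs. closed contour) classifier in PyTorch. The folder layout, batch size and learning rate are illustrative assumptions, not details taken from the paper.

# Minimal sketch of fine-tuning a pretrained ResNet-50 as a binary
# (open vs. closed contour) classifier. Paths and hyperparameters are illustrative.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# Expects an ImageFolder layout: contours/train/{open,closed}/*.png
train_set = datasets.ImageFolder("contours/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet50(pretrained=True)        # transfer learning: start from ImageNet weights
model.fc = nn.Linear(model.fc.in_features, 2)   # replace the head with a 2-class classifier

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:                   # one pass shown; train several epochs in practice
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()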

"These results suggest that our model did, in fact, learn the concept of open and closed contours and that it performs a similar contour integration-like process as humans," the scientists write.

However, further investigation showed that other changes that didn't affect human performance degraded the accuracy of the AI model's results. For instance, changing the color and width of the lines caused a sudden drop in the accuracy of the deep learning model. The model also seemed to struggle with detecting shapes when they became larger than a certain size.

The neural network was also very sensitive to adversarial perturbations, carefully crafted changes that are imperceptible to the human eye but cause disruption in the behavior of machine learning systems.
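The paper does not spell out which attack was used, but the fast gradient sign method (FGSM) is the textbook way such imperceptible perturbations are crafted and serves as a reasonable illustration: nudge every pixel a tiny step in whichever direction increases the model's loss.

# Sketch of the fast gradient sign method (FGSM), a standard way to craft
# small, human-imperceptible adversarial perturbations. Illustrative only;
# the paper does not say which attack it used.
import torch

def fgsm_perturb(model, image, label, epsilon=0.01):
    """image: 1xCxHxW tensor in [0, 1]; label: 1-element LongTensor with the true class."""
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()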

To further investigate the decision-making process of the AI, the scientists used a Bag-of-Feature network, a technique that tries to localize the bits of data that contribute to the decision of a deep learning model. The analysis proved that "there do exist local features such as an endpoint in conjunction with a short edge that can often give away the correct class label," the researchers found.

The second experiment tested the abilities of deep learning algorithms in abstract visual reasoning. The data used for the experiment is based on the Synthetic Visual Reasoning Test (SVRT), in which the AI must answer questions that require understanding of the relations between different shapes in the picture. The tests include same-different tasks (e.g., are two shapes in a picture identical?) and spatial tasks (e.g., is the smaller shape in the center of the larger shape?). A human observer would easily solve these problems.

For their experiment, the researchers again used ResNet-50 and tested how it performed with training datasets of different sizes. The results show that a pretrained model fine-tuned on 28,000 samples performs well on both same-different and spatial tasks. (Previous experiments had trained a very small neural network on a million images.) The performance of the AI dropped as the researchers reduced the number of training examples, but the degradation on same-different tasks was faster.

"Same-different tasks require more training samples than spatial reasoning tasks," the researchers write, adding that "this cannot be taken as evidence for systematic differences between feed-forward neural networks and the human visual system."

The researchers note that the human visual system is naturally pre-trained on large amounts of abstract visual reasoning tasks. This makes it unfair to test the deep learning model on a low-data regime, and it is almost impossible to draw solid conclusions about differences in the internal information processing of humans and AI.

"It might very well be that the human visual system trained from scratch on the two types of tasks would exhibit a similar difference in sample efficiency as a ResNet-50," the researchers write.

The recognition gap is one of the most interesting tests of visual systems. Imagine being shown only an extreme close-up of a photo: could you tell what it depicts?

Zoomed out, the same image is unmistakably a cat. Shown a close-up of another part of the image (perhaps the ear), you might have had a greater chance of guessing what it was. We humans need to see a certain amount of overall shape and pattern to recognize an object in an image. The more you zoom in, the more features you remove, and the harder it becomes to tell what the image shows.

Deep learning systems also operate on features, but they work in subtler ways. Neural networks sometimes find minuscule features that are imperceptible to the human eye but remain detectable even when you zoom in very closely.

In their final experiment, the researchers tried to measure the recognition gap of deep neural networks by gradually zooming in images until the accuracy of the AI model started to degrade considerably.
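As a rough sketch of that measurement idea (not the paper's actual protocol), one can crop progressively smaller patches around a chosen point, rescale each patch to the network's input size, and record the patch size at which confidence in the true class collapses. The crop schedule, input size and threshold below are illustrative assumptions.

# Rough sketch of measuring a "recognition gap": crop ever-smaller patches,
# rescale them to the network's input size, and record when confidence in the
# true class collapses. Crop schedule and threshold are illustrative.
import torch
import torch.nn.functional as F

def recognition_gap(model, image, true_class, center, threshold=0.5):
    """image: 1xCxHxW tensor in [0, 1]; center: (row, col) of the patch centre."""
    _, _, h, w = image.shape
    size = min(h, w)
    while size >= 16:
        r, c = center
        top = max(0, min(h - size, r - size // 2))
        left = max(0, min(w - size, c - size // 2))
        patch = image[:, :, top:top + size, left:left + size]
        patch = F.interpolate(patch, size=(224, 224), mode="bilinear", align_corners=False)
        prob = torch.softmax(model(patch), dim=1)[0, true_class].item()
        if prob < threshold:
            return size              # patch size at which recognition first fails
        size = int(size * 0.8)       # "zoom in" by shrinking the crop 20% each step
    return size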

Previous experiments show a large difference between the image recognition gap in humans and deep neural networks. But in their paper, the researchers point out that most previous tests on neural network recognition gaps are based on human-selected image patches. These patches favor the human vision system.

When they tested their deep learning models on machine-selected patches, the researchers obtained results that showed a similar gap in humans and AI.

"These results highlight the importance of testing humans and machines on the exact same footing and of avoiding a human bias in the experiment design," the researchers write. "All conditions, instructions and procedures should be as close as possible between humans and machines in order to ensure that all observed differences are due to inherently different decision strategies rather than differences in the testing procedure."

As our AI systems become more complex, we will have to develop more complex methods to test them. Previous work in the field shows that many of the popular benchmarks used to measure the accuracy of computer vision systems are misleading. The work by the German researchers is one of many efforts that attempt to measure artificial intelligence and better quantify the differences between AI and human intelligence. And they draw conclusions that can provide directions for future AI research.

"The overarching challenge in comparison studies between humans and machines seems to be the strong internal human interpretation bias," the researchers write. "Appropriate analysis tools and extensive cross checks such as variations in the network architecture, alignment of experimental procedures, generalization tests, adversarial examples and tests with constrained networks help rationalizing the interpretation of findings and put this internal bias into perspective. All in all, care has to be taken to not impose our human systematic bias when comparing human and machine perception."

Original post:

Computer vision: Why it's hard to compare AI and human perception - TechTalks

Microsoft 2017 annual report lists AI as top priority – CNBC.com – CNBC

Mobile is gone -- not a surprise, given the company's struggles with its Windows Phone operating system and its acquisition of Nokia, which Microsoft essentially declared worthless when it wrote down the total value of that acquisition in 2015.

Cloud computing, including fast-growing products like Office 365 and the Azure public cloud are still there. Now AI is there with it, too.

Microsoft has acquired a few AI startups, like Maluuba and Swiftkey, since Nadella took over, and has established a formal AI and Research group. That team "focuses on our AI development and other forward-looking research and development efforts spanning infrastructure, services, applications, and search," the annual report says.

Microsoft's vision reset comes after Sundar Pichai, CEO of Alphabet's Google, began saying that the world is shifting from being mobile-first to AI-first. Facebook has also invested in both long-term AI research and AI product enhancements alongside Microsoft and Alphabet.

Read more:

Microsoft 2017 annual report lists AI as top priority - CNBC.com - CNBC

AI is here to save your career, not destroy it – VentureBeat

Imagine: humans waging an epic battle against technology, with human intelligence inevitably subjugated by artificial overlords. Plenty of folks would line up with front-row tickets and popcorn in hand. But it's also the very real manifestation of a universal fear: jobs relegated to machines, livelihoods handed over to bots.

But when we take a closer look at bots and other forms of artificial intelligence, our worst fears are a far cry from the truth. Weve built bots to help us succeed. And instead of viewing them as our grand reckoning, we should view AI and bots as tools to exponentially expand our human capabilities in and out of the workplace. Yes, bots can make us more human in our daily lives.

Those who use bots as superhuman digital assistants will find the most success. It'll be humans to the bot-th power, rather than humans versus bots.

Much of our understanding of AI and the future is rooted in misconception. We're trepidatious toward the future. It's a valid and human response that shouldn't go ignored. But the truth is, the future is already here.

Anyone who's tagged a photo of a friend on Facebook has used AI. But do people think that way? While 86 percent of people say they're interested in trying AI tools, 63 percent don't realize they're already using AI.

Machines are much better at quickly surfacing the most relevant information the internet holds. It's on us humans to take that knowledge and make the most informed decisions. But finding information isn't all our bot friends can help us with; they can do much more than just answer direct questions.

Soon, bots will work in the background on our behalf and initiate a conversation when something interesting has happened. We'll be prompted with a notable result, and then we'll make the choice to move forward.

Its simple, but so powerful. As technology should be.

Computers now have the ability to do what we once thought only human intelligence could handle. In the near future, AI is going to feel less artificial and more intelligent.

Humans learn from example and experience. So do machines. Machine learning allows you to tell a system what you want, not how to do it.

Once something a few PhDs wrote about, machine learning is now something millions of people benefit from. Everything from predictive learning and lead scoring to content recommendations and email optimization will get much easier for marketers and salespeople alike.
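As a toy illustration of that "show it examples, not rules" idea, here is a minimal lead-scoring sketch using scikit-learn. The features, numbers and labels are invented purely for illustration; a real system would learn from your own CRM data.

# Toy lead-scoring sketch: the system learns what a converting lead looks like
# from labeled examples rather than hand-written rules. Data is invented.
from sklearn.linear_model import LogisticRegression

# Each row: [pages_viewed, emails_opened, company_size]; label: 1 = became a customer
X = [[12, 5, 200], [2, 0, 15], [8, 3, 50], [1, 1, 5], [20, 7, 500], [3, 0, 10]]
y = [1, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(X, y)

new_lead = [[10, 4, 120]]
score = model.predict_proba(new_lead)[0, 1]   # probability the lead converts
print(f"Lead score: {score:.2f}")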

Already, 40 percent of people don't care if they're served by an AI tool or a human, so long as the question gets answered. Only 26 percent say the same for more complicated customer requests. But how those humans will best serve their customers will take (you guessed it) bots.

If you want your employees and business to benefit from all this machine learning, you'll need to invest in getting the data in one centralized place. After all, the data is what gives machine learning the "learning" part. There's no learning without the data.

Not only is AI the future of marketing and sales, it's the future of the inbound movement. AI and bots allow you to provide highly personalized, helpful, and human experiences for your customers. It may not be a summer blockbuster fit for theaters, but AI and bots sure feel like they're fit for businesses.

Visit link:

AI is here to save your career, not destroy it - VentureBeat

Rt Hon Jacinda Ardern – Prime Minister of New Zealand – NZ Labour Party

Jacinda Ardern is the Prime Minister of New Zealand and the Leader of the New Zealand Labour Party.

Born in Hamilton, Jacinda grew up in rural Waikato and attended Morrinsville College, before graduating from the University of Waikato with a Bachelor of Communications Studies in International Relations and Professional Communication. She joined the New Zealand Labour Party at age 18.

After university, Jacinda worked in a variety of roles across government and business, including as an advisor to Prime Minister Helen Clark, and in the Government Cabinet Office in London. She was elected to Parliament in 2008, becoming the MP for the Mt Albert electorate in 2017 and the Leader of the Labour Party later that year. She became Prime Minister in October 2017, and in 2018 she gave birth to her daughter, Neve.

During her time in Parliament, Jacinda has been a strong advocate for children, women and the right of every New Zealander to have meaningful work. She was responsible for the landmark Child Poverty Reduction Act, and has taken a lead on climate change through initiatives like the establishment of the Zero Carbon Act and the ban on future offshore oil and gas exploration in New Zealand.

As well as being Prime Minister, Jacinda holds the roles of Minister for National Security and Intelligence and Minister for Child Poverty Reduction, an issue particularly close to her heart. She is also the Minister Responsible for Ministerial Services and Associate Minister for Arts, Culture and Heritage.

See original here:

Rt Hon Jacinda Ardern - Prime Minister of New Zealand - NZ Labour Party

New Zealand will continue to cooperate with more assertive China …

New Zealand will continue to cooperate on shared interests with China, even as tensions increase in the region and China grows more assertive in the pursuit of its interests, Jacinda Ardern has said.

Speaking to the China Business Summit in Auckland on Monday, the prime minister said she was planning a trip to China to seize new opportunities for dialogue, support the trade relationship, and further cooperate on the climate crisis.

"Even as China becomes more assertive in the pursuit of its interests, there are still shared interests on which we can and should cooperate," she said.

The prime minister's speech comes during a tense period for the Indo-Pacific, with western allies concerned about China's push for influence, particularly its proposed regional Pacific security deal. Ardern called for Beijing to respect and support the institutions that she said undergirded regional and international peace and stability.

Both New Zealand and China had been major beneficiaries of relative peace, stability and prosperity, Ardern said, and "the rules, norms and institutions, such as the United Nations, that underlie that stability and prosperity remain indispensable" but are also under threat.

"We see how much we have to lose should the international rules-based system falter," she said.

The speech hewed closely to the foreign policy line of Ardern's second-term government, which has emphasised respect, consistency, and predictability in dealings with China: essentially, that the government will continue to cooperate and work closely with China on mutually beneficial matters, particularly trade, while calling out differences, typically on foreign policy and human rights.

That balancing act has, at times, been a difficult one to manage. New Zealand remains highly dependent on China for trade: the country is its largest trading partner, accounting for 23% of total trade and 32% of goods exports.

But as China's economic importance to New Zealand has grown, ideological differences with Beijing have become increasingly stark, with reports of severe human rights abuses in Xinjiang, Beijing's push into the Pacific and South China Sea, and the erosion of democracy in Hong Kong.

"In response to increasing tensions or risks in the region, be they in the Pacific, the South China Sea, or the Taiwan Strait, New Zealand's position remains consistent: we call for adherence to international rules and norms; for diplomacy, de-escalation and dialogue rather than threats, force and coercion," Ardern said.

"Our differences need not define us. But we cannot ignore them. This will mean continuing to speak out on some issues, sometimes with others and sometimes alone," she said.

"We have done this recently on issues in the Pacific. We also have consistently expressed our concerns about economic coercion, human rights, Xinjiang, and Hong Kong."

One of the prime minister's primary examples of faltering institutions and norms was Putin's war on Ukraine, and she called on China to be clear that it does not support the Russian invasion and to use its access and influence to help bring an end to the conflict.

Over the past year, the Pacific has become an arena for broader geopolitical competition: with increasing interest from China, the US has also been looking to beef up its connections and alliances in the region.

Following Ardern's speech on Monday, the commander of the US military in the Pacific said he wanted to expand and strengthen its ties with New Zealand.

Adm John Aquilino, head of the US Indo-Pacific Command, was in Wellington to meet top New Zealand defence force and government officials.

"Our partnership runs very deep," Aquilino said. "We are doing many things together to continue to ensure peace and prosperity for both of our nations and for all the nations in the region."

He said the leadership of Australia and New Zealand in the Pacific was critically important.

In June, the US signed Partners in the Blue Pacific, a cooperation agreement between Australia, Japan, New Zealand, the UK, and US.

"The United States has been a Pacific nation our entire life. We will continue to operate in the Pacific no matter what else you might hear," Aquilino said.

Read the original:

New Zealand will continue to cooperate with more assertive China ...

Jacinda Ardern to travel to New York for UN meeting later this month – New Zealand Herald

Prime Minister Jacinda Ardern on a previous visit to New York.

Prime Minister Jacinda Ardern will fly to New York City later this month for an annual meeting of world leaders at the United Nations, the UN General Assembly leaders' week.

The meeting was previously an annual fixture for New Zealand prime ministers, but Ardern has not attended in person since 2019.

This is not her first visit to New York since the pandemic, however - she visited earlier this year as part of her US trade mission.

Ardern will fly to the United States on Air New Zealand's inaugural direct flight between Auckland and New York's JFK Airport.

"I look forward to visiting the United States to meet with counterparts, and taking the opportunity to further promote New Zealand's reconnecting plan," Ardern said.

"It's an important opportunity to set out New Zealand's continued commitment to the multilateral system and international rules-based order. As the world continues to grapple with Covid-19, climate change, the Ukraine and geopolitical tensions, international co-operation is more important than ever," Ardern said.

While in the United States, she will co-host a Christchurch Call to Action Leaders' Summit, with French President Emmanuel Macron and participate in events to promote trade, investment, and tourism.

"I look forward to meeting with heads of state and global tech leaders to continue our important work to eliminate terrorist and violent extremist content online," Ardern said.

Leaders typically use the UN General Assembly as an opportunity to meet one-on-one on the sidelines of the main event.

Ardern has a number of these planned, although she has not announced with whom she will be meeting.

She will also deliver New Zealand's national statement at the General Assembly.

Ardern will also meet with the Motion Picture Association of America, a trade body representing the film industry, to promote New Zealand as a film destination.

She will also attend the launch of the Invest New Zealand campaign "Do Good, Do Well" alongside major US investment funds.

Ardern said Air New Zealand's new direct flight to New York was "an exciting step in reconnecting New Zealand with the world, and will bring a welcome boost for our tourism and other businesses".

Here is the original post:

Jacinda Ardern to travel to New York for UN meeting later this month - New Zealand Herald