
Category Archives: Ai

NeurIPS 2019: China's WeBank, Mila, and Tencent Partner on AI Federated Learning to Protect Data Privacy – Business Wire

Posted: December 18, 2019 at 8:44 pm

VANCOUVER, British Columbia--(BUSINESS WIRE)--Top AI conference NeurIPS 2019 was held in Vancouver from December 8-14. Attending experts were excited about a new research direction named federated learning (FL). Professor Yoshua Bengio, A.M. Turing Award winner, founder of the world's top deep learning research facility Mila-Quebec Artificial Intelligence Institute and one of the "three musketeers of deep learning," said, "In terms of better training neural networks, federated learning is at the forefront of research and will have an important impact on business."

Currently, data silos and privacy protection are two big challenges for AI. As an encrypted distributed machine learning framework, FL can tackle both problems by allowing different parties to build models collaboratively without the need to reveal their data. The method helps to advance AI modeling while protecting data and privacy.
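
To make the collaborative-training idea concrete, here is a minimal federated-averaging sketch in Python. It illustrates only the general FL pattern: two parties fit a shared model by exchanging model weights rather than raw data. It is not WeBank's FATE API, and every name and number in it is invented for the example.

```python
# Illustrative federated-averaging (FedAvg) sketch; not WeBank's FATE API.
# Two parties fit a shared 1-D linear model y = w * x without exchanging raw data:
# each takes a gradient step locally, and only the updated weights are averaged.
import random

def local_step(w, data, lr=0.01):
    """One gradient-descent step on a party's private (x, y) pairs."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

random.seed(0)
true_w = 3.0
# Each party holds its own private dataset; these records never leave the party.
party_a = [(x, true_w * x + random.gauss(0, 0.1)) for x in [1, 2, 3, 4]]
party_b = [(x, true_w * x + random.gauss(0, 0.1)) for x in [5, 6, 7, 8]]

w_global = 0.0
for _ in range(200):
    # Each party trains locally on its own data ...
    w_a = local_step(w_global, party_a)
    w_b = local_step(w_global, party_b)
    # ... and only the model weights are aggregated by the coordinator.
    w_global = (w_a + w_b) / 2

print(f"learned w = {w_global:.2f} (true w = {true_w})")
```

In a production framework the exchanged updates would also be encrypted or otherwise protected, which is the part FATE and similar systems add on top of this basic loop.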

China's digital bank WeBank is a leading research facility in federated learning. At NeurIPS 2019, WeBank co-organized the FL workshop with Google, CMU, and NTU, with 400 scholars joining in the discussion.

During the WeBank AI Night event, WeBank announced two strategic partnerships with Mila and the leading cloud computing platform Tencent Cloud. The cooperation will focus on further developing federated learning, based on WeBank's real-world experience in finance and fintech and adhering to Mila's core philosophy of "AI for Humanity," Tencent's "AI for Good" and WeBank's "Make Banking Better for All" to create safe, inclusive AI applications.

Professor Qiang Yang, WeBank's chief AI officer, explained that large-scale AI application relies on big data, which is scattered across many different organizations, and that directly merging that data would violate privacy regulations. FL is a compliant method that strictly follows laws and regulations, and it is now used in fintech, healthcare, smart cities, and other industrial applications.

To lower the barrier to adopting federated learning, WeBank launched the world's first industrial open-source FL framework, Federated AI Technology Enabler (FATE), in February 2019. This gives any companies wishing to work together a ready-to-use FL framework. Partner Tencent Cloud and tech giants including Huawei and JD.com have all joined the ecosystem. The company is also leading international IEEE standards work on the technology.

Founded in 2014, WeBank is the world's leading digital bank operating solely online, now serving over 170 million individual customers and over 500,000 small and micro-sized enterprises.

View original post here:

NeurIPS 2019: China's WeBank, Mila, and Tencent Partner on AI Federated Learning to Protect Data Privacy - Business Wire


Researchers were about to solve AI’s black box problem, then the lawyers got involved – The Next Web

Posted: at 8:44 pm

AI has a black box problem. We cram data in one side of a machine learning system and we get results out the other, but we're often unsure what happens in the middle. Researchers and developers nearly had the issue licked, with explainable algorithms and transparent AI trending over the past few years. Then came the lawyers.

Black box AI isn't as complex as some experts make it out to be. Imagine you have 1,000,000 different spices and 1,000,000 different herbs, and you only have a couple of hours to crack Kentucky Fried Chicken's secret recipe. You're pretty sure you have all the ingredients, but you're not sure which eleven herbs and spices you should use. You don't have time to guess, and it would take billions of years or more to manually try every combination. This problem can't realistically be solved using brute force, at least not under normal kitchen paradigms.

But imagine if you had a magic chicken fryer that did all the work for you in seconds. You could pour all your ingredients into it and then give it a piece of KFC chicken to compare against. Since a chicken fryer can't taste chicken, it would rely on your taste buds to confirm whether it'd managed to recreate the Colonel's chicken or not.

It spits out a drumstick, you take a bite, and you tell the fryer whether the piece you're eating now tastes more or less like KFC's than the last one you tried. The fryer goes back to work, tries more combinations, and keeps going until you tell it to stop because it has gotten the recipe right.

That's basically how black box AI works. You have no idea how the magic fryer came up with the recipe; maybe it used 5 herbs and 6 spices, maybe it used 32 herbs and 0 spices. It doesn't matter. All we care about is using AI as a way to do something humans could do, but much faster.
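
For readers who want the analogy in code, below is a toy sketch of that feedback loop: a search procedure that never sees the secret recipe and only hears whether the latest attempt "tastes" closer than the current best. Every name and number here is made up for illustration; real black box models are vastly larger, but the opacity is the same.

```python
# Toy "magic fryer" loop: improve a hidden recipe using only better/worse feedback.
import random

random.seed(1)
N_INGREDIENTS = 20
secret = [random.randint(0, 1) for _ in range(N_INGREDIENTS)]  # the hidden recipe

def taste(recipe):
    """Stand-in for the human taster: how many ingredients match the secret."""
    return sum(a == b for a, b in zip(recipe, secret))

current = [random.randint(0, 1) for _ in range(N_INGREDIENTS)]
for attempt in range(1000):
    candidate = current[:]
    flip = random.randrange(N_INGREDIENTS)
    candidate[flip] = 1 - candidate[flip]        # tweak one ingredient
    if taste(candidate) >= taste(current):       # "tastes more like KFC" -> keep it
        current = candidate
    if taste(current) == N_INGREDIENTS:
        break

print(f"matched {taste(current)} of {N_INGREDIENTS} ingredients after {attempt + 1} attempts")
```

At the end you have a recipe that works, but nothing in the loop tells you why those particular ingredients were kept, which is exactly the black box complaint.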

This is fine when we're using black box AI to determine whether something is a hotdog or not, or when Instagram uses it to determine if you're about to post something that might be offensive. It's not fine when we can't explain why an AI sentenced a black man with no priors to more time than a white man with a criminal history for the same offense.

The answer is transparency. If there is no black box, then we can tell where things went wrong. If our AI sentences black people to longer prison terms than white people because it's over-reliant on external sentencing guidance, we can point to that problem and fix it in the system.

But there's a huge downside to transparency: if the world can figure out how your AI works, it can figure out how to make it work without you. The companies making money off of black box AI, especially those like Palantir, Facebook, Amazon, and Google that have managed to entrench biased AI within government systems, don't want to open the black box any more than they want their competitors to have access to their research. Transparency is expensive and often exposes just how unethical some companies' use of AI is.

As legal expert Andrew Burt recently wrote in Harvard Business Review:

To start, companies attempting to utilize artificial intelligence need to recognize that there are costs associated with transparency. This is not, of course, to suggest that transparency isn't worth achieving, simply that it also poses downsides that need to be fully understood. These costs should be incorporated into a broader risk model that governs how to engage with explainable models and the extent to which information about the model is available to others.

The AI gold rush of the 2010s led to a Wild West situation where companies can package their AI any way they want, call it whatever they want, and sell it in the wild without regulation or oversight. Companies that have made millions or billions selling products and services related to biased, black box AI have managed to entrench themselves in the same position as the health insurance and fossil fuel industries. Their very existence is threatened by the idea that they may be regulated against doing harm to the greater good.

So will transparency fix the problem? Simply put: no. The lawyers will make sure we never know any more about why a commercial system is biased, even if we develop fully transparent algorithms, than if these systems remain in black boxes. As Axios' Kaveh Waddell recently wrote:

Companies are tightening access to their AI algorithms, invoking intellectual property protections to avoid sharing details about how their systems arrive at critical decisions.

The calculus for the AI industry is the same as for the private healthcare industry in the US. Extricating biased black box AI from the world would probably put dozens of companies out of business and likely result in hundreds of billions of dollars lost. The US industrial law enforcement complex runs on black box AI; we're unlikely to see the government end its deals with Microsoft, Palantir, and Amazon any time soon. So long as the lawmakers are content to profit from the use of biased, black box AI, it'll remain embedded in society.

And we also can't rely on businesses themselves to end the practice. Our desire to extricate black box systems simply means companies can't blame the algorithm anymore, so they'll hide their work entirely. With transparent AI, we'll get opaque developers. Instead of choosing not to develop dual-use or potentially dangerous AI, they'll simply lawyer up.

As Burt puts it in his Harvard Business Review article:

Indeed, this is exactly why lawyers operate under legal privilege, which gives the information they gather a protected status, incentivizing clients to fully understand their risks rather than to hide any potential wrongdoings. In cybersecurity, for example, lawyers have become so involved that it's common for legal departments to manage risk assessments and even incident-response activities after a breach. The same approach should apply to AI.

When things go wrong and AI runs amok, the lawyers will be there to tell us the most company-friendly version of what happened. Most importantly, they'll protect companies from having to share how their AI systems work.

We're trading a technical black box for a legal one. Somehow, this seems even more unfair.


View original post here:

Researchers were about to solve AI's black box problem, then the lawyers got involved - The Next Web


Baidu, Samsung Electronics Announce Production of its Cloud-to-Edge AI Accelerator to Start Early 2020 – HPCwire

Posted: at 8:44 pm

BEIJING AND SEOUL, South Korea, Dec. 18, 2019 -- Baidu, a leading Chinese-language Internet search provider, and Samsung Electronics, a world leader in advanced semiconductor technology, today announced that Baidu's first cloud-to-edge AI accelerator, Baidu KUNLUN, has completed its development and will be mass-produced early next year.

The Baidu KUNLUN chip is built on the company's advanced XPU, a home-grown neural processor architecture for cloud, edge, and AI, as well as Samsung's 14-nanometer (nm) process technology with its I-Cube (Interposer-Cube) package solution.

The chip offers 512 gigabytes per second (GBps) memory bandwidth and supplies up to 260 Tera operations per second (TOPS) at 150 watts. In addition, the new chip allows Ernie, a pre-training model for natural language processing, to infer three times faster than the conventional GPU/FPGA-accelerating model.
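
To put those figures in rough perspective, here is a quick back-of-the-envelope calculation. The only inputs are the three numbers quoted above; the derived ratios are illustrative arithmetic, not metrics published by Baidu or Samsung.

```python
# Back-of-the-envelope ratios from the published KUNLUN figures (illustrative only).
tops = 260            # tera operations per second, as stated
watts = 150           # power draw in watts, as stated
bandwidth_gbps = 512  # memory bandwidth in gigabytes per second, as stated

print(f"compute efficiency: {tops / watts:.2f} TOPS per watt")
print(f"peak operations per byte of memory bandwidth: {tops * 1e12 / (bandwidth_gbps * 1e9):.0f}")
```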

Leveraging the chip's limit-pushing computing power and power efficiency, Baidu can effectively support a wide variety of functions, including large-scale AI workloads such as search ranking, speech recognition, image processing, natural language processing, autonomous driving, and deep learning platforms like PaddlePaddle.

Through the first foundry cooperation between the two companies, Baidu will provide advanced AI platforms for maximizing AI performance, and Samsung will expand its foundry business into high performance computing (HPC) chips that are designed for cloud and edge computing.

"We are excited to lead the HPC industry together with Samsung Foundry," said OuYang Jian, Distinguished Architect of Baidu. "Baidu KUNLUN is a very challenging project since it requires not only a high level of reliability and performance, but is also a compilation of the most advanced technologies in the semiconductor industry. Thanks to Samsung's state-of-the-art process technologies and competent foundry services, we were able to meet and surpass our goal to offer a superior AI user experience."

"We are excited to start a new foundry service for Baidu using our 14nm process technology," said Ryan Lee, vice president of Foundry Marketing at Samsung Electronics. "Baidu KUNLUN is an important milestone for Samsung Foundry as we're expanding our business area beyond mobile to datacenter applications by developing and mass-producing AI chips. Samsung will provide comprehensive foundry solutions from design support to cutting-edge manufacturing technologies, such as 5LPE and 4LPE, as well as 2.5D packaging."

As higher performance is required in diverse applications such as AI and HPC, chip integration technology is becoming more and more important. Samsung's I-Cube technology, which connects a logic chip and high bandwidth memory (HBM2) with an interposer, provides higher density and bandwidth in a minimal footprint by utilizing Samsung's differentiated solutions.

Compared to previous technology, these solutions maximize product performance with more than 50% improved power and signal integrity. It is anticipated that I-Cube technology will mark a new epoch in the heterogeneous computing market. Samsung is also developing more advanced packaging technologies, such as redistribution layer (RDL) interposers and 4x and 8x HBM integrated packages.

About Samsung Electronics Co., Ltd.

Samsung Electronics inspires the world and shapes the future with transformative ideas and technologies. The company is redefining the worlds of TVs, smartphones, wearable devices, tablets, digital appliances, network systems, and memory, system LSI, foundry and LED solutions. For the latest news, please visit the Samsung Newsroom at http://news.samsung.com.

About Baidu

Baidu, Inc. is the leading Chinese language Internet search provider. Baidu aims to make the complicated world simpler through technology. Baidu's ADSs trade on the NASDAQ Global Select Market under the symbol BIDU. Currently, ten ADSs represent one Class A ordinary share.

Source: Samsung Electronics Co., Ltd.

Excerpt from:

Baidu, Samsung Electronics Announce Production of its Cloud-to-Edge AI Accelerator to Start Early 2020 - HPCwire


Tech connection: To reach patients, pharma adds AI, machine learning and more to its digital toolbox – FiercePharma

Posted: at 8:44 pm


Introducing AI to the Back Office – Does the Tech Measure Up to the Hype? – www.waterstechnology.com

Posted: at 8:44 pm


This article was paid for by a contributing third party.

Throughout 2019, artificial intelligence (AI) has been one of the most predominant buzzwords in the financial technology space. AI has promised enhanced accuracy and improved efficiencies, allowing staff to focus on higher-value tasks; it truly has the potential to revolutionize the back office.

So, what's stopping capital markets firms from taking the leap?

This webinar identifies firms already using AI across their back offices, the benefits of doing so and the challenges they face.


Read the original:

Introducing AI to the Back Office – Does the Tech Measure Up to the Hype? - http://www.waterstechnology.com


Instagram Touts Anti-Bullying AI Created to Curb Offensive Speech – NewsBusters

Posted: at 8:44 pm

It's the future you probably didn't ask for -- being nagged by Artificial Intelligence to stop being offensive and bullying.

Instagram touted its new anti-bullying Artificial Intelligence program in its Dec. 16 blog about the social media giant's long-term commitment to lead the fight against online bullying. Instagram claims the AI program notifies people when their captions on a photo or video may be considered offensive, and gives them a chance to pause and reconsider their words before posting.

Instagram originally announced this new AI that preempts offensive posts in a July 8 blog headlined "Our Commitment to Lead the Fight Against Online Bullying." The Big Tech photo-sharing giant wrote that the program gives users a chance to reflect and undo their comment and prevents the recipient from receiving the harmful comment notification. "From early tests of this feature, we have found that it encourages some people to undo their comment and share something less hurtful once they have had a chance to reflect."

Instagram has been experimenting with tackling the issue of bullying for quite some time now. Previously this took the form of a content filter that was created to help keep Instagram a safe place for self-expression by blocking offensive comments. Instagram CEO and co-founder Kevin Systrom wrote in a June 2017 blog that "we've developed a filter that will block certain offensive comments on posts and in live video," further specifying that the content filter was intended to foster kind, inclusive communities on Instagram. This filter program came to fruition in May 2018 with a follow-up blog proclaiming that Instagram will filter "bullying comments intended to harass or upset people" in order to keep the platform "an inclusive, supportive place."

Instagram followed this filter up with a separate AI program that will anticipate users' offensive posts rather than merely filter them retroactively.

Providing an update on the AI program, Instagram wrote that "[r]esults have been promising, and we've found that these types of nudges can encourage people to reconsider their words when given a chance."

This program is initially being rolled out in select countries, though it will soon be expanding globally in the coming months, noted Instagram in its blog.

The process, as Instagram explains it, is that when an Instagram user writes a caption on a post and its AI detects the caption as potentially offensive, the user will receive a prompt informing them that their caption is similar to those reported for bullying. Users will then have the opportunity to change their caption before posting it.
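
A rough sketch of what such a "pause and reconsider" flow might look like is below. Instagram has not published its model, thresholds, or prompt logic, so the keyword-based check and the function names here are stand-ins invented for illustration, not the company's actual system.

```python
# Hypothetical sketch of a caption-nudge flow; a real classifier would be a
# learned model, not a keyword list, and none of these names are Instagram's.
FLAGGED_PHRASES = {"you're stupid", "nobody likes you"}  # illustrative only

def looks_offensive(caption: str) -> bool:
    text = caption.lower()
    return any(phrase in text for phrase in FLAGGED_PHRASES)

def submit_caption(caption: str) -> str:
    if looks_offensive(caption):
        # The user is warned and given a chance to edit before anything is posted.
        return ("This caption looks similar to others that have been reported. "
                "Edit caption, or share anyway?")
    return "Caption posted."

print(submit_caption("you're stupid"))           # triggers the warning
print(submit_caption("great day at the beach"))  # posts normally
```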

How serious are these warnings? What is the price of not heeding them? According to the recent blog, "In addition to limiting the reach of bullying, this warning helps educate people on what we don't allow on Instagram, and when an account may be at risk of breaking our rules."

The example of one such offensive comment shown on the blog was a user commenting "you're stupid" before getting sent the notification, which read: "This caption looks similar to others that have been reported." The question remains as to what constitutes bullying, what constitutes a critique, and what potential biases lead the AI to classify various comments as offensive or bullying.

But how can a computer program be biased? Rep. Alexandria Ocasio-Cortez explained this in a way that the left may find difficult to debunk. She accused algorithms of potentially being rife with bias while speaking at an MLK Now event in January. She claimed that algorithms "always have these racial inequities that get translated," and that "if you don't fix the bias, then you're just automating the bias." LiveScience backed up Ocasio-Cortez's claim by citing an example about facial recognition: if a program is being trained to recognize women in photographs, and all the images it is given are of women with long hair, then it will think anyone with short hair is a man.
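
The LiveScience example can be reproduced in a few lines of toy code: train on skewed data and the shortcut becomes the rule. The dataset, labels, and nearest-mean "classifier" below are invented purely to illustrate that point.

```python
# Toy demonstration of dataset bias: if every "woman" example has long hair,
# the model learns hair length as the class itself. All data here is made up.
def train_mean_classifier(examples):
    """examples: list of (hair_length_cm, label); returns mean length per label."""
    sums, counts = {}, {}
    for length, label in examples:
        sums[label] = sums.get(label, 0) + length
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(means, hair_length):
    return min(means, key=lambda label: abs(means[label] - hair_length))

# Biased training set: every "woman" example happens to have long hair.
biased_data = [(40, "woman"), (45, "woman"), (50, "woman"), (5, "man"), (8, "man")]
means = train_mean_classifier(biased_data)

print(predict(means, 6))    # any short-haired person -> "man"
print(predict(means, 42))   # any long-haired person  -> "woman"
```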

If Instagram's algorithm is trained to see mere disagreement as a form of bullying, or fact-checking by opposing political figures as offensive, then such content will be categorized accordingly. This has scary implications for current American politics.

Instagram may have done something similar already, when it protected Sen. Elizabeth Warren (D-MA) from critique in February 2019. GOP spokeswoman Kayleigh McEnany tweeted, "I have been warned by @instagram and cannot operate my account because I posted an image of Elizabeth Warren's Bar of Texas registration form via @washingtonpost. I'm warned that I am harassing, bullying, and blackmailing her."

Later, as reported by the Daily Caller, Instagram reinstated McEnany's account and sent an apology, saying that it mistook the post for sharing her private address.

More:

Instagram Touts Anti-Bullying AI Created to Curb Offensive Speech - NewsBusters


AI has bested chess and Go, but it struggles to find a diamond in Minecraft – The Verge

Posted: December 13, 2019 at 3:24 pm

Whether we're learning to cook an omelet or drive a car, the path to mastering new skills often begins by watching others. But can artificial intelligence learn the same way? A new challenge teaching AI agents to play Minecraft suggests it's much trickier for computers.

Announced earlier this year, the MineRL competition asked teams of researchers to create AI bots that could successfully mine a diamond in Minecraft. This isn't an impossible task, but it does require a mastery of the game's basics. Players need to know how to cut down trees, craft pickaxes, and explore underground caves while dodging monsters and lava. These are the sorts of skills that most adults could pick up after a few hours of experimentation or learn much faster by watching tutorials on YouTube.

But of the 660 entries in the MineRL competition, none were able to complete the challenge, according to results that will be announced at the AI conference NeurIPS and that were first reported by BBC News. Although bots were able to learn intermediary steps, like constructing a furnace to make durable pickaxes, none successfully found a diamond.

"The task we posed is very hard," Katja Hofmann, a principal researcher at Microsoft Research, which helped organize the challenge, told BBC News. "While no submitted agent has fully solved the task, they have made a lot of progress and learned to make many of the tools needed along the way."

This may be a surprise, especially when you think that AI has managed to best humans at games like chess, Go, and Dota 2. But it reflects important limitations of the technology as well as restrictions put in place by MineRL's judges to really challenge the teams.

The bots in MineRL had to learn using a combination of methods known as imitation learning and reinforcement learning. In imitation learning, agents are shown data of the task ahead of them, and they try to imitate it. In reinforcement learning, they're simply dumped into a virtual world and left to work things out for themselves using trial and error.

Often, AI is only able to take on big challenges by combining these two methods. The famous AlphaGo system, for example, first learned to play Go by being fed data of old games. It then honed its skills and surpassed all humans by playing itself over and over.
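
In code, that two-phase recipe might look like the toy sketch below: a tiny corridor-walking task in which the agent first copies demonstrated actions (imitation), then refines its behavior by trial and error (Q-learning). This is an illustration of the general pattern under simplifying assumptions, not the MineRL or AlphaGo implementation.

```python
# Toy two-phase training: imitation learning to seed the policy, then
# reinforcement learning (epsilon-greedy Q-learning) to refine it.
import random

random.seed(0)
GOAL, ACTIONS = 10, (-1, +1)

# --- Phase 1: imitation learning from recorded expert play -------------------
demos = [(s, +1) for s in range(GOAL)]   # the "gameplay videos": always step right
q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}
for s, a in demos:
    q[(s, a)] += 1.0                     # bias the policy toward demonstrated actions

# --- Phase 2: reinforcement learning by trial and error ----------------------
alpha, gamma, eps = 0.5, 0.95, 0.1
for episode in range(200):
    s = 0
    for _ in range(50):
        a = random.choice(ACTIONS) if random.random() < eps else max(ACTIONS, key=lambda x: q[(s, x)])
        s_next = min(max(s + a, 0), GOAL)
        reward = 1.0 if s_next == GOAL else 0.0
        best_next = max(q[(s_next, x)] for x in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s_next
        if s == GOAL:
            break

policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)]
print("learned policy (should be all +1):", policy)
```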

The MineRL bots took a similar approach, but the resources available to them were comparatively limited. While AI agents like AlphaGo are created with huge datasets, powerful computer hardware, and the equivalent of decades of training time, the MineRL bots had to make do with just 1,000 hours of recorded gameplay to learn from, a single Nvidia graphics processor to train with, and just four days to get up to speed.

It's the difference between the resources available to an MLB team (coaches, nutritionists, the finest equipment money can buy) and what a Little League squad has to make do with.

It may seem unfair to hamstring the MineRL bots in this way, but these constraints reflect the challenges of integrating AI into the real world. While bots like AlphaGo certainly push the boundary of what AI can achieve, very few companies and research labs can match the resources of Google-owned DeepMind.

The competition's lead organizer, Carnegie Mellon University PhD student William Guss, told BBC News that the challenge was meant to show that not every AI problem should be solved by throwing computing power at it. This mindset, said Guss, "works directly against democratizing access to these reinforcement learning systems, and leaves the ability to train agents in complex environments to corporations with swathes of compute."

So while AI may be struggling in Minecraft now, when it cracks this challenge, it'll hopefully deliver benefits to a wider audience. Just don't think about those poor Minecraft YouTubers who might be out of a job.

Read the original post:

AI has bested chess and Go, but it struggles to find a diamond in Minecraft - The Verge


AI for Peace – War on the Rocks

Posted: at 3:24 pm

This article was submitted in response to the call for ideas issued by the co-chairs of the National Security Commission on Artificial Intelligence, Eric Schmidt and Robert Work. It addresses the fourth question (part a.), which asks what international norms for artificial intelligence the United States should lead in developing, and whether it is possible to create mechanisms for the development and enforcement of AI norms.

In 1953, President Dwight Eisenhower asked the world to join him in building a framework for Atoms for Peace. He made the case for a global agreement to prevent the spread of nuclear weapons while also sharing the peaceful uses of nuclear technology for power, agriculture, and medicine. No one would argue the program completely prevented the spread of weapons technology: India and Pakistan used technology gained through Atoms for Peace in their nascent nuclear weapons programs. But it made for a safer world by paving the way for a system of inspections and controls on nuclear facilities, including the establishment of the International Atomic Energy Agency and, later, the widespread ratification of the Treaty on the Nonproliferation of Nuclear Weapons (NPT). These steps were crucial for building what became known as the nuclear nonproliferation regime.

The world stands at a similar juncture today, at the dawn of the age of artificial intelligence (AI). The United States should apply lessons from the 70-year history of governing nuclear technology by building a framework for governing AI military technology.

What would AI for Peace look like? The nature of AI is different than that of nuclear technology, but some of the principles that underpinned the nonproliferation regime can be applied to combat the dangers of AI. Government, the private sector, and academia can work together to bridge national divides. Scientists and technologists, not just traditional policymakers, will be instrumental in providing guidance about how to govern new technology. At a diplomatic level, sharing the peaceful benefits of technology can encourage countries to open themselves up to inspection and controls. And even countries that are competitors can cooperate to establish norms to prevent the spread of technology that would be destabilizing.

AI for Peace could go beyond current efforts by involving the private sector from the get-go and identifying the specific dangers AI presents and the global norms that could prevent those dangers (e.g., what does meaningful human control over smart machines mean in specific contexts?). It would also go beyond Department of Defense initiatives to build norms by encompassing peaceful applications. Finally, it would advance the United States' historic role as a leader in forging global consensus.

The Dangers of Artificial Intelligence

The uncertainty surrounding AI's long-term possibilities makes it difficult to regulate, but the potential for chaos is more tangible. It could be used to inflict catastrophic kinetic, military, and political damage. AI-assisted weapons are essentially very smart machines that can find hidden targets more quickly and attack them with greater precision than conventional computer-guided weapons.

As AI becomes incorporated into society's increasingly autonomous information backbone, it could also pose a risk of catastrophic accidents. If AI becomes pervasive, banking, power generation, and hospitals will be even more vulnerable to cyberattack. Some speculate that an AI superintelligence could develop a strategic calculating ability so superior that it destabilizes arms control efforts.

There are limits to the nuclear governance analogy. Whereas nuclear technology was once the purview only of the most powerful states, the private sector leads AI innovation. States could once agree to safeguard nuclear secrets, but AI is already everywhere including in every smartphone on the planet.

Its ubiquity shows its appeal, but the same ubiquity lowers the cost of sowing disorder. A recent study found that for less than $10 anyone could create a fake United Nations speech credible enough to be shared on the internet as real. Controlling the most dangerous uses of technology will require private sector initiatives to build safety into AI systems.

Scientists Speak Out

In 2015, Stephen Hawking, Peter Norvig, and others signed an open letter calling for more research on AI's impacts on society. The letter recognized the tremendous benefits AI could bring for human health and happiness, but also warned of unpredictable dangers. The key issue is that humans should remain in control. More than 700 AI and robotics researchers signed the 2017 Asilomar AI Principles calling for shared responsibility and warning against an AI arms race.

The path to governing nuclear technology followed a similar pattern of exchange between scientists and policymakers. Around 1943, Niels Bohr, a famous Danish physicist, made the case that since scientists created nuclear weapons, they should take responsibility for efforts to control the technology. Two years later, after the first use of nuclear weapons, the United States created a committee to deliberate about whether the weapons should become central to U.S. military strategy, or whether the country should forego them and avoid a costly arms race. The Acheson-Lilienthal committee's proposal to put nuclear weapons under shared international control failed to gain support, but it was one step in a consensus-building process. The U.S. Department of Defense, Department of State, and other agencies developed their own perspectives, and U.N. negotiations eventually produced the Treaty on the Non-Proliferation of Nuclear Weapons (NPT). Since entering into force in 1970, it has become the most widely subscribed arms control treaty in history, with a total of 191 signatory states.

We are in the Acheson-Lilienthal age of governing AI. Neither disarmament nor shared control is feasible in the short term, and the best hope is to limit risk. The NPT was created with the principles of non-possession and non-transfer of nuclear weapons material and technology in mind, but AI code is too diffuse and too widely available for those principles to be the lodestar of AI governance.

What Norms Do We Want?

What then does nonproliferation look like in AI? What could or should be prohibited? One popular proposal is a "no kill" rule for unassisted AI: humans should bear responsibility for military attack.

A current Defense Department directive requires "appropriate levels of human judgment" in autonomous system attacks aimed at humans. This allows the United States to claim the moral high ground. The next step is to add specificity to what appropriate levels of judgment means in particular classes of technology. For example, greater human control might be proportional to greater potential for lethality. Many of AI's dangers stem from the possibility that it might act through code too complex for humans to understand, or that it might learn so rapidly as to be outside of human direction and therefore threaten humanity. We must consider how these situations might arise and what could be done to preserve human control. Roboticists say that such existing tools as reinforcement learning and utility functions will not solve the control problem.

An AI system might need to be turned off for maintenance or, crucially, in cases where the AI system poses a threat. Robots often have a red shutdown button in case of emergency, but an AI system might be able to learn to turn off its own off switch, which would likely be software rather than a big red button. Google is developing an off switch it terms a "kill switch" for its applications, and European lawmakers are debating whether and how to make a kill switch mandatory. This may require a different kind of algorithm than currently exists, one with safety and interpretability at the core. It is not clear what an off switch means in military terms, but American-Soviet arms control faced a similar problem. Yet arms control proceeded through technical negotiations that established complex yet robust command and control systems.
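
One way to picture the off-switch requirement is a control loop in which the interrupt lives entirely outside the agent's reach, as in the sketch below. This is only an illustration of the concept under that assumption; it does not describe Google's actual mechanism or any deployed system.

```python
# Conceptual sketch: the kill switch is held by the operator, outside the agent's
# own control loop, so the agent has nothing it could learn to disable.
import threading
import time

class Operator:
    """Holds the kill switch; only the human side ever sets it."""
    def __init__(self):
        self._stop = threading.Event()

    def shut_down(self):
        self._stop.set()

    def should_stop(self):
        return self._stop.is_set()

def run_agent(operator, max_steps=1_000_000):
    for step in range(max_steps):
        if operator.should_stop():   # checked before every action the agent takes
            print(f"agent halted by operator at step {step}")
            return
        time.sleep(0.001)            # stand-in for the agent planning and acting

operator = Operator()
worker = threading.Thread(target=run_agent, args=(operator,))
worker.start()
time.sleep(0.05)        # let the agent run briefly
operator.shut_down()    # the emergency stop
worker.join()
```

The hard part, as the roboticists cited above suggest, is guaranteeing that a learning system has no incentive to route around such a check, which is why the control problem remains open rather than a solved engineering detail.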

Building International Consensus

The NPT was preceded by a quarter century of deliberation and consensus building. We are at the beginning of that timeline for AI. The purpose of treaties and consensus building is to limit the risks of dangerous technology by convincing countries that restraint is in the interests of mankind and their own security.

Nuclear nonproliferation agreements succeeded because the United States and the Soviet Union convinced non-nuclear nations that limiting the spread of nuclear weapons was in their interest, even if it meant renouncing weapons while other countries still had them. In 1963, John F. Kennedy asked what it would mean to have nuclear weapons "in so many hands, in the hands of countries large and small, stable and unstable, responsible and irresponsible, scattered throughout the world." The answer was that more weapons in the hands of more countries would increase the chance of accident, proxy wars, weak command and control systems, and first strikes. The threat of nuclear weapons in the hands of regional rivals could be more destabilizing than in the hands of the superpowers. We do not yet know if the same is true for AI, but we should investigate the possibility.

Access to Peaceful Technology

It is a tall order to ask countries to buy into a regime that limits their development of a powerful new technology. Nuclear negotiations offered the carrot of eventual disarmament, but what disarmament means in the AI context is not clear. However, the principle that adopting restrictions on AI weapons should be linked to access to the benefits of AI for peaceful uses and security cooperation could apply. Arms control negotiator William Foster wrote in 1967 that the NPT would stimulate widespread, peaceful development of nuclear energy. Why not promise to share peaceful and humanitarian applications of AI (for agriculture and medicine, for example) with countries that agree to participate in global controls?

The foundation of providing access to peaceful nuclear technology in exchange for monitoring materials and technology led to the development of a system of inspections known as safeguards. These were controversial and initially not strong enough to prevent the spread of nuclear weapons, but they took hold over time. A regime for AI inspection and verification will take time to emerge.

As in the nuclear sphere, the first step is to build consensus and identify what other nations want and where common interest lies. AI exists in lines of code, not molecules of uranium. For publicly available AI code, principles of transparency may help mutual inspection. For code that is protected, more indirect measures of monitoring and verification may be devised.

Finally, nuclear arms control and nonproliferation succeeded as part of a larger strategy (including extended deterrence) that provided strategic stability and reassurance to U.S. allies. America and the Soviet Union, despite their Cold War competition, found common interests in preventing the spread of nuclear weapons. AI strategy goes hand-in-hand with a larger defense strategy.

A New AI for Defense Framework

Once again, the world needs U.S. global leadership, this time to prevent an AI arms race, accident, or catastrophic attack. U.N.-led discussions are valuable but overly broad, and the technology has too many military applications for industry alone to lead regulation. Current U.N. talks are preoccupied with discussion of a ban on lethal autonomous weapons. These are sometimes termed "killer robots" because they are smart machines that can move in the world and make decisions without human control. They cause concern if human beings are not involved in the decision to kill. The speed and scale of AI deployment calls for more nuance than the current U.N. talks can provide, and more involvement by more stakeholders, including national-level governments and industry.

As at the dawn of the nuclear age, the United States can build global consensus in the age of AI to reduce risks and make the world safe for one of its leading technologies, one that's valuable to U.S. industry and to humanity.

Washington should build a framework for a global consensus on how to govern AI technology that could be weaponized. Private sector participation would be crucial to address governance, as well as how to share peaceful benefits to incentivize participation. The Pentagon, in partnership with private sector technology firms, is a natural leader because of its budget and role in the industrial base.

An AI for Peace program should articulate the dangers of this new technology, principles (e.g. no kill, human control, off switch) to manage the dangers, and a structure to shape the incentives for other states (perhaps a system of monitoring and inspection). Our age is not friendly to new treaties, but we can foster new norms. We can learn from the nuclear age that countries will agree to limit dangerous technology with the promise of peaceful benefits for all.

Patrick S. Roberts is a political scientist at the nonprofit, nonpartisan RAND Corporation. Roberts served as an advisor in the State Department's Bureau of International Security and Nonproliferation, where he worked on the NPT and other nuclear issues.

Image: Nuclear Regulatory Commission

More:

AI for Peace - War on the Rocks


An AI conference once known for blowout parties is finally growing up – MIT Technology Review

Posted: at 3:24 pm

Only two years ago, so I'm told, one of the hottest AI research conferences of the year was more giant party than academic exchange. In a fight for the best talent, companies handed out endless free swag and threw massive, blowout events, including one featuring Flo Rida, hosted by Intel. The attendees (mostly men in their early 20s and 30s), flush with huge salaries and the giddiness of being highly coveted, drank free booze and bumped the night away.

I never witnessed this version of NeurIPS, short for the Neural Information Processing Systems conference. I came for my first time last year, after the excess had reached its peak. Externally, the community was coming under increasing scrutiny as the upset of the 2016 US presidential election drove people to question the influence of algorithms in society. Internally, reports of sexual harassment, anti-Semitism, racism, and ageism were also driving conference-goers to question whether they should continue to attend.


So when I arrived in 2018, a diversity and inclusion committee had been appointed, and the long-standing abbreviation NIPS had been updated. Still, this year's proceedings feel different from the last. The parties are smaller, the talks are more socially minded, and the conversations happening in between seem more aware of the ethical challenges that the field needs to address.

As the role of AI has expanded dramatically, along with the more troubling aspects of its impact, the community, it seems, has finally begun to reflect on its power and the responsibilities that come with it. As one attendee put it to me: "It feels like this community is growing up."

This change manifested in some concrete ways. Many of the technical sessions were more focused on addressing real-world, human-centric challenges rather than theoretical ones. Entire poster tracks were centered on better methods for protecting user privacy, ensuring fairness, and reducing the amount of energy it can take to run and train state-of-the-art models. Day-long workshops, scheduled to happen today and tomorrow, have titles like "Tackling Climate Change with Machine Learning" and "Fairness in Machine Learning for Health."

Additionally, many of the invited speakers directly addressed the social and ethical challenges facing the field, topics once dismissed as not core to the practice of machine learning. Their talks were also well received by attendees, signaling a new openness to engage with these issues. At the opening event, for example, cognitive psychologist and #metoo figurehead Celeste Kidd gave a rousing speech exhorting the tech industry to take responsibility for how its technologies shape people's beliefs and debunking myths around sexual harassment. She received a standing ovation. In an opening talk at the Queer in AI symposium, Stanford researcher Ria Kalluri also challenged others to think more about how their machine-learning models could shift the power in society from those who have it to those who don't. Her talk was widely circulated online.

Much of this isn't coincidental. Through the work of the diversity and inclusion committee, the conference saw the most diverse participation in its history. Close to half the main-stage speakers were women and a similar number minorities; 20% of the over 13,000 attendees were also women, up from 18% last year. There were seven community-organized groups for supporting minority researchers, which is a record. These included Black in AI, Queer in AI, and Disability in AI, and they held parallel proceedings in the same space as NeurIPS to facilitate mingling of people and ideas.

"When we involve more people from diverse backgrounds in AI," Kidd told me, "we naturally talk more about how AI is shaping society, for good or for bad." "They come from a less privileged place and are more acutely aware of things like bias and injustice and how technologies that were designed for a certain demographic may actually do harm to disadvantaged populations," she said. Kalluri echoed the sentiment. The intentional efforts to diversify the community, she said, are forcing it to confront the questions of how power works in this field.

Despite the progress, however, many emphasized that the work is just getting started. Having 20% women is still appalling, and this year, as in past years, there continued to be Herculean challenges in securing visas for international researchers, particularly from Africa.

"Historically, this field has been pretty narrowed in on a particular demographic of the population, and the research that comes out reflects the values of those people," says Katherine Heller, an assistant professor at Duke University and co-chair of the diversity committee. "What we want in the long run is a more inclusive place to shape what the future direction of AI is like. There's still a far way to go."

Yes, there's still a long way to go. But on Monday, as people lined up to thank Kidd for her talk one by one, I let myself feel hopeful.

Visit link:

An AI conference once known for blowout parties is finally growing up - MIT Technology Review


12 Everyday Applications Of Artificial Intelligence Many People Aren’t Aware Of – Forbes

Posted: at 3:24 pm

By now, almost everyone knows a little bit about artificial intelligence, but most people aren't tech experts, and many may not be aware of just how big an impact AI has. The truth is most consumers interact with technology incorporating AI every day. From the searches we perform in Google to the advertisements we see on social media, AI is an ever-present feature of our lives.

To help nonspecialists grasp the degree to which AI has been woven into the fabric of modern society, 12 experts from Forbes Technology Council detail some applications of AI that many may not be aware of.

1. Offering Better Customer Service

Calling customer service used to be as exciting as seeing a dentist. AI has changed that: You no longer have to repeat the same information countless times to different call center agents. Brands are able to tap into insights on all their previous interactions with you. Data analytics and AI help brands anticipate what their customers want and deliver more intelligent customer experiences. - Song Bac Toh, Tata Communications

2. Personalizing The Shopping Experience

Every time you shop online at an e-commerce site, as soon as you start clicking on a product the site starts to provide personalized recommendations of relevant products. Nowadays most of these applications use some form of AI algorithms (reinforcement learning and others) to come up with such results. The experience is so transparent most shoppers don't even realize it's AI. - Brian Sathianathan, Iterate.ai

3. Making Recruiting More Efficient

Next time you go to look for a new job, write your résumé for a computer, not a recruiter. AI is aggregating the talent pool, slimming the selection to a shortlist and ranking matches based on skills and qualifications. AI has thoroughly reviewed your résumé and application through machine learning before a human ever gets to look at them. - Tammy Cohen, InfoMart Inc.

4. Keeping Internet Services Running Smoothly

Consumers have come to expect their favorite apps and services to run smoothly, and AI makes that possible. AI does what humans cannot: It monitors apps, identifies problems and helps humans resolve them in a fraction of the time it would take manually. AI has the ability to spot patterns at scale in monitored data with the goal of having service interruptions solved before customers even notice. - Phil Tee, Moogsoft

5. Protecting Your Finances

For credit card companies and banks, AI's incredible ability to analyze massive amounts of data has become indispensable behind the scenes. These financial institutions leverage machine learning algorithms to identify potential fraudulent activity in your accounts and get ahead of any resulting detrimental effects. Every day, this saves people from tons of agony and headaches. - Marc Fischer, Dogtown Media LLC

6. Enhancing Vehicle Safety

Even if you don't have a self-driving vehicle, your car uses artificial intelligence. Lane-departure warnings notify a driver if the car has drifted out of its lane. Adaptive cruise control ensures that the car maintains a safe distance while cruising. Automated emergency braking senses when a collision is about to happen and applies the brakes faster than the driver can. - Amy Czuchlewski, Bottle Rocket

7. Converting Handwritten Text To Machine-Readable Code

The post office has tech called optical character recognition that converts handwritten text to machine-readable code. Reading handwriting requires human intelligence, but there are machines that can do it, too! Fun fact: This technology was invented in 1914 (yes, you read that right!). So, we experience forms of AI all the time. It's just a lot trendier now to call it AI. - Parry Malm, Phrasee

8. Improving Agriculture Worldwide

Most people don't think of AI when they eat a meal, but AI is improving agriculture worldwide. Some examples: satellites scanning farm fields to monitor crop and soil health; machine learning models that track and predict environmental impacts, like droughts; and big data to differentiate between plants and weeds for pesticide control. Thank AI for the higher crop yields. - John McDonald, ClearObject

9. Helping Humanitarian Efforts

While we often hear about AI going wrong, it's doing good things, like guiding humanitarian aid, supporting conservation efforts and helping local government agencies fight droughts. AI always seems to get painted as some sci-fi type of endeavor when really it's already the framework of many things going on around us all the time. - Alyssa Simpson Rochwerger, Figure Eight

10. Keeping Security Companies Safe From Cyberattacks

AI has become the main way that security companies keep us safe from cyber attacks. Deep learning models run against billions of events each day, identifying threats in ways that were simply unimaginable five years ago. Unfortunately, the bad actors also have access to AI tools, so the cat-and-mouse game continues. - Paul Lipman, BullGuard

11. Improving Video Surveillance Capabilities

In cities, along highways and in neighborhoods, video cameras are proliferating. Federal, state and/or local authorities deploy these devices to monitor traffic and security. In the background, AI-related technologies that include object and facial recognition technologies underpinned by machine and deep learning capabilities speed problem identification, reducing crime and mitigating traffic. - Michael Gurau, Kaiser Associates, Inc.

12. Altering Our Trust In Information

AI will change how we learn and the level of trust we place in information. Deepfakes and the ability to create realistic videos, pictures, text, speech and other forms of communication on which we have long relied to convey information will give rise to concerns about the foundational facts used to inform decision-making in every aspect of life. - Mike Fong, Privoro

Read more here:

12 Everyday Applications Of Artificial Intelligence Many People Aren't Aware Of - Forbes

