Daily Archives: May 2, 2017

Microsoft’s new head of research has spent his career building powerful AI and making sure it’s safe – Quartz

Posted: May 2, 2017 at 11:03 pm

As director of Microsoft's Building 99 research lab in Redmond, Washington, Eric Horvitz gave each of his employees a copy of David McCullough's The Wright Brothers. "I said to them, 'Please read every word of this book,'" Horvitz says, tapping the table to highlight each syllable.

Horvitz wanted them to read the story of the Wright brothers' determination to show them what it takes to invent an entirely new industry. In some ways, his own career in artificial intelligence has followed a similar trajectory. For nearly 25 years, Horvitz has endeavored to make machines as capable as humans.

The effort has required breaking new ground in different scientific disciplines and maintaining a belief in human ingenuity when skeptics saw only a pipe dream. "The first flying machines were canvas flapping on a beach; it was a marvel they got it off the ground," says Horvitz. "But in 50 summers, you've got a Boeing 707, complete with a flight industry."

Horvitz wants to fundamentally change the way humans interact with machines, whether that's building a new way for AI to fly a coworker's plane or designing a virtual personal assistant that lives outside his office. He will get a chance to further his influence, with his appointment yesterday as head of all of Microsoft's research centers outside Asia.

In his new role, Horvitz will channel AI expertise from each lab (in Redmond, Washington; Bangalore, India; New York City; Cambridge, Massachusetts; and Cambridge, England) into core Microsoft products, as well as setting up a dedicated AI initiative within Redmond. He also plans to make Microsoft Research a place that studies the societal and social influences of AI. The work he plans to do, he says, will be "game-changing."

Horvitz, 59, has the backing of one of the industry's most influential figures. Microsoft CEO Satya Nadella has spent the last two years rebuilding the company around artificial intelligence. "We want to bring intelligence to everything, to everywhere, and for everyone," he told developers last year.

Handing Horvitz the reins to Microsoft's research ensures a renewed, long-term focus on the technology.

Horvitz, long a leading voice in AI safety and ethics, has used his already considerable influence to ask many of the uncomfortable questions that AI research has raised. What if, for instance, the machines unconsciously incarcerated innocent people, or could be used to create vast economic disparity with little regard to society?

Horvitz has been instrumental in corralling thinking on these issues from some of tech's largest and most powerful companies through the Partnership on Artificial Intelligence, a consortium that is eager to set industry standards for transparency, accountability, and safety for AI products. And he's testified before the US Senate, giving level-headed insight on the promise of automated decision-making, while recommending caution given its latent dangers.

In 2007, Horvitz was elected to a two-year term as president of the Association for the Advancement of Artificial Intelligence (AAAI), the largest trade organization for AI research. It's hard to overstate the group's influence. Find an AI PhD student and ask them who's the most important AI researcher of all time. Marvin Minsky? President from 1981-1982. John McCarthy? President from 1983-1984. Allen Newell? The group's first president, from 1979-1980. You get the picture.

Throughout Horvitz's AAAI tenure, he looked for the blind spots intelligent machines encountered when put into the open world. "They have to grapple with this idea of unknown unknowns," he says. Today, we have a much better idea of what these unknowns can be. Even unintentionally biased data powering AI used by law enforcement can discriminate against people by gender or skin color; driverless cars could miss seeing dangers in the world; malicious hackers could try to fool AI into seeing things that aren't there.

The culmination of Horvitz's AAAI presidency, in 2009, was a conference held at the famous Asilomar hotel in Pacific Grove, California, to discuss AI ethics, in the spirit of the meetings on DNA modification held at the same location in 1975. It was the first time such a discussion had been held outside academia, and it was in many ways a turning point for the industry.

"All the people there who were at the meeting went on to be major players in the implementation of AI technology," says Bart Selman, who co-chaired the conference with Horvitz. "The meeting went on to get others to think about the consequences and how to do responsible AI. It led to this new field called AI safety."

Since then, the role of AI has become a topic of public concern. Facebook's Mark Zuckerberg has had to answer the very question that Horvitz began asking a decade ago: Who's responsible when an algorithm provides false information, or traps people within a filter bubble? Automakers in Detroit and their upstart competitors in Silicon Valley have philosophers debating questions like: When faced with fatalities of passengers or pedestrians, who should a driverless car decide to kill?

But there are also unquestionably good uses for AI, and Horvitz arguably spends more time thinking about those, even when he's far from the lab.

When I first met Horvitz, he was stepping off the ice at the Kent Valley Ice Centre hockey rink, about a 30-minute drive south of Building 99. Fresh from an easy 4-1 victory on the ice and wearing a jersey emblazoned with the team name Hackers, he quickly introduced me to teammate Dae Lee, and launched into a discussion of potential uses for AI. "There are 40,000 people who die every year in the hospital from preventable errors," Horvitz said, still out of breath and wearing a helmet. "Dae is working with some predictive machine-learning algorithms to reduce those deaths."

Meeting with him the next day, examples abounded: Algorithms that can reduce traffic by optimizing ride-sharing, systems that aim to catch cancer a full stage before doctors based on your search history (the idea being that you might be searching for information about health conditions that indicate early warnings of the disease), and trying to predict the future by using the past.

Horvitz has been chewing on some of these ideas for decades, and he's quick to tell you if a thought isn't yet completely formed, whether he's discussing the structure of an organization he's a member of, or a theory on whether consciousness is more than a sum of its parts (his current feeling: probably not).

In college, Horvitz pursued similar questions, while earning an undergraduate degree in biophysics from Binghamton University in upstate New York. After finishing his degree, he spent a summer at Mt. Sinai hospital in Manhattan, measuring the electric actuation of neurons in a mouse brain. Using an oscilloscope, he could watch the electric signals that indicated neurons firing.

He didn't intend to go into computer software, but during his first year of medical school at Stanford, he realized he wanted to explore electronic brains, that is, machines that could be made to think like humans. He had been looking at an Apple IIe computer, and realized he had been approaching the problem of human brain activity the wrong way.

"I was thinking that this work of sticking glass electrodes in to watch neurons would be like sticking a wire into one of those black motherboard squares and trying to infer the operating system," he said.

He was trying to understand organic brains from the outside in, instead of building them from the inside out. After finishing his medical degree, he went on to get a PhD in artificial intelligence at Stanford.

Some of his first ideas for AI had to do directly with medicine. Among those formative systems was a program meant to help trauma surgeons triage tasks in emergency situations by enabling them to quickly discern whether a patient was in respiratory distress or respiratory failure.

But the machines at the time, like the famed Apple IIe, were slow and clunky. "They huffed and puffed when making a decision," Horvitz says. The only way for a machine to make a good decision within the allotted time was if the machine knew its limitations: it had to know and decide whether it could make a decision at all, or whether it was too late. The machine had to be self-aware.
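Horvitz's actual systems used far richer decision-theoretic machinery, but the core idea of a machine that "knows its limitations" can be sketched as a loop that refines a diagnosis until it is confident or the deadline arrives, and that explicitly reports when it could not decide in time. Everything here (the names, the signals, the update rule) is invented for illustration:

```python
def anytime_diagnose(evidence, deadline_steps, threshold=0.9):
    """Refine a belief until confident or out of time; if the deadline
    arrives first, the system reports that it cannot decide in time."""
    belief = 0.5  # prior probability of "respiratory failure" vs. "distress"
    for step in range(deadline_steps):
        try:
            obs = next(evidence)  # each observation is a signal in [0, 1]
        except StopIteration:
            break
        # crude exponential update pulling the belief toward the signal
        belief = 0.8 * belief + 0.2 * obs
        if belief >= threshold or belief <= 1 - threshold:
            label = "failure" if belief >= threshold else "distress"
            return ("decide", label, step + 1)
    # deadline reached without enough confidence: the machine knows its limits
    return ("defer", None, deadline_steps)
```

With strong, consistent evidence the loop commits to a diagnosis after a few observations; given a tighter deadline on the same evidence, it defers rather than guess, which is the "self-aware" behavior the paragraph describes.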

Self-aware machines have been the fodder for science fiction for decades; Horvitz has long been focused on actually constructing them. Since the rise of companies like Amazon, Google, and Facebook (which use AI to manage workflow in fulfillment centers, in products like Alexa or search, or to help connect people on social media), much research has been focused on building deep neural networks, which have proven useful for recognizing people or objects in images, recognizing speech, and understanding text. Horvitz's work pinpoints the act of making a decision: How can machines make decisions like expert humans, considering the effects on themselves and the environment, but with the speed and improvable accuracy of a computer?

In his 1990 Stanford thesis, Horvitz described the idea as a model of rational action for automated reasoning systems that makes use of flexible approximation methods and decision-theoretic procedures to determine how best to solve a problem under bounded computational resources.

We'll just call it a kind of self-awareness. While the term is often used interchangeably with consciousness, a term that philosophers still argue over, self-awareness can be considered acting after understanding one's limitations. Horvitz makes it clear that self-awareness isn't a light switch: it's not just on or off, but rather a sea of small predictions that humans make unconsciously every day, and that can sometimes be reverse-engineered.

To see this in action, consider a game that Horvitz worked on in 2009, where an AI agent moderated a trivia game between two people. It would calculate how much time it had to formulate a sentence and speak it, predicting whether it would be socially acceptable to do so. It was a polite bot. In addition, if a third person was seen by the AI's camera in the background, it would stop the game and ask if they wanted to join, a small feat for a human, but something completely out of left field for an artificial game show host.

"And that's the magic, right? That's the moment where it goes from just being a system to being alive," says Anne Loomis Thompson, a senior research engineer at Microsoft. "When these systems really work, it is magic. It feels like they're really interacting with you, like some sentient creature."

Outside of Microsoft, Horvitz's interests in AI safety have gone well past the Asilomar conference. He's personally funded the Stanford 100 Year Study, a look at the long-term effects of artificial intelligence by a cadre of academics with expertise in economics, urban development, entertainment, public safety, employment, and transportation. Its first goal: to gauge the impact of artificial intelligence on a city in the year 2030.

The Partnership on AI, made up of AI leaders from Microsoft, Google, IBM, Amazon, Facebook, and Apple, represents a way for Horvitz to bring the industry together to talk about the use of AI for humanity's benefit. The group has recently published its goals, chiefly creating best practices around fairness, inclusivity, transparency, security, privacy, ethics, and safety of AI systems. It has brought in advisors from outside technology, such as Carol Rose from the ACLU's Massachusetts chapter, and Jason Furman, who was US president Barack Obama's chief economic adviser. Horvitz says there are about 60 companies now trying to join.

Despite the potential dangers of an AI-powered world, Horvitz fundamentally believes in the technology's ability to make human life more meaningful. And now he'll have an even larger platform from which to share the message.


Posted in AI

Bitfusion raises $5M for its AI lifecycle management platform … – TechCrunch

Posted: at 11:03 pm

When Bitfusion launched at Disrupt NY 2015, its focus was on helping developers speed up their applications by giving them pre-compiled libraries that made better use of GPUs, FPGAs and other co-processing technologies. That was two years ago. Today, the hottest market for these technologies is in training deep learning models, something that was barely on the radar when the company launched. Unsurprisingly, though, that's exactly what Bitfusion is focusing on now.

As the company announced today, it has raised a $5 million Series A round led by Vanedge Capital, with participation from new investor Sierra Ventures and existing investors Data Collective, Resonant VC and Geekdom. The company plans to invest this money into strengthening its research and development efforts and to focus on Bitfusion Flex, its new framework-agnostic platform for building and managing AI projects.

Now in beta, Bitfusion Flex essentially aims to give developers a single platform for managing the life cycle of an AI application. Developers get a single dashboard that takes them from development to training, testing and eventually deployment. Under the hood, Flex uses containers to make it easy to scale and to move experiments and models between local machines and the cloud, and it also supports deployments on bare metal.

It's important to note that Flex's focus isn't necessarily on making the modelling easier. While it does offer an app store-like experience for setting up your framework of choice (no matter whether that's TensorFlow, Torch, Caffe or similar tools), its strength is in managing the infrastructure you need to build and run these applications. Because of this, it neither cares about the framework, nor where you want to deploy the application.

The service offers both a web-based interface to manage this process as well as a command-line interface that, for example, lets you attach remote GPUs to your local laptop during the development phase.

"A lot of people who start deep learning projects can't take them beyond the prototype phase," Bitfusion CEO and co-founder Subbu Rama told me. "Everybody wants to do deep learning everywhere, but the Global 2000, they don't have enough people." So with Flex, Bitfusion wants to abstract away the tedious work of managing infrastructure so that the data scientists that companies do eventually manage to hire can focus on their applications.

Looking ahead, Bitfusion plans to expand Flex and bring it out of beta in the next few months. The Austin-based company also plans to expand its Silicon Valley presence (though Rama noted that most of the R&D work will still happen in Austin).


Posted in AI

Tinder Has Been Raided For Research Again, This Time To Help AI ‘Genderize’ Faces – Forbes

Posted: at 11:03 pm


In the age of screen shots and data trails, the idea of putting yourself 'out there' has gained new meaning, especially as dating apps are increasingly mined for users' potentially quite personal info. In a new perceived privacy breach, one developer ...


Posted in AI

Watch this documentary about the AI-powered future of self-driving cars – TNW

Posted: at 11:03 pm

With giants like Google, Apple, Samsung and Uber in the race, we are likely to begin spotting driverless vehicles on the road much more often in the years to come. But what is the current state of affairs in the self-driving car industry? This fascinating short documentary will bring you up to date.

Produced by Red Hat Films, Road to AI explores the future of technology at the intersection of self-driving cars and artificial intelligence. The docufilm is the latest instalment in the company's Open Source Stories series, which traces the various ways in which AI has crept into our lives and surroundings.


Featuring commentary from AI luminaries like Nutonomy CEO Karl Iagnemma, Skymind CEO Chris Nicholson, Google researcher François Chollet and Duke University professor Mary Cummings, Road to AI takes a deep look at how AI is paving the way for self-driving cars to reach the masses.

"AI will be increasingly integral to our lives, to our society. It will become part of our basic infrastructure of society; it will become our interface to the world, to a world that will be increasingly information rich and complex. AI will change what it means to be human," says Chollet.

Building on this thought, Road to AI goes on to speculate that it is precisely AI that will save lives on the roads and help autonomous driving tech cement its way into mainstream ubiquity.

Road to AI premieres today with a debut on two fronts: online and at the Red Hat Summit in Boston. Watch the full documentary in the video section above.

Road to AI on Red Hat



Posted in AI

6 ways AI can improve how government works right now – GCN.com

Posted: at 11:03 pm

READ ME

What: "AI-augmented government: Using cognitive technologies to redesign public sector work," a report by the Deloitte Center for Government Insights that explores how governments can use artificial intelligence to become more efficient.

Why: At a minimum, AI could save 96.7 million federal hours annually, which would mean potential savings of $3.3 billion, Deloitte says.

Findings: AI can increase speed, enhance quality and reduce costs. Some of the possibilities include:

1. Overcome resource constraints: AI is much faster and more accurate than humans at sifting through large volumes of information. The Georgia Government Transparency and Campaign Finance Commission uses handwriting analysis software to speed the processing of the 40,000 pages of disclosures it receives every month.

2. Reduce paperwork: The federal government spends a half-billion hours every year on documenting and recording information. Robotics and cognitive automation could perform data entry and paperwork processing in any number of areas -- for child welfare workers, for example, leaving them more time for interaction with children and their families.

3. Cut backlogs: The U.S. Patent and Trademark Office's backlog of patent applications hinders innovation, but cognitive technologies can sift through large data backlogs and perform simple, repetitive actions, leaving difficult cases to human experts. Robotic process automation can automate workflow, in some cases with little human interaction.

4. Enable smart cities: When combined with internet-of-things infrastructure, AI can monitor the surrounding environment to dim street lighting, monitor pedestrian traffic and adjust traffic lights to ease rush hours.

5. Predict outcomes: Machine learning and natural-language processing can spot patterns and suggest responses. Measuring soldiers' vital signs with wearable physiological monitors lets the Army predict the seriousness of wounds and prioritize treatment, for example. The Southern Nevada Health District, meanwhile, uses AI to analyze Twitter posts to find restaurants where people reported food poisoning so it can direct investigations to those locations.

6. Answer questions: Automation can offload work from call centers that answer many of the same questions multiple times a day. The Army's SGT STAR virtual assistant, for example, helps recruits understand their different enlistment options, performing the work of 55 recruiters with a 94 percent accuracy rate.


About the Author

Matt Leonard is a reporter/producer at GCN.

Before joining GCN, Leonard worked as a local reporter for The Smithfield Times in southeastern Virginia. In his time there he wrote about town council meetings, local crime and what to do if a beaver dam floods your back yard. Over the last few years, he has spent time at The Commonwealth Times, The Denver Post and WTVR-CBS 6. He is a graduate of Virginia Commonwealth University, where he received the faculty award for print and online journalism.



Posted in AI

Artificial intelligence prevails at predicting Supreme Court decisions – Science Magazine

Posted: at 11:03 pm

Artificial intelligence can predict Supreme Court decisions better than some experts.

davidevison/iStockphoto

By Matthew Hutson, May 2, 2017, 1:45 PM

"See you in the Supreme Court!" President Donald Trump tweeted last week, responding to lower court holds on his national security policies. But is taking cases all the way to the highest court in the land a good idea? Artificial intelligence may soon have the answer. A new study shows that computers can do a better job than legal scholars at predicting Supreme Court decisions, even with less information.

Several other studies have guessed at justices' behavior with algorithms. A 2011 project, for example, used the votes of any eight justices from 1953 to 2004 to predict the vote of the ninth in those same cases, with 83% accuracy. A 2004 paper tried seeing into the future, by using decisions from the nine justices who'd been on the court since 1994 to predict the outcomes of cases in the 2002 term. That method had an accuracy of 75%.

The new study draws on a much richer set of data to predict the behavior of any set of justices at any time. Researchers used the Supreme Court Database, which contains information on cases dating back to 1791, to build a general algorithm for predicting any justice's vote at any time. They drew on 16 features of each vote, including the justice, the term, the issue, and the court of origin. Researchers also added other factors, such as whether oral arguments were heard.

For each year from 1816 to 2015, the team created a machine-learning statistical model called a random forest. It looked at all prior years and found associations between case features and decision outcomes. Decision outcomes included whether the court reversed a lower courts decision and how each justice voted. The model then looked at the features of each case for that year and predicted decision outcomes. Finally, the algorithm was fed information about the outcomes, which allowed it to update its strategy and move on to the next year.
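The year-by-year procedure described above is what machine-learning practitioners call walk-forward (expanding-window) evaluation. The sketch below implements that evaluation loop only; a trivial most-common-outcome guesser stands in for the study's random forest, and the data is invented, so this illustrates the scheme rather than reproducing the paper:

```python
from collections import Counter

def walk_forward_accuracy(cases):
    """cases: list of (year, features, outcome) tuples.
    For each year, train only on strictly earlier years, predict that
    year's outcomes, then fold the year into the training history."""
    years = sorted({year for year, _, _ in cases})
    correct = total = 0
    for year in years[1:]:  # need at least one earlier year to train on
        prior = [o for y, _, o in cases if y < year]
        # stand-in "model": predict the most common prior outcome
        # (a real replication would fit a random forest on the features)
        guess = Counter(prior).most_common(1)[0][0]
        for y, _, outcome in cases:
            if y == year:
                correct += guess == outcome
                total += 1
    return correct / total
```

Because each year's cases are scored before they enter the training history, the model never sees the future it is predicting, which is the property that makes the study's 70.2% figure meaningful.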

From 1816 until 2015, the algorithm correctly predicted 70.2% of the court's 28,000 decisions and 71.9% of the justices' 240,000 votes, the authors report in PLOS ONE. That bests the popular betting strategy of always guessing "reverse," which has been the correct call in 63% of Supreme Court cases over the last 35 terms. It's also better than another strategy that uses rulings from the previous 10 years to automatically go with a "reverse" or an "affirm" prediction. Even knowledgeable legal experts are only about 66% accurate at predicting cases, the 2004 study found. "Every time we've kept score, it hasn't been a terribly pretty picture for humans," says the study's lead author, Daniel Katz, a law professor at Illinois Institute of Technology in Chicago.

Roger Guimerà, a physicist at Rovira i Virgili University in Tarragona, Spain, and lead author of the 2011 study, says the new algorithm is rigorous and well done. Andrew Martin, a political scientist at the University of Michigan in Ann Arbor and an author of the 2004 study, commends the new team for producing an algorithm that works well over two centuries. "They're curating really large data sets and using state-of-the-art methods," he says. "That's scientifically really important."

Outside the lab, bankers and lawyers might put the new algorithm to practical use. Investors could bet on companies that might benefit from a likely ruling. And appellants could decide whether to take a case to the Supreme Court based on their chances of winning. "The lawyers who typically argue these cases are not exactly bargain basement priced," Katz says.

Attorneys might also plug different variables into the model to forge their best path to a Supreme Court victory, including which lower court circuits are likely to rule in their favor, or the best type of plaintiff for a case. Michael Bommarito, a researcher at Chicago-Kent College of Law and study co-author, offers a real example in National Federation of Independent Business v. Sebelius, in which the Affordable Care Act was on the line: "One of the things that made that really interesting was: Was it about free speech, was it about taxation, was it about some kind of health rights issues?" The algorithm might have helped the plaintiffs decide which issue to highlight.

Future extensions of the algorithm could include the full text of oral arguments or even expert predictions. Says Katz: "We believe the blend of experts, crowds, and algorithms is the secret sauce for the whole thing."


Posted in Artificial Intelligence

Does Artificial Intelligence Discriminate? – Forbes

Posted: at 11:03 pm


As the old joke goes, on the internet nobody knows you're a dog. But thanks to the rise of artificial intelligence, not only do today's machines know you're a canine, they can tell what color dog you are, and may treat you differently as a result. AI ...


Posted in Artificial Intelligence

Why Mark Cuban is Dead Wrong About Twitter and Artificial Intelligence – Inc.com

Posted: at 11:03 pm

Twitter hasn't done anything interesting with AI lately.

I know this because whatever machine learning they use to stop online harassment is more like an email filter to weed out some political fluff from your inbox or kill spam. Users are still able to create fake accounts, send harassing tweets, criticize you over and over again, remain completely anonymous, and come up with a variety of unflattering slams against celebrities that are never caught by the filters and go completely ignored for days or weeks on end.

That's what makes Mark Cuban's comments today about investing in Twitter because of their foray into AI a bit perplexing. What AI? The one that still lets trolls do whatever they want? The one that allows a tweet through that tells me to stick a fork into my frontal lobe?

There might be some confusion on this topic.

Recently, the company did implement new algorithms that can limit the accounts of users who show a pattern of abuse, something that is not exactly an AI. And, they've talked about using IBM Watson to help, but that's not exactly developing the AI in house.

An AI designed and developed by Twitter would identify abusive content based on context and be able to warn the offending user in real time, before he or she ever posts it, say, when the user types a message that is obviously hurtful and tries to post it. Facebook has the same issue, because today you can post a revenge porn image or say something derogatory, and it's only when another user identifies the harmful comments or images that any pattern recognition kicks in.

The problem, of course, is that Twitter wants to appear intelligent. They haven't fully addressed the problem, and have let the issue slide since a blog post way back in March. If they are making progress on actual machine learning, they haven't let any of us know. Today, you can still tell someone to commit suicide or send other abusive comments--any real form of artificial intelligence would spot that and block it.

Twitter is walking a tightrope here. To block very hurtful comments that do not use hate speech (something like "why don't you step in front of a truck") could be perceived as limiting free speech. An AI is not able to tell the difference quite yet. Context (who said what and why, when they said it, who they know, how often they engage in conversation) is difficult even for humans at times. I've received multiple tweets this last week that I took as hurtful and harmful, and I'd rather not see them, but it's all part of living in the age of trolls. If Twitter made a better AI, I could simply choose to block these tweets. If someone keeps sending death threats, or I report that person, or they use abusive words or hate speech, then Twitter's machine learning might kick in, or it might not. The problem is that, if the AI is only partially effective, is it really effective at all?
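The column's distinction between a filter and an AI can be made concrete with a toy example. A keyword filter, like the one sketched below (the blocklist is hypothetical and has nothing to do with Twitter's actual systems), flags overt insults but necessarily passes contextually abusive messages built entirely from benign words:

```python
BLOCKED_WORDS = {"idiot", "loser"}  # hypothetical blocklist, not Twitter's

def keyword_filter(tweet):
    """A spam-filter-style check: flags a tweet only when it contains
    a blocked word, with no notion of context or intent."""
    words = {w.strip(".,!?").lower() for w in tweet.split()}
    return bool(words & BLOCKED_WORDS)
```

This filter flags "You absolute idiot!" but passes "Why don't you step in front of a truck," which contains no blocked word at all. Catching the second message requires some model of context and intent, which is exactly the hard part the column argues Twitter has not yet built.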

Meanwhile, an entire generation of people under 30 have moved over to Instagram and tend to avoid Twitter, which is widely known as a bastion of trolls.

There's often a serious misunderstanding about how an AI works. It's not just a filter or an algorithm. There has to be "intelligence" and understanding, a way to make a decision about context. That's the really hard part. Filters have existed for decades, but computer scientists know that an AI has to be able to deal with fuzzy logic and even moral quagmires. Is someone I know just joking around and telling me to jump off of a bridge? Is it someone who has a bone to pick with me because they invest in the company I'm criticizing? When I've tweeted before about email overload, the people who often send abusive comments just happen to be email marketers. An AI would have to understand that, and it's not exactly humming along nicely at Twitter.

There's no reason to think Twitter won't solve this, but for now, the machine learning is more like a spam filter. That doesn't seem like rocket science to me.


Posted in Artificial Intelligence

The skeptic’s guide to artificial intelligence – CIO Dive

Posted: at 11:03 pm

If your company is not embracing artificial intelligence, it is going to suffer. That's the message from all corners of the tech and business media world these days. CIO Dive is no exception. We've reported on the growing reliance on artificial intelligence everywhere from call centers to cloud computing to the airline industry.

Yet, the state of technology is still a long way from imbuing machines with human-level intelligence. It's even further from the merging of machine and humans that fans of the technological singularity believe will unlock our true potential and, eventually, immortality.

Despite the remarkable victory that Google's AI-powered AlphaGo scored against the world's top Go player, there is healthy debate around when machines will be able to truly attain human-like intelligence. That would mean a machine that could do more than just recognize patterns and learn from mistakes; it would also have to accurately respond to new information and understand unstructured data.

Plus, it is no easy task to transfer a given type of AI from one application to another. For example, The Nature Conservancy wants to fight illegal fishing by using facial recognition software, running on cameras mounted over haul-in decks on fishing boats, to mark whenever an endangered or non-target species is brought aboard and not thrown back.

But it's not as simple as uploading a catalog of fish faces and pressing enter. Constantly changing light, variations in the orientation of the fish to the camera, and the movement of the boat all complicate matters. Kaggle, a code crowdsourcing platform, recently held a contest to incentivize coders to write software that addressed those variables.

Yet the more pressing question around AI is not whether it has truly arrived, but whether the AI features vendors are trying to sell your company actually, and consistently, work as advertised, and whether they will meet your objectives.

The healthcare industry stands to benefit significantly from AI. Wearable devices are being developed to track changes or patterns in a patient's vital signs that could signal an approaching cardiac event. When one is detected, a physician can be alerted automatically. Of course, that's very different than relying on technology to make an actual clinical decision.

But a number of companies are starting to sell digital health assistants. This technology accesses a patient's medical records and analyzes input from the user to assess symptoms and suggest a possible diagnosis. So why should we trust these platforms?

That's a question that Zoubin Ghahramani, professor of Information Engineering at the University of Cambridge, has spent a lot of time pondering.

We know that machine learning improves over time, as the software accrues more data and essentially learns from past events. So what if, Ghahramani and his team posit in a University of Cambridge research brief, we designed artificial intelligence with training wheels of sorts? Vehicles with autopilot mode, for instance, might ping a driver for help in unfamiliar territory if the car's cameras or sensors are not capturing adequate data for processing.

But unless you've actually written the algorithms that power that autopilot, or any other piece of AI technology, it is not clear how the system reached a decision, or whether that decision was sound.

"We really view the whole mathematics of machine learning as sitting inside a framework of understanding uncertainty. Before you see data, whether you are a baby learning a language or a scientist analyzing some data, you start with a lot of uncertainty, and then as you have more and more data you have more and more certainty," Ghahramani said.

"When machines make decisions, we want them to be clear on what stage they have reached in this process," he said. "And when they are unsure, we want them to tell us."
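Ghahramani's picture, certainty accruing with data and a machine that says when it is unsure, is essentially Bayesian updating. The sketch below uses a simple Beta-Bernoulli model to illustrate the idea; the class name and the deferral threshold are illustrative choices, not drawn from any Cambridge or Leverhulme codebase.

```python
import math

class UncertainEstimator:
    """Beta-Bernoulli estimate of a binary event rate, with an explicit
    'I am unsure' signal while the posterior is still wide."""

    def __init__(self):
        self.a, self.b = 1.0, 1.0  # uniform prior: maximum uncertainty

    def observe(self, success: bool):
        # Each observation updates the posterior counts.
        if success:
            self.a += 1
        else:
            self.b += 1

    def estimate(self):
        n = self.a + self.b
        mean = self.a / n
        # Standard deviation of the Beta(a, b) posterior.
        std = math.sqrt(self.a * self.b / (n * n * (n + 1)))
        return mean, std

    def decide(self, max_std=0.05):
        mean, std = self.estimate()
        if std > max_std:
            return "ask a human"  # the "tell us when unsure" behaviour
        return "success likely" if mean > 0.5 else "failure likely"

est = UncertainEstimator()
print(est.decide())        # "ask a human": prior only, posterior std ~ 0.29
for _ in range(200):
    est.observe(True)      # evidence accumulates, uncertainty shrinks
for _ in range(20):
    est.observe(False)
print(est.decide())        # "success likely": std now well under the threshold
```

The deferral branch is the "training wheels" idea from the research brief: rather than always committing to an answer, the system reports which stage of the certainty-accrual process it has reached and hands off when the posterior is still too wide.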

Last year, with collaborators from the University of Oxford, Imperial College London, and the University of California, Berkeley, Ghahramani helped launch the Leverhulme Centre for the Future of Intelligence.

One of the center's areas of study is trust and transparency around AI; other areas of focus include policy, security, and the impact AI could have on personhood.

Adrian Weller, a senior researcher on Ghahramani's team and the trust and transparency research leader at the Leverhulme Centre for the Future of Intelligence, explained that AI systems based on machine learning use processes to arrive at decisions that do not mimic the "rational decision-making pathways" that humans comprehend. Using visualization and other approaches, the center is creating tools that can put AI decision processes in a human context.

The goal is not just to provide tools for cognitive scientists, but also for policy makers, social scientists, and even philosophers, because they will also take roles in integrating AI into society.

By making AI functions more transparent, such tools could help commercial users of AI and their customers understand how a system works, gauge its trustworthiness, and decide whether it is likely to meet their needs.

The tech industry has begun collaborating around guiding principles to help ensure AI is deployed in an ethical, equitable, and secure manner. Representatives from Amazon, Apple, Facebook, IBM, Google and Microsoft have joined with academics as well as groups including the ACLU and MacArthur Foundation to form the Partnership on AI.

It seeks to explore the influence of AI on society and is organized around themes that include safety, transparency, labor and social good.

But a system for rating AI features and ensuring compliance with basic quality metrics, similar to how Underwriters Laboratories ensures appliances meet basic safety or performance measures, could also go a long way toward helping end users evaluate AI products.

Read the original:

The skeptic's guide to artificial intelligence - CIO Dive


Democratizing Artificial Intelligence – Project Syndicate

Posted: at 11:03 pm

OXFORD: Artificial intelligence is the next technological frontier, and it has the potential to make or break the world order. The AI revolution could pull the bottom billion out of poverty and transform dysfunctional institutions, or it could entrench injustice and increase inequality. The outcome will depend on how we manage the coming changes.

Unfortunately, when it comes to managing technological revolutions, humanity has a rather poor track record. Consider the Internet, which has had an enormous impact on societies worldwide, changing how we communicate, work, and occupy ourselves. And it has disrupted some economic sectors, forced changes to long-established business models, and created a few entirely new industries.

But the Internet has not brought the kind of comprehensive transformation that many anticipated. It certainly didn't resolve the big problems, such as eradicating poverty or enabling us to reach Mars. As PayPal co-founder Peter Thiel once noted: "We wanted flying cars; instead, we got 140 characters."

In fact, in some ways, the Internet has exacerbated our problems. While it has created opportunities for ordinary people, it has created even more opportunities for the wealthiest and most powerful. A recent study by researchers at the LSE reveals that the Internet has increased inequality, with educated, high-income people deriving the greatest benefits online and multinational corporations able to grow massively while evading accountability.

Perhaps, though, the AI revolution can deliver the change we need. Already, AI, which focuses on advancing the cognitive functions of machines so that they can learn on their own, is reshaping our lives. It has delivered self-driving (though still not flying) cars, as well as virtual personal assistants and even autonomous weapons.

But this barely scratches the surface of AIs potential, which is likely to produce societal, economic, and political transformations that we cannot yet fully comprehend. AI will not become a new industry; it will penetrate and permanently alter every industry in existence. AI will not change human life; it will change the boundaries and meaning of being human.

How and when this transformation will happen, and how to manage its far-reaching effects, are questions that keep scholars and policymakers up at night. Expectations for the AI era range from visions of paradise, in which all of humanity's problems have been solved, to fears of dystopia, in which our creation becomes an existential threat.

Making predictions about scientific breakthroughs is notoriously difficult. On September 11, 1933, the famed nuclear physicist Lord Rutherford told a large audience, "Anyone who looks for a source of power in the transformation of the atoms is talking moonshine." The next morning, Leo Szilard hypothesized the idea of a neutron-induced nuclear chain reaction; soon thereafter, he patented the nuclear reactor.

The problem, for some, is the assumption that new technological breakthroughs are incomparable to those in the past. Many scholars, pundits, and practitioners would agree with Alphabet Executive Chairman Eric Schmidt that technological phenomena have their own intrinsic properties, which humans don't understand and should not mess with.

Others may be making the opposite mistake, placing too much stock in historical analogies. The technology writer and researcher Evgeny Morozov, among others, expects some degree of path dependence, with current discourses shaping our thinking about the future of technology, thereby influencing technologys development. Future technologies could subsequently impact our narratives, creating a sort of self-reinforcing loop.

To think about a technological breakthrough like AI, we must find a balance between these approaches. We must adopt an interdisciplinary perspective, underpinned by an agreed vocabulary and a common conceptual framework. We also need policies that address the interconnections among technology, governance, and ethics. Recent initiatives, such as the Partnership on AI or the Ethics and Governance of AI Fund, are a step in the right direction, but lack the necessary government involvement.

These steps are necessary to answer some fundamental questions: What makes humans human? Is it the pursuit of hyper-efficiency, the Silicon Valley mindset? Or is it irrationality, imperfection, and doubt, traits beyond the reach of any non-biological entity?

Only by answering such questions can we determine which values we must protect and preserve in the coming AI age, as we rethink the basic concepts and terms of our social contracts, including the national and international institutions that have allowed inequality and insecurity to proliferate. In a context of far-reaching transformation, brought about by the rise of AI, we may be able to reshape the status quo, so that it ensures greater security and fairness.

One of the keys to creating a more egalitarian future relates to data. Progress in AI relies on the availability and analysis of large sets of data on human activity, online and offline, to distinguish patterns of behavior that can be used to guide machine behavior and cognition. Empowering all people in the age of AI will require each individual, not major companies, to own the data they create.

With the right approach, we could ensure that AI empowers people on an unprecedented scale. Though abundant historical evidence casts doubt on such an outcome, perhaps doubt is the key. As the late sociologist Zygmunt Bauman put it, "questioning the ostensibly unquestionable premises of our way of life is arguably the most urgent of services we owe our fellow humans and ourselves."

See the rest here:

Democratizing Artificial Intelligence - Project Syndicate
