Daily Archives: August 20, 2017

Elon Musk and AI Experts Call for Total Ban on Robotic Weapons – Fortune

Posted: August 20, 2017 at 6:17 pm

One hundred and sixteen roboticists and AI researchers, including SpaceX founder Elon Musk and Google DeepMind co-founder Mustafa Suleyman, have signed a letter to the United Nations calling for strict oversight of autonomous weapons, a.k.a. "killer robots." Though the letter itself is more circumspect, an accompanying press release says the group wants "a ban on their use internationally."

Other signatories of the letter include executives and founders from Denmark's Universal Robotics, Canada's Element AI, and France's Aldebaran Robotics.

The letter describes the risks of robotic weaponry in dire terms, and says that the need for strong action is urgent. It is aimed at a group of UN officials considering adding robotic weapons to the UN's Convention on Certain Conventional Weapons. Dating back to 1981, the Convention and parallel treaties currently restrict chemical weapons, blinding laser weapons, mines, and other weapons deemed to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately.

Robotic warriors could arguably reduce casualties among human soldiers, at least those of the wealthiest and most advanced nations. But the risk to civilians is the headline concern of Musk and Suleyman's group, who write that these "can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways."

The letter also warns that failure to act swiftly will lead to an arms race toward killer robots, but that's arguably already underway. Autonomous weapons systems or precursor technologies are available or under development from firms including Raytheon, Dassault, MiG, and BAE Systems.

Element AI founder Yoshua Bengio had another intriguing warning: that weaponizing AI could actually hurt the further development of AI's good applications. That's precisely the scenario foreseen in Frank Herbert's sci-fi novel Dune, set in a universe where all thinking machines are banned because of their role in past wars.

The UN weapons group was due to meet on Monday, August 21, but that meeting has reportedly been delayed until November.

See the original post:

Elon Musk and AI Experts Call for Total Ban on Robotic Weapons - Fortune

Posted in Ai | Comments Off on Elon Musk and AI Experts Call for Total Ban on Robotic Weapons – Fortune

AI-powered filter app Prisma wants to sell its tech to other companies – The Verge

Posted: at 6:17 pm

Prisma, the Russian company best known for its AI-powered photo filters, is shifting to B2B. The company won't retire its popular app, but says that in the future it will focus on selling machine vision tools to other tech firms.

"We see big opportunities in deep learning and communication," Prisma CEO and co-founder Alexey Moiseenkov told The Verge. "We feel that a lot of companies need expertise in this area. Even Google is buying companies for computer vision. We can help companies put machine vision in their app because we understand how to implement the technology." The firm has launched a new website, prismalabs.ai, to promote these services.

Prisma will offer a number of off-the-shelf vision tools, including segmentation (separating the foreground of a photo from the background), face mapping, and both scene and object recognition. The company's expertise is getting these sorts of systems, powered by neural networks, to run locally on-device. This can be a tricky task, but avoiding the cloud can result in apps that are faster, more secure, and less of a drain on phone and tablet battery life.
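Prisma has not published its pipeline, so the following is only a toy sketch of what per-pixel segmentation running locally looks like; the architecture and all names here are invented for illustration, not Prisma's actual technology.

```python
# Illustrative sketch only: a toy per-pixel foreground/background segmenter
# of the kind that can run entirely on-device, with no cloud round-trip.
import torch
import torch.nn as nn

class TinySegmenter(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, kernel_size=3, padding=1),  # one channel: foreground logit
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))  # per-pixel foreground probability

model = TinySegmenter().eval()
image = torch.rand(1, 3, 64, 64)      # stand-in for a camera frame
with torch.no_grad():                 # inference only, runs locally
    mask = model(image)
foreground = mask > 0.5               # boolean foreground mask
print(foreground.float().mean().item())  # fraction of pixels marked foreground
```

Because inference here is a single forward pass with no network calls, latency, privacy, and battery cost stay under the app's control, which is the selling point described above.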

Getting copied by Facebook might help account for its pivot to B2B

Although Prisma's painting-inspired filters were all the rage last year (the app itself was released in June 2016), they were soon copied by the likes of Facebook, which might account for the Russian company's change in direction.

Moiseenkov denies this is the case, and says it wasn't his intention to compete with bigger social networks. "We never thought we were a competitor of Facebook; we're a small startup, with a small budget," he said. But, he says, the popularity of these deep learning filters shows there are plenty of consumer applications for the latest machine vision tech.

Moiseenkov says his company will continue to support the Prisma app, which will act as a showcase for the firm's latest experiments. He says the app still has between 5 million and 10 million monthly active users, most of whom are based in the US. The company also started experimenting with selling sponsored filters on its main app last year, and says it will continue to do so. It also launched an app for turning selfies into chat stickers.

There have been rumors that Prisma would get bought out by a bigger company. Moiseenkov visited Facebook's headquarters last year, and the US tech giant has made similar acquisitions in the past, buying Belarusian face-filter startup MSQRD in March 2016. When asked if the company would consider a similar deal, co-founder Aram Airapetyan replied over email: "We want to go on doing what we do and what we can do best. The whole team is super motivated and passionately committed to what we do! So the rest doesn't matter (where, when, with whom)." Make of that what you will.

See original here:

AI-powered filter app Prisma wants to sell its tech to other companies - The Verge

Posted in Ai | Comments Off on AI-powered filter app Prisma wants to sell its tech to other companies – The Verge

Merging big data and AI is the next step – TNW

Posted: at 6:17 pm

AI is one of the hottest trends in tech at the moment, but what happens when it's merged with another fashionable and extremely promising technology?

Researchers are looking for ways to take big data to the next level by combining it with AI. We've only recently realized how powerful big data can be, and by uniting it with AI, big data is swiftly marching toward a level of maturity that promises a bigger, industry-wide disruption.

The application of artificial intelligence to big data is arguably the most important breakthrough of our time. It redefines how businesses create value with the help of data. The availability of big data has fostered unprecedented breakthroughs in machine learning that would not have been possible before.

With access to large volumes of data, businesses can now derive meaningful insights and come up with remarkable results. It is no wonder, then, that businesses are quickly moving from a hypothesis-based research approach to a more focused data-first strategy.

Businesses can now process massive volumes of data, which was not possible before due to technical limitations; previously, they had to buy powerful and expensive hardware and software. The widespread availability of data is the most important paradigm shift, and it has fostered a culture of innovation in the industry.

The availability of massive datasets has coincided with remarkable breakthroughs in machine learning, mainly due to the emergence of better, more sophisticated AI algorithms.

The best example of these breakthroughs is virtual agents. Virtual agents (more commonly known as chatbots) have gained impressive traction over time. Previously, chatbots had trouble identifying certain phrases, regional accents, dialects, or nuances.

In fact, most chatbots get stumped by the simplest of words and expressions, such as mistaking "queue" for "Q." With the union of big data and AI, however, we can see new breakthroughs in the way virtual agents self-learn.

A good example of a self-learning virtual agent is Amelia, a cognitive agent recently developed by IPsoft. Amelia can understand everyday language, learns quickly, and even gets smarter with time.

She is deployed at the help desk of the Nordic bank SEB, along with a number of public-sector agencies. The reaction of executive teams to Amelia has been overwhelmingly positive.

Google is also delving deeper into big-data-powered AI learning. DeepMind, Google's own artificial intelligence company, has developed an AI that can teach itself to walk, run, jump, and climb without any prior guidance. The AI was never taught what walking or running is, but managed to learn them through trial and error.

The implications of these breakthroughs in the realm of artificial intelligence are astounding and could provide the foundation for further innovations in the years to come. However, self-learning algorithms have dire repercussions too, and if you weren't too busy to notice, you may have observed quite a few in the past.

Not long ago, Microsoft introduced its own artificial intelligence chatbot, named Tay. The bot was made available to the public for chatting and could learn through human interactions. However, Microsoft pulled the plug on the project only a day after the bot was introduced to Twitter.

Learning at an exponential rate, mainly through human interactions, Tay transformed from an innocent AI teen girl into an evil, Hitler-loving, incestuous, sex-promoting, "Bush did 9/11"-proclaiming robot in less than 24 hours.

Some fans of sci-fi movies like Terminator also voice concerns that, with the access it has to big data, artificial intelligence may become self-aware and initiate massive cyberattacks or even take over the world. More realistically, it may replace human jobs.

Looking at the rate of AI learning, we can understand why a lot of people around the world are concerned about self-learning AI and the access it enjoys to big data. Whatever the case, the prospects are both intriguing and terrifying.

There is no telling how the world will react to the amalgamation of big data and artificial intelligence. However, like everything else, it has its virtues and vices. For example, it is true that self-learning AI will herald a new age in which chatbots become more efficient and sophisticated in answering user queries.

Perhaps we will eventually see AI bots at help desks in banks, waiting to greet us. And, through self-learning, the bot will have all the knowledge it could ever need to answer our queries in a manner unlike any human assistant.

Whatever the applications, we can safely say that combining big data with artificial intelligence will herald an age of new possibilities, astounding breakthroughs, and innovations in technology. Let's just hope that the virtues of this union outweigh the vices.

Link:

Merging big data and AI is the next step - TNW

Posted in Ai | Comments Off on Merging big data and AI is the next step – TNW

Is AI More Threatening Than North Korean Missiles? – NPR

Posted: at 6:17 pm

In this April 30, 2015, file photo, Tesla Motors CEO Elon Musk unveils the company's newest products in Hawthorne, Calif. (Ringo H.W. Chiu/AP)

One of Tesla CEO Elon Musk's companies, the nonprofit startup OpenAI, built an AI system that last week defeated some of the world's top gamers in an international video game (e-sport) tournament with a multi-million-dollar pot of prize money.

We're getting very good, it seems, at making machines that can outplay us at our favorite pastimes. Machines dominate Go, Jeopardy!, chess, and, as of now, at least some video games.

Instead of crowing over the win, though, Musk is sounding the alarm. Artificial Intelligence, or AI, he argued last week, poses a far greater risk to us now than even North Korean warheads.

No doubt Musk's latest pronouncements make for good advertising copy. What better way to drum up interest in a product than to announce that, well, it has the power to destroy the world.

But is it true? Is AI a greater threat to mankind than the threat posed to us today by an openly hostile, well-armed and manifestly unstable enemy?

AI means at least three things.

First, it means machines that are faster, stronger and smarter than us, machines that may one day soon, HAL-like, come to make their own decisions and make up their own values and so even to rule over us, just as we rule over the cows. This is a very scary thought, not least when you consider how we have ruled over the cows.

Second, AI means really good machines for doing stuff. I used to have a coffee machine that I'd set with a timer before going to bed; in the morning I'd wake up to the smell of fresh coffee. My coffee maker was a smart, or at least smart-ish, device. Most of the smart technologies, the AIs, in our phones, and airplanes, and cars, and software programs including the ones winning tournaments are pretty much like this. Only more so. They are vastly more complicated and reliable, but they are, finally, only smart-ish. The fact that some of these new systems "learn," and that they come to be able to do things that their makers cannot do, like win at Go or Dota, is really beside the point. A steam hammer can do what John Henry can't, but in the end, the steam hammer doesn't really do anything.

Third, AI is a research program. I don't mean a program in high-tech engineering. I mean, rather, a program investigating the nature of the mind itself. In 1950, the great mathematician Alan Turing published a paper in a philosophy journal in which he argued that by the year 2000 we would find it entirely natural to speak of machines as intelligent. But more significantly, working as a mathematician, he had devised a formal system for investigating the nature of computation that showed, as philosopher Daniel Dennett puts it in his recent book, that you can get competence (the ability to solve problems) without comprehension (by merely following blind rules mechanically). It was not long before philosopher Hilary Putnam would hypothesize that the mind is a Turing machine (and a Turing machine just is, for all intents and purposes, what we call a computer today). And thus the circle closes: to study computational minds is to study our minds, and to build an AI is, finally, to try to reverse-engineer ourselves.

Now, Type 3 AI, this research program, is alive and well, and a continuing chapter in our intellectual history that is of genuine excitement and importance. This, even though Putnam's original hypothesis is wildly implausible (and was given up by Putnam himself decades ago). To give just one example: the problem of the inputs and the outputs. A Turing machine works by performing operations on inputs. For example, it might erase a 1 on a cell of its tape and replace it with a 0. The whole method depends on being able to give a formal specification of a finite number of inputs and outputs. We can see how that goes for 1s and 0s. But what are the inputs, and what are the outputs, for a living animal, let alone a human being? Can we give a finite list, specified in formal terms, of everything we can perceive, let alone do?
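To see how demanding that formal specification is, here is a minimal sketch of a Turing-machine-style computation in Python. Everything the machine can read, write, and do must be enumerated in advance in a finite rule table; the task and names are invented for illustration.

```python
# A minimal Turing machine simulator. The point: every input symbol, output
# symbol, and transition is given as a finite formal table. The machine
# "solves" its task (inverting a binary string) without comprehending it.
def run_turing_machine(tape, rules, state="start"):
    tape = list(tape)
    pos = 0
    # Halts on an explicit "halt" state or by running off the tape's end.
    while state != "halt" and 0 <= pos < len(tape):
        write, move, state = rules[(state, tape[pos])]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape)

# Finite rule table: (state, read symbol) -> (write symbol, move, next state)
invert_rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
}

print(run_turing_machine("10110", invert_rules))  # -> "01001"
```

For 1s and 0s, the table above is two lines long. The author's challenge is that no such finite table is in view for everything an animal can perceive or do.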

And there are other problems, too. To mention only one: We don't understand how the brain works. And this means that we don't know that the brain functions, in any sense other than metaphorical, like a computer.

Type 1 AI, the nightmare of machine dominance, is just that, a nightmare, or maybe (for the capitalists making the gizmos) a fantasy. Depending on what we learn pursuing the philosophy of AI, and as luminaries like John Searle and the late Hubert Dreyfus have long argued, it may be an impossible fiction.

Whatever our view on this, there can be no doubt that the advent of smart, rather than smart-ish, machines, the sort of machines that might actually do something intelligent on their own initiative, is a long way off. Centuries off. The threat of nuclear war with North Korea is both more likely and more immediate than this.

Which does not mean, though, that there is not in fact real cause for alarm posed by AI. But if so, we need to turn our attention to Type 2 AI: the smart-ish technologies that are everywhere in our world today. The danger here is not posed by the technologies themselves. They aren't out to get us. They are not going to be out to get us any time soon. The danger, rather, is our increasing dependence on them. We have created a technosphere in which we are beholden to technologies and processes that we do not understand. I don't mean that just you and I don't understand: no one person can understand. It's all gotten too complicated. It takes a whole team, or maybe a university, to understand adequately all the mechanisms that enable, for example, air traffic control, or drug manufacture, or the successful production and maintenance of satellites, or the electricity grid, not to mention your car.

Now this is not a bad thing in itself. We are not isolated individuals all alone, and we never have been. We are social animals, and it is fine and good that we should depend on each other and on our collective.

But are we rising to the occasion? Are we tending our collective? Are we educating our children and organizing our means of production to keep ourselves safe and self-reliant and moving forward? Are we taking on the challenges that, to some degree, are of our own making? How to feed 7 billion people in a rapidly warming world?

Or have we settled? Too many of us, I fear, have taken up a "user" attitude to the gear of our world. We are passive consumers. Like the child who thinks chickens come from supermarkets, we are hopelessly alienated from how things work.

And if we are, then what are we going to do if some clever young person somewhere, maybe a young lady in North Korea, writes a program to turn things off? This is a serious and immediately pressing danger.

Alva Noë is a philosopher at the University of California, Berkeley, where he writes and teaches about perception, consciousness and art. He is the author of several books, including his latest, Strange Tools: Art and Human Nature (Farrar, Straus and Giroux, 2015). You can keep up with more of what Alva is thinking on Facebook and on Twitter: @alvanoe.

See original here:

Is AI More Threatening Than North Korean Missiles? - NPR

Posted in Ai | Comments Off on Is AI More Threatening Than North Korean Missiles? – NPR

I was worried about artificial intelligenceuntil it saved my life – Quartz

Posted: at 6:16 pm

Earlier this month, tech moguls Elon Musk and Mark Zuckerberg debated the pros and cons of artificial intelligence from different corners of the internet. While SpaceX's CEO is more of an alarmist, insisting that we should approach AI with caution and that it poses a fundamental existential risk, Facebook's founder leans toward a more optimistic future, dismissing doomsday scenarios in favor of AI helping us build a brighter future.

I now agree with Zuckerberg's sunnier outlook, but I didn't always.

Beginning my career as an engineer, I was interested in AI, but I was torn about whether advancements would go too far too fast. As a mother with three kids entering their teens, I was also worried that AI would disrupt the future of my children's education, work, and daily life. But then something happened that forced me into the affirmative.

Imagine for a moment that you are a pathologist and your job is to scroll through 1,000 photos every 30 minutes, looking for one tiny outlier on a single photo. You're racing the clock to find a microscopic needle in a massive data haystack.

Now, imagine that a woman's life depends on it. Mine.

This is the nearly impossible task that pathologists face every day. To treat the 250,000 women in the US who will be diagnosed with breast cancer this year, each medical worker must analyze an immense amount of cell tissue to identify whether their patient's cancer has spread. Limited by time and resources, they often get it wrong; a recent study found that pathologists accurately detect tumors only 73.2% of the time.

In 2011 I found a lump in my breast. Both my family doctor and I were confident that it was a fibroadenoma, a common noncancerous (benign) breast lump, but she recommended I get a mammogram to make sure. While the original lump was indeed a fibroadenoma, the mammogram uncovered two unknown spots. My journey into the unknown started there.

Since AI imaging was not available at the time, I had to rely solely on human analysis. The next four years were a blur of ultrasounds, biopsies, and surgeries. My well-intentioned network of doctors and specialists were not able to diagnose or treat what turned out to be a rare form of cancer, and repeatedly attempted to remove my recurring tumors through surgery.

After four more tumors, five more biopsies, and two more operations, I was heading toward a double mastectomy and terrified at the prospect of the cancer spreading to my lungs or brain.

I knew something needed to change. In 2015, I was introduced to a medical physicist who decided to take a different approach, using big data and a machine-learning algorithm to spot my tumors and treat my cancer with radiation therapy. While I was nervous about leaving my therapy up to this new technology, it, combined with the right medical knowledge, was able to stop the growth of my tumors. I'm now two years cancer-free.

I was thankful for the AI that saved my life, but then that very same algorithm changed my son's potential career path.

The positive impact of machine learning is often overshadowed by the doom and gloom of automation. Fearing for their own jobs and their children's future, people often choose to focus on the potential negative repercussions of AI rather than the positive changes it can bring to society.

After seeing what this radiation treatment was able to do for me, my son applied to a university program in radiology technology to explore a career path in medical radiation. He had met countless radiology technicians throughout my years of treatment and was excited to start his training in a specialized program. However, during his application process, the program was cancelled: he was told there were no longer enough jobs in the radiology industry to warrant the program's continuation. Many positions have been lost to automation, the same kind of technology and machine learning that helped me in my battle with cancer.

This was a difficult period for both my son and me: the very thing that had saved my life prevented him from following the path he had planned. He had to rethink his education mid-application, when it was too late to apply for anything else, and he was worried that his backup plans would fall through.

He's now pursuing a future in biophysics rather than medical radiation, starting with an undergraduate degree in integrated sciences. In retrospect, we both realize that the experience forced him to rethink his career and unexpectedly opened up his thinking about which research areas will have the most impact on people's lives in the future.

Although some medical professionals will lose their jobs to AI, the life-saving benefits to patients will be magnificent. Beyond cancer detection and treatment, medical professionals are using machine learning to improve their practice in many ways. For instance, Atomwise applies AI to fuel drug discovery, Deep Genomics uses machine learning to help pharmaceutical companies develop genetic medicines, and Analytics 4 Life leverages AI to better detect coronary artery disease.

While not all transitions from automated roles will be as easy as my son's pivot to a different scientific field, I believe that AI has the potential to shape our future careers in a positive way, even helping us find jobs that make us happier and more productive.

As this technology rapidly develops, the future is clear: AI will be an integral part of our lives and bring massive changes to our society. It's time to stop debating (looking at you, Musk and Zuckerberg) and start accepting AI for what it is: both the good and the bad.

Throughout the years, I've found myself on both sides of the equation, arguing both for and against the advancement of AI. But it's time to stop taking a selective view of AI, choosing to incorporate it into our lives only when convenient. We must create solutions that mitigate AI's negative impact and maximize its positive potential. Key stakeholders, including governments, corporations, and technologists, need to create policies, join forces, and dedicate themselves to this effort.

And we're seeing great progress. AT&T recently began retraining thousands of employees to keep up with technology advances, and Google recently dedicated millions of dollars to prepare people for an AI-dominated workforce. I'm hopeful that these initiatives will allow us to focus on all the good that AI can do for our world and open our eyes to the potential lives it can save.

One day, yours just might depend on it, too.

More:

I was worried about artificial intelligenceuntil it saved my life - Quartz

Posted in Artificial Intelligence | Comments Off on I was worried about artificial intelligenceuntil it saved my life – Quartz

How Will Artificial Intelligence Change the Classroom? – The Good Men Project (blog)

Posted: at 6:16 pm

Have a question? Ask Siri. Want to order a pizza? Amazon Echo is there for you. There are AI technologies to help us park our cars, polish our photos, and automate our time.

The technology hasn't been as widespread in K-12 education, but that could soon change. One recent piece of research predicted that the market for AI in education could grow more than 47 percent by 2021.

Artificial intelligence can be an excellent supplement to the work of the teacher.

Personalized Tutors for Students

Anyone who has spent time around children understands that they learn at different rates. Some students are also auditory or visual learners. Personalized AI tutors within the classroom could offer an alternative method for teaching students the fundamentals.

The technology is here and being improved upon. IBM and Microsoft are working on classroom applications. Other examples of tutoring AI include Thinkster Math, Carnegie Learning, and Third Space Learning. Essentially, AI can help with the basic skills so that teachers can focus on more advanced, creativity-based topics, at least until the technology is more developed.

It's conceivable that at some point we could have AI programs that serve as companions to students throughout their K-12 education, collecting data on each student and offering custom solutions along the way.

A New Kind of Teacher's Assistant

AI might be able to help the teacher with classroom tasks as well. Since artificial intelligence is good at handling repetitive tasks, it could be an asset for grading. Teachers can use any newfound time to better interact with students.

More Customized Lessons

Teachers could conceivably use AI to create more customized lessons for their students. The information for lessons can be compiled in a more personalized way than what appears inside the course textbook.

Helping Students Learn

Traditionally, the K-12 system in the United States has been designed to prepare students for manufacturing work, or to help them develop the skills they'll need as they select a career and stay with an employer for many years.

The reality is that the economy and the workplace are changing. AI can help prepare our grade-school students for jobs that don't yet exist. Bringing AI into the classroom can be an innovative tool to help teach students the skills they will need for the future.

Matt Brennan is a marketing copywriter, occasional parenting writer, and journalist in the Chicago area. He is also the author of Write Right-Sell Now.

The rest is here:

How Will Artificial Intelligence Change the Classroom? - The Good Men Project (blog)

Posted in Artificial Intelligence | Comments Off on How Will Artificial Intelligence Change the Classroom? – The Good Men Project (blog)

America Can’t Afford to Lose the Artificial Intelligence War | The … – The National Interest Online

Posted: at 6:16 pm

Today, the question of artificial intelligence (AI) and its role in future warfare is becoming far more salient and dramatic than ever before. Rapid progress in driverless cars in the civilian economy has helped us all see what may become possible in the realm of conflict. All of a sudden, it seems, terminators are no longer the stuff of exotic and entertaining science-fiction movies, but a real possibility in the minds of some. Innovator Elon Musk warns that we need to start thinking about how to regulate AI before it destroys most human jobs and raises the risk of war.

It is good that we start to think this way. Policy schools need to make AI a central part of their curricula; ethicists and others need to debate the pros and cons of various hypothetical inventions before the hypothetical becomes real; military establishments need to develop innovation strategies that wrestle with the subject. However, we do not believe that AI can or should be stopped dead in its tracks now; for the next stage of progress, at least, the United States must rededicate itself to being first in this field.

First, a bit of perspective. AI is of course not entirely new. Remotely piloted vehicles may not really qualify; after all, they are humanly, if remotely, piloted. But cruise missiles already fly to an aimpoint and detonate their warheads automatically. So would nuclear warheads on ballistic missiles, if, God forbid, nuclear-tipped ICBMs or SLBMs were ever launched in combat. Semi-autonomous systems are already in use on the battlefield, like the U.S. Navy Phalanx Close-In Weapon System, which is capable of "autonomously performing its own search, detect, evaluation, track, engage, and kill assessment functions," according to the official Defense Department description, along with various other fire-and-forget missile systems.

But what is coming are technologies that can learn on the job: not simply follow prepared plans or detailed algorithms for detecting targets, but develop their own information and their own guidelines for action, based on conditions they encounter that were not specifically foreseeable.

A case in point is what our colleague at Brookings, retired Gen. John Allen, calls "hyperwar." He develops the idea in a new article in the journal Proceedings, coauthored with Amir Husain. They imagine swarms of self-propelled munitions that, in attacking a given target, deduce patterns of behavior of the target's defenses and find ways to circumvent them, aware all along of the capabilities and coordinates of their teammates in the attack (the other self-propelled munitions). This is indeed about the place where the word robotics no longer does justice to what is happening, since that term implies a largely pre-scripted process or series of actions. What happens in hyperwar is not only fundamentally adaptive, but also so fast that it far supersedes what could be accomplished by any weapons system with humans in the loop. Other authors, such as former Brookings scholar Peter Singer, have written about related technologies in a partly fictional sense. Now, Allen and Husain are not just seeing into the future, but laying out a near-term agenda for defense innovation.

The United States needs to move expeditiously down this path. People have reasons to fear fully autonomous weaponry, but if a Terminator-like entity is what they are thinking of, their worries are premature. That software technology is still decades away, at the earliest, along with the required hardware. However, what will be available sooner is technology that can decide what or who is a target (based on the specific rules laid out by the programmer of the software, which could be highly conservative and restrictive) and fire upon that target without any human input.

To see why outright bans on AI activities would not make sense, consider a simple analogy. Despite many states having signed the Non-Proliferation Treaty, which aims to prevent the spread of nuclear weapons, the treaty has not prevented North Korea from building a nuclear arsenal. But at least we have our own nuclear arsenal with which we can attempt to deter other such countries, a tactic that has been generally successful to date. A preemptive ban on AI development would not be in the United States' best interest because non-state actors and noncompliant states could still develop it, leaving the United States and its allies behind. The ban would not be verifiable, and it could therefore amount to unilateral disarmament. If Western countries decided to ban fully autonomous weaponry and a country like North Korea fielded it in battle, the result would be a highly fraught and dangerous situation.

To be sure, we need the debate about AI's longer-term future, and we need it now. But we also need the next generation of autonomous systems, and America has a strong interest in getting them first.

Michael O'Hanlon is a senior fellow at the Brookings Institution. Robert Karlen is a student at the University of Washington and an intern in the Center for Twenty-First Century Security and Intelligence at the Brookings Institution.

More:

America Can't Afford to Lose the Artificial Intelligence War | The ... - The National Interest Online

Posted in Artificial Intelligence | Comments Off on America Can’t Afford to Lose the Artificial Intelligence War | The … – The National Interest Online

I Built an Artificial-Intelligence System for Investing — and It Showed How Smart Warren Buffett Is – Motley Fool

Posted: at 6:16 pm

I was curious.

The idea of driverless cars has been, and still is, fascinating to me. And so is the technology that makes driverless cars possible -- artificial intelligence (AI). Like many, I grew up enjoying books, movies, and TV shows that featured machines that could think in ways similar to humans. But what was once science fiction has now become reality.

A couple of years ago, I began a quest to really understand how AI works. I already knew the general concepts, but I wanted to delve into the nitty-gritty of one of the most revolutionary technologies of all time. So I read everything I could get my hands on, from fairly high-level books to textbooks to websites geared toward AI developers.

Along the way, I decided to build my own AI system. One thing even more interesting to me than AI is investing, so I decided to combine the two and develop an AI system that could make investment recommendations. That system is now up and running. And it told me just how smart Warren Buffett really is.

AI includes quite a few approaches. One that especially intrigued me was artificial neural networks. The idea for artificial neural networks originated back in the 1940s, when a neurophysiologist and a mathematician teamed up to write a paper about how neurons in the human brain might work. Based on their research, they built a simple neural network using electrical circuits.

Fast-forward to today. Artificial neural networks are used in many AI applications. Facebook (NASDAQ:FB), for example, uses neural networks to recognize the faces of people in photos and to decide which advertisements to display to which users. Apple (NASDAQ:AAPL) uses neural networks to enable Siri to recognize what people ask and respond to questions.

Artificial neural networks work in a way similar to how your neurons do. Each neuron is connected to multiple other neurons. When there is input (for example, a bee sting), the neuron transmits a signal to the neurons to which it's linked. In artificial neural networks, though, the inputs are data -- like images and speech. The artificial neural network learns when it gets things wrong, self-adjusts, and gets better at recognition the more data it handles.
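To make the "learns when it gets things wrong, self-adjusts" idea concrete, here is a minimal sketch of a single artificial neuron trained with the classic perceptron rule. It is a textbook toy for illustration, not how Facebook's or Apple's production systems work.

```python
# A bare-bones artificial neuron learning from its mistakes: when the output
# is wrong, the connection weights are nudged toward the right answer.
import numpy as np

inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
targets = np.array([0, 1, 1, 1])         # the logical OR function as training data

weights = np.zeros(2)
bias = 0.0
for epoch in range(10):
    for x, t in zip(inputs, targets):
        y = int(weights @ x + bias > 0)  # the neuron fires if the weighted sum > 0
        error = t - y                    # wrong answers drive the adjustment
        weights += 0.1 * error * x       # strengthen/weaken connections
        bias += 0.1 * error

print([int(weights @ x + bias > 0) for x in inputs])  # -> [0, 1, 1, 1]
```

After a few passes over the data, the weights settle and the neuron classifies every input correctly; real networks stack thousands of such units and use more sophisticated update rules, but the error-driven self-adjustment is the same idea.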

My artificial neural network was child's play compared to what Apple and Facebook use. I created a relatively simple network that received financial input. This input included price, earnings, and valuation history for the S&P 500 index. I also threw consumer price index (CPI) data, prime lending rates, three-month Treasury bill rates, industrial productivity index data, unemployment rates, and other financial data into the mix.

The kind of artificial neural network I built used what's called "supervised learning," where the AI system is trained using a lot of data for which the desired outputs are known. I trained my system using over 50 years of data, going back to the 1940s. I then tested it using data from 2000 through today.

What I wanted the artificial neural network to determine was whether a person should be invested in the S&P 500 or in cash on a month-by-month basis. After a few stumbles along the way, I finally received an answer from the AI system. That answer was: Always be invested in stocks.
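The author doesn't publish his data or code, so the following is only a hedged reconstruction of the setup using scikit-learn's MLPClassifier. The random numbers are placeholders for the real S&P 500 and macroeconomic series, and the model choice, feature count, and split point are all assumptions.

```python
# A sketch of the article's experiment: monthly financial features in,
# a stocks-vs-cash recommendation out, trained on early decades and
# tested on the held-out recent years. Placeholder data throughout.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_months = 600                           # roughly 50 years of monthly observations
X = rng.normal(size=(n_months, 7))       # stand-ins for price, earnings, CPI, rates...
y = rng.integers(0, 2, size=n_months)    # 1 = stocks beat cash the following month

split = 480                              # train on the early decades...
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X[:split], y[:split])            # ...test on the held-out recent years

proba = clf.predict_proba(X[split:])[:, 1]   # the model's confidence in "stocks"
print("months recommending stocks:", (proba > 0.5).mean())
```

The predicted probabilities in the last step are also how one would observe the confidence pattern the author describes below: a model that outputs probabilities rather than bare labels can be more or less sure about "stocks" from month to month.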

I ran that system all kinds of ways. I pared down the inputs. I changed out some data for other data. I experimented with several variables that the AI experts recommend tweaking. And the answer always came back the same.

It occurred to me that my AI system was basically saying to do what legendary investor Warren Buffett wrote in his letter to Berkshire Hathaway (NYSE:BRK-A) (NYSE:BRK-B) shareholders in 2014. He related the instructions in his will for the trustee of his estate to follow upon his death: Invest most of the money in an S&P 500 index fund and let it ride.

Here's the really interesting part. I examined in more detail the recommendations from the artificial neural network. The system had more confidence in being in stocks when the market was going down and less confidence when the market was going up. That's basically what Buffett had in mind when he said to "be fearful when others are greedy and greedy when others are fearful." I realized that I had created a "Buffett-bot"!

The more you think about it, though, the more my AI system -- and Warren Buffett -- makes sense. Historically, the stock market has risen a lot more months than it's fallen. Every time the market has fallen, it's come roaring back. There's every reason in the world to be confident when the market is down, because better days will surely be ahead. That's been Buffett's philosophy his entire career.

Of course, Buffett hasn't followed the advice that he is leaving for his heirs. Instead of parking his money in an S&P 500 index fund, he has used Berkshire Hathaway as a vehicle to build his own investment fund of sorts. If you pick the right stocks, your success will be even greater than going only with the S&P 500. Buffett's track record proves the point.

I haven't asked my AI system yet, by the way, which stocks it would recommend. My hunch is that, if it's as smart as I think it is, it would come up with suggestions pretty close to the stock picks made by The Motley Fool's investing newsletters, which have trounced the S&P 500's performance. (For what it's worth, The Motley Fool long ago recommended several stocks of AI leaders, including both Apple and Facebook, as well as Buffett's Berkshire Hathaway.)

What's the key takeaway from my experiment with AI? Invest in stocks, stay invested in stocks, and buy even more when others are too afraid to do so. The concept applies to the S&P 500 or solid individual stocks like The Motley Fool recommends. That's an intelligent approach -- whether the intelligence is artificial or not.

Keith Speights owns shares of Apple and Facebook. The Motley Fool owns shares of and recommends Apple, Berkshire Hathaway (B shares), and Facebook. The Motley Fool has a disclosure policy.

View post:

I Built an Artificial-Intelligence System for Investing -- and It Showed How Smart Warren Buffett Is - Motley Fool

Posted in Artificial Intelligence | Comments Off on I Built an Artificial-Intelligence System for Investing — and It Showed How Smart Warren Buffett Is – Motley Fool

Artificial intelligence is coming to medicine don’t be afraid – STAT

Posted: at 6:16 pm

Automation could replace one-third of U.S. jobs within 15 years. Oxford and Yale experts recently predicted that artificial intelligence could outperform humans in a variety of tasks by 2045, ranging from writing novels to performing surgery and driving vehicles. A little human rage would be a natural response to such unsettling news.

Artificial intelligence (AI) is bringing us to the precipice of an enormous societal shift. We are collectively worrying about what it will mean for people. As a doctor, I'm naturally drawn to thinking about AI's impact on the practice of medicine. I've decided to welcome the coming revolution, believing that it offers a wonderful opportunity for increases in productivity that will transform health care to benefit everyone.

Groundbreaking AI models have bested humans in complex reasoning games, like the recent victory of Google's AlphaGo AI over the human Go champion. What does that mean for medicine?

To date, most AI solutions have solved minor human issues: playing a game or helping order a box of detergent. The innovations need to matter more. The true breakthroughs and potential of AI lie in real advancements in human productivity. A McKinsey Global Institute report suggests that AI is helping us approach an unparalleled expansion in productivity that will yield five times the increase introduced by the steam engine and about 1.5 times the improvement we've seen from robotics and computers combined. We simply don't have a mental model to comprehend the potential of AI.

Across all industries, an estimated 60 percent of jobs will have 30 percent of their activities automated; about 5 percent of jobs will be 100 percent automated.

What this means for health care is murky right now. Does that 5 percent include doctors? After all, medicine is a series of data points of a knowable nature, with clear treatment pathways that could be automated. That premise, though, fantastically overstates and misjudges the capabilities of AI and dangerously oversimplifies the complexity underpinning what physicians do. Realistically, AI will perform many discrete tasks better than humans can, which, in turn, will free physicians to focus on higher-order tasks.

If you break down the patient-physician interaction, its complexity is immediately obvious. Requirements include empathy, information management, application of expertise in a given context, negotiation with multiple stakeholders, and unpredictable physical response (think of surgery), often with a life on the line. These are not AI-applicable functions.

I mentioned AlphaGo AI beating human experts at the game. The reason this feat was so impressive is the high branching factor and complexity of the Go game tree: there are an estimated 250 choices per move, permitting estimates of roughly 10 to the 170th power possible positions. By comparison, chess has a branching factor of 35, with roughly 10 to the 47th power possible positions. Medicine, with its infinite number of moves and outcomes, is decades away from medical approaches safely managed by machines alone.
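As a back-of-the-envelope check: multiplying the branching factor over a typical game length estimates the number of possible move sequences, an even larger quantity than the position counts quoted above. The branching factors and game lengths below are common approximations, not exact figures.

```python
# Rough game-tree size from branching factor b and typical game length d:
# about b**d move sequences. (The 10^47 and 10^170 figures in the text
# count reachable positions, a related but smaller measure.)
from math import log10

for game, b, d in [("chess", 35, 80), ("Go", 250, 150)]:
    print(f"{game}: about 10^{d * log10(b):.0f} possible move sequences")
```

Either way the arithmetic is run, Go dwarfs chess by hundreds of orders of magnitude, which is why brute-force search was never going to conquer it and why AlphaGo's learning-based approach mattered.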

We still need the human factor.

That said, more than 20 percent of a physician's time is now spent entering data. Since doctors are increasingly overburdened with clerical tasks like electronic health record entry, prior authorizations, and claims management, they have less time to practice medicine, do research, master new technology, and improve their skills. We need a radical enhancement in productivity just to sustain our current health standards, much less move forward. Thoughtfully combining human expertise and automated functionality creates an augmented physician model that scales and advances the expertise of the doctor.

Physicians would rather practice at the top of their license and address complex patient interactions than waste time entering data, faxing (yes, faxing!) service authorizations, or tapping away behind a computer. The clerical burdens pushed onto physicians and other care providers by fickle health care systems are both unsustainable and a waste of our best and brightest minds. It's the equivalent of asking an airline pilot to manage the ticket counter, count the passengers, handle the standby and upgrade lists, give the safety demonstrations, and then fly the plane. AI can help with such support functions.

But to radically advance health care productivity, physicians must work alongside innovators to atomize the tasks of their work. Understanding where they can let go to unlock time is essential, as is collaborating with technologists to guide truly useful development.

Perhaps it makes sense to start with automated interpretation of basic labs, dose adjustment for given medications, speech-to-text tools that simplify transcription or document face-to-face interactions, or even automated wound closure. And then move on from there.

It will be important for physicians and patients to engage and help define the evolution of automation in medicine in order to protect patient care. And physicians must be open to how new roles for them can be created by rapidly advancing technology.

If it all sounds a bit dreamy, I offer an instructive footnote about experimentation with AlphaGo AI. The recent game summit proving AlphaGo's prowess also demonstrated that human talent increases significantly when paired with AI. This hybrid model of humans and machines working together presents a scalable automation paradigm for medicine, one that creates new tasks and roles for essential medical and technology professionals, increasing the capabilities of the entire field as we move forward.

Physicians should embrace this opportunity rather than fear it. It's time to rage with the machine.

Jack Stockert, M.D., is a managing director and leader of strategy and business development at Health2047, a Silicon Valley-based innovation company.

See more here:

Artificial intelligence is coming to medicine don't be afraid - STAT

Posted in Artificial Intelligence | Comments Off on Artificial intelligence is coming to medicine don’t be afraid – STAT

Study: Government Should Think Carefully About Those Big Plans for Artificial Intelligence – Government Technology

Posted: at 6:16 pm

Government is always being asked to do more with less: less money, less staff, just all-around less. That makes the idea of artificial intelligence (AI) a pretty attractive row to hoe. If a piece of technology could reduce staff workload or walk citizens through a routine process or form, you could effectively multiply a workforce without ever actually adding new people.

But for every good idea, there are caveats, limitations, pitfalls, and the desire to push the envelope. While innovating anything in tech is generally a good thing, when it comes to AI in government, there is a fine line between improving a process and potentially making it more convoluted.

Outside of a few key government functions, a new white paper from the Harvard Ash Center for Democratic Governance and Innovation finds that AI could actually increase the burden of government and muddy up the functions it is so desperately trying to improve.

Hila Mehr, a Center for Technology and Democracy fellow, explained that there are five key government problems that AI might reasonably be able to help with: resource allocation, large data sets, expert shortages, predictable scenarios, and procedural and diverse data.

And governments have already started moving into these areas. In Arkansas and North Carolina, chatbots are helping those states connect with their citizens through Facebook. In Utah and Mississippi, Amazon Alexa skills have been introduced to better connect constituents to the information and services they need.

Unlike Hollywood representations of AI in film, Mehr said, the real applications for artificial intelligence in a government organization are generally far from sexy. The administrative aspects of governing are where tools like this will excel.

When it comes to things like expert shortages, she said, she sees AI as a means to support existing staff. In a situation where doctors are struggling to meet the needs of all of their patients, AI could act as a research tool. The same is true of lawyers dealing with thousands of pages of case law; AI could be used as a research assistant.

"If you're talking about government offices that are limited in staff and experts," Mehr said, "that's where AI trained on niche issues could come in."

But, she warned, AI is not without its problems, namely ensuring that it does not further human biases written in during the programming process and played out through the data it is fed. Rather than rely on AI to make critical decisions, she argues that any algorithms and decisions made for or as a result of AI should retain a human component.

"We can't rely on them to make decisions, so we need that check. The way we have checks in our democracy, we need to have checks on these systems as well, and that's where the human group or panel of individuals comes in," Mehr said. "The way that these systems are trained, you can't always know why they are making the decision they are making, which is why it's important to not let that be the final decision, because it can be a black box depending on how it is trained, and you want to make sure that it is not running on its own."

But beyond the worry that the technology might disproportionately impact certain citizens or somehow complicate the larger process, there is the somewhat legitimate fear that implementing AI will mean lost jobs. Mehr said it's a thought that even she has had.

"On the employee side, I think a lot of people view this, rightly so, as something that could replace them," she added. "I worry about that in my own career, but I know that it is even worse for people who might have administrative roles. But I think early studies have shown that you're using AI to help people in their work so that they are spending less time doing repetitive tasks and more time doing the actual work that requires a human touch."

In both her white paper and on the phone, Mehr is careful to advise against going whole hog into AI with the expectation that it can replace costly personnel. Instead she advocates for the technology as a tool to build and supplement the team that already exists.

As for where the technology could run afoul of human jobs, Mehr advises that government organizations and businesses alike start considering labor practices in advance.

"Inevitably, it will replace some jobs," she said. "People need to be looking at fair labor practices now, so that they can anticipate these changes to the market and be prepared for them."

With any blossoming technology, there are barriers to entry and hurdles that must be overcome before a useful tool is in the hands of those best fit to use it. And as with anything, money and resources present a significant challenge, but Mehr said large amounts of data are also needed to get AI, especially learning systems, off the ground successfully.

"If you are talking about simple automation or [answering] a basic set of questions, it shouldn't take that long. If you are talking about really training an AI system with machine learning, you need a big data set, a very big data set, and you need to train it, not just feed the system data and then it's ready to go," she said. "The biggest barriers are time and resources, both in the sense of data and trained individuals to do that work."

More here:

Study: Government Should Think Carefully About Those Big Plans for Artificial Intelligence - Government Technology

Posted in Artificial Intelligence | Comments Off on Study: Government Should Think Carefully About Those Big Plans for Artificial Intelligence – Government Technology