How Google is making music with artificial intelligence – Science Magazine

A musician improvises alongside A.I. Duet, software developed in part by Google's Magenta

Google

By Matthew Hutson | Aug. 8, 2017, 3:40 PM

Can computers be creative? That's a question bordering on the philosophical, but artificial intelligence (AI) can certainly make music and artwork that people find pleasing. Last year, Google launched Magenta, a research project aimed at pushing the limits of what AI can do in the arts. Science spoke with Douglas Eck, the team's lead in San Francisco, California, about the past, present, and future of creative AI. This interview has been edited for brevity and clarity.

Q: How does Magenta compose music?

A: Learning is the key. We're not spending any effort on classical AI approaches, which build intelligence using rules. We've tried lots of different machine-learning techniques, including recurrent neural networks, convolutional neural networks, variational methods, adversarial training methods, and reinforcement learning. Explaining all of those buzzwords is too much for a short answer. What I can say is that they're all different techniques for learning by example to generate something new.

Q: What examples does Magenta learn from?

A: We trained the NSynth algorithm, which uses neural networks to synthesize new sounds, on notes generated by different instruments. The SketchRNN algorithm was trained on millions of drawings from our Quick, Draw! game. Our most recent music algorithm, Performance RNN, was trained on classical piano performances captured on a modern player piano [listen below]. I'd like musicians to be able to easily train models on their own musical creations, then have fun with the resulting music, further improving it.

Q: How has computer composition changed over the years?

A: Currently the focus is on algorithms that learn by example, i.e., machine learning, instead of using hard-coded rules. I also think there's been increased focus on using computers as assistants for human creativity rather than as a replacement technology, such as our work and Sony's Daddy's Car [a computer-composed song inspired by The Beatles and fleshed out by a human producer].

Q: Do the results of computer-generated music ever surprise you?

A: Yeah. All the time. I was really surprised at how expressive the short compositions were from Ian Simon and Sageev Oore's recent Performance RNN algorithm. Because they trained on real performances captured in MIDI on Disklavier pianos, their model was able to generate sequences with realistic timing and dynamics.
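Performance RNN's realistic timing comes from modeling music not as a fixed grid of notes but as a stream of discrete events (note-on, note-off, time-shift). The snippet below is a toy illustration of that event-based encoding idea, assuming a simple list of (pitch, start, end) notes; it is a sketch of the general concept, not Magenta's actual implementation.

```python
# Toy illustration of the event-based encoding behind models like
# Performance RNN: a note list becomes a stream of discrete events
# (NOTE_ON, NOTE_OFF, TIME_SHIFT) that a sequence model can learn.
# Simplified sketch only; not Magenta's actual code.

def encode_events(notes):
    """notes: list of (pitch, start, end) tuples, times in seconds."""
    boundaries = []
    for pitch, start, end in notes:
        boundaries.append((start, "NOTE_ON", pitch))
        boundaries.append((end, "NOTE_OFF", pitch))
    boundaries.sort()

    events, clock = [], 0.0
    for time, kind, pitch in boundaries:
        if time > clock:
            # express elapsed time as an explicit TIME_SHIFT event,
            # which is what lets the model learn expressive timing
            events.append(("TIME_SHIFT", round(time - clock, 3)))
            clock = time
        events.append((kind, pitch))
    return events

# A middle C followed by an overlapping E.
print(encode_events([(60, 0.0, 0.5), (64, 0.25, 0.75)]))
```

Because the gaps between events are themselves tokens, a sequence model trained on such streams can reproduce rubato and uneven timing instead of rigid quantized rhythms.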

Q: What else is Magenta doing?

A: We did a summer internship around joke telling, but we didn't generate any funny jokes. We're also working on image generation and drawing generation [see example below]. In the future, I'd like to look more at areas related to design. Can we provide tools for architects or web page creators?

Magenta software can learn artistic styles from human paintings and apply them to new images.

Fred Bertsch

Q: How do you respond to art that you know comes from a computer?

A: When I was on the computer science faculty at the University of Montreal [in Canada], I heard some computer music by a music faculty member, Jean Piché. He'd written a program that could generate music somewhat like that of the jazz pianist Keith Jarrett. It wasn't nearly as engaging as the real Keith Jarrett! But I still really enjoyed it, because programming the algorithm is itself a creative act. I think knowing Jean and attributing this cool program to him made me much more responsive than I would have been otherwise.

Q: If abilities once thought to be uniquely human can be aped by an algorithm, should we think differently about them?

A: I think differently about chess now that machines can play it well. But I don't see that chess-playing computers have devalued the game. People still love to play! And computers have become great tools for learning chess. Furthermore, I think it's interesting to compare and contrast how chess masters approach the game versus how computers solve the problem: visualization and experience versus brute-force search, for example.

Q: How might people and machines collaborate to be more creative?

A: I think it's an iterative process. Every new technology that made a difference in art took some time to figure out. I love to think of Magenta like an electric guitar. Rickenbacker and Gibson electrified guitars with the purpose of being loud enough to compete with other instruments onstage. Jimi Hendrix and Joni Mitchell and Marc Ribot and St. Vincent and a thousand other guitarists who pushed the envelope on how this instrument can be played were all using the instrument the wrong way, some said: retuning, distorting, bending strings, playing upside-down, using effects pedals, etc. No matter how fast machine learning advances in terms of generative models, artists will work faster to push the boundaries of what's possible there, too.


TechCrunch Disrupt SF 2017 is all in on artificial intelligence and machine learning – TechCrunch

As fields of research, machine learning and artificial intelligence both date back to the 1950s. More than half a century later, the disciplines have graduated from the theoretical to practical, real-world applications. We'll have some of the top minds in both categories on stage to discuss the latest advances and the future of AI and ML at Disrupt San Francisco in just over a month.

We'll be joined on stage by Brian Krzanich of Intel, John Giannandrea of Google, Sebastian Thrun of Udacity and Andrew Ng of Baidu, who will outline the various ways these cutting-edge technologies are already impacting our lives, from simple smart assistants to self-driving cars. It's a broad range of speakers, which is good news, because we've got a lot of ground to cover in some of the industry's most exciting advances.

John (JG) Giannandrea, SVP Engineering at Google: Giannandrea joined Google in 2010, when the company acquired his startup Metaweb Technologies, a move that formed the basis for the search giant's Knowledge Graph technology. Last year, Google appointed Giannandrea the head of search, the latest indication of the company's deep interest in machine learning and AI. Teaching machines to be smarter is a longtime passion for the executive, who told Fortune in a 2016 interview that computers are remarkably dumb. Giannandrea will discuss the work he's doing at Google to fix exactly that.

Sebastian Thrun, Founder, Udacity: Prior to founding online educational service Udacity, Sebastian Thrun headed up Google X, helping make artificial intelligence a foundational key for the company's moonshot products. The topic has been a longtime passion for the CMU computer science grad; in fact, he now teaches a course on the subject at Udacity. The introductory Artificial Intelligence for Robotics class takes students through the basics of AI and the ways in which the technology is helping pave the way for his other key passion, self-driving cars.

Andrew Ng, Former Chief Scientist at Baidu: Earlier this year, Andrew Ng stepped down from his role as the head of Baidu's AI Group. In a post for Medium announcing the move, the executive reconfirmed his commitment to the space, noting that AI will now change nearly every major industry: healthcare, transportation, entertainment, manufacturing. After Baidu, Ng has shifted his focus toward harnessing artificial intelligence for the benefit of larger society, beyond just a single company, targeting a broad range of industries from healthcare to conversational computing.

Brian Krzanich, CEO, Intel: When Brian Krzanich took over as Intel CEO in 2013, the company was reeling from an inability to adapt from desktop computing to mobile devices. Under his watch, he's shifted much of Intel's resources to forward-thinking technologies, from 5G networks and cloud computing to drones and self-driving cars. Artificial intelligence and machine learning are at the heart of much of Intel's forward-looking plans, as the company works to stay on the bleeding edge of technology breakthroughs.

Alongside this main-stage panel, we'll also have an Off the Record session on AI with some of the top minds in the field, which will only be available to attendees at Disrupt. Plus, there are plenty of startups in Startup Alley this year that are focusing on machine learning.

We're incredibly excited to be joined by so many top names, and hope you'll be there as well. Early bird general admission tickets are still available for what's shaping up to be another blockbuster Disrupt.


AI Vs. Bioterrorism: Artificial Intelligence Trained to Detect Anthrax by Scientists – Newsweek

South Korean scientists have trained artificial intelligence to detect anthrax rapidly, potentially dealing a blow to bioterrorism.

Hidden in letters, the biological agent killed five Americans and infected 17 more in the year following the 9/11 attacks, and the threat of a biological attack remains a top concern of Western security services as radicals such as the Islamic State militant group (ISIS) seek new ways to attack the West.

Researchers from the Korea Advanced Institute of Science and Technology have now created an algorithm that is able to study bacterial spores and quickly identify the biological agent, according to a paper published last week in the journal Science Advances.


By training AI to identify the bacteria using microscopic images, the researchers drastically decreased detection time, from a day to mere seconds. The method is also accurate 95 percent of the time.

Anthrax infects the body when its spores enter, mostly through inhalation, then multiply and spread an illness that can be fatal. Skin infections of anthrax are less deadly.

Spores from the Sterne strain of anthrax bacteria (Bacillus anthracis) are pictured in this handout scanning electron micrograph obtained by Reuters May 28, 2015. Reuters/Center for Disease Control/Handout

"This study showed that holographic imaging and deep learning can identify anthrax in a few seconds," YongKeun Paul Park, associate professor of physics at the Korea Advanced Institute of Science and Technology, told the IEEE Spectrum blog.

"Conventional approaches such as bacterial culture or gene sequencing would take several hours to a day," he added.
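The paper's specific network isn't reproduced here, but the core pattern, training a classifier on labeled images and then getting near-instant predictions, can be sketched in miniature. The toy below uses a plain logistic regression on synthetic 8x8 "images" whose positive class is brighter in the center; the real system uses deep neural networks on holographic microscope data, so everything in this sketch is illustrative only.

```python
import numpy as np

# Toy sketch of learning to classify spore images from labeled
# examples. The positive class is synthetically brighter in the
# centre; a logistic regression learns to separate the two classes.

rng = np.random.default_rng(0)

def make_images(n, brightness):
    imgs = rng.normal(0.0, 1.0, (n, 8, 8))
    imgs[:, 3:5, 3:5] += brightness       # synthetic class difference
    return imgs.reshape(n, -1)

X = np.vstack([make_images(200, 3.0), make_images(200, 0.0)])
y = np.array([1.0] * 200 + [0.0] * 200)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

w, b = np.zeros(64), 0.0
for _ in range(500):                      # plain gradient descent
    p = sigmoid(X @ w + b)
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * float(np.mean(p - y))

accuracy = float(np.mean((sigmoid(X @ w + b) > 0.5) == (y == 1.0)))
print("training accuracy:", accuracy)
```

Once the weights are learned, a prediction is a single matrix product, which is why classification takes milliseconds even though training took longer; the same asymmetry underlies the day-to-seconds speedup the article describes.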

Park is working with the South Korean agency responsible for developing the country's defense capabilities amid fears that North Korea may plan a biological attack against its archenemy across their shared border.

North Korea's regime is no stranger to chemical agents. South Korea has accused operatives linked to Pyongyang of responsibility for the assassination of North Korean leader Kim Jong Un's half brother, Kim Jong Nam, using a VX agent at Malaysia's Kuala Lumpur International Airport in February.

Contamination by anthrax has a death rate of 80 percent, so detection of the bacteria is crucial.

Spreading anthrax far and wide in an attack could kill thousands, so Western security services fear that hostile parties, such as ISIS sympathizers or regimes such as North Korea, will attempt to develop a capability to cause a mass-casualty attack.

The researchers say the AI innovation could bring advances elsewhere, too, including the potential to detect other bacteria, such as those that cause food poisoning and kill more than a quarter of a million people every year.


Artificial Intelligence might not destroy us after all – New York Post

Elon Musk famously equated artificial intelligence with "summoning the demon" and sounds the alarm that AI is advancing faster than anyone realizes, posing an existential threat to humanity. Stephen Hawking has warned that AI could take off and leave the human race, limited by evolution's slow pace, in the dust. Bill Gates counts himself in the camp concerned about superintelligence. And, although Mark Zuckerberg is dismissive about AI's potential threat, Facebook recently shut down an AI engine after reportedly discovering that it had created a new language humans can't understand.

Concerns about AI are entirely logical if all that exists is physical matter. If so, it'd be inevitable that AI designed by our intelligence but built on a better platform than biochemistry would exceed human capabilities that arise by chance.

In fact, in a purely physical world, fully realized AI should be recognized as the appropriate outcome of natural selection; we humans should benefit from it while we can. After all, sooner or later, humanity will cease to exist, whether from the sun running out or something more mundane, including AI-driven extinction. Until then, wouldn't it be better to maximize human flourishing with the help of AI rather than forgoing its benefits in hopes of extending humanity's end date?

As possible as all this might seem, in actuality, what we know about the human mind strongly suggests that full AI will not happen. Physical matter alone is not capable of producing whole, subjective experiences, such as watching a sunset while listening to seagulls, and the mechanisms proposed to address the known shortfalls of matter vs. mind, such as emergent properties, are inadequate and falsifiable. Therefore, it is highly probable that we have immaterial minds.

Granted, forms of AI are already achieving impressive results. These use brute force, huge and fast memory, rules-based automation, and layers of pattern matching to perform their extraordinary feats. But this processing is not aware, perceiving, feeling cognition. The processing doesn't go beyond its intended activities even if the outcomes are unpredictable. Technology based on this level of AI will often be quite remarkable and definitely must be managed well to avoid dangerous repercussions. However, in and of itself, this AI cannot lead to a true replication of the human mind.

Full AI, that is, artificial intelligence capable of matching and perhaps exceeding the human mind, cannot be achieved unless we discover, via material means, the basis for the existence of immaterial minds, and then learn how to confer that on machines. In philosophy, the underlying issue is known as the qualia problem. Our awareness of external objects and colors; our self-consciousness; our conceptual understanding of time; our experiences of transcendence, whether simple awe in front of beauty or mathematical truth; our mystical states: all clearly point to something that is qualitatively different from the material world. Anyone with a decent understanding of physics, computer science and the human mind ought to be able to know this, especially those most concerned about AI's possibilities.

That those who fear AI don't see its limitations indicates that even the best minds fall victim to their biases. We should be cautious about believing that exceptional achievements in some areas translate to exceptional understanding in others. For too many, including some in the media, the mantra "Question everything" applies only within certain boundaries. They never question methodological naturalism, the belief that nothing exists outside the material world, which blinds them to other possibilities. Even with what seems like more open-minded thinking, some people seem to suffer from a lack of imagination or will. For example, Peter Thiel believes that the human mind and computers are deeply different yet doesn't acknowledge that this implies the mind comprises more than physical matter. Thomas Nagel believes that consciousness could not have arisen via materialistic evolution yet explicitly limits the implications of that because he doesn't want God to exist.

Realizing that we have immaterial minds, i.e., genuine souls, is far more important than just speculating on AI's future. Without immaterial minds, there is no sustainable basis for believing in human exceptionalism. When human life is viewed only through a materialistic lens, it gets valued based on utility. No wonder the young "nones," young Americans who don't identify with a religion, think their lives are meaningless and some begin to despair. It is time to understand that evolution is not a strictly material process but one in which the immaterial mind plays a major role in the adaptation and selection of humans, and probably all sentient creatures.

Deep down, we all know we're more than biological robots. That's why almost everyone rebels against materialism's implications. We don't act as though we believe everything is ultimately meaningless.

We're spiritual creatures, here by intent, living in a world where the supernatural is the norm; each and every moment of our lives is our souls in action. Immaterial ideas shape the material world and give it true meaning, not the other way around.

In the end, the greatest threat that humans face is a failure to recognize what we really are.

If we're lucky, what people learn in the pursuit of full AI will lead us to the rediscovery of the human soul, where it comes from, and the important understanding that goes along with that.


The Race to Cyberdefense, Artificial Intelligence and the Quantum Computer – Government Technology

I've been following cybersecurity startups and hackers for years, and I suddenly discovered how hackers stay ahead of the rest of us: they have a better business model funding them in their proof-of-concept (POC) stage of development.

To even begin protecting ourselves from their well-funded advances and attacks, cyberdefense and artificial intelligence (AI) technologies must be funded at the same level in the POC stage.

Today, however, traditional investors not only want your technology running, they also need assurances that you already have a revenue stream, which stifles potential new technology discovery at the POC level. And in some industries, this is dangerous.

Consider the fast-paced world of cybersecurity, in which companies are offered traditional funding avenues as they promote their product's tech capabilities so people will invest. This promotion and disclosure of their technology, however, gives hackers a road map to the new cyberdefense technologies and a window of time to gain knowledge on how to exploit them.

This same road map exists for technologies covered in detail when standards groups, universities, governments and private labs publish white papers, documents that essentially assist hackers by giving them advance notice of cyberdefense techniques.

In addition to this, some hackers receive immediate funding through nation-states that are coordinating cyberwarfare like a traditional military, and others are involved in organized secret groups that fund the use of ransomware and DDoS attacks. These hackers get immediate funding and then throw their technology on the Internet for POC discovery.

One project that strongly makes the case for rapidly funding cyberdefense technologies in an effort to keep up with hackers is the $5.7 billion U.S. Department of Homeland Security (DHS) EINSTEIN cyberdefense system, which was deemed obsolete upon its deployment for failing to detect 94 percent of security vulnerabilities. As this situation illustrates, the traditional methods of funding cyberdefense, which take years of bureaucratic analysis and vendor contracts, do not work in the fast technology discovery world of cyberdefense. After the EINSTEIN project failure, DHS decided to conduct an assessment; it's currently working to understand if it's making the right investments in dealing with the ever-changing cyberenvironment.

But it also has other roadblocks, as even large technology companies and contractors with which DHS does business have their own bureaucracies and investments that ultimately deter the department from getting the best in cyberdefense technologies. And once universities, standards groups, regulation and funding approvals are added to these processes, you're pretty much assured to be headed for another disaster.

But DHS doesn't need to develop these technologies itself. The department needs to support public- and private-sector POCs to rapidly mature and deploy new cyberdefense technologies. This suggestion is supported by what other countries, including our adversaries, are successfully doing.

The same two things that have motivated mankind all through history, immediate power and money, are now motivating hackers, and cyberdefense technologies are taking years to be deployed. So I'll say it again: The motivational and funding model of cyberdefense technologies must change. The key to successful cyberdefense technology development is making it as aggressive as the hackers that attack it. And this needs to be done at the conceptual POC level.

The concern in cyberdefense (and really all AI) is the race to the quantum computer.

Quantum computer technologies can't be hacked, and in theory, their processing power can break all encryption. The computational physics behind quantum computing also offers remarkable capabilities that will drastically change all current AI and cyberdefense technologies. This is a winner-takes-all technology that pairs raw capability with absolute security, capabilities that we can now only imagine.

The most recent funding source for hackers is Bitcoin, which uses the decentralized and secure blockchain technology. It has even been used to support POC funding in what is called an Initial Coin Offering (ICO), the intent of which is to crowdfund early startup companies at the development or POC level by bypassing traditional and lengthy funding avenues. Because this type of startup seed offering has been clouded with scams, it is now in regulatory limbo.

Some states have passed laws that make it difficult to legally present and offer an ICO. While the U.S. seems to be pushing ICO regulation, other countries are still deciding what to do. But like ICOs or not, they offer first-time startups an avenue of fast-track funding at the concept level, where engineers and scientists can jump on newer technologies by focusing seed money on testing their concepts. Bogging ICOs down with regulatory laws will both slow down legitimate POC innovation in the U.S. and give other countries a competitive edge.

Another barrier to cyberdefense POC funding is the size and technological control of a handful of tech companies. Google, Facebook, Amazon, Microsoft and Apple have become enormous concentrations of wealth and data, drawing the attention of economists and academics who warn they're growing too powerful. Now as big as major American cities, these companies are mega centers of both money and technology. They are so large and control so much of the market that many are beginning to view them as in violation of the Sherman Antitrust Act. So how can small startups compete with these tech giants and potentially fund POCs in areas such as cyberdefense and AI? By aligning with giant companies in industries that have the most need for cyberdefense and AI technologies: critical infrastructure.

The industries that are most vulnerable, and could cause the most devastation if hacked, are those involved in critical infrastructure. These large industries have the resources to fund cyberdefense technologies at the concept level, and they would obtain superior cyberdefense technologies in doing so.

Cyberattacks on critical infrastructure could devastate entire national economies, so it must be protected by the most advanced cyberdefense. Quantum computing and artificial intelligence will initiate game-changing technology in both cyberdefense and the new intellectual property deriving from quantum sciences. Entering these new technologies at the POC level is like being a Microsoft or Google years ago. Funding the development of these new technologies in cyberdefense and AI is needed soon, but what about today?

Future quantum computer capabilities will also demand immediate short-term fixes in current cyberdefense and AI. New quantum-ready compressed encryption and cyberdefense deep learning AI must be funded and tested now at the concept level. The power grid, oil and gas, and even existing telecoms are perfect targets for this funding and development. Investing today would offer current cyberdefense and business intelligence protection while creating new profit centers in the licensing and sale of these leading-edge technologies. This is true for many other industries, all differing in their approach and requiring specialized cyberdefense capabilities and new intelligence gathering that will shape their future.

So we must find creative ways of rapidly funding cyberdefense technologies at the conceptual level. If this is what hackers do, and it's why they're always one step ahead, shouldn't we work to surpass them?


Artificial intelligence is inevitable. Will you embrace or resist it in your practice? – Indiana Lawyer

Growing up, Kightlinger and Gray LLP attorney Adam Ira can recall members of his family, many of whom were factory workers, expressing concern about the prospect of automated machines taking their jobs. Now, Ira said similar concerns are creeping into his work as a lawyer, as the rise of artificial intelligence in the practice of law has begun automating legal tasks previously performed by humans.

As the number of available AI products grows, attorneys have begun to gravitate toward tools that enable them to do their work more quickly and efficiently. Artificial intelligence can come in multiple forms, legal tech experts say, from simple document automation to more complex intelligence using algorithms to predict legal outcomes.

In recent months, several new AI products have been introduced with the promise of automating the mundane tasks of being a lawyer, leaving attorneys with more time to focus on the complex legal questions raised by their clients.

For example, Seattle-based TurboPatent Corp. launched an AI tool in mid-July known as RoboReview. Through RoboReview, patent attorneys can upload a patent application into the AI software, which then scans the document and assesses it for similarities to previous patent applications and uses the level of similarity to predict patent eligibility. RoboReview can also make other predictions about the patent process, such as how long the process might take or what actions the U.S. Patent and Trademark Office may take with the application, said Dave Billmaier, TurboPatent vice president of product marketing.
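RoboReview's internals aren't public, but scoring a new application against prior filings is commonly done by turning each document into a TF-IDF vector and comparing vectors with cosine similarity. The sketch below is a hypothetical illustration of that standard approach, not TurboPatent's actual method; the example documents are invented.

```python
import math
from collections import Counter

# Hypothetical sketch of scoring a new filing against prior filings
# with TF-IDF weights and cosine similarity, a standard document-
# similarity technique. Not RoboReview's actual pipeline.

def tfidf_vectors(docs):
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(term for toks in tokenized for term in set(toks))
    n = len(docs)
    vecs = []
    for toks in tokenized:
        tf = Counter(toks)
        # terms appearing in every document get weight zero
        vecs.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vecs

def cosine(u, v):
    dot = sum(weight * v[t] for t, weight in u.items() if t in v)
    norm_u = math.sqrt(sum(w * w for w in u.values()))
    norm_v = math.sqrt(sum(w * w for w in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

prior = ["a rotary engine with ceramic seals",
         "a neural network for image compression"]
new_application = "a convolutional neural network for lossy image compression"

vecs = tfidf_vectors(prior + [new_application])
scores = [cosine(vecs[-1], v) for v in vecs[:-1]]
print(scores)  # the second prior filing is by far the closer match
```

A real system would tokenize claims more carefully and compare against millions of filings, but the output, a per-document similarity score, is the kind of signal that can then feed an eligibility prediction.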

Shortly after RoboReview went public, Amy Wan, attorney and founder and CEO of Bootstrap Legal, introduced an AI product that automates the process of drafting legal paperwork for people trying to raise capital for a real estate project of $2 million or less. As a former real estate securities attorney, Wan said she witnessed firsthand how inefficient the process of drafting such documents could be, especially considering that much of the work involved routine tasks such as copying and pasting from previous documents.

With Wan's AI product, users answer questions about their real estate project, and the software uses those answers to develop the necessary legal documents, which are returned to the user within 48 hours. Such technology expedites drafting the documents, a process she said could otherwise take 20 to 25 hours to complete, while also cutting the costs associated with raising real estate capital. Wan said her company and AI product are based on the principle that cost considerations should not prevent people from accessing legal services.
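The questionnaire-to-document pattern Wan describes can be sketched with nothing more than template substitution: answers from a form fill slots in a vetted template instead of an attorney copying and pasting. The field names and clause text below are invented for illustration and are not Bootstrap Legal's actual forms.

```python
from string import Template

# Hypothetical sketch of questionnaire-driven document assembly.
# The template text and field names are invented for illustration.

TEMPLATE = Template(
    "OPERATING AGREEMENT\n"
    "This agreement concerns the property at $address.\n"
    "$sponsor seeks to raise $$$amount from investors,\n"
    "with a preferred return of $pref percent."
)

answers = {                 # would come from the online questionnaire
    "address": "123 Main St., Indianapolis, IN",
    "sponsor": "Example Capital LLC",
    "amount": "1,500,000",
    "pref": "8",
}

document = TEMPLATE.substitute(answers)
print(document)
```

Real products layer conditional clauses and attorney review on top, but the core time savings come from exactly this move: the routine copying and pasting becomes a mechanical substitution step.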

Saving time and cutting costs are AI advantages that serve as the key selling points for legal tech developers, as clients have come to expect their attorneys to use modern technology to perform efficient work at the lowest possible cost, said Jason Houdek, a patent and intellectual property attorney with Taft Stettinius & Hollister LLP. Though RoboReview is new, Houdek said he has been using similar AI tools to determine patent eligibility, ensure application quality and predict patent examiner behavior for several years.

Similarly, Haley Altman, CEO of Indianapolis-based Doxly Inc., said most legal tech entrepreneurs like her are trying to develop AI tools that take large sets of data or documents and extrapolate the relevant information lawyers are looking for, thus reducing the amount of time they spend combing through documents. The Doxly software, which is designed to automate the legal transaction process, uses AI to mimic a transactional attorney's natural workflow, making the software feel natural, she said.

Job security?

Despite these benefits, some attorneys are concerned that continued use of AI in the practice of law could put them out of a job. Further, Jim Billmaier, TurboPatent CEO, said the old guard of attorneys, those who have been in practice for many years, can be inclined to resist artificial intelligence tools because they go against traditional practice.

There may be some legitimacy to those concerns, the attorneys and legal tech experts said. For example, attorneys at large firms that still employ the typical billable-hour model could see a drop in their hours as a result of AI products, said Dan Carroll with vrsus LLC, a rebranded version of legal tech company CasePacer. The vrsus technology utilizes AI to enable attorneys at plaintiffs' firms to reach outcomes for their clients as quickly as possible, rather than focusing on how many hours they are able to bill, Carroll said.

Similarly, certain practice areas that are more transactional in nature, such as bankruptcy or tax law, might be more susceptible to automation, Ira said.

But such automation is now inevitable, as further AI development is a matter of when, not if, Houdek said. Jim Billmaier agreed and noted that attorneys who are resistant to AI advancements will find themselves underperforming if they choose not to take advantage of tools that increase efficiency.

While technological advancements might be inevitable, they do not have to be uncontrollable, said Ira Smith, vrsus chief strategy officer. Few attorneys fully understand the nuances of what makes AI work, Smith said, yet few tech developers, such as IBM, understand the nuances of practicing law.

As a result, attorneys and legal tech companies should focus less on how new artificial intelligence products might change their work and instead try to mold whatever AI tools are currently on the market to improve the product of their work, Smith said. He encouraged attorneys to be product agnostic and focus less on the technological platform and more on technologys possible benefits.

"Why would it matter whether (IBM's) Watson is utilizing my data as long as I can take that and serve it back to my clients?" Smith said.

Human advantage

Even as legal tech and other companies offer new and ever more advanced AI products, attorneys said the human mind will always be needed in the practice of law.

For example, even if a computer becomes intelligent enough to draw up contracts on its own, lawyers will still need to review and finalize them, Altman said. Ira agreed and noted that use of AI can create ethical issues, as attorneys must ensure the automated documents they produce reflect accurate and competent work.

Further, the power of persuasion is a trait that is uniquely human, and one that is critical to the practice of law, Ira said. Though an intelligent computer might be able to cobble together a legal argument one day, an advancement he thinks is still at least 10 to 15 years off, it could never speak to a judge or a jury in a manner meant to persuade and effectively advocate on behalf of a client, Ira said.

Similarly, judges will always be needed to use their minds and legal training to decide the outcome of cases, Houdek said, and human juries will always be needed to decide cases.

Though some human jobs or billable hours might decrease as a result of advancements in artificial intelligence, the legal tech experts said AI is more of a benefit than a threat because it allows legal professionals to use their minds and training for the creative work that comes with being an attorney.

"AI technology isn't taking their jobs," Altman said. "The whole point of it is to enable them to do the work that they really want to be focusing on."

Read the rest here:

Artificial intelligence is inevitable. Will you embrace or resist it in your practice? - Indiana Lawyer

When artificial intelligence goes wrong – Livemint


Bengaluru: Last year, for the first time ever, an international beauty contest was judged by machines. Thousands of people from across the world submitted their photos to Beauty.AI, hoping that their faces would be selected by an advanced algorithm free of human biases, in the process accurately defining what constitutes human beauty.

In preparation, the algorithm had studied hundreds of images of past beauty contests, training itself to recognize human beauty based on the winners. But what was supposed to be a breakthrough moment that would showcase the potential of modern self-learning, artificially intelligent algorithms rapidly turned into an embarrassment for the creators of Beauty.AI, as the algorithm picked the winners solely on the basis of skin colour.

"The algorithm made a fairly non-trivial correlation between skin colour and beauty. A classic example of bias creeping into an algorithm," says Nisheeth K. Vishnoi, an associate professor at the School of Computer and Communication Sciences at the Switzerland-based École Polytechnique Fédérale de Lausanne (EPFL). He specializes in issues related to algorithmic bias.

A widely cited piece titled "Machine Bias" from US-based investigative journalism organization ProPublica in 2016 highlighted another disturbing case.

It cited an incident involving a black teenager named Brisha Borden who was arrested for riding an unlocked bicycle she found on the road. The police estimated the value of the item was about $80.

In a separate incident, a 41-year-old Caucasian man named Vernon Prater was arrested for shoplifting goods worth roughly the same amount. Unlike Borden, Prater had a prior criminal record and had already served prison time.

Yet, when Borden and Prater were brought for sentencing, a self-learning program determined Borden was more likely to commit future crimes than Prater, exhibiting the sort of racial bias computers were not supposed to have. Two years later, the program was proved wrong when Prater was charged with another crime, while Borden's record remained clean.

And who can forget Tay, the infamous racist chatbot that Microsoft Corp. developed last year?

Even as artificial intelligence and machine learning continue to break new ground, there is enough evidence to indicate how easy it is for bias to creep into even the most advanced algorithms. Given the extent to which these algorithms are capable of building deeply personal profiles about us from relatively trivial information, the impact that this can have on personal privacy is significant.

This issue caught the attention of the US government, which in October 2016 published a comprehensive report titled "Preparing for the Future of Artificial Intelligence," turning the spotlight on the issue of algorithmic bias. It raised concerns about how machine-learning algorithms can discriminate against people or sets of people based on the personal profiles they develop of all of us.

"If a machine learning model is used to screen job applicants, and if the data used to train the model reflects past decisions that are biased, the result could be to perpetuate past bias. For example, looking for candidates who resemble past hires may bias a system toward hiring more people like those already on a team, rather than considering the best candidates across the full diversity of potential applicants," the report says.

"The difficulty of understanding machine learning results is at odds with the common misconception that complex algorithms always do what their designers choose to have them do, and therefore that bias will creep into an algorithm if and only if its developers themselves suffer from conscious or unconscious bias. It is certainly true that a technology developer who wants to produce a biased algorithm can do so, and that unconscious bias may cause practitioners to apply insufficient effort to preventing bias," it says.
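
The report's hiring example can be made concrete with a toy sketch; the candidate data, the `school` attribute, and the resemblance rule below are all invented for illustration:

```python
# Toy illustration of the mechanism the report describes: a screener
# built from past hiring decisions. The past hires all share one trait,
# so "resembling past hires" reproduces that trait rather than skill.
past_hires = [
    {"school": "X", "skill": 7},
    {"school": "X", "skill": 5},
    {"school": "X", "skill": 6},
]

def resembles_past_hires(candidate):
    # Screens by similarity to previous hires, not by skill alone.
    return sum(h["school"] == candidate["school"] for h in past_hires) >= 2

strong_outsider = {"school": "Y", "skill": 9}
weaker_insider = {"school": "X", "skill": 4}
print(resembles_past_hires(strong_outsider))  # False: filtered out despite skill
print(resembles_past_hires(weaker_insider))   # True: passes on resemblance alone
```

The bias here is not written into the rule by anyone; it is inherited wholesale from the historical data, which is exactly the failure mode the report warns about.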

Over the years, social media platforms have been using similar self-learning algorithms to personalize their services, offering content better suited to the preferences of their users, based solely on their past behaviour on the site in terms of what they liked or the links they clicked on.

"What you are seeing on platforms such as Google or Facebook is extreme personalization, which is basically when the algorithm realizes that you prefer one option over another. Maybe you have a slight bias towards (US President Donald) Trump versus Hillary (Clinton) or (Prime Minister Narendra) Modi versus other opponents; that's when you get to see more and more articles which are confirming your bias. The trouble is that as you see more and more such articles, it actually influences your views," says EPFL's Vishnoi.

"The opinions of human beings are malleable. The US election is a great example of how algorithmic bots were used to influence some of these very important historical events of mankind," he adds, referring to the impact of fake news on recent global events.

Experts, however, believe that these algorithms are rarely the product of malice. "It's just a product of careless algorithm design," says Elisa Celis, a senior researcher who works with Vishnoi at EPFL.

How does one detect bias in an algorithm? It bears mentioning that machine-learning algorithms and neural networks are designed to function without human involvement. "Even the most skilled data scientist has no way to predict how his algorithms will process the data provided to them," said Mint columnist and lawyer Rahul Matthan in a recent research paper on data privacy published by the Takshashila Institute, titled "Beyond consent: A new paradigm for data protection."

One solution is black-box testing, which determines whether an algorithm is working as effectively as it should without peering into its internal structure. "In a black-box audit, the actual algorithms of the data controllers are not reviewed. Instead, the audit compares the inputs to the resulting outputs to verify that the algorithm is in fact performing in a privacy-preserving manner. This mechanism is designed to strike a balance between the auditability of the algorithm on the one hand and the need to preserve the proprietary advantage of the data controller on the other. Data controllers should be mandated to make themselves and their algorithms accessible for a black-box audit," says Matthan, who is also a fellow with Takshashila's technology and policy research programme.
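
A black-box audit of the kind Matthan describes can be sketched in a few lines. The scoring model, its hidden penalty, and the applicant fields below are hypothetical stand-ins for a proprietary system the auditor can only query, never inspect:

```python
def opaque_model(applicant):
    # Stand-in for a proprietary scorer; it secretly penalises group "B".
    score = applicant["income"] / 1000 + applicant["years_employed"]
    if applicant["group"] == "B":
        score -= 5  # the hidden bias the audit should surface
    return score

def black_box_audit(model, applicants, protected_key, groups):
    """Compare outputs for inputs identical except for the protected field."""
    gaps = []
    for a in applicants:
        outputs = {}
        for g in groups:
            probe = dict(a, **{protected_key: g})  # vary only the protected field
            outputs[g] = model(probe)
        gaps.append(max(outputs.values()) - min(outputs.values()))
    return sum(gaps) / len(gaps)  # average disparity across test inputs

applicants = [
    {"income": 40000, "years_employed": 3, "group": "A"},
    {"income": 55000, "years_employed": 7, "group": "B"},
]
print(black_box_audit(opaque_model, applicants, "group", ["A", "B"]))  # 5.0
```

A nonzero average disparity flags the model without the auditor ever reading its internals, which is the balance between auditability and proprietary advantage that Matthan describes.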

He suggests the creation of a class of technically skilled personnel, or "learned intermediaries," whose sole job will be to protect data rights. Learned intermediaries will be technical personnel trained to evaluate the output of machine-learning algorithms, detect bias on the margins, and act as legitimate auditors who conduct periodic reviews of the data algorithms with the objective of making them stronger and more privacy-protective. They should be capable of indicating appropriate remedial measures if they detect bias in an algorithm. "For instance, a learned intermediary can introduce an appropriate amount of noise into the processing so that any bias caused over time due to a set pattern is fuzzed out," Matthan explains.
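
Matthan's noise-injection idea can be illustrated with a minimal sketch; the deterministic scorer and the Gaussian noise scale below are assumptions chosen for illustration, not a prescription:

```python
import random

def biased_rank(item_id):
    # Deterministic scorer that always favours lower ids (a set pattern).
    return -item_id

def fuzzed_rank(item_id, scale=2.0):
    # Zero-mean noise keeps scores useful on average while breaking the
    # rigid ordering that encodes the set pattern.
    return biased_rank(item_id) + random.gauss(0, scale)

random.seed(0)
order = sorted(range(10), key=fuzzed_rank, reverse=True)
print(order)  # a perturbed ordering, no longer strictly 0, 1, 2, ... every run
```

The trade-off is explicit in the `scale` parameter: more noise fuzzes out more of the pattern but degrades the ranking's usefulness, which is why Matthan frames the amount as something the intermediary must judge.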

That said, significant challenges still remain in removing bias once it is discovered.

"If you are talking about removing biases from algorithms and developing appropriate solutions, this is an area that is still largely in the hands of academia, and removed from the broader industry. It will take time for the industry to adopt these solutions on a larger scale," says Animesh Mukherjee, an associate professor at the Indian Institute of Technology, Kharagpur, who specializes in areas such as natural language processing and complex algorithms.

This is the first in a four-part series. The next part will focus on consent as the basis of privacy protection.

A nine-judge Constitution bench of the Supreme Court is currently deliberating whether or not Indian citizens have the right to privacy. At the same time, the government has appointed a committee under the chairmanship of retired Supreme Court judge B.N. Srikrishna to formulate a data protection law for the country. Against this backdrop, a new discussion paper from the Takshashila Institute has proposed a model of privacy particularly suited for a data-intense world. Over the course of this week we will take a deeper look at that model and why we need a new paradigm for privacy. In that context, we examine the increasing reliance on software to make decisions for us, assuming that dispassionate algorithms will ensure a level of fairness that we are denied because of human frailties. But algorithms have their own shortcomings, and those can pose a serious threat to our personal privacy.


An artificial intelligence researcher reveals his greatest fears about the future of AI – Quartz

As an artificial intelligence researcher, I often come across the idea that many people are afraid of what AI might bring. It's perhaps unsurprising, given both history and the entertainment industry, that we might be afraid of a cybernetic takeover that forces us to live locked away, Matrix-like, as some sort of human battery.

And yet it is hard for me to look up from the evolutionary computer models I use to develop AI and think about how the innocent virtual creatures on my screen might become the monsters of the future. Might I become "the destroyer of worlds," as Oppenheimer lamented after spearheading the construction of the first nuclear bomb?

I would take the fame, I suppose, but perhaps the critics are right. Maybe I shouldn't avoid asking: as an AI expert, what do I fear about artificial intelligence?

The HAL 9000 computer, dreamed up by science fiction author Arthur C. Clarke and brought to life by movie director Stanley Kubrick in 2001: A Space Odyssey, is a good example of a system that fails because of unintended consequences. In many complex systems (the RMS Titanic, NASA's space shuttle, the Chernobyl nuclear power plant), engineers layer many different components together. The designers may have known well how each element worked individually, but didn't know enough about how they all worked together.

That resulted in systems that could never be completely understood, and could fail in unpredictable ways. In each disaster (sinking a ship, blowing up two shuttles, and spreading radioactive contamination across Europe and Asia), a set of relatively small failures combined to create a catastrophe.

I can see how we could fall into the same trap in AI research. We look at the latest research from cognitive science, translate that into an algorithm, and add it to an existing system. We try to engineer AI without understanding intelligence or cognition first.

Systems like IBM's Watson and Google's AlphaGo equip artificial neural networks with enormous computing power, and accomplish impressive feats. But if these machines make mistakes, they lose on Jeopardy! or don't defeat a Go master; these are not world-changing consequences. Indeed, the worst that might happen to a regular person as a result is losing some money betting on their success.

But as AI designs get even more complex and computer processors even faster, their skills will improve. That will lead us to give them more responsibility, even as the risk of unintended consequences rises. We know that to err is human, so it is likely impossible for us to create a truly safe system.

I'm not very concerned about unintended consequences in the types of AI I am developing, using an approach called neuroevolution. I create virtual environments and evolve digital creatures and their brains to solve increasingly complex tasks. The creatures' performance is evaluated; those that perform best are selected to reproduce, making up the next generation. Over many generations these machine-creatures evolve cognitive abilities.
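
The loop just described (evaluate, select the best, reproduce with variation) can be sketched in miniature. Here a "brain" is just a weight vector and the task is matching a target behaviour, a stand-in for the navigation tasks real neuroevolution tackles with full neural networks:

```python
import random

TARGET = [0.5, -1.0, 2.0]  # stand-in for "solving the task"

def fitness(brain):
    # Higher is better: negative squared distance from the desired behaviour.
    return -sum((w - t) ** 2 for w, t in zip(brain, TARGET))

def mutate(brain, rate=0.1):
    # Offspring are slightly perturbed copies of a parent.
    return [w + random.gauss(0, rate) for w in brain]

random.seed(1)
population = [[random.uniform(-2, 2) for _ in TARGET] for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]  # the best performers reproduce
    population = [mutate(random.choice(parents)) for _ in range(50)]

best = max(population, key=fitness)
print([round(w, 1) for w in best])  # close to TARGET after many generations
```

Nothing in the loop is told how to solve the task; selection pressure alone pulls the population toward it, which is why errors that hurt performance tend to be bred out in simulation, generation by generation.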

Right now we are taking baby steps to evolve machines that can do simple navigation tasks, make simple decisions, or remember a couple of bits. But soon we will evolve machines that can execute more complex tasks and have much better general intelligence. Ultimately we hope to create human-level intelligence.

Along the way, we will find and eliminate errors and problems through the process of evolution. With each generation, the machines get better at handling the errors that occurred in previous generations. That increases the chances that well find unintended consequences in simulation, which can be eliminated before they ever enter the real world.

Another possibility that's farther down the line is using evolution to influence the ethics of artificial intelligence systems. It's likely that human ethics and morals, such as trustworthiness and altruism, are a result of our evolution, and a factor in its continuation. We could set up our virtual environments to give evolutionary advantages to machines that demonstrate kindness, honesty, and empathy. This might be a way to ensure that we develop more obedient servants or trustworthy companions and fewer ruthless killer robots.

While neuroevolution might reduce the likelihood of unintended consequences, it doesn't prevent misuse. But that is a moral question, not a scientific one. As a scientist, I must follow my obligation to the truth, reporting what I find in my experiments, whether I like the results or not. My focus is not on determining whether I like or approve of something; it matters only that I can unveil it.

Being a scientist doesn't absolve me of my humanity, though. I must, at some level, reconnect with my hopes and fears. As a moral and political being, I have to consider the potential implications of my work and its potential effects on society.

As researchers, and as a society, we have not yet come up with a clear idea of what we want AI to do or become. In part, of course, this is because we don't yet know what it's capable of. But we do need to decide what the desired outcome of advanced AI is.

One big area people are paying attention to is employment. Robots are already doing physical work like welding car parts together. One day soon they may also do cognitive tasks we once thought were uniquely human. Self-driving cars could replace taxi drivers; self-flying planes could replace pilots.

Instead of getting medical aid in an emergency room staffed by potentially overtired doctors, patients could get an examination and diagnosis from an expert system with instant access to all medical knowledge ever collectedand get surgery performed by a tireless robot with a perfectly steady hand. Legal advice could come from an all-knowing legal database; investment advice could come from a market-prediction system.

Perhaps one day, all human jobs will be done by machines. Even my own job could be done faster, by a large number of machines tirelessly researching how to make even smarter machines.

In our current society, automation pushes people out of jobs, making the people who own the machines richer and everyone else poorer. That is not a scientific issue; it is a political and socioeconomic problem that we as a society must solve. My research will not change that, though my political selftogether with the rest of humanitymay be able to create circumstances in which AI becomes broadly beneficial instead of increasing the discrepancy between the one percent and the rest of us.

There is one last fear, embodied by HAL 9000, the Terminator, and any number of other fictional superintelligences: If AI keeps improving until it surpasses human intelligence, will a superintelligence system (or more than one of them) find it no longer needs humans? How will we justify our existence in the face of a superintelligence that can do things humans could never do? Can we avoid being wiped off the face of the Earth by machines we helped create?

The key question in this scenario is: Why should a superintelligence keep us around?

I would argue that I am a good person who might have even helped to bring about the superintelligence itself. I would appeal to the compassion and empathy that the superintelligence has to keep me, a compassionate and empathetic person, alive. I would also argue that diversity has a value all in itself, and that the universe is so ridiculously large that humankind's existence in it probably doesn't matter at all.

But I do not speak for all humankind, and I find it hard to make a compelling argument for all of us. When I take a sharp look at us all together, there is a lot wrong: We hate each other. We wage war on each other. We do not distribute food, knowledge, or medical aid equally. We pollute the planet. There are many good things in the world, but all the bad weakens our argument for being allowed to exist.

Fortunately, we need not justify our existence quite yet. We have some time, somewhere between 50 and 250 years, depending on how fast AI develops. As a species we can come together and come up with a good answer for why a superintelligence shouldn't just wipe us out. But that will be hard: saying we embrace diversity and actually doing it are two different things, as are saying we want to save the planet and successfully doing so.

We all, individually and as a society, need to prepare for that nightmare scenario, using the time we have left to demonstrate why our creations should let us continue to exist. Or we can decide to believe that it will never happen, and stop worrying altogether. But regardless of the physical threats superintelligences may present, they also pose a political and economic danger. If we don't find a way to distribute our wealth better, we will have fueled capitalism with artificial intelligence laborers serving only the very few who possess all the means of production.

This article was originally published on The Conversation. Read the original article.


Artificial Intelligence And Its Impact On Legal Technology (Part II) – Above the Law

Artificial intelligence (AI) is quickly coming into its own in terms of use by the legal industry. We are on the cusp of a revolution in the legal profession led by the adoption of AI throughout the industry, but in particular by in-house lawyers. Much like how email changed the way we do business every day, AI will become ubiquitous, an indispensable assistant to practically every lawyer.

But what is the future of AI in the legal industry? A bigger question is whether AI will actually replace lawyers, as seems to be implied above (a scary thought if you are new to the profession vs. an old-timer like me). And if so, are there ethical or moral dilemmas that should be considered regarding AI and the legal industry?

When considering the future of AI in the industry, a few things are for sure. First, those who do not adopt and embrace the change will get left behind in some manner; second, those who do embrace AI will ultimately find themselves freed up to do the two things there always seems to be too little time for: thinking and advising. Welcome to the second of a four-part series on AI; this article discusses whether lawyers should be concerned about AI replacing them.

Robot Lawyer Army?

In the first installment of this series, I wrote about what AI is, how it works, and its general impact on the legal industry and legal technology. In this article, I will tackle the question of whether AI will replace lawyers.

I am sorry to disappoint anyone who had visions of unleashing a horde of mechanical robot lawyers to lay waste to their enemies via a mindless rampage of bone-chilling logic and robo-litigation. That isn't happening, but it does paint a pretty cool picture of the robot lawyer army I've always wanted. Instead, three things are most likely to happen.

1) Some legal jobs will be eliminated; those that involve the sole task of searching documents or other databases for information and coding that information are most at risk.

2) Jobs will be created, including managing and developing AI (legal engineers), writing algorithms for AI, and reviewing AI-assisted work product (because lawyers can never concede the final say or the provision of legal advice to AI).

3) Most lawyers will be freed from the mundane task of data gathering for the value-added tasks of analyzing results, thinking, and advising their clients. These are roles that will always require the human touch. AI will just be a tool to help lawyers do all of this better, faster, and more cost-effectively.

For more about the future of AI for in-house counsel, see the full version of this article. Or visit the larger Legal Department 2025 Resource Center from Thomson Reuters.

Sterling Miller spent over 20 years as in-house counsel, including being general counsel for Sabre Corporation and Travelocity. He currently serves as Senior Counsel for Hilgers Graben PLLC focusing on litigation, contracts, data privacy, compliance, and consulting with in-house legal departments. He is CIPP/US certified in data privacy.


Six disturbing predictions about how artificial intelligence will transform life on Earth by 2050 – Mirror.co.uk

We all know that the world is being transformed by technology, but a leading artificial intelligence expert has made a series of predictions that put these changes into harsh perspective.

In his new book, It's Alive!: Artificial Intelligence from the Logic Piano to Killer Robots, Professor Toby Walsh paints a horrifying picture of life in 2050.

From autonomous vehicles to robot managers, humans will be at the mercy of artificially intelligent computers that will control almost every aspect of our lives.

As people's role in society diminishes, they will retreat further and further into virtual worlds, where they will be able to live out their darkest fantasies without fear of recrimination.

"By 2050, the year 2000 will look as quaintly old-fashioned as the horse-drawn era of 1900 did to people in 1950," said Walsh, who is professor of artificial intelligence at the University of New South Wales in Australia.

Here are some of his most bone-chilling predictions about life in 2050:

Work is already underway to build cars that can drive themselves, but by 2050, Professor Walsh predicts that humans will be banned from driving altogether.

The vast majority of road accidents are caused by human error, he argues, so autonomous vehicles will make the roads inherently safer and less congested.

As self-driving cars become more ubiquitous, most people will lose their driving skills, and street parking will disappear.

Eventually, ships, planes and trains will also become autonomous, allowing goods to be transported all over the world without human intervention.

"If we can take the human out of the loop, we can make our roads much safer," said Professor Walsh.

As computers become more "intelligent", AI systems will increasingly manage how you work - from scheduling your tasks and approving holidays to monitoring and rewarding your performance.

They could even be put in charge of hiring and firing employees, looking at qualifications and skill sets to match people with jobs.

Professor Walsh points out that matching people with jobs is no more complicated than matching people with each other - something that we already rely on dating sites to do for us.

However, he admits there are some decisions that machines should not be allowed to make.

"We will have to learn when to say to computers: 'Sorry, I can't let you do that,'" he said.

If you're not answering to a computer, then you've probably been replaced by one.

Robots are already replacing humans in many factories and customer service roles, but by 2050, the same technology will have eliminated many middle-class "white collar" jobs.

The news will be written by artificially intelligent computers and presented by avatars and chatbots, which will tailor content to viewers' personal preferences.

Robots will surpass athletes on the sports field, exhibiting greater speed, accuracy and stamina than their human counterparts, and data scientists will be some of the best paid members of football clubs.

Even doctors will be largely replaced by AI physicians that will continually monitor your blood pressure, sugar levels, sleep and exercise, and record your voice for signs of a cold, dementia or a stroke.

"Our personal AI physician will have our life history, it will know far more about medicine than any single doctor, and it will stay on top of all the emerging medical literature," Professor Walsh said.

As society becomes less and less reliant on human input, people will become increasingly absorbed in virtual worlds that merge the best elements of Hollywood and the computer games industry.

Viewers will have complete control over the course of events, and avatars can be programmed to act and talk like anyone they choose - including long-dead celebrities like Marilyn Monroe.

However, there will be increasing concern about the seductive nature of these virtual worlds, and the risk of addicts abandoning reality in order to spend every waking moment in them.

They could also give people the opportunity to behave in distasteful or illegal ways, or live out their darkest fantasies without fear of recrimination.

"This problem will likely trouble our society greatly," Professor Walsh said. "There will be calls that behaviours which are illegal in the real world should be made illegal or impossible in the virtual."

Governments already rely heavily on hacking and cyber surveillance to gather intelligence about foreign enemies, but they will increasingly use these tools to carry out attacks.

Artificial intelligence will quickly surpass human hackers, and the only defence will be other AI programs, so governments will be forced to enter a cyber arms race with other nation states.

As these tools make their way onto the dark web and into the hands of cyber criminals, they will also be used to attack companies and financial institutions.

"Banks will have no choice but to invest more and more in sophisticated AI systems to defend themselves from attack," said Professor Walsh.

Humans will become further and further removed from these crimes, making tracking down the perpetrators increasingly difficult for law enforcement authorities.

If you thought that death would be sweet relief from this dystopian vision of the future, you can think again.

In 2050, humans will live on as artificially intelligent chatbots after they die, according to Professor Walsh.

These chatbots will draw from social media and other sources to mimic the way you talk, recount the story of your life and comfort your family when you die.

Some people might even give their chatbot the task of reading their will, settling old scores, or relieving grief through humour.

This will of course raise all kinds of ethical questions, such as whether humans have a right to know if they're interacting with a computer rather than a real person, and who can switch off your bot after you die.

"It will be an interesting future," said Professor Walsh.


Microsoft’s New Artificial Intelligence Mission Is Nothing To Dismiss – Seeking Alpha

Just when you thought you were getting to know Microsoft (MSFT), it goes and changes personalities.

Actually, the new-and-improved Microsoft has been making itself known for quite some time with a minimal amount of fanfare - it only became officially official last week. Given that the shift is apt to make an increasingly big difference in the company's results, though, fans and followers of the company would be wise to take a closer look at what Microsoft has become.

And what is this new focal point for CEO Satya Nadella? Take it with a grain of salt, because corporate slogans these days are as much a sales pitch as they are an ambition. But, per the company's most recent annual filing with the SEC, Microsoft is now an "AI (artificial intelligence) first" outfit. Previous annual reports had suggested its focal point was mobile... a mission that ended with mixed results. While Microsoft has a strong presence in the mobility market in the sense that many of its cloud services are accessible via mobile devices, Microsoft's smartphone dreams turned into nightmares.

It does beg the question though - what exactly does an AI-focused Microsoft look like when artificial intelligence was never a priority before?

The AI acquisitions Microsoft has made to date were touted by the company, though in light of the fact that AI is now the big new hot button, they weren't touted enough (and certainly not framed within the context of its new mission).

As a quick recap, here are the more prescient artificial intelligence deals Nadella has made:

1. SwiftKey

Back in early 2016, Microsoft ponied up a reported $250 million to get its hands on a technology that predicts what word you're typing into your smartphone or tablet before you have to tap all the letters out. Some find it annoying because the word it guesses isn't always the one you want... a problem solved just by continuing to type. Others love the idea of not being forced to finish typing a word.

At first blush it seems superfluous, and truth be told, it is. It's not quite as meaningless as some have made it out to be though, in that users have largely come to expect such a feature from most of their electronics.

2. Genee

Just a few months after acquiring SwiftKey last year, it bought chatbot specialist Genee, primarily to make its office productivity programs more powerful and easy to use. Users can simply speak into their computer to manipulate apps like Office 365. Its claim to fame is the ability to schedule meetings on a calendar just by understanding the context of an e-mail.

The tool in itself isn't the proverbial "killer app." In fact, Microsoft shut down Genee shortly after it bought it. It just didn't shut it down until after ripping out the most marketable pieces of the platform and adding them to its bigger chatbot machine.

Microsoft has struggled with AI chat in the past - like Tay, which quickly learned to be racist - but it's getting very, very good at conversational instructions. And the establishment of a 100-member department aimed solely at improving artificial intelligence strongly suggests the company is going to keep working on its chat technologies until it gets them right.

3. Maluuba

It's arguably the most game-changing artificial intelligence acquisition Microsoft has made to date, even though it's the furthest away from being useful.

Maluuba was the Canadian artificial intelligence outfit Microsoft bought in January of this year. It was billed as a general AI company, which could mean a lot of different things. For Maluuba though, that meant building systems that could read (and comprehend) words, understand dialog, and perform common-sense reasoning.

A completely impractical but amazingly impressive use of that technology: Maluuba's platform allowed a computer to beat the notoriously difficult Ms. Pac-Man video game for the Atari 2600. Even more interesting is how it happened. Microsoft essentially assembled a committee of AI agents with different priorities. One agent's priority was to score as many points as possible. Another's was to eat the game's ghosts when they were edible. Yet another's was to avoid those ghosts. All of the different 'committee' members negotiated each move Ms. Pac-Man made at any given time, based on the risk or reward of the particular (and ever-changing) scenario in the game.

The end result: The artificial intelligence achieved the best-ever known score for the game.
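The "committee" arrangement described above can be sketched in miniature. This is only an illustration of the idea, not Maluuba's actual system (which used many learned value functions rather than hand-written rules); every name, heuristic and weight below is invented for the example.

```python
# A loose sketch of the "committee of agents" idea described above.
# Each committee member scores every possible move from its own
# priority; the move with the best combined score wins.

def points_agent(state, action):
    """Prefers moves toward the nearest pellet (scoring points)."""
    return 1.0 / (1 + state["pellet_dist"][action])

def hunt_agent(state, action):
    """Prefers moves toward ghosts, but only while they are edible."""
    if not state["ghosts_edible"]:
        return 0.0
    return 1.0 / (1 + state["ghost_dist"][action])

def avoid_agent(state, action):
    """Strongly penalizes moves toward dangerous ghosts."""
    if state["ghosts_edible"]:
        return 0.0
    return -5.0 / (1 + state["ghost_dist"][action])

COMMITTEE = [points_agent, hunt_agent, avoid_agent]

def negotiate_move(state, actions=("up", "down", "left", "right")):
    """Every member scores every action; the highest total wins."""
    def total(action):
        return sum(agent(state, action) for agent in COMMITTEE)
    return max(actions, key=total)

# A dangerous ghost sits one step to the left, next to a pellet:
# the avoidance agent outvotes the points agent and steers away.
state = {
    "pellet_dist": {"up": 3, "down": 5, "left": 1, "right": 4},
    "ghost_dist": {"up": 9, "down": 8, "left": 1, "right": 7},
    "ghosts_edible": False,
}
```

Note how the same board produces the opposite decision once the ghosts become edible: the hunting agent then pulls the committee toward the ghost instead of away from it.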

It remains to be seen how that premise will be applied in the future, but it's got a lot of potential. It's one of the few artificial intelligence platforms that had to reason its way through a problem created by an outside, third-party source rather than one that was built from the ground up to perform a very specific, limited function.

Getting a bead on the nascent artificial intelligence market is tough. There's no shortage of outlooks. There's just a shortage of history and understanding about what artificial intelligence really is and how it can be practically commercialized.

To the extent AI's potential can be quantified though, PricewaterhouseCoopers thinks it will create an additional $16 trillion worth of commerce over the course of the coming ten years... that's above and beyond what would have been created without it.

In other words, that's not the likely market size for artificial intelligence software, hardware and services - that figure will be smaller. Tractica thinks the actual amount of spending on AI services and hardware will be on the order of $16 billion by 2025... a number that seems reasonable and rational, though also somehow seems small relative to the value artificial intelligence will have to enterprises. In fact, others think (when factoring in the underlying software and related services that will mature with AI) the artificial intelligence market will be worth $59 billion by 2025.

Whatever's in the cards, it's a worthy market to address, and Microsoft is surprisingly well equipped to run the race against its peers and rivals. Though meaningful revenue is still a few years off, the new Microsoft mantra is one that matters, in that it's a viable growth engine for the company.

In other words, take Microsoft's AI ambitions as seriously as you should have taken its cloud-computing ambitions a couple of years ago.

Disclosure: I/we have no positions in any stocks mentioned, and no plans to initiate any positions within the next 72 hours.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.


Artificial Intelligence will lead to the human soul, not destroy it | Fox … – Fox News

Elon Musk famously equated artificial intelligence with "summoning the demon" and sounds the alarm that AI is advancing faster than anyone realizes, posing an existential threat to humanity. Stephen Hawking has warned that AI could take off and leave the human race, limited by evolution's slow pace, in the dust. Bill Gates counts himself in the camp concerned about super intelligence. And, although Mark Zuckerberg is dismissive about AI's potential threat, Facebook recently shut down an AI engine after reportedly discovering that it had created a new language humans can't understand.

Concerns about AI are entirely logical if all that exists is physical matter. If so, it'd be inevitable that AI -- designed by our intelligence but built on a better platform than biochemistry -- would exceed human capabilities that arise by chance.

In fact, in a purely physical world, fully-realized AI should be recognized as the appropriate outcome of natural selection; we humans should benefit from it while we can. After all, sooner or later, humanity will cease to exist, whether from the sun running out or something more mundane, including AI-driven extinction. Until then, wouldn't it be better to maximize human flourishing with the help of AI rather than forgoing its benefits in hopes of extending humanity's end date?

As possible as all this might seem, in actuality, what we know about the human mind strongly suggests that full AI will not happen. Physical matter alone is not capable of producing whole, subjective experiences, such as watching a sunset while listening to sea gulls, and the mechanisms proposed to address the known shortfalls of matter vs. mind, such as emergent properties, are inadequate and falsifiable. Therefore, it is highly probable that we have immaterial minds.

Deep down, we all know we're more than biological robots. That's why almost everyone rebels against materialism's implications. We don't act as though we believe everything is ultimately meaningless.

Granted, forms of AI are already achieving impressive results. These use brute force, huge and fast memory, rules-based automation, and layers of pattern matching to perform their extraordinary feats. But this processing is not awareness, perception, feeling or cognition. The processing doesn't go beyond its intended activities even if the outcomes are unpredictable. Technology based on this level of AI will often be quite remarkable and definitely must be managed well to avoid dangerous repercussions. However, in and of itself, this AI cannot lead to a true replication of the human mind.

Full AI -- that is, artificial intelligence capable of matching and perhaps exceeding the human mind -- cannot be achieved unless we discover, via material means, the basis for the existence of immaterial minds, and then learn how to confer that on machines. In philosophy the underlying issue is known as the qualia problem. Our awareness of external objects and colors; our self-consciousness; our conceptual understanding of time; our experiences of transcendence -- whether simple awe in front of beauty or mathematical truth, or our mystical states -- all clearly point to something that is qualitatively different from the material world. Anyone with a decent understanding of physics, computer science and the human mind ought to recognize this, especially those most concerned about AI's possibilities.

That those who fear AI don't see its limitations indicates that even the best minds fall victim to their biases. We should be cautious about believing that exceptional achievements in some areas translate to exceptional understanding in others. For too many -- including some in the media -- the mantra "question everything" applies only within certain boundaries. They never question methodological naturalism -- the belief that there is nothing that exists outside the material world -- which blinds them to other possibilities. Even with what seems like more open-minded thinking, some people seem to suffer from a lack of imagination or will. For example, Peter Thiel believes that the human mind and computers are deeply different yet doesn't acknowledge that this implies the mind comprises more than physical matter. Thomas Nagel believes that consciousness could not have arisen via materialistic evolution yet explicitly limits the implications of that because he doesn't want God to exist.

Realizing that we have immaterial minds, i.e. genuine souls, is far more important than just speculating on AI's future. Without immaterial minds, there is no sustainable basis for believing in human exceptionalism. When human life is viewed only through a materialistic lens, it gets valued based on utility. No wonder the young "nones" -- young Americans who don't identify with a religion -- think their lives are meaningless and some begin to despair. It is time to understand that evolution is not a strictly material process but one in which the immaterial mind plays a major role in the adaptation and selection of humans, and probably of all sentient creatures.

We're spiritual creatures, here by intent, living in a world where the supernatural is the norm; each and every moment of our lives is our souls in action. Immaterial ideas shape the material world and give it true meaning, not the other way around.

In the end, the greatest threat that humans face is a failure to recognize what we really are.

If we're lucky, what people learn in the pursuit of full AI will lead us to the rediscovery of the human soul, where it comes from, and the important understanding that goes along with that.

Bruce Buff is a management consultant and the author of the acclaimed scientific-spiritual thriller The Soul of the Matter (Simon & Schuster).


Explainer: What is artificial intelligence? – ABC Online

Updated August 07, 2017 12:02:45

Artificial intelligence has jumped from sci-fi movie plots into mainstream news headlines in just a couple of years.

And the headlines are often contradictory. AI is either a technological leap into greater prosperity or mass unemployment; it will either be our most valuable servant or terrifying master.

But what is AI, how does it work, and what are the benefits and the concerns?

AI is a computer system that can do tasks that humans need intelligence to do.

"An intelligent computer system could be as simple as a program that plays chess or as complex as a driverless car," Mary-Anne Williams, professor of social robotics at the University of Technology, Sydney, said.

A driverless car, for example, relies on multiple sensors to understand where it is and what's around it. These include speed, location, direction and 360-degree vision. Based on those inputs, among others, the "intelligent" computer system controls the car by deciding, as a human would, when to steer and when to accelerate or brake.

Then there's machine learning, a subset of AI, which involves teaching computer programs to learn by finding patterns in data. The more data, the more the computer system improves.

"Whether it's recognizing objects, identifying people in photos, reading lung scans or transcribing spoken Mandarin, if we pick a narrow task like that [and] we give it enough data, the computer learns to do it as well as, if not better than, us," University of New South Wales professor of artificial intelligence Toby Walsh said.

AI doesn't have to sleep or make the same mistake twice. It can also access vast troves of digital data in seconds. Our brains cannot.
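The "finding patterns in data" idea described above can be made concrete with a toy learned model. This is a minimal sketch for illustration only; the fruit measurements are invented, and real systems use far more sophisticated algorithms and far more data.

```python
# A minimal illustration of machine learning as pattern-finding:
# a tiny nearest-neighbour classifier, written from scratch.
# It "learns" by example: given labelled data points, it labels a
# new point with the label of the most similar training example.

def nearest_neighbour(examples, query):
    """Return the label of the training example closest to the query."""
    def distance(a, b):
        # Squared Euclidean distance between two feature tuples.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(examples, key=lambda ex: distance(ex[0], query))
    return label

# (weight in grams, diameter in cm) -> fruit label
training_data = [
    ((150, 7.0), "apple"),
    ((170, 7.5), "apple"),
    ((10, 1.8), "grape"),
    ((12, 2.0), "grape"),
]

print(nearest_neighbour(training_data, (160, 7.2)))  # -> apple
```

The more labelled examples the list contains, the finer the patterns the classifier can pick up, which is exactly the "more data, better system" dynamic described above.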

Yes, probably every day.

AI is in your smart phone; it's there every time you ask a question of iPhone's Siri or Amazon's Alexa. It's in your satellite navigation system and instant translation apps.

AI algorithms recognise your speech, provide search results, help sort your emails and recommend what you should buy, watch or read.

"AI is the new electricity," according to Andrew Ng, former chief scientist at Baidu, one of the leading Chinese web services companies. AI will increasingly be all around you from your phone to your TV, car and home appliances.

Four factors have now converged to push AI beyond games and into our everyday lives and workplaces: computing power, data, algorithms and the people working on the problems.

The term artificial intelligence was first coined in 1956 by US computer scientist John McCarthy. Until recently, the public mostly heard about AI in Hollywood movies like The Terminator or whenever it defeated a human in a competition.

In 1997, IBM's Deep Blue computer beat Russian chess master Garry Kasparov. In 2011, IBM's supercomputer Watson beat human players on the US game show Jeopardy. Last year, Google's AlphaGo beat Go master Lee Sedol.

"We now have the compute power, the data, the algorithms and a lot of people working on the problems," Professor Walsh said.

AI promises spectacular benefits for humanity, including better and more precise medical diagnosis and treatment; relieving the drudgery and danger of repetitive and dehumanising jobs; and super-charging decision making and problem solving.

"Driverless cars could save many, many lives because 95 per cent of accidents are due to human error," Professor Walsh said.

"Many of the problems that are stressing our planet today will be tackled through having better decision making with computers" that access and analyse vast troves of data, he said.

There is a range of concerns.

Experts are famously split on this.

Prominent tech entrepreneurs and scientists such as Elon Musk and Stephen Hawking, among others, warn that AI could reach and quickly surpass humans, transforming into super-intelligence that would render us the second most intelligent species on the planet.

Musk has compared it to "summoning the demon". Scientists call it singularity, "where machines improve themselves almost without end," Professor Walsh said.

Facebook's Mark Zuckerberg accuses Musk of being alarmist. Professor Walsh says we don't yet even fully understand all the facets of human intelligence and there may be limits to how far AI can develop.

He's surveyed 300 of his AI colleagues around the world and most believe that if AI can reach human-level intelligence, it is at least 50 to 100 years away.

If it happens, humanity will likely have already solved most of the problems about whether the machines' values are aligned with ours. "I'm not so worried about that," he says.

The recent push into AI came from big US tech companies such as Google, Facebook, Amazon, Microsoft and Apple. And the US military. What could go wrong?

There's growing concern that these companies are too big and control too much data, which trains the AI algorithms.

China has now also joined the race with plans to dominate the world in AI development by 2030.

There's presently very little national or international regulation around how AI is developed. The Big Tech companies have begun discussing the need for guiding principles to ensure AI is only used for public good.

"One of those is what is the point of AI? It has to be to augment people, to support people, not replace them," Microsoft Australia national technology officer James Kavanagh says.

"Secondly, it has to be democratised. It can't be in the hands of a small number of technology companies.

"Thirdly, it has to be built on foundations of trust. We need to be able to understand any biases in algorithms and how they make decisions."


First posted August 07, 2017 06:02:12


Why Japan will profit the most from artificial intelligence – South China Morning Post

A resident of the Silver Wing Social Care elderly care home in Tokyo's Chuo Ward chats happily to a staff member in the facility's communal area, while in a nearby room another senior is being helped by a rehabilitation specialist to walk again after a fall last month. These workers never take a day off, never complain and don't need to be paid, for they are robots.

Silver Wing Social Care provides a glimpse into the future of Japan -- and indeed other industrialised nations as they follow its path to ageing societies and labour shortages. The company's flagship care facility began using robots to help care for residents four years ago after being selected by the city government as a test project.

Japan is entering uncharted territory for a modern economy. A consistently low birth rate has shrunk the working-age population by around 10 million since its mid-1990s peak, with another 20 million set to disappear from workplaces in the coming decades. The situation is becoming critical, with nearly 1.5 vacancies for every jobseeker and chronic shortages in sectors such as nursing care, manufacturing, construction and parcel delivery.

At a time when the government is pressuring companies to cut infamously long working hours, raise wages and ensure holidays are taken, and in a country still unwilling to countenance mass immigration, robotics and artificial intelligence (AI) look to be the only solutions.

"We tried out various kinds of robots to see which would work best for us. We've gradually increased their use and now have 20 different models operating, including robots for nursing care, rehabilitation, communication and recreation," explains Silver Wing's Yukari Sekiguchi, who oversees the programme.

The company's staff used to regularly injure their backs lifting residents, leading to them being off work or quitting the profession altogether, a major problem given the tight labour market. Workers can now use robots they stand inside to help them do such heavy lifting.

"A lot of people thought that elderly people would be scared or uncomfortable with robots, but they are actually very interested and interact naturally with them. They really enjoy talking to them and their motivation goes up when they use the rehabilitation robots, helping them to walk again more quickly," says Sekiguchi.

Japan may be the best-placed country to cope with the advance of automation -- it's likely to cause less unemployment than elsewhere, given the shortage of workers and lifetime employment practices. Unemployment has fallen to 2.8 per cent and a record-high 97.6 per cent of new university graduates found jobs by the start of the business year in April.

The situation should be a boon for workers, but gains are being distributed unequally. Despite the tight labour market and many companies logging record profits, wage inflation remains stubbornly sluggish.

"There is a shortage of manual workers, but an excess of white-collar workers, especially middle-aged men," says Naohiro Yashiro, a labour economist and dean of the Global Business Department at Showa Women's University in Tokyo.

"The government has set an inflation target, but it's not happening yet. My explanation of this mystery is there is a kind of structural reform going on. The seniority-based wage system, whereby employees' wages in Japan rise rapidly with their age, is not sustainable anymore, with the ageing of the population," says Yashiro, an adviser on labour economics to three prime ministers.

Companies are thus trying to halt automatic salary raises for workers in their 40s and 50s, and increase pay for younger ones, with one largely offsetting the other, according to Yashiro.

It is these mid-level workers who would normally be most at threat from the oncoming wave of robotics, AI and other new technologies. But in Japan, they should be saved from unemployment, if not wage stagnation, according to Dr Martin Schulz, senior economist at the Fujitsu Research Institute.

"Much of the debate about automation squeezing workers out of the labour market is not an issue in Japan. Wages at the lower end won't be squeezed much because automation is costly, so the cheapest workers won't be replaced. At the top end, people with skills are usually helped by digitalisation because they benefit from new systems," says Schulz.

"The squeeze would be at the mid-level. But they are comparatively protected in Japan by labour regulations. So they are not hit as hard as they are in, for example, the UK or US, where we are seeing political disruptions as a result of this," says Schulz, referring to the Brexit vote and election of Donald Trump.

But neither the government's employment reforms nor automation is the solution to Japan's labour problems, according to Toyonori Sugita, owner of Daimaru Seisakusho, a metalworking factory just outside Tokyo.

"If we put up wages and reduced hours as the government is suggesting, we'd go bankrupt. But the shortage of workers in technical industries is terrible now," says Sugita, who is looking at bringing back skilled workers in their 70s.

"Automation isn't the answer either. The type of work that can be automated is going overseas to other Asian countries; work that requires high levels of technical skills is what remains in Japan and can be profitable," says Sugita.

"We need more workers from overseas, from the Philippines and places like that. If the government is going to do something, it should promote that," adds Sugita.

But with advocating mass immigration still seen as political suicide in Japan, the march of the robots looks set to continue.


When Artificial Intelligence and Social Media Marketing Collide

Both artificial intelligence and social media marketing are getting a lot of attention nowadays because of their huge benefits and growth potential. They benefit both businesses and ordinary people in various ways. Investment in artificial intelligence has already been growing, and it is expected to grow by a further 300% or so, according to a prediction from Forrester.

As for social media, more than 2.5 billion people already use social media platforms -- nearly a third of the planet's population. A marketer can reach a large number of potential customers from all over the world through these platforms. Artificial intelligence (AI) already plays a key role in various business sectors, and now it's colliding with social media marketing.

Artificial intelligence still has a long way to go; however, it's advancing at a rapid pace, and its arrival has already begun to reshape social media marketing. Here are the ways artificial intelligence is changing social media marketing.

Some brands need to publish huge volumes of posts every day. These brands also employ plenty of influencers through social media outreach to promote their products. They find it difficult to decide which posts to highlight and which are likely to perform well with their audiences. Because analyzing huge volumes of content is tedious, the process often comes down to guesswork.

Slack bots have been developed to take the guesswork out of this. These bots can predict the likely success of a piece of content and suggest the pieces with the highest chance of doing well. They can also find similar content on social media and show you how it performed.
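The ranking these bots perform can be sketched as a simple scoring step. This is a toy illustration only: a real system would use a model trained on historical engagement data, and every feature, weight and post below is invented for the example.

```python
# Toy sketch of ranking candidate posts by predicted engagement.
# The "model" here is a hand-written score over invented features;
# a real bot would learn these weights from past performance.

def predicted_engagement(post):
    """Score a draft post from simple features (illustrative weights)."""
    score = 0.0
    score += 2.0 if post["has_image"] else 0.0     # images tend to help
    score += 1.5 * post["past_topic_ctr"]          # topic's historical click-through
    score -= 0.01 * max(0, post["length"] - 120)   # penalize overly long posts
    return score

drafts = [
    {"id": "a", "has_image": True, "past_topic_ctr": 0.8, "length": 90},
    {"id": "b", "has_image": False, "past_topic_ctr": 2.0, "length": 300},
    {"id": "c", "has_image": True, "past_topic_ctr": 0.2, "length": 140},
]

# Highest predicted engagement first -- these are the posts to highlight.
best_first = sorted(drafts, key=predicted_engagement, reverse=True)
print([p["id"] for p in best_first])
```

Swapping the hand-written scoring function for a trained model is what turns this sketch into the kind of prediction tool described above.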

Facebook, the most popular social media platform in the world today, is focusing heavily on the development of artificial intelligence. It recently developed a facial recognition feature, and this feature is more than just a tool to enhance Facebook's tagging function.

Brands can use this feature in various ways when developing social media marketing strategies, further increasing the reach and success of their campaigns. For instance, hotels, restaurants, clothing stores and others can offer coupons to followers who post a picture of themselves at their venue. With image posts becoming ever more popular, this feature can help brands stand out.

Many creative social media marketers are excellent at producing content. However, releasing content, building schedules, maintaining content and analyzing performance is no easy task. This is another reason artificial intelligence is so crucial for social media marketers: it can relieve much of the pressure of analyzing and managing content.

Not only will AI take the pressure off marketers, it will also help them grow as marketers. Machine learning and other AI tools can analyze competitors' performance, your historical content and its performance, and much more to help you learn. These tools also give you an idea of what consumers want to see, which helps make every campaign effective. This can also help you publish better sponsored blog posts that reach more people with content they want to see.

According to research, the majority of customers now want to interact with businesses via messaging, because messaging a brand is far easier than calling it. Customers also want businesses to respond as quickly as possible. Responding quickly to a large number of customer queries isn't possible for humans alone, and this is where artificial intelligence plays its role.

Social media marketers are responsible for engaging with customers once their posts start drawing plenty of queries. Artificial intelligence can help them prioritize customers' queries, determine whether messages come from trolls or real users, and much more. All these tools help social media marketers serve their customers better, increasing the chance of conversion.

Social media marketers need to listen to their followers to plan their next posts and shape their overall strategy. The only way to find out what customers want is to collect, interpret and understand the data. The problem is that a massive amount of data is uploaded and downloaded every day, making it impossible for human beings to interpret it all correctly.

AI tools can mine the data collected across social media platforms for valuable insights into customer tastes and preferences. In the near future, brands may even be able to identify which of their followers are wearing their T-shirts or using their products by analyzing images and videos, helping marketers create more personalized campaigns.

By now, you should have a sense of how artificial intelligence is changing the picture of social media marketing, and why traditional strategies need updating if your campaigns are to succeed. If you don't build artificial intelligence into your social media marketing strategy, you're likely to see little return, because your competitors will be using AI to gain a competitive advantage.

This post is part of our contributor series. The views expressed are the author's own and not necessarily shared by TNW.


How Facebook’s AI Bots Learned Their Own Language and How to Lie – Newsweek

Facebook has been working on artificial intelligence that claims to be great at negotiating, makes up its own language and learns to lie.

OMG! Facebook must be building an AI Trump! Art of the deal. Biggest crowd ever. Covfefe. Beep-beep!

This AI experiment comes out of a lab called Facebook Artificial Intelligence Research. It recently announced breakthrough chatbot software that can ruthlessly negotiate with other software or directly with humans. Research like that usually gets about as much media attention as a high school math bee, but the FAIR project points toward a bunch of intriguing near-term possibilities for AI while raising some creepy concerns -- like whether it will be kosher for a bot to pretend it is human once bots get so good you can't tell whether they're code or carbon.


AI researchers around the world have been working on many of the complex aspects of negotiation because it is so important to technology's future. One of the long-held dreams for AI, for example, is that we'll all have personal bot-agents we can send out into the internet to do stuff for us, like make travel reservations or find a good plumber. Nobody wants a passive agent that pays retail. You want a deal. Which means you want a badass bot.

There are so many people working on negotiating AI bots that they even have their own Olympics -- the Eighth International Automated Negotiating Agents Competition gets underway in mid-August in Melbourne, Australia. One of the goals is to encourage design of practical negotiation agents that can proficiently negotiate against unknown opponents in a variety of circumstances. One of the leagues in the competition is a Diplomacy strategy game. AI programmers are anticipating the day when our bot wrangles with Kim Jong Un's bot over the fate of the planet while Secretary of State Rex Tillerson is out cruising D.C. on his Harley.

Artificial intelligence is no longer a futuristic concept. Bots can already debate, negotiate -- and lie -- like humans. Isaac Lawrence/AFP/Getty

As the Facebook researchers point out, today's bots can manage short exchanges with humans and simple tasks like booking a restaurant, but they aren't able to have a nuanced give-and-take that arrives at an agreed-upon outcome. To do that, AI bots have to do what we do: make a mental model of the opponent, anticipate reactions, read between the lines, communicate in fluent human language and even throw in a few bluffs. Facebook's AI had to figure out how to do those things on its own: the researchers wrote machine-learning software, then let it practice on both humans and other bots, constantly improving its methods.

This is where things got a little weird. First of all, most of the humans in the practice sessions didn't know they were chatting with bots. So the day of identity confusion between bots and people is already here. And then the bots started getting better deals as often as the human negotiators. To do that, the bots learned to lie. "This behavior was not programmed by the researchers," Facebook wrote in a blog post, "but was discovered by the bot as a method for trying to achieve its goals." Such a trait could get ugly, unless future bots are programmed with a moral compass.

The bots ran afoul of their Facebook overlords when they started to make up their own language to do things faster, not unlike the way football players have shorthand names for certain plays instead of taking the time in the huddle to describe where everyone should run. It's not unusual for bots to make up a lingo that humans can't comprehend, though it does stir worries that these things might gossip about us behind our backs. Facebook altered the code to make the bots stick to plain English. "Our interest was having bots who could talk to people," one of the researchers explained.

Outside of Facebook, other researchers have been working to help bots comprehend human emotions, another important factor in negotiations. If you're trying to sell a house, you want to model whether the prospective buyer has become emotionally attached to the place so you can crank up the price. Rosalind Picard of the Massachusetts Institute of Technology has been one of the leaders in this kind of research, which she calls affective computing. She even started a company, Affectiva, that's training AI software in emotions by tracking people's facial expressions and physiological responses. It has been used to help advertisers know how people are reacting to their commercials. One Russian company, Tselina Data Lab, has been working on emotion-reading software that can detect when humans are lying, potentially giving bot negotiators an even bigger advantage. Imagine a bot that knows when you're lying, but you'll never know when it is lying.

While many applications of negotiating bots, like those personal-assistant AI agents, sound helpful, some seem like nightmares. For instance, a handful of companies are working on debt-collection bots. Describing his company's product, Ohad Samet, CEO of debt-collection AI maker TrueAccord, told American Banker, "People in debt are scared, they're angry, but sometimes they need to be told, 'Look, this is the debt and this is the situation, we need to solve this.' Sometimes being too empathetic is not in the consumer's best interest." It sounds like his bots are going to negotiate by saying, "Pay up, plus 25 percent compounded daily, or we make you part of a concrete bridge strut."

Put all of these negotiation-bot attributes together and you get a potential monster: a bot that can cut deals with no empathy for people, says whatever it takes to get what it wants, hacks language so no one is sure what it's communicating, and can't be distinguished from a human being. If we're not careful, a bot like that could rule the world.

More:

How Facebook's AI Bots Learned Their Own Language and How to Lie - Newsweek

Artificial Intelligence and Internal Audit – HuffPost

Auditing is about analyzing, being able to collect information around the audited subject and understanding its connections to other relevant subjects or areas. Going forward, auditors will not only uncover issues and errors, but will also provide solutions. This means that the reports from internal audits will not only list errors and process flaws, but also potential solutions to issues in collaboration with the experts from the audited area...

In most cases, the issues addressed by the auditors are known to the audited area, but in the day-to-day context of business activities, these issues are mostly not considered urgent. In the future, audit needs to adapt its approach to generate a benefit for the audited area too. The most important task of an internal auditor is to be able to analyze the collected information, while the questioning part of an audit can be done by a junior auditor. By repeatedly asking "why?", an auditor can collect large amounts of information, which helps to understand the entire landscape around a subject. It enables the auditor to evaluate the facts and make assessments.

When I was working as an internal auditor, I was involved in a project that was searching for an early warning system using available technology. We were tasked to ask simple questions and to evaluate the collected facts about a particular subject. In contrast to Eliza, which became famous decades ago as a revolutionary IT solution, the operating system featured in the movie HER did not provide a revolutionary new insight into the latest AI (Artificial Intelligence) technology.

Humans can differentiate between a conversation with a human and a conversation with a robot. Furthermore, they are able to make jokes, write poetry or recognise a lie by a person's voice or facial expressions. Can AI such as Eliza take over the activity of internal auditing? As mentioned, by asking the simplest questions, almost everybody can access information. How intelligent must AI become in order to be able to act as an internal auditing system, and what would be the role of humans in this process? I think that even with simple artificial interaction software in place, the interview part, as well as the structuring of the collected information, can be taken over by a computer. The collected information can help clarify subjects and draw conclusions about the problem.

Here is an example of how a simple question can be asked to collect all the needed information about a particular issue. The issue: "I get up early in the morning."

AI: Why? Answer: Because I like the early morning energy and silence.

AI: Why do you like silence? Answer: Because, if it's quiet, I get into a different state of mind with little effort.

AI: Why do you need a different state of mind? Answer: Because in a different state of mind I can see ordinary things from a new perspective.

AI: Why do you need a new perspective? Answer: Because different perspectives can reveal new solutions to problems.

AI: Why? Answer: Because new solutions will give me the ability to better solve outstanding issues.

AI: Why? Answer: Because by better solving my issues I have a better day.

In practice, questions can be chosen in a way that the person being interviewed does not discover that the questions are asked without the answers being listened to.
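The "why" chain above is mechanical enough to automate. Here is a minimal, hypothetical sketch of such an Eliza-style interviewer; all names are invented for illustration, and no real audit tooling is implied:

```python
# Minimal sketch of an Eliza-style "why" interviewer, as described above.
# It asks "Why?" repeatedly and records the chain of answers, without
# needing to understand any of them.

def interview(first_issue, answer_fn, max_depth=5):
    """Repeatedly ask why and collect the chain of answers as a transcript."""
    transcript = [("Issue", first_issue)]
    prompt = first_issue
    for _ in range(max_depth):
        question = f"Why? (regarding: {prompt})"
        answer = answer_fn(question)
        if not answer:  # an empty answer ends the interview
            break
        transcript.append((question, answer))
        prompt = answer
    return transcript

# Usage: canned answers stand in for a human interviewee.
canned = iter([
    "Because I like the early morning energy and silence.",
    "Because, if it's quiet, I get into a different state of mind.",
    "",
])
log = interview("I get up early in the morning.", lambda q: next(canned))
```

As the text notes, the questions never depend on the content of the answers, which is exactly why the interviewee may not notice that nothing is actually being "listened to."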

After the financial crisis, the area that would benefit the most from the introduction of AI into its processes is internal audit. This way, existing resources can be used more efficiently and it would be possible to audit more areas in a shorter period of time. In the future, internal audit will use software like Eliza to interview experts from all areas almost monthly, and will be able to collect information by setting up an early warning system for reporting that searches for critical words.

Areas with the most critical words will be audited with the most urgency. The interview can also include hidden checks to ensure that the person understands the answers being given and to validate their truthfulness. This will help to control more efficiently, without any additional resources. Furthermore, issues can be found more quickly, and potential losses can be detected much earlier, before they cause damage to the organization...
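The "critical words" triage described above can be sketched in a few lines. This is a hedged illustration only: the word list and transcripts are invented, and a real system would need stemming, phrase matching and calibration.

```python
# Sketch of the early-warning idea: rank audited areas by how often
# flagged terms appear in their interview transcripts.

CRITICAL_WORDS = {"loss", "delay", "breach", "unreconciled"}

def risk_score(transcript: str) -> int:
    """Count occurrences of critical words in a transcript."""
    words = transcript.lower().split()
    return sum(words.count(w) for w in CRITICAL_WORDS)

transcripts = {
    "payments": "minor delay in settlement and one unreconciled account",
    "hr": "routine hiring updates with no issues reported",
}

# Areas with the highest scores would be audited first.
ranked = sorted(transcripts, key=lambda a: risk_score(transcripts[a]),
                reverse=True)
```

The ranking, not the raw counts, is what matters here: it lets a small audit team direct its attention before a periodic audit cycle would have surfaced the issue.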

Source: Banks of the Future, by Ella Thuiner; Published by Springer, 2015


Read more:

Artificial Intelligence and Internal Audit - HuffPost

Can artificial intelligence help create jobs? – RCR Wireless News

The fourth industrial revolution

As artificial intelligence is deployed in the realm of customer service, telecom companies are showing increased interest in a number of these tools. As with previous industrial revolutions, many worry that these technological innovations are weeding out human jobs. What many do not consider is the kinds of jobs A.I. can create.

But what exactly is A.I.? To begin with, it's more than automation. Automation refers to computers or programs capable of performing repetitive human tasks, but that doesn't mean automation itself is intelligent. By contrast, A.I. is an effort to enable computers to perform tasks that demand the ability to reason, solve problems, perceive and understand language.

There are three key positions advancements in A.I. could open: trainers, explainers and sustainers. Trainers teach A.I. algorithms how to mirror human behavior, and keep language-processing and translation errors to a minimum. Explainers serve as the middlemen between technologies and industry leaders, communicating the intricacies of A.I. algorithms to nontechnical staff. And sustainers hold A.I. systems to legal and ethical norms.

"As the maturity of A.I. moves out of academia, which it's still kind of on the edge of, and to commercially hardened software and capability, I think you'll see some of these data science roles that you hear everybody hiring for morph into their ability to adapt the products that are in the market to their specialty needs," explained JC Ramey, CEO of DeviceBits. "And so that will create higher tech jobs, and most of those should be domestic based on where we see a lot of the hiring for the data science groups that we work with."

In terms of higher-tech jobs, chatbots, for instance, are answering basic tier-one calls at off-shore call centers instead of live agents. Technical questions are forwarded to tier two, where the customer can talk to a person. This may eliminate several off-shore jobs for tier-one calls, but it could provide companies with the means to invest in more tier-two jobs. Ramey said he believes many of these jobs could be based in the U.S.

Technocrats have long pointed out how automation can help workers take on more fulfilling tasks. But A.I. extends beyond automation. According to a survey of 352 A.I. researchers, there is a 50% chance A.I. will outperform humans at all tasks within 45 years, and that all human jobs will be automated within 120 years. The real question isn't whether A.I. can create jobs, but whether it can outmatch the number of jobs it takes.

"I think this retooling will scare a lot of people, and there are some people who will not be able to make the shift," said Ramey, "but the machinery and ecosystem that it's creating at the same time creates a completely different market of jobs than what's available today."

The fruits of A.I. are discussed more than its limitations. Facebook, for instance, had to put efforts to build a chatbot for Messenger on hold after its bots hit a 70% failure rate. No budding technology is without glitches. However, the acceptable failure rate for these projects has yet to be clearly defined, and defining it would help inform whether a technology is worth a long-term investment.

"I think knowledge engineering is the biggest limitation," said Ramey. "Today, people think it is the silver bullet. I think everyone is thinking a bot is an A.I., but the reality is the knowledge engineering that has to happen underneath to give that bot a starting point, and how you train that bot over time, is still the big gap, and that is the limitation that we see as a big opportunity in the market."

Risks versus benefits aside, several tech giants like Apple, Facebook, Google and IBM believe A.I. has a future worth investing in. The telecom ecosystem will likely absorb A.I. tools as it becomes more complex. "I think we will look back in ten years and realize A.I. created a whole new sector for us and gave us another bump like the dot-com boom did," said Ramey.

Visit link:

Can artificial intelligence help create jobs? - RCR Wireless News

How artificial intelligence can help deliver better search results – TechRadar

Google has become very interested in artificial intelligence in recent years, and particularly its applications for regular people. For example, here's a load of experiments that it's running involving machine learning.

Now, however, researchers at the Texas Advanced Computing Center have shown how artificial intelligence techniques can also deliver better search engine results. They've combined AI, crowdsourcing and supercomputers to develop a better system for information extraction and classification.

At the 2017 Annual Meeting of the Association for Computational Linguistics in Vancouver this week, associate professor Matthew Lease led a team presenting two papers that described a new kind of information retrieval system.

"An important challenge in natural language processing is accurately finding important information contained in free-text, which lets us extract it into databases and combine it with other data in order to make more intelligent decisions and new discoveries," Lease said.

"We've been using crowdsourcing to annotate medical and news articles at scale so that our intelligent systems will be able to more accurately find the key information contained in each article."

They were able to use that crowdsourced data to train a neural network to predict the names of things, and extract useful information from texts that aren't annotated at all.

In the second paper, they showed how to weight different linguistic resources so that the automatic text classification is better. "Neural network models have tons of parameters and need lots of data to fit them," said Lease.

In testing on both biomedical searches and movie reviews, the system delivered consistently better results than methods that didn't involve weighting the data.

"We had this idea that if you could somehow reason about some words being related to other words a priori, then instead of having to have a parameter for each one of those words separately, you could tie together the parameters across multiple words and in that way need less data to learn the model," said Lease.
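Lease's parameter-tying idea can be illustrated with a toy model. This is a hedged sketch, not the team's actual method: the word groups and weights below are invented, and a real system would learn the tied weights from data rather than hard-code them.

```python
# Toy illustration of parameter tying: words known a priori to be related
# share one weight, so fewer parameters must be fit from limited data.

WORD_GROUPS = {
    "excellent": "positive", "great": "positive", "superb": "positive",
    "awful": "negative", "terrible": "negative", "dreadful": "negative",
}

def score(text: str, weights: dict) -> float:
    """Sum the (tied) weights of the words in a text."""
    total = 0.0
    for word in text.lower().split():
        group = WORD_GROUPS.get(word, word)  # untied words keep their own slot
        total += weights.get(group, 0.0)
    return total

# Two tied parameters cover six words; an untied model would need six,
# and would need examples of every one of them to learn anything.
weights = {"positive": 1.0, "negative": -1.0}
```

The payoff is exactly what the quote describes: evidence about "great" also informs the shared weight used for "superb," so the model needs less data to fit.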

He added: "Industry is great at looking at near-term things, but they don't have the same freedom as academic researchers to pursue research ideas that are higher risk but could be more transformative in the long-term."

Read the original post:

How artificial intelligence can help deliver better search results - TechRadar

Artificial Intelligence: A Journey to Deep Space – insideHPC

In this sponsored post, Ramnath Sai Sagar, Marketing Manager at Mellanox Technologies, explores how recent advancements in Artificial Intelligence, especially deep learning, are set to make an impact in the field of astronomy and astrophysics.

Ramnath Sai Sagar, Marketing Manager at Mellanox Technologies

Since the dawn of the space age, unmanned spacecraft have flown blind, with little to no ability to make autonomous decisions based on their environment. That, however, changed in the early 2000s, when NASA started working on leveraging Artificial Intelligence (AI) and laying the foundation that would help astronauts and astronomers work more efficiently in space. In fact, just last month, NASA's Jet Propulsion Laboratory described how AI will govern the behavior of space probes.

Recent advancements in Artificial Intelligence, especially Deep Learning (a subfield of AI), are set to make a deeper impact in the field of astronomy and astrophysics: navigating the unknown terrain of Mars, analyzing petabytes of data generated by the Square Kilometre Array, and finding Earth-like planets in our messy galaxy. AI is already revolutionizing our lives here on Earth by building smarter and more autonomous cars, helping us find solutions to climate change, revolutionizing healthcare and much more. Mellanox is proud to be working closely with leading companies and research organizations to make advancements in the fields of Artificial Intelligence and astronomy.

AI: The Next Industrial Revolution

Coined in 1956 by Dartmouth Assistant Professor John McCarthy, the term AI predates the Race to Space, but early systems could only deliver rudimentary displays of intelligence in specific contexts. Progress was limited by the complexity of the algorithms needed to tackle real-world issues, many of which were beyond a mere human's ability to execute. This, however, changed in the past decade.

As a result, AI now presents one of the most exciting and potentially transformative opportunities for mankind. In fact, in some quarters it is being heralded as the next industrial revolution:

"The last 10 years have been about building a world that is mobile-first. In the next 10 years, we will shift to a world that is AI-first." Sundar Pichai, CEO of Google, October 2016

AI for the Messy Galaxy

While humanity has made great strides in exploring the observable universe, we need to rely on intelligent robots to explore where we cannot humanly go. This is because our galaxy, the Milky Way, is one messy place, filled with cosmic dust from stars, comets, and more, concealing the very things scientists want to study. That said, there are three major challenges in leveraging AI in the future of space exploration. Firstly, the probes will have to be able to learn about and adapt to unknown environments, including responding to thick layers of gas in a planet's atmosphere, extreme temperatures or unplanned-for fluctuations in gravity.

Secondly, when a probe falls outside communication range, it will have to figure out when and how to return the data collected during the time the signal was lost. Finally, given the vast distances in space, it could take several generations before a probe reaches its destination; it will therefore need to be flexible enough to adapt to any new discoveries and innovations we make here on Earth. Solving these problems will require training AI models on petabytes of captured data using supercomputers.

The benefits of using AI to control space-exploring robots are already being realized by missions that are currently underway. For example, Opportunity, the Mars Exploration Rover launched back in 2003, has an AI driving system called Autonav that allows it to explore the surface of Mars. In addition, Autonomous Exploration for Gathering Increased Science (AEGIS) has been used by the NASA Mars rover Curiosity since May to select which features of Mars are particularly interesting and photograph them.

Image captured by the AEGIS-enabled ChemCam on Curiosity.

But Mars is by no means the final destination, and the exploration of more challenging destinations will require even more advanced AI. For example, exploring the subsurface ocean of the Jovian moon Europa, in the hope of finding alien life, will require bypassing a thick (~10 km) ice crust. Controlling this exploration would be severely limited without advanced autonomy.

Artificial Intelligence Needs an Intelligent Network

Since Mellanox's early days, we have been working closely with NASA and many research labs to help solve the challenges of scientific computing, whether it's the aerodynamic simulation of a jet propulsion engine or monitoring the universe in unprecedented detail. In addition, over the last few years, Mellanox has also enabled pioneers in the field of AI, including Baidu for its advancements in autonomous cars and Yahoo for image recognition. The applications of autonomous driving and object recognition go far beyond the limits of Earth, and Mellanox is proud to be working closely with several research organizations and companies, helping them achieve technological breakthroughs in the fields of astronomy and astrophysics.

Forty-eight years ago, Neil Armstrong said, "That's one small step for man, one giant leap for mankind," when he became the first human to set foot on the surface of the moon. The next giant leap for mankind will come from the small step of a robot, powered by AI and Mellanox.

Ramnath Sai Sagar is Marketing Manager at Mellanox Technologies. This post originally ran as part of the Mellanox Technologies Interconnected Planet blog series.

Read more from the original source:

Artificial Intelligence: A Journey to Deep Space - insideHPC