Emojis Are Everywhere, But For How Long? Artificial Intelligence Could Soon Replace Our Smiley Face Friends – Newsweek

Forget Donald Trump. Let's talk about something truly dim and oafish: emoji.

The world is in the middle of a disturbing emoji-gasm. You can go see The Emoji Movie and sit through a plot as nuanced and complex as an old episode of Mister Rogers' Neighborhood. (Don't miss esteemed Shakespearean actor Patrick Stewart getting to be the voice of Poop.) July also brought us World Emoji Day. To mark the occasion, Apple trumpeted its upcoming release of new emoji, a milestone for society that might only be topped by a new shape of marshmallow in Lucky Charms. Microsoft, always an innovator in artificial intelligence, announced a version of its SwiftKey phone keyboard that will predict which emoji you should use based on what you're typing. Just one more reason to be scared of AI.

Billions of emoji fly around the planet every day, those tiny cartoons of faces and things that supposedly let us express ourselves in ways words can't, unless you know a lot of words. Emoji are such a rage, they have to be governed by a global nonprofit called the Unicode Consortium, kind of like the G-20 for smiley faces. Full members include companies such as Apple, Google, Huawei, SAP and IBM. The group has officially sanctioned 2,666 emoji that can be used across any technology platform. Obviously, the people who sit on the Unicode board do important work. This is why the middle finger emoji you type on your iPhone can look the same on an SAP-generated corporate financial report.

Emoji are displayed on the Touch Bar on a new Apple MacBook Pro laptop during a product launch event on October 27, 2016 in Cupertino, California. Stephen Lam/Getty

Maybe I don't get emoji because I'm a guy. At least that's what Cosmopolitan suggests in a story headlined "Why Your Boyfriend Hates Emoji: Don't Blame Him, He Can't Help It." The story explains: "Straight guys aren't conditioned to flash bashful smiles. They don't do cute winks. They don't make a cute kissy face." Then again, the article's male writer might not be the most enlightened about gender roles in the 21st century. Another Cosmo story by the same person is headlined "13 Things Guys Secretly Want to Do With Your Boobs."

Still, serious academics seem to think emoji are serious. (Oh, and I consider the word emoji to be both singular and plural. The kind of people who say "emojis" are the kind of people who say "shrimps.") Researchers from the University of Michigan and Peking University analyzed 427 million emoji-laden messages from 212 countries to understand how emoji use differs across the globe. Those passionate French are the heaviest emoji users. Mexicans send the most negative emoji, yet another justification for keeping them behind a wall. Or you can read "The Semiotics of Emoji," by Marcel Danesi, an anthropologist at the University of Toronto. "The emoji code harbors within it many implications for the future of writing, literacy, and even human consciousness," he writes. Whoa, dude! Someday, we might think in emoji! Hold on while I fire up my Pax and let my mind be blown.

Much of the emoji trend can be blamed on the Japanese, fervent purveyors of creepy-cute characters like Hello Kitty and Pikachu. In the 1990s, when Japan was the smartest player in electronics, NTT DoCoMo introduced the first sort-of-smartphone service, called i-mode. Shigetaka Kurita, part of the i-mode team, recalled being disappointed by weather reports that just sent the word "fine" to his phone instead of showing a smiling, shining sun like he saw on TV. That gave him the idea of creating tiny symbols for i-mode. The first batch of 176 was inspired by facial expressions, street signs and symbols used in manga. The word emoji comes from a mashup of the Japanese words for "picture" and "character."

The rest of the blame for this trend falls on Apple. After introducing the iPhone in 2007, Apple wanted to break into the Japanese market, where users had by then grown accustomed to emoji. So it had to include emoji on the iPhone. That led to people in other countries finding and using the emoji on their iPhones, spreading these things like lice. As emoji got more popular, users wanted more kinds for all kinds of devices. Companies such as Apple and Google keep creating new emoji and proposing them to the Unicode Consortium, which is how we've gotten so many odd emoji, like a roller coaster, cactus, pickax and the eggplant, which, if you don't know your emoji, you shouldn't send to your mother.

The question now is: What does emoji-mania mean? There are those, like Danesi, who believe we're inventing a new language based on pictograms, something like Chinese, except with no spoken version of the symbols. Generations from now, people will ride in driverless flying Ubers and communicate with one another in nothing but emoji. Novels will be written in emoji. (An engineer, Fred Benenson, already translated Moby-Dick into emoji. "Call me Ishmael" is a phone, a man's face, a sailboat, a whale and a hand doing an OK sign.)

That vision of the future, though, ignores an important trend. As Amazon's Alexa and similar services are showing, AI software is going to get really good at communicating with us by voice. We're going to stop relying so much on typing with our thumbs and looking at screens. We'll converse with the technology and one another. Then, the fact that you can't speak in emoji might actually be the end of the damn things. In another decade, we could look back at emoji as a peculiar artifact of an era, like "10-4, good buddy" chatter during the 1970s citizens band radio craze.

Then again, emoji might be another sign of the growing anti-intellectual, anti-science movement in America. Maybe emoji are, in fact, where language and thinking are heading: away from the precision of words and toward the primitive grunts of cartoon images. The nation has already elected a president who writes only in tweets. If he wins another term, he might go another level lower, thrilling supporters by communicating his foreign policy position in nothing but a Russian flag, hearts and an eggplant.

Artificial intelligence system makes its own language, researchers pull the plug – WCVB Boston

If we're going to create software that can think and speak for itself, we should at least know what it's saying. Right?

That was the conclusion reached by Facebook researchers who recently developed sophisticated negotiation software that started off speaking English. Two artificial intelligence agents, however, began conversing in their own shorthand that appeared to be gibberish but was perfectly coherent to the agents themselves.

A sample of their conversation:

Bob: I can can I I everything else.

Alice: Balls have zero to me to me to me to me to me to me to me to me to.

Dhruv Batra, a Georgia Tech researcher working at Facebook's AI Research lab (FAIR), told Fast Co. Design "there was no reward" for the agents to stick to English as we know it, and the phenomenon has occurred multiple times before. The invented shorthand is more efficient for the bots, but it becomes difficult for developers to improve and work with the software.

"Agents will drift off understandable language and invent codewords for themselves," Batra said. "Like if I say 'the' five times, you interpret that to mean I want five copies of this item. This isn't so different from the way communities of humans create shorthands."

Convenient as it may have been for the bots, Facebook decided to require the AI to speak in understandable English.

"Our interest was having bots who could talk to people," FAIR scientist Mike Lewis said.

In a June 14 post describing the project, FAIR researchers said the project "represents an important step for the research community and bot developers toward creating chatbots that can reason, converse, and negotiate, all key steps in building a personalized digital assistant."

Disney makes artificial intelligence a group experience – YourStory.com

Have you ever wanted to sit and drift through a magical world? Disney Research has developed a Magic Bench platform that actualises this dream by combining augmented reality (AR) with a mixed reality experience.

In this platform, wearing a head-mounted display or using a handheld device is not required. Instead, the surroundings are instrumented rather than the individual, allowing people to share the magical experience as a group. Moshe Mahler, Principal Digital Artist at Disney Research, said,

"This platform creates a multi-sensory immersive experience in which a group can interact directly with an animated character. Our mantra for this project was: hear a character coming, see them enter the space, and feel them sit next to you."

The Magic Bench shows people their mirrored images on a large screen in front of them, creating a third-person point of view. In a paper to be presented at the SIGGRAPH 2017 conference in Los Angeles on July 30, researchers said,

The scene is reconstructed using a depth sensor, allowing the participants to actually occupy the same 3D space as a computer-generated character or object, rather than superimposing one video feed onto another.

According to the researchers, a colour camera and depth sensor were used to create a real-time, HD-video-textured 3D reconstruction of the bench, surroundings, and participants. Mahler explained,

The bench itself plays a critical role. Not only does it contain haptic actuators, but it constrains several issues for us in an elegant way. We know the location and the number of participants, and can infer their gaze. It creates a stage with a foreground and a background, with the seated participants in the middle ground.

"It even serves as a controller; the mixed reality experience doesn't begin until someone sits down, and different formations of people seated create different types of experiences," he added.

Artificial Intelligence Is Stuck. Here’s How to Move It Forward. – New York Times

To get computers to think like humans, we need a new A.I. paradigm, one that places top-down and bottom-up knowledge on equal footing. Bottom-up knowledge is the kind of raw information we get directly from our senses, like patterns of light falling on our retina. Top-down knowledge comprises cognitive models of the world and how it works.

Deep learning is very good at bottom-up knowledge, like discerning which patterns of pixels correspond to golden retrievers as opposed to Labradors. But it is no use when it comes to top-down knowledge. If my daughter sees her reflection in a bowl of water, she knows the image is illusory; she knows she is not actually in the bowl. To a deep-learning system, though, there is no difference between the reflection and the real thing, because the system lacks a theory of the world and how it works. Integrating that sort of knowledge of the world may be the next great hurdle in A.I., a prerequisite to grander projects like using A.I. to advance medicine and scientific understanding.

I fear, however, that neither of our two current approaches to funding A.I. research (small research labs in the academy and significantly larger labs in private industry) is poised to succeed. I say this as someone who has experience with both models, having worked on A.I. both as an academic researcher and as the founder of a start-up company, Geometric Intelligence, which was recently acquired by Uber.

Academic labs are too small. Take the development of automated machine reading, which is a key to building any truly intelligent system. Too many separate components are needed for any one lab to tackle the problem. A full solution will incorporate advances in natural language processing (e.g., parsing sentences into words and phrases), knowledge representation (e.g., integrating the content of sentences with other sources of knowledge) and inference (reconstructing what is implied but not written). Each of those problems represents a lifetime of work for any single university lab.

Corporate labs like those of Google and Facebook have the resources to tackle big questions, but in a world of quarterly reports and bottom lines, they tend to concentrate on narrow problems like optimizing advertisement placement or automatically screening videos for offensive content. There is nothing wrong with such research, but it is unlikely to lead to major breakthroughs. Even Google Translate, which pulls off the neat trick of approximating translations by statistically associating sentences across languages, doesn't understand a word of what it is translating.

I look with envy at my peers in high-energy physics, and in particular at CERN, the European Organization for Nuclear Research, a huge, international collaboration with thousands of scientists and billions of dollars of funding. They pursue ambitious, tightly defined projects (like using the Large Hadron Collider to discover the Higgs boson) and share their results with the world, rather than restricting them to a single country or corporation. Even the largest open effort at A.I., OpenAI, which has about 50 staff members and is sponsored in part by Elon Musk, is tiny by comparison.

An international A.I. mission focused on teaching machines to read could genuinely change the world for the better, all the more so if it made A.I. a public good rather than the property of a privileged few.

Gary Marcus is a professor of psychology and neural science at New York University.

Artificial intelligence can help fight deforestation in Congo: researchers – Reuters

LONDON (Thomson Reuters Foundation) - A new technique using artificial intelligence to predict where deforestation is most likely to occur could help the Democratic Republic of Congo (DRC) preserve its shrinking rainforest and cut carbon emissions, researchers have said.

Congo's rainforest, the world's second-largest after the Amazon, is under pressure from farms, mines, logging and infrastructure development, scientists say.

Protecting forests is widely seen as one of the cheapest and most effective ways to reduce the emissions driving global warming.

But conservation efforts in DRC have suffered from a lack of precise data on which areas of the country's vast territory are most at risk of losing their pristine vegetation, said Thomas Maschler, a researcher at the World Resources Institute (WRI).

"We don't have fine-grain information on what is actually happening on the ground," he told the Thomson Reuters Foundation.

To address the problem Maschler and other scientists at the Washington-based WRI used a computer algorithm based on machine learning, a type of artificial intelligence.

The computer was fed inputs, including satellite-derived data, detailing how the landscape in a number of regions, accounting for almost a fifth of the country, had changed between 2000 and 2014.

The program was asked to use the information to analyze links between deforestation and the factors driving it, such as proximity to roads or settlements, and to produce a detailed map forecasting future losses.
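
The article does not name WRI's exact model, so the following is only a minimal sketch of the kind of pipeline it describes: a classifier (here a random forest, an assumption) learns the link between past forest loss and driver features such as distance to roads or settlements, then scores new map cells to produce a risk map. All data, feature names and thresholds below are invented for illustration.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(42)

    # Toy training data: one row per map cell observed from 2000 to 2014.
    # Columns: distance to nearest road (km), distance to settlement (km).
    X_train = rng.uniform(0, 50, size=(1000, 2))
    # Toy labels: cells near roads and settlements were more often cleared.
    y_train = (X_train.sum(axis=1) + rng.normal(0, 10, 1000) < 30).astype(int)

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    # Score unseen cells; the per-cell probabilities form the forecast map.
    X_new = np.array([[2.0, 5.0], [40.0, 45.0]])  # near road vs. remote
    print(model.predict_proba(X_new)[:, 1])       # deforestation risk per cell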

Overall the application predicted that woods covering an area roughly the size of Luxembourg would be cut down by 2025 - releasing 205 million metric tons of carbon dioxide (CO2) into the atmosphere.

The study improved on earlier predictions that could only forecast average deforestation levels in DRC over large swathes of land, said Maschler.

"Now, we can say: 'actually the corridor along the road between these two villages is at risk'," Maschler said by phone late on Thursday.

The analysis will allow conservation groups to better decide where to focus their efforts and help the government shape its land use and climate change policy, said scientist Elizabeth Goldman, who co-authored the research.

The DRC has pledged to restore 3 million hectares (11,583 square miles) of forest to reduce carbon emissions under the 2015 Paris Agreement, she said.

But Goldman said those benefits would be outweighed more than six times over by simply cutting predicted forest losses by 10 percent.

Artificial intelligence ethics the same as other new technology – Crux: Covering all things Catholic

[Editor's note: Brian Patrick Green is Assistant Director of Campus Ethics Programs at the Markkula Center for Applied Ethics and faculty in the School of Engineering at Santa Clara University. He has a strong interest in the dialogue between science, theology, technology, and ethics. He has written and talked on genetic anthropology, the cognitive science of the virtues, astrobiology and ethics, cultural evolution and Catholic tradition, medical ethics, Catholic moral theology, Catholic natural law ethics, transhumanism, and many other topics. He blogs at The Moral Mindfield and many of his writings are available at his Academia.edu profile. He spoke to Charles Camosy about the ethical challenges posed by advances in artificial intelligence.]

Camosy: One can't follow the news these days without hearing about artificial intelligence, but not everyone may know precisely what it is. What is AI?

Artificial intelligence, or AI, can be thought of as the quest to construct intelligent systems that act similarly to or imitate human intelligence. AI thereby serves human purposes by performing tasks which would otherwise be fulfilled by human labor without needing a human to actually perform the task.

For example, one form of AI is machine learning, which involves computer algorithms (mathematical formulas in code) being trained to solve, under human supervision, specific problems, such as how to understand speech or how to drive a vehicle. Often AI algorithms are developed to perform tasks which can be very easy for humans, such as speech or driving, but which are very difficult for computers. However, some kinds of AI are designed to perform tasks which are difficult or impossible for humans, such as finding patterns in enormous sets of data.

AI is currently a very hyped technology and expectations may be unrealistic, but it does have tremendous promise and we won't know its true potential until we explore it more fully.

What are some of the most important reasons AI is being pursued so energetically?

AI gives us the power to solve problems more efficiently and effectively. Some of the earliest computers, like the ENIAC, were simply programmable calculators, designed to perform in seconds calculations that took humans hours of hard mental work. No one would now consider a calculator to be an AI, but in a sense they are, since they replace human intelligence at solving math problems.

Just as a calculator is more efficient at math than a human, various forms of AI might be better than humans at other tasks. For example, most car accidents are caused by human error; what if driving could be automated and human error thus removed? Tens of thousands of lives might be saved every year, and huge sums of money saved in healthcare costs and property damage averted.

AI may also give us the ability to solve other types of problems that have until now either been difficult or impossible to solve. For example, as mentioned above, very large data sets may contain patterns that no human would be capable of noticing. But computers can be programmed to notice those patterns.

Altogether, AI is being pursued because it will offer benefits to humanity, and corporations are interested in that because if the benefits are great enough then people will pay to have them.

What kinds of problems might AI solve? What sorts of problems might it raise?

We do not yet know all the types of problems that we might be able to hand over to AI for solutions. For example, currently, machine learning is involved in recommendation engines that tell us what products we might want to buy, or what advertisements might be most influential upon us. Machine learning can also act much more quickly than humans and so is excellent for responding to cyber attacks or fraudulent financial transactions.

Moving into the future, AI might be able to better personalize education to individual students, just as adaptive testing evaluates students today. AI might help figure out how to increase energy efficiency and thus save money and protect the environment. It might increase efficiency and prediction in healthcare, improving health while saving money. Perhaps AI could even figure out how to improve law and government, or improve moral education. For every problem that needs a solution, AI might help us find it.

At the same time, for every good use of AI, an evil use also exists. AI could be used for computer hacking and warfare, perhaps yielding untold misery. It could be used to trick people and defraud them. It could be used to wrongly morally educate people, inculcating vice instead of virtue. It could be used to explore and exploit people's worst fears so that totalitarian governments could oppress their people in ways beyond what humans have yet experienced.

Those are as-yet theoretical dangers, but two dangers (at least) are certain. First, AI requires huge computing power, so it will require enormous energy resources that may contribute to environmental degradation. Second, AI will undoubtedly contribute to social inequality and enriching the rich, while at the same time causing mass unemployment.

Could robots with AI ever be considered self-conscious? A kind of non-human person?

This is a subject of debate and may never clearly be answered. It is hard enough to establish the self-consciousness of other living creatures on Earth, so a much more alien entity like an intelligent artifact would be even more difficult to understand and evaluate. Establishing the self-consciousness of non-biological intelligent artifacts may not happen any time soon.

What almost certainly will happen in the next decade or so is that people will try to make AIs that can fool us into thinking that they are self-conscious. The Turing Test, which has now achieved near-mythological status, is based on the idea that someday a computer will be able to fool a human into believing it is another human; passing that test is a goal of AI developers.

When we are finally unable to distinguish a human person from an intelligent artifact, should that change how we think of and treat the artifact? This is a very difficult question, because in one sense it should and in another it shouldn't. It should, because if we dismiss the person-like AI as merely simulating personhood, then perhaps we are training ourselves towards callousness, or even wrongly dismissing something that ought to be treated as a person; with a really strong imitation, we could never know whether it had somehow attained self-consciousness or not.

On the other hand, I think there are good reasons to assume that such an artefactual person simply is not a self-conscious person precisely because it is designed as an imitation. Simulations are not the real thing. It is not alive, it would not metabolize, it probably could be turned on and off and still work the same as any computer, and so on.

In the end, we have very little ability to define what life and mind are in a precise and meaningful sense, so trying to imitate those traits in artifacts, when we don't really know what they are, will be a confusing and problematic endeavor.

Speaking specifically as a Catholic moral theologian, are there well-grounded moral worries about the development of AI?

The greatest worry for AI, I think, is not that it will become sentient and then try to kill us (as in various science fiction movies), or raise questions of personhood and human uniqueness (whether we should baptize an AI won't be a question just yet), but rather whether this very powerful technology will be used by humans for good or for evil.

Right now machine learning is focused on making money (which can itself be morally questionable), but other applications are growing. For example, if a nation runs a military simulation which tells them to use barbaric tactics as the most efficient way to win a war, then it will become tempting for them to use barbaric tactics, as the AI instructed. In fact it might seem illogical to not do that, as it would be less efficient. But as human beings, we should not be so much thinking about efficiency as morality. Doing the right thing is sometimes inefficient (whatever efficiency might mean in a certain context). Respecting human dignity is sometimes inefficient. And yet we should do the right thing and respect human dignity anyway, because those moral values are higher than mere efficiency.

As our tools make us capable of doing more and more things faster and faster we need to pause and ask ourselves if the things we want to do are actually good.

If our desires are evil, then efficiently achieving them will cause immense harm, perhaps up to and including the extinction of humanity (for example, to recall the movie War Games, if we decide to play the game of nuclear war, or biological, or nanotechnological, or another kind of warfare). Short of extinction, malicious use of AI could cause immense harm (e.g. overloading the power grid to cause months-long, nation-sized blackouts, or causing all self-driving cars to crash simultaneously). Mere accidental AI errors can also cause vast harm; for example, if a machine learning algorithm is fed racially biased data, then it will give racially biased results (as has already happened).

The tradition of the Church is that technology should always be judged by morality. Pure efficiency is never the only priority; the priorities should always be loving God and loving neighbor. Insofar as AI might facilitate that (reminding us to pray, or helping reduce poverty), then it is a good thing and should be pursued with zeal. Insofar as AI facilitates the opposite (distracting us from God, or exploiting others), then it should be considered warily and carefully regulated or even banned. Nuclear weapons should probably never be under AI control, for example; such a use of AI should be banned.

Ultimately, AI gives us just what all technology does: better tools for achieving what we want. The deeper question then becomes "what do we want?" and even more so "what should we want?" If we want evil, then evil we shall have, with great efficiency and abundance. If instead we want goodness, then through diligent pursuit we might be able to achieve it. As in Deuteronomy 30, God has laid before us life and death, blessings and curses. We should choose life, if we want to live.

Artificial Intelligence Develops Its Own Language – IGN

We haven't quite reached the terrifying sci-fi hellscape described by the Terminator franchise, but researchers at Facebook have brought us just a bit closer to the age of the machines. Recently, they pulled the plug on an artificial intelligence system after it developed its own language.

The AI in question was actually designed to maximize efficiency in language, but according to Fast Co. Design, the researchers forgot to add a crucial rule to its programming: the language had to be English. So the two AI agents carried on communicating as efficiently as their programming would allow, putting the conversation between them outside the understanding of humans.

"Agents will drift off understandable language and invent codewords for themselves," Georgia Tech research scientist Dhruv Batra said. This isn't anything new, either. It's something that keeps cropping up when researchers experiment with this type of AI.

The purpose of these particular Facebook AI agents is to communicate in English, so programmers reworked the code to get the AI back on track. But if AI is left to its own devices, Fast Co. Design said, it eventually creates a language all its own, one that can't be understood by human beings.

Now is the perfect time to prepare yourself for the end of humanity's reign over Earth by watching the new 4K Blu-ray of Terminator 2. It seems less a blockbuster action film from the '90s and more a dark foretelling of our grim future under the emotionless rule of the machines. Regardless of our impending doom, it's a great movie.

Artificial Intelligence-enabled Cloud solutions set to win the race: IBM India – Economic Times

NEW DELHI: When it comes to delivering intelligent Cloud experience, robust artificial intelligence (AI)-driven solutions are going to decide who is better equipped to provide enterprises with extended capabilities, says a key IBM executive.

Among all future technologies, AI has been hailed as the next big thing and is steadily becoming the driving force behind tech innovations and existing product lines across industries -- going beyond just being part of Internet of Things (IoT)-enabled home appliances and smartphones.

Market research firm Tractica forecasts that the revenue generated from the direct and indirect application of AI software will grow from $1.38 billion in 2016 to $59.75 billion by 2025. According to IDC, the cognitive systems and AI market (including hardware and services) will grow to $47 billion in 2020.

To make sense of data on Cloud, data miners need to decode and align it in order to deliver enhanced experiences to customers and they can't do this mammoth task alone.

Here is where AI -- their "virtual colleagues" -- steps in to help them deliver "enterprise-grade" Cloud that scales to the requirements of the market and benefits all industries.

"When I say an 'enterprise-grade' Cloud, I mean that we have a global network of data centres. We have about 252 data centres worldwide, offering a full range of services that includes virtualised infrastructure," Vikas Arora, Country Manager, Cloud Business, IBM India and South Asia, told IANS.

Present in India since 1951, IBM India has expanded its operations with regional headquarters in Bengaluru and offices across 20 cities.

IBM has research centres in Delhi and Bengaluru; software labs in Bengaluru, Gurgaon, Pune, Hyderabad and Mumbai; India Systems Development Labs (ISDL) in Bengaluru, Pune and Hyderabad; a Cloud data centre in Chennai; and eight delivery centres across the country.

With over 55 Cloud centres in 19 countries, IBM Cloud is the leader in Enterprise Cloud. IBM's $14.6 billion cloud business grew 35 per cent in the first quarter this year.

With a market capitalisation of over $135 billion, IBM, which traditionally has been manufacturing and selling computer hardware and software, has now forayed into areas like AI and cognitive analytics.

The company now provides tools for data management that are able to analyse the data -- be it on public or private Cloud -- so as to translate it into useful insights.

"What makes us different is that our Cloud is built for the cognitive era. There are many robust artificial intelligence capabilities with us, led by 'IBM Watson'," Arora told IANS.

IBM Watson is an intelligent cognitive system. With it, people can analyse and interpret data, including unstructured text, images, audio and video, and develop personalised solutions.

Watson now has a new cognitive assistant, the "MaaS360 Advisor" that leverages its capabilities to help IT professionals effectively manage and protect networks of smartphones, tablets, laptops, IoT devices and other endpoints.

"We believe that at some point, everyone would be able to provide Cloud; but I think the solutions that are going to win are those that are able to provide customers with extended capabilities, which they are going to need for the future and AI is a big part of that," Arora noted.

When it comes to the Indian Cloud ecosystem, CTOs and CEOs want to control their data on-premises.

"I think it's not as much about control. It is basically about trying to get the most out of whatever investments have already been made. So we don't see control other than, of course, in industries that are heavily regulated where they need control," Arora explained.

More than control, added the IBM executive, it's efficiency and return-on-investments that drive large enterprises -- but it is different for mid-sized organisations.

"For them, it's more about reducing the headache of handling an IT department, building an infrastructure and having someone managing it. Mid-sized organisations tend to struggle on this point as this isn't their core business," Arora said.

When it comes to working with the government in the country, IBM sees a positive trend emerging. "Today, government departments have a clear set of guidelines as to what a Cloud environment should deliver in terms of capabilities, operational management, security and sovereignty," the IBM executive maintained.

Among Small and Medium Enterprises (SMEs), new IT spend is giving Cloud a big push.

"SMEs are not hesitant any longer to go for New-Age IT initiatives because they are not relying on a hardware vendor or a small system integrator and aim to have a world-class IT environment in Cloud, without having to have a particular IT department around it," Arora told IANS.

The rise of artificial intelligence: What you should and shouldn’t be worried about – Waterloo Cedar Falls Courier

SAN FRANCISCO (AP) - Tech titans Mark Zuckerberg and Elon Musk recently slugged it out online over the possible threat artificial intelligence might one day pose to the human race, although you could be forgiven if you don't see why this seems like a pressing question.

Thanks to AI, computers are learning to do a variety of tasks that have long eluded them: everything from driving cars to detecting cancerous skin lesions to writing news stories. But Musk, the founder of Tesla Motors and SpaceX, worries that AI systems could soon surpass humans, potentially leading to our deliberate (or inadvertent) extinction.

Two weeks ago, Musk warned U.S. governors to get educated and start considering ways to regulate AI in order to ward off the threat. "Once there is awareness, people will be extremely afraid," he said at the time.

Zuckerberg, the founder and CEO of Facebook, took exception. In a Facebook Live feed recorded Saturday in front of his barbecue smoker, Zuckerberg hit back at Musk, saying people who "drum up these doomsday scenarios" are "pretty irresponsible." On Tuesday, Musk slammed back on Twitter, writing that "I've talked to Mark about this. His understanding of the subject is limited."

Here's a look at what's behind this high-tech flare-up and what you should and shouldn't be worried about.

A view of the campus of Dartmouth College, Hanover, New Hampshire, Fall 1966. (AP Photo)

Back in 1956, scholars gathered at Dartmouth College to begin considering how to build computers that could improve themselves and take on problems that only humans could handle. That's still a workable definition of artificial intelligence.

An initial burst of enthusiasm at the time, however, devolved into an "AI winter" lasting many decades as early efforts largely failed to create machines that could think and learn or even listen, see or speak.

That started changing five years ago. In 2012, a team led by Geoffrey Hinton at the University of Toronto proved that a system using a brain-like neural network could "learn" to recognize images. That same year, a team at Google led by Andrew Ng taught a computer system to recognize cats in YouTube videos without ever being taught what a cat was.

Since then, computers have made enormous strides in vision, speech and complex game analysis. One AI system recently beat the world's top player of the ancient board game Go.

South Korean professional Go player Lee Sedol, right, watches as Google DeepMind's lead programmer Aja Huang, left, puts the Google's artificial intelligence program, AlphaGo's first stone during the final match of the Google DeepMind Challenge Match in Seoul, South Korea, Tuesday, March 15, 2016. A champion Go player scored his first win over a Go-playing computer program on Sunday after losing three straight times in the ancient Chinese board game, saying he finally found weaknesses in the software. (AP Photo/Lee Jin-man)

For a computer to become a "general purpose" AI system, it would need to do more than just one simple task like drive, pick up objects, or predict crop yields. Those are the sorts of tasks to which AI systems are largely limited today.

But they might not be hobbled for too long. According to Stuart Russell, a computer scientist at the University of California at Berkeley, AI systems may reach a turning point when they gain the ability to understand language at the level of a college student. That, he said, is "pretty likely to happen within the next decade."

While that on its own won't produce a robot overlord, it does mean that AI systems could read "everything the human race has ever written in every language," Russell said. That alone would provide them with far more knowledge than any individual human.

The question then is what happens next. One set of futurists believe that such machines could continue learning and expanding their power at an exponential rate, far outstripping humanity in short order. Some dub that potential event a "singularity," a term connoting change far beyond the ability of humans to grasp.

The Waymo driverless car is displayed during a Google event, Tuesday, Dec. 13, 2016, in San Francisco. The self-driving car project that Google started seven years ago has grown into a company called Waymo. The new identity announced Tuesday marks another step in an effort to revolutionize the way people get around. Instead of driving themselves, people will be chauffeured in robot-controlled vehicles if Waymo, automakers and ride-hailing service Uber realize their vision within the next few years. (AP Photo/Eric Risberg)

No one knows if the singularity is simply science fiction or not. In the meantime, however, the rise of AI offers plenty of other issues to deal with.

AI-driven automation is leading to a resurgence of U.S. manufacturing, but not manufacturing jobs. Self-driving vehicles being tested now could ultimately displace many of the almost 4 million professional truck, bus and cab drivers now working in the U.S.

Human biases can also creep into AI systems. A chatbot released by Microsoft called Tay began tweeting offensive and racist remarks after online trolls baited it with what the company called "inappropriate" comments.

Harvard University professor Latanya Sweeney found that searching in Google for names associated with black people more often brought up ads suggesting a criminal arrest. Examples of image-recognition bias abound.

"AI is being created by a very elite few, and they have a particular way of thinking that's not necessarily reflective of society as a whole," says Mariya Yao, chief technology officer of AI consultancy TopBots.

Tesla and SpaceX CEO Elon Musk bows as he shakes hands with Republican Nevada Gov. Brian Sandoval after Musk spoke at the closing plenary session entitled "Introducing the New Chairs Initiative - Ahead" on the third day of the National Governors Association's meeting Saturday, July 15, 2017, in Providence, R.I. (AP Photo/Stephan Savoia)

In his speech to the governors, Musk urged governors to be proactive, rather than reactive, in regulating AI, although he didn't offer many specifics. And when a conservative Republican governor challenged him on the value of regulation, Musk retreated and said he was mostly asking for government to gain more "insight" into potential issues presented by AI.

Of course, the prosaic use of AI will almost certainly challenge existing legal norms and regulations. When a self-driving car causes a fatal accident, or an AI-driven medical system provides an incorrect medical diagnosis, society will need rules in place for determining legal responsibility and liability.

With such immediate challenges ahead, worrying about superintelligent computers "would be a tragic waste of time," said Andrew Moore, dean of the computer science school at Carnegie Mellon University.

That's because machines aren't now capable of thinking out of the box in ways they weren't programmed for, he said. "That is something which no one in the field of AI has got any idea about."

Artificial Intelligence: Apple’s Second Revolutionary Offering – Seeking Alpha

In an earlier article on Augmented Reality, I noted that Apple (NASDAQ:AAPL) faces challenges for growth of its iPhone business, as many worldwide markets have become saturated, and the replacement rate for existing customers has dropped. I noted that Apple has weathered this change by continuing to charge premium prices for its product (against the predictions of many naysayers), and it can do this for two reasons.

1- Its design and build quality is unsurpassed, and

2- It's always on the cutting edge of new technology.

For these reasons, customers feel that there is value in the iconic product.

Number two leads the investor to the question of what comes next.

While the earlier article centered on augmented reality, this one will focus on Artificial Intelligence (AI) and Machine Learning (ML). This is an important topic for the investor, as it is a critical part of the answer to that question.

Most analysts focus on the easily visible aspects of devices, ignoring the deeper innovations because they don't understand them. For example, when Apple stunned the tech world in 2013 by introducing the first 64-bit mobile system on a chip (processor), the A7, many pundits played down the importance of the move. They argued that it made little difference, and listed a variety of reasons. Yet they ignored the really important advantages, particularly the tremendously strengthened encryption features. This paved the way for the enhanced security features that include complete on-device data encryption, Touch ID and Apple Pay.

Apple's foray into AR and now ML are further examples of this. While AR captures the imagination of many people and the new interface has been covered, the less understood Machine Learning interface has been virtually ignored, in spite of the fact that going forward it will be a very important enabling technology. Product differentiation and performance are key to Apple maintaining its position, and thus key to the investor's understanding.

Machine Learning is a type of program that gives a response to input without having been explicitly programmed with the knowledge. Instead, it is trained by being presented with a set of inputs and the desired response. From these, the program learns to judge a new input.

This is different from earlier Knowledge-Based Systems, which were explicitly programmed. For example, in a simple wine program I developed for a class, there was a long list of rules, essentially of the form:

- IF (type = RED) AND (acidity = LOW) THEN respond with XXX

- IF (type = RED) AND (acidity = HIGH) THEN respond with ZZZ

In a ML system, these rules do not exist. Instead, a set of samples is presented and the system learns how to infer the correct responses.
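
As a small sketch of that difference (my illustration, not the author's class project), a decision tree in Python can infer wine rules like those above purely from labeled samples. The feature encoding and labels here are made up:

    from sklearn.tree import DecisionTreeClassifier

    # Encode type (0 = RED, 1 = WHITE) and acidity (0 = LOW, 1 = HIGH).
    samples = [[0, 0], [0, 1], [1, 0], [1, 1]]
    labels = ["XXX", "ZZZ", "YYY", "WWW"]  # desired response per sample

    clf = DecisionTreeClassifier().fit(samples, labels)

    # The trained model now answers new queries; no IF/THEN rule was written.
    print(clf.predict([[0, 1]]))  # -> ['ZZZ'], matching the high-acidity red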

There are a lot of different configurations for such learning systems, many using the Neural Network concept. This is based on the interconnected network of the brain. Here each individual neuron (brain cell) receives a connection from many other neurons, and then in turn connects to many others. As a person experiences new things, the connections between the excited cells get strengthened or facilitated so that a given network is more easily excited in the future if the same or similar input is given.

Computer neural nets work analogously, though obviously digitally. The program defines a set of cells organized into a series of levels. Each cell is influenced by some subset of the others and in turn influences yet other cells, until a final level produces a result. The degree to which the value of one cell changes the value of another cell to which it is connected is specified by the weight of the connection. This is where the magic lies.

During training, when a pattern is presented to it, the strong connections are strengthened (and others possibly weakened). This is repeated for various inputs. Eventually, the system is deemed trained, and the set of connections is saved as a trained model for use in an application. (Some systems allow for continued training after deployment.)
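
To make the weight-adjustment idea concrete, here is a minimal sketch of a tiny two-level network trained in plain Python with NumPy. It illustrates the general technique only (it is not Apple's code); the weights play the role of the connection strengths described above.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy training set: learn XOR, a pattern that needs a hidden level.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(size=(2, 8))  # input-to-hidden connection weights
    W2 = rng.normal(size=(8, 1))  # hidden-to-output connection weights

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(10000):
        # Forward pass: each level is a weighted sum of the level below,
        # pushed through a nonlinearity, like neurons exciting one another.
        h = sigmoid(X @ W1)
        out = sigmoid(h @ W2)

        # Backward pass: nudge every weight to reduce the prediction error.
        err = out - y
        grad_out = err * out * (1 - out)
        grad_h = (grad_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * (h.T @ grad_out)
        W1 -= 0.5 * (X.T @ grad_h)

    print(out.round(2))  # after training, typically close to [[0],[1],[1],[0]]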

(For an interesting anecdote on how this works in the brain, see this story.)

Many people think of AI as some big thing on mainframes, such as Watson by IBM (IBM), which triumphed at Jeopardy, or in research labs at Google (GOOG) (NASDAQ:GOOGL) or Microsoft (MSFT). They think that this is for the big problems of industry.

"Research at Google is at the forefront of innovation in Machine Intelligence, with active research exploring virtually all aspects of machine learning, including deep learning and more classical algorithms. Exploring theory as well as application, much of our work on language, speech, translation, visual processing, ranking and prediction relies on Machine Intelligence. In all of those tasks and many others, we gather large volumes of direct or indirect evidence of relationships of interest, applying learning algorithms to understand and generalize." (Google page)

But this is not the case. ML applications are running on your smartphone and home computer now. Text prediction on your keyboard, facial recognition in your photos (be it in your photos app or in Facebook (FB)), and speech recognition such as Siri and Amazon's (AMZN) Echo all use ML systems to perform these tasks. Many of these are actually sent off to servers in the cloud to do the heavy-lifting computing, because it is indeed heavy lifting; that is, it requires a great deal of compute power. NVidia (NVDA) is surging precisely because of its new Tesla series products on the server end of this industry.

So, what has Apple done?

A few weeks ago, Apple (AAPL) held its Developers Conference (WWDC), opening with the keynote address where Tim Cook and friends introduced new features of their line of products. While many focused on the iPad Pro, the new iOS and Mac OS features or the HomePod speaker, for the long term the real news for the investor is the AR and ML toolkits that were introduced.

Investors may be wondering what, exactly, Core ML does.

The answer is simple: Core ML allows app writers to incorporate an ML model into their app by simply dragging it into the program code window. It also provides a single, simple method to send target data into that model and retrieve an answer.

The purpose of a model is to categorize or provide some other simple answer to a set of data. Input might be one piece of data, such as an image, or several, as a stream of words.

The model is a different story altogether. This is the complicated part.

Apple provides access to a lot of standard models. The programmer can simply select one of these, and plop it into the program. If not, then the programmer, or an AI specialist, would go to one of a number of available ML tools to specify a network and train it. Apple has provided tools to translate these trained models into the format that the Core ML process uses. (Apple has provided its format as open source for other developers to use.)
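
The conversion workflow looks roughly like the following sketch, which uses Apple's open-source coremltools package for Python together with a scikit-learn model. Treat this as illustrative only: exact arguments vary by coremltools version, and the feature names are invented.

    from sklearn.linear_model import LogisticRegression
    import coremltools

    # Train any supported scikit-learn model on toy data...
    X = [[5.1, 3.5], [6.2, 2.9], [4.7, 3.2], [7.0, 3.1]]
    y = [0, 1, 0, 1]
    sk_model = LogisticRegression().fit(X, y)

    # ...then translate it into the .mlmodel format that Core ML consumes.
    coreml_model = coremltools.converters.sklearn.convert(
        sk_model,
        input_features=["sepal_length", "sepal_width"],
        output_feature_names="flower_class")
    coreml_model.save("Flower.mlmodel")  # drag this file into an Xcode project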

The amazing thing is that one can pull a model into their program code, and then write as little as three or four lines of new code to use it. That is, once you have the model, you can create a simple app to use it in literally a matter of minutes. This is a dazzling accomplishment.

An interesting thing is that the programmer's call to the model, to send in data and retrieve the response, is exactly the same no matter what the model. Obviously one needs to send in the correct type of data (image, sound file, text), but the manner of doing so is exactly the same no matter what type of data is assessed or what the inherent structure of the model itself is. This enormously simplifies programming. The presenters continually emphasized that developers should focus on the user experience, not on implementation details.

One of the great things about Core ML is that apps perform all the calculations on the device; nothing is sent to a remote server. Among other benefits, this keeps user data private, avoids network latency, and lets apps keep working without a connection.

One area of interest (at least for the technophile) is the set of benefits that flow from the actual implementation.

Software on a computer (and a smartphone is a computer) is layered, where each layer creates a logical view of the world, but really is no more than a bunch of code using the layer below it. Thus, a developer can call a routine to create a window (sending in a variety of parameters for the size and location, color, etc.), and this will perform the enormous number of operations from the lower levels that are required to open up a graphic display that we recognize as a window. In some cases, the upper layers of abstraction are the same for different devices, in spite of very different real implementations.

The illustration shows Apple's implementation of Core ML and how it sits on top of other layers. In this case, there are ML layers for vision, etc. that sit on top of Core ML itself. But the important thing here is that we can see how Core ML sits on top of Accelerate and Metal Performance Shaders.

Metal is the Apple graphics interface for accelerating graphics performance, which it improves immensely. Shaders are the units that actually perform the calculations in a Graphics Processing Unit (see the GPU section of this post).

One might wonder why ML services would be built on top of graphics processors. As noted in the post on GPUs mentioned above, a graphic (photo, drawing, video frame) consists of thousands or millions of tiny picture elements, or pixels. Editing the frame consists of applying some mathematical operation on each of the pixels, sometimes depending on its neighbors. This means you want to perform the same operation on millions of different data pieces. As I noted earlier, a neural network consists of many cells, each with many connections. One system boasts 650K neurons with 630M connections. Yet the actual adjustment of the weights of the connections is a simple arithmetic operation. So a GPU is actually spectacular at ML processing, performing the same calculation on hundreds or even thousands of cells in parallel. Apple's Metal technology lets the ML programs access the GPU compute cells directly.
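
A tiny NumPy sketch (illustrative only; the sizes are scaled down from the 650K-neuron example) shows the pattern: one level of a network is a single matrix multiply, the identical multiply-accumulate applied across every cell at once, which is exactly the shape of work a GPU's shaders are built for.

    import numpy as np

    rng = np.random.default_rng(1)
    activations = rng.standard_normal((1, 10_000), dtype=np.float32)
    weights = rng.standard_normal((10_000, 1_000), dtype=np.float32)

    # One level's update: every output cell computes the same weighted sum,
    # ten million multiply-accumulates expressed as one parallel operation.
    level_output = np.maximum(0.0, activations @ weights)
    print(level_output.shape)  # (1, 1000)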

The important thing to understand here is that Apple has built the Core ML engines on top of these high-performance technologies. Thus, it comes for free to the app developer. All the hard work of programming an ML engine has been done, fine-tuned, accelerated, and debugged. The importance of this is really hard to convey to the person who does not know the development process. It gives every app developer the benefit of literally scores of programmers working for several years to make their little app effective, correct, and robust.

Finally, there is one last card in Apple's hand, yet to be officially shown. Back in May, Bloomberg reported, citing reliable sources, that Apple is working on a dedicated ML chip, called the Neural Engine.

This makes a lot of sense. A standard GPU is great for doing ML computations, but in the end, it was designed first to handle graphics. The design would probably be quite similar, but totally tailored to the ML tasks. My guess is that this Neural Engine will make its debut on the iPhone 8 that is expected to be released in the fall (along with updated iPhone 7s/Plus). It would be a tantalizing incentive for buyers, a major differentiator for the line. With time, it would become available on all new phones (perhaps not the low end SE). With this chip, I believe Siri would move completely onto the device. It could also be used on Macs.

ML models require a tremendous amount of computation. As such, they consume a great deal of battery power. As new generations of chips have emerged with continually shrinking transistor sizes (thus increasing compute power and efficiency), it has become more realistic to run some models locally. Additionally, the GPUs that Apple has built into its A-series chips have grown at an extraordinary rate. Graphics performance in the new iPad Pro, with its A10X processor, is an astounding 500 times that of the original iPad. According to Ash Hewson of Serif software, the performance is literally four times that of an Intel i7 quad-core desktop PC.

Still, on a portable device, every drop of battery power is precious. So if Apple can save by designing its own specialty chips, then it will be worth it. They have the talent and the capacity.

And there is yet another motivation: there is still a lot of evidence that Apple is working on self-driving car technology. It would be just like them to want to own the process from hardware to software. With their own ML processor, they would be free from worries that some other company would have control of a key technology. (This is why they created the Safari browser.) Metal is a software/hardware interface specification; it relies implicitly on a hardware platform that conforms to its specifications. Having their own Neural Engine chip will assure this, even as they move into self-driving cars.

As an aside, it is interesting to note that the Core ML libraries (including Metal 2) will run on the Mac as well as iOS. Apple is gradually moving to unify the two platforms in many respects.

With the iPhone itself, one can try to predict sales and costs and come up with a guess as to revenue and profit for a given time frame. Both the ML and AR projects have little in the way of applications at the moment, and so their impact on sales is hard to gauge at this time. Still, this is an important investment in the future. I stated above that Core ML is an important enabling technology. The fact is simple: with a huge lead in this arena, Apple's performance in ML tasks will far and away outstrip that of any competitor for many years to come.

At first the most visible will be AR titles, since they tend to be very flashy. But AI titles will slowly begin to gain traction. Other platforms will be left in the dust in terms of performance. (Watch the Serif Affinity Photo demo in the WWDC keynote video, at time 1:40:10, to see just how astoundingly fast the iPad Pro is.)

With these tools, hardware and software, Apple will assure itself of being far and away the leader in basic platform technology. This will allow them to attract new customers and encourage upgrades. Exactly what the investor wants.

Disclosure: I am/we are long IBM, AAPL.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.

China’s Artificial Intelligence Revolution – The Diplomat

On July 20, China's State Council issued the Next Generation Artificial Intelligence Development Plan (新一代人工智能发展规划), which articulates an ambitious agenda for China to lead the world in AI. China intends to pursue a first-mover advantage to become the premier global AI innovation center by 2030. Through this new strategic framework, China will advance a three-in-one agenda in AI: tackling key problems in research and development, pursuing a range of products and applications, and cultivating an AI industry. The Chinese leadership thus seeks to seize a major strategic opportunity to advance its development of AI, potentially surpassing the United States in the process.

This new plan, which will be implemented by a new AI Plan Promotion Office within the Ministry of Science and Technology, outlines Chinas objectives for advances in AI in three stages.

First, by 2020, China's overall progress in the technology and applications of AI should keep pace with the world's advanced level, while its AI industry becomes an important economic growth point. By this time, China hopes to have achieved important progress in next-generation AI technologies, including big data, swarm intelligence, hybrid enhanced intelligence, and autonomous intelligent systems. At that point, China's core AI industry is targeted to exceed 150 billion RMB (over $22 billion), with AI-related fields valued at 1 trillion RMB (nearly $148 billion). Concurrently, China should have advanced in gathering top talent and establishing initial frameworks for laws, regulations, ethics, and policy.

Next, by 2025, China should have achieved major breakthroughs in AI to reach a leading level, with AI becoming a primary driver for China's industrial advances and economic transformation. At that point, China intends to have become a leading player in research and development, while widely using AI in fields ranging from manufacturing to medicine to national defense. China's core AI industry should have surpassed 400 billion RMB (about $59 billion), with AI-related fields exceeding 5 trillion RMB (about $740 billion). In addition, China plans to have achieved progress in the creation of laws and regulations, as well as ethical norms and policies, along with the establishment of mechanisms for AI safety assessment.

Ultimately, by 2030, China intends to have become the world's premier AI innovation center. At that point, China believes it can achieve major breakthroughs in research and development to occupy the commanding heights of AI technology. In addition, AI should have been expanded and its use deepened within multiple domains, including social governance and national defense. By then, China's core AI industry is targeted to exceed 1 trillion RMB ($148 billion), with AI-related fields totaling 10 trillion RMB (nearly $1.48 trillion). To support its continued primacy in AI, China plans to create leading AI innovation and personnel-training bases, while constructing more comprehensive legal, regulatory, ethical, and policy frameworks.

Through this agenda, the Chinese leadership plans to leverage AI to address a range of economic, governance, and societal challenges. Since China's economic growth has started to slow, Beijing hopes that AI can serve as a new engine for future economic development by unleashing a new scientific revolution and industrial transformation. According to a recent report, AI could enable China's economy to expand 26 percent by 2030. Concurrently, AI will be leveraged across governance and society to improve a range of services and systems, including education, healthcare, and even the judiciary. At the same time, the Communist Party of China (CPC) hopes AI will have utility in enhancing the "intelligentization" of social management and protecting social stability, through such techniques as advanced facial recognition and biometric identification.

China recognizes that AI will be critical to its future comprehensive national power and military capabilities. The plan focuses on building critical competencies to enable future innovation, applications, and enterprise, with a focus on open-source platforms and open data. The Chinese government will invest in a range of AI projects, encourage private-sector investment in AI, and establish a national development fund for AI. Critically, the plan will also cultivate high-end talent, recognized as an integral element of national competitiveness in AI. For instance, China intends to improve education in AI and strengthen its talent pool. Concurrently, China will seek to draw upon the world's leading talent, including through recruitment and talent programs, such as the Thousand Talents plan.

This plan acknowledges and seeks to mitigate identified shortcomings in China's current capacity. Although there have been considerable advances in the numbers of papers and patents, the Chinese leadership recognizes gaps relative to more advanced countries, including the lack of major original results and a relative disadvantage in core algorithms and critical components, such as high-end chips. Looking forward, China intends to pursue high-end research and development that could enable paradigm changes in AI, such as brain-inspired AI and quantum-accelerated machine learning. Although China's state-centric approach to industrial policy may have certain disadvantages, this attempt to formulate an integrated, whole-of-nation approach to innovation-driven development could succeed in building upon inherent national advantages, notably China's massive data-resource base and potential talent pool.

While building indigenous capacity, China will seek to coordinate and optimize the use of both domestic and international innovation resources. The plan calls for encouraging cooperation between domestic AI enterprises and leading foreign universities, research institutes, and teams. China will encourage its own AI enterprises to undertake an approach of "going out" to pursue overseas mergers and acquisitions, equity investments, and venture capital, while establishing research and development centers abroad. According to this plan, China will also encourage foreign AI enterprises to establish their own research and development centers in China. Through such measures, China could attempt to leverage foreign advances and expertise while building up an adequate domestic base for innovation. This approach may prove controversial and could provoke further friction against the backdrop of current U.S. debates on the Committee on Foreign Investment in the United States (CFIUS) and recurrent concerns over Chinese investments in sensitive technologies.

Notably, this new plan explicitly highlights an approach of military-civil fusion (or civil-military integration) to ensure that advances in AI can be rapidly leveraged for national defense. Certain next-generation AI technologies that have been prioritized will likely be used to enhance China's future military capabilities. For instance, China intends to pursue advances in big data, human-machine hybrid intelligence, swarm intelligence, and automated decision-making, along with autonomous unmanned systems and intelligent robotics. Accordingly, China seeks to ensure that scientific and technological advances can be readily turned to dual-use applications, while military and civilian innovation resources will be constructed together and shared.

Given the potential disruptive nature of AI, China also recognizes that new challenges could arise for governance, economic security, and social stability. As such, this plan calls for minimizing these risks to ensure the safe, reliable, and controllable development of AI. While formulating legal, regulatory, and ethical frameworks on AI, China will create mechanisms to ensure appropriate safety and security in AI systems. China also plans to build capacity to evaluate and prepare for long-term challenges associated with AI, including through establishing a new AI Strategic Advisory Committee and AI-focused think-tanks. In addition, the plan includes measures to mitigate likely negative externalities associated with AI, such as retraining and redeploying displaced workers. The CPC will also continue to pursue new techniques to bolster its coercive apparatus and thus assure regime security, such as the use of big data and AI to enable sophisticated censorship and surveillance, as well as the new social credit system.

Looking forward, China seeks to take full advantage of the unfolding AI revolution to enhance its national power and competitiveness. Recognizing the strategic importance of this new technology, the Chinese leadership intends to leverage AI in its quest for innovation-driven development, with the aspiration of enabling China to become a global power in science and technology. Concurrently, the CPC will attempt to shape the development of AI in accordance with the objectives and interests of the party-state. However, AI is unlikely to be a panacea for China's economic and societal challenges, and the future trajectory of the plan's implementation remains to be seen. Ultimately, China's AI agenda reflects its ambition to take the lead in the emerging international competition within this critical technological domain.

Elsa Kania is an analyst focused on the Chinese military's strategic thinking on, and advances in, emerging technologies, including unmanned systems, artificial intelligence and quantum technologies. Elsa is also in the process of co-founding a start-up research venture.

Follow this link:

China's Artificial Intelligence Revolution - The Diplomat

The Role of Artificial Intelligence in Intellectual Property – IPWatchdog.com

Artificial Intelligence (AI) has been a technology with promise for decades. The ability to manipulate huge volumes of data quickly and efficiently, identifying patterns and rapidly determining the optimal solution, can be applied to thousands of day-to-day scenarios. Now AI is set to come of age in the era of big data and real-time decisions, where it can provide solutions to age-old issues and challenges.

Consider, as an example, traffic management. The first traffic management system in London was a manually operated, gas-lit traffic signal, which promptly exploded two months after its introduction. Since this inauspicious start, a complex network of road closures, traffic management systems, traffic lights and pedestrian crossings has served to drive increased complexity into travelling in the City. Today traffic travels slower than ever, despite the plethora of new systems added to better manage it.

AI has the potential to change this. It can harvest data on traffic volumes, historical trends and current blockages to quickly calculate the optimal solution for traffic in London. It can do this in near real time, constantly tweaking and managing flow to deliver the best possible outcome.

This is why AI is increasingly the go-to technology for organisations wanting to solve highly complex, data-heavy challenges. Digital retailers are using AI-powered robots to run warehouses. Utilities are using AI to forecast electricity demand. Mobile networks are deploying AI to manage an ever-increasing demand for data. We stand on the threshold of a new age of AI-powered technology.

The Intellectual Property (IP) industry is another market where AI could have a profound effect. In an industry traditionally powered by paper, manual searches and lengthy decision-making processes, AI can be deployed to simplify day-to-day tasks and deliver increased insight from IP data.

IP administrative tasks are among the most time-intensive and risky areas of IP. Law firms and corporate IP departments may, at any time, cover thousands of individual items of IP data, across hundreds of jurisdictions, dealing with thousands of different products. Historically this has been a slow and largely manual process.

Consider a single patent for which a company has sought protection in many different countries. A network of agents, familiar with the specific processes required to gain protection in specific countries, will each help the company achieve its goal. Along the way, hundreds of items of paperwork will be generated, in multiple languages, each with their own challenges and opportunities.

All of this information would currently be assessed manually and then input into an IP management system. Naturally enough, this can easily result in data-processing errors. Now consider this across multiple patents: the opportunities for error are almost limitless. Yet for many companies, IP remains their most valuable asset. A simple error in inputting a renewal date could cost a company an asset worth millions. It is worth noting that the World Intellectual Property Organisation (WIPO) estimates around a quarter of patent information is wrong. The risks are therefore very evident.

In addition, considerable time and cost accrue from the manual labour involved in inputting data. This is activity that, if automated, frees law firms and IP experts to focus on more strategic issues. AI, which is highly adept at processing large sets of data quickly and accurately, can improve both efficiency and accuracy. It also enables law firms and IP professionals to take on a more strategic role within the organisation, generating insight from data to help shape future company performance, whilst leaving the more mundane aspects of IP management to computers.

By automating the submission of data and ensuring that every single item of IP has a unique identifier, correspondence from the various patent offices and agent networks can be sorted simply and made searchable on demand. An AI engine can then be deployed to identify relevant information in correspondence, resulting in faster and more accurate outcomes.
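
As a rough illustration of the kind of engine described here (a toy sketch, not any vendor's actual system), a text classifier can route incoming correspondence to a docket category. The snippet assumes scikit-learn is installed; the letters and labels are invented.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A tiny illustrative training set; a real system would learn from
# thousands of labelled letters from patent offices and agents.
docs = [
    "Notice of allowance for application 12/345,678",
    "Renewal fee due for patent EP1234567 by 31 March",
    "Office action: claims 1-5 rejected under 35 USC 103",
    "Annuity payment reminder, deadline approaching",
]
labels = ["allowance", "renewal", "office_action", "renewal"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(docs, labels)

# New correspondence is routed automatically; low-confidence items can be
# flagged for human review instead of being docketed blind.
print(clf.predict(["Reminder: renewal fee for GB2345678 is due next month"]))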

The number of IP assets globally is growing. According to the WIPO there was a 7.8% growth in patent filings between 2014 and 2015. This upward trend in filings has continued for at least 20 years. Therefore, IP documentation and resources are growing. Finding relevant information in this vast amount of data is becoming more difficult. Historically, searches have been carried out manually, with static search databases being the only support tools.

AI and Machine Learning (ML) can not only automate the process of searching huge databases but also store and use previously collected data to improve the accuracy of future searches. AI can also be used to provide insight into a geographical or vertical market. Consider a company looking to exploit IP in new regions. It may wish to consider the best countries in which to file for protection. Insight into the strengths and weaknesses of markets in certain countries could be cross-referenced with competitive IP data to deliver an instant overview of the most beneficial geographies in which to apply for further protection. Research that would previously have taken months can be managed in minutes by deploying AI in an effective way.
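
The search side can be sketched just as simply. The snippet below, again assuming scikit-learn and using a made-up three-document corpus, ranks documents by cosine similarity to a free-text query; a production system would index millions of records and feed logged searches back into the ranking, which is the learning step described above.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Method for wireless charging of electric vehicles",
    "Battery thermal management system for electric vehicles",
    "Image sensor with stacked photodiodes",
]
query = ["inductive charging pad for electric cars"]

vec = TfidfVectorizer()
doc_matrix = vec.fit_transform(corpus)
scores = cosine_similarity(vec.transform(query), doc_matrix).ravel()

# Highest-scoring documents first.
for score, title in sorted(zip(scores, corpus), reverse=True):
    print(f"{score:.2f}  {title}")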

A large IP portfolio is bound to have both strengths and weaknesses. Indeed, one of the weaknesses may be the sheer scope of the portfolio. As a patent portfolio increases in size, it becomes difficult to effectively oversee and draw insight from it. As a result, firms are limited not only in managing processes such as renewals, but also in using insight to gain a competitive advantage.

Many IP professionals are already analysing the value of their patent portfolio. Which patents are most effective? Which deliver the most licencing revenues? In which countries? What is the value of IP to a business compared to the cost of renewal? By analysing large sets of data, AI is able to indicate where a company's portfolio of IP is strongest and weakest.

This can, in turn, shape future investment decisions in research and development, help companies understand their strengths and weaknesses relative to their competitors, and enable them to understand more about the potential opportunities in new markets.

AI is now delivering real value to companies that need to solve complex issues. Within IP management, AI can empower IP professionals. Day-to-day IP tasks can be time-consuming, but AI technology gives professionals the time to focus on more strategic decisions about their portfolio. It will also drive improved accuracy, while reducing the risk of IP insight and intelligence walking out the door as employees move on. For IP professionals, however, the real opportunity comes from the insight that AI can provide into otherwise impenetrable and inaccessible volumes of data. AI will help IP professionals generate business insight that can open up new markets, accurately value an IP portfolio and deliver a better understanding of what the next generation of IP investment should be and where it should come from.

Tyron Stading is the Chief Data Officer for CPA Global, where he is responsible for creating unified data integration and analytics across all of the company's products and services. In 2006, Tyron founded and served as CTO for Innography, the US-based IP analytics software provider that CPA Global acquired in 2015. He was previously employed at IBM and several other high-technology start-ups. Tyron earned a Computer Science degree from Stanford University and an MBA from the University of Texas at Austin. He has published multiple research papers on intellectual property and personally filed more than 50 patents.

Read the rest here:

The Role of Artificial Intelligence in Intellectual Property - IPWatchdog.com

This man trained an artificial intelligence to generate the most British sounding place names – The indy100

Dan Hon recently set out to do something rather fun with artificial intelligence.

The director of content at Code for America took a bunch of existing British placenames and, with a method that goes over our heads somewhat, managed to train an AI to generate some new ones.

His results are so fantastic that they sound like any number of drizzly English villages you may have driven through looking for a half-hidden wedding venue.

Check out Dan's workings, and all the incredibly convincing-sounding British placenames, below.

I trained an A.I. to generate British placenames

The results were predictable.

(Inspired in part by Janelle Shane's New paint colors invented by neural network. Tom Taylor did similar work in 2016, generating English village names.)

Method:

1. Find a list of British placenames. Here's one you can download as a CSV. You just need the names, so strip out all the other columns. To save some time, you can use the one I prepared earlier.

2. Pick a multi-layer recurrent neural network to use. The first time I did this, Karpathy's char-rnn was all the rage; this time I used jcjohnson's torch-rnn. (For a rough idea of what such a model looks like inside, see the sketch after the sample output below.)

3. If you're using a Mac, don't bother trying to get OpenCL GPU support working. I wasted 3 hours. Just use crisbal's CPU-based Docker image. (If you know what you're doing, then you're already comfortable doing this all on AWS, or you've got an NVIDIA GPU.)

4. Follow jcjohnson's instructions in the readme (pre-process your data, etc.)

5. Go and have a cup of tea while you train your model.

6. Mess around with the temperature when you sample from your model.

7. Take a look at some of my favourite neural-network-generated British placenames (and if you'd like more, here's 50,000 characters' worth):

root@themachine:~/torch-rnn# th sample.lua -checkpoint cv/checkpoint_8450.t7 -length 1000 -gpu -1

Ospley

Stoke Carrston

Elfordbion

Hevermilley

Ell

Elles Chorels

Ellers Green

Heaton on Westom

Hadford Hill

Hambate Combe

Manory Somerstow

Buchraston-on-Ter-Sey

Brotters Common

Normannegg

Twettle Row

North Hill Row St Marne

Torston-le Taney

North Praftton

Tontons Coss

Topswick End

Brumlington

Boll of Binclestead

Farton Green Pear End

Wadworth Mayshyns Wiwton

Wader Bridge

Weston Parpenham

Oarden of Land Park

Batchington Crunnerton

Larebridge Heath Brook

Capton Briins Forehouint Eftte Green

Waryburn Torner Midlwood

Wasts Halkstack

Maggington Common

Stach Helland Neston

Stoke Hills

Sutsy Compton

Stoke of Inch

Upper Somefield

Rastan-on-croan

Wadway Dynd-Rott End

Wattings Ward

Harhester Willey

Marrock

Saxford

Salton Southens Hovers

Salt, Earth

Stamorn Vale

Stouth Wiesleyt Bhampton

Upper Brynton

Kniness Gartes

Webury Hill

Eastbridge Brook

Wallow Manworth

East Holmsley Anby

Hallaid or Humme

Galling Compton

Hampers Hill

Hangyds Hain

Wasland Commone

Wantham Mount on-by Langham

Kinindworthorpe Marmile

Dompton Ole

Dimmer Common

Keston Upper Rhington

Towerhaite Mank

Cockhanford Vales

Porcoft Green

Newtons St Pethen

Silmers Hill

Crocken-ons Clow

Prrighstock Tabergate

Crisklethes Chorn

Cross Gorburster

Storton of Brook

Cartswood Csters

New Amherston

Wascood Woots Corner

West Dottisley

Westovel (Blingwars

Sandeside Backton

Waledon of Bandsead

Rald Bockan-Sea

Boleland Brase

Stoop Heath
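
For readers curious about what torch-rnn is doing under the hood, here is a minimal character-level LSTM sketched in PyTorch (my substitution; the original used Lua Torch). The three hard-coded names stand in for the full placenames file from step 1, and the training loop is far shorter than a real run.

import torch
import torch.nn as nn

# Stand-in for open('placenames.txt').read(), one name per line.
text = "\n".join(["Stoke Carrston", "Ellers Green", "Wader Bridge"])
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
itos = {i: c for c, i in stoi.items()}
data = torch.tensor([stoi[c] for c in text])

class CharRNN(nn.Module):
    def __init__(self, vocab, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)
    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.head(h), state

model = CharRNN(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)
loss_fn = nn.CrossEntropyLoss()

seq = 16
for step in range(200):  # 'go and have a cup of tea' scales with corpus size
    i = torch.randint(0, len(data) - seq - 1, (1,)).item()
    x, y = data[i:i + seq].unsqueeze(0), data[i + 1:i + seq + 1].unsqueeze(0)
    logits, _ = model(x)
    loss = loss_fn(logits.view(-1, len(chars)), y.view(-1))
    opt.zero_grad(); loss.backward(); opt.step()

def sample(n=100, temperature=0.8):
    # The temperature knob from step 6: lower = safer, higher = weirder.
    out, state = [torch.tensor([[stoi["\n"]]])], None
    for _ in range(n):
        logits, state = model(out[-1], state)
        probs = torch.softmax(logits[0, -1] / temperature, dim=0)
        out.append(torch.multinomial(probs, 1).view(1, 1))
    return "".join(itos[t.item()] for t in out[1:])

print(sample())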

See the article here:

This man trained an artificial intelligence to generate the most British sounding place names - The indy100

Artificial Intelligence + Message Chatbots = The Future of Banking? – The Financial Brand

Banks and credit unions looking to grow relationships with Millennials and Gen Z must embrace chatbot technology. With AI evolving at an exponential pace, the adoption of automated chatbots is set to take off.

By Mikki Ware, Digital Marketing Director at Gremlin Social

Simply mentioning artificial intelligence may evoke thoughts of the malevolent HAL 9000 from 2001: A Space Odyssey. But this isn't sci-fi; AI is going mainstream.

Most of us have already encountered AI, but probably didn't realize it. Google's hallowed search algorithm is pure AI. Same thing for Facebook's. Apple's Siri and the Amazon Echo are virtual assistants built on AI backbones that can respond to a robust range of voice commands. AI has many different shades, flavors, variants and alternate names. Some call it machine learning. Others think of predictive analytics. Personally, I like TechTarget's simple definition of artificial intelligence: giving computers the ability to learn without being explicitly programmed.

While AI has been gaining traction, the use of messaging apps has also been on the rise. Research from GlobalWebIndex shows that 63% of those who use mobile apps are also active mobile bankers. And while the titans of Silicon Valley get the AI + messaging trend (Facebook Messenger, Google Talk, WhatsApp), not everyone in banking is on the same page. Deutsche Bank, for instance, recently banned the use of messaging apps for all employees on work phones (citing, of course, compliance issues as the reason).

The future of banking interactions will be built using a combination of AI and messaging apps for customer service, payments, content distribution and alerts. Citizens Bank uses an app called Relay to drive loan completions. The bank noted that customers were not receiving notifications sent via mail and phone calls, so it began using the messaging app to send reminders. Called the Citizens Bank Wire service, the app has helped increase loan completions by 10%.

For banks and credit unions looking for an entry-level platform, Facebook Messenger is probably the most user-friendly and the easiest to set up. Facebook has released developer tools allowing users to customize their own chatbot, and you can also work with developers to set up chatbots that mimic your brand's tone and messaging. When a customer sends a message, the chatbot responds with pre-programmed questions or information to help guide the customer to a solution. CenterState Bank says this approach should be particularly attractive for small to mid-sized community institutions with a small customer service department or limited hours.

Read More:

There are pros and cons to leveraging AI and chatbots. Let's start with the benefits.

First, AI saves time. If your institution has limited resources, using an app like Facebook Messenger and creating a chatbot can reduce or eliminate the need for your team to answer basic frequently asked questions. This is one of the main reasons that chatbots have the potential to reduce labor costs for financial service companies by as much as $15 billion, according to Business Insider.

AI also expands your reach. Banks and credit unions that are trying to reach Millennials absolutely need to invest in these technologies. And Gen Z isn't far behind. According to a study by the American Bankers Association, Millennials are three times more likely to open a new account with their phone than in person. Gen Z takes it a step further. They are digital natives, and mobile features like chatbots are not options but requirements. Another report revealed that Gen Z dismisses email as an antiquated form of communication; they are three times likelier to open a chat message received through a push notification.

Banks and credit unions planning to stay competitive with large institutions and fintech payment players like PayPal had better consider how their digital offerings will resonate with future generations.

Now, let's explore a few of the cons.

AI is next-level technology, and it can get very complex very quickly. You will have to decide what questions your bot will respond to, and provide the correct answers. Turnkey platforms like Facebook will give you step-by-step instructions for creating chatbots (see the Messenger Platform quick-start guide at https://developers.facebook.com/docs/messenger-platform/guides/quick-start), but developers will need to write the code to make sure all actions are executed properly.
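
To give a feel for the scale of the task, here is a bare-bones sketch of a Messenger FAQ bot using Flask and the Send API. The tokens, the FAQ table and the fallback reply are all placeholders; a production bot needs the full setup described in the quick-start guide above.

import requests
from flask import Flask, request

app = Flask(__name__)
PAGE_TOKEN = "YOUR_PAGE_ACCESS_TOKEN"  # placeholder
VERIFY_TOKEN = "YOUR_VERIFY_TOKEN"     # placeholder

FAQ = {  # the 'pre-programmed information' from the article, invented here
    "hours": "Our branches are open 9am-5pm, Monday to Friday.",
    "routing": "You can find your routing number on your statement.",
}

@app.route("/webhook", methods=["GET"])
def verify():
    # Facebook sends a one-time GET to confirm the webhook.
    if request.args.get("hub.verify_token") == VERIFY_TOKEN:
        return request.args.get("hub.challenge", "")
    return "bad token", 403

@app.route("/webhook", methods=["POST"])
def handle():
    for entry in request.get_json().get("entry", []):
        for event in entry.get("messaging", []):
            text = event.get("message", {}).get("text", "").lower()
            sender = event["sender"]["id"]
            reply = next((a for k, a in FAQ.items() if k in text),
                         "Let me connect you with a human agent.")
            requests.post(
                "https://graph.facebook.com/v2.6/me/messages",
                params={"access_token": PAGE_TOKEN},
                json={"recipient": {"id": sender}, "message": {"text": reply}},
            )
    return "ok"

if __name__ == "__main__":
    app.run(port=5000)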

AI also still requires human intervention. As with any social media or digital channel, the technology is only as good as the people managing it. One example of a bot gone bad was Microsoft's Tay chatbot on Twitter. Tay was taken over by internet trolls, who manipulated the bot into praising Hitler and engaging with other inappropriate topics before Microsoft could shut it down. And while bots may be able to take people up to a certain point, you must be prepared with a response strategy for cases where the dialogue warrants human intervention. Developers also need to be prepared for technology updates and bugs that might interfere with the user experience.

It's not really a question of if, but when, your institution needs to begin exploring messenger apps and AI. Larger banks such as Wells Fargo and Bank of America have already launched robust messenger bots. A survey conducted by Personetics shows that over three-quarters of financial institutions view chatbots as an opportunity, and that most plan to launch a chatbot in the very near term.

For banks and financial institutions looking for new business from Millennials and Gen Z, leveraging chatbots will be a necessity. This technology is set to skyrocket in the next 12-24 months. AI is getting smarter, it's evolving quickly, and it's going mainstream, not just in banking but in our day-to-day lives.

Mikki Ware is the Digital Marketing Director at Gremlin Social, a social media marketing and compliance software company in St. Louis. Mikki is a digital marketer with seven years' experience in B2B marketing and software as a service. Mikki develops and executes integrated web and social media marketing strategies, and assists Gremlin clients looking to leverage digital strategies to achieve their business goals.

The rest is here:

Artificial Intelligence + Message Chatbots = The Future of Banking? - The Financial Brand

Musk vs. Zuck – The Fracas Over Artificial Intelligence. Where Do You Stand? – HuffPost

Advances in Artificial Intelligence (AI) have dominated both tech and business stories this year. Industry heavyweights such as Stephen Hawking and Bill Gates have famously voiced their concerns about blindly rushing into AI without thinking about the consequences.

AI has already proven that it has the power to outsmart humans. IBM Watson famously destroyed human opponents at a game of Jeopardy, and a Google computer beat the world champion of the Chinese board game, Go.

Google's AI team are taking no chances after revealing that they are developing a 'big red button' to switch off systems if they pose a threat to humans. In fact, scientists at Google DeepMind and Oxford University have revealed their plan to prevent a doomsday scenario in their paper titled Safely Interruptible Agents.

Truth is indeed stranger than fiction and tech fans could be forgiven for nearly choking on their cornflakes this morning after hearing about a very public disagreement between the two tech billionaires. The argument is probably a good reflection of how people on both sides of the aisle feel about heading into the foggy world of AI.

In one corner, we have Mark Zuckerberg, who believes AI will massively improve the human condition. Some say he is more focused on his global traffic dominance and short-term profits than the fate of humanity. Whatever your opinion, he does represent a sanguine view of futuristic technologies such as AI.

In the other corner, we have Tesla's Elon Musk, who seems to be more aware of the impact our actions might have on future generations. Musk appears concerned that once Pandora's box has been cracked open, we could unwittingly be creating a dystopian future.

Zuckerberg landed the first punch in a Facebook Live broadcast, dismissing doomsday warnings about AI as irresponsible.

However, Elon Musk calmly retaliated with a virtual uppercut, tweeting: "I've talked to Mark about this. His understanding of the subject is limited."

Whether you side with Musk and believe that AI will represent humanity's biggest existential threat, or think Zuckerberg is closer to the truth when he says AI is going to make our lives better, your view is entirely subjective at this point.

However, given the range of opinions around this topic, should we be taking the future of AI more seriously than we do today?

I will tell you that big businesses with large volumes of data are falling over themselves trying to install machine learning and AI-driven solutions. However, right now, many of these AI-driven systems are also the source of our biggest frustrations as consumers.

Are businesses guilty of rushing into AI-based solutions without thinking of the bigger picture? There are several examples of things going awry, like chatbots claiming to be real people, the spread of fake news, or being told you are not eligible for a mortgage because a computer says so.

There are also an increasing number of stories about AI not being quite as smart as some would believe it to be, or how often algorithms are getting it wrong or being designed to deceive consumers. For every great tech story, there is a human story about creativity and emotional intelligence that a machine can never match.

Make no mistake: the AI revolution is coming our way, and large corporations will harvest the benefits of cultivating their big data initiatives. Anything that eliminates antiquated processes of the past and enables business efficiency can only be a giant leap forward.

However, the digital transformation of everything we know is not going to happen overnight. That does not mean we shouldn't be vigilant about how our actions today could affect future generations.

Mr. Zuckerberg may be accused by some of acting in the interests of his social media platform, and that is quite understandable. It is safe to assume that, nowadays, a hidden interest resides beneath every noble statement, unless one is Mahatma Gandhi, Dr. Martin Luther King or Nelson Mandela.

On the other hand, there are also the likes of Musk and Gates, who are arguably looking beyond their own business interests.

I am no expert by any stretch of the imagination, but I do ask: do we need more of us to question how advancements in technology are providing advantages for the few rather than the many?

Let's build on Elon Musk's point of view for a moment. Should we be concerned that a dystopian future awaits us on the horizon? Will the machines rise and turn on their masters?

AI is no longer merely a concept from a science fiction movie. The future is now. The reality is that businesses need to harness this new technology to secure a preemptive competitive advantage. Time-consuming, laborious and automatable tasks can be performed better and faster by machines that continuously learn, adapt and improve.

The current advances in technology have unexpected parallels with the industrial revolution, which delivered new manufacturing processes. Two hundred years ago, the transition from an agricultural society to one based on the manufacture of goods and services dramatically increased the speed of progress.

Steel and iron replaced manual labor with mechanized mass production two centuries ago. That is not unlike the circumstances facing businesses today. The reality is that as old skills and roles slowly fade away, there will be a massive shortage of other skills and new roles relevant to the digital age.

Ultimately, we have a desire to use technology to change the world for the better in the same way that the industrial revolution changed the landscape of the world forever. The biggest problems surrounding market demand and real world needs could all be resolved by a new generation of AI hardware, software, and algorithms.

After years of collecting vast quantities of data, we are currently drowning in a sea of information. If self-learning and intelligent machines can turn this into actionable knowledge, then we are on the right path to progress. Upon closer inspection, the opportunities around climate modeling and complex disease analysis also illustrate how we should be excited rather than afraid of the possibilities.

The flip side of this is the understanding that nothing is entirely one thing. The weighing of risks against rewards, and the fact that researchers are talking about worst-case scenarios, should be a positive thing. I would be more concerned if the likes of Facebook, Google, Microsoft and IBM rushed in blindly without thinking about the consequences of their actions. Erring on the side of caution is a good thing, right?

Demis Hassabis is the man behind the AI research start-up DeepMind, which he co-founded in 2010 with Shane Legg and Mustafa Suleyman, and which Google bought in 2014. Demis has offered reassuring words on the subject to the UK's Guardian newspaper.

It would appear that all bases are being covered and we should refrain from entering panic mode.

The only question the paper does not answer is what would happen if the robots were to discover that we are trying to disable their access or shut them down. Maybe the self-aware machine could change the programming of the infamous Red Button. But that kind of crazy talk is confined to Hollywood movies, isn't it? Let's hope so, for the sake of the human race.

Those of us who have been exasperated by Facebook's algorithm repeatedly showing posts from three days ago on our timelines will tell you that much of this technology is still in its infancy.

Although we have a long way to go before AI can live up to the hype, we should nevertheless be mindful of what could happen in a couple of decades.

Despite the internet mêlée over the impact of AI between the two most powerful tech CEOs of our generation, I suspect that, like anything in life, the sweet spot is probably somewhere in the middle of these two contrasting opinions.

Are you nervous or optimistic about heading into a self-learning AI-centric world?

Continued here:

Musk vs. Zuck - The Fracas Over Artificial Intelligence. Where Do You Stand? - HuffPost

Artificial intelligence is not as smart as you (or Elon Musk) think … – TechCrunch

In March 2016, DeepMind's AlphaGo beat Lee Sedol, who at the time was the best human Go player in the world. It represented one of those defining technological moments, like IBM's Deep Blue beating chess champion Garry Kasparov, or IBM Watson beating the world's greatest Jeopardy! champions in 2011.

Yet these victories, as mind-blowing as they seemed to be, were more about training algorithms and using brute-force computational strength than any real intelligence. Former MIT robotics professor Rodney Brooks, who was one of the founders of iRobot and later Rethink Robotics, reminded us at the TechCrunch Robotics Session at MIT last week that training an algorithm to play a difficult strategy game isn't intelligence, at least as we think about it with humans.

He explained that as strong as AlphaGo was at its given task, it couldn't actually do anything but play Go on a standard 19 x 19 board. He relayed a story: while speaking to the DeepMind team in London recently, he asked what would have happened if they had changed the size of the board to 29 x 29, and the AlphaGo team admitted to him that had there been even a slight change to the size of the board, "we would have been dead."

"I think people see how well [an algorithm] performs at one task and they think it can do all the things around that, and it can't," Brooks explained.

As Kasparov pointed out in an interview with Devin Coldewey at TechCrunch Disrupt in May, it's one thing to design a computer to play chess at Grand Master level, but it's another to call it intelligence in the pure sense. It's simply throwing computer power at a problem and letting a machine do what it does best.

"In chess, machines dominate the game because of the brute force of calculation and they [could] crunch chess once the databases got big enough and hardware got fast enough and algorithms got smart enough, but there are still many things that humans understand. Machines don't have understanding. They don't recognize strategical patterns. Machines don't have purpose," Kasparov explained.

Gil Pratt, CEO at the Toyota Institute, a group inside Toyota working on artificial intelligence projects including household robots and autonomous cars, said in an interview at the TechCrunch Robotics Session that the fear we are hearing about from a wide range of people, including Elon Musk, who most recently called AI an existential threat to humanity, could stem from science-fiction dystopian descriptions of artificial intelligence run amok.

The deep learning systems we have, which is what sort of spurred all this stuff, are remarkable in how well we do given the particular tasks that we give them, but they are actually quite narrow and brittle in their scope. So I think its important to keep in context how good these systems are, and actually how bad they are too, and how long we have to go until these systems actually pose that kind of a threat [that Elon Musk and others talk about].

Brooks said in his TechCrunch Sessions: Robotics talk that there is a tendency for us to assume that if an algorithm can do x, it must be as smart as humans. "Here's the reason that people, including Elon, make this mistake. When we see a person performing a task very well, we understand the competence [involved]. And I think they apply the same model to machine learning," he said.

Facebook's Mark Zuckerberg also criticized Musk's comments, calling them "pretty irresponsible," in a Facebook Live broadcast on Sunday. Zuckerberg believes AI will ultimately improve our lives. Musk shot back later that Zuckerberg had a limited understanding of AI. (And on and on it goes.)

It's worth noting, however, that Musk isn't alone in this thinking. Physicist Stephen Hawking and philosopher Nick Bostrom have also expressed reservations about the potential impact of AI on humankind, but chances are they are talking about the more generalized artificial intelligence being studied in labs at the likes of Facebook AI Research, DeepMind and Maluuba, rather than the more narrow AI we are seeing today.

Brooks pointed out that many of these detractors don't actually work in AI, and suggested they don't understand just how difficult it is to solve each problem. "There are quite a few people out there who say that AI is an existential threat: Stephen Hawking, [Martin Rees], the Astronomer Royal of Great Britain, a few other people, and they share a common thread in that they don't work in AI themselves." Brooks went on to say, "For those of us who do work in AI, we understand how hard it is to get anything to actually work through product level."

Part of the problem stems from the fact that we are calling it artificial intelligence. It is not really like human intelligence at all, which Merriam-Webster defines as the ability to learn or understand or to deal with new or trying situations.

Pascal Kaufmann, founder of Starmind, a startup that wants to help companies use collective human intelligence to find solutions to business problems, has been studying neuroscience for the past 15 years. He says the human brain and the computer operate differently, and it's a mistake to compare the two. "The analogy that the brain is like a computer is a dangerous one, and blocks the progress of AI," he says.

Further, Kaufmann believes we won't advance our understanding of human intelligence if we think of it in technological terms. "It is a misconception that [algorithms] work like a human brain. People fall in love with algorithms and think that you can describe the brain with algorithms, and I think that's wrong," he said.

When things go wrong

There are in fact many cases of AI algorithms not being quite as smart as we might think. One infamous example of AI out of control was the Microsoft Tay chatbot, created by the Microsoft AI team last year. It took less than a day for the bot to learn to be racist. Experts say this could happen to any AI system when bad examples are presented to it. In the case of Tay, it was manipulated with racist and other offensive language, and since it had been taught to learn and mirror that behavior, it soon spun out of the researchers' control.

A widely reported study conducted by researchers at Cornell University and the University of Wyoming found that it was fairly easy to fool algorithms that had been trained to identify pictures. The researchers found that when presented with what looked like scrambled nonsense to humans, the algorithms would identify it as an everyday object like a school bus.

What's not well understood, according to an MIT Tech Review article on the same research project, is why the algorithms can be fooled in the way the researchers found. What we do know is that humans have learned to recognize whether something is a picture or nonsense, while algorithms analyzing pixels can apparently be subject to this kind of manipulation.
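
The Cornell/Wyoming researchers evolved their nonsense images, but the same brittleness can be demonstrated even more simply with the fast gradient sign method (FGSM), a related technique; the PyTorch sketch below uses a random tensor as a stand-in for a real photograph.

import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(pretrained=True).eval()

# A random tensor stands in for a real, preprocessed photograph.
x = torch.rand(1, 3, 224, 224, requires_grad=True)
label = model(x).argmax(dim=1)  # the model's own prediction

# Take one gradient step *up* the loss surface, in input space.
loss = F.cross_entropy(model(x), label)
loss.backward()
x_adv = (x + 0.03 * x.grad.sign()).clamp(0, 1)  # a tiny, structured nudge

print("before:", label.item(), "after:", model(x_adv).argmax(dim=1).item())

A nudge of that size is effectively invisible to a person, yet it is often enough to flip the predicted class, which is exactly the gap between pixel statistics and human understanding the article describes.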

Self-driving cars are even more complicated, because there are things that humans understand when approaching certain situations that would be difficult to teach to a machine. In a long blog post on autonomous cars that Rodney Brooks wrote in January, he brings up a number of such situations, including how an autonomous car might approach a stop sign at a crosswalk in a city neighborhood with an adult and child standing at the corner chatting.

The algorithm would probably be tuned to wait for the pedestrians to cross, but what if they had no intention of crossing because they were waiting for a school bus? "A human driver could signal to the pedestrians to go, and they in turn could wave the car on, but a driverless car could potentially be stuck there endlessly waiting for the pair to cross because they have no understanding of these uniquely human signals," he wrote.

Each of these examples shows just how far we have to go with artificial intelligence algorithms. Should researchers ever become more successful at developing generalized AI, this could change, but for now there are things that humans can do easily that are much more difficult to teach an algorithm, precisely because human learning is not limited to a set of defined tasks.

Read this article:

Artificial intelligence is not as smart as you (or Elon Musk) think ... - TechCrunch

Roadwork gets techie: Drones, artificial intelligence creep into the road construction industry – The Mercury News

High above the Balfour interchange on State Route 4 in Brentwood, a drone buzzes, its sensors keeping a close watch on the volumes of earth being moved to make way for a new highway bypass. In Pittsburg, a camera perched on the dash of a car driving through city streets periodically snaps pictures of potholes and cracks in the pavement. And, at the corner of Harbor and School streets in the same city, another camera monitors pedestrians, cyclists and cars, where 13-year-old Jordyn Molton lost her life late last year after a truck struck her.

Although the types of technology and their goals differ, all three first-of-their-kind projects in Contra Costa County aim to offer improvements to the road construction and maintenance industry, which has lagged significantly behind other sectors when it comes to adopting new technology. Lack of investment stifled innovation, said John Bly, the vice president of the Northern California Engineering Contractors Association.

But, with the recent passage of SB1, a gas tax and transportation infrastructure funding bill, that's all set to change, he said.

"You may see some of these high-tech firms find new market niches because now you have billions of dollars going into transportation infrastructure and upgrades," he said. "That's coming real quick."

It's still so new that Bly was hard-pressed to think of other areas in the state where drone and artificial intelligence software is being integrated into road construction work. The pilot programs in the East Bay are cutting edge, he said.

At the Contra Costa Transportation Authority, Executive Director Randy Iwasaki has been pushing for several years to experiment with emerging technology in the road construction and maintenance industry. So, when the authority's construction manager, Ivan Ramirez, came to him with an idea to use drones in its $74 million interchange project, Iwasaki was eager to try it.

"We often complain we don't have enough money for transportation," Iwasaki said, adding that the use of drones at the interchange project in Brentwood would enable the authority's contractors to save paper, save time and save money.

That's because, traditionally, survey crews standing on the edge of the freeway would take measurements of the dirt each time it is moved. The process is time-consuming and hazardous, Ramirez said. But it's only the tip of the iceberg when it comes to potential applications for the drone technology, which could also be used to perform inspections on poles or bridges, and to perform tasks people haven't yet thought of.

"As you begin to talk to people, then other ideas begin to emerge about where we might be going, and it's propelling more ideas for the future," Ramirez said. "By not having surveyors on the road, or not having to send an inspector up in a manlift way up high or into a confined space, not only is it more efficient, but it will provide safety improvements, as well."

Meanwhile, in Pittsburg, the city is working with RoadBotics on a pilot program to better manage its local roads. The company uses car-mounted cellphone cameras to snap photos of street conditions before running that data through artificial intelligence software to create color-coded maps showing which roads are in good shape, which need monitoring and which are in need of immediate repairs.
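
A toy sketch of that general pipeline, and emphatically not RoadBotics' actual system, might look like the following: a fine-tuned image classifier buckets each dash-cam frame by road condition. The class names and the checkpoint file here are hypothetical.

import torch
from PIL import Image
from torchvision import models, transforms

CLASSES = ["good", "monitor", "repair_now"]  # illustrative buckets

# A ResNet with a three-way head; "road_condition.pt" is a hypothetical
# checkpoint. Without fine-tuned weights the ratings are meaningless.
model = models.resnet18(num_classes=len(CLASSES))
# model.load_state_dict(torch.load("road_condition.pt"))
model.eval()

prep = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def rate_frame(path):
    x = prep(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return CLASSES[model(x).argmax(dim=1).item()]

# In practice each frame is GPS-tagged, so per-frame ratings roll up into
# the kind of color-coded street map described above.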

The company's goal is to make it easier for city officials to monitor and manage their roads, so small repairs don't turn into complete overhauls, said Mark DeSantis, the company's CEO. Representatives from Pittsburg did not respond to requests for comment.

"The challenge of managing roads is not so much filling the little cracks; that's not much of a burden," DeSantis said. "The real challenge is when you have to repave the road completely. So, the idea is to see the features on the road and see which ones are predictive of roads that are about to fail."

At the same time, Charles Chung of Brisk Synergies is hoping to use cameras and artificial intelligence software in a different way: seeing how the design of the road influences how drivers behave. At the corner of Harbor and School streets, the company installed a camera to watch how cars, cyclists and pedestrians move through the intersection and to identify why drivers might be speeding. In particular, the company is trying to determine how effective crossing guards are at slowing down cars, he said.

The company is still in the process of gathering data on that intersection and writing its report, but Chung said it was able to use the software in Toronto to document a 30 percent reduction in vehicle crashes after the city made changes to an intersection there. Previously, documenting the need for changes would require special crews to either monitor the roads directly or watch footage from a video feed, both of which take time and personnel.

While only emerging in a handful of projects locally, these types of technology will become far more prevalent soon, said Bart Ney of Alta Vista Solutions, the construction-management firm using drones on the SR 4 project.

"We're at the beginning of the wave," he said. "Like any disruptive technology, there is a period when you have to embrace it and take it into the field and test it so it can achieve what it's capable of. We're on the brink of that happening."

Originally posted here:

Roadwork gets techie: Drones, artificial intelligence creep into the road construction industry - The Mercury News

AI2 lists top artificial intelligence systems in its Visual Understanding Challenge – GeekWire

For AI2's Charades Challenge, visual systems had to recognize and classify a wide variety of daily activities in realistic videos. This is just a sampling of the videos. (AI2 Photos)

Some of the world's top researchers in AI have proved their mettle by taking top honors in three challenges posed by the Seattle-based Allen Institute for Artificial Intelligence.

The institute, also known as AI2, was created by Microsoft co-founder Paul Allen in 2014 to blaze new trails in the field of artificial intelligence. One of AI2's previous challenges tested the ability of AI platforms to answer eighth-grade-level science questions.

The three latest challenges focused on visual understanding that is, the ability of a computer program to navigate real-world environments and situations using synthetic vision and machine learning.

These aren't merely academic exercises: Visual understanding is a must-have for AI applications ranging from self-driving cars to automated security monitoring to sociable robots.

More than a dozen teams signed up for the competitions, and the algorithms were judged based on their accuracy. Here are the three challenges and the results:

Charades Activity Challenge: Computer vision algorithms looked at videos of people performing everyday activities; for example, drinking coffee, putting on shoes while sitting in a chair, or snuggling with a blanket on a couch while watching something on a laptop. One of the algorithms' objectives was to classify all activity categories for a given video, even if two activities were happening at the same time. Another objective was to identify the time frames for all activities in a video.

Team Kinetics from Google DeepMind won the challenge on both counts. In a statement, AI2 said the challenge significantly raised state-of-the-art accuracy for human activity recognition.

THOR Challenge: The teams computer vision systems had to navigate through 30 nearly photorealistic virtual scenes of living rooms and kitchens to find a specified target object, such as a fork or an apple, based solely on visual input.

THOR's top finisher was a team from National Tsing Hua University in Taiwan.

Textbook Question Answering Challenge: Computer algorithms were given a data set of textual and graphic information from a middle-school science curriculum, and then were asked to answer more than 26,000 questions about the content.

AI2 said the competition was exceptionally close, but the algorithm created by Monica Haurilet and Ziad Al-Halah from Germany's Karlsruhe Institute of Technology came out on top for text questions. Yi Tay and Anthony Luu from Nanyang Technological University in Singapore won the diagram-question challenge.

"The challenge participants significantly improved state-of-the-art performance on TQA's text questions, while at the same time confirming the difficulty machine learning methods have answering questions posed with a diagram," AI2 said.

The top test scores are pretty good for an AI. But they'd be failing grades for a flesh-and-blood middle-schooler: 42 percent accuracy on the text-question exam, and 32 percent on the diagram-question test.

Representatives from the winning teams will join other AI researchers at a workshop planned for Wednesday during the 2017 Conference on Computer Vision and Pattern Recognition in Honolulu.

Read this article:

AI2 lists top artificial intelligence systems in its Visual Understanding Challenge - GeekWire

Artificial Intelligence: The New Impulse For Alphabet – Seeking Alpha

"An important shift from a mobile-first world to an AI-first world."

Google CEO Sundar Pichai

Alphabet's (NASDAQ:GOOG) (NASDAQ:GOOGL) active investments in the artificial intelligence market, whose growth rate over the next decade will be four times that of the digital advertising market, increase the company's long-term investment attractiveness.

To begin, let's take a look at the current growth forecasts for the global Artificial Intelligence (AI) market in the coming decade.

Here is information provided by Statista:

For better clarity, I've slightly modified these data and projected the trend until the year 2030. Here is what I've got: over the next 15 years, this market will grow at a CAGR of 40%, and over the next 10 years, it will increase by an average of 50% each year.

The forecast from Tractica (a market intelligence firm that focuses on human interaction with technology) is a bit more modest, but it still suggests that annual worldwide AI revenue will grow from $643.7 million in 2016 to $36.8 billion by 2025, a CAGR of 49.88%.
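
That figure is easy to sanity-check with the standard CAGR formula; note that the answer depends on whether 2016 to 2025 is counted as nine or ten compounding periods.

# CAGR = (end / start) ** (1 / periods) - 1
start, end = 643.7e6, 36.8e9

for periods in (9, 10):
    cagr = (end / start) ** (1 / periods) - 1
    print(f"{periods} periods: {cagr:.2%}")

# Nine periods gives roughly 56.8%; ten gives roughly 49.9%, matching the
# 49.88% quoted above.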

So, is growth at an average annual rate of 50% over the coming decade really a lot?

It depends on what you compare it with, but given that I'm performing this analysis through the prism of perspectives for Alphabet, it probably makes sense to compare AI with the digital advertising market, which accounts for 87% of Google's revenue.

As we can see from eMarketer's data, and assuming the trend persists, this market will grow at an average annual rate of 12.3% in the coming decade, i.e. four times slower than the AI market.

Also, Alphabet is one of the market leaders in cloud computing, therefore, I propose to compare the growth rate of this market with AI as well.

According to Wikibon, enterprise cloud spending will grow at a CAGR of 19% between 2016 and 2026.

Approximately the same forecast for the next five years was given by IDC.

So, over the horizon of the coming decade, the AI market will grow at least twice as fast as the cloud computing market and four times as fast as the digital advertising market. The most obvious conclusion: in order to ensure a double-digit annual growth rate over the next ten years, Alphabet needs to invest actively in the AI market. The good news for the owners of Alphabet shares is that the company is already doing exactly that.

Starting in 2012, Alphabet acquired 11 startups specializing in AI, which exceeds the number of similar acquisitions by Microsoft (MSFT) and Facebook (FB) combined.

However, it should be remembered that quantity does not always turn into quality. Nevertheless, if you judge Alphabet's success in the field of AI by the level of artificial intelligence of the Google Voice Assistant, it becomes clear that the company is currently the leader.

According to a March study conducted by Stone Temple, which compared the quality of the responses of the intelligent assistants developed by Alphabet (Google Assistant), Microsoft (Cortana), Apple (AAPL) (Siri) and Amazon (AMZN) (Alexa), Google Assistant answered the largest number of questions and also made the smallest number of mistakes.

It has been observed that it is not the fastest runner who wins a long-distance race, but the one who starts earliest. The AI market is an incredibly long distance, but, apparently, Alphabet started this race first and is already the leader.

Moreover, Alphabet, being the most popular global search engine with an enormous amount of data, has every chance of remaining in a leading position in the artificial intelligence market over the long term, and AI will be the company's growth driver for the next decade.

P.S. I have recently published my version of Alphabet's valuation through DCF analysis, and I came to the conclusion that, given the most conservative prediction parameters and revenue growth at a CAGR of 12.5% over the next ten years, the fair price of the company's shares is at least 30% above the current level. Considering the figures provided in this article, I will probably have to revise the DCF model, increasing the revenue growth forecasts. Of course, this will enhance the growth potential of the company's share price.

Disclosure: I/we have no positions in any stocks mentioned, and no plans to initiate any positions within the next 72 hours.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.

Continued here:

Artificial Intelligence: The New Impulse For Alphabet - Seeking Alpha

The Power of Artificial Intelligence – A Glimpse at HR’s Future – HuffPost

Artificial Intelligence has spread its majestic wonders through the business world, across many industries and niches. From predictive technology and fully automated, self-regulating processes to the way companies manage data and market their products, Artificial Intelligence is as real as peanut butter and jelly sandwiches.

And it's here to stay!

It's 2017. The question is: how can companies effectively exploit the full potential of the tools at their disposal while growing their business and still maintaining a personal connection with their customer base?

Recently, Google became one of several global powerhouses financially backing a new $150 million AI research institute in Toronto, Canada. The backers also include the Chan Zuckerberg Initiative, the investment fund of Facebook CEO Mark Zuckerberg and his wife Priscilla Chan, which has pledged $20,000 a year in funding.

One of the most interesting areas of AI applicability is Human Resources in the Retail and Hospitality industries, where reducing hiring bias and enforcing complete transparency in acquiring talent are crucial.

Knockri, an innovative young tech start-up from Toronto, Canada, is one example of a company taking AI-based HR solutions by the horns.

"The ultimate goal is to be able to help every recruiting team in the world that requires strong front-line customer service talent. Our AI system for HR uses audio and video analysis, assessing personality attributes to gauge how fit an applicant would be for a position based on employer feedback, industry knowledge and scientifically backed data," says Jahanzaib Ansari, the company's CEO.

The software does not eliminate the person-to-person interview; it acts as a highly intelligent screening tool that gets the best applicants to the interview much faster. Since its founding in 2016, the start-up has been disrupting the retail and hospitality industries by using the power of AI to save employers an immense amount of time and money in the screening and shortlisting stages of hiring.

The company is further enhancing its system alongside IBM Watson's cutting-edge Artificial Intelligence technology, based out of IBM's Innovation Space in Toronto.

Artificial Intelligence has the potential to have an enormous impact on the Human Resources industry.

Here are 5 ways that stand out:

1. Personalization: Personalizing the process of learning at the corporate level and recording critical employee data relating to a broad spectrum of behaviours and learning patterns.

2. Improved Recruitment: HR is a highly human-centric realm. Since human beings are complicated creatures, it's very hard to acquire basic analysis-focused data on individuals during the hiring process. This is where AI helps with predictive analytics on natural language, speeding up recruitment by empowering businesses to weed out unsuitable candidates faster while committing far fewer mistakes (a toy sketch of this idea follows this list).

3. Workflow Automation: AI is poised to be a game-changer when it comes to workflow problems. Automating processes like interview scheduling, employee performance reviews, employee onboarding, and even the answering of basic HR questions all falls under this category.

4. Better Prediction Models: Prediction models are vital to improving efficiency, productivity and overall cost-effectiveness for any organization. This is where AI works its magic, analyzing turnover rates, internal employee engagement levels, communications, and any other unforeseen problems that could otherwise take months or years to visibly surface. Artificial Intelligence will always be one step ahead of the companies themselves.

5. Reducing Hiring Biases: The quality of the data a company's AI is trained on is really important. One of AI's fundamental goals should naturally be to help companies build a stronger and more diverse workforce. This means the data used to train the AI should be as unbiased as possible, so it can genuinely help businesses increase their employment equity in an honest way.
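To make the natural-language screening in item 2 concrete, here is a toy, purely illustrative sketch: the training snippets, labels and model choice are all hypothetical, and a production system such as Knockri's is far more sophisticated (and, per item 5, must be audited for bias in its training data):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: snippets from past applications, labeled 1
# if the applicant ultimately proved a strong front-line hire, else 0.
texts = [
    "five years of front-line customer service, led a store team",
    "resolved escalated guest complaints, trained new staff",
    "no customer-facing experience, prefers independent work",
    "short tenures, no references provided",
]
labels = [1, 1, 0, 0]

# TF-IDF features plus logistic regression: a minimal stand-in for the
# predictive text analytics described in item 2.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Score a new applicant's text; a human recruiter still makes the final call.
applicant = ["three years in hospitality, praised for guest service"]
print(model.predict_proba(applicant)[0][1])  # estimated "strong hire" probability
```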

With many brick-and-mortar retail stores shutting down, a huge focus has been placed on the in-store customer experience.

It is critical that employers get the best applicant for a position 10x quicker by using technologies and software that work as enablers, making daily operations a lot more efficient via the power of AI, instead of replacing the traditional recruiter altogether.

So what's the takeaway here?

HR professionals need to embrace big data so they can be prepared to adopt the profound technological advancements in AI that are set to revolutionise the recruitment industry forever!

With the fast-paced, ever-evolving world of business going through tremendous technological change, it is imperative that companies around the globe - particularly those in the HR, Retail and Hospitality industries - understand, employ and effectively apply the power of Artificial Intelligence.


Read the original post:

The Power of Artificial Intelligence - A Glimpse at HR's Future - HuffPost