Daily Archives: August 1, 2017

The ‘creepy Facebook AI’ story that captivated the media – BBC News

Posted: August 1, 2017 at 6:18 pm


BBC News
The 'creepy Facebook AI' story that captivated the media
The newspapers have a scoop today - it seems that artificial intelligence (AI) could be out to get us. "'Robot intelligence is dangerous': Expert's warning after Facebook AI 'develop their own language'", says the Mirror. Similar stories have appeared ...
Dystopian Fear Of Facebook's AI Experiment Is Highly Exaggerated - Forbes
Facebook didn't kill its language-building AI because it was too smart, it was actually too dumb - Quartz
This is how Facebook's shut-down AI robots developed their own language and why it's more common than you think - The Independent
Gizmodo - CBS Los Angeles - Fast Co. Design - Facebook Code

Read more from the original source:

The 'creepy Facebook AI' story that captivated the media - BBC News

Posted in Ai | Comments Off on The ‘creepy Facebook AI’ story that captivated the media – BBC News

Facebook Buys AI Startup Ozlo for Messenger – Investopedia

Posted: at 6:18 pm


Investopedia
Facebook Buys AI Startup Ozlo for Messenger
According to media reports, past demos on the company's website show how an AI digital assistant developed by the company can tell a user if a restaurant is group-friendly by gathering and analyzing all the reviews of the establishment. On its website ...
Facebook Acquires AI Startup Ozlo - Inc.com
Facebook buys Ozlo to boost its conversational AI efforts - TechCrunch
Facebook acquired an AI startup to help Messenger build out its ... - Recode
YourStory.com - Fortune - GeekWire

Continue reading here:

Facebook Buys AI Startup Ozlo for Messenger - Investopedia

Posted in Ai | Comments Off on Facebook Buys AI Startup Ozlo for Messenger – Investopedia

LogMeIn acquires chatbot and AI startup Nanorep for up to $50M – TechCrunch

Posted: at 6:18 pm

LogMeIn, the company that provides authentication and other connectivity solutions for those who connect remotely to networks and services, has made another acquisition to expand the products it offers to customers, specifically in its new Bold360 CRM platform, launched in June. The company has picked up Nanorep, a startup out of Israel that develops chatbots and other AI-based tools to help people navigate self-service apps.

LogMeIn is paying $45 million plus up to $5 million more in earn-outs based on performance and employees staying put over the next two years.

Nanorep had raised just under $7 million from investors that included Titanium out of Russia (which had also backed Cloudyn, recently acquired by Microsoft), Oryzn Capital and OurCrowd.

The startup already had around 500 large customers, including big names like FedEx, Toys R Us and Vodafone. In essence, its platform helps anticipate what customers are trying to do when they're on a website (say, in a technical support or search situation) and reduces the number of steps needed to get there. It looks like all of Nanorep's existing business will continue as its tech also gets integrated into Bold360.

LogMeIns launch of Bold360 earlier this year was intended to help the company expand the range of services that it offered to customers, beyond authentication and IT management within an organisation, and into more cloud-based services where the business interfaces with its customers.

However, the CRM space is already very crowded, and so it's no surprise to see that LogMeIn has made an acquisition to add more features to the service to help set it apart from the pack.

With Nanorep, it's also tapping into the recent enthusiasm and interest in AI and in building intelligent services that mimic human behaviours, specifically in CRM.

"Artificial intelligence is changing the way we interact with our favorite brands and will play a critical role in the future of customer engagement," said Bill Wagner, CEO of LogMeIn, in a statement. "With Nanorep, we gain proven technology and AI expertise that expands our Bold360 offering, accelerates our customer engagement vision and provides a natural path for us to leverage these emerging technologies across our entire portfolio. We believe in the ability of technology to unlock the potential of the modern workforce, and with the addition of Nanorep we are going to be able to deliver solutions that will help our customers achieve the next generation of humanized and personalized customer service."

Although LogMeIn has acquired Nanorep to help raise its game in CRM, on another level this is also an important move just to keep up.

Gartner predicts that conversational agents (which you can read as a fancier way of saying chatbots) will account for 30 percent of all customer service interactions by 2022, up from just three percent today.

There are many others that are also active in this same area, including Salesforce, with its Einstein AI; Gong, which provides real-time processing and teaching to live agents; and Hubspot, which just made an acquisition of its own, of Kemvi.

What makes Nanorep interesting is that the vast majority of businesses in the world are not tech-centric, and so they will be less capable of building AI solutions like chatbots themselves, nor will they want to spend an arm and a leg to get them: like all software, AI is gradually moving into the realm of the off-the-shelf, and LogMeIn is hoping to be a part of that trend.

This is publicly traded LogMeIn's seventh acquisition, and its first since acquiring password manager LastPass in 2015 for $110 million.

See the rest here:

LogMeIn acquires chatbot and AI startup Nanorep for up to $50M - TechCrunch

Posted in Ai | Comments Off on LogMeIn acquires chatbot and AI startup Nanorep for up to $50M – TechCrunch

Google says AI better than humans at scrubbing extremist YouTube … – The Guardian

Posted: at 6:18 pm

YouTube says new, tougher rules will be implemented against supremacist content. Photograph: Sergei Konkov/TASS

Google has pledged to continue developing advanced programs using machine learning to combat the rise of extremist content, after it found that it was both faster and more accurate than humans in scrubbing illicit content from YouTube.

The company is using machine learning along with human reviewers as part of a multi-pronged approach to tackle the spread of extremist and controversial videos across YouTube, which also includes tougher standards for videos and the recruitment of more experts to flag content in need of review.

A month after announcing the changes, and following UK home secretary Amber Rudd's repeated calls for US technology firms to do more to tackle the rise of extremist content, Google's YouTube has said that its machine learning systems have already made great leaps in tackling the problem.

A YouTube spokesperson said: "While these tools aren't perfect, and aren't right for every setting, in many cases our systems have proven more accurate than humans at flagging videos that need to be removed."

"Our initial use of machine learning has more than doubled both the number of videos we've removed for violent extremism and the rate at which we've taken this kind of content down. Over 75% of the videos we've removed for violent extremism over the past month were taken down before receiving a single human flag."

One of the problems YouTube has in policing its site for illicit content is that users upload 400 hours of content every minute, making filtering out extremist content in real time an enormous challenge that only an algorithmic approach is likely to manage, the company says.

YouTube also said that it had begun working with 15 more NGOs and institutions, including the Anti-Defamation League, the No Hate Speech Movement, and the Institute for Strategic Dialogue, in an effort to improve the system's understanding of issues around hate speech, radicalisation and terrorism, to better deal with objectionable content.

Google will begin enforcing tougher standards on videos that could be deemed objectionable, but are not illegal, in the coming weeks. The company said that YouTube videos flagged as inappropriate that contain controversial religious or supremacist content, but do not breach the company's policies on hate speech or violent extremism, will be placed in a "limited state".

A YouTube spokesperson said: "The videos will remain on YouTube behind an interstitial, won't be recommended, won't be monetised, and won't have key features including comments, suggested videos, and likes."

YouTube has also begun redirecting searches with certain keywords to playlists of curated videos that confront and debunk violent extremist messages, as part of its effort to help prevent radicalisation.

Google plans to continue developing the machine learning technology and to collaborate with other technology companies to tackle online extremism.

YouTube is the world's largest video hosting service and is one of the places extremist and objectionable content ends up, even if it originates on, and is removed from, other services, including Facebook, making it a key battleground.

Big-name brands, including GSK, Pepsi, Walmart, Johnson & Johnson, the UK government and the Guardian pulled millions of pounds of advertising from YouTube and other social media properties after it was found their ads were placed next to extremist content.

See the original post here:

Google says AI better than humans at scrubbing extremist YouTube ... - The Guardian

Posted in Ai | Comments Off on Google says AI better than humans at scrubbing extremist YouTube … – The Guardian

Better drugs, faster: The potential of AI-powered humans – BBC News

Posted: at 6:18 pm


BBC News
Better drugs, faster: The potential of AI-powered humans
Scientists working in tandem with artificial intelligence (AI) could slash the time it takes to develop new drugs - and, crucially, the cost - say tech companies. Developing pharmaceutical drugs is a very expensive and time-consuming business.


Read this article:

Better drugs, faster: The potential of AI-powered humans - BBC News

Posted in Ai | Comments Off on Better drugs, faster: The potential of AI-powered humans – BBC News

Why A Third Of Retailers Continue To Shun AI And Voice-Activation – GeoMarketing (blog)

Posted: at 6:18 pm

While the retail industry has been gradually ramping up its use of artificial intelligence and voice-activation skills, there are still some holdouts, due to the lack of human-ness that nuanced and complex customer service tends to demand, a study cited by eMarketer shows.

Pointing to a survey of retailers by AI platform Linc and Brand Garage from May, eMarketer notes that over a third (34.1 percent) of US retail executives claim to be piloting AI programs.

The main use cases involve assisting human sales and customer service reps in dealing with shoppers' problems, or handling quick conversations via chatbots.

As companies like Facebook continue to explore the use of AI and chatbots, retailers and other brick-and-mortar brands, along with their customers, will be experiencing more contact with AI, whether they're ready or not.

This week, Facebook acquired AI developer Ozlo to support its Messenger platform. Ozlo is primarily focused on building messenger bots that allow AI systems to respond to text-based take-out orders and connect consumers with ride-hailing services, among many other things.

Facebook has been building out its own intelligent agent, dubbed M, for over a year. By incorporating deeper machine learning techniques to better understand consumers and recommend a food order or ride service, the social network may help address one of the pain points that retailers say continues to hold them back from fully adopting AI programs.

At the moment, just 7.7 percent of retailers surveyed by Linc and Brand Garage have an existing, ongoing role for AI in their customer service programs.

For the ones who have not created a program for voice-activated assistants or chatbots, over a third (36.2 percent) say the technology isn't sophisticated enough to do what they need. Many retailers say they don't have the technical resources to support an AI initiative. Still others remain dubious that the technologies are ready to reach the mainstream at this early point.

On top of that, a perceptible minority (8.7 percent) contend that implementing machine-based conversations would repel some consumers, out of a sense that the direct connection with consumers would be eroded.

"If someone wants a human, they will obviously be disappointed by an AI bot/assistant," says eMarketer principal analyst Victoria Petrock. "However, as the systems learn customers' preferences and become more sophisticated at predicting their wants and needs, AI bots might actually deliver a more precise and personalized experience. They also may free up the employees to do higher-level work."

Erik Lautier, EVP of e-commerce/CMO at Francesca's, a US-based women's clothing and accessories boutique with hundreds of locations, is one retail executive who has been bullish on AI.

"We're currently experimenting with two things: first, our chatbot on Facebook Messenger shows our customers weather-based outfitting recommendations, closest boutique locations, etc.; second, our customers can receive shipping updates via Facebook Messenger or SMS," Lautier said in the Linc/Brand Garage survey.

"While ROI is a challenge to measure near-term, as we enhance our capabilities in CRM, I expect we'll develop an understanding of how segments that interact with AI perform relative to others," Lautier added. "We also haven't approached our testing with an ROI-first mentality. We've developed these things because we felt a certain customer segment would find genuine value in them, and we expect that segment to grow in the months to come."

More here:

Why A Third Of Retailers Continue To Shun AI And Voice-Activation - GeoMarketing (blog)

Posted in Ai | Comments Off on Why A Third Of Retailers Continue To Shun AI And Voice-Activation – GeoMarketing (blog)

Panasonic AI senses drowsy drivers and cranks up the AC – Engadget

Posted: at 6:18 pm

Panasonic came up with five different levels of potential drowsiness: not drowsy at all, slightly drowsy, drowsy, very drowsy and seriously drowsy (their terms). The system aims to figure out exactly where you are on that scale and take the appropriate measures.

To do so, the camera uses AI facial recognition to detect eyeblinks and expressions. If your eyelids droop and your blinks slow drastically, for instance, you're likely on a level-five journey to sleepyville. All told, it can detect around 1,800 facial expressions and blink parameters related to drowsiness.

The infrared sensor, meanwhile, can tell how fast you're losing heat regardless of how much clothing you're wearing. If heat loss levels are higher, generally a driver will become drowsy more quickly. Furthermore, if it's dark rather than light, you'll also tend to get sleepy over a shorter period of time.

To counter that, the system adjusts lighting, airflow and temperature based on how drowsy it thinks you are. The system doesn't want to freeze you out, though, so Panasonic worked with Nara Women's University to calculate your "thermal sensation": the ideal level of airflow, light and warmth needed to keep you "comfortably awake," Panasonic says.

Unlike other systems, it works silently in the background so that drivers don't even notice they're being monitored. Rather, you'll (hopefully) just feel generally more awake during the trip, unless you try to pull off a 20-hour all-night trip. In that case, it'll rightfully tell you to pull the hell over so you don't endanger yourself and others. Panasonic plans to make their system available to automakers by October, and it might come to your favorite car model sometime after that.
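The pipeline described above (blink and expression cues plus infrared heat-loss readings mapped onto a five-level scale) can be illustrated with a toy classifier. Everything in this sketch is an invented assumption for illustration: the thresholds, weights and feature names are hypothetical, and Panasonic's actual system relies on roughly 1,800 learned blink and expression parameters that are not public.

```python
# Toy five-level drowsiness classifier in the spirit of the scale
# described above. All thresholds, weights and feature names are
# invented assumptions; they are not Panasonic's actual parameters.

LEVELS = ["not drowsy", "slightly drowsy", "drowsy",
          "very drowsy", "seriously drowsy"]

def drowsiness_level(blink_duration_ms, blinks_per_min, heat_loss):
    """Map blink and thermal features to one of five levels.

    blink_duration_ms: average eyelid-closure time per blink
    blinks_per_min:    blink rate (longer, slower blinks suggest drowsiness)
    heat_loss:         normalized 0..1 heat-loss estimate from the IR sensor
    """
    score = 0.0
    score += min(blink_duration_ms / 400.0, 1.0)       # droopy, long blinks
    score += max(0.0, (15.0 - blinks_per_min) / 15.0)  # slowing blink rate
    score += heat_loss                                 # faster body heat loss
    # Map the 0..3 score onto the five buckets.
    return LEVELS[min(int(score / 3.0 * 5), 4)]

print(drowsiness_level(120, 18, 0.1))  # not drowsy
print(drowsiness_level(380, 5, 0.9))   # seriously drowsy
```

A real system would of course learn these weightings from labeled video rather than hard-code them, but the shape of the mapping, from continuous sensor features to a small set of intervention levels, is the same.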

See the article here:

Panasonic AI senses drowsy drivers and cranks up the AC - Engadget

Posted in Ai | Comments Off on Panasonic AI senses drowsy drivers and cranks up the AC – Engadget

Facebook Shut Down An Artificial Intelligence Program That …

Posted: at 6:18 pm


Facebook might have accidentally gotten a little closer to answering Philip K. Dick's 1968 question of whether androids dream of electric sheep. The social media giant just shut down an artificial intelligence program after it developed its own language, leaving researchers trying to figure out what the two AIs were talking about. The AIs had found a way to negotiate with one another, but their debate used English words reduced to a more logical structure that made more sense to the computers than to their human observers. What at first looked like an unintelligible failure to teach the AIs to talk was instead revealed to be a result of the computers' reward systems prizing efficiency over poetry.

There are plenty of computer languages developed by humans to help computers follow human instructions: BASIC, C, C++, COBOL, FORTRAN, Ada, Pascal, and more. And then there is TCP/IP, which helps machines communicate with one another across computer networks. But those are all linguistic metaphors used to describe electronic functions, rather than the vocabulary we need to discuss the huge leap forward an artificial intelligence developed by Facebook recently made. The goal was ultimately to develop an AI that could communicate with humans, but the research took a left turn when the computers instead learned to communicate with one another in a way that locked humans out by not following the rules of English.

For example, two computers negotiating who got a certain number of balls had a conversation that went like this:

Bob: i can i i everything else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i everything else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Though it looks primitive and a little nonsensical, at its heart this isn't so different from the way the English language evolves through human use. Think, for example, of how short-form electronic communication like texting and Twitter has led to abbreviations and the elimination of articles that might get you docked for bad grammar in class but are quicker to write and read in common use. Or think of phrases like "baby mamma" that developed to distill the complexities and subtleties of different relationships into a single turn of phrase that can efficiently convey connections and identities.

Eventually researchers worked out what was going on and shut down the program. There are obvious concerns with learning computers developing languages that outpace our own abilities to translate and follow their inherent logic. Not to mention that Facebook never designed its AI to be a vanguard of linguistic evolution; it just wants its platform to talk to users in a clear-cut way. But what it stumbled on could prove very helpful to the next generation of linguists working on the cybernetic frontier.

(Via Digital Journal & The Atlantic)

Read the original here:

Facebook Shut Down An Artificial Intelligence Program That ...

Posted in Artificial Intelligence | Comments Off on Facebook Shut Down An Artificial Intelligence Program That …

Sad Songs, Artificial Intelligence and Gracenote’s Quest to Unlock the World’s Music – Variety

Posted: at 6:18 pm

It's all about that vibe. Anyone who has ever compiled a mix-tape, or a Spotify playlist for that matter, knows that compilations succeed when they carry a certain emotional quality across their songs.

That's why the music data specialists at Gracenote have long been classifying the world's music by moods and emotions. Only, Gracenote's team hasn't actually listened to each and every one of the 100 million individual song recordings in its database. Instead, it has taught computers to detect emotions, using machine listening and artificial intelligence (AI) to figure out whether a song is dreamy, sultry, or just plain sad.

"Machine learning is a real strategic edge for us," said Gracenote's GM of music Brian Hamilton during a recent interview.

Gracenote began its work on what it calls "sonic mood classification" about 10 years ago. Over time, that work has evolved, as more traditional algorithms were switched out for cutting-edge neural networks. And quietly, it has become one of the best examples of the music industry's increasing reliance on artificial intelligence.

First things first: AI doesn't know how you feel. "We don't know which effect a musical work will have on an individual listener," said Gracenote's VP of research Markus Cremer during an interview with Variety. Instead, the company is trying to identify the intention of the musician as a kind of inherent emotional quality. In other words: it wants to teach computers which songs are truly sad, not which song may make you feel blue because of some heartbreak in your teenage years.

Still, teaching computers to identify emotions in music is a bit like therapy: first, you name your feelings. Gracenote's music team initially developed a taxonomy of more than 100 vibes and moods, and has since expanded that list to more than 400 such emotional qualities.

Some of these include obvious categories like "sultry" and "sassy," but there are also extremely specific descriptors like "dreamy sensual," "gentle bittersweet," and "desperate rabid energy." New categories are constantly being added, while others are fine-tuned based on how well the system performs. "It's sort of an iterative process," explained Gracenote's head of content architecture and discovery Peter DiMaria. "The taxonomy morphs and evolves."

In addition to this list of moods, Gracenote also uses a so-called training set for its machine learning efforts. The company's music experts have picked and classified some 40,000 songs as examples for these categories. Compiling that training set is an art of its own. "We need to make sure that we give it examples of music that people are listening to," said DiMaria. At the same time, songs have to be the best possible example for any given emotion. "Some tracks are a little ambiguous," he said.

The current training set includes Lady Gaga's "Lovegame" as an example of a "sexy stomper," Radiohead's "Pyramid Song" as "plaintive," and Beyonce's "Me Myself & I" as an example of "soft sensual & intimate."

Just like the list of emotions itself, that training set needs to be kept fresh constantly. "Artists are creating new types of musical expressions all the time," said DiMaria. "We need to make sure the system has heard those." Especially quickly evolving genres like electronica and hip-hop require frequent updates.

Once the system has been trained with these songs, it is let loose on millions of tracks. But computers don't simply listen to long playlists of songs, one by one. Instead, Gracenote's system cuts up each track into 700-millisecond slices, and then extracts some 170 different acoustic values, like timbre, from each slice.

In addition, it sometimes takes larger chunks of a song to analyze features like rhythm. Those values are then compared against existing data to classify each song. The result isn't just a single mood, but a mood profile.
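The slicing step described above can be sketched in a few lines. This is a toy illustration under assumed parameters: it cuts a signal into non-overlapping 700-millisecond frames and computes two simple per-frame values (RMS energy and spectral centroid) as stand-ins for Gracenote's roughly 170 proprietary descriptors, which are not public.

```python
# Toy sketch of the slicing step: cut a signal into non-overlapping
# 700 ms frames and compute a couple of per-frame acoustic values.
# RMS energy and spectral centroid are illustrative stand-ins for
# Gracenote's actual (unpublished) feature set.
import numpy as np

def frame_features(signal, sample_rate, frame_ms=700):
    """Yield (rms, spectral_centroid_hz) for each full frame."""
    frame_len = int(sample_rate * frame_ms / 1000)
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[start:start + frame_len]
        rms = float(np.sqrt(np.mean(frame ** 2)))
        spectrum = np.abs(np.fft.rfft(frame))
        freqs = np.fft.rfftfreq(frame_len, d=1.0 / sample_rate)
        centroid = float((spectrum * freqs).sum() / (spectrum.sum() + 1e-12))
        yield rms, centroid

# Two seconds of a 440 Hz tone at 8 kHz gives two full 700 ms frames;
# the trailing remainder is dropped.
sr = 8000
t = np.arange(2 * sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
feats = list(frame_features(tone, sr))
print(len(feats), round(feats[0][1]))  # 2 440
```

A mood classifier would then feed per-frame vectors like these into a model trained on the labeled example songs, producing the mood profile the article describes rather than a single label.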

All the while, Gracenote's team has to periodically make sure that things don't go wrong. "A musical mix is a pretty complex thing," explained Cremer.

With instruments, vocals, and effects layered on top of each other, and the result optimized for car stereos or internet streaming, there is a lot for a computer to listen to, including things that aren't actually part of the music.

"It can capture a lot of different things," said Cremer. Unsupervised, Gracenote's system could, for example, decide to pay attention to compression artifacts and match them to moods, with Cremer joking that the system may decide: "It's all 96 kbps, so this makes me sad."

Once Gracenote has classified music by moods, it delivers that data to customers, which use it in a number of different ways. Smaller media services often license Gracenote's music data as their end-to-end solution for organizing and recommending music. Media center app maker Plex, for example, uses the company's music recommendation technology to offer its customers personalized playlists and something the company calls "mood radio." Plex users can pick a mood like "gentle bittersweet," press play, and then wait for Mazzy Star to do its thing.

Gracenote also delivers its data to some of the industry's biggest music service operators, including Apple and Spotify. These big players typically don't like to talk about how they use Gracenote's data in their products. Bigger streaming services generally tend to operate their own music recommendation algorithms, but they often still make use of Gracenote's mood data to train and improve those algorithms, or to help human curators pre-select songs that are then turned into playlists.

This means that some music fans may be acutely aware of Gracenote's mood classification work, while others may have no idea that the company's AI technology has helped to improve their music listening experience.

Either way, Gracenote has to make sure that its data translates internationally, especially as it licenses it into new markets. On Tuesday, the company announced that it will begin to sell its music data product, which among other things includes mood classification as well as descriptive, cleaned-up metadata for cataloging music, in Europe and Latin America. To make sure that nothing is lost in translation, the company employs international editors who not only translate a word like "sentimental," but actually listen to example songs to figure out which expression works best in their cultural context.

And the international focus goes both ways. Gracenote is also constantly scouring the globe to feed its training set with new, international sounds. "Our data can work with every last recording on the planet," said Cremer.

In the end, classifying all of the world's music is really only possible if companies like Gracenote rely not just on humans, but also on artificial intelligence and technologies like machine listening. And in many ways, teaching computers to detect sad songs can actually help humans have a better and more fulfilling music experience, if only because relying on humans alone would have left many millions of songs unclassified, and thus out of reach for the personalized playlists of their favorite music services.

Using data and technology to unlock these songs from all over the world has been one of the most exciting parts of his job, said Cremer: "The reason I'm here is to make sure that everyone has access to all of that music."

More here:

Sad Songs, Artificial Intelligence and Gracenote's Quest to Unlock the World's Music - Variety

Posted in Artificial Intelligence | Comments Off on Sad Songs, Artificial Intelligence and Gracenote’s Quest to Unlock the World’s Music – Variety

Amber Rudd urges online giants to use artificial intelligence to block extremist material being uploaded to internet – Telegraph.co.uk

Posted: at 6:18 pm

Amber Rudd is urging online giants to take preemptive action using artificial intelligence to stop extremist material from being uploaded to the internet.

The Home Secretary said companies needed to use technological advancements to block inappropriate content from being shared on the web in the first place, as she advocated a shift away from ministers having to ask for material to be taken down.

Ms Rudd will challenge the likes of Facebook, Twitter, Microsoft and Google to do more to tackle extremist content as she attends the inaugural meeting of the Global Internet Forum to Counter Terrorism in Silicon Valley on Tuesday.

Meanwhile, Ms Rudd has urged online messaging services such as WhatsApp to stop using "unbreakable" encryption because of fears that it only benefits terrorists.

Read this article:

Amber Rudd urges online giants to use artificial intelligence to block extremist material being uploaded to internet - Telegraph.co.uk

Posted in Artificial Intelligence | Comments Off on Amber Rudd urges online giants to use artificial intelligence to block extremist material being uploaded to internet – Telegraph.co.uk