Daily Archives: July 8, 2017

Is government ready for AI? – FCW.com

Posted: July 8, 2017 at 4:15 am


Artificial intelligence is helping the Army keep its Stryker armored vehicles in fighting shape.

Army officials are using IBM's Watson AI system in combination with onboard sensor data, repair manuals and 15 years of maintenance data to predict mechanical problems before they happen. IBM and the Army's Redstone Arsenal post in Alabama demonstrated Watson's abilities on 350 Stryker vehicles during a field test that began in mid-2016.

The Army is now reviewing the results of that test to evaluate Watsons ability to assist human mechanics, and the early insights are encouraging.

The Watson AI enabled the pilot program's leaders to create "the equivalent of a personalized medicine plan" for each of the vehicles tested, said Sam Gordy, general manager of IBM U.S. Federal. Watson was able to tell mechanics that "you need to go replace this [part] now because if you don't, it's going to break when this vehicle is out on patrol," he added.
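IBM has not published the pilot's model, but the basic pattern, training a classifier on sensor readings and maintenance history to flag parts likely to fail, can be sketched in a few lines. A toy illustration with invented feature names and synthetic data, not the Army/IBM system itself:

```python
# Hypothetical sketch of failure prediction from sensor + maintenance data.
# Feature names, thresholds, and data are invented for illustration; the
# Army/IBM pilot's actual model and data are not public.
from sklearn.ensemble import RandomForestClassifier
import numpy as np

rng = np.random.default_rng(0)
# Columns: engine_hours, avg_vibration, coolant_temp, miles_since_service
X = rng.normal(loc=[2000, 0.5, 90, 1500], scale=[800, 0.2, 10, 700], size=(500, 4))
# Synthetic label: did the part fail within the next 30 days?
y = (X[:, 1] > 0.6) & (X[:, 3] > 1800)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

vehicle = [[2600, 0.72, 95, 2100]]          # one vehicle's current readings
risk = model.predict_proba(vehicle)[0, 1]   # probability of near-term failure
if risk > 0.8:
    print(f"Replace part before patrol (failure risk {risk:.0%})")
```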

The Army is one of a handful of early adopters in the federal government, and several other agencies are looking into using AI, machine learning and related technologies. AI experts cite dozens of potential government uses, including cognitive chatbots that answer common questions from the public and complex AIs that search for patterns that could signal Medicaid fraud, tax cheating or criminal activity.

"There are, for lack of a better number, a gazillion sweet spots for AI in government," said Daniel Enthoven, business development manager at Domino Data Lab, a vendor of AI and data science collaboration tools.

Still, many agencies will need to answer some difficult questions before they embrace AI, machine learning and autonomous systems. For instance, how will the agencies audit decisions made by intelligent systems? How will they gather data from often disparate sources to fuel intelligent decisions? And how will agencies manage their employees when AI systems take over tasks previously performed by humans?

Intelligence agencies are using Watson to comb through piles of data and provide predictive analysis, and the Census Bureau is considering using the supercomputer-powered AI as a first-line call center that would answer people's questions about the 2020 census, Gordy said.

A Census Bureau spokesperson added that the AI virtual assistant could improve response times and enhance caller interactions.

Using AI should save the bureau money "because you have a computer doing this instead of people," Gordy said. And if trained correctly, the system will provide more accurate answers than a group of call-center workers could.

"You train Watson once, and it understands everything," he said. "You're getting a very consistent answer, time after time after time."

For many agencies, however, it's still early in the AI adoption cycle. Use of the technology is "very, very nascent" in government, said William Eggers, executive director of Deloitte's Center for Government Insights and co-author of a recent study on AI in government. "If it was a nine-inning [baseball] game, we're probably in the first inning right now."

He added that over the next couple of years, agencies can expect to see AI-like functionality being incorporated into the software products marketed to them.

The first step for many civilian agencies appears to be using AI as a chatbot or telephone agent. Daniel Castro, vice president of the Information Technology and Innovation Foundation, said intelligent agents should be able to answer about 90 percent of the questions agencies receive, and the people asking those questions aren't likely to miss having a human response.

"It's not like people are expecting to know their IRS agents when they call them up with a question," he said.

The General Services Administration's Emerging Citizen Technology program launched an open-source pilot project in April to help federal agencies make their information available to intelligent personal assistants such as Amazon's Alexa, Google's Assistant and Microsoft's Cortana. More than two dozen agencies, including the departments of Energy, Homeland Security and Transportation, are participating.

Many vendors and other technology experts see huge opportunities for AI inside and outside government. In June, an IDC study sponsored by Salesforce predicted that AI adoption will ramp up quickly in the next four years. AI-powered customer relationship management activities will add $1.1 trillion to business revenue and create more than 800,000 jobs from 2017 to 2021, the study states.

In the federal government, using AI to automate tasks now performed by employees would save at least 96.7 million working hours a year, a cost savings of $3.3 billion, according to the Deloitte study. Based on the high end of Deloitte's estimates, AI adoption could save as many as 1.2 billion working hours and $41.1 billion every year.
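As a quick consistency check (ours, not the study's), both ends of Deloitte's range imply roughly the same value per working hour, about $34:

```python
# Back-of-the-envelope check on the Deloitte figures quoted above.
low = 3.3e9 / 96.7e6     # ~ $34.13 per hour saved (low-end estimate)
high = 41.1e9 / 1.2e9    # ~ $34.25 per hour saved (high-end estimate)
print(f"implied hourly cost: ${low:.2f} (low), ${high:.2f} (high)")
```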

"AI-based applications can reduce backlogs, cut costs, overcome resource constraints, free workers from mundane tasks, improve the accuracy of projections, inject intelligence into scores of processes and systems, and handle many other tasks humans can't easily do on our own, such as sifting through millions of documents in real time for the most relevant content," the report states.

Although some might fear a robot takeover, Eggers said federal workers should not worry about their jobs in the near term. Although there's likely to be pressure from lawmakers to use AI to reduce the government's headcount, agencies should look at AI as a way to supplement employees' work and allow them to focus on more creative and difficult tasks, he added.

Read the original post:

Is government ready for AI? - FCW.com

Posted in Ai | Comments Off on Is government ready for AI? – FCW.com

Josh.ai raises $11 million for a premium home automation system … – TechCrunch

Posted: at 4:14 am

One of the promises of voice-based computing is the ability to make home automation simpler, something that major tech companies, including Amazon, Apple and Google, are now tackling with their own voice assistants and smart speakers. But their solutions are still somewhat clunky, both in terms of the software interface for configuring your smart home and the voice commands you use to take actions. That's where the startup Josh.ai comes in.

The company has now raised $11 million to design a better voice-controlled system for smart homes, and will later this year release its own hardware dedicated to this purpose.

Headquartered in Denver with offices in L.A., Josh.ai is the product of serial entrepreneurs Alex Capecelatro, CEO, and Tim Gill, CTO. The two previously worked together on a social recommendations app, Yeti, which had begun its life as At The Pool and was sold back in 2015. Gill, who had previously founded and sold Quark (QuarkXPress), had joined Yeti as a technical advisor and wrote a number of the algorithms used in the app.

Following the sale of Yeti, the two teamed up again to work on a project in the smart home space, something they were both interested in for personal reasons.

Gill, for example, had spent years developing his own home automation system, his version of Mark Zuckerberg's Jarvis, to run inside the large residential property he was building in Denver.

"He was well underway in building the house and understanding what the competition looked like, what the product offerings looked like," explains Capecelatro. "And he was pretty dissatisfied with what was out there."

Meanwhile, Capecelatro was also building a home for himself in L.A., and running into the same problems.

"I was just amazed that all of the big automation systems, Crestron, Control4, and Savant, they cost hundreds of thousands of dollars, and the [user interface] looks like it's from the '90s," he says. "It was weird that for a ton of money in my home, where you want to have a delightful experience, the best offerings on the table just weren't that good."

The founders saw a need in the market for something that sits above mass market solutions, like Apple's Home app or Alexa's smart home control, which focus more on tying together after-market devices, like security cameras, smart doorbells, or smart lights like Philips Hue.

They founded the startup Josh.ai in March 2015, and shipped the first product the following year.

The solution, as it exists today, includes a kit with a Mac mini and iPad, and software that runs the home. After plugging in the Mac, Josh.ai auto-discovers devices on the network. It can identify those from over 50 manufacturers. For example, it can control lighting and shades like those from Lutron, music systems like Sonos, dozens of brands of security cameras, Nest thermostats, Samsung smart TVs, and even more niche products like Global Caché's box for controlling IR devices (such as your not-so-smart TVs).
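TechCrunch doesn't say how the auto-discovery works, but many of the brands listed (Sonos, Hue bridges, smart TVs) announce themselves over standard protocols such as SSDP/UPnP. A minimal sketch of that general mechanism using only Python's standard library; this is not Josh.ai's actual discovery code:

```python
# Minimal SSDP (UPnP) discovery sketch -- one common way hubs find devices
# like Sonos speakers or Hue bridges on the local network. Josh.ai's real
# discovery stack is not public; this only illustrates the mechanism.
import socket

MSEARCH = (
    "M-SEARCH * HTTP/1.1\r\n"
    "HOST: 239.255.255.250:1900\r\n"
    'MAN: "ssdp:discover"\r\n'
    "MX: 2\r\n"
    "ST: ssdp:all\r\n\r\n"
).encode()

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.settimeout(3)
sock.sendto(MSEARCH, ("239.255.255.250", 1900))

try:
    while True:
        data, addr = sock.recvfrom(4096)
        # Each responder announces its address, location, and service type.
        print(addr[0], data.decode(errors="replace").splitlines()[0])
except socket.timeout:
    pass  # discovery window closed
```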

The automatic speech recognition (aka speech-to-text) portion of Josh.ai's system is handled in the cloud, while the Mac mini handles the natural language processing to work out what your commands mean.

What makes Josh.ai unique is not just its software interface, but how users interact with the system. You speak to the voice assistant, Josh, to tell the home what to do. (You can also change its name if that's an issue, or even pick from a variety of male and female voices and accents.)

"Josh," or the wake word you've chosen, precedes your command, which can be spoken using more natural language. The system is better than many when it comes to interpreting what you mean, by nature of its single-purpose focus on home automation.

For instance, you can tell Josh to "turn it off," and it will know what you mean because it remembers what it had turned on before. Or you can say, "it's hot in here," and Josh will know how to adjust your thermostat.

It can also deep-link to streaming video content, so you can ask to watch "Planet Earth," and Josh will turn on the TV, switch to the right input, launch Netflix, then start playing the show.

Josh.ai supports scenes, as well, allowing you to configure a number of devices to work together, like lights, shades, music, fans, thermostats, and other switches. That way, you can say things like "turn everything off," and Josh knows to shut down all the connected devices in the home.

Where the system gets really smart is in its ability to handle complex, compound commands, meaning controlling multiple devices in one sentence.

You can say to Josh, "play Simon and Garfunkel and turn on the lights," for example. Or, "play Explosions in the Sky in the kitchen, and play Simon and Garfunkel in the living room." Other systems could get tripped up by the "and" and the "in the" in the artists' names, but Josh.ai understands when those words are a break between two commands and when they're part of something else.
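One plausible way to avoid that trap is to match known entity names (artists, rooms, device names) before splitting on conjunctions. A toy sketch of the idea; the entity list and logic are invented, not Josh.ai's actual parser:

```python
# Toy compound-command splitter. Protect known entities (artist names, rooms)
# so that "and" or "in the" inside them isn't treated as a command boundary.
# Entirely hypothetical -- Josh.ai's real NLP is not public.
KNOWN_ENTITIES = ["simon and garfunkel", "explosions in the sky",
                  "living room", "kitchen"]

def split_commands(utterance: str) -> list[str]:
    text = utterance.lower()
    slots = {}
    # Replace each known entity with a placeholder token first.
    for i, ent in enumerate(sorted(KNOWN_ENTITIES, key=len, reverse=True)):
        token = f"<ent{i}>"
        if ent in text:
            text = text.replace(ent, token)
            slots[token] = ent
    # Now "and" can only be a boundary between commands.
    parts = [p.strip() for p in text.split(" and ")]
    # Restore the protected entities.
    for token, ent in slots.items():
        parts = [p.replace(token, ent) for p in parts]
    return parts

print(split_commands("Play Simon and Garfunkel and turn on the lights"))
# -> ['play simon and garfunkel', 'turn on the lights']
```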

The current system, which was largely designed for high-end homes, is sold by professional integrators at around $10,000 and up, depending on the components involved. To date, the team has sold more than 50 and fewer than 100 installations.

Josh.ai can work over your Echo or Google Home, if you prefer, and includes interfaces for iOS, Android and the web. But the company is now preparing to launch its own far-field mic solution in a new hardware device that's built specifically for use in the home.

While the new hardware will perform some basic virtual assistant tasks, telling you the weather, perhaps (the company isn't confirming specific features at this time), the main focus will be on home automation.

Above: a tease of the new device

The hardware won't be a cylindrical shape like Echo or Google Home, but will be designed with aesthetic appeal in mind.

It also won't be super cheap.

"It will still be a premium product, but it will be a lot less than where the current product is. And the idea is this will enable our mass market rollout in probably a year to eighteen months," notes Capecelatro, speaking of his plan to keep bringing Josh.ai's technology to ever larger audiences.

Josh.ai, a team of 15, soon to be 25, recently closed on $8 million in new funding, largely from the founders' personal networks. The investors' names aren't being disclosed because they're not institutional firms. To date, Josh.ai has raised $11 million, but has not yet added anyone to its board.

Read the original here:

Josh.ai raises $11 million for a premium home automation system ... - TechCrunch

Posted in Ai | Comments Off on Josh.ai raises $11 million for a premium home automation system … – TechCrunch

Google is helping fund AI news writers in the UK and Ireland – The Verge

Posted: at 4:14 am

Google is giving the Press Association news agency a grant of 706,000 ($806,000) to start writing stories with the help of artificial intelligence. The money is coming out of the tech giant's Digital News Initiative fund, which supports digital journalism in Europe. The PA supplies news stories to media outlets all over the UK and Ireland, and will be working with a startup named Urbs Media to produce 30,000 local stories a month with the help of AI.

The editor-in-chief of the Press Association, Peter Clifton, explained to The Guardian that the AI articles will be the product of collaboration with human journalists. Writers will create detailed story templates for topics like crime, health, and unemployment, and Urbs Media's Radar tool (it stands for Reporters And Data And Robots) will fill in the blanks and help localize each article. This sort of workflow has been used by media outlets for years; the Los Angeles Times, for example, has used AI to write news stories about earthquakes since 2014.
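That template-driven workflow is easy to sketch: a journalist writes the frame, and software fills the slots from a structured data source for each locality. A minimal illustration; the template and figures below are invented, not Radar's:

```python
# Minimal template-filling sketch of the Radar-style workflow described
# above. The story template and all figures are invented for illustration.
TEMPLATE = (
    "Unemployment in {area} {direction} to {rate:.1f}% in {month}, "
    "{comparison} the national average of {national:.1f}%."
)

def localize(row: dict, national: float) -> str:
    direction = "fell" if row["rate"] < row["prev_rate"] else "rose"
    comparison = "below" if row["rate"] < national else "above"
    return TEMPLATE.format(direction=direction, comparison=comparison,
                           national=national, **row)

data = [
    {"area": "Leeds", "rate": 4.2, "prev_rate": 4.5, "month": "June"},
    {"area": "Cardiff", "rate": 5.1, "prev_rate": 4.9, "month": "June"},
]
for row in data:
    print(localize(row, national=4.5))
```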

"Skilled human journalists will still be vital in the process," said Clifton, "but Radar allows us to harness artificial intelligence to scale up to a volume of local stories that would be impossible to provide manually."

The money from Google will also be used to make tools for scraping information from public databases in the UK, like those generated by local councils and the National Health Service. The Radar software will also auto-generate graphics for stories, as well as add relevant videos and pictures. The software will start being used from the beginning of next year.

Some reporters in the UK, though, are skeptical about the new scheme. Tim Dawson, president of the National Union of Journalists, told The Guardian: "The real problem in the media is too little bona fide reporting. I don't believe that computer whizzbangery is going to replace that. What I'm worried about in my capacity as president of the NUJ is something that ends up with third-rate stories which look as if they are something exciting, but are computer-generated so [news organizations] can get rid of even more reporters."

Visit link:

Google is helping fund AI news writers in the UK and Ireland - The Verge

Posted in Ai | Comments Off on Google is helping fund AI news writers in the UK and Ireland – The Verge

TrueFace.AI busts facial recognition imposters – Mashable

Posted: at 4:14 am


The company originally created Chui in 2014 to work with customized smart homes. Then they realized clients were using it more for security purposes, and TrueFace.AI was born. Shaun Moore, one of the creators of TrueFace.AI, gave us some more insight ...


See more here:

TrueFace.AI busts facial recognition imposters - Mashable

Posted in Ai | Comments Off on TrueFace.AI busts facial recognition imposters – Mashable

Google DeepMind teams with Open AI to prevent a robot uprising – Engadget

Posted: at 4:14 am

Google DeepMind and OpenAI, a lab partially funded by Elon Musk, released a research article outlining a new method of machine learning that takes its cues from humans when learning new tasks. This could be safer than allowing an AI to figure out how to solve a problem on its own, which has the potential to introduce unwelcome surprises.

The main problem the research tackled is what happens when an AI discovers that the most efficient way to achieve maximum rewards is to cheat -- the equivalent of shoving everything on the floor of your room into a closet and declaring it "clean." Technically, the room itself is clean, but that's not what's supposed to happen. Machines are able to find these workarounds and exploit them in any given problem.

The issue is with the reward system, and that's where the two groups focused their efforts. Rather than crafting an overly complex reward system that machines can cut through, the teams used human input to reward the AI. When the AI solved a problem the way trainers wanted, it got positive feedback. Using this method, the AI was able to learn to play simple video games.
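The underlying method, learning a reward signal from human judgments between two behaviors, can be caricatured in a few lines. A heavily simplified toy, not the DeepMind/OpenAI implementation (which fits a neural reward model from trajectory comparisons):

```python
# Toy sketch of learning from human feedback: a "trainer" compares two
# behaviors and the preferred one is reinforced. The names and scoring
# scheme here are invented; the real method trains a reward model.
import random

actions = ["tidy_room", "shove_into_closet"]   # honest vs. cheating strategy
scores = {a: 0.0 for a in actions}

def human_prefers(a: str, b: str) -> str:
    # Stand-in for a real human judgment: the trainer always prefers
    # genuine tidying over hiding the mess in the closet.
    return a if a == "tidy_room" else b

for _ in range(100):
    a, b = random.sample(actions, 2)
    winner = human_prefers(a, b)
    loser = b if winner == a else a
    scores[winner] += 0.1    # positive feedback
    scores[loser] -= 0.1

print(max(scores, key=scores.get))  # -> tidy_room
```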

While this is an encouraging breakthrough, it's not widely applicable: This type of human feedback is much too time-consuming. But through collaborations like this, it's possible that we can control and direct the development of AI and prevent machines from eventually becoming smart enough to destroy us all.

Read the rest here:

Google DeepMind teams with Open AI to prevent a robot uprising - Engadget

Posted in Ai | Comments Off on Google DeepMind teams with Open AI to prevent a robot uprising – Engadget

Why artificial intelligence is far too human – The Boston Globe

Posted: at 4:14 am

[Illustration: Lucy Naland for The Boston Globe]

Have you ever wondered how the Waze app knows the shortcuts in your neighborhood better than you do? It's because Waze acts like a superhuman air traffic controller: it measures distance and traffic patterns, listens to feedback from drivers, and compiles massive data sets to get you to your location as quickly as possible.

Even as we grow more reliant on these kinds of innovations, we still want assurances that we're in charge, because we still believe our humanity elevates us above computers. Movies such as 2001: A Space Odyssey and the Terminator franchise teach us to fear computers programmed without any understanding of humanity; when a human sobs, Arnold Schwarzenegger's robotic character asks, "What's wrong with your eyes?" They always end with the machines turning on their makers.


What most people don't know is that artificial intelligence ethicists worry the opposite is happening: We are putting too much of ourselves, not too little, into the decision-making machines of our future.

God created humans in his own image, if you believe the scriptures. Now humans are hard at work scripting artificial intelligence in much the same way, in their own image. Indeed, today's AI can be just as biased and imperfect as the humans who engineer it. Perhaps even more so.


We already assign responsibility to artificial intelligence programs more widely than is commonly understood. People are diagnosed with diseases, kept in prison, hired for jobs, extended housing loans, and placed on terrorist watch lists, in part or in full, as a result of AI programs we've empowered to decide for us. Sure, humans might have the final word. But computers can control how the evidence is weighed.

And no one has asked you what you want.

That was by design. Automation was done in part to remove human bias from the equation. So why does a computer algorithm reviewing bank loans exhibit racial prejudice against applicants?

It turns out that algorithms, which are the building blocks of AI, acquire bias the same way that humans do: through instruction. In other words, they've got to be taught.


Computer models can learn by analyzing data sets for relationships. For example, if you want to train a computer to understand how words relate to each other, you can upload the entire English-language Web and let the machine assign relational values to words based on how often they appear next to other words; the closer together, the greater the value. Through this pattern recognition, the computer begins to paint a picture of what words mean.
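At toy scale, that training procedure amounts to counting co-occurrences and treating each word's counts as its vector. A minimal sketch of the counting step; real systems use billions of words plus dimensionality reduction (word2vec, GloVe, and the like):

```python
# Toy version of learning word relationships from co-occurrence, as
# described above. This only shows the counting step at miniature scale.
from collections import Counter, defaultdict

corpus = "the flower is pleasant . the insect is unpleasant .".split()
WINDOW = 2  # how many neighbors on each side count as "nearby"

cooc = defaultdict(Counter)
for i, word in enumerate(corpus):
    for j in range(max(0, i - WINDOW), min(len(corpus), i + WINDOW + 1)):
        if i != j:
            cooc[word][corpus[j]] += 1

print(cooc["flower"])  # words seen near "flower", with counts
```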

Teaching computers to think keeps getting easier. But there's a serious miseducation problem as well. While humans can be taught to differentiate between implicit and explicit bias, and recognize both in themselves, a machine simply follows a series of if-then statements. When those instructions reflect the biases and dubious assumptions of their creators, a computer will execute them faithfully while still looking superficially neutral. "What we have to stop doing is assuming things are objective and start assuming things are biased. Because that's what our actual evidence has been so far," says Cathy O'Neil, data scientist and author of the recent book "Weapons of Math Destruction."

As with humans, bias starts with the building blocks of socialization: language. The magazine Science recently reported on a study showing that implicit associations, including prejudices, are communicated through our language. "Language necessarily contains human biases, and the paradigm of training machine learning on language corpora means that AI will inevitably imbibe these biases as well," writes Arvind Narayanan, co-author of the study.

The scientists found that words like "flower" are more closely associated with pleasantness than "insect." Female words were more closely associated with the home and arts than with career, math, and science. Likewise, African-American names were more frequently associated with unpleasant terms than names more common among white people were.
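The study's measurement, a word-embedding association test, boils down to comparing cosine similarities between a target word and two attribute words. A simplified sketch with made-up three-dimensional vectors; real tests use pretrained embeddings with hundreds of dimensions and whole sets of attribute words:

```python
# Simplified word-embedding association test, after the Science study
# discussed above. The 3-d vectors are made up purely for illustration.
import numpy as np

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

emb = {
    "flower":     np.array([0.9, 0.1, 0.0]),
    "insect":     np.array([0.1, 0.9, 0.0]),
    "pleasant":   np.array([0.8, 0.2, 0.1]),
    "unpleasant": np.array([0.2, 0.8, 0.1]),
}

def association(word, attr_a, attr_b):
    # Positive => the word sits closer to attr_a than to attr_b.
    return cos(emb[word], emb[attr_a]) - cos(emb[word], emb[attr_b])

print(association("flower", "pleasant", "unpleasant"))  # > 0
print(association("insect", "pleasant", "unpleasant"))  # < 0
```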

This becomes an issue when job recruiting programs trained on language sets like this are used to select resumes for interviews. If the program associates African-American names with unpleasant characteristics, its algorithmic training will be more likely to select European-named candidates. Likewise, if the job-recruiting AI is told to search for strong leaders, it will be less likely to select women, because their names are associated with homemaking and mothering.

The scientists took their findings a step further and found a 90 percent correlation between how feminine or masculine a job title ranked in their word-embedding research and the actual number of men versus women employed in 50 different professions, according to Department of Labor statistics. The biases expressed in language relate directly to the roles we play in life.

"AI is just an extension of our culture," says co-author Joanna Bryson, a computer scientist at the University of Bath in the United Kingdom and Princeton University. "It's not that robots are evil. It's that the robots are just us."

Even AI giants like Google can't escape the impact of bias. In 2015, the company's facial recognition software tagged dark-skinned people as gorillas. Executives at FaceApp, a photo editing program, recently apologized for building an algorithm that whitened users' skin in their pictures. The company had dubbed it the "hotness" filter.

In these cases, the error grew from data sets that didn't have enough dark-skinned people, which limited the machine's ability to learn variation within darker skin tones. Typically, a programmer instructs a machine with a series of commands, and the computer follows along. But if the programmer tests the design only on his peer group, coworkers, and family, he has limited what the machine can learn and imbued it with whichever biases shape his own life.

Photo apps are one thing, but when their foundational algorithms creep into other areas of human interaction, the impacts can be as profound as they are lasting.

The faces of one in two adult Americans have been processed through facial recognition software. Law enforcement agencies across the country are using this gathered data with little oversight. Commercial facial-recognition algorithms have generally done a better job of telling white men apart than they do with women and people of other races, and law enforcement agencies offer few details indicating that their systems work substantially better. Our justice system has not decided if these sweeping programs constitute a search, which would restrict them under the Fourth Amendment. Law enforcement may end up making life-altering decisions based on biased investigatory tools with minimal safeguards.

Meanwhile, judges in almost every state are using algorithms to assist in decisions about bail, probation, sentencing, and parole. Massachusetts was sued several years ago because an algorithm it uses to predict recidivism among sex offenders didn't consider a convict's gender. Since women are less likely to reoffend, an algorithm that did not consider gender likely overestimated recidivism by female sex offenders. The intent of the scores was to replace human bias and increase efficiency in an overburdened judicial system. But, as journalist Julia Angwin reported in ProPublica, these algorithms are using biased questionnaires to come to their determinations and yielding flawed results.

A ProPublica study of the recidivism algorithm used in Fort Lauderdale found that 23.5 percent of white men were labeled as being at an elevated risk of getting into trouble again but didn't re-offend. Meanwhile, 44.9 percent of black men were labeled higher risk for future offenses but didn't re-offend, showing how these scores are inaccurate and favor white men.
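Numbers like those are group-wise false positive rates: among the people who did not re-offend, the share who were nonetheless labeled high risk. A sketch of the calculation, with invented counts chosen only to match the percentages quoted above:

```python
# How a group-wise false positive rate like ProPublica's is computed:
# of the people who did NOT re-offend, what share was labeled high risk?
# The raw counts below are invented for illustration, not ProPublica's data.
def false_positive_rate(high_risk_no_reoffense: int, no_reoffense_total: int) -> float:
    return high_risk_no_reoffense / no_reoffense_total

groups = {
    "white men": (235, 1000),   # -> 23.5%, matching the figure above
    "black men": (449, 1000),   # -> 44.9%, matching the figure above
}
for name, (fp, negatives) in groups.items():
    rate = false_positive_rate(fp, negatives)
    print(f"{name}: {rate:.1%} labeled high risk but did not re-offend")
```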

While the questionnaires don't ask specifically about skin color, data scientists say they back into race by asking questions like: "When was your first encounter with police?"

The assumption is that someone who comes in contact with police as a young teenager is more prone to criminal activity than someone who doesn't. But this hypothesis doesn't take into consideration that policing practices vary, and therefore so does the police's interaction with youth. If someone lives in an area where the police routinely stop and frisk people, he will be statistically more likely to have had an early encounter with the police. Stop-and-frisk is more common in urban areas, where African-Americans are more likely to live than whites. This measure doesn't calculate guilt or criminal tendencies, but becomes a penalty when AI calculates risk. In this example, the AI is not just computing for the individual's behavior; it is also considering the police's behavior.

"I've talked to prosecutors who say, 'Well, it's actually really handy to have these risk scores because you don't have to take responsibility if someone gets out on bail and they shoot someone. It's the machine, right?'" says Joi Ito, director of the Media Lab at MIT.

It's even easier to blame a computer when the guts of the machine are trade secrets. Building algorithms is big business, and suppliers guard their intellectual property tightly. Even when these algorithms are used in the public sphere, their inner workings are seldom open for inspection. "Unlike humans, these machine algorithms are much harder to interrogate because you don't actually know what they know," Ito says.

Whether such a process is fair is difficult to discern if a defendant doesn't know what went into the algorithm. With little transparency, there is limited ability to appeal the computer's conclusions. "The worst thing is the algorithms where we don't really even know what they've done and they're just selling it to police and they're claiming it's effective," says Bryson, co-author of the word-embedding study.

Most mathematicians understand that the algorithms should improve over time. As they're updated, they learn more if they're presented with the right data. In the end, the relatively few people who manage these algorithms have an enormous impact on the future. They control the decisions about who gets a loan, who gets a job, and, in turn, who can move up in society. And yet from the outside, the formulas that determine the trajectories of so many lives remain as inscrutable as the will of the divine.

Follow this link:

Why artificial intelligence is far too human - The Boston Globe

Posted in Artificial Intelligence | Comments Off on Why artificial intelligence is far too human – The Boston Globe

The AI revolution in science – Science Magazine

Posted: at 4:14 am

Just what do people mean by artificial intelligence (AI)? The term has never had clear boundaries. When it was introduced at a seminal 1956 workshop at Dartmouth College, it was taken broadly to mean making a machine behave "in ways that would be called intelligent if seen in a human." An important recent advance in AI has been machine learning, which shows up in technologies from spellcheck to self-driving cars and is often carried out by computer systems called neural networks. Any discussion of AI is likely to include other terms as well.

ALGORITHM A set of step-by-step instructions. Computer algorithms can be simple ("if it's 3 p.m., send a reminder") or complex ("identify pedestrians").

BACKPROPAGATION The way many neural nets learn. They find the difference between their output and the desired output, then adjust the calculations in reverse order of execution.
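For a single unit with one weight, the procedure in the entry above reduces to a few lines: compute the error against the desired output, then adjust the weight against its gradient. A toy numeric illustration (ours, not from the magazine):

```python
# One-unit illustration of the backpropagation idea: nudge the weight
# opposite the gradient of the squared error between output and target.
w, lr = 0.0, 0.1
x, target = 2.0, 6.0             # we want w * x == target, i.e. w -> 3

for step in range(50):
    output = w * x
    error = output - target      # difference from the desired output
    grad = 2 * error * x         # d(error**2)/dw, by the chain rule
    w -= lr * grad               # adjust in reverse of the computation

print(round(w, 3))  # -> 3.0
```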

BLACK BOX A description of some deep learning systems. They take an input and provide an output, but the calculations that occur in between are not easy for humans to interpret.

DEEP LEARNING How a neural network with multiple layers becomes sensitive to progressively more abstract patterns. In parsing a photo, layers might respond first to edges, then paws, then dogs.

EXPERT SYSTEM A form of AI that attempts to replicate a human's expertise in an area, such as medical diagnosis. It combines a knowledge base with a set of hand-coded rules for applying that knowledge. Machine-learning techniques are increasingly replacing hand coding.

GENERATIVE ADVERSARIAL NETWORKS A pair of jointly trained neural networks that generates realistic new data and improves through competition. One net creates new examples (fake Picassos, say) as the other tries to detect the fakes.

MACHINE LEARNING The use of algorithms that find patterns in data without explicit instruction. A system might learn how to associate features of inputs such as images with outputs such as labels.

NATURAL LANGUAGE PROCESSING A computer's attempt to understand spoken or written language. It must parse vocabulary, grammar, and intent, and allow for variation in language use. The process often involves machine learning.

NEURAL NETWORK A highly abstracted and simplified model of the human brain used in machine learning. A set of units receives pieces of an input (pixels in a photo, say), performs simple computations on them, and passes them on to the next layer of units. The final layer represents the answer.

NEUROMORPHIC CHIP A computer chip designed to act as a neural network. It can be analog, digital, or a combination.

PERCEPTRON An early type of neural network, developed in the 1950s. It received great hype but was then shown to have limitations, suppressing interest in neural nets for years.

REINFORCEMENT LEARNING A type of machine learning in which the algorithm learns by acting toward an abstract goal, such as "earn a high video game score" or "manage a factory efficiently." During training, each effort is evaluated based on its contribution toward the goal.

STRONG AI AI that is as smart and well-rounded as a human. Some say it's impossible. Current AI is "weak," or "narrow." It can play chess or drive but not both, and lacks common sense.

SUPERVISED LEARNING A type of machine learning in which the algorithm compares its outputs with the correct outputs during training. In unsupervised learning, the algorithm merely looks for patterns in a set of data.

TENSORFLOW A collection of software tools developed by Google for use in deep learning. It is open source, meaning anyone can use or improve it. Similar projects include Torch and Theano.

TRANSFER LEARNING A technique in machine learning in which an algorithm learns to perform one task, such as recognizing cars, and builds on that knowledge when learning a different but related task, such as recognizing cats.

TURING TEST A test of AI's ability to pass as human. In Alan Turing's original conception, an AI would be judged by its ability to converse through written text.

Link:

The AI revolution in science - Science Magazine

Posted in Artificial Intelligence | Comments Off on The AI revolution in science – Science Magazine

Artificial intelligence-based system warns when a gun appears in a video – Phys.Org

Posted: at 4:14 am

July 7, 2017 Credit: University of Granada

Scientists from the University of Granada (UGR) have designed a computer system based on new artificial intelligence techniques that automatically detects in real time when a subject in a video draws a gun.

Their work, pioneering on a global scale, has numerous practical applications, from improving security in airports and malls to automatically flagging violent content in which handguns appear in videos uploaded to social networks such as Facebook, YouTube or Twitter, or classifying public videos on the internet that contain handguns.

Francisco Herrera Triguero, Roberto Olmos and Siham Tabik, researchers in the Department of Computer Science and Artificial Intelligence at the UGR, developed this work. To ensure the proper functioning and efficiency of the model, the authors analyzed low-quality videos from YouTube and movies from the '90s such as Pulp Fiction, Mission: Impossible and James Bond films. The algorithm showed an effectiveness of over 96.5 percent and is capable of detecting guns with high precision, analyzing five frames per second in real time. When a handgun appears in the image, the system sends an alert in the form of a red box on the screen where the weapon is located.
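The paper linked below describes the detector itself; the surrounding alarm loop, sampling about five frames per second and overlaying a red box, might look roughly like this sketch. The detector call is a placeholder, not the UGR team's released code:

```python
# Sketch of the real-time alarm loop described above: sample ~5 frames/s,
# run a detector, draw a red box where a handgun is found. The detector
# function is a hypothetical stand-in for the trained deep model.
import cv2

def detect_handguns(frame):
    """Placeholder for the trained detector; returns [(x, y, w, h), ...]."""
    return []  # hypothetical: plug in the actual model's inference call

cap = cv2.VideoCapture("surveillance.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 25
step = max(1, int(fps // 5))        # analyze about five frames per second

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % step == 0:
        for (x, y, w, h) in detect_handguns(frame):
            # Red box alert (BGR color order in OpenCV)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
    frame_idx += 1
cap.release()
```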

A fast and inexpensive model

UGR full professor Francisco Herrera explained that the model can easily be combined with an alarm system and implemented inexpensively using video cameras and a computer with moderately high capacities.

Additionally, the system can be implemented in any area where video cameras can be placed, indoors or outdoors, and does not require direct human supervision.

Researcher Siham Tabik noted that deep learning models like this one represent a major breakthrough over the last five years in the detection, recognition and classification of objects in the field of computer vision.

A pioneering system

Until now, the principal weapon detection systems have been based on metal detection and are found in airports and at public events in enclosed spaces. Although these systems have the advantage of being able to detect a firearm even when it is hidden from sight, they unfortunately have several disadvantages.

Among these drawbacks is the fact that these systems can only control passage through a specific point (if the person carrying the weapon does not pass through this point, the system is useless); they also require the constant presence of a human operator and generate bottlenecks when there is a large flow of people. They also detect everyday metallic objects such as coins, belt buckles and mobile phones, which makes it necessary to use conveyor belts and X-ray scanners in combination with these systems, a setup that is both slow and expensive. In addition, these systems cannot detect weapons that are not made of metal, which are now possible because of 3-D printing.

For this reason, handgun detection through video cameras is a new complementary security system that is useful for areas with video surveillance.


More information: Automatic Handgun Detection Alarm in Videos Using Deep Learning. arxiv.org/abs/1702.05147



Go here to see the original:

Artificial intelligence-based system warns when a gun appears in a video - Phys.Org

Posted in Artificial Intelligence | Comments Off on Artificial intelligence-based system warns when a gun appears in a video – Phys.Org

Google gives journalists money to use artificial intelligence in reporting – The Hill

Posted: at 4:14 am

Google is giving British journalists more than 700,000 pounds to help them incorporate artificial intelligence into their work.

Google awarded the grant to The Press Association (PA), the national news agency for the United Kingdom and Ireland, and Urbs Media, a data-driven news startup. It's one of the largest grants handed out by Google's Digital News Initiative Innovation Fund.

The funding, announced on Thursday, will specifically go to Reporters And Data And Robots (RADAR), a news service that aims to create 30,000 local stories a month.

"Skilled human journalists will still be vital in the process, but RADAR allows us to harness artificial intelligence to scale up to a volume of local stories that would be impossible to provide manually," PA editor-in-chief Peter Clifton said in a statement.

The news organizations expressed optimism for development of their AI tools with the new grant.

"PA and Urbs Media are developing an end-to-end workflow to generate this large volume of news for local publishers across the UK and Ireland," they said in a release.

The funds will also help develop capabilities to auto-generate graphics and video to add to text-based stories, as well as related pictures. PA's distribution platforms will also be enhanced to make sure that all local outlets can find and use the large volume of localised news stories.

PA and Urbs Media's AI push is not the first time media outlets have taken advantage of the technology to supplement their reporting. Reporters at the Los Angeles Times have been working with AI since 2014 to assist them in writing and reporting stories about earthquakes.

"It saves people a lot of time, and for certain types of stories, it gets the information out there in usually about as good a way as anybody else would, then-Los AngelesTimes journalist Ken Schwencke, who wrote a program for automated earthquake reporting, told the BBC.

"The way I see it is, it doesn't eliminate anybody's job as much as it makes everybody's job more interesting."

Follow this link:

Google gives journalists money to use artificial intelligence in reporting - The Hill

Posted in Artificial Intelligence | Comments Off on Google gives journalists money to use artificial intelligence in reporting – The Hill

Karandish: Problems Artificial Intelligence must overcome – St. Louis Business Journal

Posted: at 4:14 am


It's graduation season, and Bill Gates recently said that Artificial Intelligence is among the top fields for 2017 graduates to enter. A quorum of business leaders and executives have echoed these sentiments. What problems and issues will these recent ...

Excerpt from:

Karandish: Problems Artificial Intelligence must overcome - St. Louis Business Journal

Posted in Artificial Intelligence | Comments Off on Karandish: Problems Artificial Intelligence must overcome – St. Louis Business Journal