Google Can Now Describe Your Cat Photos

Google's computers learned to recognize cats in photos. Now, they're learning to describe cats playing with a ball of string.

Computer scientists in the search giant's research division, and a separate team working at Stanford University, independently developed artificial-intelligence software that can decipher the action in a photo and write a caption to describe it. That's a big advance over previous software that was mostly limited to recognizing objects.

In a blog post, Google described how it is using advanced machine-learning techniques that mimic the human brain to recognize a photo of "a person riding a motorcycle on a dirt road" or "a herd of elephants walking across a dry grass field."

The new software can capture the whole scene and generate corresponding natural-looking text, says Yoshua Bengio, a professor of computer science at the University of Montreal and a leading expert in the field. That defies predictions that software would be limited to recognizing objects, he said.
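For readers curious how such a system is typically wired together, here is a toy sketch of the encoder-decoder idea: a convolutional network summarizes the image, and a recurrent network emits the caption one word at a time. This illustrates only the general architecture described above, not Google's or Stanford's actual model; the vocabulary, layer sizes, and implementation below are invented for the example, and the output is gibberish until the model is trained.

```python
# Toy image-captioning sketch (hypothetical, untrained): CNN encoder + LSTM decoder.
import torch
import torch.nn as nn

VOCAB = ["<start>", "<end>", "a", "cat", "plays", "with", "string"]

class ToyCaptioner(nn.Module):
    def __init__(self, vocab_size=len(VOCAB), embed_dim=64, hidden_dim=128):
        super().__init__()
        # Encoder: a tiny CNN that turns an image into one feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
        # Decoder: an LSTM that generates the caption word by word.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.to_vocab = nn.Linear(hidden_dim, vocab_size)

    def caption(self, image, max_words=10):
        feats = self.encoder(image).unsqueeze(1)   # (1, 1, embed_dim)
        _, state = self.lstm(feats)                # prime the decoder with the image
        word = torch.tensor([[VOCAB.index("<start>")]])
        out_words = []
        for _ in range(max_words):
            emb = self.embed(word)                 # embed the previous word
            output, state = self.lstm(emb, state)
            word = self.to_vocab(output).argmax(dim=-1)
            token = VOCAB[word.item()]
            if token == "<end>":
                break
            out_words.append(token)
        return " ".join(out_words)

if __name__ == "__main__":
    model = ToyCaptioner()
    fake_image = torch.randn(1, 3, 64, 64)         # stand-in for a photo
    with torch.no_grad():
        print(model.caption(fake_image))           # nonsense until trained on real captions
```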

The new technology could lead to big improvements in the accuracy of Google's image-search results, which today often rely on text found near a photo on a web page. One day it might help people search vast libraries of untagged photos or videos stored on smartphones, says David Bader, a professor of computer science at Georgia Tech. A startup called Viblio is using similar research out of Simon Fraser University to automatically categorize videos.

In 2012, a Google/Stanford team famously taught a computer to learn how to recognize cats. The computer was shown millions of images from YouTube videos, and used then-state-of-the-art machine-learning algorithms to teach itself to spot felines.

Similar advances are helping improve other Google services. Earlier this year, Google researchers disclosed how the company's computers had learned to read house numbers from images captured by its Street View cars, making it quicker and easier to locate buildings in Google Maps, for instance.

Google is making big bets on artificial-intelligence technology. Earlier this year, it paid hundreds of millions of dollars to acquire DeepMind Technologies, a London-based startup that employs many specialists in advanced machine learning. Earlier, it bought DNNResearch, a small company started at the University of Toronto, in order to hire a top academic in machine learning, Geoffrey Hinton.

Artificial-intelligence research also helps speech-recognition software, used by smartphone assistants like Apple's Siri or Google voice search.

Others also are investing in the field. Facebook scooped up a top artificial-intelligence academic late last year. Meanwhile, Chinese search engine Baidu has said it will invest $300 million in an artificial-intelligence lab in Silicon Valley. To lead the lab, Baidu hired the head of Stanford's artificial-intelligence lab, Andrew Ng, who helped build the computer that taught itself to recognize cats from YouTube videos.

See the original post:

Google Can Now Describe Your Cat Photos

Artificial Intelligence Creates Its Own Magic Tricks

November 18, 2014

Eric Hopton for redOrbit.com Your Universe Online

Have you ever wanted to be a magician? Here's your big chance, courtesy of a neat bit of artificial intelligence (AI) and a smartphone app. It won't exactly turn you into an instant David Blaine or Dynamo. It might not enable you to pull a rabbit out of a shiny top hat or saw your beautiful assistant in half. But you should be able to fool some of the people some of the time with some cool card tricks or a magic jigsaw puzzle.

Some people seem to have fun jobs and, if this "playing with magic" work is anything to go by, there are a bunch of scientists at Queen Mary University of London who probably can't wait to get to work every day. They have been working on a way to get a computer to produce its own variation on some familiar conjuring tricks.

The researchers fed information into a computer program about how a magic jigsaw puzzle and a mind-reading card trick work. They also built in the results of experiments into how humans understand magic tricks. The system then created completely new variants on those old magic tricks. Hey presto! New tricks for old ones, thanks to artificial intelligence!

The QMUL team say that the new tricks rely on the use of mathematical techniques rather than sleight of hand or other theatrics. Details of the research were published on Monday, November 17, in the journal Frontiers in Psychology. The magic puzzle was also on sale in a London magic shop, where it apparently proved very popular with working magicians. The card trick itself is available as an app called Phoney in the Google Play Store.

Howard Williams, co-creator of the project, described the way AI could help magicians come up with new ideas. "Computer intelligence can process much larger amounts of information and run through all the possible outcomes in a way that is almost impossible for a person to do on their own. So while a member of the audience might have seen a variation on this trick before, the AI can now use psychological and mathematical principles to create lots of different versions and keep audiences guessing," he said.

The computer also produced a magic jigsaw. Using what the QMUL team call a clever geometric principle, the trick involves taking a jigsaw apart and then reassembling it so that certain shapes have disappeared. This type of trick can involve highly complex calculations with many simultaneous factors, including the size of the puzzle, the number of pieces and shapes that appear and disappear, and the numerous possible ways the puzzle might be arranged. As the creators say, "Something this complex is ideal for an algorithm to process, and make decisions about which flexible factors are most important."
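As a rough illustration of that kind of search, the sketch below enumerates candidate puzzle configurations and keeps the one a simple heuristic likes best. The parameter ranges and the score_config() heuristic are invented for the example; the researchers' actual system encodes the geometric principle and their audience experiments in far more detail.

```python
# Hypothetical configuration search: enumerate puzzle parameters, score each, keep the best.
from itertools import product

def score_config(grid_size, pieces, vanishing_shapes):
    """Invented heuristic: favour puzzles big enough to hide the rearrangement
    but small enough for a spectator to take in at a glance."""
    complexity_penalty = abs(pieces - 12) + abs(grid_size - 6)
    surprise_bonus = 2 * vanishing_shapes
    return surprise_bonus - complexity_penalty

def best_configuration():
    candidates = product(range(4, 9),      # grid size
                         range(6, 21),     # number of pieces
                         range(1, 4))      # shapes that vanish
    return max(candidates, key=lambda c: score_config(*c))

if __name__ == "__main__":
    print(best_configuration())            # (6, 12, 3) under this toy heuristic
```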

The app looks like a lot of fun. It mimics the way a magician might perform a similar mind-reading card trick. But using the app means that the magician (that's you and me, of course) does not have to remember the order of the cards.

A deck of playing cards is arranged in a specific way. A few seemingly innocuous pieces of information are gathered from the subject, who is then asked to choose and identify a card from the deck. The Android app then reveals the card on a mobile phone screen. Because of the way the computer was used to arrange the decks, the correct card could be identified with the minimum of information. In fact, the program was able to suggest deck arrangements that, on average, needed one fewer question to identify the card than a traditional magician would need.
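The counting principle behind this is easy to see in miniature. The toy sketch below is not the researchers' actual deck search: it uses a hypothetical 32-card deck arranged so that the answers to five yes/no questions pin down a chosen card exactly, the information-theoretic minimum since two to the fifth power is 32. The real system instead searches for orderings of a full deck that shave questions off on average.

```python
# Toy illustration: a pre-arranged 32-card deck where five yes/no answers identify any card.
import itertools

RANKS = ["A", "2", "3", "4", "5", "6", "7", "8"]
SUITS = ["hearts", "spades", "diamonds", "clubs"]

# Hypothetical pre-arranged deck: position i holds the card whose indices spell i.
deck = [(r, s) for r, s in itertools.product(RANKS, SUITS)]

def questions_for(card):
    """Return the five yes/no answers that encode the card's position in binary."""
    index = deck.index(card)
    return [bool(index & (1 << bit)) for bit in range(5)]

def identify(answers):
    """Recover the card from the five answers alone."""
    index = sum(1 << bit for bit, yes in enumerate(answers) if yes)
    return deck[index]

if __name__ == "__main__":
    secret = ("7", "spades")
    answers = questions_for(secret)        # gathered from the spectator
    print(identify(answers) == secret)      # True: five answers suffice for 32 cards
```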

Read this article:

Artificial Intelligence Creates Its Own Magic Tricks

Computer uses artificial intelligence to create magic tricks

Researchers have taught a computer to create its own magic tricks
It came up with new twists on a mind-reading card trick, and a magic puzzle
Tricks are based on maths, not theatrics, using artificial intelligence
Computer's card trick can now be downloaded as a smartphone app

By Sarah Griffiths for MailOnline

Published: 10:32 EST, 17 November 2014 | Updated: 12:17 EST, 17 November 2014

David Blaine and Dynamo are considered among the world's best magicians, but they may soon have competition from machines.

Researchers have taught a computer to create its own magic tricks, including a mind-reading card illusion.

And what the computer lacks in creativity, it makes up for in logic, because the tricks were created from maths rather than theatrics, using artificial intelligence in this way for the first time.

Researchers have taught a computer to create its own magic tricks, including a mind-reading card illusion (pictured) which they have developed into an app called Phoney

Researchers at Queen Mary University of London gave a computer program guidelines on how a magic jigsaw puzzle and a mind-reading card trick work, as well as the results of experiments into how humans understand magic tricks.

From this, the intelligent system created new versions of the tricks, which could be performed by a magician.

Co-creator of the project, Howard Williams, said: "Computer intelligence can process much larger amounts of information and run through all the possible outcomes in a way that is almost impossible for a person to do on their own."

Go here to read the rest:

Computer uses artificial intelligence to create magic tricks

Artificial intelligence is now creating its own magic tricks


You might not have to be a professional magician to come up with clever tricks in the near future. Researchers at Queen Mary University of London have developed artificial intelligence that can create magic tricks (specifically, those based on math) all on its own. Once their program learns the basics of creating magic jigsaws and "mind reading" stunts, it can generate many variants of these tricks by itself. This could be particularly handy if you like to impress your friends on a regular basis -- you could show them a new card trick every time without having to do much work.

The best part? You can try some of these computer-generated tricks yourself. The 12 Magicians of Osiris magic jigsaw is available as a web pack, and you can download the Android component for one card trick, Phoney, from Google Play. Neither will give you as much satisfaction as developing tricks from scratch, but they're proof that computers can do more with math than solve equations.

Sources: QMUL, Frontiers

View post:

Artificial intelligence is now creating its own magic tricks

Researchers use artificial intelligence to create magic tricks

Brittany Hillen

A group of researchers at Queen Mary University of London has taken to creating magic tricks using artificial intelligence -- something they've made available for anyone who is interested over on the QMagic site. There are four tricks so far, including one that involves having a smartphone guess what a playing card is, and another that turns one's smartphone into a "crystal ball" that can read minds. Even better, the app-based tricks have been released in the Google Play Store for others to enjoy.

The featured card trick, for example, uses an app to guess which card out of a lineup the viewer chooses -- something, obviously, it does correctly. The setup behind it is simple. The app itself is expecting a certain card order, which the magician arranges beforehand.

Artificial intelligence has a benefit over human-made magic tricks, as explained by the project's co-creator Howard Williams. "Computer intelligence can process much larger amounts of information and run through all the possible outcomes in a way that is almost impossible for a person to do on their own." And because it can come up with so many arrangements for the deck, it requires less information from the viewer to make a correct guess.

In addition to creating the tricks using artificial intelligence, the researchers are also using the data gathered to amass research on "the psychology of being a spectator." Giving one example, researcher Peter McOwan said the team had thought maybe an audience would be "suspicious" of including technology as part of the trick, but that turned out to be incorrect.

With this artificial intelligence, researchers have also developed a clever jigsaw puzzle with disappearing shapes (download available via PDF), as well as the aforementioned Crystal Ball, the Subliminal Card trick, and the Phoney trick featured above.

SOURCE: io9

Excerpt from:

Researchers use artificial intelligence to create magic tricks

Elon Musk Predicts Catastrophic AI Event Will Occur Within 10 Years

November 18, 2014

Chuck Bednar for redOrbit.com Your Universe Online

Tesla CEO and SpaceX co-founder Elon Musk is once again sounding the alarm about artificial intelligence (AI), stating that the risk of "something seriously dangerous happening" could be as little as five years away.

According to re/code's John Paczkowski, Musk responded to a post about AI at the website Edge.org with a comment that has since been deleted. In it, Musk essentially said that research into artificial intelligence was moving forward so quickly that he predicted a negative event would occur within the next decade.

He also went on to defend himself from potential criticism by stating that his views were not a case of "crying wolf about something I don't understand," added James Cook of Business Insider. Musk added that he is not alone in thinking we should be worried. The leading AI companies recognize the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen.

Musk's representatives confirmed the authenticity of the comments to Paczkowski. A spokesperson said that Musk sent his views to Edge.org founder John Brockman via email and did not intend for them to be published. The individual added that Musk planned to write a longer blog post addressing the issue at a later date.

The South African-born inventor and entrepreneur has shared his views on the dangers of AI technology on multiple occasions. In October, while addressing those in attendance at the MIT Aeronautics and Astronautics Department's 2014 Centennial Symposium, he compared AI to summoning demons and referred to it as the biggest existential threat currently faced by mankind.

Likewise, as Justin Moyer of The Washington Post pointed out, Musk said in August that AI was "potentially more dangerous than nukes," and in June, he cautioned that the scientists in the movie The Terminator did not expect to be developing a killing machine; they simply made the technology and were surprised by the outcome.

"It gets weirder: Musk has invested in at least two artificial intelligence companies, one of which, DeepMind, he appeared to slight in his recent deleted blog post," Moyer said, adding that while DeepMind was acquired by Google in January, it appears as though Musk was just supporting AI companies to keep an eye on them.

"Unless you have direct exposure to groups like DeepMind, you have no idea how fast -- it is growing at a pace close to exponential," Musk wrote, according to The Washington Post. "It's not from the standpoint of actually trying to make any investment return. It's purely I would just like to keep an eye on what's going on with artificial intelligence."

Read more:

Elon Musk Predicts Catastrophic AI Event Will Occur Within 10 Years

Elon Musk Once Again Warns of the Looming Robot Apocalypse

Tesla and SpaceX founder Elon Musk has once again taken to a public forum to warn everyone that they shouldn't sleep on recent developments in the artificial intelligence field. In short, Musk says that the chance of "something seriously dangerous happening" is likely in five years or so, and a near certainty within a decade.

Musk posted his warning on science and futurology site Edge.org, as a reply to an article titled "The Myth of AI." At some point, Musk deleted his comment, but quick redditors over at the futurology subreddit caught it.

Here's what he had to say:

The pace of progress in artificial intelligence (I'm not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast -- it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most. This is not a case of crying wolf about something I don't understand.

I am not alone in thinking we should be worried. The leading AI companies have taken great steps to ensure safety. They recognize the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen.

This is by no means Musk's first warning of the type.

In August, he tweeted that AI was "potentially more dangerous than nukes."

A few months ago he vocalized his concerns regarding a possible Skynet scenario. In fact, he pretty much admitted to investing in an AI company so that he could keep an eye on them.

And barely three weeks ago, speaking at MIT's Aeronautics and Astronautics Department's Centennial Symposium, Musk compared harnessing AI to controlling a demon.

"I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it's probably that. So we need to be very careful with the artificial intelligence."

Original post:

Elon Musk Once Again Warns of the Looming Robot Apocalypse

Elon Musk's private warning on AI: seriously dangerous in 5 years

Brittany Hillen

In October during an MIT event, Elon Musk shared a word of caution about artificial intelligence, saying that humanity needs to be careful with the technology, and that it is likely "our biggest existential threat". He drove the point home, saying that by trifling with artificial intelligence "we are summoning the demon." Following this, a comment from Musk about artificial intelligence efforts that was supposed to remain private was inadvertently published for all to see, and it paints a far more dire warning.

The statement was made public following a misunderstanding about whether it was supposed to be kept private (it was). Over on Edge.org, a piece featuring Jaron Lanier discussing artificial intelligence was posted. If you scroll to the end of the feature, you'll see statements from others who shared their own thoughts on the technology, and amongst them for a brief period of time was the following from Musk:

The pace of progress in artificial general intelligence (I'm not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast -- it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most. Please note that I am normally super pro technology, and have never raised this issue until recent months. This is not a case of crying wolf about something I don't understand.

According to a spokesperson who spoke to the folks at Mashable, Musk's statement was made in an email and hadn't been intended for publication. The statement was pulled, but not before someone managed to grab a screenshot, which quickly went viral. It was summed up with:

I am not alone in thinking we should be worried. The leading AI companies have taken great steps to ensure safety. They recognize the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen...

A screenshot of Musk's statement was preserved courtesy of Reddit user "Buck-Nasty".

VIA: Mashable

Here is the original post:

Elon Musk's private warning on AI: seriously dangerous in 5 years

Elon Musk's secret fear: Artificial Intelligence will turn deadly in 5 years

By Adario Strange, 2014-11-18 01:56:08 UTC

There's plenty of debate over the singularity, a hypothetical future moment when software becomes self-aware and smart beyond our capacity to understand. Some say it will be a boon for humanity; some foresee an Artificial Intelligence-driven apocalypse.

We already knew that Elon Musk was in the latter camp. Now we know that the SpaceX and Tesla entrepreneur thinks the A.I. doom is approaching faster than anyone suspects: within the next 5-10 years.

It all started last Friday, when noted virtual reality pioneer Jaron Lanier was featured on publisher John Brockman's site, Edge.org, discussing the potential threat of artificial intelligence in a post titled "The Myth of A.I." Following his thoughts are comments from a number of science and technology luminaries weighing in on the topic. Among those comments were the thoughts of Musk, who sounded a particularly alarming note about the threat of A.I.

"The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most," wrote Musk. "Please note that I am normally super pro technology and have never raised this issue until recent months. This is not a case of crying wolf about something I don't understand."

The problem: according to a Musk spokesperson contacted by Mashable, the comments were emailed to Brockman and were not intended to be made public on the site.

Soon after the comment appeared, it was removed, but not before a screenshot of the comment was captured and posted on Reddit, effectively ensuring that Musk's supposedly private comments were preserved for all the Internet to see.

Aside from the frighteningly near-term prediction, the other thing that seemed to give Musk's comment import was the mention of DeepMind, a very real company he has invested in that works in the artificial intelligence space.

"The pace of progress in artificial intelligence (I'm not referring to narrow AI) is incredibly fast," said Musk. "Unless you have direct exposure to groups like Deepmind, you have no idea how fast it is growing at a pace close to exponential."

Later, Musk went on to claim that his view is shared by others working in the space.

More here:

Elon Musk's secret fear: Artificial Intelligence will turn deadly in 5 years

Magic tricks created using artificial intelligence for the first time

Researchers working on artificial intelligence at Queen Mary University of London have taught a computer to create magic tricks.

The researchers gave a computer program the outline of how a magic jigsaw puzzle and a mind-reading card trick work, as well as the results of experiments into how humans understand magic tricks, and the system created completely new variants on those tricks which can be delivered by a magician.

The magic tricks created were of the type that use mathematical techniques rather than sleight of hand or other theatrics, and are a core part of many magicians' repertoires. The tricks, details of which are published today (Monday) in the journal Frontiers in Psychology, proved popular with audiences, and the magic puzzle was put on sale in a London magic shop. The card trick is available as an app called Phoney in the Google Play Store.

Co-creator of the project, Howard Williams, explains how a computer can aid trick creation: "Computer intelligence can process much larger amounts of information and run through all the possible outcomes in a way that is almost impossible for a person to do on their own. So while a member of the audience might have seen a variation on this trick before, the AI can now use psychological and mathematical principles to create lots of different versions and keep audiences guessing."

The magic jigsaw involves assembling a jigsaw to show a series of shapes, then taking it apart and reassembling it so that certain shapes have disappeared, using a clever geometric principle. Creation of tricks of this kind involves several simultaneous factors, such as the size of the puzzle, the number of pieces involved, the number of shapes that appear and disappear and the ways that the puzzle can be arranged. Something this complex is ideal for an algorithm to process, and make decisions about which flexible factors are most important.

The mind-reading card trick involves arranging a deck of playing cards in a specific way then, based on a few seemingly innocuous pieces of information from the audience, identifying a card that has been selected from the deck and using an Android app to reveal the card on a mobile phone screen. The computer was used to arrange the decks in such a way that a specific card could be identified with the least amount of information possible. The program identified arrangements for the deck that on average required one fewer question to be asked before the card was found than with the traditional method. The app simply saves the magician from having to remember the order of the cards.

Professor Peter McOwan, part of the QMUL team who worked on the project, added: "Using AI to create magic tricks is a great way to demonstrate the possibilities of computer intelligence and it also forms a part of our research into the psychology of being a spectator. For example, we suspected that audiences would be suspicious of the involvement of technology in the delivery of a trick but we've found out that isn't the case."

Video: http://www.youtube.com/watch?v=xZiqkoaCaic

Jigsaw puzzle: http://qmagicworld.wordpress.com/the-twelve-magicians-of-osiris/


More here:

Magic tricks created using artificial intelligence for the first time

A.I. Artificial Intelligence, Game of Life program walkthru, in C++, OLDSCHOOL CONSOLE based. ACE – Video


A.I. Artificial Intelligence, Game of Life program walkthru, in C++, OLDSCHOOL CONSOLE based. ACE
A.I. - Artificial Intelligence, Game of Life program walkthru, in C++, CONSOLE based (i.e. OLDSCHOOL) - ACE Appetite Control Energy - MORE ENERGY, LESS WEI...

By: superbee1970

See the article here:

A.I. Artificial Intelligence, Game of Life program walkthru, in C++, OLDSCHOOL CONSOLE based. ACE - Video

University of Toronto startup searches for Ebola cure

A team of University of Toronto researchers is using artificial intelligence to hunt for a potential Ebola cure.

Software from U of T startup Chematria Inc. has the ability to think like a human chemist, analyzing the effectiveness of existing and hypothetical drugs against disease. Unlike a human, the program can complete this process in days instead of years, saving time and millions of dollars. Now, scientists are hoping the advanced technology can tackle the global Ebola crisis.

"What we are attempting would have been considered science fiction, until now," said Chematria CEO Abraham Heifets.

"The testing of pharmaceuticals is normally a physical science -- we still have to build every prototype we test," said Heifets, a process that results in hundreds of thousands of failed experiments for each success. Chematria can do that research virtually, making it 150 times faster than conventional methods, he said. The tech has been applied to malaria, leukemia and multiple sclerosis, but in the face of a growing pandemic, its creators last week launched an Ebola project.

"If there is a drug out there for (Ebola), there's a very good chance we'll find it," Heifets told the Star.

Operating on the largest supercomputer in Canada, made by IBM, Chematria is programmed with millions of data points that look at things like drug patents and how different drugs work. The Ebola project is targeting a claw-like mechanism the virus uses to latch onto cells.

"What our search then does is takes this target and imagines millions of hypothetical medicines that would fit in that claw and stop it from working," said Heifets. There's also a chance it will discover existing medicines that can accomplish that task. There have been no hits so far.

In a few weeks, the software will provide scientists with a list of compounds ranked from 1 to 10 million in order of how well they work against Ebola. Previous projects have produced promising results, according to Heifets. The search for a drug used to combat the progression of MS, for example, resulted in nine strong candidates and is now in the animal testing phase, he said.
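As a very rough sketch of what "score every candidate, then rank the list" looks like in code, the example below scores a batch of compounds with a stand-in predict_binding() function and returns the top hits. The function, the compound IDs, and the scores are all hypothetical placeholders; Chematria's actual models and data are proprietary and vastly more sophisticated.

```python
# Hypothetical virtual-screening sketch: score candidate compounds, return the best-ranked.
import random

def predict_binding(compound_id: int) -> float:
    """Stand-in for a learned model that scores how well a compound is
    predicted to block the viral 'claw' (higher is better)."""
    random.seed(compound_id)            # deterministic placeholder score
    return random.random()

def rank_compounds(num_compounds: int, top_n: int = 10):
    """Score every candidate and return the top_n, mirroring the
    'ranked from 1 to 10 million' list described above."""
    scored = ((predict_binding(cid), cid) for cid in range(num_compounds))
    return sorted(scored, reverse=True)[:top_n]

if __name__ == "__main__":
    # The article describes ten million candidates; a smaller number keeps the demo quick.
    for score, cid in rank_compounds(100_000, top_n=5):
        print(f"compound {cid}: predicted score {score:.3f}")
```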

Even if the program can't find a cure for Ebola, Heifets said the technology has positive implications for responding to pandemics.

"Humanity as a whole has better options now than hoping and praying we'll find something in 10 years."

Read more from the original source:

University of Toronto startup searches for Ebola cure

Science Documentary: Stem Cells,Regenerative Medicine,Artificial Heart,a future medicine documentary – Video


Science Documentary: Stem Cells,Regenerative Medicine,Artificial Heart,a future medicine documentary
Science Documentary: Stem Cells,Regenerative Medicine,Artificial Heart,a future medicine documentary In each and every one of our organs and tissue, we have stem cells. These stem cells...

By: ScienceRound

More here:

Science Documentary: Stem Cells,Regenerative Medicine,Artificial Heart,a future medicine documentary - Video

Smart bombs that can pick whom to kill

Los Angeles: On a bright fall day last year off the coast of Southern California, an Air Force B-1 bomber launched an experimental missile that may herald the future of warfare.

Initially, pilots aboard the plane directed the missile, but halfway to its destination it severed communication with its operators. Alone, without human oversight, the missile decided which of three ships to attack, dropping to just above the sea surface and striking a 260-foot (80-metre) unmanned freighter.

Warfare is increasingly guided by software. Today, armed drones can be operated by remote pilots peering into video screens thousands of miles from the battlefield. But now, some scientists say, arms-makers have crossed into troubling territory: They are developing weapons that rely on artificial intelligence, not human instruction, to decide what to target and whom to kill.

As these weapons become smarter and nimbler, critics fear they will become increasingly difficult for humans to control or to defend against. And while pinpoint accuracy could save civilian lives, critics fear weapons without human oversight could make war more likely, as easy as flipping a switch.

Britain, Israel and Norway are already deploying missiles and drones that carry out attacks against enemy radar, tanks or ships without direct human control.

After launch, so-called autonomous weapons rely on artificial intelligence and their own sensors to select targets and to initiate an attack.

Britain's "fire and forget" Brimstone missiles, for example, can distinguish among tanks and cars and buses without human assistance, and can hunt targets in a predesignated region without oversight. The Brimstones also communicate with one another, sharing their targets.

Armaments with even more advanced self-governance are on the drawing board, although the details usually are kept secret.

"An autonomous weapons arms race is already taking place," said Steve Omohundro, a physicist and artificial intelligence specialist at Self-Aware Systems, a Palo Alto, California, research centre. "They can respond faster, more efficiently and less predictably."

Concerned by the prospect of a robotics arms race, representatives from dozens of nations will meet on Thursday in Geneva to consider whether development of these weapons should be restricted by the Convention on Certain Conventional Weapons. Christof Heyns, the United Nations special rapporteur on extrajudicial, summary or arbitrary executions, last year called for a moratorium on the development of these weapons altogether.

See the article here:

Smart bombs that can pick whom to kill

IoT Wont Work Without Artificial Intelligence

As the Internet of Things (IoT) continues its run as one of the most popular technology buzzwords of the year, the discussion has turned from what it is, to how to drive value from it, to the tactical: how to make it work.

IoT will produce a treasure trove of big data: data that can help cities predict accidents and crimes, give doctors real-time insight into information from pacemakers or biochips, enable optimized productivity across industries through predictive maintenance on equipment and machinery, create truly smart homes with connected appliances and provide critical communication between self-driving cars. The possibilities that IoT brings to the table are endless.

As the rapid expansion of devices and sensors connected to the Internet of Things continues, the sheer volume of data being created by them will increase to a mind-boggling level. This data will hold extremely valuable insight into what's working well or what's not, pointing out conflicts that arise and providing high-value insight into new business risks and opportunities as correlations and associations are made.

It sounds great. However, the big problem will be finding ways to analyze the deluge of performance data and information that all these devices create. If you've ever tried to find insight in terabytes of machine data, you know how hard this can be. It's simply impossible for humans to review and understand all of this data, and doing so with traditional methods, even if you cut down the sample size, simply takes too much time.

We need to improve the speed and accuracy of big data analysis in order for IoT to live up to its promise. If we don't, the consequences could be disastrous and could range from the annoying (home appliances that don't work together as advertised) to the life-threatening (pacemakers malfunctioning or hundred-car pileups).

The only way to keep up with this IoT-generated data and gain the hidden insight it holds is with machine learning.

Wikipedia defines machine learning as a subfield of computer science (CS) and artificial intelligence (AI) that deals with the construction and study of systems that can learn from data, rather than follow only explicitly programmed instructions.

While this may sound a bit like science fiction, it's already present in everyday life. For example, it's used by Pandora to determine what other songs you may like, or by Amazon.com to suggest other books and movies to you. Both are based on what has been learned about the user and are refined over time as the system learns more about your behaviors.

In an IoT situation, machine learning can help companies take the billions of data points they have and boil them down to what's really meaningful. The general premise is the same as in the retail applications: review and analyze the data you've collected to find patterns or similarities that can be learned from, so that better decisions can be made.
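As a minimal, hypothetical sketch of that idea, the example below scans a simulated sensor feed and flags readings that sit far from the rest of the stream, so a person only has to look at the handful of points that matter. Real IoT pipelines use far richer models; the feed, threshold, and simple statistics here are illustrative only.

```python
# Toy anomaly detection on a simulated sensor feed: flag readings far from the mean.
import statistics

def find_anomalies(readings, z_threshold=2.0):
    """Return (index, value) pairs more than z_threshold standard deviations from the mean."""
    mean = statistics.fmean(readings)
    stdev = statistics.stdev(readings)
    return [(i, x) for i, x in enumerate(readings)
            if stdev and abs(x - mean) / stdev > z_threshold]

if __name__ == "__main__":
    # Simulated temperature feed from a connected appliance, with one fault injected.
    feed = [21.0, 21.2, 20.9, 21.1, 21.0, 35.5, 21.2, 21.0]
    print(find_anomalies(feed))   # -> [(5, 35.5)]
```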

For example, wearable devices that track your health are already a burgeoning industry but soon these will evolve to become devices that are both inter-connected and connected to the internet, tracking your health and providing real-time updates to a health service.

Read more:

IoT Wont Work Without Artificial Intelligence

World wary as bombs, not humans, pick whom to kill

On a bright fall day last year off the coast of Southern California, an Air Force B-1 bomber launched an experimental missile that may herald the future of warfare.

Initially, pilots aboard the plane directed the missile, but halfway to its destination, it severed communication with its operators. Alone, without human oversight, the missile decided which of three ships to attack, dropping to just above the sea surface and striking a 260-foot unmanned freighter.

Warfare is increasingly guided by software. Today, armed drones can be operated by remote pilots peering into video screens thousands of miles from the battlefield. But now, some scientists say, arms makers have crossed into troubling territory: They are developing weapons that rely on artificial intelligence, not human instruction, to decide what to target and whom to kill.

As these weapons become smarter and nimbler, critics fear they will become increasingly difficult for humans to control or to defend against. And while pinpoint accuracy could save civilian lives, critics fear weapons without human oversight could make war more likely, as easy as flipping a switch.

Britain, Israel and Norway are already deploying missiles and drones that carry out attacks against enemy radar, tanks or ships without direct human control. After launch, so-called autonomous weapons rely on artificial intelligence and sensors to select targets and to initiate an attack.

(A B-1 bomber deploys a Long Range Anti-Ship Missile. The missiles are designed to select and strike targets without human oversight)

Britain's "fire and forget" Brimstone missiles, for example, can distinguish among tanks and cars and buses without human assistance, and can hunt targets in a predesignated region without oversight. The Brimstones also communicate with one another, sharing their targets.

Armaments with even more advanced self-governance are on the drawing board, although the details usually are kept secret. "An autonomous weapons arms race is already taking place," said Steve Omohundro, a physicist and artificial intelligence specialist at Self-Aware Systems, a research center in Palo Alto, Calif. "They can respond faster, more efficiently and less predictably."

Concerned by the prospect of a robotics arms race, representatives from dozens of nations will meet on Thursday in Geneva to consider whether development of these weapons should be restricted by the Convention on Certain Conventional Weapons. Christof Heyns, the United Nations special rapporteur on extrajudicial, summary or arbitrary executions, last year called for a moratorium on the development of these weapons.

The Pentagon has issued a directive requiring high-level authorization for the development of weapons capable of killing without human oversight. But fast-moving technology has already made the directive obsolete, some scientists say.

Visit link:

World wary as bombs, not humans, pick whom to kill