Monthly Archives: March 2017

At Syracuse University, more students are getting ahold of virtual reality – The Daily Orange

Posted: March 6, 2017 at 3:15 pm

Jillian Cabrera peered down, and only a dark space far below greeted him. He glanced to the side, and red canyon walls enveloped him. He stood on a wooden bridge that stretched across the canyon mouth, its panels tied together and spaced unevenly apart, the wind whistling through the gaps. No railing protected him.

The only way to get off the bridge was to step off it.

Maggie Nhan watched Cabrera, who is afraid of heights, stand motionless in the middle of a basement lab in Shaffer Art Building. She glanced at the computer monitor, which displayed the red canyon walls and bridge. He was hooked up to the HTC Vive, playing the virtual reality game Waltz of the Wizard.

Cabrera, clutching the Vive remotes, laughed nervously. "I'm in a room," Cabrera said, rotating in place. All he needed to do was take one step to the side. "Wow, this is hard. My hands are actually sweating."

Cabrera, a junior Syracuse University student, eventually took the step and was transported back to a wizard's lab. He and Nhan, a sophomore, are computer art and animation majors who used the Vive to design their own virtual reality games last semester.

It's just one on-campus initiative teaching students how to utilize VR software, as several pockets of the SU community have embraced the technology. SU introduced its first virtual reality course in fall 2014 in the College of Visual and Performing Arts. The S.I. Newhouse School of Public Communications later introduced its Virtual Reality Storytelling course in the spring of 2015. There's also a joint course in the College of Engineering and Computer Science and the School of Architecture that's centered around virtual reality.

In addition to curriculum, SU's football team previously used VR software to train its quarterbacks in 2015 and will be integrating another program this spring, said Mike Morrison, assistant director of athletics communications. Other projects include commercialized ventures, like imr.sv, launched last August by Sam Lewis, a Martin J. Whitman School of Management student.

Virtual reality's current popularity began in 2010 with the development of the Oculus Rift prototype. The Rift and other VR systems let users interact in a virtual, computer-generated environment in which they no longer see their physical surroundings. VR differs from augmented reality, which overlays a physical space with digital elements, and from 360 videos, which let users rotate within a video. A 360 video can be considered a form of VR, but not all VR is 360 video.

Meyer Giordano, an instructor in VPA, taught CAR 230, Topics in Computer Gaming I, the course Cabrera and Nhan took. When Giordano first started teaching it in fall 2014, the software was so rudimentary that it was difficult to get the program running, he said. Now the technology has progressed to the point that he could show someone how to create a basic environment in five minutes.

As the technology has advanced, teaching the class has become "a lot more straightforward on the technical side, but because there's more content now, there's a lot of other directions to explore," Giordano said.

Currently the cost of VR is restraining its expansion. Each high-capability system can cost more than $500. But Cabrera and Nhan said they are excited for the future of VR because it will appeal to a greater audience than typical video games. Instead of relying on controllers and buttons, users will be able to use their bodies.

The purpose of experimenting with VR is to have students push the technology to see what they can create, Giordano said. But as VR gets more commercialized, it loses the frontier aspect and he said he might find the technology less interesting. He could switch to teaching augmented reality, he said, which has not been very developed yet.

But Giordano said he is still attracted to the future of virtual reality, such as the idea that VR might limit consumer waste. Instead of buying physical clothes, he said, a user would buy clothes in the virtual world and just wear those.

"The more time we as humans spend in VR, the less time we're spending trashing this planet," he said.

School of Architecture/College of Engineering and Computer Science

On the second floor of Slocum Hall, 40 students sat clustered in the front of room 224. Their worktables lay abandoned, covered with paper and wooden objects, as sunlight streamed through the windows. Images of sensory experiences, geometric shapes and videos projected onto the wall.

Five students were presenting a virtual reality proposal, part of a joint architecture and engineering class taught by Amber Bartosh, an assistant professor of architecture, and Mark Povinelli, a professor of electrical engineering and computer science. The students are creating a Climate Disruptor Awareness Generator, which will be installed in April in E.S. Bird Library.

The Climate Disruptor Awareness Generator is meant to demonstrate to students the impact of climate change, with virtual reality and augmented reality adding an interactive component to the experience.

The VR/AR team is still in the early design stage for its contribution to the project, said Cliff Bourque, a graduate architecture student on the team. Right now, the group is focusing on the process of creating the elements, rather than the content.

Povinelli said that with the proper amount of real-world prototyping and testing, VR can add to the strength of the design process for engineers. Bartosh said she has been experimenting with VR to visualize things architects can't see easily, like energy and solar radiation.

"It's very difficult in architecture to study anything at full-scale," Bartosh said. "We do almost everything either through models or drawings, and even in a digital model, it's difficult to get a scale or perspective."

Bartosh added later: "I'm always telling the students that right now VR is largely used for representation of simulation, but it's not inconceivable to think of VR as a future material, the way that we think about physical materials."

S.I. Newhouse School of Public Communications

A card swipe protects the entrance to the Alan Gerry Center for Media Innovation lab while the Department of Public Safety monitors it. The room, tucked in the back of Newhouse 2, is stocked with Oculus Rifts, HTC Vives, Google Cardboard, Samsung Gear VRs and 360 cameras.

So much new equipment comes into the lab that the glass case in the back is nicknamed the "digital petting zoo," said Dan Pacheco, Peter A. Horovitz Chair in Journalism Innovation and spearhead of Newhouse's VR courses.

But despite the high-tech equipment, students still sign out VR equipment with a pen and notebook.

The lab is where Asa Worthley, a junior Whitman student, came to work on his 360 video Pale Blue Dot in 360: VR Carl Sagan. The three-minute clip collages images of iconic people in a galaxy skyline, accompanied by a narration by Carl Sagan.

Worthley is a part of 5th Medium, the first virtual reality club at SU, which works with The Daily Orange on 360 videos. Students of any major or discipline can join the club, giving students like him who aren't in Newhouse support and access to the technology. The club has been working on projects like the Greek Peak Mountain Resort 360 video, where viewers can experience riding a ski lift and snowboarding.

The innovation lab is also a space for students taking one of the two Newhouse virtual reality classes: Virtual Reality Storytelling or Introduction to 360 Video. Pacheco was first exposed to VR in 2012, when he met Nonny de la Peña, "the godmother of virtual reality," he said.

Pacheco convinced her to come to SU to demonstrate it. After further exposure over the next few years, he asked his department head to create a VR storytelling class for spring 2015. Pacheco thought no one would sign up, but the class filled within a couple of days.

Now, about 160 students have taken one of the two classes. While mostly Newhouse students enroll, Pacheco said he leaves a few spots open for students from other colleges. The exposure students get puts them about on par with current media companies, he said.

"When I've taken students down to The New York Times, people at The New York Times are telling me, 'Yeah, your students are pretty much at the same level as where we're at,'" Pacheco said.

Ken Harper, an associate professor of multimedia photography and design who taught the first 360 video course at Newhouse last semester, said the hardest part about teaching immersive technologies is that he is still learning himself. He said it isn't uncommon to pick up skills on the weekend and then teach them in class the next week.

Harper and Pacheco said they created a faculty group for professors across the university who teach VR.

For journalists, the most promising aspects of VR are its ability to enhance storytelling, its potential to educate (such as teaching students about the solar system) and its accessibility for less privileged people, Harper said.

And while there is need for caution about VR, like the possibility for addiction or tricking people into false memories, Pacheco said that in his experience, people don't want to just check out of reality, but rather make reality better. Journalists need to start using immersive technology now, Pacheco and Harper said, because their content will define the ethical boundaries for the medium.

"My role in this is to keep the humanity in it," Harper said. "I think if we could convey information, and offer up new worlds for people who otherwise couldn't have them, if we could develop the storytelling techniques that further empathy, maybe we can make the world a little bit friendlier."

Sports Editor Tomer Langer contributed reporting to this story.

Published on March 5, 2017 at 10:18 pm

Contact Haley: hykim100@syr.edu

See original here:

At Syracuse University, more students are getting ahold of virtual reality - The Daily Orange

Posted in Virtual Reality

The Future Of AI With Alexy Khrabrov – Forbes

Posted: at 3:14 pm


Alexy Khrabrov doesn't just want to tell people about AI. He wants to show you, immerse you and get you as excited as he is. The founder and CEO of By the Bay and Chief Scientist at the Cicero Institute has made a career out of not only understanding ...

Read more:

The Future Of AI With Alexy Khrabrov - Forbes

Posted in Ai

Astronomers Deploy AI to Unravel the Mysteries of the Universe – WIRED

Posted: at 3:14 pm

Image: Brad Goldpaint/Getty Images

Astronomer Kevin Schawinski has spent much of his career studying how massive black holes shape galaxies. But he isn't into dirty work (dealing with messy data), so he decided to figure out how neural networks could do it for him. Problem is, he and his cosmic colleagues suck at that sophisticated kind of coding.

That changed when another professor at Schawinski's institution, ETH Zurich, sent him an email and CCed Ce Zhang, who actually is a computer scientist. "You guys should talk," the email said. And they did: Together, they plotted how they could take leading-edge machine-learning techniques and superimpose them on the universe. And recently, they released their first result: a neural network that sharpens up blurry, noisy images from space. Kind of like those scenes in CSI-type shows where a character shouts "Enhance! Enhance!" at gas station security footage, and all of a sudden the perp's face resolves before your eyes.

Schawinski and Zhang's work is part of a larger automation trend in astronomy: Autodidactic machines can identify, classify, and (apparently) clean up their data better and faster than any humans. And soon, machine learning will be a standard digital tool astronomers can pull out, without even needing to grasp the backend.

In their initial research, Schawinski and Zhang came across a kind of neural net that, in an example, generated original pictures of cats after learning what cat-ness is from a set of feline images. "It immediately became clear," says Schawinski.

This feline-friendly system was called a GAN, or generative adversarial network. It pits two machine brains (each its own neural network) against each other. To train the system, they gave one of the brains a purposely noisy, blurry image of a galaxy and then an unmarred version of that same galaxy. That network did its best to fix the degraded galaxy, making it match the pristine one. The second half of the network evaluated the differences between that fixed image and the originally OK one. In test mode, the GAN got a new set of scarred pictures and performed computational plastic surgery.
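The training loop above alternates between the two networks. As a rough illustration (not the astronomers' actual image model), the sketch below runs the same alternating scheme on a toy one-dimensional "denoising" task, with a one-parameter generator and a logistic discriminator; every name and constant here is invented for teaching purposes.

```python
import math, random

random.seed(0)

# Toy stand-in for the paired setup described above: "clean" values come
# from a fixed distribution, "degraded" values are clean plus noise.
def sample_pair():
    clean = random.gauss(5.0, 1.0)
    degraded = clean + random.gauss(0.0, 2.0)
    return clean, degraded

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

g_w = 0.1            # generator: G(x) = g_w * x, maps degraded -> clean
d_a, d_b = 0.1, 0.0  # discriminator: D(y) = sigmoid(d_a*y + d_b)
lr = 0.01

for step in range(2000):
    clean, degraded = sample_pair()
    fake = g_w * degraded

    # Discriminator update: push D(clean) toward 1 and D(fake) toward 0.
    p_real = sigmoid(d_a * clean + d_b)
    p_fake = sigmoid(d_a * fake + d_b)
    d_a += lr * ((1 - p_real) * clean - p_fake * fake)
    d_b += lr * ((1 - p_real) - p_fake)

    # Generator update: fool the discriminator (raise D(fake)) while
    # staying near the paired clean target, mirroring the paired
    # degraded/pristine training the article describes.
    p_fake = sigmoid(d_a * fake + d_b)
    grad_adv = (1 - p_fake) * d_a * degraded   # ascent on log D(fake)
    grad_fit = (clean - fake) * degraded       # ascent on -0.5*(clean-fake)^2
    g_w += lr * (0.1 * grad_adv + grad_fit)

print(f"learned generator weight ~= {g_w:.2f}")
```

The real system replaces these scalars with deep convolutional networks over images, but the alternating "fixer vs. judge" structure is the same.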

Once trained up, the GAN revealed details that telescopes weren't sensitive enough to resolve, like star-forming spots. "I don't want to use a cliché phrase like holy grail," says Schawinski, "but in astronomy, you really want to take an image and make it better than it actually is."

When I asked the two scientists, who Skyped me together on Friday, what's next for their silicon brains, Schawinski asked Zhang, "How much can we reveal?" which suggests to me they plan to take over the world.

They went on to say, though, that they don't exactly know, short-term (or at least they're not telling). Long-term, these machine learning techniques "just become part of the arsenal" scientists use, says Schawinski, in a kind of ready-to-eat form. Scientists "shouldn't have to be experts on deep learning and have all the arcane knowledge that only five people in the world can grapple with."

Other astronomers have already used machine learning to do some of their work. A set of scientists at ETH Zurich, for example, used artificial intelligence to combat contamination in radio data. They trained a neural network to recognize and then mask the human-made radio interference that comes from satellites, airports, WiFi routers, microwaves, and malfunctioning electric blankets. Which is good, because the number of electronic devices will only increase, while black holes arent getting any brighter.
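The ETH team used a trained network for this; as a much simpler statistical stand-in for the same masking idea, the sketch below flags samples that sit far outside the background noise using a median-absolute-deviation rule. The data and threshold are invented for illustration.

```python
# Flag radio samples that look like human-made interference: anything
# far outside the background noise gets masked out. A real pipeline
# trains a network for this; here a robust outlier rule stands in.
def mask_interference(samples, k=5.0):
    """Return True for samples to keep, False for likely interference.

    A sample is kept if it lies within k median-absolute-deviations
    of the median (robust to the very outliers we want to catch).
    """
    s = sorted(samples)
    n = len(s)
    med = (s[n // 2 - 1] + s[n // 2]) / 2 if n % 2 == 0 else s[n // 2]
    devs = sorted(abs(x - med) for x in samples)
    mad = (devs[n // 2 - 1] + devs[n // 2]) / 2 if n % 2 == 0 else devs[n // 2]
    return [abs(x - med) <= k * mad for x in samples]

# Quiet background with one strong spike standing in for a WiFi blip.
signal = [0.1, -0.2, 0.05, 9.0, 0.0, -0.1, 0.15, -0.05]
print(mask_interference(signal))  # [True, True, True, False, True, True, True, True]
```

A learned model earns its keep where interference is not this obvious, e.g. when it overlaps real astrophysical signals in time and frequency.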

Neural networks need not limit themselves to new astronomical observations, though. Scientists have been dragging digital data from the sky for decades, and they can improve those old observations by plugging them into new pipelines. With the same data people had before, we can learn more about the universe, says Schawinski.

Machine learning also makes data less tedious to process. Much of astronomers' work once involved the slog of searching for the same kinds of signals over and over (the blips of pulsars, the arms of galaxies, the spectra of star-forming regions) and figuring out how to automate that search. But when a machine learns, it figures out how to automate the slogging on its own. The code itself decides that galaxy type 16 exists and has spiral arms and then says, "Found another one!" As Alex Hocking, who developed one such system, put it, "the important thing about our algorithm is that we have not told the machine what to look for in the images, but instead taught it how to see."

A prototype neural network that pulsar astronomers developed in 2012 found 85 percent of the pulsars in a test dataset; a 2016 system flags fast radio burst candidates as human- or space-made, and as coming from a known source or from a mystery object. On the optical side, a computer brain-web called RobERt (Robotic Exoplanet Recognition) processes the chemical fingerprints in planetary systems, doing in seconds what once took scientists days or weeks. Even creepier, when the astronomers asked RobERt to dream up what water would look like, he, uh, did it.

The point, here, is that computers are better and faster at some parts of astronomy than astronomers are. And they will continue to change science, freeing up scientists' time and wetware for more interesting problems than whether a signal is spurious or a galaxy is elliptical. "Artificial intelligence has broken into scientific research in a big way," says Schawinski. "This is a beginning of an explosion. This is what excites me the most about this moment. We are witnessing and, a little bit, shaping the way we're going to do scientific work in the future."

Original post:

Astronomers Deploy AI to Unravel the Mysteries of the Universe - WIRED

Posted in Ai

More Bad News for Gamblers – AI Wins Again – HPCwire (blog)

Posted: at 3:14 pm

AI-based poker playing programs have been upping the ante for lowly humans. Notably several algorithms from Carnegie Mellon University (e.g. Libratus, Claudico, and Baby Tartanian8) have performed well. Writing in Science last week, researchers from the University of Alberta, Charles University in Prague and Czech Technical University report their poker algorithm DeepStack is the first computer program to beat professional players in heads-up no-limit Texas hold'em poker.

Sorting through the firsts is tricky in the world of AI game-playing programs. What sets DeepStack apart from other programs, say the researchers, is its more realistic approach, at least in games such as poker where all factors are never fully known (think bluffing, for example). Heads-up no-limit Texas hold'em (HUNL) is a two-player version of poker in which two cards are initially dealt face down to each player, and additional cards are dealt face-up in three subsequent rounds. No limit is placed on the size of the bets, although there is an overall limit to the total amount wagered in each game.

"Poker has been a longstanding challenge problem in artificial intelligence," says Michael Bowling, professor in the University of Alberta's Faculty of Science and principal investigator on the study. "It is the quintessential game of imperfect information in the sense that the players don't have the same information or share the same perspective while they're playing."

"Using GTX 1080 GPUs and CUDA with the Torch deep learning framework, we train our system to learn the value of situations," says Bowling on an NVIDIA blog. "Each situation itself is a mini poker game. Instead of solving one big poker game, it solves millions of these little poker games, each one helping the system to refine its intuition of how the game of poker works. And this intuition is the fuel behind how DeepStack plays the full game."

"In the last two decades," write the researchers, "computer programs have reached a performance that exceeds expert human players in many games, e.g., backgammon, checkers, chess, Jeopardy!, Atari video games, and go. These successes all involve games with information symmetry, where all players have identical information about the current state of the game. This property of perfect information is also at the heart of the algorithms that enabled these successes."

"We introduce DeepStack, an algorithm for imperfect information settings. It combines recursive reasoning to handle information asymmetry, decomposition to focus computation on the relevant decision, and a form of intuition that is automatically learned from self-play using deep learning."

In total, 44,852 games were played by the thirty-three players, with 11 players completing the requested 3,000 games, according to the paper. Over all games played, DeepStack won 492 mbb/g, which is over 4 standard deviations away from zero and thus highly significant. According to the authors, professional poker players consider 50 mbb/g a sizable margin. "Using AIVAT to evaluate performance, we see DeepStack was overall a bit lucky, with its estimated performance actually 486 mbb/g," the authors write.

(For those of us less prone to take a seat at the Texas hold'em poker table, mbb/g equals milli-big-blinds per game, or the average winning rate over a number of hands, measured in thousandths of big blinds. A big blind is the initial wager made by the non-dealer before any cards are dealt. The big blind is twice the size of the small blind; a small blind is the initial wager made by the dealer before any cards are dealt. The small blind is half the size of the big blind.)
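For readers who want that bookkeeping spelled out, the conversion is simple arithmetic; the big-blind total below is back-calculated for illustration and is not a figure reported in the paper.

```python
# mbb/g bookkeeping from the explanation above:
# 1 mbb/g = one thousandth of a big blind won per game, on average.
def mbb_per_game(total_big_blinds_won: float, games_played: int) -> float:
    """Average win rate in milli-big-blinds per game."""
    return 1000.0 * total_big_blinds_won / games_played

# Illustrative only: winning about 22,067 big blinds over the 44,852
# games reported in the paper corresponds to the 492 mbb/g figure.
rate = mbb_per_game(22_067, 44_852)
print(round(rate))  # 492
```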

It's an interesting paper. Game theory, of course, has a long history, and as the researchers note, the founder of modern game theory and computing pioneer, von Neumann, envisioned reasoning in games without perfect information: "Real life is not like that. Real life consists of bluffing, of little tactics of deception, of asking yourself what is the other man going to think I mean to do. And that is what games are about in my theory." One game that fascinated von Neumann was poker, where players are dealt private cards and take turns making bets or bluffing on holding the strongest hand, calling opponents' bets, or folding and giving up on the hand and the bets already added to the pot. Poker is a game of imperfect information, where players' private cards give them asymmetric information about the state of the game.

According to the paper, the DeepStack algorithm "is composed of three ingredients: a sound local strategy computation for the current public state, depth-limited look-ahead using a learned value function to avoid reasoning to the end of the game, and a restricted set of look-ahead actions. At a conceptual level these three ingredients describe heuristic search, which is responsible for many of AI's successes in perfect information games. Until DeepStack, no theoretically sound application of heuristic search was known in imperfect information games."

The researchers describe DeepStack's architecture as "a standard feed-forward network with seven fully connected hidden layers each with 500 nodes and parametric rectified linear units for the output." The turn network was trained by solving 10 million randomly generated poker turn games. These turn games used randomly generated ranges, public cards, and a random pot size. The flop network was trained similarly with 1 million randomly generated flop games.
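That quoted description is enough to sketch the shape of the network. The input and output dimensions are not given in this excerpt, so the values below are placeholders; the helper just counts weights and biases for such a fully connected stack, and `prelu` shows the activation's shape (in the real network its negative slope is a learned parameter).

```python
# Layer sizes from the description above: seven fully connected hidden
# layers of 500 nodes each. INPUT_DIM and OUTPUT_DIM are NOT stated in
# this excerpt; the values here are placeholders for illustration only.
INPUT_DIM = 1000
OUTPUT_DIM = 2
HIDDEN = [500] * 7

def count_params(sizes):
    """Weights plus biases for a plain fully connected stack."""
    return sum(a * b + b for a, b in zip(sizes, sizes[1:]))

def prelu(x, alpha):
    """Parametric rectified linear unit: identity for x >= 0,
    a (learned) slope alpha for x < 0."""
    return x if x >= 0 else alpha * x

sizes = [INPUT_DIM] + HIDDEN + [OUTPUT_DIM]
print(count_params(sizes))   # 2004502 with these placeholder sizes
print(prelu(-2.0, 0.1))
```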

Link to paper: http://science.sciencemag.org/content/early/2017/03/01/science.aam6960.full

Link to NVIDIA blog: https://news.developer.nvidia.com/ai-system-beats-pros-at-texas-holdem/

Go here to see the original:

More Bad News for Gamblers – AI Wins Again - HPCwire (blog)

Posted in Ai

Let’s get the network together: Improving lives through AI – Cloud Tech

Posted: at 3:14 pm

We have seen a machine master the complex game of Go, previously thought to be one of the most difficult challenges of artificial processing. We have witnessed vehicles operating autonomously, including a caravan of trucks crossing Europe with only a single operator to monitor systems. We have seen a proliferation of robotic counterparts and automated means for accomplishing a variety of tasks. All of this has given rise to a flurry of people claiming that the AI revolution is already upon us.

Understanding the growth in the functional and technological capability of AI is crucial for understanding the real world advances we have seen. Full AI, that is to say complete, autonomous sentience, involves the ability for a machine to mimic a human to the point that it would be indistinguishable from them (the so-called Turing test). This type of true AI remains a long way from reality. Some would say the major constraint to the future development of AI is no longer our ability to develop the necessary algorithms, but, rather, having the computing power to process the volume of data necessary to teach a machine to interpret complicated things like emotional responses. While it may be some time yet before we reach full AI, there will be many more practical applications of basic AI in the near term that hold the potential for significantly enhancing our lives.

With basic AI, the processing system, embedded within the appliance (local) or connected to a network (cloud), learns and interprets responses based on experience. That experience comes in the form of training through using data sets that simulate the situations we want the system to learn from. This is the confluence of machine learning (ML) and AI. The capability to teach machines to interpret data is the key underpinning technology that will enable more complex forms of AI that can be autonomous in their responses to input. It is this type of AI that is getting the most attention. In the next ten years, the use of this kind of ML-based AI will likely fall into two categories:

There is no doubt about the commercial prospects for autonomous robotic systems for applications like online sales conversion, customer satisfaction, and operational efficiency. We see this application already being advanced to the point that it will become commercially viable, which is the first step to it becoming practical and widespread. Simply put, if revenue can be made from it, it will become self-sustaining and thus continue to grow. The Amazon Echo, a personal assistant, has succeeded as a solidly commercial application of autonomous technology in the United States.

In addition to the automation of transportation and logistics, a wide variety of additional technologies that utilise autonomous processing techniques are being built. Currently, the artificial assistant or chatbot concept is one of the most popular. By creating the illusion of a fully sentient remote participant, it makes interaction with technology more approachable. There have been obvious failings of this technology (the unfiltered Microsoft chatbot, Tay, as a prime example), but the application of properly developed and managed artificial systems for interaction is an important step along the route to full AI. This is also a hugely important application of AI as it will bring technology to those who previously could not engage with technology completely for any number of physical or mental reasons. By making technology simpler and more human to interact with, you remove some of the barriers to its use that cause difficulty for people with various impairments.

The use of AI for development and discovery is just now beginning to gain traction, but over the next decade, this will become an area of significant investment and development. There are so many repetitive tasks involved in any scientific or research project that using robotic intelligence engines to manage and perfect the more complex and repetitive tasks would greatly increase the speed at which new breakthroughs could be uncovered.

View original post here:

Let's get the network together: Improving lives through AI - Cloud Tech

Posted in Ai

The future of AI is neuromorphic. Meet the scientists building digital ‘brains’ for your phone – Wired.co.uk

Posted: at 3:14 pm


AI services like Apple's Siri and others operate by sending your queries to faraway data centers, which send back responses. They rely on cloud-based computing because today's electronics don't come with enough computing power to run the processing-heavy algorithms needed for machine learning. The typical CPUs most smartphones use could never handle a system like Siri on the device. But Dr. Chris Eliasmith, a theoretical neuroscientist and co-CEO of Canadian AI startup Applied Brain Research, is confident that a new type of chip is about to change that.

"Many have suggested Moore's law is ending and that means we won't get 'more compute' cheaper using the same methods," Eliasmith says. He's betting on the proliferation of neuromorphics, a type of computer chip that is not yet widely known but is already being developed by several major chip makers.

Traditional CPUs process instructions based on clocked time: information is transmitted at regular intervals, as if managed by a metronome. By packing in digital equivalents of neurons, neuromorphics communicate in parallel (and without the rigidity of clocked time) using spikes, bursts of electric current that can be sent whenever needed. Just like our own brains, the chip's neurons communicate by processing incoming flows of electricity, each neuron able to determine from the incoming spike whether to send current out to the next neuron.
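The spiking behaviour described above can be illustrated with the textbook leaky integrate-and-fire model. This is a deliberately simplified sketch; the constants are arbitrary teaching values, not parameters of any real neuromorphic chip.

```python
# A textbook leaky integrate-and-fire (LIF) neuron: the membrane
# potential integrates incoming current, decays ("leaks") each step,
# and emits a spike only when it crosses a threshold. Output is a
# sparse stream of events, not a per-cycle clocked signal.
def simulate_lif(input_current, leak=0.9, threshold=1.0):
    """Return the time steps at which the neuron spikes."""
    v = 0.0
    spikes = []
    for t, i in enumerate(input_current):
        v = leak * v + i          # integrate input, with leak
        if v >= threshold:        # fire...
            spikes.append(t)
            v = 0.0               # ...and reset
    return spikes

# Constant drive of 0.3 per step: the potential must build up over
# several steps before each spike, so spikes are occasional events.
print(simulate_lif([0.3] * 12))  # [3, 7, 11]
```

Between spikes nothing is transmitted, which is the source of the power savings discussed below: downstream neurons only do work when an event arrives.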

What makes this a big deal is that these chips require far less power to process AI algorithms. For example, one neuromorphic chip made by IBM contains five times as many transistors as a standard Intel processor, yet consumes only 70 milliwatts of power. An Intel processor would use anywhere from 35 to 140 watts, or up to 2000 times more power.
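The arithmetic behind that "up to 2000 times" figure, spelled out with the numbers quoted above:

```python
# IBM's quoted neuromorphic figure vs. the quoted CPU power range.
neuromorphic_watts = 0.070                  # 70 milliwatts
cpu_low_watts, cpu_high_watts = 35.0, 140.0

print(round(cpu_low_watts / neuromorphic_watts))    # 500
print(round(cpu_high_watts / neuromorphic_watts))   # 2000
```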

Eliasmith points out that neuromorphics aren't new and that their designs have been around since the '80s. Back then, however, the designs required specific algorithms to be baked directly into the chip. That meant you'd need one chip for detecting motion, and a different one for detecting sound. None of the chips acted as a general processor in the way that our own cortex does.


This was partly because there hadn't been any way for programmers to design algorithms that could do much with a general-purpose chip. So even as these brain-like chips were being developed, building algorithms for them remained a challenge.

Eliasmith and his team are keenly focused on building tools that would allow a community of programmers to deploy AI algorithms on these new cortical chips.

Central to these efforts is Nengo, a compiler that developers can use to build their own algorithms for AI applications that will operate on general-purpose neuromorphic hardware. Compilers are software tools that translate the code programmers write into the complex instructions that get hardware to actually do something. What makes Nengo useful is its use of the familiar Python programming language, known for its intuitive syntax, and its ability to put the algorithms on many different hardware platforms, including neuromorphic chips. Pretty soon, anyone with an understanding of Python could be building sophisticated neural nets made for neuromorphic hardware.

"Things like vision systems, speech systems, motion control, and adaptive robotic controllers have already been built with Nengo," Peter Suma, a trained computer scientist and the other CEO of Applied Brain Research, tells me.

Perhaps the most impressive system built using the compiler is Spaun, a project that in 2012 earned international praise for being the most complex brain model ever simulated on a computer. Spaun demonstrated that computers could be made to interact fluidly with the environment, and perform human-like cognitive tasks like recognizing images and controlling a robot arm that writes down what it sees. The machine wasn't perfect, but it was a stunning demonstration that computers could one day blur the line between human and machine cognition. Recently, by using neuromorphics, most of Spaun has been run 9,000x faster, using less energy than it would on conventional CPUs, and by the end of 2017, all of Spaun will be running on neuromorphic hardware.

Eliasmith won NSERC's John C. Polanyi Award for that project (Canada's highest recognition for a breakthrough scientific achievement), and once Suma came across the research, the pair joined forces to commercialize these tools.

"While Spaun shows us a way towards one day building fluidly intelligent reasoning systems, in the nearer term neuromorphics will enable many types of context-aware AIs," says Suma. Suma points out that while today's AIs like Siri remain offline until explicitly called into action, we'll soon have artificial agents that are "always on" and ever-present in our lives.

"Imagine a Siri that listens and sees all of your conversations and interactions. You'll be able to ask it for things like, 'Who did I have that conversation about doing the launch for our new product in Tokyo with?' or 'What was that idea for my wife's birthday gift that Melissa suggested?'" he says.

When I raised concerns that some company might then have an uninterrupted window into even the most intimate parts of my life, I'm reminded that because the AI would be processed locally on the device, there's no need for that information to touch a server owned by a big company. And for Eliasmith, this always-on component is a necessary step towards true machine cognition. "The most fundamental difference between most available AI systems of today and the biological intelligent systems we are used to is the fact that the latter always operate in real-time. Bodies and brains are built to work with the physics of the world," he says.

Already, major players across the IT industry are racing to get their AI services into the hands of users. Companies like Apple, Facebook, Amazon, and even Samsung are developing conversational assistants they hope will one day become digital helpers.

With the rise of neuromorphics, and tools like Nengo, we could soon have AIs capable of exhibiting a stunning level of natural intelligence right on our phones.

See the original post here:

The future of AI is neuromorphic. Meet the scientists building digital 'brains' for your phone - Wired.co.uk

Posted in Ai | Comments Off on The future of AI is neuromorphic. Meet the scientists building digital ‘brains’ for your phone – Wired.co.uk

Artificial intelligence experts unveil Baxter the robot – who you control with your MIND – Express.co.uk

Posted: at 3:14 pm

The work, undertaken by artificial intelligence researchers, has been backed by funding from Boeing and the US National Science Foundation.

A team from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and Boston University has built a system that allows people to correct robot mistakes instantly with nothing more than their brains.

Using data from an electroencephalography (EEG) monitor that records brain activity, the system can detect if a person notices an error as a robot performs an object-sorting task.

The team's novel machine-learning algorithms enable the system to classify brain waves in the space of 10 to 30 milliseconds.
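Why a 10-to-30-millisecond budget is plausible can be seen with a toy sketch: at a typical 1 kHz EEG sampling rate, a 30 ms window holds only about 30 samples, so a trained classifier has very little data to process. The function below is purely illustrative (a real system learns a classifier over multi-channel EEG; a simple amplitude threshold stands in for that model here).

```python
# Illustrative stand-in for an ErrP classifier: a real system would apply a
# trained model to multi-channel EEG; here a threshold on mean amplitude
# plays that role, just to show how little data a 30 ms window contains.

def classify_window(samples, threshold=0.5):
    """Return True ("error perceived") if the mean amplitude of a short
    EEG window exceeds a (hypothetically trained) threshold."""
    return sum(samples) / len(samples) > threshold

# At 1 kHz, a 30 ms window is ~30 samples; here a 4-sample toy window:
window = [0.1, 0.9, 0.8, 0.7]          # toy amplitudes
print(classify_window(window))          # a large deflection reads as an ErrP
```

The point of the sketch is the input size, not the method: classifying a few dozen samples is cheap, which is what makes a real-time feedback loop feasible.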

While the system currently handles relatively simple binary-choice activities, the study's senior author says that the work suggests we could one day control robots in much more intuitive ways.

CSAIL director Daniela Rus told Express.co.uk: "Imagine being able to instantaneously tell a robot to do a certain action, without needing to type a command, push a button or even say a word.

"A streamlined approach like that would improve our abilities to supervise factory robots, driverless cars and other technologies we haven't even invented yet."

In the current study the team used a humanoid robot named Baxter from Rethink Robotics, the company led by former CSAIL director and iRobot co-founder Rodney Brooks.

The paper presenting the work was written by BU PhD candidate Andres F. Salazar-Gomez, CSAIL PhD candidate Joseph DelPreto, and CSAIL research scientist Stephanie Gil under the supervision of Rus and BU professor Frank H. Guenther.

The paper was recently accepted to the IEEE International Conference on Robotics and Automation (ICRA) taking place in Singapore this May.

Past work in EEG-controlled robotics has required training humans to think in a prescribed way that computers can recognise.

Rus' team wanted to make the experience more natural, and to do that they focused on brain signals called error-related potentials (ErrPs), which are generated whenever our brains notice a mistake.

As the robot indicates which choice it plans to make, the system uses ErrPs to determine if the human agrees with the decision.

Rus added: "As you watch the robot, all you have to do is mentally agree or disagree with what it is doing.

"You don't have to train yourself to think in a certain way - the machine adapts to you, and not the other way around."

The team found that ErrP signals are extremely faint, which means the system has to be fine-tuned enough both to classify the signal and to incorporate it into the feedback loop for the human operator.

In addition to monitoring the initial ErrPs, the team also sought to detect secondary errors that occur when the system doesn't notice the human's original correction.

Scientist Stephanie Gil said: "If the robot's not sure about its decision, it can trigger a human response to get a more accurate answer.

"These signals can dramatically improve accuracy, creating a continuous dialogue between human and robot in communicating their choices."
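The correction cycle described above can be sketched as a small simulation: the robot proposes one of two bins, an ErrP signals disagreement, and a secondary ErrP catches the case where the first correction was missed. Everything here is illustrative (the function and argument names are not from the paper's actual code).

```python
# Hypothetical sketch of the human-in-the-loop feedback cycle in a
# binary-choice sorting task. `errp` and `secondary_errp` stand in for the
# (imperfect) EEG classifier's readings across two rounds of feedback.

def resolve_choice(proposed, alternative, errp, secondary_errp):
    """Return the robot's final choice after at most two rounds of
    brain feedback."""
    if errp:                   # classifier saw the human disagree
        return alternative     # switch to the other bin
    if secondary_errp:         # first ErrP was missed; a secondary error
        return alternative     # signal triggers the correction
    return proposed            # no disagreement detected

# Robot proposes the "wire" bin; the human's ErrP says that is wrong:
print(resolve_choice("wire", "paint", errp=True, secondary_errp=False))
```

Because the task is binary, "disagree" fully determines the correct answer: the robot just switches to the other option, which is why a single faint yes/no signal is enough to steer it.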

While the system cannot yet recognise secondary errors in real time, Gil expects the model to be able to improve to upwards of 90 per cent accuracy once it can.

In addition, since ErrP signals have been shown to be proportional to how egregious the robots mistake is, the team believes that future systems could extend to more complex multiple-choice tasks.

Salazar-Gomez notes that the system could even be useful for people who can't communicate verbally: a task like spelling could be accomplished via a series of discrete binary choices, which he likens to an advanced form of the blinking that allowed stroke victim Jean-Dominique Bauby to write his memoir The Diving Bell and the Butterfly.
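How spelling reduces to binary choices is worth making concrete: repeatedly halving the alphabet means any of 26 letters can be selected with at most 5 yes/no answers (since 2^5 = 32 > 26). The sketch below is purely illustrative, with a lambda standing in for the detected agree/disagree brain response.

```python
import string

# Illustrative sketch: selecting a letter by binary halving. Each call to
# `answer(half)` stands in for one detected yes/no brain response
# ("is the target letter in this half of the remaining candidates?").

def pick_letter(answer):
    candidates = list(string.ascii_lowercase)
    while len(candidates) > 1:
        half = candidates[: len(candidates) // 2]
        candidates = half if answer(half) else candidates[len(half):]
    return candidates[0]

target = "m"
letter = pick_letter(lambda half: target in half)
print(letter)  # "m", reached after at most 5 binary responses
```

The same reduction works for any discrete menu: n options need only about log2(n) reliable binary signals, which is what makes a faint one-bit channel like an ErrP usable for communication.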

Wolfram Burgard, a professor of computer science at the University of Freiburg who was not involved in the research, added: "This work brings us closer to developing effective tools for brain-controlled robots and prostheses.

"Given how difficult it can be to translate human language into a meaningful signal for robots, work in this area could have a truly profound impact on the future of human-robot collaboration."

Go here to read the rest:

Artificial intelligence experts unveil Baxter the robot - who you control with your MIND - Express.co.uk

Posted in Artificial Intelligence | Comments Off on Artificial intelligence experts unveil Baxter the robot – who you control with your MIND – Express.co.uk

Google’s artificial intelligence can diagnose cancer faster than human doctors – Mirror.co.uk

Posted: at 3:14 pm

Making the decision on whether or not a patient has cancer usually involves trained professionals meticulously scanning tissue samples over weeks and months.

But Google's artificial intelligence (AI) supercomputer DeepMind may be able to do it much, much faster.

The search company has been working with the NHS since September last year to help speed up cancer detection. The software can now tell the difference between healthy and cancerous tissue, as well as discover if metastasis has occurred.

"Metastasis detection is currently performed by pathologists reviewing large expanses of biological tissues. This process is labour intensive and error-prone," explained Google in a white paper outlining the study.

"We present a framework to automatically detect and localise tumours as small as 100 × 100 pixels in gigapixel microscopy images sized 100,000 × 100,000 pixels.

"Our method leverages a convolutional neural network (CNN) architecture and obtains state-of-the-art results on the Camelyon16 dataset in the challenging lesion-level tumour detection task."

Such high-level image recognition was first developed for Google's driverless car programme, in order to help the vehicles scan for road obstructions.

Now the company has adapted it for the medical field and says it's more accurate than regular human doctors:

"At 8 false positives per image, we detect 92.4% of the tumours, relative to 82.7% by the previous best automated approach. For comparison, a human pathologist attempting exhaustive search achieved 73.2% sensitivity."
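The figures quoted above follow a standard convention: sensitivity is the fraction of annotated tumours the system finds, reported at a fixed budget of false positives per image. A minimal sketch (the counts below are illustrative, chosen only to reproduce the 92.4% headline number):

```python
# Hedged sketch of how the sensitivity figures quoted above are computed.
# Sensitivity = detected tumours / total annotated tumours, reported at a
# fixed false-positive budget. The counts here are illustrative.

def sensitivity(true_positives, total_tumours):
    return true_positives / total_tumours

# e.g. 924 of 1,000 annotated tumours found at 8 false positives per image:
print(round(sensitivity(924, 1000), 3))   # 0.924, i.e. the 92.4% above
```

Fixing the false-positive budget is what makes the comparison fair: any detector can reach 100% sensitivity by flagging everything, so the human's 73.2% and the model's 92.4% are only comparable at a stated error rate.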

Despite this, it's unlikely to replace human pathologists just yet. The software only looks for one thing - cancerous tissue - and is not able to pick up the other irregularities a human doctor could spot.

In order to perfect the study, Google was given access to 20 MRI and CT scans of 20 anonymous patients.

DeepMind's Mustafa Suleyman said: "This real-world application of artificial intelligence technology is exactly why we set up DeepMind.

"We hope this work could lead to real benefits for cancer patients across the country."

View post:

Google's artificial intelligence can diagnose cancer faster than human doctors - Mirror.co.uk

Posted in Artificial Intelligence | Comments Off on Google’s artificial intelligence can diagnose cancer faster than human doctors – Mirror.co.uk

Why You Should Let Artificial Intelligence Creep Into Your Business – Inc.com

Posted: at 3:14 pm

Signpost is a service that lets brick-and-mortar store owners publish incentives and promotions on its website. Last summer, the New York City-based company's founder and CEO, Stuart Wall, created a new app: the A.I.-centric Mia. Through its natural language generation capability, Mia crafts messages and sends them to prospects at opportune times. It tracks and analyzes a store's calls, emails, and credit card swipes, and then makes what it decides is the right pitch. "New customers often tell me they show up because of our five-star reviews, which I hear about through Mia," says Randy Jewart, owner of Resolution Gardens, a landscaping company in Austin, and a Mia subscriber. People contact small businesses to learn about products and services, Wall notes, "so why waste this valuable data that A.I. can use to market to them?"

Unlike traditional computing, which delivers precise solutions within defined parameters, A.I.--sometimes referred to as cognitive computing--teaches itself how to solve problems. "Instead of delivering only specificity, A.I.-centric programming generates millions of solutions, evaluating each for efficacy and then choosing the most viable and optimal ones," says Amir Husain, CEO and founder of Austin-based SparkCognition, which serves financial, aerospace, energy, and utility enterprises. If A.I. applications seem to be doing the thinking for you, they are.
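The "generate millions of solutions, evaluate each, keep the most viable" loop Husain describes can be sketched in a few lines. This is a generic generate-and-evaluate search on a toy objective, not any vendor's actual algorithm; all names and numbers are illustrative.

```python
import random

# Illustrative generate-and-evaluate loop: produce many candidate
# solutions, score each one, and keep the best. A toy objective stands in
# for a real efficacy measure.

def best_candidate(score, generate, n=10_000, seed=0):
    rng = random.Random(seed)                      # seeded for repeatability
    return max((generate(rng) for _ in range(n)), key=score)

# Toy problem: find a value close to 3.7 somewhere in [0, 10].
solution = best_candidate(score=lambda x: -abs(x - 3.7),
                          generate=lambda rng: rng.uniform(0, 10))
print(round(solution, 1))  # lands on ~3.7 after enough random candidates
```

The contrast with "traditional computing" is that nothing here encodes how to reach the answer: the program only states what a good answer looks like (the score) and lets volume of candidates do the rest.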

Manually finding your target customer--by searching and poring through income-level, interest-based, and geographical data--is labor-intensive and time-consuming. A.I. cuts to the chase. "For example, using a feed of three key pieces of information that the entrepreneur provides--a brief product-description text, images, and a price range--an A.I. system can zip through social media and other online outlets, looking for correlations between product and digital conversations," says Husain, author of The Sentient Machine, to be published this year. A.I. also finds the targets' contact information.

If you give it the green light, A.I.'s natural language processing technology then writes and sends a sales pitch, notes transmission time, and analyzes feedback. "You can almost hear an A.I. system going, 'Aha! I've cracked the code,' " says Husain, adding that A.I. constantly optimizes itself by making slight changes to the message.

One key reason for A.I.'s upsurge is entrepreneurs' free or inexpensive access to libraries such as IBM Watson, Google TensorFlow, and Microsoft Azure. These application programming interfaces (APIs) allow coders to build A.I. apps without starting from scratch.

Enterprise-focused A.I. companies are catering to all aspects of entrepreneurship. Last year Koru, in Seattle, launched Koru Hire, predictive hiring software that uses A.I. to match job applicants' skills and experience with profiles of a company's best current and past employees. It generates a "fit score" that indicates whether a candidate might replicate those successes. And in San Francisco, the Grid launched A.I.-centric website-design software. It analyzes the intended content--text and images--which it separates into components, creating an array of options so the user can "build" the site in minutes. The program then interacts with the user to modify layout, color, and typography.

Husain expects to see a proliferation of A.I.-centric marketing, sales, and other service startups focused on small and medium-size businesses. On tap for this summer: Cinch, from Cinch Financial, in Boston, which uses A.I. to analyze personal money data and recommends financial strategies, along with behavioral changes and new products that coincide with those behaviors.
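One way to picture a "fit score" like the one described above is as a similarity between a candidate's skill vector and the average profile of a company's best employees. The sketch below uses cosine similarity for this; the features, numbers, and scoring are hypothetical, not Koru's actual model.

```python
import math

# Hypothetical "fit score" sketch: cosine similarity between a candidate's
# skill ratings and the average profile of top past hires. Illustrative
# only -- not the actual Koru Hire algorithm.

def fit_score(candidate, top_profiles):
    avg = [sum(col) / len(top_profiles) for col in zip(*top_profiles)]
    dot = sum(c * a for c, a in zip(candidate, avg))
    norm = (math.sqrt(sum(c * c for c in candidate))
            * math.sqrt(sum(a * a for a in avg)))
    return dot / norm          # 1.0 = identical direction, 0.0 = unrelated

top = [[5, 4, 3], [4, 5, 4]]   # skill ratings of the best past hires
print(round(fit_score([5, 4, 4], top), 2))
```

A score near 1.0 means the candidate's profile points the same way as the composite of successful employees, which is roughly the signal such tools surface to recruiters.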

The biggest misconception about A.I. is that it's robots with human faces sitting at remote desks. "A.I. is nothing more than an add-on technology--spice and flair--to an otherwise conventional system, such as a traditional travel-reservation site that, because of A.I., can now converse with a human," says Bruce W. Porter, an A.I. researcher and computer science professor at the University of Texas, Austin. Porter emphasizes that future breakthroughs will not be 100 percent A.I. "A.I. will likely provide a 10 percent product- or service-performance boost," he says. That is, in fact, huge. Firms that fail to make the A.I. leap, he says, may fail to have customers.

Not all searches are as simple as typing a few keywords and letting Google take over. Entrepreneurs often need deeper and more complicated excavations--for patent and trademark data, for example--and that, in turn, often means a hefty legal budget to pay a highly trained human. Porter foresees that within five years, many companies will offer services letting consumers with no expertise in A.I. or the specific knowledge fields conduct their own A.I.-based data retrieval. Count on industry disruption, he says, as this type of A.I. application will leapfrog current data-retrieval-service providers.

Because it's able to generate natural language, A.I. is an exceptional tool for helping entrepreneurs assemble contracts, as opposed to buying them off the shelf at, say, LegalZoom. A.I. applications will converse with--by text and, ultimately, voice--and tease information out of humans that will become components of formal agreements, such as details about fee payments and product returns. Porter anticipates users will pay to access cloud-based A.I. computer systems to produce such documents: "A.I.-centric startups, because they don't require a human in the loop and won't need to hire staffers, can offer their services at a very low cost, especially given an anticipated large volume of customers and business competition."

See the article here:

Why You Should Let Artificial Intelligence Creep Into Your Business - Inc.com

Posted in Artificial Intelligence | Comments Off on Why You Should Let Artificial Intelligence Creep Into Your Business – Inc.com

Good, Bad & Ugly! Artificial Intelligence for Humans is All of This & More – Entrepreneur

Posted: at 3:14 pm

You're reading Entrepreneur India, an international franchise of Entrepreneur Media.

Even though artificial intelligence may have positive effects, why create it if it has the potential to backfire? Many big tech companies are increasingly adopting artificial intelligence to make their businesses more efficient. In January 2015, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts signed an open letter calling for research on the societal impacts of AI. Artificial intelligence, chatbots, self-driving cars and robots often seem like part of science fiction movies, but in reality they have already started affecting our daily lives. For example, companies like Wipro and Infosys are deploying AI platforms to do the job of engineers.

There are always good and bad sides to every new technology, and AI is no exception. Given below are three examples of the good, the bad and the ugly of artificial intelligence.

When Siri Failed to Understand Typical Accents

When Apple released its digital AI assistant, Siri, in October 2011, iPhone users had high expectations for the new bot. Yet despite its growing popularity, Siri was often criticized for its problems and technical glitches, and it has not been well received by some English speakers with distinctive accents. The personal assistant's failure to understand different accents clearly illustrates the limitations of present AI technology. If today's artificial intelligence can't understand the changing needs of humans, how will it control our lives?

Microsoft AI Chatbot Tay's Disastrous Debut:

Tay, an artificial intelligence chatbot, was originally released by Microsoft Corporation via Twitter on March 23, 2016. It caused controversy when the bot began to post inflammatory and offensive tweets through its Twitter account, forcing Microsoft to shut down the service only 16 hours after its launch. According to the company, the error was caused by trolls who "attacked" the service, as the bot made replies based on its interactions with people on Twitter.

Shopping via Voice with Amazon's Alexa:

Amazon's Alexa seemed to be the star of last year, as Alexa devices topped Amazon's best-seller list. Alexa is an intelligent personal assistant developed by Amazon Lab126 and made popular by the Amazon Echo. It is capable of voice interaction, music playback, making to-do lists, setting alarms, streaming podcasts, playing audiobooks, and providing weather, traffic, and other real-time information. Alexa can also control several smart devices, serving as a home automation hub.

Currently, interaction with Alexa is available only in English and German. What sets Alexa apart from other AI assistants is its personal shopping feature: the device is directly linked to the e-commerce site's catalog, which allows customers to order products by voice.

Read the rest here:

Good, Bad & Ugly! Artificial Intelligence for Humans is All of This & More - Entrepreneur

Posted in Artificial Intelligence | Comments Off on Good, Bad & Ugly! Artificial Intelligence for Humans is All of This & More – Entrepreneur