Daily Archives: March 1, 2017

Oculus Cuts Prices of Its Virtual Reality Gear – New York Times

Posted: March 1, 2017 at 9:15 pm


On Wednesday, Oculus announced that it has dropped the price of a package featuring its Rift headset and Touch controllers, which allow players to use their hands inside virtual reality, by 25 percent to $598 from $798. Oculus will also reduce the ...
Facebook's Oculus cuts price of virtual reality set by $200 – Reuters
Facebook's Oculus Cuts Virtual Reality Headset Price to Spur Sales – Bloomberg
Oculus cuts price on virtual reality gear – USA TODAY

Go here to read the rest:

Oculus Cuts Prices of Its Virtual Reality Gear - New York Times

Posted in Virtual Reality | Comments Off on Oculus Cuts Prices of Its Virtual Reality Gear – New York Times

The Strange Story Of When George Saunders First Met Virtual Reality – Co.Create

Posted: at 9:15 pm

Graham Sack has never been the sort of guy content to do one thing. He was for many years an actor, on stage and in films. He's a PhD candidate in comparative literature at Columbia. He recently sold a screenplay about a math genius who gamed the Texas lottery. Combining his interests in literature and movies, he had long dreamed of adapting something by the writer George Saunders, whose work Sack fell in love with a decade ago when he read a dystopian Saunders story in The New Yorker.

With so many interests, Sack is the sort of person who wouldn't think twice about reinventing himself as a director of virtual reality films, which is what he decided to do a little over a year ago. In late 2015, he and his girlfriend saw someone in a New York café fiddling with a VR headset, which they'd never seen before. They approached the stranger, who, it turned out, was visiting New York from Austin, befriended him, and tried out the headset. Within a few months, Sack decided to fly down to Austin to visit his new friend and attempt to shoot something himself.

While in Austin, Sack was sitting in a café paging through a local newsletter when he saw that George Saunders was scheduled to speak at Book People, a local bookstore. Sack checked the time: Saunders's talk was actually happening that very minute.

Lincoln in the Bardo VR experience

It all felt like kismet. Some of Sack's favorite Saunders stories had seemed to anticipate emerging virtual reality and augmented reality technologies. Sack wondered: had the author himself actually sampled these technologies? Sack rushed back to his Airbnb, grabbed the VR headset he'd been playing with, and called an Uber to Book People.

By the time he arrived, Sack had missed the talk entirely, but fans were in line to meet Saunders and have their books signed. Sack filed into the rear of the line, his VR headset in tow.

Finally, it was Sack's turn to speak to Saunders. Sack introduced himself quickly, and asked: Might Saunders like to sample virtual reality?

In case you haven't read George Saunders, know that his short stories are infused with techno-skepticism. Many of them present dystopian science fiction worlds where people are manipulated by, or manipulate each other with, various forms of digital machinery. So approaching the author to ask him to put on a scary VR headset was a big ask.

"I think he was curious, but very off-put at the same time," recalls Sack. What's more, Sack was proposing that Saunders try VR for the first time in a public place (the managers of Book People were still milling about). When Saunders hesitated, Sack explained: "You are already doing virtual reality." The technology that Saunders portrayed in his stories was here. Wasn't it time he sampled it?

Saunders acquiesced. Soon, Sack was fumbling nervously to get the Samsung headset onto his favorite living author. After a few false starts with the menu ("super awkward," recalls Sack), he managed to boot up his favorite VR film, Chris Milk's "Evolution of Verse," a poetic short whose highlight may be the moment a train charges at the camera before transforming into a flock of birds.

At last, the film was running. One of America's foremost literary figures now stood with a headset strapped to his face in the back of an Austin bookstore, beside the table where he had been signing books a few moments before. "Ah jeez . . ." Saunders said as the VR film progressed. "Oh boy, it's coming right at me," he said, bumping into the table.

The film ended, and Sack helped Saunders take off the headset. Sack waited anxiously for Saunders's verdict.

"What else should I see?" asked the author.

Sack told Saunders he would be eager to collaborate sometime. They traded emails. Weeks went by. "It was basically radio silence for a month," recalls Sack.

Then, suddenly, Sack got an email from Penguin Random House. They said that Saunders had been thinking a lot about the VR, and invited Sack in for a talk.

Sack assumed he'd have the chance to pitch a VR adaptation of a Saunders short story, so he spent weeks combing through every short story Saunders had written, jotting down ideas about which ones might work in the medium. But when Sack got to his meeting at Penguin Random House, they sprang a surprising idea on him: Would Sack be interested in making a companion VR short for Saunders's forthcoming debut novel, Lincoln in the Bardo?

Lincoln in the Bardo

Now it was Sack who was slightly hesitant. Saunders's dystopian short fiction was a natural fit for VR, but Lincoln in the Bardo was a period piece (about, among other things, Abraham Lincoln's mourning the death of his son, Willie). Was it even suited to a medium of the future like virtual reality?

Sack took the novel home and started reading it. And soon, he came to an early, major scene in the novel, where Lincoln cradles the dead body of his son, a sort of paternal Pietà. "I read it, and the tears came, and I was like, I want to do this scene," says Sack. The scene was highly visual, rooted in one place, and had a theatrical quality, all elements Sack had come to feel VR excelled at handling.

Sack agreed to do the film, entering into a production partnership with the New York Times. (Though the Times has been doing VR journalism for over a year, this is its first foray into scripted, fictional VR.) After navigating a complex, precedent-setting contract negotiation (never before had a novel launched with a VR tie-in), Sack worked on the film through the summer and fall. Finally, by November, Sack had a rough cut of the film to show Saunders.

They met in a New York hotel: only their second in-person encounter.

Again, Sack fumbled to put the headset on Saunders. And as Saunders watched the film, Sack scrutinized the author's every reaction. He was particularly nervous about what Saunders would think of the moment in the short film where Lincoln cradles Willie's body. Would he find it moving, or maudlin?

Sack had by now tested the film on enough people that he knew exactly where viewers were in the film based on the subtlest movements of their faces. As Saunders approached the big moment with Willie, Sack braced himself.

Finally, the author spoke. "I'm fucking crying in here man," he said.

And indeed, when the film's last moments were over and Saunders removed the headset, his eyes were red. He said that watching Sack's film helped him relive the pathos he'd felt when originally composing the Lincoln-and-Willie Pietà.

You can experience the scene now, too, in various forms. Lincoln in the Bardo itself went on sale last week, along with a companion audiobook (featuring performances from Nick Offerman and others). The VR companion piece can be found via the NYT VR app, or experienced less immersively on YouTube.

"Honestly, it's the most fulfilling project I've ever been involved in," says Sack now.

Read the original:

The Strange Story Of When George Saunders First Met Virtual Reality - Co.Create

Posted in Virtual Reality | Comments Off on The Strange Story Of When George Saunders First Met Virtual Reality – Co.Create

Virtual reality kit visualizes safer living space for those with dementia – Construction Dive

Posted: at 9:15 pm

Dive Brief:

Working in conjunction with Wireframe Immersive and Australia's Dementia Centre, Glasgow, Scotland-based architect David Burgher, of Aitken Turnbull Architects, has developed a kit of virtual reality tools to enable design professionals to visualize the perceptual impairments related to dementia and old age in the built environment, according to Curbed.

The Virtual Reality Empathy Platform comprises a laptop, VR headset, camera and controller tool. It is intended to provide designers with an immersive component to help improve lighting, floor plans and overall design of care facilities and living environments.

The portfolio of virtual reality design cases continues to expand as contractors, designers and technologists probe the "if you could only see what I see" potential of VR technology. Burgher's VR kit expands on research conducted at the University of Cambridge, in the U.K., where gloves and impairment goggles were used to simulate arthritis and vision impairment. Both efforts aim to accelerate inclusive design and create building interiors better suited to individuals with dementia as well as other mobility and perception impairments.

Virtual reality is also gaining wider adoption as a safety training tool for the contractors charged with building complex healthcare facilities and other large commercial projects. In September 2016, Bechtel rolled out an immersive safety training program at the company's innovation center in Houston, using a SafeScan VR program from New York City-based Human Condition Safety to repeatedly expose workers to simulated dangerous or intensive environments.

Meanwhile, hardware manufacturers along with researchers at MIT are working on ways to make the VR experience even more immersive by removing the bulky wires and clumsy interfaces traditionally needed for the high rates of data transfer to headsets.

Read the rest here:

Virtual reality kit visualizes safer living space for those with dementia - Construction Dive

Posted in Virtual Reality | Comments Off on Virtual reality kit visualizes safer living space for those with dementia – Construction Dive

Clients explore company’s landscape designs with virtual reality – Total Landscape Care

Posted: at 9:15 pm

Urban Ecosystems' designs are built in SketchUp and then rendered in the video game engine Unity. Photo: Urban Ecosystems

Have you ever struggled to sell a job to a client because they just don't see it?

Sometimes, even when you have drawn a top-down view of their future landscape, rendered it in 3D software, and talked your client through the vision you have for the space, they are still skeptical.

One method that may become a standard tool of landscape design is virtual reality, and Urban Ecosystems, based in St. Paul, Minnesota, is already putting it to use.

The integration of virtual reality (VR) into their business started by collaborating with a programmer and a video game designer.

"I was interested in the interactive component," said Samuel Geer, director of operations for Urban Ecosystems. "I dedicated some energy into seeing what the process would be to bring it (3D models) into a virtual environment. A lot of it can be automated. It wasn't that much extra effort."

Geer says the company creates the environments in SketchUp and then uses the video game engine Unity to add the ability to explore and manipulate the environment. Urban Ecosystems uses VR technology that is custom designed for landscape architecture and design.

Users are able to toggle between different design options to help decide on features from a cost perspective. Photo: Urban Ecosystems

The software is capable of rendering large, complex designs such as parks and golf courses, as well as residential landscapes. The space can be filled with people to help determine how it works when crowded, and it can be viewed in daytime and nighttime settings.

The amount of time it takes to create a VR-compatible landscape design can vary.

"It depends on the project and what you're trying to do, small scale versus a larger, more complex environment," Geer said. "It's going to take longer depending on how many bells and whistles you put into it."

As of right now, Geer hasn't heard of other landscaping companies using this tool, but he notes that architecture firms in their area have started to adopt VR.

Customers often appreciate getting to sneak a peek of what their dream yard will look like, and seeing it in relation to the rest of their home helps them see how a new element would inhabit the space.

"It helps communicate the cost dimensions," Geer said. "Being able to look at the materials installed helps them make those decisions. There's a lot of opportunity to combine decision-making criteria with an aesthetic decision. You can very clearly present that information to the client."

One of the benefits of VR is the ability to look at how the design interacts with the space. Users can see where a view needs to be preserved and which style fits best with the different design options they can switch between.

"It helps them feel more in control of the process," Geer said. "It lets them feel like they're in the driver's seat."

Geer believes the interactive nature of VR will help it eventually become the future of presenting landscape designs.

"It becomes a hands-on experience, and people's personal interests and tastes can be expressed more eloquently compared to seeing a top-down design of the space," he said.

Below is a video of Urban Ecosystems demonstrating its VR designs with KARE11.com.

The rest is here:

Clients explore company's landscape designs with virtual reality - Total Landscape Care

Posted in Virtual Reality | Comments Off on Clients explore company’s landscape designs with virtual reality – Total Landscape Care

Line’s AI Bets Pit Japanese Messenger Against Amazon and Google – Bloomberg

Posted: at 9:15 pm

Published March 1, 2017, 11:00 AM EST; updated 7:16 PM EST

Line Corp. outlined an ambitious artificial-intelligence strategy that promises to transform Japan's most popular messaging service while pitting it against Google, Facebook Inc. and Amazon.com Inc.

The company is launching a suite of AI software tools to power an online digital assistant capable of conversing in Japanese and Korean, Line said at the Mobile World Congress in Barcelona on Wednesday. Users can talk to the assistant, getting the latest weather and news through either a dedicated smartphone app or a tabletop speaker called Wave that's similar to Amazon's Echo. Both will be available in early summer.

Clova's smart speaker WAVE.

Source: Line Corp.

Silicon Valley companies are exploring ways of extending their reach beyond smartphones, with Amazon and Google both selling AI-powered digital assistants not unlike Wave. Facebook, whose Messenger and WhatsApp compete with Line, has launched a chatbot platform and plowed more than $2 billion into virtual reality. But Line believes it can leverage local knowledge to beat tech giants in its home country and markets where its messaging service is popular, including South Korea, Taiwan, Thailand and Indonesia.

"There is a shift toward post-smartphone, post-touch technologies," Chief Executive Officer Takeshi Idezawa said in an interview. "These connected devices will permeate even deeper into our daily lives and therefore must match local needs, languages and cultures even more closely."

Line developed its AI platform with parent Naver Corp. The South Korean company operates that country's dominant search engine, displacing Google in a testament to the power of local knowledge, Idezawa said.

Tokyo-based Line is already much more than a messaging service on its home turf, with people using the app to read news, hail taxis and find part-time jobs. That wealth of content and interaction in local languages gives Line an advantage over larger rivals because AI is only as good as the data on which it's trained, Idezawa said.

Line is also open to acquisitions and partnerships in the field. The company is buying a stake in Vinclu, a Tokyo-based Internet of Things startup. It invested in SoundHound, a U.S.-based voice recognition company, together with Naver last month. And it's considering joining forces with Sony Corp. to develop smart devices.


Line's shares rose as much as 2.1 percent in Tokyo on Thursday. The stock is still down about 2 percent this year, amid concerns about stagnating growth at a company that pulled off 2016's biggest technology public offering. Idezawa is under pressure to find new sources of revenue on what is otherwise a free messaging service, as subscriber additions and revenue from games and digital stickers slow.

For now, Line has pinned its hopes on advertising and as-yet unannounced products, while AI remains a more distant prospect.

"It's one of the longer-term bets," Idezawa said. "The point is to secure a position early on. People will probably begin to use these services more regularly three to five years from now."

More:

Line's AI Bets Pit Japanese Messenger Against Amazon and Google - Bloomberg

Posted in Ai | Comments Off on Line’s AI Bets Pit Japanese Messenger Against Amazon and Google – Bloomberg

Google’s anti-trolling AI can be defeated by typos, researchers find [Updated] – Ars Technica

Posted: at 9:15 pm

Visit any news organization's website or any social media site, and you're bound to find some abusive or hateful language being thrown around. As those who moderate Ars' comments know, trying to keep a lid on trolling and abuse in comments can be an arduous and thankless task: when done too heavily, it smacks of censorship and suppression of free speech; when applied too lightly, it can poison the community and keep people from sharing their thoughts out of fear of being targeted. And human-based moderation is time-consuming.

Both of these problems are the target of a project by Jigsaw, an Alphabet startup effort spun off from Google. Jigsaw's Perspective project is an application interface, currently focused on moderating online conversations, that uses machine learning to spot abusive, harassing, and toxic comments. The AI applies a "toxicity score" to comments, which can be used either to aid moderation or to reject comments outright, giving the commenter feedback about why their post was rejected. Jigsaw is currently partnering with Wikipedia and The New York Times, among others, to implement the Perspective API to assist in moderating reader-contributed content.

But that AI still needs some training, as researchers at the University of Washington's Network Security Lab recently demonstrated. In a paper published on February 27, Hossein Hosseini, Sreeram Kannan, Baosen Zhang, and Radha Poovendran demonstrated that they could fool the Perspective AI into giving a low toxicity score to comments that it would otherwise flag by simply misspelling key hot-button words (such as "iidiot") or inserting punctuation into the word ("i.diot" or "i d i o t," for example). By gaming the AI's parsing of text, they were able to get scores that would allow comments to pass a toxicity test that would normally be flagged as abusive.
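The character-level attacks the researchers describe are simple enough to sketch. The snippet below is an illustrative reconstruction, not the paper's actual code; the word list and the dot-insertion strategy are assumptions chosen for the example:

```python
import re

# Hypothetical illustration of the adversarial perturbations described in
# the paper: subtly misspell "hot-button" words so a keyword-sensitive
# toxicity model no longer recognizes them, while humans still can.

HOT_WORDS = {"idiot", "stupid"}  # example list, not the researchers' own

def perturb_word(word: str) -> str:
    """Insert a dot after the first character: 'idiot' -> 'i.diot'."""
    return word[0] + "." + word[1:]

def adversarial_rewrite(comment: str) -> str:
    """Perturb only the flagged words in a comment, leaving the rest intact."""
    def repl(match: re.Match) -> str:
        w = match.group(0)
        return perturb_word(w) if w.lower() in HOT_WORDS else w
    return re.sub(r"[A-Za-z]+", repl, comment)

print(adversarial_rewrite("You are an idiot"))  # -> "You are an i.diot"
```

A human reader still parses the perturbed word instantly, which is exactly why these "adversarial examples" are hard to defend against with keyword-level training alone.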

"One type of the vulnerabilities of machine learning algorithms is that an adversary can change the algorithm output by subtly perturbing the input, often unnoticeable by humans," Hosseini and his co-authors wrote. "Such inputs are called adversarial examples, and have been shown to be effective against different machine learning algorithms even when the adversary has only a black-box access to the target model."

The researchers also found that Perspective would flag comments that were not abusive in nature but used keywords that the AI had been trained to see as abusive. The phrases "not stupid" or "not an idiot" scored nearly as high on Perspective's toxicity scale as comments that used "stupid" and "idiot."

These sorts of false positives, coupled with easy evasion of the algorithms by adversaries seeking to bypass screening, underscore the basic problem with any sort of automated moderation and censorship. Update: CJ Adams, Jigsaw's product manager for Perspective, acknowledged the difficulty in a statement he sent to Ars:

It's great to see research like this. Online toxicity is a difficult problem, and Perspective was developed to support exploration of how ML can be used to help discussion. We welcome academic researchers to join our research efforts on Github and explore how we can collaborate together to identify shortcomings of existing models and find ways to improve them.

Perspective is still a very early-stage technology, and as these researchers rightly point out, it will only detect patterns that are similar to examples of toxicity it has seen before. We have more details on this challenge and others on the Conversation AI research page. The API allows users and researchers to submit corrections like these directly, which will then be used to improve the model and ensure it can understand more forms of toxic language, and evolve as new forms emerge over time.

More here:

Google's anti-trolling AI can be defeated by typos, researchers find [Updated] - Ars Technica

Posted in Ai | Comments Off on Google’s anti-trolling AI can be defeated by typos, researchers find [Updated] – Ars Technica

Facebook enlists AI tech to help prevent suicide – Mashable

Posted: at 9:15 pm


The AI tool looks at words in the post and, especially, comments from friends such as "Are you okay?" and "I'm here to help" that may indicate someone is struggling. This part of the system won't auto-report those at risk to Facebook, but will ...
Facebook testing AI that helps spot suicidal users – Engadget
Can AI save a life? Facebook thinks so – TechRepublic
Facebook is testing AI tools to help prevent suicide – New Scientist

See the original post here:

Facebook enlists AI tech to help prevent suicide - Mashable

Posted in Ai | Comments Off on Facebook enlists AI tech to help prevent suicide – Mashable

What Does An AI Chip Look Like? – SemiEngineering

Posted: at 9:15 pm

Depending upon your point of reference, artificial intelligence will be the next big thing or it will play a major role in all of the next big things.

This explains the frenzy of activity in this sector over the past 18 months. Big companies are paying billions of dollars to acquire startup companies, and even more for R&D. In addition, governments around the globe are pouring additional billions into universities and research houses. A global race is underway to create the best architectures and systems to handle the huge volumes of data that need to be processed to make AI work.

Market projections are rising accordingly. Annual AI revenues are predicted to reach $36.8 billion by 2025, according to Tractica. The research house says it has identified 27 different industry segments and 191 use cases for AI so far.

Fig. 1. AI revenue growth projection. Source: Tractica

But dig deeper and it quickly becomes apparent there is no single best way to tackle AI. In fact, there isn't even a consistent definition of what AI is or of the data types that will need to be analyzed.

"There are three problems that need to be addressed here," said Raik Brinkmann, president and CEO of OneSpin Solutions. "The first is that you need to deal with a huge amount of data. The second is to build an interconnect for parallel processing. And the third is power, which is a direct result of the amount of data that you have to move around. So you really need to move from a von Neumann architecture to a data flow architecture. But what exactly does that look like?"

So far there are few answers, which is why the first chips in this market include various combinations of off-the-shelf CPUs, GPUs, FPGAs and DSPs. While new designs are under development by companies such as Intel, Google, Nvidia, Qualcomm and IBM, it's not clear whose approach will win. It appears that at least one CPU always will be required to control these systems, but as streaming data is parallelized, co-processors of various types will be required.

Much of the processing in AI involves matrix multiplication and addition. Large numbers of GPUs working in parallel offer an inexpensive approach, but the penalty is higher power. FPGAs with built-in DSP blocks and local memory are more energy efficient, but they generally are more expensive. This also is a segment where software and hardware really need to be co-developed, but much of the software is far behind the hardware.
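The workload in question is easy to picture. As a minimal sketch (shapes and values here are arbitrary examples), a fully connected neural-network layer reduces to one matrix-vector multiply plus an add, which is exactly the multiply-accumulate pattern that GPUs and FPGA DSP blocks accelerate:

```python
import numpy as np

# Minimal sketch of the core AI workload: a fully connected layer
# computes y = W @ x + b, i.e. matrix multiplication plus addition.

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))   # weights: 4 outputs, 8 inputs
x = rng.standard_normal(8)        # input activations
b = np.zeros(4)                   # biases

y = W @ x + b                     # one matrix-vector multiply plus add
print(y.shape)                    # (4,)
```

Stacking many such layers, and batching many inputs, turns inference into billions of these multiply-accumulate operations, which is why the architectural race centers on moving the data to them efficiently.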

"There is an enormous amount of activity in research and educational institutions right now," said Wally Rhines, chairman and CEO of Mentor Graphics. "There is a new processor development race. There are also standard GPUs being used for deep learning, and at the same time there are a whole bunch of people doing work with CPUs. The goal is to make neural networks behave more like the human brain, which will stimulate a whole new wave of design."

Vision processing has received most of the attention when it comes to AI, largely because Tesla introduced self-driving capabilities nearly 15 years before the expected rollout of autonomous vehicles. That has opened a huge market for this technology, and for the chip and overall system architectures needed to process data collected by image sensors, radar and LiDAR. But many economists and consulting firms are looking beyond this market to how AI will affect overall productivity. A recent report from Accenture predicts that AI will more than double GDP for some countries (see Fig. 2 below). While that is expected to cause significant disruption in jobs, the overall revenue improvement is too big to ignore.

Fig. 2: AIs projected impact.

Aart de Geus, chairman and co-CEO of Synopsys, points to three waves of electronics: computation and networking, mobility, and digital intelligence. In the latter category, the focus shifts from the technology itself to what it can do for people.

"You'll see processors with neural networking IP for facial recognition and vision processing in automobiles," said de Geus. "Machine learning is the other side of this. There is a massive push for more capabilities, and the state of the art is doing this faster. This will drive development to 7nm and 5nm and beyond."

Current approaches

Vision processing for self-driving cars dominates much of the current research in AI, but the technology also has a growing role in drones and robotics.

"For AI applications in imaging, the computational complexity is high," said Robert Blake, president and CEO of Achronix. "With wireless, the mathematics is well understood. With image processing, it's like the Wild West. It's a very varied workload. It will take 5 to 10 years before that market shakes out, but there certainly will be a big role for programmable logic because of the need for variable-precision arithmetic that can be done in a highly parallel fashion."

FPGAs are very good at matrix multiplication. On top of that, programmability adds some necessary flexibility and future-proofing into designs, because at this point it is not clear where the so-called intelligence will reside in a design. Some of the data used to make decisions will be processed locally, some will be processed in data centers. But the percentage of each could change for each implementation.

That has a big impact on AI chip and software design. While the big picture for AI hasn't changed much (most of what is labeled AI is closer to machine learning than true AI), the understanding of how to build these systems has changed significantly.

"With cars, what people are doing is taking existing stuff and putting it together," said Kurt Shuler, vice president of marketing at Arteris. "For a really efficient embedded system to be able to learn, though, it needs a highly efficient hardware system. There are a few different approaches being used for that. If you look at vision processing, what you're doing is trying to figure out what it is that a device is seeing and how you infer from that. That could include data from vision sensors, LiDAR and radar, and then you apply specialized algorithms. A lot of what is going on here is trying to mimic what's going on in the brain using deep and convolutional neural networks."

Where this differs from true artificial intelligence is that the current state of the art is being able to detect and avoid objects, while true artificial intelligence would be able to add a level of reasoning, such as how to get through a throng of people crossing a street, or whether a child chasing a ball is likely to run into the street. In the former, judgments are based on input from a variety of sensors, massive data crunching and pre-programmed behavior. In the latter, machines would be able to make value judgments, such as weighing the many possible consequences of swerving to avoid the child, and which is the best choice.

"Sensor fusion is an idea that comes out of aircraft in the 1990s," said Shuler. "You get it into a common data format where a machine can crunch it. If you're in the military, you're worried about someone shooting at you. In a car, it's about someone pushing a stroller in front of you. All of these systems need extremely high bandwidth, and all of them have to have safety built into them. And on top of that, you have to protect the data, because security is becoming a bigger and bigger issue. So what you need is both computational efficiency and programming efficiency."

This is what is missing in many of the designs today because so much of the development is built with off-the-shelf parts.

"If you optimize the network, optimize the problem, minimize the number of bits and utilize hardware customized for a convolutional neural network, you can achieve a 2X to 3X order of magnitude improvement in power reduction," said Samer Hijazi, senior architect at Cadence and director of the company's Deep Learning Group. "The efficiency comes from software algorithms and hardware IP."
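One concrete form of "minimizing the number of bits" is post-training quantization. The sketch below is a hypothetical illustration, not Cadence's method: it maps float32 weights to int8 with a single symmetric scale, shrinking each weight from 32 bits to 8 with only a small reconstruction error.

```python
import numpy as np

# Illustrative symmetric linear quantization of float32 weights to int8.
# Fewer bits per weight means less data to move, which is where much of
# the power savings in custom CNN hardware comes from.

def quantize_int8(w: np.ndarray):
    scale = np.abs(w).max() / 127.0          # map the largest weight to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.0, 0.25, 0.0], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
print(np.abs(w - w_hat).max())  # small reconstruction error
```

Real deployments add per-channel scales, calibration data, and quantization-aware training, but the bit-width-versus-accuracy trade-off is the same idea.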

Google is attempting to alter that formula. The company has developed Tensor processing units (TPUs), which are ASICs created specifically for machine learning. And in an effort to speed up AI development, the company in 2015 turned its TensorFlow software into open source.

Fig. 3: Googles TPU board. Source: Google.

Others have their own platforms. But none of these is expected to be the final product. This is an evolution, and no one is quite sure how AI will evolve over the next decade. That's partly due to the fact that use cases are still being discovered for this technology. And what works in one area, such as vision processing, is not necessarily good for another application, such as determining whether an odor is dangerous or benign, or possibly a combination of both.

"We're shooting in the dark," said Anush Mohandass, vice president of marketing and business development at NetSpeed Systems. "We know how to do machine learning and AI, but how they actually work and converge is unknown at this point. The current approach is to have lots of compute power and different kinds of compute engines (CPUs, and DSPs for neural networking types of applications), and you need to make sure it works. But that's just the first generation of AI. The focus is on compute power and heterogeneity."

That is expected to change, however, as the problems being solved become more targeted. Just as with the early versions of IoT devices, no one quite knew how various markets would evolve, so systems companies threw in everything and rushed products to market using existing chip technology. In the case of smart watches, the result was a battery that only lasted several hours between charges. As new chips are developed for those specific applications, power and performance are balanced through a combination of more targeted functionality, more intelligent distribution of how processing is parsed between a local device and the cloud, and a better understanding of where the bottlenecks are in a design.

"The challenge is to find the bottlenecks and constraints you didn't know about," said Bill Neifert, director of models technology at ARM. "But depending on the workload, the processor may interact differently with the software, which is almost inherently a parallel application. So if you're looking at a workload like financial modeling or weather mapping, the way each of those stresses the underlying system is different. And you can only understand that by probing inside."

He noted that the problems being solved on the software side need to be looked at from a higher level of abstraction, because that makes them easier to constrain and fix. That's one key piece of the puzzle. As AI makes inroads into more markets, all of this technology will need to evolve to achieve the same kinds of efficiencies that the tech industry in general, and the semiconductor industry in particular, have demonstrated in the past.

"Right now we find architectures are struggling if they only handle one type of computing well," said Mohandass. "But the downside with heterogeneity is that the whole divide-and-conquer approach falls apart. As a result, the solution typically involves over-provisioning or under-provisioning."

New approaches
As more use cases are established for AI beyond autonomous vehicles, adoption will expand.

This is why Intel bought Nervana last August. Nervana develops 2.5D deep learning chips that utilize a high-performance processor core, moving data across an interposer to high-bandwidth memory. The stated goal is a 100X reduction in time to train a deep learning model as compared with GPU-based solutions.

Fig. 4: Nervana AI chip. Source: Nervana

"These are going to look a lot like high-performance computing chips, which are basically 2.5D chips and fan-out wafer-level packaging," said Mike Gianfagna, vice president of marketing at eSilicon. "You will need massive throughput and ultra-high-bandwidth memory. We've seen some companies looking at this, but not dozens yet. It's still a little early. And when you're talking about implementing machine learning and adaptive algorithms, and how you integrate those with sensors and the information stream, this is extremely complex. If you look at a car, you're streaming data from multiple disparate sources and adding adaptive algorithms for collision avoidance."

He said there are two challenges to solve with these devices. One is reliability and certification. The other is security.

With AI, reliability needs to be considered at a system level, which includes both hardware and software. ARM's acquisition of Allinea in December provided one reference point. Another comes out of Stanford University, where researchers are trying to quantify the impact of trimming computations from software. They have discovered that massive cutting, or pruning, doesn't significantly impact the end product. The University of California at Berkeley has been developing a similar approach based on computing that is less than 100% accurate.

"Coarse-grain pruning doesn't hurt accuracy compared with fine-grain pruning," said Song Han, a Ph.D. candidate at Stanford University who is researching energy-efficient deep learning. Han said that a sparse matrix developed at Stanford required 10X less computation, had an 8X smaller memory footprint, and used 120X less energy than DRAM. Applied to what Stanford is calling an Efficient Speech Recognition Engine, he said that compression led to accelerated inference. (Those findings were presented at Cadence's recent Embedded Neural Network Summit.)
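The core idea behind these pruning results can be sketched in a few lines: weights with small magnitude are zeroed out, shrinking compute and memory with little accuracy loss. The numbers and threshold below are purely illustrative, not drawn from the Stanford work:

```python
# A minimal sketch of magnitude pruning. Weights below a magnitude
# threshold are zeroed; downstream hardware can then skip the
# multiplications and storage for those zeros.

def prune(weights, threshold):
    """Zero out any weight whose magnitude falls below the threshold."""
    return [w if abs(w) >= threshold else 0.0 for w in weights]

def sparsity(weights):
    """Fraction of weights that are exactly zero after pruning."""
    return sum(1 for w in weights if w == 0.0) / len(weights)

layer = [0.91, -0.02, 0.05, -0.88, 0.01, 0.40, -0.03, 0.76]
pruned = prune(layer, threshold=0.1)
print(pruned)            # [0.91, 0.0, 0.0, -0.88, 0.0, 0.4, 0.0, 0.76]
print(sparsity(pruned))  # 0.5
```

Production systems prune whole structures (rows, channels, blocks) rather than individual weights, which is the "coarse-grain" approach Han refers to, because hardware exploits structured sparsity far more easily.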

Quantum computing adds yet another option for AI systems. Leti CEO Marie Semeria said quantum computing is one of the future directions for her group, particularly for artificial intelligence applications. And Dario Gil, vice president of science and solutions at IBM Research, explained that with classical computing there is a one-in-four chance of guessing which of four cards is red if the other three are blue. Using a quantum computer to entangle superposed qubits and then reversing the entanglement, the system will provide the correct answer every time.

Fig. 5: Quantum processor. Source: IBM.
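Gil's four-card example corresponds to a quantum search over N = 4 items, which succeeds with certainty in a single iteration (this is the well-known behavior of Grover's algorithm for N = 4). The following pure-Python state-vector simulation is a sketch of the underlying math, not of any IBM API:

```python
# Sketch of a single Grover iteration over 4 basis states ("cards"),
# simulated classically. For N = 4, one iteration drives the marked
# state's probability to exactly 1.

def grover_four(marked):
    """One Grover iteration over 4 basis states; returns final amplitudes."""
    n = 4
    amps = [1 / n ** 0.5] * n          # uniform superposition of 4 states
    amps[marked] *= -1                 # oracle: flip the marked amplitude
    mean = sum(amps) / n               # diffusion: reflect about the mean
    amps = [2 * mean - a for a in amps]
    return amps

probs = [round(abs(a) ** 2, 6) for a in grover_four(marked=2)]
print(probs)  # [0.0, 0.0, 1.0, 0.0] -- the "red card" is found every time
```

A classical guesser needs 2.25 tries on average; the quantum procedure queries the oracle once.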

Conclusions
AI is not one thing, and consequently there is no single system that works optimally everywhere. But there are some general requirements for AI systems, as shown in the chart below.

Fig. 6: AI basics. Source: OneSpin

And AI does have applications across many markets, all of which will require extensive refinement, expensive tooling, and an ecosystem of support. After years of relying on shrinking devices to improve power, performance and cost, entire market segments are rethinking how they will approach new markets. This is a big win for architects, and it adds huge creative options for design teams, but it also will spur massive development along the way, from tools and IP vendors all the way to packaging and process development. It's like hitting the restart button for the tech industry, and it should prove good for business for the entire ecosystem for years to come.

Related Stories
What Does AI Really Mean? eSilicon's chairman looks at technology advances, its limitations, and the social implications of artificial intelligence, and how it will change our world.
Neural Net Computing Explodes Deep-pocket companies begin customizing this approach for specific applications, and spend huge amounts of money to acquire startups.
Plugging Holes In Machine Learning Part 2: Short- and long-term solutions to make sure machines behave as expected.
Wearable AI System Can Detect A Conversation Tone (MIT) An artificially intelligent, wearable system that can predict whether a conversation is happy, sad, or neutral based on a person's speech patterns and vitals.

More:

What Does An AI Chip Look Like? - SemiEngineering

Posted in Ai | Comments Off on What Does An AI Chip Look Like? – SemiEngineering

AI in healthcare must overcome security, interoperability concerns – TechTarget


Artificial intelligence is beginning to gain ground in healthcare. The combination of advanced algorithms, large data sets and powerful computers has offered a new way to leverage technology in patient care. AI is also able to perform complex cognitive tasks and analyze large amounts of patient data instantly. However, despite the powerful capabilities that AI can offer, some physicians are skeptical about the safety of using AI in healthcare, especially in roles that can impact a patient's health.

Today, most consumers have been exposed to some form of AI. Services like Google Home and Amazon's Alexa extensively use artificial intelligence and machine learning as part of their core application. But AI is not limited to taking basic commands to give weather forecasts or set reminders. Artificial intelligence has shown that it can perform several complex and cognitive tasks faster than a human. The automotive industry has already showcased its ability to leverage AI to offer driverless cars, while other industries have also found ways to use machine learning to detect fraud or assess financial risks. These are just a few examples that highlight the maturity level of AI.

Companies such as IBM play a big part in pushing AI into healthcare. IBM's use of its Watson platform in cancer research, insurance claims and clinical support tools has encouraged many in the industry to recognize the importance of this technology. Despite these encouraging signs and positive uses of artificial intelligence in healthcare, there are still concerns and questions around its potential risks, and some healthcare professionals are uneasy about AI getting into the business of patient care. Below are four challenges of artificial intelligence in healthcare that need to be overcome before physicians will fully adopt the technology.

Patient health data is protected under federal law, and any breaches or failure to maintain its integrity can have legal and financial penalties. Since AI used for patient care would need access to multiple health data sets, it would need to adhere to the same regulations that current applications and infrastructures must meet. As most AI platforms are consolidated and require extensive computing power, patient data -- or parts of it -- would likely reside in the vendor's data centers. This would cause concerns around data privacy, but could also lead to significant risk if the platform is breached.

Interoperability has been one of the popular subjects in the healthcare industry in recent years. Hospitals across the nation face the challenge of not being able to efficiently exchange patient health data with other healthcare organizations, despite the availability of data standards across the world. Adding AI to the mix would likely complicate things even further. When vendors like IBM or Microsoft actively deliver health-related services using their AI capabilities, the likelihood of these platforms talking to each other is very slim due to competition and proprietary technology. However, if policies are put in place that require these platforms to meet current interoperability requirements, this may help address the exchange of data right away.

Opponents of AI in healthcare have argued that computers are not always reliable and can fail on us from time to time. These failures can lead to catastrophic consequences if AI prescribes the wrong medication or gives a patient the wrong diagnosis. However, AI could eventually move to a stage where it can be trusted once it has proven its safety and readiness for patient care. If its error margins are less than or equal to those of its human counterparts, then the platform could be ready to take on an active role in patient care.

AI has progressed to the point where robots or virtual characters can mimic human behavior and interact naturally with humans. Emotional responses expressed in voice tones or text have been engineered based on human emotional reactions. However, physicians make many decisions based on gut feeling and intuition, which may never be replicated using algorithms and supercomputers. These are the areas of patient care that would be hard to hand over to a robot.

AI technology is advancing at a rapid rate. Several well-known scientists and popular figures such as Stephen Hawking, Bill Gates and Elon Musk have said that AI could become so powerful and self-aware that it may put its own interests before those of humans. But before robots become the enemy, there are tremendous benefits of artificial intelligence in healthcare, and many physicians are welcoming the technology. AI in healthcare offers the opportunity to help physicians identify better treatment options, detect cancer early and engage patients.

MD Anderson pauses IBM Watson AI project

How radiology can benefit from AI

How to overcome obstacles to AI implementation

See the rest here:

AI in healthcare must overcome security, interoperability concerns - TechTarget

Posted in Ai | Comments Off on AI in healthcare must overcome security, interoperability concerns – TechTarget

How AI will lead to self-healing mobile networks – VentureBeat


Today we are routinely awed by the promise of machine learning (ML) and artificial intelligence (AI). Our phones speak to us, and our favorite apps can ID our friends and family in our photographs. We didn't get here overnight, of course. Enhancements to the network itself (deep convolutional neural networks executing advanced computer science techniques) brought us to this point.

Now one of the primary beneficiaries of our super-connected world will be the very networks we have come to rely on for information, communication, commerce, and entertainment. Much has been written about the networked society, but on this transformative journey, the network itself is becoming a full-fledged, contributing member of that society.

AI and ML will propel networks through four stages of evolution, from todays self-healing networks to learning networks to data-aware networks to self-driving networks.

Today's networks are in Stage I: a real-time feedback loop of network status monitoring and near real-time optimizations to fix problems or improve performance. The sensory systems and the network optimizations are based on human-made rules and heuristics using simple descriptive analytics. For instance: if signal A goes above threshold B for C seconds, initiate action X.
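The hard-coded Stage I rule just quoted ("if signal A goes above threshold B for C seconds, initiate action X") can be sketched directly; the signal names, thresholds, and action here are invented for illustration:

```python
# A Stage I heuristic: fire a fixed action after the signal has stayed
# above a threshold for a set number of consecutive seconds.

class ThresholdRule:
    def __init__(self, threshold, hold_seconds, action):
        self.threshold = threshold
        self.hold_seconds = hold_seconds
        self.action = action
        self.elapsed = 0  # consecutive seconds above threshold so far

    def observe(self, signal):
        """Feed one per-second sample; fire the action when the rule trips."""
        if signal > self.threshold:
            self.elapsed += 1
            if self.elapsed == self.hold_seconds:
                return self.action()
        else:
            self.elapsed = 0  # any dip below threshold resets the clock
        return None

rule = ThresholdRule(threshold=80, hold_seconds=3,
                     action=lambda: "rebalance_traffic")
samples = [75, 85, 90, 82, 60]  # signal A, sampled once per second
fired = [rule.observe(s) for s in samples]
print(fired)  # [None, None, None, 'rebalance_traffic', None]
```

Everything in this loop (threshold, hold time, action) is fixed by a human in advance, which is exactly the rigidity the next paragraph criticizes.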

These rules are typically easy to interpret but are inferior to modern, data-driven alternatives because they are hard-coded, cannot adapt to changing environments, and lack the complexity to deal effectively with a wide range of possible situations. In fact, these rules are limited by the inability of the human mind, even an experienced and intelligent mind, to find all the meaningful correlations affecting network KPIs among a massive data set of influencing factors. They also don't allow the humans responsible for network performance to anticipate trouble, making real-time reaction the limiting factor for an optimally performing network.

Timing is everything. Stage II networks will continuously find patterns in past network data and use them to predict future behavior. ML can be directed to analyze factors thought to be impactful, like time/day, network events, or one-time or recurring external events or factors (e.g. an election, a natural disaster, or a trend on YouTube).

The value in the data lies in probabilistic correlations between past network performance and manual solutions that provide future optimizations. ML can capture as many correlations as model complexity allows, with data scientists and domain experts working together to best separate signal from noise, calibrating and testing ML models before they are put into production. ML models can reveal an exhaustive distribution of network KPIs and a dizzying array of external influencing factors, and then expose the subtlest of correlative relationships for the sake of predicting future outcomes.

These predictions give human overseers advanced warnings of how to distribute network resources and perform other optimizations, leading to enhanced performance at lower cost. For example, a network autopilot could detect the slightest predicted deviations from the optimal path and issue warnings to human operators long before actual problems emerge. Continuously collecting data and comparing predictions against reality will enhance accuracy, leading to better next-gen models.

ML methods of note for Stage II include linear and non-linear supervised methods, tree-based ensembles, neural networks, and batch learning (e.g., retrain overnight). In Stage II, predictive assistance means more time for human operators to effect change, and the result is a breakthrough in network performance. Machines make predictions, and humans find solutions, with time to spare.
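A toy version of the Stage II idea, learning a pattern from past network data and using it to predict future behavior, can be shown with ordinary least squares on a single feature (hour of day). Real systems use far richer models and feature sets; all numbers below are synthetic:

```python
# Stage II in miniature: fit a model to historical network data, then
# predict future load so operators can act before problems emerge.

def fit_line(xs, ys):
    """Closed-form least squares for one feature: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Past observations: (hour of day, network load in Gbps)
hours = [8, 10, 12, 14, 16, 18]
loads = [20, 25, 30, 35, 40, 45]
slope, intercept = fit_line(hours, loads)

predicted_8pm = slope * 20 + intercept
print(round(predicted_8pm, 1))  # 50.0 -- provision capacity before the peak
```

The division of labor matches the text: the model makes the prediction, and a human operator decides how to redistribute resources with time to spare.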

The student becomes the master. By Stage III, AI algorithms review past performance and, independent of human direction, identify previously unknown correlative factors affecting future performance. They do so by looking beyond network data and initial guidance into external data sets, such as generated and simulated data.

Machines use knowledge obtained from supervised methods and apply that knowledge to unsupervised methods, revealing undiscovered correlative factors without human intervention or guidance.

A Stage III network provides predictions of multiple possible futures and creates forecasts allowing management to predict potential business outcomes based on their own theoretical actions. For example, the network could let human managers select from a set of possible future outcomes (highest-possible performance during the Super Bowl, or lowest-possible power usage during holiday hours). Thus begins the era of strategic network optimization, with the network not only predicting a single future, but offering multiverse futures to its human colleagues. ML methods for Stage III include deep learning, simulation techniques, and other advanced computer science techniques like bandits, advanced statistics, model governance, and automatic model selection.
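Among the Stage III methods listed above, bandits are the simplest to sketch: an epsilon-greedy agent balancing exploration of candidate network configurations against exploitation of the best one found so far. The configurations and reward numbers here are invented for illustration:

```python
import random

# An epsilon-greedy multi-armed bandit choosing among candidate network
# configurations with unknown mean rewards.

def epsilon_greedy(true_rewards, steps=2000, epsilon=0.1, seed=42):
    """Explore random configs with prob. epsilon; otherwise exploit the best."""
    rng = random.Random(seed)
    counts = [0] * len(true_rewards)
    estimates = [0.0] * len(true_rewards)
    for _ in range(steps):
        if rng.random() < epsilon:                      # explore
            arm = rng.randrange(len(true_rewards))
        else:                                           # exploit
            arm = max(range(len(true_rewards)), key=lambda a: estimates[a])
        reward = true_rewards[arm] + rng.gauss(0, 0.1)  # noisy observation
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates, counts

# Three candidate configurations with hidden mean rewards; the bandit
# should concentrate its pulls on config 1 (mean 0.9).
estimates, counts = epsilon_greedy([0.2, 0.9, 0.5])
print(counts.index(max(counts)))  # 1
```

The same trade-off, acting under uncertainty while still gathering data, is what separates a Stage III network's strategic optimization from Stage II's passive prediction.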

While highly capable, a Stage III network is still not technically intelligent. That grand jump towards the Singularity occurs in Stage IV.

I reason, therefore I am. A Stage IV network can (1) independently identify and prioritize factors of interest that impact network performance, (2) accurately predict multiverse outcomes in time for optimally executed human-effected remedies, and, most importantly, (3) distinguish between those factors that are causal vs. correlative to gain deeper insights and drive better decisions.

The distinction between causal and correlative is itself based on probabilistic analysis as seen in research. The ability of AI to establish causality is the ability to understand the root causes of network performance as opposed to the correlative signs of those causes. The ability to identify causal factors will lead to more accurate predictions and an even better-performing network. At this stage, the network gains the ability to reason cause vs. effect and the truly intelligent network is born.

A Stage IV network can autonomously choose a course of action to maximize operational efficiency in the face of external influences. It can improve security against new incoming threats and more generally operate to maximize a given set of KPIs. The system is adaptive to real-time changes and continuously learns and improves in a data-driven context. ML methods of note for Stage IV include deep learning, reinforcement learning, online learning, dynamic systems, and other advanced computer science techniques.

The notion of applying remedies locally before globally is apropos in the case of AI and ML. While the world will no doubt benefit greatly from the democratization and mobilization of its ever-expanding mountain of data, it is the network and the networked society that stand to benefit the most, soonest, from our journey towards the truly intelligent machine.

Diomedes Kastanis is VP, Chief Innovation Office, at Ericsson, supporting advancement of the company's technology vision and innovation.

Read more:

How AI will lead to self-healing mobile networks - VentureBeat

Posted in Ai | Comments Off on How AI will lead to self-healing mobile networks – VentureBeat