Daily Archives: March 29, 2017

Murder in virtual reality should be illegal – Quartz

Posted: March 29, 2017 at 11:24 am

You start by picking up the knife, or reaching for the neck of a broken-off bottle. Then comes the lunge and wrestle, the physical strain as your victim fights back, the desire to overpower him. You feel the density of his body against yours, the warmth of his blood. Now the victim is looking up at you, making eye contact in his final moments.

Science-fiction writers have fantasised about virtual reality (VR) for decades. Now it is here, and with it, perhaps, the possibility of the complete physical experience of killing someone, without harming a soul. As well as Facebook's ongoing efforts with Oculus Rift, Google recently bought the eye-tracking start-up Eyefluence to boost its progress towards creating more immersive virtual worlds. The director Alejandro G. Iñárritu and the cinematographer Emmanuel Lubezki, both famous for Birdman (2014) and The Revenant (2015), have announced that their next project will be a short VR film.

But this new form of entertainment is dangerous. The impact of immersive virtual violence must be questioned, studied, and controlled. Before it becomes possible to realistically simulate the experience of killing someone, murder in VR should be made illegal.

This is not the argument of a killjoy. As someone who has worked in film and television for almost 20 years, I am acutely aware that the craft of filmmaking is all about maximising the impact on the audience. Directors ask actors to change the intonation of a single word while editors sweat over a film cut down to fractions of a second, all in pursuit of the right mood and atmosphere.

So I understand the appeal of VR, and its potential to make a story all the more real for the viewer. But we must examine that temptation in light of the fact that both cinema and gaming thrive on stories of conflict and resolution. Murder and violence are a mainstay of our drama, while first-person shooters are one of the most popular segments of the games industry.

The effects of all this gore are not clear-cut. Crime rates in the United States have fallen even as Hollywood films have become bloodier and violent video games have grown in popularity. Some research suggests that shooter games can be soothing, while other studies indicate they might be a causal risk factor in violent behaviour. (Perhaps, as for Frank Underwood in the Netflix series House of Cards, it's possible for video games to be both those things.) Students who played violent games for just 20 minutes a day, three days in a row, were more aggressive and less empathetic than those who didn't, according to research by the psychologist Brad Bushman at Ohio State University and his team. The repeated actions, interactivity, assuming the position of the aggressor, and the lack of negative consequences for violence are all aspects of the gaming experience that amplify aggressive behaviour, according to research by the psychologists Craig Anderson at Iowa State University and Wayne Warburton at Macquarie University in Sydney. Mass shooters including Aaron Alexis, Adam Lanza, and Anders Breivik were all obsessive gamers.

The problem of what entertainment does to us isn't new. The morality of art has been a matter of debate since Plato. The philosopher Jean-Jacques Rousseau was skeptical of the divisive and corrupting potential of theatre, for example, with its passive audience in their solitary seats. Instead, he promoted participatory festivals that would cement community solidarity, with lively rituals to unify the jubilant crowd. But now, for the first time, technology promises to explode the boundary between the world we create through artifice and performance, and the real world as we perceive it, flickering on the wall of Plato's cave. And the consequences of such immersive participation are complex, uncertain and fraught with risk.

Humans are embodied beings, which means that the way we think, feel, perceive, and behave is bound up with the fact that we exist as part of and within our bodies. By hijacking our capacity for proprioception (that is, our ability to discern states of the body and perceive it as our own), VR can increase our identification with the character we're playing. The rubber hand illusion showed that, in the right conditions, it's possible to feel like an inert prosthetic appendage is a real hand; more recently, a 2012 study found that people perceived a distorted virtual arm, stretched up to three times its ordinary length, to still be a part of their body.

It's a small step from here to truly inhabiting the body of another person in VR. But the consequences of such complete identification are unknown, as the German philosopher Thomas Metzinger has warned. There is the risk that virtual embodiment could bring on psychosis in those who are vulnerable to it, or create a sense of alienation from their real bodies when they return to them after a long absence. People in virtual environments tend to conform to the expectations of their avatar, Metzinger says. A study by Stanford researchers in 2007 dubbed this the Proteus effect: they found that people who had more attractive virtual characters were more willing to be intimate with other people, while those assigned taller avatars were more confident and aggressive in negotiations. There's a risk that this behaviour, developed in the virtual realm, could bleed over into the real one.

In an immersive virtual environment, what will it be like to kill? Surely a terrifying, electrifying, even thrilling experience. But by embodying killers, we risk making violence more tantalizing, training ourselves in cruelty and normalising aggression. The possibility of building fantasy worlds excites me as a filmmaker, but as a human being, I think we must be wary. We must study the psychological impacts, consider the moral and legal implications, even establish a code of conduct. Virtual reality promises to expand the range of forms we can inhabit and what we can do with those bodies. But what we physically feel shapes our minds. Until we understand the consequences of how violence in virtual reality might change us, virtual murder should be illegal.

This article was originally published at Aeon and has been republished under Creative Commons.


Upload to Open Virtual Reality Coworking Office in Los Angeles … – Variety

Posted: at 11:24 am

What do you do when you're trying to turn a 20,000-square-foot warehouse into a coworking and incubator space for virtual and augmented reality startups? You put on your headset, of course.

That's exactly what Upload CEO Taylor Freeman and Upload's expansion manager Avi Horowitz did when they started working on the company's new headquarters in Venice, Calif. Thanks to a partnership with construction virtual reality specialist IrisVR, they were able to turn the plans for the office space into a 3D VR model, which they could explore with VR headsets to figure out which walls had to be moved and which spaces could be used for what purpose.

Freeman and Horowitz recently gave Variety a tour of the office-in-progress in the real world, sans headsets, laying out their ambitious plans while contractors were still working on finishing interior walls. Upload plans to officially open its co-working space on April 13, offering anyone working on virtual and augmented reality a variety of work spaces, ranging from floating desks that you'd have to share with others to dedicated offices for a select group of VR startups.

Companies and freelancers housed in the facility will be able to make use of a number of demo stations set up for room-scale virtual reality, as well as a capture room designed to turn real humans into 3D assets and a soundproof music studio.

Also housed in the same building is an educational space meant to provide regular classes on VR development as well as a series of industry events. "It's like WeWork meets General Assembly," said Freeman, likening the space to both a popular coworking office space provider and an education venture known for its coding bootcamps.

Upload started out as a coworking space in San Francisco, and also operates its UploadVR news site, which keeps tabs on the nascent VR industry. In Los Angeles, the company is looking to significantly expand its footprint, with enough space to house close to 200 people every day across the education and events space, as well as an open coworking area and dedicated offices. "This is where the center of VR and AR is," Freeman said about the expansion to L.A.

In Venice, Upload will be close to a number of other VR startups, including WEVR, Within, and Vertebrae. But Freeman said that his team also had another indicator for high regional demand: "Our website traffic showed that L.A. is our biggest geography."


This Virtual Reality Mid-Cap Stock Is Set to ‘Wow’ Investors – TheStreet.com

Posted: at 11:24 am

A floundering Trump administration, political warfare in the halls of Congress, a divided nation and seesawing stock markets are making your investment decisions tougher. That's why you need to focus on "momentum trends" that will continue to unfold regardless of today's financial and political chaos.

One such trend is the exponentially rising demand for Virtual/Augmented Reality (VR/AR). One of the best pure plays on VR/AR is HiMax Technologies (HIMX).

Founded in 2001 and based in Taiwan, HiMax supplies, or is expected to supply, display circuits to three of the most popular VR headset brands: Facebook's (FB) Oculus Rift, Microsoft's (MSFT) HoloLens, and the second generation of Alphabet's (GOOGL) Google Glass.

The projected profits in the VR/AR industry are enormous. Goldman Sachs (GS) estimates the market will reach $80 billion by 2025, with the potential for that figure to soar much higher, to more than $180 billion.

Sales of HiMax's devices should ignite once VR headsets and AR smart glasses achieve sufficient economies of scale to bring prices within reach of a broader segment of consumers. As the supplier of the chips that manage the displays in these devices, HiMax stands to be one of the first component suppliers to benefit from increased sales.

In an industry dotted with tiny, fly-by-night start-ups, HiMax boasts a solid balance sheet that will ensure its competitiveness even during unexpected economic shocks. The company has more than $194 million of cash on the books and its operating cash flow is a robust $84.6 million.


How Virtual Reality Could Revolutionize The Real Estate Industry – Forbes

Posted: at 11:24 am


Real estate is an industry that normally moves with the times and adopts technology that can assist in its continued success. However, there seems to be a slight lack of faith in a new technology that presents itself to the industry: virtual reality ...


The Virtual Reality Company Explores Magical New Worlds with VR Animated Series Raising a Rukus – PEOPLE.com

Posted: at 11:24 am

Virtual Reality is about to get even bigger.

The Virtual Reality Company announced a new, original animated virtual-reality series on Monday called Raising a Rukus.

Raising a Rukus follows two siblings and their mischievous dog Rukus as they travel to different worlds and embark on various magical adventures together.

"We've all seen animated stories before, but for the first time, we're actually immersed in this world with the characters," VRC's co-founder and chief creative officer Robert Stromberg tells PEOPLE.

Each episode of the show will last 12 minutes and will feature branched narration, allowing viewers to follow the story from the perspective of either the brother or the sister.

"The brother and sister get separated and go on a short journey with their own set of obstacles and problems they have to solve," says Stromberg. "It really adds a unique element. We're still telling the same story, but it presents the opportunity for even more detail. What they go through individually means something when they come back together."

Co-founder and CEO Guy Primus adds that when creating Raising a Rukus, one of VRC's main focuses was making it a universally relatable story.

"We want to tell stories that are universally relatable and cross cultural boundaries. We know how to make Hollywood films, but what's really important to us is that the story plays just as well in China as it does here in the United States," he says.


And they certainly had a powerful team behind them to make sure their goals were met. Steven Spielberg, who sits on the board of advisors for the VRC, worked as a creative consultant on the project.

"As we're writing the story, he would add his opinions and point us in the right direction," says Primus. "We showed him each cut of the show and he gave really great feedback. He connected us with people who he thought would help enhance the project."

"He added his fingerprint of what makes Spielberg, Spielberg and what makes things magical," adds Stromberg.

Raising a Rukus will premiere in theaters in Canada this spring, and get wider distribution across North America later this summer.

"It will be distributed to audiences around the world over the course of the next several months," says Primus, adding that you don't necessarily need VR goggles to view the series. "We are taking this out to theaters to give even more people access to Raising a Rukus."


Preview: 'Don't Shoot' virtual reality demonstration – The Crimson White

Posted: at 11:24 am

By Jake Howell | 03/28/2017 9:46pm

Rick Houser, Dan Fonseca and Ryan Cook will display a demo of the virtual reality simulation used in their study and discuss its impact on future police training. Photo courtesy Flickr.com

Following the developments out of Ferguson, Missouri, the issues of police brutality and alleged racially motivated shootings have come to the forefront of national news. Incidents such as the 2014 Tamir Rice shooting, in which Tamir, a twelve-year-old boy, was shot dead by police officers in Cleveland, Ohio, after they mistook a toy gun for a real one in a city park, have caused many to ask what truly motivates police officers during such incidents.

Three University researchers have taken up these public questions and attempted to determine what motivates police shootings. Using electroencephalography and virtual reality technology, Rick Houser, Dan Fonseca and Ryan Cook have measured the brain activity of officers in high-threat situations. In their visit to the University, they will display a demo of the VR simulation used in their study and discuss its impact on future police training.

WHO: Rick Houser and Ryan Cook of the College of Education, and Dan Fonseca of the College of Engineering, have conducted this interdisciplinary study, and will present their findings. The event is free and open to the public.

WHAT: The event will showcase the professors research as well as the technology they have used to conduct their study.

WHEN: The presentation will take place at 1 p.m. on March 29.

WHERE: The presentation will be held in room 1022 of the North Engineering Research Center.

WHY: The researchers hope to use the understanding gained from their research to improve police training and avoid fatalities.

"An officer who is able to understand the intentions of others may be more effective in making these high decisions and consequently lower the risk of shooting a community member, particularly those who are unarmed, or an accidental shooting," said Houser.


Warner Bros., IMAX to create virtual reality experiences for ‘Justice League,’ ‘Aquaman’ – MarketWatch

Posted: at 11:24 am

Virtual reality: Coming to a theater near you, sort of.

The home entertainment division of Time Warner Inc.'s TWX, +0.36% Warner Bros. film studio said on Tuesday it's reached a co-financing and production agreement with IMAX Corp. IMAX, +0.07% to develop and release three interactive VR experiences based on upcoming films: Justice League, Aquaman and a third that has yet to be announced.

The companies plan to launch an experience a year, beginning with Justice League in late 2017. All the VR experiences will have an exclusive window in IMAX VR centres before being available on other in-home and mobile VR platforms.

Also see: Virtual reality opens the world to aging seniors

"A key component of our vision for VR is to help usher in the first wave of high-end blockbuster-based content," IMAX Chief Executive Richard Gelfond said in a statement. "This type of premium content will introduce audiences to virtual reality in stand-alone and multiplex-based IMAX VR centres as well as other platforms."


The Warner Bros. and IMAX partnership comes as studios and cinemas find themselves asking how they can improve and innovate the moviegoing experience to continue to compete for audiences' time, attention and dollars. Theater admissions were down last year compared with 2015, and though the box office saw a record year in revenue for the second year in a row, admissions haven't been able to reach the record levels set in 2002, according to data from the National Association of Theatre Owners.

IMAX's VR Centres are facilities designed specifically for VR experiences, with room-tracking technology allowing participants to explore a virtual space. "This isn't your friend's living room," reads the company's website.

Check out: Steven Spielberg-backed startup is creating VR experiences to get people into malls

With its flagship pilot location in Los Angeles, IMAX is planning to open at least five others across New York City, California, the U.K. and Shanghai in the next few months, with a testing period for customer experience and pricing. The company said its L.A. center, which opened in January, is off to a strong start, and if the other locations are successful, the plan is to roll the VR Centre concept out globally to select multiplexes and commercial locations, such as shopping centers and tourist locations.

IMAX shares have gained 13% in the trailing 12-month period, while shares of Time Warner are up 34%. By comparison, the S&P 500 index SPX, +0.08% is up 15%, the Dow Jones Industrial Average DJIA, -0.20% is up 17% and the Nasdaq Composite Index COMP, +0.22% is up more than 22% during the same period.


The Trade-Off Every AI Company Will Face – Harvard Business Review

Posted: at 11:23 am

It doesn't take a tremendous amount of training to begin a job as a cashier at McDonald's. Even on their first day, most new cashiers are good enough. And they improve as they serve more customers. Although a new cashier may be slower and make more mistakes than their experienced peers, society generally accepts that they will learn from experience.

We don't often think of it, but the same is true of commercial airline pilots. We take comfort that airline transport pilot certification is regulated by the U.S. Department of Transportation's Federal Aviation Administration and requires minimum experience of 1,500 hours of flight time, 500 hours of cross-country flight time, 100 hours of night flight time, and 75 hours of instrument operations time. But we also know that pilots continue to improve from on-the-job experience.

On January 15, 2009, when US Airways Flight 1549 was struck by a flock of Canada geese, shutting down all engine power, Captain Chesley "Sully" Sullenberger miraculously landed his plane in the Hudson River, saving the lives of all 155 passengers. Most reporters attributed his performance to experience. He had recorded 19,663 total flight hours, including 4,765 flying an A320. Sully himself reflected: "One way of looking at this might be that for 42 years, I've been making small, regular deposits in this bank of experience, education, and training. And on January 15, the balance was sufficient so that I could make a very large withdrawal." Sully, and all his passengers, benefited from the thousands of people he'd flown before.


The difference between cashiers and pilots in what constitutes good enough is based on tolerance for error. Obviously, our tolerance is much lower for pilots. This is reflected in the amount of in-house training we require them to accumulate prior to serving their first customers, even though they continue to learn from on-the-job experience. We have different definitions for good enough when it comes to how much training humans require in different jobs.

The same is true of machines that learn.

Artificial intelligence (AI) applications are based on generating predictions. Unlike traditionally programmed computer algorithms, designed to take data and follow a specified path to produce an outcome, machine learning, the most common approach to AI these days, involves algorithms evolving through various learning processes. A machine is given data, including outcomes; it finds associations; and then, based on those associations, it takes new data it has never seen before and predicts an outcome.
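As a concrete illustration of that loop (find associations in seen data, then predict on unseen data), here is a minimal sketch in Python. The article names no tools or datasets, so scikit-learn and its synthetic data generator are assumptions made purely for demonstration.

```python
# A minimal sketch of "find associations, then predict on unseen data".
# scikit-learn and the toy dataset are assumptions, not the article's example.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy data standing in for "data, including outcomes".
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_seen, X_new, y_seen, y_new = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(random_state=0)
model.fit(X_seen, y_seen)            # find associations in data it has seen
predictions = model.predict(X_new)   # predict outcomes for data it has never seen
print(f"accuracy on unseen data: {model.score(X_new, y_new):.2f}")
```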

This means that intelligent machines need to be trained, just as pilots and cashiers do. Companies design systems to train new employees until they are good enough and then deploy them into service, knowing that they will improve as they learn from experience doing their job. While this seems obvious, determining what constitutes good enough is an important decision. In the case of machine intelligence, it can be a major strategic decision regarding timing: when to shift from in-house training to on-the-job learning.

There is no ready-made answer as to what constitutes good enough for machine intelligence. Instead, there are trade-offs. Success with machine intelligence will require taking these trade-offs seriously and approaching them strategically.

The first question firms must ask is what tolerance they and their customers have for error. We have high tolerance for error with some intelligent machines and a low tolerance for others. For example, Google's Inbox application reads your email, uses AI to predict how you will want to respond, and generates three short responses for the user to choose from. Many users report enjoying using the application even when it has a 70% failure rate (i.e., the AI-generated response is only useful 30% of the time). The reason for this high tolerance for error is that the benefit of reduced composing and typing outweighs the cost of wasted screen real estate when the predicted short response is wrong.
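That cost-benefit intuition can be made concrete with back-of-the-envelope arithmetic. The 30% usefulness figure is from the article; the time values below are invented solely to illustrate why a 70% failure rate can still be a net win.

```python
# Hypothetical numbers illustrating tolerance for error: a suggestion saves
# typing when right and wastes only a glance when wrong.
p_useful = 0.30          # article's figure: suggestions useful ~30% of the time
benefit_if_right = 10.0  # seconds of composing/typing saved (assumed)
cost_if_wrong = 0.5      # seconds spent scanning a useless suggestion (assumed)

expected_gain = p_useful * benefit_if_right - (1 - p_useful) * cost_if_wrong
print(f"expected gain per suggestion: {expected_gain:+.2f} s")  # +2.65 s
```

Under these assumed numbers the feature pays off on average despite failing most of the time; where a wrong prediction costs lives rather than seconds, the same arithmetic points the other way.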

In contrast, we have low tolerance for error in the realm of autonomous driving. The first generation of autonomous vehicles, largely pioneered by Google, was trained using specialist human drivers who took a limited set of vehicles and drove them hundreds of thousands of kilometers. It was like a parent taking a teenager on supervised driving experiences before letting them drive on their own.

The human specialist drivers provide a safe training environment, but are also extremely limited. The machine only learns about a small number of situations. It may take many millions of miles in varying environments and situations before someone has learned how to deal with the rare incidents that are more likely to lead to accidents. For autonomous vehicles, real roads are nasty and unforgiving precisely because nasty or unforgiving human-caused situations can occur on them.

The second question to ask, then, is how important it is to capture user data in the wild. Understanding that training might take a prohibitively long time, Tesla rolled out autonomous vehicle capabilities to all its recent models. These capabilities included a set of sensors that collect environmental data as well as driving data that is uploaded to Tesla's machine learning servers. In a very short period of time, Tesla can obtain training data just by observing how the drivers of its cars drive. The more Tesla vehicles there are on the roads, the more Tesla's machines can learn.

However, in addition to passively collecting data as humans drive their Teslas, the company needs autonomous driving data to understand how its autonomous systems are operating. For that, it needs to have cars drive autonomously so that it can assess performance, but also assess when a human driver, required to be there and paying attention, chooses to intervene. Tesla's ultimate goal is not to produce a copilot, or a teenager who drives under supervision, but a fully autonomous vehicle. That requires getting to the point where real people feel comfortable in a self-driving car.
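A sketch of the kind of record such an assessment implies: logging the moments when the supervising human chooses to intervene. Every field name here is hypothetical; Tesla's actual telemetry schema is not public.

```python
# Hypothetical disengagement-event record; all field names are invented.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DisengagementEvent:
    vehicle_id: str
    timestamp: datetime
    speed_mps: float         # vehicle speed when the human took over
    autopilot_active: bool   # was the system driving at the time?
    sensor_snapshot_id: str  # pointer to the surrounding camera/radar data

event = DisengagementEvent(
    vehicle_id="veh-0001",
    timestamp=datetime(2017, 3, 29, 11, 24),
    speed_mps=27.0,
    autopilot_active=True,
    sensor_snapshot_id="snap-8842",
)
```

Aggregating such events would tell the company not just how often its system performs well, but precisely where humans still feel compelled to take over.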

Herein lies a tricky trade-off. In order to get better, Tesla needs its machines to learn in real situations. But putting its current cars in real situations means giving customers a relatively young and inexperienced driver, although perhaps as good as or better than many young human drivers. Still, this is far riskier than beta testing, for example, whether Siri or Alexa understood what you said, or whether Google Inbox correctly predicts your response to an email. In the case of Siri, Alexa, or Google Inbox, it means a lower-quality user experience. In the case of autonomous vehicles, it means putting lives at risk.

As Backchannel documented in a recent article, that experience can be scary. Cars can exit freeways without notice, or put on the brakes when mistaking an underpass for an obstruction. Nervous drivers may opt not to use the autonomous features, and, in the process, may hinder Tesla's ability to learn. Furthermore, even if the company can persuade some people to become beta testers, are those the people it wants? After all, a beta tester for autonomous driving may be someone with a taste for more risk than the average driver. In that case, who is the company training their machines to be like?

Machines learn faster with more data, and more data is generated when machines are deployed in the wild. However, bad things can happen in the wild and harm the company brand. Putting products in the wild earlier accelerates learning but risks harming the brand (and perhaps the customer!); putting products in the wild later slows learning but allows for more time to improve the product in-house and protect the brand (and, again, perhaps the customer).

For some products, like Google Inbox, the answer to the trade-off seems clear because the cost of poor performance is low and the benefits from learning from customer usage are high. It makes sense to deploy this type of product in the wild early. For other products, like cars, the answer is less clear. As more companies seek to take advantage of machine learning, this is a trade-off more and more will have to make.


Boffins give ‘D.TRUMP’ an AI injection – The Register

Posted: at 11:23 am

Let's give this points in the Academic Sense of Humour stakes for 2017: the wryly-named Data-mining Textual Responses to Uncover Misconception Patterns, or D.TRUMP, looks to automate the process of working out just how confused someone might be, from how they answer open-response questions.

The problem three boffins from Rice University and Princeton are trying to solve arises because of the rise of large-scale online learning.

At a smaller scale (for example, in a lecture theatre or tutorial gathering) it's relatively easy for a capable instructor to work out from a student's question what part of a topic they're finding hard to grasp.

That scales badly: in a MOOC (massive open online course), the student-to-tutor ratio could be thousands to one or more, but there's an upside, since the scale of the student body is also a rich source of data.

D.TRUMP seeks to mine that student data for evidence of clue deficit, with the authors of this paper writing: "The scale of this data presents a great opportunity to revolutionise education by using machine learning algorithms to automatically deliver personalised analytics and feedback to students and instructors in order to improve the quality of teaching and learning."

To achieve that, D.TRUMP transforms answers into low-dimensional textual vectors using tools like Word2Vec; the authors' contribution is a statistical model that jointly models both the transformed response feature vectors and expert labels on whether a response exhibits one or more misconceptions.
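A minimal sketch of that pipeline in Python: embed free-text answers as low-dimensional vectors, then learn to predict expert labels from those vectors. gensim's Word2Vec plus a plain logistic regression stand in here for the authors' joint statistical model, which this sketch does not reproduce; the example responses paraphrase the biology question quoted below.

```python
# Embed responses with Word2Vec, then classify misconception labels.
# This substitutes an off-the-shelf classifier for the paper's joint model.
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

responses = [
    "inbreeding brings together deleterious recessive mutations",
    "inbreeding causes a rise in detrimental traits and disease",
    "interbreeding can lead to harmful mutations",
    "inbreeding just leads to harmful mutations",
]
labels = [0, 0, 1, 1]  # expert labels: 1 = response exhibits a misconception

tokenized = [r.split() for r in responses]
w2v = Word2Vec(tokenized, vector_size=50, min_count=1, seed=0)

def embed(tokens):
    # Average the word vectors into one feature vector per response.
    return np.mean([w2v.wv[t] for t in tokens], axis=0)

X = np.stack([embed(t) for t in tokenized])
clf = LogisticRegression().fit(X, labels)
new = embed("inbreeding leads to mutations".split()).reshape(1, -1)
print(clf.predict(new))
```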

The researchers tested their work against 386 students' answers to a total of 1,668 questions in high-school-level AP Biology classes at OpenStax Tutor, giving them a total of 60,000 labelled responses.

It's probably helpful at this point to identify just how fine the line can be between correct and an almost-correct misconception. From the paper:

Question 1: People who breed domesticated animals try to avoid inbreeding even though most domesticated animals are indiscriminate. Evaluate why this is a good practice.

Correct Response: A breeder would not allow close relatives to mate, because inbreeding can bring together deleterious recessive mutations that can cause abnormalities and susceptibility to disease.

Student Response 1: Inbreeding can cause a rise in unfavorable or detrimental traits such as genes that cause individuals to be prone to disease or have unfavourable mutations.

Student Response 2: Interbreeding can lead to harmful mutations.

For those (like the author) who didn't study biology: "Inbreeding leads to harmful mutations" is a lay understanding of genetics. To be marked correct, the student needs to identify the mechanism: that inbreeding can bring together recessive mutations from mother and father.

Having developed D.TRUMP to the level that it can spot that kind of misconception, the system provides another bit of help to the educator: it can identify groups of students who share a misconception. This could indicate whether the students arrived in a course with a clue deficit, or that the courseware isn't getting its message across.


AI will transform information security, but it won’t happen overnight – CSO Online

Posted: at 11:23 am

Although it dates as far back as the 1950s, Artificial Intelligence (AI) is the hottest thing in technology today.

An overarching term used to describe a set of technologies such as text-to-speech, natural language processing (NLP) and computer vision, AI essentially enables computers to do things normally done by people.

Machine learning, the most prominent subset of AI, is about computers recognizing patterns in data and learning from them like a human would. These algorithms draw inferences without being explicitly programmed to do so. The idea is that the more data you collect, the smarter the machine becomes.

At the consumer level, AI use cases include chatbots, Amazon's Alexa and Apple's Siri, while enterprise efforts see AI software aim to cure diseases and optimize enterprise performance, such as improving customer experience or fraud detection.

There is plenty to back up the hype: a Narrative Science survey found that 38 percent of enterprises are already using AI, growing to 62 percent by 2018, with Forrester Research predicting a 300 percent year-on-year increase in AI investment this year. AI is clearly here to stay.

Unsurprisingly given the constant evolution of criminals and malware, InfoSec also wants a piece of the AI pie.

With its ability to learn patterns of behavior by sifting through huge datasets, AI could help CISOs by finding those "known unknown" security threats, automating SOC response and improving attack remediation. In short, with skilled personnel hard to come by, AI fills some (but not all) of the gap.
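To make the pattern-sifting idea concrete, here is a minimal anomaly-detection sketch in Python. IsolationForest is one common choice for this baseline-and-outlier approach; the session features and numbers are invented for illustration and are not from any product named in this article.

```python
# Learn a baseline of "normal" sessions, then flag outliers for an analyst.
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy features per session: [bytes_out, login_hour, failed_logins] (invented)
rng = np.random.RandomState(0)
normal_sessions = rng.normal(loc=[500, 13, 0.2], scale=[100, 2, 0.5], size=(1000, 3))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_sessions)

new_sessions = np.array([
    [520, 14, 0],      # looks like business as usual
    [90000, 3, 12],    # huge upload at 3 a.m. after failed logins
])
print(detector.predict(new_sessions))  # 1 = normal, -1 = anomaly
```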

Experts have called for a smart, autonomous security system, and American cryptographer Bruce Schneier believes that AI could offer the answer.

"It is hyped, because security is nothing but hype, but it is good stuff," said the CTO of Resilient Systems.

"We're a long way off AI from making humans redundant in cybersecurity, but there's more interest in [using AI for] human augmentation, which is making people smarter. You still need people defending you. Good systems use people and technology together."

Martin Ford, futurist and author of Rise of the Robots, says both white and black hats are already leveraging these technologies, such as deep learning neural networks.

"It's already being used on both the black and white hat sides," Ford told CSO. "There is a concern that criminals are in some cases ahead of the game and routinely utilize bots and automated attacks. These will rapidly get more sophisticated."

"...AI will be increasingly critical in detecting threats and defending systems. Unfortunately, a lot of organizations still depend on a manual process; this will have to change if systems are going to remain secure in the future."

Some CISOs, though, are preparing to do just that.

"It is a game changer," Intertek CISO Dane Warren said. "Through enhanced automation, orchestration, robotics, and intelligent agents, the industry will see greater advancement in both the offensive and defensive capabilities."

Warren adds that improvements could include quicker response to security events, better data analysis, and the use of statistical models to better predict or anticipate behaviors.

Andy Rose, CISO at NATS, also sees the benefits: "Security has always had a need for smart processes to apply themselves to vast amounts of disparate data to find trends and anomalies, whether that is identifying and stopping spam mail, or finding a data exfiltration channel."

"People struggle with the sheer volume of data so AI is the perfect solution for accelerating and automating security issue detection."

Security providers have always tried to evolve with the ever-changing threat landscape and AI is no different.

However, with technology naturally outpacing vendor transformation, start-ups have quickly emerged with novel AI-infused solutions for improving SOC efficiency, quantifying risks and optimizing network traffic anomaly detection.

Relative newcomers Tanium, Cylance and, to a lesser extent, LogRhythm have jumped into this space, but it's start-ups like Darktrace, Harvest.AI, PatternEx (coming out of MIT), and StatusToday that have caught the eye of the industry. Another relative unknown, SparkCognition, unveiled what it called the first AI-powered cognitive AV system at BlackHat 2016.

The tech giants are now playing with AI in security too; Google is working on an AI-based system to replace traditional CAPTCHA forms, and its researchers have taught AI to create its own encryption. IBM launched Watson for Cyber Security earlier this month, while in January Amazon acquired Harvest.AI, which uses algorithms to identify important documents and IP of a business, and then uses user behavior analytics with data loss prevention techniques to protect them from attack.

Some describe these products as first-gen AI security solutions, primarily focused on sifting through data, hunting for threats, and facilitating human-led remediation. In the future, AI could automate 24x7 SOCs, enabling workers to focus on business continuity and critical support issues.

"I see AI initially as an intelligent assistant able to deal with many inputs and access expert level analytics and processes," agrees Rose, adding that AI will support security professionals in higher level analysis and decisions.

Ignacio Arnaldo is chief data scientist at PatternEx, which offers an AI detection system that automates tasks in SecOps, such as detecting APTs from network, application and endpoint logs. He says that AI offers CISOs a new level of automation.

"CISOs are well aware of the problems: they struggle to hire talent, and there are more devices and data that need to be analyzed. CISOs acknowledge the need for tools that will increase the efficiency of their SOCs. AI holds the promise, but CISOs have not yet seen an AI platform that clearly proves to increase human efficiency."

"More and more CISOs fully understand that the global skills shortage, and the successful large-scale attacks against high maturity organizations like Dropbox, NSA/CIA, and JPMorgan, are all connected," says Darktrace CTO Dave Palmer, whose firm provides machine learning technology to thousands of companies across 60 countries worldwide.

"No matter how well funded a security team is, it can't buy its way to high security using traditional approaches that have been demonstrably failing and that don't stand a chance of working in the anticipated digital complexity of our economy in 10 years' time."

But for all of this, some think we're jumping the gun. AI, after all, seems a luxury item in an era in which many firms still don't do regular patch management.

At this year's RSA conference, crypto experts mulled how AI is applicable in security, with some questioning how to train the machine and what the human's role is. Machine reliability and oversight were also mentioned, while others suggested it's odd to see AI championed given security is often felled by low-level basics.

"I completely agree," says Rose. "Security professionals need to continually reassess the basics (patching, culture, SDLP etc.), otherwise AI is just a solution that will tell you about the multitude of breaches you couldn't, and didn't, prevent."

Schneier sees it slightly differently. He believes security can be advanced and yet still fail at the basics, while he poignantly notes AI should only be for those who have got the security posture and processes in place, and are ready to leverage the machine data.

Ethics, he says, is only an issue for full automation, and he's unconcerned about such tools being utilized by black hats or surveillance agencies.

"I think this is all a huge threat," says Ford, disagreeing. "I would rank it as one of the top dangers associated with AI in the near to medium term. There is a lot of focus on 'super-intelligent machines taking over'...but this lies pretty far in the future. The main concern now is what bad people will do when they have access to AI."

Warren agrees there are obstacles for CISOs to overcome. "It is forward thinking, and many organizations still flounder with the basics."

He adds that with these AI benefits will come challenges, such as the costly rewriting of apps and the possibility of introducing new threats: "...Advancements in technology introduce new threat vectors."

"A balance is required, or the environment will advance to a point where the industry simply cannot keep pace."

AI and security is not necessarily a perfect match. As Vectra CISO Gunter Ollmann blogged recently, buzzwords have made it appear that security automation is the same as AI security, meaning there's a danger of CISOs buying solutions they don't need, while there are further concerns over AI ethics, quality control and management.

Arnaldo critically points out that AI security is no panacea either. "Some attacks are very difficult to catch: there are a wide range of attacks at a given organization, over various ranges of time, and across many different data sources."

"Second, the attacks are constantly changing... Therefore, the biggest challenge is training the AI."

If this points to some AI solutions being ill-equipped, Palmer adds further weight to the claim.

"Most of the machine learning inventions that have been touted aren't really doing any learning on the job within the customer's environment. Instead, they have models trained on malware samples in a vendor's cloud and are downloaded to customer businesses like anti-virus signatures. This isn't particularly progressive in terms of customer security and remains fundamentally backward looking."

So, how soon can we see it in security?

"A way off," notes Rose. "Remember that the majority of IPS systems are still in IDS mode because firms lack the confidence to rely on intelligent systems to make automated choices and unsupervised changes to their core infrastructure. They are worried that, in acting without context, the control will damage the service, and that's a real threat."

But the need is imperative: "If we don't succeed in using AI to improve security, then we will have big problems because the bad guys will definitely be using it," says Ford.

"I absolutely believe increased automation and ease of use are the only ways in which we are going to improve security, and AI will be a huge part of that," says Palmer.
