Daily Archives: July 5, 2017

How virtual reality may change your life – BBC News

Posted: July 5, 2017 at 11:13 pm


Virtual reality (VR) is being touted as a big growth area for film-makers, engaging audiences in ways traditional film can't. But it is also being explored everywhere from rock music to psychiatric treatments. Is it all just a passing fad - or could VR ...


Posted in Virtual Reality | Comments Off on How virtual reality may change your life – BBC News

The Only Thing Less Fun Than Watching Virtual Reality Porn Is Making Virtual Reality Porn – GQ Magazine

Posted: at 11:13 pm

Illustration by Cécile Dormeau

It's hard.

You've probably been fantasizing about virtual reality porn ever since you discovered regular porn. Well, welcome to the VR revolution: PornHub reports that VR porn videos receive over 500,000 views daily. Finally, porn that puts you in the middle of the action.

Like the first iteration of the personal computer or the Internet before it was the meme factory it is now, VR porn is still very much in its beginning stages. Viewers at home only have a handful of websites to choose from, with little variety between scenes. For actors and producers, VR porn can be awkward, expensive, and inhibiting to shoot.

While we'd like to think there are miniature women begging us to fuck them taking residence within our VR goggles, there's a lot more to shooting virtual reality erotica than meets the eye. To get the scoop on what a day on a VR set looks like, we spoke to some of the actors and producers who put naked people in front of us.

"It feels like I'm in The Jetsons. Really," says adult performer and porntrepreneur Joanna Angel. But Angel's version of Orbit City has its own challenges. "I think the most stressful part, from a producer's standpoint, is that you can't play the scene back on the camera at any point. You shoot and cross your fingers that after it's all stitched together, no one's arm or leg is cut off. It's very unsettling for a Jewish producer, knowing full well there's a 50/50 chance that you might be throwing your money away."

Shooting VR, it turns out, is actually really frustrating. The cameras offer new possibilities for viewers, but they limit the positions actors and actresses can engage in.

"The possible positions all depend on where the rig is stationed," continues Angel, "not to mention how big or tall the performers are. Generally, there's a lot of cowgirl and reverse cowgirl more than anything else. In a regular porn scene, you always want to get as many positions, movement, and excitement as possible. In a VR scene your movements and positions are limited. It feels very much like you're performing inside of a little box."

What really separates a VR porn shoot from your run-of-the-mill shoot is the number of people who need to be involved to make it work. Daniel Dilallo, a director for 3X Entertainment, a porn production company in Los Angeles that works exclusively with Vivid, recently shot The Kim Kardashian Superstar VR Experience (a killer video that uses a Kim Kardashian lookalike to pick up where the real sex tape left off). Dilallo let me in on some of the hardships of shooting VR.

"The women can't actually be too close to the camera," says Dilallo. "They have to have their scenes blocked out and the actress needs to know where she can and cannot go. Men have it worse. Generally, guys are stuck in one position with a camera going over their shoulder. It's difficult for the guys to even see what's going on, which makes timing especially hard. He has to sit back, he can't use his hands. It's hard for the guys to last." The camera bears a resemblance to some of the clunky robots of the 1950s sci-fi era and usually hovers over the dude's shoulder as he struggles to recall every unsexy scenario he's ever encountered in a strained attempt to not finish. It's a world away from your typical porno shoot.

But the really hard stuff happens in the editing room. "The post-production time and process is crazy-long, tedious and expensive," says 3X's VR director, Adam Block, who also had a hand in creating The Kim Kardashian Superstar VR Experience. "There's also a huge learning curve as this tech continues to evolve - even for the talent - as most have never shot in VR, which requires a certain amount of instruction and parameters."

Dilallo thinks the near future of off-the-screen porn will be based in augmented reality, where images are superimposed onto your surroundings (i.e. Pokémon Go). This is where those horned-up pipe dream fantasies of yesteryear will finally come to fruition. Soon you'll be able to bring porn stars into your living room. It'll be as easy as slipping on a pair of goggles, choosing who you'd like to have over and seeing a famous adult actress flop around on your Ikea couch. What a time to be alive and alone!



Apple software engineers join WebVR virtual reality accessibility group – AppleInsider (press release) (blog)

Posted: at 11:13 pm

By AppleInsider Staff Wednesday, July 05, 2017, 07:22 pm PT (10:22 pm ET)

While not an official company endorsement of the WebVR platform, three Apple employees are now listed on the WebVR Community Group's participants webpage, UploadVR reports.

Specifically, Brandel Zachernuk, David Singer and Dean Jackson join a cadre of web developers representing various internet services and browsers, like Google's Chrome, Microsoft's Internet Explorer, and Mozilla's Firefox. Developers from Intel, Facebook, Samsung and other top technology companies are also part of the working group.

According to his LinkedIn profile, Zachernuk serves as a senior front-end developer on Apple's marketing and communications team. Jackson is a WebGL spec editor, while Singer has worked in Apple's multimedia and software standards office since 1988.

As noted by the group's co-chair Brandon Jones, a Chrome WebVR and WebGL developer at Google, Apple's participation means WebVR now has input from every major web browser vendor. Apple markets the Safari web browser that ships with both macOS and iOS.

WebVR is an open API that seeks to provide VR hardware support through modern web browsers. Developers working on the standard are building in support for devices ranging from Oculus Rift to Google Cardboard to Playstation VR. The goal, according to contributors, is to broaden access to VR experiences.

Jones notes that participation in the affiliated WebVR Community Group does not necessarily imply commitment to the standard. However, given its penchant for secrecy, Apple's public presence at the community group suggests the company is at least investigating potential integrations.


Newton couple creating virtual reality software in basement – Topeka Capital Journal

Posted: at 11:13 pm

NEWTON - A Newton couple is bringing Silicon Valley to their Kansas-based lab, better known as their basement.

Corey and Michele Janssens, founders of ViewVerge, are enhancing the way people see media through a 2D-to-3D converter and a 3D-to-3D enhancer for augmented and virtual reality (AR/VR), The Wichita Eagle (http://bit.ly/2sNxVyt) reported.

"Our goal was to basically re-create a biological version of 3D - a more natural 3D - because of AR/VR," Corey Janssens said. "We perceive in 3D, so it just seemed kind of natural: Why have a 3D device and watch 2D content?"

The couple has struggled to attract investors who want to invest outside of Silicon Valley, but said they have no plans to leave the state.

"What we're doing is a Silicon Valley venture in Kansas," Michele Janssens said. "I knew that would be a challenge, and it is just as big a challenge as we thought it would be.

"But there are good things happening in Kansas. And everyone tells us there is a push right now to venture more into tech and bring jobs and money to the Wichita area."

While the Janssenses have sought and attracted mentors nationwide in 3D technology, marketing and branding, they said success will occur when they have licensing and investors to help make ViewVerge technology readily available through mobile applications, or for 2D-to-3D conversion in the medical and military fields.

Corey Janssens, a former Army unmanned aerial vehicle pilot and self-taught theoretical physicist and engineer, and Michele Janssens, a speech therapist, have what they call a marriage of science and communication.

"An interesting fact that is a very integral part of who we are as a couple and hopefully as a vital company: Corey is autistic, I am a speech therapist, and we're married," Michele Janssens said. "He is passionate about building things and physics and the science, and I am passionate about communication.

"It's really kind of a unique marriage."

Corey Janssens said he has had many jobs in his life that led him to developing this software.

It was when he spent five years as part of - and then leading - a confidential Microsoft think-tank that Bill Gates called him "a modern-day Isaac Newton," according to a ViewVerge media release.

"That interaction and exposure led him to apply to get one of the first rounds of developer HoloLens they released," Michele Janssens said. "We waited about 10 years to do something like this."

The couple received their Microsoft HoloLens - the first self-contained holographic computer - in May 2016.

"When we got that HoloLens, he knew this was it," Michele Janssens said.

It took just three to four months for Corey Janssens to develop the foundation for the software, and after continual improvements they think they have the answer to natural, human-like 3D media.

"I don't believe you're going to have much 2D media in the future," he said. "It just makes more sense to have graphics that are put in the format of the way we naturally see things.

"If you build a system that is converting 2D to 3D, in a sense that is what the human brain does. We don't actually see 3D; you infer distance from having two eyes.

"So by mimicking the biological system well enough with some added algorithms, you have an early computer vision system that is much more human."
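The depth-based re-projection Janssens describes can be illustrated with a toy sketch in Python. This is a generic depth-image-based rendering demo, not ViewVerge's actual algorithm; the image and depth map here are synthetic stand-ins:

```python
import numpy as np

def synthesize_stereo(image, depth, max_shift=8):
    """Toy depth-image-based rendering: shift each pixel horizontally
    in proportion to its estimated depth to fake a second viewpoint.

    image: (H, W) grayscale array; depth: (H, W) values in [0, 1],
    where 1.0 means nearest. Returns (left, right) views.
    """
    h, w = image.shape
    shifts = (depth * max_shift).astype(int)  # nearer pixels shift more
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    cols = np.arange(w)
    for row in range(h):
        lcol = np.clip(cols - shifts[row], 0, w - 1)
        rcol = np.clip(cols + shifts[row], 0, w - 1)
        left[row, lcol] = image[row, cols]
        right[row, rcol] = image[row, cols]
    return left, right

# A flat horizontal gradient with a "near" square in the middle.
img = np.tile(np.linspace(0, 1, 64), (64, 1))
depth = np.zeros((64, 64))
depth[16:48, 16:48] = 1.0
left, right = synthesize_stereo(img, depth)
```

Feeding the two views to the two eyes (as a VR headset does) produces the parallax cue the brain reads as depth; real converters estimate the depth map from the 2D content itself rather than being handed one.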

The 3D software currently available has been gimmicky, Michele Janssens said, and that is not their goal.

"When (people) hear 3D, they think stuff popping out in the face, and that's not actually what 3D is," Michele Janssens said.

"Our goals are to make it natural and comfortable, just like when you're looking around."


Visual effects titan Digital Domain aims for global lead in virtual reality content, services – South China Morning Post

Posted: at 11:13 pm

Digital Domain Holdings, operator of the world's largest independent visual-effects studio, plans to sharpen its focus on virtual reality technology initiatives after beefing up its senior management and recently raising fresh funding.

"We are strongly focused on developing our business model of technology-plus-entertainment in the virtual reality industry," Peter Chou, the chairman of Hong Kong-listed Digital Domain, told a press conference on Wednesday.

The company expects to bolster that effort with the help of new high-level recruits to its board of directors. These are: Wei Ming, the former general manager of Alibaba Group Holding's digital entertainment business unit; Pu Jian, a vice-president at mainland Chinese conglomerate Citic; Alan Song Anlan, the managing partner at SoftBank China Venture Capital; and John Lagerling, the former vice-president of business development for mobile and product partnerships at Facebook.

Wei was also named as Digital Domain's new vice-chairman and chief executive of the company's fast-developing greater China business unit.

Chief executive Daniel Seah Ang said the mainland, which is forecast to be the world's biggest movie market this year, is now providing many opportunities for the company to expand its media and entertainment business, as well as further penetrate the nascent marketplace for virtual reality content and services.

Digital Domain runs award-winning movie visual special-effects studio Digital Domain 3.0, which also provides services to major commercial advertisers such as Nike and Apple. Canadian subsidiaries Immersive Ventures and IM360 Entertainment are involved in creating original virtual reality content.

The company has been expanding into virtual reality content amid rising global interest for virtual reality headsets developed or set to be introduced by the likes of Samsung Electronics, HTC and Lenovo Group.

Worldwide revenue for the combined augmented reality and virtual reality market is forecast to increase 130.5 per cent to US$13.9 billion this year, up from US$6.1 billion last year, according to the latest estimates from research firm IDC.

Virtual reality technology immerses a user in an imagined world, like in a video game or movie, with the aid of an opaque headset, such as HTC's Vive and Google's Daydream platform for Android smartphones.

Augmented reality, meanwhile, provides an overlay of digital imagery onto the real world with the use of a clear headset like Microsoft's HoloLens or an advanced smartphone that supports the technology, such as Lenovo's Phab 2 Pro.

"Social virtual reality development is gaining traction, and Digital Domain is well-poised to lead the virtual reality industry ... and develop more relevant technologies in the future," said new company director Lagerling of the creation and use of such content in social media.

Digital Domain forged a strategic partnership with Alibaba-backed online video platform Youku Tudou last year to step up development of virtual reality content for mainstream distribution on the mainland. E-commerce giant Alibaba owns the South China Morning Post.

Digital Domain also raised fresh funding in the fourth quarter of last year to support its business expansion.

The new funds include: HK$38.5 million from Paul Jacobs, the executive chairman at US tech firm Qualcomm; HK$309 million from the Munsun VR Fund, a limited partnership managed by Munsun Asset Management (Asia) that is owned by China Precious Metal Resources Holdings; and HK$200 million from red-chip Citic, the mainland's biggest conglomerate, and SoftBank China Venture Capital.

The new initiatives by Digital Domain this year are expected by management to bolster its overall business moving forward.

The company reported in March a wider net loss of HK$479.4 million last year, from HK$156.3 million in 2015, amid rising operating expenses.


UCLA helps virtual reality lead the charge in battle for US Army recruitment – UCLA Newsroom

Posted: at 11:13 pm

The UCLA Army ROTC is working to inspire people to choose the U.S. military as a potential career path. Students from the current Bruin battalion appear in the Army's first virtual reality recruitment video, which can be viewed through nearly any virtual reality viewer, including Google Cardboard, Samsung Gear VR, HTC Vive or Oculus Rift.

The video, titled "Leaders Made Here," was created via a collaboration between the UCLA Department of Military Science/Army Reserve Officers' Training Corps and Holor Media, a virtual reality company based in Hollywood and led by former executives from Disney, Pixar and Industrial Light and Magic.

The six-minute immersive film gives viewers a chance to live life as an Army cadet participating in a field training exercise. It was filmed at Camp Pendleton near San Diego and features students from the current Bruin battalion, which is made up of students from UCLA as well as other nearby colleges that don't offer ROTC. Interspersed with real testimonies from college students enrolled in the program, the film gives the viewer a chance to take part in land navigation, medical training, Army ceremonies and even an obstacle course.


Members of the UCLA Army ROTC program hope this video will increase awareness of the opportunity for college students to join a nearly 100-year-old tradition of service by joining the Bruin battalion and becoming a United States Army officer.

"If potential students enjoy the experience of being an Army cadet in VR, we challenge them to apply for and experience the real thing," says Lt. Col. Shannon Stambersky, UCLA professor of military science. "Virtual reality is fantastic and all as a starting point, but reality-reality itself can't be beat."

Video director Brian Tan believes a series of shorts like this could be a recruiting game changer.

"Unlike most 360-degree videos, which are passive, fly-on-the-wall experiences, this was filmed from a first-person point of view, giving viewers unprecedented interactivity and engagement close to the real thing," said Tan, who goes by BLT.

Tan is a UCLA alumnus who graduated in 2010 with a degree in political science/international relations. He also started UCLA's first film and photography club, which just celebrated its 11th anniversary, and filmed the first video featuring the Bruin battalion in 2009. Tan also worked with the current students on a 100-year anniversary video that will be released soon.

The students featured in the video are all UCLA juniors: Ainara Manlutac (majoring in chemistry), Edwin Chang (majoring in geography/environmental studies), Daisy Guilyard (majoring in political science), Louis Bethge (majoring in Russian studies) and Kiana Malcolm (majoring in political science). All of them are expected to join the U.S. Army after they graduate in 2018.


Elia Petridis Launches Virtual Reality Company: Fever Content (Exclusive) – TheWrap

Posted: at 11:13 pm

Elia Petridis, who directed legendary old-time actor Ernest Borgnine in his final role in The Man Who Shook the Hand of Vicente Fernandez, has decided to focus on what could be Hollywood's future by starting virtual reality company Fever Content.

As part of the company's launch, Petridis has brought on Craig Bernard, who previously served as chief creative officer for SAMO VR, as Fever Content's executive producer. At SAMO, Bernard oversaw several VR projects, including the music video for the EDEN song "Drugs." That film was showcased by VR company Jaunt at this year's Sundance and SXSW festivals.

Petridis' VR experience includes the narrative live-action thriller Eye for an Eye: A Séance in Virtual Reality, a collaboration with Gnomes & Goblins virtual reality studio Wevr.


"I am very excited to be working with Craig," Petridis said in a statement. "Our team understands the potential VR grants to entertainment. We fuse creation and technology to unlock the heart of each experience."

"Our approach streamlines the creative process, which is crucial to our many partners' and clients' needs and expectations," Bernard said in the statement. "We have cultivated a large network of technical and creative partners, which allows us to support even the most ambitious goals."

"Fever Content makes experiences that deserve to be fully immersive," Petridis added. "And we know exactly how our experiences are meant to make you feel. Our content will reach inside of you and grab at your heartstrings. We are thrilled to bring audiences the latest and greatest wonders of VR."

CES Asia, the three-year-old overseas version of the annual Las Vegas tech extravaganza, took over five halls at the Shanghai New International Expo Center to showcase the latest and greatest in consumer technology -- which included plenty of robots, smart appliances and self-driving cars. A full 450 exhibiting companies and more than 30,000 attendees test-drove some products at the bleeding edge of innovation.

Cowarobot autonomous suitcase: This is not your typical overnight bag. The rolling suitcase from China's Cowarobot can identify and follow its owner through airport concourse traffic, avoiding obstacles along the way. It also automatically locks depending on its distance from the owner, and sends an alert when it's more than a safe distance away.

Pico Neo DKS: The Pico Neo DKS is a wireless virtual reality rig that plays like a full-fledged PC setup, with a 2.5K 5.5-inch HD screen that smooths out the often-blurry and clunky gameplay of most mobile VR devices. The setup uses Qualcomm's Snapdragon 820 processor to deliver substantial computing power.

HiScene HiAR: Like the Neo DKS, one of CES Asia's buzziest augmented reality headsets also features the Qualcomm Snapdragon 820 processor. The HiAR goggles, which feel heftier than many other AR sets, use artificial intelligence as part of an always-on voice control capability -- as augmented reality continues to move toward a Minority Report future.

Shadow Creator Halomini: In case you hadn't noticed, virtual and augmented reality was kind of a big deal at CES Asia, as it was at the flagship Vegas show earlier this year. Shadow Creator's Halomini headset, which feels like a lighter version of Microsoft's HoloLens, allows users to set appointments, chat with friends and watch videos, while keeping their eyes on whatever they're watching.

Ovo Technology Danovo: CES Asia is full of robots, but the Danovo stood out for its fun personality - as much as that applies to an inanimate object. The egg-shaped machine from China's Ovo Technology can navigate around items, dance, engage with people, and even project video by sliding over the top of its shell. Ovo also makes trash-collecting and security robots, but they're a lot more serious than the Danovo.

Gowild Holoera: Virtual reality can be lonely, which is why Gowild decided to add a friend. Amber, a 3D hologram who lives inside its pyramid-shaped Holoera device, can respond to commands, read moods and cheer users up with a well-timed song.

Qihan Sanbot: Another entry in CES Asia's parade of robots was Qihan's Sanbot, which is based on IBM's "Jeopardy!"-winning Watson system. Sanbot can recognize and communicate with customers in 30 languages and process credit card payments. It also does a delightful dance, complete with glowing, gyrating limbs.

Baidu Little Fish: The smart speaker from Chinese tech giant Baidu is the country's answer to the Amazon Echo, only with a high-resolution 8-inch screen and a camera that turns to face the user. It can handle the basics like controlling smart-home devices and playing music, and its face-recognition software allows authorized users to order food and medicine.

PowerVision Power Ray: The fishing robot includes ocean mapping, an integrated fish-luring light and even an optional remote bait-drop feature that allows users to place the hook wherever they want. Its camera shoots in 4K UHD and is capable of 1080p real-time streaming. It even connects with the Zeiss VR One Plus VR headset to turn real-life fishing into a virtual reality game.

JD JDrone: The unmanned aircraft is part of a plan from China's second-biggest online retailer, JD.com, to use drones to deliver products that weigh as much as one metric ton. The company is also developing fully automated warehouses.

Itonology CarMew C1: This lighter-socket-mounted device gives cars high-speed Wi-Fi, allowing people in them (preferably not the driver) to get work done and stream music. It connects near-field FM, auxiliary dual channels and car audio, and enables sharing of 4G networks.



AI is changing how we do science. Get a glimpse – Science Magazine

Posted: at 11:12 pm

By Science News Staff | Jul. 5, 2017, 11:00 AM

Particle physicists began fiddling with artificial intelligence (AI) in the late 1980s, just as the term "neural network" captured the public's imagination. Their field lends itself to AI and machine-learning algorithms because nearly every experiment centers on finding subtle spatial patterns in the countless, similar readouts of complex particle detectors - just the sort of thing at which AI excels. "It took us several years to convince people that this is not just some magic, hocus-pocus, black box stuff," says Boaz Klima, of Fermi National Accelerator Laboratory (Fermilab) in Batavia, Illinois, one of the first physicists to embrace the techniques. Now, AI techniques number among physicists' standard tools.

Neural networks search for fingerprints of new particles in the debris of collisions at the LHC.


Particle physicists strive to understand the inner workings of the universe by smashing subatomic particles together with enormous energies to blast out exotic new bits of matter. In 2012, for example, teams working with the world's largest proton collider, the Large Hadron Collider (LHC) in Switzerland, discovered the long-predicted Higgs boson, the fleeting particle that is the linchpin to physicists' explanation of how all other fundamental particles get their mass.

Such exotic particles don't come with labels, however. At the LHC, a Higgs boson emerges from roughly one out of every 1 billion proton collisions, and within a billionth of a picosecond it decays into other particles, such as a pair of photons or a quartet of particles called muons. To reconstruct the Higgs, physicists must spot all those more-common particles and see whether they fit together in a way that's consistent with them coming from the same parent - a job made far harder by the hordes of extraneous particles in a typical collision.

Algorithms such as neural networks excel in sifting signal from background, says Pushpalatha Bhat, a physicist at Fermilab. In a particle detector - usually a huge barrel-shaped assemblage of various sensors - a photon typically creates a spray of particles, or "shower," in a subsystem called an electromagnetic calorimeter. So do electrons and particles called hadrons, but their showers differ subtly from those of photons. Machine-learning algorithms can tell the difference by sniffing out correlations among the multiple variables that describe the showers. Such algorithms can also, for example, help distinguish the pairs of photons that originate from a Higgs decay from random pairs. "This is the proverbial needle-in-the-haystack problem," Bhat says. "That's why it's so important to extract the most information we can from the data."
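The kind of multivariate signal/background separation Bhat describes can be sketched in a few lines. This is a toy example: the "shower" features (width, depth, electromagnetic energy fraction) and their distributions are invented for illustration, and scikit-learn's gradient boosting stands in for the collaborations' own analysis frameworks:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic shower descriptors: photons and hadrons produce subtly
# different (width, depth, energy-fraction) distributions.
n = 2000
photons = rng.normal([1.0, 8.0, 0.9], [0.2, 1.0, 0.05], size=(n, 3))
hadrons = rng.normal([1.6, 11.0, 0.7], [0.4, 2.0, 0.10], size=(n, 3))
X = np.vstack([photons, hadrons])
y = np.array([1] * n + [0] * n)  # 1 = signal (photon), 0 = background

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```

The classifier learns the correlations among the variables rather than cutting on each one separately, which is exactly why such methods outperform simple one-variable thresholds on overlapping distributions.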

Machine learning hasn't taken over the field. Physicists still rely mainly on their understanding of the underlying physics to figure out how to search data for signs of new particles and phenomena. But AI is likely to become more important, says Paolo Calafiura, a computer scientist at Lawrence Berkeley National Laboratory in Berkeley, California. In 2024, researchers plan to upgrade the LHC to increase its collision rate by a factor of 10. At that point, Calafiura says, machine learning will be vital for keeping up with the torrent of data. - Adrian Cho

With billions of users and hundreds of billions of tweets and posts every year, social media has brought big data to social science. It has also opened an unprecedented opportunity to use artificial intelligence (AI) to glean meaning from the mass of human communications, psychologist Martin Seligman has recognized. At the University of Pennsylvania's Positive Psychology Center, he and more than 20 psychologists, physicians, and computer scientists in the World Well-Being Project use machine learning and natural language processing to sift through gobs of data to gauge the public's emotional and physical health.

That's traditionally done with surveys. "But social media data are unobtrusive, it's very inexpensive, and the numbers you get are orders of magnitude greater," Seligman says. It is also messy, but AI offers a powerful way to reveal patterns.

In one recent study, Seligman and his colleagues looked at the Facebook updates of 29,000 users who had taken a self-assessment of depression. Using data from 28,000 of the users, a machine-learning algorithm found associations between words in the updates and depression levels. It could then successfully gauge depression in the other users based only on their updates.
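The word-association approach can be sketched in miniature. The status updates and labels below are made up, and a simple bag-of-words model stands in for the far larger data sets and richer pipeline the World Well-Being Project actually uses:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Made-up status updates with self-assessment labels
# (1 = high depression score); real studies use tens of
# thousands of users, not eight sentences.
updates = [
    "so tired of everything, alone again tonight",
    "can't sleep, everything hurts, what's the point",
    "crying all day, nobody cares",
    "feeling empty and worthless lately",
    "great hike with friends this morning!",
    "so excited for the concert this weekend",
    "promotion at work, celebrating with family",
    "beautiful day, grateful for everything",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

# Learn word-to-label associations, then score unseen text.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(updates, labels)
print(model.predict(["alone and exhausted, can't stop crying"]))
```

Held-out users are scored exactly this way in the study: the model never sees their self-assessments, only their words.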

In another study, the team predicted county-level heart disease mortality rates by analyzing 148 million tweets; words related to anger and negative relationships turned out to be risk factors. The predictions from social media matched actual mortality rates more closely than did predictions based on 10 leading risk factors, such as smoking and diabetes. The researchers have also used social media to predict personality, income, and political ideology, and to study hospital care, mystical experiences, and stereotypes. The team has even created a map coloring each U.S. county according to well-being, depression, trust, and five personality traits, as inferred from Twitter.

"There's a revolution going on in the analysis of language and its links to psychology," says James Pennebaker, a social psychologist at the University of Texas in Austin. He focuses not on content but style, and has found, for example, that the use of function words in a college admissions essay can predict grades. Articles and prepositions indicate analytical thinking and predict higher grades; pronouns and adverbs indicate narrative thinking and predict lower grades. He also found support for suggestions that much of the 1728 play Double Falsehood was likely written by William Shakespeare: Machine-learning algorithms matched it to Shakespeare's other works based on factors such as cognitive complexity and rare words. "Now, we can analyze everything that you've ever posted, ever written, and increasingly how you and Alexa talk," Pennebaker says. The result: richer and richer pictures of who people are. - Matthew Hutson
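A toy version of this style-based analysis might look like the following. The word lists are illustrative stand-ins, far smaller than the LIWC-style dictionaries used in Pennebaker's research, and the two sample texts are invented:

```python
import re

# Tiny function-word lexicons (illustrative only).
ARTICLES_PREPS = {"the", "a", "an", "of", "in", "on", "to", "with", "by"}
PRONOUNS_ADVERBS = {"i", "you", "he", "she", "it", "we", "they",
                    "then", "really", "very", "so"}

def style_profile(text):
    """Return the share of 'analytic' vs. 'narrative' function words."""
    words = re.findall(r"[a-z']+", text.lower())
    analytic = sum(w in ARTICLES_PREPS for w in words)
    narrative = sum(w in PRONOUNS_ADVERBS for w in words)
    total = max(len(words), 1)
    return {"analytic": analytic / total, "narrative": narrative / total}

essay = ("The analysis of the data in this study rests on a comparison "
         "of the rates of change observed in each of the groups.")
story = ("So then I really thought we should just go, and she said it "
         "was very late, but we went anyway and it was so fun.")
print(style_profile(essay), style_profile(story))
```

In the published work these per-category rates become features in a regression against outcomes such as grades; the sketch only shows the feature-extraction step.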

For geneticists, autism is a vexing challenge. Inheritance patterns suggest it has a strong genetic component. But variants in scores of genes known to play some role in autism can explain only about 20% of all cases. Finding other variants that might contribute requires looking for clues in data on the 25,000 other human genes and their surrounding DNA - an overwhelming task for human investigators. So computational biologist Olga Troyanskaya of Princeton University and the Simons Foundation in New York City enlisted the tools of artificial intelligence (AI).

Artificial intelligence tools are helping reveal thousands of genes that may contribute to autism.

BSIP SA/ALAMY STOCK PHOTO

"We can only do so much as biologists to show what underlies diseases like autism," explains collaborator Robert Darnell, founding director of the New York Genome Center and a physician scientist at The Rockefeller University in New York City. "The power of machines to ask a trillion questions where a scientist can ask just 10 is a game-changer."

Troyanskaya combined hundreds of data sets on which genes are active in specific human cells, how proteins interact, and where transcription factor binding sites and other key genome features are located. Then her team used machine learning to build a map of gene interactions and compared those of the few well-established autism risk genes with those of thousands of other unknown genes, looking for similarities. That flagged another 2500 genes likely to be involved in autism, they reported last year in Nature Neuroscience.
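The comparison idea, that genes with similar interaction neighborhoods may play similar disease roles, can be caricatured in a few lines. Everything here (the interaction map, the gene names, and the use of Jaccard overlap as the similarity measure) is an invented stand-in for the team's far richer functional map:

```python
# Toy "guilt by association" ranking: genes whose interaction partners
# overlap those of known risk genes score higher. All names are invented.
network = {
    "KNOWN1": {"A", "B", "C"},
    "KNOWN2": {"B", "C", "D"},
    "GENE_X": {"A", "B", "C", "D"},   # neighborhood overlaps the risk genes
    "GENE_Y": {"P", "Q"},             # unrelated neighborhood
}
risk_genes = ["KNOWN1", "KNOWN2"]

def jaccard(s, t):
    """Overlap of two partner sets, from 0 (disjoint) to 1 (identical)."""
    return len(s & t) / len(s | t)

def risk_score(gene):
    """Mean partner overlap with the known risk genes."""
    return sum(jaccard(network[gene], network[r]) for r in risk_genes) / len(risk_genes)

ranked = sorted(["GENE_X", "GENE_Y"], key=risk_score, reverse=True)
```

A real analysis scores thousands of genes this way against hundreds of data sets, which is what makes machine assistance essential.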

But genes don't act in isolation, as geneticists have recently realized. Their behavior is shaped by the millions of nearby noncoding bases, which interact with DNA-binding proteins and other factors. Identifying which noncoding variants might affect nearby autism genes is an even tougher problem than finding the genes in the first place, and graduate student Jian Zhou in Troyanskaya's Princeton lab is deploying AI to solve it.

To train the program, a deep-learning system, Zhou exposed it to data collected by the Encyclopedia of DNA Elements and Roadmap Epigenomics, two projects that cataloged how tens of thousands of noncoding DNA sites affect neighboring genes. The system in effect learned which features to look for as it evaluates unknown stretches of noncoding DNA for potential activity.
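A common way to feed DNA into a deep-learning system is to one-hot encode each base. Here is a minimal sketch of that featurization step only; the network that would consume it is omitted, and this is not DeepSEA's actual code:

```python
# One-hot featurization of a DNA window: each base becomes a 4-vector.
# This is only the input step; the deep network that scores regulatory
# activity from these vectors is omitted.
BASES = "ACGT"

def one_hot(seq):
    """Encode a DNA string as a list of one-hot vectors over A, C, G, T."""
    return [[1 if base == b else 0 for b in BASES] for base in seq.upper()]

encoded = one_hot("GATC")
```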

When Zhou and Troyanskaya described their program, called DeepSEA, in Nature Methods in October 2015, Xiaohui Xie, a computer scientist at the University of California, Irvine, called it a milestone in applying deep learning to genomics. Now, the Princeton team is running the genomes of autism patients through DeepSEA, hoping to rank the impacts of noncoding bases.

Xie is also applying AI to the genome, though with a broader focus than autism. He, too, hopes to classify any mutations by the odds they are harmful. But he cautions that in genomics, deep learning systems are only as good as the data sets on which they are trained. "Right now I think people are skeptical that such systems can reliably parse the genome," he says. "But I think down the road more and more people will embrace deep learning." Elizabeth Pennisi

This past April, astrophysicist Kevin Schawinski posted fuzzy pictures of four galaxies on Twitter, along with a request: Could fellow astronomers help him classify them? Colleagues chimed in to say the images looked like ellipticals and spirals, familiar species of galaxies.

Some astronomers, suspecting trickery from the computation-minded Schawinski, asked outright: Were these real galaxies? Or were they simulations, with the relevant physics modeled on a computer? "In truth they were neither," he says. At ETH Zurich in Switzerland, Schawinski, computer scientist Ce Zhang, and other collaborators had cooked the galaxies up inside a neural network that doesn't know anything about physics. It just seems to understand, on a deep level, how galaxies should look.

With his Twitter post, Schawinski just wanted to see how convincing the network's creations were. But his larger goal was to create something like the technology in movies that magically sharpens fuzzy surveillance images: a network that could make a blurry galaxy image look like it was taken by a better telescope than it actually was. That could let astronomers squeeze out finer details from reams of observations. "Hundreds of millions or maybe billions of dollars have been spent on sky surveys," Schawinski says. "With this technology we can immediately extract somewhat more information."

The forgery Schawinski posted on Twitter was the work of a generative adversarial network, a kind of machine-learning model that pits two dueling neural networks against each other. One is a generator that concocts images, the other a discriminator that tries to spot any flaws that would give away the manipulation, forcing the generator to get better. Schawinski's team took thousands of real images of galaxies, and then artificially degraded them. Then the researchers taught the generator to spruce up the images again so they could slip past the discriminator. Eventually the network could outperform other techniques for smoothing out noisy pictures of galaxies.

AI that knows what a galaxy should look like transforms a fuzzy image (left) into a crisp one (right).

KIYOSHI TAKAHASE SEGUNDO/ALAMY STOCK PHOTO
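The training-data trick described above, degrading real images and then teaching the generator to restore them, starts with a corruption step like the following toy version, which treats an "image" as a flat list of pixels. The block size and noise level are arbitrary illustrative choices, not the team's:

```python
import random

# Build a (clean, degraded) training pair: downsample by block averaging,
# then add bounded noise. A generator network would be trained to invert
# this corruption well enough to fool the discriminator.
def degrade(pixels, block=2, noise=0.1, seed=0):
    rng = random.Random(seed)
    # Blur/downsample: average each block of adjacent pixels.
    blurred = [sum(pixels[i:i + block]) / block
               for i in range(0, len(pixels), block)]
    # Corrupt with noise drawn from [-noise, noise).
    return [p + rng.uniform(-noise, noise) for p in blurred]

clean = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
fuzzy = degrade(clean)   # three noisy block averages of the six clean pixels
```

Because the clean original is kept alongside each degraded copy, the generator's output can be scored directly during training.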

Schawinski's approach is a particularly avant-garde example of machine learning in astronomy, says astrophysicist Brian Nord of Fermi National Accelerator Laboratory in Batavia, Illinois, but it's far from the only one. At the January meeting of the American Astronomical Society, Nord presented a machine-learning strategy to hunt down strong gravitational lenses: rare arcs of light in the sky that form when the images of distant galaxies travel through warped spacetime on the way to Earth. These lenses can be used to gauge distances across the universe and find unseen concentrations of mass.

Strong gravitational lenses are visually distinctive but difficult to describe with simple mathematical rules: hard for traditional computers to pick out, but easy for people. Nord and others realized that a neural network, trained on thousands of lenses, can gain similar intuition. "In the following months, there have been almost a dozen papers, actually, on searching for strong lenses using some kind of machine learning. It's been a flurry," Nord says.
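As a stand-in for that intuition-gaining step, here is a toy linear classifier trained on two invented features of candidate cutouts. A real lens search uses a deep convolutional network on image data, not a perceptron on hand-picked numbers:

```python
# Toy classifier standing in for a lens finder: a perceptron over two
# invented features (say, "arc curvature" and "brightness ratio").
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred                      # learn only from mistakes
            w = [w[0] + lr * err * x[0], w[1] + lr * err * x[1]]
            b += lr * err
    return w, b

lenses = [(0.9, 0.8), (0.8, 0.9)]       # labeled lens examples
non_lenses = [(0.1, 0.2), (0.2, 0.1)]   # ordinary galaxies
w, b = train_perceptron(lenses + non_lenses, [1, 1, 0, 0])

def is_lens(x):
    """Score a new candidate with the learned weights."""
    return w[0] * x[0] + w[1] * x[1] + b > 0
```

The point carries over: given enough labeled examples, the machine learns the decision boundary that was hard to write down by hand.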

And it's just part of a growing realization across astronomy that artificial intelligence strategies offer a powerful way to find and classify interesting objects in petabytes of data. To Schawinski, "That's one way I think in which real discovery is going to be made in this age of 'Oh my God, we have too much data.'" Joshua Sokol

Organic chemists are experts at working backward. Like master chefs who start with a vision of the finished dish and then work out how to make it, many chemists start with the final structure of a molecule they want to make, and then think about how to assemble it. "You need the right ingredients and a recipe for how to combine them," says Marwin Segler, a graduate student at the University of Münster in Germany. He and others are now bringing artificial intelligence (AI) into their molecular kitchens.

They hope AI can help them cope with the key challenge of molecule-making: choosing from among hundreds of potential building blocks and thousands of chemical rules for linking them. For decades, some chemists have painstakingly programmed computers with known reactions, hoping to create a system that could quickly calculate the most facile molecular recipes. However, Segler says, chemistry can be very subtle: "It's hard to write down all the rules in a binary way."

So Segler, along with computer scientist Mike Preuss at Münster and Segler's adviser Mark Waller, turned to AI. Instead of programming in hard and fast rules for chemical reactions, they designed a deep neural network program that learns on its own how reactions proceed, from millions of examples. "The more data you feed it the better it gets," Segler says. Over time the network learned to predict the best reaction for a desired step in a synthesis. Eventually it came up with its own recipes for making molecules from scratch.
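Stripped to its skeleton, recipe-finding is a recursive search from the target molecule back to purchasable ingredients. In the sketch below a hand-written rule table stands in for the neural network that, in the work described above, learns to propose such disconnections:

```python
# Retrosynthesis as recursive search: expand the target until every leaf
# is a purchasable compound. The rule table is hand-written here; a
# trained network would propose the disconnections instead.
RULES = {   # product -> required precursors (invented toy molecules)
    "target": ["intermediate", "reagent_a"],
    "intermediate": ["building_block_1", "building_block_2"],
}
STOCK = {"reagent_a", "building_block_1", "building_block_2"}

def plan(molecule):
    """Return the stock compounds needed to make `molecule`."""
    if molecule in STOCK:
        return [molecule]
    if molecule not in RULES:
        raise ValueError(f"no known route to {molecule}")
    ingredients = []
    for precursor in RULES[molecule]:
        ingredients += plan(precursor)
    return ingredients

route = plan("target")
```

The hard part, which the toy table hides, is that real molecules admit many competing disconnections at every step, so the search tree explodes without a good model to rank them.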

The trio tested the program on 40 different molecular targets, comparing it with a conventional molecular design program. Whereas the conventional program came up with a solution for synthesizing target molecules 22.5% of the time in a 2-hour computing window, the AI figured it out 95% of the time, they reported at a meeting this year. Segler, who will soon move to London to work at a pharmaceutical company, hopes to use the approach to improve the production of medicines.

Paul Wender, an organic chemist at Stanford University in Palo Alto, California, says it's too soon to know how well Segler's approach will work. But Wender, who is also applying AI to synthesis, thinks it could have a profound impact, not just in building known molecules but in finding ways to make new ones. Segler adds that AI won't replace organic chemists soon, because they can do far more than just predict how reactions will proceed. Like a GPS navigation system for chemistry, AI may be good for finding a route, but it can't design and carry out a full synthesis by itself.

Of course, AI developers have their eyes trained on those other tasks as well. Robert F. Service

The rest is here:

AI is changing how we do science. Get a glimpse - Science Magazine


How AI will change the way we live – VentureBeat

Posted: at 11:12 pm

Will robots take our jobs? When will driverless cars become the norm? How is Industry 4.0 transforming manufacturing? These were just some of the issues addressed at CogX in London last month. Held in association with The Alan Turing Institute, CogX 17 was an event bringing together thought leaders across more than 20 industries and domains to address the impact of artificial intelligence on society. To round off the proceedings, a prestigious panel of judges recognized some of the best contributions to innovation in AI in an awards ceremony.

In his keynote speech, Lord David Young, a former UK Secretary of State for Trade and Industry, was keen to point out that workers should not worry about being made unemployed by robots because, he said, most jobs that would be killed off were miserable anyway.

He told the conference that more jobs than ever would be automated in the future, but that this should be welcomed. "When the Spinning Jenny first came in, it was almost exactly the same," he said. "They thought it was going to kill employment. We may have a problem one day if the Googles of this world continue to get bigger and the Amazons spread into all sorts of things, but government has the power to regulate that, has the power to break it up."

"I'm not the slightest worried about it," he continued. "Most of the jobs are miserable jobs. What technology has to do is get rid of all the nasty jobs."

It's certainly an interesting analogy, comparing the current tech and AI revolution to the Industrial Revolution. It's hard to disagree that just as the proliferation of machines in the 18th and 19th centuries helped create new jobs and wealth, AI is likely to do the same. There is undoubtedly a bigger question around regulation and who's in charge of this new landscape, however.

CogX also threw up some fascinating panel discussions about transportation and smart cities. Panelists including M.C. Srivas, Uber's chief data scientist, and Huawei CTO Ayush Sharma talked at length about the necessity of self-driving cars in our towns and cities, whose roads have become "jails where commuters do time." And that's without delving into issues of safety and pollution.

Kenneth Cukier, The Economist's big data expert, asked the audience whether they thought autonomous cars were likely to hit our cities in either 5, 10, or 15 years. Most of those in attendance, along with the panel, agreed that we should see autonomous cars becoming the norm in the next 10 to 15 years, with clear legislation set to come in around 2023.

However, and this is something that affects us directly, the panel also agreed that although the mass manufacturing of self-driving cars is still a few years off, intelligent assistants for smart cars are imminent, likely to become standard within the next couple of years. Voice offers countless possibilities in the automotive space. Besides enabling the safe use of existing controls such as in-car entertainment systems or heating/air conditioning, it also offers GPS functionality as well as control over the vehicle's mechanics.

The session on Industry 4.0 kicked off by attempting to make sense of a term that has been used for several years. The general consensus was that "automating manufacturing" was the best way to express an idea that originated in a report by the German government. Industrial companies have to become automated to survive, and many are building highly integrated engines to capture data from their machines. The market for smart manufacturing tools is expected to hit $250 billion by 2018.

It's well known that robots are already used in manufacturing to handle larger-scale and more dangerous work. What the panel also discussed were other possibilities AI offers, such as virtual personal assistants for workers to help them complete their daily tasks or smart technology such as 3D printing and its benefits for smaller companies.

Even our entertainment these days is driven by AI. The Industry 4.0 session ended on a lighter note with Limor Schweitzer, CEO at RoboSavvy, encouraging Franky the robot to show the audience its dance moves. Sophia, a humanlike robot created by Hanson Robotics, also provided entertainment at the CogX awards ceremony; she announced the nominees and winners in the category of best innovation in artificial general intelligence, which included my company Sherpa, Alphabet's DeepMind, and Vicarious.

CogX also touched on the impact of AI on health, HR, education, legal services, fintech, and many other sectors. Panelists were in agreement that advances in AI must benefit all of us. While there are still many question marks about regulation of the sector, AI already permeates all aspects of our society.

Ian Cowley is the marketing manager at Sherpa, which uses algorithms based on probability models to predict information a user might need.

The rest is here:

How AI will change the way we live - VentureBeat


Google’s DeepMind Turns to Canada for Artificial Intelligence Boost – Fortune

Posted: at 11:12 pm

Google's high-profile artificial intelligence unit has a new Canadian outpost.

DeepMind, which Google bought in 2014 for roughly $650 million, said Wednesday that it would open a research center in Edmonton, Canada. The new research center, which will work closely with the University of Alberta, is the United Kingdom-based DeepMind's first international AI research lab.

DeepMind, now a subsidiary of Google parent company Alphabet (GOOG), recruited three University of Alberta professors to lead the new research lab. The professors, Rich Sutton, Michael Bowling, and Patrick Pilarski, will maintain their positions at the university while working at the new research office.


Sutton, in particular, is a noted expert in a subset of AI technologies called reinforcement learning and was an advisor to DeepMind in 2010. With reinforcement learning, computers look for the best possible way to achieve a particular goal, and learn from each time they fail.
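That trial-and-error loop can be shown with tabular Q-learning on a five-cell corridor. The environment, rewards, and parameters below are invented purely for illustration and have nothing to do with DeepMind's systems:

```python
import random

# Tabular Q-learning on a 5-cell corridor with the goal at the right end.
# The agent tries actions, fails, and improves its value estimates.
N, GOAL, ACTIONS = 5, 4, (-1, +1)          # states 0..4; move left or right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
rng = random.Random(0)

for _ in range(500):                        # episodes of trial and error
    s = 0
    while s != GOAL:
        # Mostly greedy, with occasional random exploration.
        a = rng.choice(ACTIONS) if rng.random() < 0.2 else max(ACTIONS, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), N - 1)      # walls clip the move
        reward = 1.0 if s2 == GOAL else 0.0
        # Update toward the reward plus discounted best future value.
        Q[(s, a)] += 0.5 * (reward + 0.9 * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N)]
```

After enough episodes the greedy policy points toward the goal from every cell: the agent has learned the best way to reach its objective from repeated failures, which is the essence of the approach described above.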

DeepMind has popularized reinforcement learning in recent years through its AlphaGo program, which has beaten the world's top players in the ancient Chinese board game Go. Google has also incorporated some of the reinforcement learning techniques used by DeepMind in its data centers to discover the best calibrations that result in lower power consumption.

"DeepMind has taken this reinforcement learning approach right from the very beginning, and the University of Alberta is the world's academic leader in reinforcement learning, so it's very natural that we should work together," Sutton said in a statement. "And as a bonus, we get to do it without moving."

DeepMind has also been investigated by the United Kingdom's Information Commissioner's Office for failing to comply with the United Kingdom's Data Protection Act as it expands to using its technology in the healthcare space.

ICO information commissioner Elizabeth Denham said in a statement on Monday that the office discovered a "number of shortcomings" in the way DeepMind handled patient data as part of a clinical trial to use its technology to alert, detect, and diagnose kidney injuries. The ICO claims that DeepMind failed to explain to participants how it was using their medical data for the project.

DeepMind said Monday that it "underestimated the complexity" of the United Kingdom's National Health Service "and of the rules around patient data, as well as the potential fears about a well-known tech company working in health." DeepMind said it would now be more open to the public, patients, and regulators with how it uses patient data.

"We were almost exclusively focused on building tools that nurses and doctors wanted, and thought of our work as technology for clinicians rather than something that needed to be accountable to and shaped by patients, the public and the NHS as a whole," DeepMind said in a statement. "We got that wrong, and we need to do better."

Original post:

Google's DeepMind Turns to Canada for Artificial Intelligence Boost - Fortune
