20/20 gains virtual reality patent – Research Live

US qualitative research and technology firm 20/20 Research has been granted a US patent relating to virtual environments for behavioural research.

Patent No. 10,354,261 covers the ability to instantly manipulate a virtual environment based upon a user's vision patterns and demographics.

The patent also includes asking questions based upon what a participant sees and measuring and scoring vision patterns, as well as delivering virtual environments to common user devices.
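
The patent's full claims are not reproduced in this article, but the behaviour it describes, reacting to where a participant looks and queuing questions about it, follows a familiar gaze-dwell pattern. Below is a minimal Python sketch of that idea; the class, threshold, and object names are hypothetical and not taken from the patent.

    from dataclasses import dataclass, field

    @dataclass
    class GazeSession:
        """Hypothetical gaze-dwell tracker; illustrates the idea only."""
        dwell_ms: dict = field(default_factory=dict)   # object -> total gaze time
        questions: list = field(default_factory=list)  # objects to ask about later

        def on_gaze(self, obj: str, ms: int, threshold_ms: int = 1500):
            # Accumulate how long the participant has looked at each object.
            self.dwell_ms[obj] = self.dwell_ms.get(obj, 0) + ms
            if self.dwell_ms[obj] >= threshold_ms and obj not in self.questions:
                self.questions.append(obj)    # queue a question about this object
                return f"highlight {obj}"     # manipulate the scene in response
            return None

    session = GazeSession()
    for obj, ms in [("shelf_A", 800), ("shelf_A", 900), ("poster", 300)]:
        print(session.on_gaze(obj, ms))  # None, "highlight shelf_A", None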

Isaac Rogers, CEO of 20/20 Research, said: "With the technology covered in this patent, you can place someone in a customised digital space, see through their eyes and use their vision to manipulate that space. You not only get immediate feedback as the space changes around them, but their gaze determines what they see.

"Behavioural and qualitative data seem to be inching closer together. We took this step to create a pathway where virtual reality and augmented reality can mix with traditional methodologies."


PokerStars Introduces Virtual Reality Poker Tour – LegalUSPokerSites.com

Thursday, September 19th, 2019: PokerStars Removes Some Short Decks and Adds VRPT

The PokerStars news team has been busy.

In this latest update of PokerStars changes, there are additions and subtractions from the online poker offerings, mostly additions.

The subtraction is relegated to 6+ Hold'em Spin & Gos. There will be no more of them.

But the additions include everything from a Virtual Reality Poker Tour as a free-to-play series to the new stacks in big blinds feature.

It was less than a month ago when PokerStars rolled out its All-In Cash Out function and then implemented a four-table limit on all cash games. Those were controversial in their own right, especially the cutback on multi-tabling, but the latest changes in this update are not likely to ruffle as many feathers.

This year, the poker variant that is all the rage is Short Deck Hold'em, also known as Six Plus or 6+ Hold'em. The game, played with a shorter deck and different hand rankings, gives players a challenge and mixes up the hold'em options.

PokerStars introduced 6+ Hold'em to its live tournaments in Monte Carlo earlier this year as a part of the European Poker Tour. Online options were then included in the Spring Championship of Online Poker (SCOOP) series.

And in late July, players were offered 6+ Hold'em Spin & Go tournaments. The fast-paced SNGs were offered in nearly all PokerStars markets with buy-ins of $1, $3, $15, $30, and $100.

Less than two months later, however, those Spin & Go options are gone. Pokerfuse reported this week that the 6+ Hold'em Spin & Gos are no longer in the lineup. They were gradually removed, first at the higher buy-in levels on September 2 and then the rest of them this week.

Short Deck players can still get their fix in regular cash games and tournaments, though.

At the beginning of September, PokerStars introduced a new option for all players.

Instead of showing stack sizes in traditional amounts, players can choose to see their stacks as big blinds. In fact, they can see all player stacks, even those of their opponents, in the format of how many big blinds they represent.

Players can now change their settings to set a preference, but they can also toggle back and forth if they choose.

According to Director of Poker Product Chris Straghalis, players have been requesting the option. "The information in knowing your stack compared with what the current blind levels are is of such critical importance that forcing people to make a mental calculation seems somewhat redundant when we can make the calculation for them, dynamically and on the fly," he said.

Betting can also be done in big blinds. Straghalis explained, "So, if the blinds are 500/1,000, instead of opening to 2,100, you would open to 2.1 big blinds."

The hope is that players will be able to look at the game more analytically. Many players decide on their bets in relation to the pot size and stack sizes, and that is often easier to do in terms of big blinds.
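
The conversion Straghalis describes is a single division. A minimal Python sketch (a hypothetical helper, not PokerStars client code):

    def chips_to_big_blinds(chips: int, big_blind: int) -> float:
        """Express a chip amount as a multiple of the current big blind."""
        return round(chips / big_blind, 1)

    # The example from the quote: blinds 500/1,000, an opening raise of 2,100 chips.
    print(chips_to_big_blinds(2_100, 1_000))   # 2.1 big blinds
    print(chips_to_big_blinds(45_000, 1_000))  # a 45,000-chip stack displays as 45.0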

The option is now available in all aspects of play in cash games and tournaments.

PokerStars has been playing around with virtual reality poker for some time.

The first preview of PokerStars VR came in September 2018. By November, players could download it and play poker in a virtual reality setting.

PokerStars VR has been developed in partnership with Lucky VR. The game could be downloaded through Oculus, Viveport, or Steam. Players needed a proper headset, but they could then immerse themselves in a very realistic poker game, one with themes, interactive props, and even voice commands.

The system has consistently been improved and upgraded since its launch, as its developers work out bugs and add new features.

Now, the PokerStars VR team is preparing for the first PokerStars Virtual Reality Poker Tour (VRPT). The free-to-play series will take place in virtual spaces, one set of tournaments in September, another in October, and yet another in November.

The first series will be held at the Galaxy Space Station, one of the many themes offered in the VR arena. It is called VRPT Space, and it will offer these events:

September 27: 1M-chip buy-in Mini Main Event with 60M chips GTD

September 28: 5M-chip buy-in Main Event with 300M chips GTD

September 29: 25M-chip buy-in Ultra High Roller with 200M chips GTD

The next iteration of the series will be VRPT Macau. The events will be the same, with the same buy-ins and presumably the same guarantees (though those are subject to change). VRPT Macau will play out October 25-27. And the November series will do the same but play out November 29 through December 1. That location has yet to be announced.

Everyone who enters an event will receive a gift bag, which includes virtual headphones, chips, and other gear. A VRPT virtual bracelet will be awarded to the winner of each event, and it can be worn on that person's avatar at the virtual tables.

Members of Team PokerStars will be playing in some of the tournaments.

Players can register for the tournaments on the Discord website. See the PokerStars VR page for details.

In even more news from the VR community, PokerStars is looking to name the first Team PokerStars VR Ambassador. This person will be a member of the virtual reality community.

At this point, it is unclear how the ambassador and new team member will be chosen. The criteria will be announced at a later date, but it doesn't seem to be connected to the results of the upcoming VRPT. The first series begins in just one week.


Virtual reality program coming to state veterans homes – Cleveland Daily Banner

By LARRY C. BOWERS

It will be approximately two years before area veterans move into Bradley County's new veterans home, and when that happens they will have the opportunity to participate in a new virtual reality program.

Tennessee State Veterans Homes (TSVH) has announced the launch of a virtual reality pilot program for skilled nursing and long-term care residents.

TSVH will partner with MyndVR, a leader in virtual reality solutions for seniors, to first bring the innovative technology to the Murfreesboro veterans home location. The service will then be provided to three other existing veterans homes in Humboldt, Knoxville and Clarksville.

This service will later be available for Bradley County residents, once construction on the new, $47 million facility is completed on Westland Drive in South Cleveland. After the recent groundbreaking, construction is expected to take 22 to 24 months.

Virtual reality will also be offered at the state's sixth veterans home, probably destined for Arlington in Shelby County on Tennessee's western end.

Arlington is in the same fundraising situation Bradley County was experiencing a few years ago. The community has secured local funding, but is waiting for approval of federal VA funds.

"Tennessee State Veterans Homes is excited to partner with MyndVR to bring groundbreaking technology and virtual reality therapy to our residents," said Ed Harries, TSVH executive director. "Research has shown that utilizing immersive virtual reality can be an effective therapeutic modality for geriatric residents."

At all TSVH facilities, residents receive skilled nursing care and a variety of therapy programs to help enhance their quality of life. They also have a robust calendar of daily activities in which those who are able can participate. Introducing virtual reality therapy to the homes will allow for more residents to participate and benefit from the activities, even those who are bedridden, since the MyndVR headsets have hundreds of different video experiences the residents can choose from.

"The Activities Department will utilize MyndVR and monitor closely as the residents watch videos designed specifically for seniors," said Tracy Marsteller, TSVH activities director. "Most TSVH residents are wheelchair-bound or have limited mobility, and virtual reality will provide an immersive environment that empowers the residents to choose what they want to see and where they want to go."

"We are proud to provide exceptional care to our residents and we are constantly looking at new ways to innovate and incorporate technology into our care plans," said Tyler Masden, TSVH Murfreesboro administrator. "By introducing MyndVR, we are giving seniors of all levels of cognitive ability the chance to experience things that due to health reasons would otherwise not be possible."

The comfortable MyndVR Vive Focus headsets, which residents will use to view the videos, offer a wide variety of engaging content, such as strolling the streets of Rome, sitting front row at a Broadway musical or taking a thrilling skydiving trip. The headsets also come with a tablet so staff can easily navigate the experiences for the residents. MyndVR continues to research the cognitive health benefits of virtual reality and develop new video content with seniors in mind.

"We strive to improve the quality of life and lift the spirits of veterans across the country," said Chris Brickler, CEO of MyndVR. "Our technology is more engaging than traditional forms of entertainment. It far surpasses TV and newspapers. With our immersive multi-sensory technology, veterans are able to experience true joy and escape from their four walls of existence. We're finding that VR can be used for healing in many ways and can provide immense emotional support and joy to our veterans."

The MyndVR program will be implemented first with the residents staying at the Murfreesboro Tennessee State Veterans Home, with the goal to launch to the other three existing TSVH facilities soon. Select TSVH staff members have gone through a training program with the MyndVR team to ensure they are familiar with the technology and proper safety and hygiene protocols are in place.

TSVH offers rehabilitative and therapy services, long-term, short-term and skilled nursing care for honorably discharged veterans, veteran spouses and Gold Star parents.

TSVH Executive Offices are located in Murfreesboro and all facilities are governed by the Tennessee State Veterans Homes Board.


Augmented Reality vs. Virtual Reality – Science Times

Augmented reality is the new virtual reality, and even better. It offers an interaction between the real world and what the cyber world can offer.

A couple of years back, virtual reality was the trend among people of all ages. Some watched videos while others played games using a virtual reality box. Virtual reality is a technological advancement that creates a simulated environment.

In virtual reality, VR boxes or headsets take over your sense of sight, which seemingly brings you to another world. Light from the headset's LCD panels is refracted by lenses that completely block your vision of the outside world and focus your eyes on what is displayed on the screen.

Virtual reality takes you to another world or dimension. It recreates an actual place that has been photographed or an artificial place that has been animated. Virtual reality makes you experience another world even if you are just in the four corners of your room. You can enjoy the beauty of another place even without stepping outside of your home.

Another world within your grasp. Isn't that amazing?

Another way that technology can change the look of the world is through augmented reality. People may think that virtual and augmented reality are the same, but they are two different things. These video-enhancing technological innovations aim to provide the best cyber experience for everyone.

But what is augmented reality?

How is it different from virtual reality?

Augmented reality is the enhancement of your actual surroundings with added digital images. While virtual reality takes you to another world, augmented reality keeps you where you are but with added enhancements.

If you are familiar with Pokemon Go, this game pretty much explains augmented reality. When you download the Pokemon Go game, it sends you hunting for Pokemon. Pokemon are pocket monsters that Pokemon Masters hunt and collect to help them in battle.

Here is where augmented reality comes in. Using your smartphone and your phone's camera, the Pokemon Go app will direct you to where you can go and catch a Pokemon. On the LCD of your phone, you will see the same surroundings as where you exactly are, but with an added Pokemon animation, the one you are about to catch. The combination of reality and animation is known as augmented reality.

Now, that's even more amazing!

You can experience the world of animation which you used to see on TV while traversing the real world.

Indeed, the ever-changing world of technology takes us to places with a simple click, and it all happens within the walls of your home or the palm of your hand.

Experience the power of computer technology and change the way you look at the world!

Become a part of the animated world with augmented reality!


Scientists Study Whether Virtual Reality Can Prevent Memory Loss – Voice of America

Three days a week, Wayne Garcia has been taking part in a nontraditional exercise.

He starts by putting virtual reality (VR) equipment on his head. He then gets on a specially designed exercise bicycle and starts pushing its pedals. Faster and faster he goes.

Garcia is taking part in a study at the University of Southern California's Keck School of Medicine. Researchers want to see if just a small amount of VR can help prevent memory loss as people age.

Garcia says he remembers how his parents and grandparents all suffered from dementia.

"It's very scary that one day that could be me."

Garcia recalls his father once reading a newspaper upside down and almost setting the house on fire by putting a towel on a heater.

"Just the sadness. You remember what your dad was like, what your mom was like when they were all good, and then the decline now. And now you're taking care of them rather than when they used to take care of you."

Garcia is taking part in the study to see if using virtual reality at the same time as physical exercise can help prevent dementia in the future.

Judy Pa is part of the team of researchers leading the study. She says the actual definition of dementia is when a person is "no longer able to take care of themselves, things like paying the bills, driving, cooking for themselves, dressing themselves."

She added that the breakdown and death of cells in the nervous system take 10 to 20 years.

Pa said that unlike usual games, VR provides a first-person, three-dimensional experience that is important to memory training.

"Our goal is to prevent dementia (and) to prevent Alzheimer's disease. There are no effective treatments yet. We hope that we will get there eventually, but my perspective and the research that we do in my laboratory at USC is really surrounding prevention."

The VR study exercises the subjects body and brain at the same time, testing the memory and decision-making part of their brain.

Subjects have to pedal on the exercise bike and keep their heart rate up. In the VR experience, they are trying to learn and remember directions while collecting food items, and then feeding the food to some animals.

"Understanding changes in the brain that happen with exercise, changes in the brain that happen when you're in an enriched environment, and putting those two together, that's what our intervention is currently targeting," Pa said.

Even if virtual reality can help, it may not be for everyone. In one study, four out of 40 people withdrew from the research because they reported motion sickness. Pa will be doing more tests over the next year with participants who are 50 to 80 years old to gather additional information.

Garcia is hopeful for what VR might mean for the future.

"There might be a place where you could go, and you can get your daily dose of virtual reality and cardio to keep the mind going."

I'm Jonathan Evans.

Elizabeth Lee reported this story for VOA News. Jonathan Evans adapted it for Learning English. George Grow was the editor.

________________________________________

bills - n. documents that say how much money you owe for something you have bought or used

cardio - n. any type of exercise that causes the heart to beat faster and harder for a period of time

dementia - n. a mental illness that causes someone to be unable to think clearly or to understand what is real and what is not real

decline - n. the process of becoming worse in condition or quality

dress - v. to put clothes on yourself

perspective - n. a way of thinking about and understanding something such as a particular issue or life in general

towel - n. a piece of cloth used for drying things

virtual reality - n. an artificial world of images and sounds created by a computer that is affected by the actions of a person who is experiencing it


Study shows virtual reality could improve surgical outcomes – Medical Device Network

A new study led by UCLA Health in the US has shown that three-dimensional (3D) virtual reality models could improve surgical outcomes by enabling better visualisation of a patient's anatomy.

When tested in preparation for kidney tumour surgeries, the models led to shorter operating times, less blood loss during surgery, and a shorter hospital stay following the procedure.

According to the researchers, previous studies focused on the qualitative performance of 3D models. The latest study was conducted for quantitative evaluation of the technology's ability to improve patient outcomes.

The virtual reality models improve the visualisation of a person's anatomy, allowing surgeons to see the structures' depth and contour.

UCLA David Geffen School of Medicine clinical instructor Dr Joseph Shirk said: "Surgeons have long theorised that using 3D models would result in a better understanding of the patient anatomy, which would improve patient outcomes.

"But actually seeing evidence of this magnitude, generated by very experienced surgeons from leading medical centres, is an entirely different matter. This tells us that using 3D digital models for cancer surgeries is no longer something we should be considering for the future; it's something we should be doing now."

In the latest study, 48 patients were randomised into the control group and 44 into the intervention arm.

For surgery of participants in the control arm, the surgeon prepared for the procedure by reviewing CT or MRI scans.

For patients in the intervention arm, the surgeon reviewed the CT or MRI scan, as well as the 3D virtual reality model. The 3D models were reviewed via the surgeons' mobile phones and a virtual reality headset.

The technology leveraged by the study was provided by the company Ceevra.

Shirk noted: "Visualising the patient's anatomy in a multicolour 3D format, and particularly in virtual reality, gives the surgeon a much better understanding of key structures and their relationships to each other."

The researchers expect that the 3D models can be used for planning surgeries of various types of cancer, including prostate, lung, liver and pancreas.


City of Boston and Fundación MAPFRE Turn to Virtual Reality to Reduce Traffic Fatalities, Injuries – PRNewswire

BOSTON, Sept. 20, 2019 /PRNewswire/ -- In a collaborative effort to reduce traffic-related fatalities and serious crashes and collisions among pedestrians, bicyclists and motorists in Boston, Mayor Martin J. Walsh and the Boston Transportation Department have partnered with Fundación MAPFRE to launch an interactive public action campaign to reinforce critical road safety rules and encourage empathy among those sharing the streets of Boston.

The program, "Look Both Ways, Boston," supports the City of Boston's Vision Zero Boston program, which focuses on proven strategies to eliminate fatal and serious traffic crashes in the city by 2030 and urges those who use the city's streets to look at every situation from the other person's perspective. Fundación MAPFRE, a global nonprofit organization, has long championed initiatives that work to eliminate traffic fatalities around the world.

The initiative, which kicks off at City Hall Plaza on September 19 and 20, features a virtual reality (VR) experience in which users get behind the wheel to navigate three different scenarios that test their safe driving skills. In addition to measuring speed and adherence to traffic rules, the interactive experience uses eye-tracking technology to monitor distracted driving when confronted with increasingly complex driving situations. Following the event, "Look Both Ways, Boston" will bring the VR experience to college campuses and other locations across Massachusetts.

The event on City Hall Plaza also features several interactive exhibits to demonstrate critical safety issues such as truck blind spots to encourage people to trade places and experience how others see the road, whether on a bike, behind the wheel of a large truck, in a wheelchair, or crossing the street.

"Ensuring Boston's streets are safe for all is the number one priority of the Boston Transportation Department," said Mayor Walsh. "Through Boston's Vision Zero plan, including initiatives such as the implementation of the Neighborhood Slow Streets Program, the incorporation of buffered bike lanes, and the adoption of updated traffic sign and signal technology, we will continue to utilize every resource to ensure to safety of Boston's streets. I look forward to this partnership with Fundacin MAPFRE and encourage residents to learn more about this campaign at Boston City Hall."

"This public safety campaign will further the goals of Go Boston 2030, the city's comprehensive transportation plan to ensure safe, reliable and equitable access to our streets for all users," said Chris Osgood, City of Boston Chief of Streets. "An unprecedented public engagement process influenced the 58 projects and policies outlined in the plan and which Boston's Transportation and Public Works Departments are making significant progress to implement. The new partnership with Fundacin MAPFRE is another opportunity for people to get involved and to promote Go Boston 2030 improvements and, in particular, safe streets."

"Road safety is one of the most critical issues we face as a society," said Alfredo Castello, chief representative of Fundacin MAPFRE in the United States. "We are proud to work with Mayor Walsh and Boston's transportation team on this important initiative that supports our shared Vision Zero goal of eliminating traffic fatalities and serious injuries and protecting those who use our roadways."

According to a study conducted by the Johns Hopkins Center for Injury Research and Policy in the Department of Health Policy and Management at the Johns Hopkins Bloomberg School of Public Health and Fundación MAPFRE, the most common cause of unintentional injury death among children 1-14 years of age (49 percent) was transportation-related, including children as passengers in cars, on motorcycles and bicycles, and as pedestrians. While the number of transportation-related deaths has been decreasing over time for all ages, they claimed the lives of 21,571 children ages 0-14 nationwide from 2005-2017.

The kickoff event also will include Boston Children's Hospital's Injury Prevention Program to provide guidance on car seat and bicycle safety; trucks and bicycles to demonstrate blind spots; and AGNES, a suit developed by Massachusetts Institute of Technology, to demonstrate the physical challenges associated with aging.

Please click here for more information on Look Both Ways, Boston.

About Fundación MAPFRE

Fundación MAPFRE is a nonprofit organization created by MAPFRE in 1975 to promote the well-being of society and citizens across the company's footprint. Active in 30 countries, Fundación MAPFRE focuses on five areas: Prevention and Road Safety, including fires, mishaps at home and drownings; Insurance and Social Protection; Culture; Social Action; and Health Promotion. In 2018, Fundación MAPFRE performed nearly 300 activities around the world, benefiting 25.5 million people.

Media Contacts

Judy Senechal, Fundación MAPFRE, 508-599-0898, jsenechal@mapfreusa.com

Mayor's Press Office: 617-635-4461

SOURCE Fundación MAPFRE


Dementia simulators: How virtual reality helps caregivers understand challenges – Considerable

Virtual reality is huge in the gaming world, and it's even making a splash in advertising. But there's one field people might not automatically associate with VR: medicine.

VR has burst onto the medical scene and is assisting with everything from PTSD to simulating difficult surgeries to helping ease pain and anxiety. And now it's making waves in dementia care.

Currently, there are studies being done to find out how VR can help dementia patients recall memories, and plenty of the results have proved successful. When it comes to the caregivers in charge of assisting dementia patients, VR offers even further help.

So how, exactly, is this technology helping dementia caregivers? Essentially, by helping them experience the limitations dementia symptoms impose.

It's nearly impossible for people without dementia to completely understand the frustrations and trials that come with memory loss. Sometimes even the most capable and trained healthcare professionals will unwittingly become impatient with people with dementia, but VR is here to help combat that.

The Virtual Dementia Tour was created by Second Wind Dreams, a nonprofit dedicated to changing the perception of aging through educational programs and by offering dream fulfillment.

The program draws on research conducted by P.K. Beville, M.S., a specialist in geriatrics and the founder of Second Wind Dreams. The tour draws on patented sensory tools to lead users through an altered experience of everyday life. People going on the Virtual Tour are given limiting devices like gloves, glasses, and headphones that simulate some of the physical difficulties that dementia presents, like loss of peripheral vision and reduced motor skills. They're then asked to complete tasks like folding towels and finding and putting on certain items of clothing: everyday necessities that are easy to take for granted, but often extremely difficult for those with dementia to accomplish.

"Virtual reality technology has the potential to give caregivers, medical students and others greater insight into what it's like to have mild cognitive impairment (MCI), age-related vision and hearing loss, or to progress through the continuum of Alzheimer's disease," Keith Fargo, PhD, director of scientific programs and outreach at the Alzheimer's Association, told Considerable.

"By experiencing aspects of someone else's journey, individuals may gain a better understanding of, and empathy for, older adults and their struggles with dementia. While more research is needed, technology like this may be useful in expanding awareness about what it is like to have Alzheimer's disease and other dementias."

The idea is that by attempting to accomplish tasks with the challenges that dementia patients face regularly, their caregivers will have a greater sense of empathy and understanding about what their patients regularly go through. That way, caregivers will be able to better communicate with their patients about their needs, pains, and struggles.

"For me it has changed my attitude," an anonymous caregiver said in a testimonial on the Second Wind Dreams website. "I now recognize that there are reasonable explanations for behaviors. The person needs to be understood in the context of their life history, what is important/unique to them.

"People need clear, simple instructions, breaking down tasks, and they need positive reinforcement and understanding. Above all, people with dementia need to feel valued, and their achievements, even if small, need to be recognized."

The Virtual Dementia Tour's destination: a deeper level of understanding and shared experience between patient and caretaker.


Virtual reality used to highlight uranium contamination on Navajo Nation – The Durango Herald

GALLUP, N.M. Activists are using virtual reality technology to focus on areas of the Navajo Nation affected by uranium contamination.

The arts collective Bombshelltoe has collected 360-degree footage of land near Church Rock, New Mexico, to show how people and the land have changed since a 1979 uranium mill spill, the Gallup Independent reports.

The film, titled "Ways of Knowing," was directed by artist Kayla Briët.

The project started four years ago after Washington, D.C.-based nuclear policy program manager Lovely Umayam met Navajo activist Sunny Dooley at an event in Santa Fe. Filmmaker Adriel Luis is also a co-producer of the movie.

"Sunny asked us during this meeting, 'Where is your heart?' And it caught my, along with everyone else's, attention," Umayam said.

In 1979, a dam on the Navajo Nation near Church Rock broke at a uranium mill's evaporation pond, releasing 94 million gallons of radioactive waste into the Puerco River.

It was the largest accidental release of radioactive material in United States history and three times the radiation released at the Three Mile Island accident.

The radiation contaminated not only water but the food chain. Cattle in western New Mexico later showed higher levels of radiation.

Dooley, who lives in Chi Chil Tah, New Mexico, said she has felt the direct effects of the big spill that went down the Puerco River and contaminated the water and soil in her community.

During a recent presentation of the virtual reality footage, Dooley talked about her daily life of not being able to have running water in her home because it is contaminated.

"I have to come to Gallup to get my water and take it back home," she said.

Umayam said the group wanted to use the new technology of virtual reality with the stories to bring a true experience and show the impact of uranium mining.

She said the project is close to being finished, but with every presentation they get more information and make tweaks to the system.


Qualcomm Introduces XR Enterprise Program to Fast-Track AR Adoption – Virtual Reality Times

Qualcomm has this week announced its XR Enterprise Program during the just-concluded Enterprise Wearable Technology Summit (EWTS) held in Dallas, TX. The purpose of the program is to bring together XR products and technologies based on the Qualcomm Snapdragon XR platform with enterprise product providers, allowing for collaboration, innovation and synergy, and helping fast-track the adoption of augmented reality and virtual reality across various industries, including aerospace, engineering, education, retail, architecture, insurance, transportation and manufacturing.

The Qualcomm XR Enterprise Program will also speed up the adoption of Qualcomm's Boundless XR platform for extended reality applications. Boundless XR was launched in early 2019 and splits the processing functions between a remote PC and the chip on the AR/VR headset.

The semiconductor equipment company wants to advance augmented reality into areas where it sees the greatest promise for a broad adoption. Qualcomm hopes its mobile expertise with the Snapdragon processors will also help usher in new possibilities for headset applications.

The company is deeply invested in the smartphone market, where its processors power billions of devices, and it's already looking forward to the future post-smartphone world that will include personal form factors such as augmented reality. The company has been strongly pushing augmented reality applications.

Qualcomm is launching its XR Enterprise Program to support business applications. It is pitching this program to solution providers on the premise that optimizing XR applications for the Qualcomm processors will help with customer upgrades and migrations as the volatile device vendor landscape and the form factors continue to evolve rapidly.

The vendor landscape is seeing a lot of dynamism, with many of the top tech companies and new players slated to release new hardware in the next few years. Microsoft, too, seems inclined to continue developing its own hardware. Its decision to launch the Snapdragon-powered HoloLens was to some extent influenced by supporting early partners, clear evidence that these collaborations create synergies that can drive the industry forward.

There is, evidently, a strong trend towards the mixed reality market. Most of the advances in the technology have occurred in the enterprise market. This is especially true in the augmented reality market, which still has a nascent ecosystem with a limited range of games and apps. The AR landscape is a far cry from the virtual reality market, which is already supported by a vast ecosystem to drive consumer adoption and growth.

The Qualcomm XR Enterprise Program will offer inaugural members a global community, opening up access to promotional opportunities, technical support resources, joint planning, business development and co-marketing, as well as matchmaking with other members, with the aim of helping the enterprise XR segment boost operational efficiency, safety and worker satisfaction while also delivering a positive impact on the bottom line.

Its 15 or so founding members are active in a wide range of verticals where mixed reality is seeing the most currency, such as transportation, construction, healthcare, retail and manufacturing.

Participants in the program can have access to a higher level of technical support, business development and co-marketing resources. The matchmaking opportunities allow for synergies to be created within the group while also unlocking new opportunities for seeding and pilot enterprise projects.

The enterprise VR market is growing by leaps and bounds. According to Business Insider Intelligence, global enterprise VR sales for both hardware and software are expected to hit $5.5 billion by 2023, a 587% rise from 2018.

The slow enterprise adoption so far has mainly been due to the small supporting infrastructure for both virtual reality and augmented reality technology. The Qualcomm enterprise membership program is meant to address that in a small way.

Currently, the pace of adoption will be affected by high hardware costs, the limited range of developed applications, and the network constraints resulting from latency requirements for both virtual reality and augmented reality.

The companies that have signed up for the enterprise program include XRHealth, Upskill, Pico Interactive, Accenture, Scope AR, ZeroLight, Ubimax, Nreal and STRIVR, among others.


Brain May Not Need Body Movements to Learn Virtual Spaces – UANews

Virtual reality is becoming increasingly present in our everyday lives, from online tours of homes for sale to high-tech headsets that immerse gamers in hyper-realistic digital worlds. While its entertainment value is well-established, virtual reality also has vast potential for practical uses that are just beginning to be explored.

Arne Ekstrom, director of the Human Spatial Cognition Lab in the University of Arizona Department of Psychology, uses virtual reality to study spatial navigation and memory. Among the lab's interests are the technology's potential for socially beneficial uses, such as training first responders, medical professionals and those who must navigate hazardous environments. For those types of applications to be most effective, though, we need to better understand how people learn in virtual environments.

In a new study published in the journal Neuron, Ekstrom and co-author Derek Huffman, a post-doctoral researcher in the Center for Neuroscience at the University of California, Davis, advance that understanding by looking at whether or not being able to physically move through virtual spaces improves how we learn them.

"One of the big concerns or drawbacks with virtual reality is that it fails to capture the experience that we actually have when we navigate in the real world," said Ekstrom, an associate professor of psychology and the study's senior author. "That's what we were trying to address in this study: What information is sufficient for forming spatial representations that are useful in actually knowing where things are?"

The researchers had study participants explore three virtual cities while wearing virtual reality headsets. The participants navigated each city in one of three ways.

Participants spent two to three hours, on average, exploring the virtual cities and locating certain shops they were instructed to find. Once they'd had an opportunity to learn the environments well, they were asked a series of questions to test their spatial memory. For example, they might be asked to imagine they were standing at the coffee shop, facing the bookstore. They would then be asked to point in the direction of the grocery store.
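
Scoring such a pointing question reduces to vector geometry: compare the bearing from the imagined standing point to the target with the direction the participant actually points. A minimal Python sketch with hypothetical 2D shop coordinates (not the study's actual code):

    import math

    def pointing_angle(standing, facing, target):
        """Signed angle (degrees) from the imagined facing direction to the
        target; 0 means straight ahead, negative means to the right."""
        fx, fy = facing[0] - standing[0], facing[1] - standing[1]
        tx, ty = target[0] - standing[0], target[1] - standing[1]
        ang = math.degrees(math.atan2(ty, tx) - math.atan2(fy, fx))
        return (ang + 180) % 360 - 180  # normalize to [-180, 180)

    coffee, bookstore, grocery = (0, 0), (0, 10), (8, 4)   # made-up coordinates
    correct = pointing_angle(coffee, bookstore, grocery)
    response = 55.0                                        # participant's answer
    error = abs((response - correct + 180) % 360 - 180)    # angular error
    print(f"correct bearing {correct:.1f} deg, error {error:.1f} deg")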

The accuracy of participants' responses did not vary based on which condition they were in.

Participants then underwent an MRI scan while answering a similar set of questions. This allowed the researchers to see what was happening in the brain as participants retrieved spatial memories.

The researchers found that the same areas of the brain were activated for participants in all three situations. In addition, the patterns of interaction between different regions of the brain were similar among the three conditions.

"What we found was that the neural codes were identical between the different conditions," Ekstrom said. "This suggests, as far as the brain is concerned, and what we were also able to measure with behavior, that there is sufficient information with just seeing things in a virtual environment. The information you get from moving your body, once you know the environment well enough, doesn't really add that much."

The findings address a long-standing scientific debate around whether or not body movements aid in learning physical spaces.

"There's been this idea that how you learn might make a huge difference, and that if you don't have body-based cues, then you're lacking a big part of what might be important for forming memories of space," said Huffman, the study's first author. "Our research would suggest that once you have a well-formed memory of an environment, it doesn't matter as much how you learned it."

"We would say you don't need body immersion, and you don't need body cues to form complex spatial representations," Ekstrom added. "That can happen with sufficient exposure in simple virtual reality applications."

From a practical standpoint, the research suggests that even basic virtual reality systems may be useful in instructional applications.

"Virtual reality has the potential to allow us to understand situations that we might not otherwise be able to directly experience," Ekstrom said. "For example, what if we could train first responders to be able to find people after an attack on a building, without them actually ever having been to that building?

"Our findings suggest there's promise for using virtual reality, even simple applications where you're just moving a joystick, to teach people fairly complex knowledge about spatial environments."


Virtual Reality Has A Lot To Learn, Says Sony Worldwide Studios President – GamingBolt

"I think that the hardware experience will improve the VR experience," says Shuhei Yoshida.

To say that Sony are among the biggest proponents of virtual reality in the games industry would not be an exaggeration. They've made impressive strides with PSVR over the last couple of years, and it's clear that this is an area that they're going to continue investing in. But even Sony understands one thing: that VR is still in its infancy, and that those working in this space have a lot to learn.

Recently, speaking in an interview with GameWatch (via Wccftech), SIE Worldwide Studios president Shuhei Yoshida got to speaking about VR, and said that though this is an impressive piece of technology that has shown great promise, it's still got lots to learn. According to Yoshida, as the base hardware of consoles improves, so too will the VR experience that goes alongside it.

"I think that the hardware experience will improve the VR experience," said Yoshida. "VR has a lot to learn, even at companies that have been making games for a long time. I realized that as soon as I started VR. I had to learn a lot because I couldn't do it with normal TV games. We had to have many guidelines for danger, but with the developers' ingenuity, we were able to see how to do it. VR makes us think about what the human abilities are, and after three years such knowledge is growing."

It's pretty clear at this point that Sony isn't done with VR just yet, and will be focusing on it with the PS5 as well. They've confirmed that the PS5 will indeed support PSVR, though that doesn't mean we should expect to see a PSVR 2 any time soon.


Straight up conversation: Microsoft chief talks augmented reality in schools – AEI – American Enterprise Institute

Dan Ayoub is the general manager of mixed reality, artificial intelligence, and STEM education for Microsoft. Before that, Dan worked for 20 years in the games industry, most notably as the development lead for the iconic title Halo. I recently talked with Dan about Microsoft's work to bring augmented and virtual reality education to the classroom, and here's what he said.

Rick Hess: Dan, you're general manager of Microsoft's education team. Can you say a bit about what that actually involves?

Dan Ayoub: Thanks, Rick! The high-level goal for the team is to empower every learner on the planet to achieve more. Which is a pretty big task! So what that means concretely is we make products and curriculum for educators and learners of all ages, we partner with classrooms to implement technology, and work with researchers on where the puck is going. In addition to what you generally think of when you think of Microsoft, we have tools for collaboration, tools to help students learn to read, gaming like Minecraft, and so on. We have a central education group, and of course a number of people are working on education across the company. It also involves fostering lifelong learning and future skills like cloud, AI, and data science.

Rick: You came to this work from outside of education, after leading the famed Halo game-development team for eight years and after nearly two decades in gaming. What led you to make the jump?

Dan: It's kind of crazy to make a jump like this after 18 years of making games. I came to Microsoft to continue working on games about a decade ago; about seven or eight years ago, I got really interested in education from an intellectual standpoint, probably related to having kids. At the same time, working with all of this future tech and working at a big tech company, I started to see where things are headed, and it became really clear that the current way of thinking wasn't preparing kids for the future. So there I am working on Halo at Microsoft but fascinated with this problem and having no idea how to get involved. Then I was offered a role in the mixed-reality team to look at how we could use the technology for helping students, and it just seemed like the perfect intersection of my development background and my interests.

Rick: It often seems like game designers have figured out some things about engaging youths that have yet to show up in educational software. Is that fair?

Dan: There's a great quote by Marshall McLuhan: "Anyone who tries to make a distinction between education and entertainment doesn't know the first thing about either." I think it's hard to compare when the context is so different, but I think there's a lot of what games do well that makes sense in the classroom. Like making the student the center of the experience, gradually giving skills, and building on them. I think games are also great at teaching grit, resilience, and the understanding that failure is a part of success. Games are also increasingly social in nature, which is really interesting to think about in educational scenarios.

Rick: What's the one big lesson you've brought over from your time at Halo, and how has that affected your work at Microsoft?

Dan: Designing for the user. In this case, the final user is the student, but you need to think about the actual teacher using the tech as the primary user, because if they aren't comfortable using the technology, it isn't making it into the classroom, or you need a ton of professional development to make it happen. I'd also add that in games we are constantly listening to our customers on how to make their experience better, and this is something I have definitely brought with me. Finally, it's all about engagement, and that is really key. Working on Halo gives me incredible cred with students when I go into classrooms.

Rick: Ha, I can imagine! So what have you found to be the biggest bumps, headaches, or disconnects when it comes to designing useful educational software, and helping educators use it effectively?

Dan: I think two things come to mind. First is the notion of technology as a silver bullet; at the end of the day, it's all about the teacher, and if you bring technology into the classroom and use it the same way you used a paper and pencil, and don't adapt, then you aren't going to reap the benefits. At the end of the day, great technology will allow a great teacher to do more and help their students to succeed, but that involves changing how they work in the classroom. I think the second is making sure the software is user-friendly to the teachers and helping them to use it effectively through training, support, and so on.

Rick: Can you talk about one or two of the really eye-opening, head-shaking developmental things you all are working on that might truly one day be transformative, but perhaps not for a decade or two?

Dan: I think two of the most transformative, jaw-dropping things coming down the road are augmented reality and artificial intelligence. Both are in very early stages, but there is massive potential for them both. I think both will completely change education forever once they reach scale and the tech is ready.

Rick: OK. I've heard you talk about the distinction between augmented and virtual reality before. Can you explain the difference for a general audience?

Dan: Understanding the difference can be tricky, for sure. In a nutshell, virtual reality is entirely immersive, so you put on a headset and you are transported to a different world and have no awareness of what's going on around you. The immersion limits your ability to work collaboratively with people near you, though you can co-habit a virtual environment, but that immersion can be beneficial for people who may have challenges focusing and is great for singular experiences. It's also been shown to be great for empathy-building. Augmented reality, like a HoloLens, works by creating holograms over your field of vision, so you can still see everything around you; this is great for classes in the same room together.

Rick: So what do we know about how well augmented reality can work?

Dan: We spent a lot of time researching the effectiveness of the technology, and there were a bunch of studies pointing to the potential, but I was really eager to see the practical results. Here's what we know: Some partners are seeing a full letter-grade improvement when using the technology. Others are seeing up to a 60 percent reduction in the time it takes to teach their content. All of this is due to the lower cognitive load required to learn while using the hardware. Outside of the classroom, this tech is being used today in corporate and vocational training and workflows in industries like automotive and design, to name a few.

Rick: How about virtual reality?

Dan: Similar to augmented reality, we are seeing great results in the classroom. VR is also being used in corporate training; Walmart is using the tech to help train their employees, and every day I see new cases of the tech at work. It's really quite exciting because it's all still so new, and people are crafting some amazing things in the workplace and the classroom; I am seeing a bunch of really interesting use cases in vocational education as well. We recently made over 30 hours of standards-aligned content free for educators, and it's been great to see the response.

Rick: What are some of the ways that K-12 schooling might ultimately benefit from virtual or augmented reality?

Dan: As time goes on, I have two scenarios I am extremely excited about. First is the potential for distance learning, as you can have students collaborate with other students all over the planet in virtual environments; you can also learn from literally the best people on the planet regardless of where they are and be in the same room with them. The second is that as we weave AI into the experience, we can start to get to the idyllic personalized learning, or 1:1 learning, scenario for every student. Another area I am extremely excited about is differentiated learning: how do we use this technology to diagnose things like dyslexia earlier through eye tracking, or to assist autistic children?

Rick: On a somewhat different note, as someone who comes to ed tech from outside, can you offer some tips as to the pitfalls those making ed tech need to be focused on?

Dan: I have been really vocal that companies focused on the wrong thing in the early days by creating these showcase experiences that focused more on showy visuals than actual curriculum. Our job is to help teachers do their job, so we made a decision to focus on standards-aligned content that would help teachers do what they need to do, and the response to this has been great. Like any educational technology, it's a tool that can help students immensely, but it requires thinking about how you'll approach it. At the end of the day, it's still all about pedagogy.

Rick: In education, we have a long history of getting jazzed about the possibilities of new tech, only to be disappointed, time and again. What's your advice for schools or systems that want to avoid the usual rash of mistakes?

Dan: I think first and foremost, if you just adopt technology and continue to teach like we did during the Industrial Revolution, then ed tech isn't going to fix all your problems. I like to say that you need to be diligent, learn about the tech and how to maximize it, and adapt it to your needs, but also change how you teach. Also, please ask us; we love to talk to educators, and we prefer to talk about the problems they are trying to solve rather than just pushing technology. Let us know what you're trying to accomplish and help us to make our products better for you.

This interview has been condensed and edited for clarity.


What is Virtual Reality? VR Definition and Examples | Marxent

See some real examples of Virtual Reality shopping apps; or for a look ahead, check out the 5 top Virtual Reality and Augmented Reality technology trends for 2019.

Virtual Reality (VR) is the use of computer technology to create a simulated environment. Unlike traditional user interfaces, VR places the user inside an experience. Instead of viewing a screen in front of them, users are immersed and able to interact with 3D worlds. By simulating as many senses as possible, such as vision, hearing, touch, even smell, the computer is transformed into a gatekeeper to this artificial world. The only limits to near-real VR experiences are the availability of content and cheap computing power.

Virtual Reality and Augmented Reality are two sides of the same coin. You could think of Augmented Reality as VR with one foot in the real world: Augmented Reality simulates artificial objects in the real environment; Virtual Reality creates an artificial environment to inhabit.

In Augmented Reality, the computer uses sensors and algorithms to determine the position and orientation of a camera. AR technology then renders the 3D graphics as they would appear from the viewpoint of the camera, superimposing the computer-generated images over a user's view of the real world.

In Virtual Reality, the computer uses similar sensors and math. However, rather than locating a real camera within a physical environment, the position of the user's eyes is located within the simulated environment. If the user's head turns, the graphics react accordingly. Rather than compositing virtual objects and a real scene, VR technology creates a convincing, interactive world for the user.
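
Both descriptions reduce to the same step: apply a camera (or eye) pose to 3D points, then project them onto the display. A minimal pinhole-camera sketch in Python; the pose and intrinsic values are hypothetical, not any particular AR or VR engine's API:

    import numpy as np

    def project(point_world, R, t, fx=800.0, fy=800.0, cx=640.0, cy=360.0):
        """Map a 3D world point to pixel coordinates for a camera with
        rotation R (3x3), translation t (3,), and intrinsics fx, fy, cx, cy."""
        x, y, z = R @ point_world + t    # world -> camera coordinates
        if z <= 0:
            return None                  # behind the camera: not visible
        return (fx * x / z + cx, fy * y / z + cy)

    # Identity pose: camera at the origin looking down the +z axis.
    R, t = np.eye(3), np.zeros(3)
    print(project(np.array([0.1, -0.05, 2.0]), R, t))  # (680.0, 340.0)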

Virtual Reality's most immediately recognizable component is the head-mounted display (HMD). Human beings are visual creatures, and display technology is often the single biggest difference between immersive Virtual Reality systems and traditional user interfaces. For instance, CAVE automatic virtual environments actively display virtual content onto room-sized screens. While they are fun for people in universities and big labs, consumer and industrial wearables are the wild west.

With a multiplicity of emerging hardware and software options, the future of wearables is unfolding but yet unknown. Concepts such as the HTC Vive Pro Eye, Oculus Quest and PlayStation VR are leading the way, but there are also players like Google, Apple, Samsung, Lenovo and others who may surprise the industry with new levels of immersion and usability. Whoever comes out ahead, the simplicity of buying a helmet-sized device that can work in a living room, office, or factory floor has made HMDs center stage when it comes to Virtual Reality technologies.

Convincing Virtual Reality applications require more than just graphics. Both hearing and vision are central to a person's sense of space. In fact, human beings react more quickly to audio cues than to visual cues. In order to create truly immersive Virtual Reality experiences, accurate environmental sounds and spatial characteristics are a must. These lend a powerful sense of presence to a virtual world. To experience the binaural audio details that go into a Virtual Reality experience, put on some headphones and tinker with this audio infographic published by The Verge.

While audio-visual information is most easily replicated in Virtual Reality, active research and development efforts are still being conducted into the other senses. Tactile inputs such as omnidirectional treadmills allow users to feel as though they're actually walking through a simulation, rather than sitting in a chair or on a couch. Haptic technologies, also known as kinesthetic or touch feedback tech, have progressed from simple spinning-weight rumble motors to futuristic ultrasound technology. It is now possible to hear and feel true-to-life sensations along with visual VR experiences.

Read more:

What is Virtual Reality? VR Definition and Examples | Marxent

What is Virtual Reality (VR)? Ultimate Guide to Virtual …

Introduction to Virtual Reality (VR)

Virtual Reality (VR) literally makes it possible to experience anything, anywhere, anytime. It is the most immersive type of reality technology and can convince the human brain that it is somewhere it really is not. Head-mounted displays are used with headphones and hand controllers to provide a fully immersive experience. With the largest technology companies on the planet (Facebook, Google, and Microsoft) currently investing billions of dollars into virtual reality companies and startups, virtual reality is set to become a pillar of our everyday lives.

A realistic three-dimensional image or artificial environment is created with a mixture of interactive hardware and software, and presented to the user in such a way that any doubts are suspended and it is accepted as a real environment with which the user interacts in a seemingly real or physical way.

Virtual reality (also called Virtual Realities or VR) is best understood by first defining what it aims to achieve: total immersion.

Total immersion means that the sensory experience feels so real that we forget it is a virtual, artificial environment and begin to interact with it as we would naturally in the real world. In a virtual reality environment, a completely synthetic world may or may not mimic the properties of a real-world environment. This means that the virtual reality environment may simulate an everyday setting (e.g. walking around the streets of London), or may exceed the bounds of physical reality by creating a world in which the physical laws governing gravity, time and material properties no longer hold (e.g. shooting space aliens on a foreign gravity-less planet).

Virtual reality immersion is the perception of being physically present in a non-physical world. It encompasses the sense of presence, which is the point at which the human brain believes that it is somewhere it really is not, and is accomplished through purely mental and/or physical means. The state of total immersion exists when enough senses are activated to create the perception of being present in a non-physical world. Two common types of immersion are mental immersion and physical immersion.

Virtual reality requires as many of our senses as possible to be simulated. These senses include vision (visual), hearing (aural), touch (haptic), and more. Properly stimulating these senses requires sensory feedback, which is achieved through integrated hardware and software (also known as inputs). Examples of this hardware and these inputs are discussed below as key components of a virtual reality system, which include head-mounted displays (HMDs), special gloves or hand accessories, and hand controls.

Several categories of virtual reality technologies exist, with more likely to emerge as this technology progresses. The various types of virtual reality differ in their levels of immersion and also in their virtual reality applications and use cases.

Virtual Reality News - Latest Developments in Virtual Reality (VR) News

The field of augmented reality is continually growing with new technology advancements, software improvements, and products. Staying up to date with the latest augmented reality news is important to stay on top of this rapidly growing industry. We cover the latest in mixed reality news, augmented reality news, and virtual reality news.

In order for the human brain to accept an artificial, virtual environment as real, it has to not only look real, but also feel real. Looking real can be achieved by wearing a head-mounted display (HMD) that displays a recreated life-size, 3D virtual environment without the boundaries usually seen on a TV or computer screen. Feeling real can be achieved through handheld input devices such as motion trackers that base interactivity on the user's movements. By stimulating many of the same senses one would use to navigate in the real world, virtual reality environments feel increasingly like the natural world. Below, we explore some of the key components behind this system.

Virtual reality content, which is what users view inside a virtual reality headset, is just as important as the headset itself. In order to power these interactive three-dimensional environments, significant computing power is required. This is where PCs, consoles, and smartphones come in: they act as the engine that powers the content being produced.

A head-mounted display (also called an HMD, headset, or goggles) is a type of device that contains a display mounted in front of a user's eyes. This display usually covers the user's full field of view and displays virtual reality content. Some virtual reality head-mounted displays utilize smartphone displays, including the Google Cardboard and Samsung Gear VR. Head-mounted displays are often also accompanied by headphones to provide audio stimulation.

Inside each virtual reality head-mounted display (HMD) is a series of sensors, individual eye displays, lenses, and display screen(s), among other components. The iFixit Oculus Rift teardown offers an excellent step-by-step look inside one of the most popular virtual reality headsets. Below we explore some of the key components inside a virtual reality headset.


The three most common sensors in a virtual reality headset are magnetometers, accelerometers and gyroscopes. These sensors work together by measuring the user's motions and direction in space. Their ultimate goal is to achieve true six-degrees-of-freedom (6DoF) tracking, which covers all the degrees of motion for an object in space.
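
How those sensors are typically blended can be illustrated with a complementary filter: the gyroscope gives smooth angular rates that drift over time, while the accelerometer gives a noisy but drift-free gravity reference (the magnetometer plays the analogous role for yaw). The sketch below is a generic textbook filter with made-up numbers, not any headset's actual firmware.

```python
import math

def complementary_filter(pitch, gyro_rate, accel_pitch, dt, alpha=0.98):
    """Blend an integrated gyroscope rate (smooth, but drifts) with a
    gravity-derived accelerometer angle (noisy, but drift-free)."""
    gyro_estimate = pitch + gyro_rate * dt      # integrate angular velocity
    return alpha * gyro_estimate + (1 - alpha) * accel_pitch

# Simulate 1,000 steps at 1 kHz with a slightly biased gyro; the
# accelerometer reading keeps the estimate anchored near the true
# angle (here 10 degrees) instead of drifting away.
pitch = 0.0
for _ in range(1000):
    pitch = complementary_filter(pitch, gyro_rate=0.5, accel_pitch=10.0, dt=0.001)
print(round(pitch, 2))  # settles near 10 degrees
```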

Lenses lie between your eyes and the pixels on the display screen(s). They focus and reshape the picture for each eye by angling two 2D images to mimic how each of our eyes takes in views of the world (also called stereoscopic display). This creates an impression of depth and solidity, which we perceive as a three-dimensional image. Lenses are not one-size-fits-all and have to be adjusted for initial use, as all devices have different lens properties.
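
In code, the stereoscopic setup amounts to rendering the scene twice from two camera positions separated by the interpupillary distance (IPD). A minimal sketch, assuming an average adult IPD of roughly 63 mm; the numbers are illustrative:

```python
import numpy as np

def eye_positions(head_position, right_axis, ipd=0.063):
    """Offset the head position by half the interpupillary distance (IPD)
    along the head's local right axis to get the two per-eye cameras."""
    half = (ipd / 2.0) * right_axis
    return head_position - half, head_position + half  # (left eye, right eye)

head = np.array([0.0, 1.7, 0.0])   # standing eye height, in metres
right = np.array([1.0, 0.0, 0.0])  # head's local right direction
left_eye, right_eye = eye_positions(head, right)
print(left_eye, right_eye)         # render the scene once from each position
```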

Display screens show the images that users view through the lenses. They receive a video feed from the computer or smartphone; depending on the headset, the feed is sent either to one display or to two displays (one per eye), via a wireless connection, smartphone connection, or HDMI. The most common type of virtual reality display technology is the Liquid Crystal Display (LCD) screen, similar to the kinds used in smartphones and computer monitors. An alternative display technology is the Organic Light-Emitting Diode (OLED) screen.

Virtual reality systems demand a substantial amount of power, even in comparison to notoriously power-hungry gaming systems. The processing power required by virtual reality systems can be broken down into several categories:

Field of view (also called field of vision or FOV) is an important component used in virtual reality to provide users with a realistic perception of their environmental landscape. Simply put, field of view refers to how wide the picture is, measured in degrees of display (e.g. 360°). Most high-end headsets make do with a 100° or 110° field of view, which is sufficient for most virtual reality content.
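
A field-of-view figure feeds directly into the projection used for rendering. The sketch below builds a standard OpenGL-style perspective matrix from a vertical FOV; the 110° value and the other parameters are illustrative, not tied to a specific headset.

```python
import math
import numpy as np

def perspective(fov_y_degrees, aspect, near, far):
    """Standard OpenGL-style projection matrix from a vertical field of view."""
    f = 1.0 / math.tan(math.radians(fov_y_degrees) / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

# A 110-degree vertical FOV, rendered separately for each eye.
print(perspective(110.0, aspect=0.9, near=0.1, far=100.0))
```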

Frame rate refers to the frequency (rate) at which the display screen shows consecutive images, called frames. Television shows run at 30 frames per second (fps) and some game consoles run at 60 fps. In virtual reality, a minimum frame rate of approximately 60 fps is needed to avoid stuttering content and simulation sickness. The Oculus Rift runs at 90 fps, providing users with a very lifelike experience. Frame rates for future virtual reality headsets will inevitably continue to climb, providing an ever more realistic experience.

Latency refers to the amount of time it takes for an image displayed in a user's headset to catch up with their changing head position. Latency can also be thought of as a delay, and is measured in milliseconds (ms). In order for an experience to feel real, latency usually needs to be in the range of 20 ms or less. Low latency, or very little delay, is needed to make the human brain accept the virtual environment as real; the lower the latency, the better. At higher latencies, a noticeable and unnatural lag sets in, causing simulation sickness for the user.
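
Taken together, frame rate and latency form a budget: motion-to-photon latency is roughly the sum of the sensing, tracking, rendering, and display scan-out times. The back-of-the-envelope sum below uses illustrative stage timings, not measurements of any particular headset.

```python
# Back-of-the-envelope motion-to-photon latency budget. Stage timings
# are illustrative assumptions; only the 90 fps frame time is arithmetic.
stages_ms = {
    "sensor sampling": 1.0,
    "sensor fusion / tracking": 2.0,
    "render one frame at 90 fps": 1000.0 / 90.0,  # ~11.1 ms
    "display scan-out": 4.0,
}
total = sum(stages_ms.values())
print(f"motion-to-photon: {total:.1f} ms (target: 20 ms or less)")
```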

Virtual reality audio may not be as technically complex as the visual components; however, it is equally important for stimulating a user's senses and achieving immersion. Most virtual reality headsets give users the option to use their own headphones in conjunction with the headset, while others include integrated headphones. Virtual reality audio works via positional, multi-speaker audio (often called positional audio) that gives the illusion of a three-dimensional world. Positional audio is a way of seeing with your ears, and it is used in virtual reality because it can provide cues to gain a user's attention, or give them information that may not be presented visually. This technology is already quite common, often found in home theater surround sound systems.
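
One of the basic cues a positional-audio engine reproduces is the interaural time difference: a sound off to one side reaches the far ear slightly later than the near ear. The sketch below uses Woodworth's classical approximation with a typical head radius; the constants are illustrative.

```python
import math

def interaural_time_difference(azimuth_degrees, head_radius=0.0875, c=343.0):
    """Woodworth's approximation of the extra travel time (in seconds) to
    the far ear for a source at the given azimuth (0 = straight ahead)."""
    theta = math.radians(azimuth_degrees)
    return (head_radius / c) * (theta + math.sin(theta))

# A source 90 degrees to one side arrives roughly 0.66 ms later at the
# far ear, one of the per-source cues a positional-audio engine applies.
print(f"{interaural_time_difference(90) * 1000:.2f} ms")
```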

Tracking handles the vital task of understanding a user's movements and then acting upon them to maintain full immersion in virtual reality. Below, we explore three of the main types of virtual reality tracking:

Head tracking refers to the way in which the view in front of you shifts as you look up, down and side to side. A system called six degrees of freedom (6DoF) plots your head in terms of the x, y and z axes to measure head movements forward and backward, side to side and shoulder to shoulder, otherwise known as pitch, yaw and roll. Head tracking utilizes a series of sensors vital to any virtual reality headset: a gyroscope, accelerometer, and magnetometer. Head-tracking technology must be low-latency in order to be effective; anything above 50 ms will cause a noticeable lag between headset movement and changes in the virtual reality environment.
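
The pitch, yaw, and roll angles recovered by head tracking map directly to a rotation applied to the rendering camera each frame. A minimal sketch with an illustrative angle convention (production engines generally track orientation as quaternions to avoid gimbal lock):

```python
import numpy as np

def head_rotation(pitch, yaw, roll):
    """Rotation matrix from pitch (x), yaw (y) and roll (z) in radians,
    applied yaw, then pitch, then roll. The convention is illustrative."""
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    cr, sr = np.cos(roll), np.sin(roll)
    rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
    return rz @ rx @ ry

forward = np.array([0.0, 0.0, -1.0])  # the camera's default view direction
print(head_rotation(0.0, np.pi / 2, 0.0) @ forward)  # a 90-degree head turn
```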

Motion tracking is the way in which you view and interact with your own body (e.g. your hands and their movements). One of the most natural motion-related desires is to be able to see your own hands (virtually) in front of you. To do this, virtual reality input accessories such as gloves can be used. Other motion-tracking devices, such as wireless controllers, joysticks, treadmills, and motion platforms, are now being used to supplement the headset and provide an even more immersive experience. Many of these input accessories utilize sensors to detect gestures such as pointing and waving. Virtual reality systems such as HTC's Vive utilize base stations to track the sensors in the headset and controllers.

Eye tracking technology is still maturing; however, it may be one of the most important missing pieces of the virtual reality full-immersion puzzle. Eye tracking follows the human eyes via an infrared sensor that monitors eye movement inside the headset. The main advantage of this type of tracking is that depth of field (i.e. distance) becomes much more realistic: in a virtual reality headset, the objects our eyes focus on need to look as lifelike as possible. Without eye tracking, everything remains in focus as you move your eyes (but not your head) around a scene, increasing the likelihood of simulation sickness.

It takes bold visionaries and risk-takers to turn future technologies into realities. In the field of virtual reality (VR), there are many companies across the globe working on this mission. Our mega list of mixed reality, augmented reality and virtual reality companies covers the top companies and startups who are innovating in this space.

A well-established example of virtual reality already in use is aviation training. From flying a commercial airplane out of a crowded international airport, to training for a dangerous night flight using only night vision, virtual reality can provide significant benefits to aspiring pilots.

Piloting commercial flights requires taking on tremendous responsibility, as there are often several hundred passengers on any given flight. Training for this responsibility requires both conceptual and hands-on training. The initial hands-on training can often be supplemented by use of a simulator. These simulators, which employ sophisticated computer models, use virtual reality to recreate what a pilot should expect when actually flying. Simulators even use hydraulics to recreate the feeling of takeoff and landing. The benefit of using a virtual reality flight simulator is that all of this takes place in a controlled environment, which is forgiving of mistakes and poses virtually no risk.

Almost every flight by an active military pilot can be a life-threatening mission. Training to become a military pilot requires unique skill sets and knowledge of how to react in uncertain situations. Almost all branches of the military, including the Air Force, Army, and Navy, now use virtual reality technologies to train pilots. By using virtual reality, soldiers are taught how to fly in battle, how to handle emergencies and recover quickly, and how to coordinate air support with ground operations. Since simulators often offer visual acuity over the entire 360-degree field of view, they provide trainees with very deep levels of immersion. As mentioned above, the benefit of using a virtual reality flight simulator is that everything takes place in a controlled environment, which is forgiving of mistakes and poses virtually no risk.

Virtual Reality is only one pillar of reality technologies. Further explore the depth of these technologies by continuing with one of our other Ultimate Guide to Understanding web resources on Mixed Reality or Augmented Reality.

Read more from the original source:

What is Virtual Reality (VR)? Ultimate Guide to Virtual ...

What is virtual reality? – Definition from WhatIs.com

Virtual reality is an artificial environment that is created with software and presented to the user in such a way that the user suspends disbelief and accepts it as a real environment. On a computer, virtual reality is primarily experienced through two of the five senses: sight and sound.

The simplest form of virtual reality is a 3-D image that can be explored interactively at a personal computer, usually by manipulating keys or the mouse so that the content of the image moves in some direction or zooms in or out. More sophisticated efforts involve such approaches as wrap-around display screens, actual rooms augmented with wearable computers, and haptics devices that let you feel the display images.
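
At this simplest level, the interaction loop is just input events mapped to camera changes followed by a redraw. The sketch below mocks the input events and is not tied to any real windowing toolkit; all names are hypothetical.

```python
# Schematic sketch of keyboard-driven 3-D image exploration: each input
# event nudges the virtual camera, and the scene would then be redrawn
# from the new viewpoint. Input events are mocked for illustration.
camera = {"distance": 5.0, "yaw_degrees": 0.0}

def handle_key(key):
    if key == "+":
        camera["distance"] = max(0.5, camera["distance"] - 0.5)  # zoom in
    elif key == "-":
        camera["distance"] += 0.5                                # zoom out
    elif key == "left":
        camera["yaw_degrees"] -= 10.0                            # orbit left
    elif key == "right":
        camera["yaw_degrees"] += 10.0                            # orbit right

for key in ["+", "+", "right", "right", "-"]:                    # mock events
    handle_key(key)
print(camera)  # the renderer would redraw the image from this viewpoint
```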

Virtual reality can be divided into the simulation of a real environment for training and education, and the development of an imagined environment for a game or interactive story.

The Virtual Reality Modelling Language (VRML) allows the creator to specify images and the rules for their display and interaction using textual language statements.
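
For a flavour of those textual statements, the sketch below writes out a minimal VRML 2.0 scene (a single red box) from Python; any VRML97-capable viewer should be able to open the resulting file. The file name is arbitrary.

```python
# A minimal VRML 2.0 scene, written to disk from Python: the textual
# statements specify a shape's appearance (a red material) and its
# geometry (a unit box).
vrml_scene = """#VRML V2.0 utf8
Shape {
  appearance Appearance {
    material Material { diffuseColor 1 0 0 }
  }
  geometry Box { size 1 1 1 }
}
"""

with open("red_box.wrl", "w") as f:
    f.write(vrml_scene)
```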

See also: augmented reality

See a video introduction to virtual reality:

Excerpt from:

What is virtual reality? - Definition from WhatIs.com

How Virtual Reality (VR) can Enrich the Hospitality Industry

Virtual reality, or VR for short, is one of the biggest emerging technology trends and the business world is gradually coming to terms with the various opportunities it provides. For those in the hospitality industry, virtual reality has particular appeal, because it can digitally transport potential customers to a hotel or travel destination.

In this article, you will learn about various ways hotels can leverage virtual reality to boost business results.

Virtual reality is a computer technology, which utilises images, sounds and physical sensations to make users feel as though they are physically present in a virtual world. Virtual reality technology typically makes use of VR headsets and this equipment enables users to look around and immerse themselves in a digital environment.

The concept of virtual reality has actually existed, in some form, since the 1930s, but high-quality virtual reality headsets have only become a mainstream consumer product in more recent times, due in large part to increased investment from the likes of Google, Facebook and Samsung.

While many of the applications of modern virtual reality are entertainment-based, businesses are increasingly getting to grips with VR's potential as a marketing tool, delivering important information to potential customers in a way they can actually experience, and stimulating multiple senses in the process.

Within the hospitality industry, VR has become particularly important, because of the amount of information the average customer needs before they will actually book a hotel room. Rather than reading through descriptions, which may or may not be trustworthy, it offers customers the chance to experience things for themselves.

For example, this potentially allows customers to experience a virtual recreation of a room within a hotel, or take a look at one of the nearby attractions. Essentially, this allows the hotel industry to benefit from the type of 'try before you buy' marketing that has been commonplace within the food industry for decades.

Of course, the practical uses for virtual reality technology do not stop when the customer has booked a hotel room. Indeed, those within the hospitality industry can continue to use VR to deliver information and allow customers to experience nearby attractions once they have arrived, adding to the hotel experience itself.

The full potential of virtual reality within the hotel industry is only recently being recognised. Nevertheless, three of the best current uses of the technology are outlined below:

One of the most common uses of virtual reality in the hospitality industry so far has been the creation of virtual travel experiences, using 360 degree video technology. Through this, users can experience a virtual recreation of different aspects of travel, from the flight, to arrival, to some of the key sights.

Three examples of this can be seen below. The first is a video showing how the basic process works, with people wearing VR headsets and experiencing a virtual tour. Meanwhile, the second and third examples are 360 degree videos, which can be viewed with VR glasses or a Google Cardboard for a more immersive experience.

Example #1: A Virtual Honeymoon to London and Hawaii

Example #2: Visit Hamilton Island in 360 Virtual Reality with Qantas - Best viewed with VR glasses or a Google Cardboard

Example #3: Maldives VR 360 4K Video - Best viewed with VR glasses or a Google Cardboard

Another common use of virtual reality technology within the hotel industry is for virtual hotel tours. These tours can be made available on hotel websites, allowing guests or potential guests to take a look at their hotel room, or other parts of the hotel, before they book or before they arrive.

While these tours are best experienced with a VR headset, they can also potentially be made available to those without access to a headset on social media sites like Facebook, using its 360 degree video technology.

Example: Atlantis Dubai Virtual Tour VR 360 - Best viewed with VR glasses or a Google Cardboard

Finally, one of the more interesting uses of VR technology in recent times has been the creation of virtual reality booking processes. This has recently been put into action by companies like Amadeus, allowing customers to look for flights, compare hotel prices and book rooms through a virtual reality headset.

The potential for this has not yet been fully explored, but it is easy to see how this VR booking process can allow customers to explore virtual hotel rooms, experience local sights and book a room seamlessly.

Virtual Reality travel search and booking experience

Would you like to learn more about other digital technologies that can benefit your business? Also have a look at the articles How Augmented Reality is Transforming the Hospitality Industry and Using Artificial Intelligence in the Hospitality Industry.

With digital technology continuously evolving, it should come as little surprise that its applications within the travel and hospitality industry evolve too. In the following articles you will find the most innovative digital trends in the hospitality industry.

Link:

How Virtual Reality (VR) can Enrich the Hospitality Industry

Incredible Augmented Reality Demo Conjures Up Ghostly Versions of Your Coworkers

Ghost World

An augmented reality startup called Spatial popped out of stealth mode Wednesday with an incredible AR demo that shows how its service can populate an empty room with ghostly — yet bustling — versions of your coworkers.

“We think the future of work is going to be increasingly distributed,” CEO Anand Agarawala told TechCrunch. “When you put on Spatial, [your coworkers] are in the room with you. It feels like they’re all sitting at the table, and they feel like they’ve been teleported into the space with you.”

Augmented Reality Check

Spatial claims its “collective computing environment” will let remote workers share data with each other in seemingly physical space. All they have to do is don an AR headset, such as Microsoft’s HoloLens. But that could prove to be Spatial’s Achilles’ heel: the current generation of AR headsets is uncomfortable to wear and has a narrow field of view.

Still, the demo is a powerful vision of what virtual work environments could look like in a post-Slack world — even if we need to wait for the hardware to catch up with the concept.

READ MORE: Spatial Debuts ‘Minority Report’-Inspired Augmented Reality Collaboration Tool [Variety]

More on augmented reality: Amazon Has Plans for Headset Free Augmented Reality to Transform Your Home

Read the original:

Incredible Augmented Reality Demo Conjures Up Ghostly Versions of Your Coworkers

Virtual Reality Headsets | GameStop

Are You Ready for Virtual Reality?

The future of VR gaming is finally here! Virtual Reality technology (VR headsets, VR glasses, VR goggles, etc.) that once existed only in science fiction novels has gone mainstream, and it's a natural fit for the video game industry.

What Is Virtual Reality?

Virtual Reality can be defined as a fully immersive computer simulated environment that gives one the feeling of being in a virtual world, instead of their actual world. VR is a super-realistic reality that replicates sensory experiences like sight, touch, hearing and smell.

Headset devices (like PSVR, Oculus VR, HTC Vive) use stereoscopic displays to make what you see three dimensional and give depth to the image that you are looking at. Sensors track your motion and allow the image to change with your perspective. Our other senses such as sound and touch help to convince our brains that the virtual reality is real. VR is all about immersion and the feeling of presence, so you can truly become the character that you are playing in the game.

Virtual Reality Cost

Virtual Reality gaming equipment is expected to cost anywhere from $19.99 to $1,500 (with a high-end computer required to properly run the more expensive VR systems). From driving games to first-person shooters, there are literally hundreds of Virtual Reality games in development right now. The price of each virtual reality game depends on the particular game and the gaming system.

Stay Up-To-Date with Virtual Reality

Virtual Reality is the future of gaming and entertainment, and GameStop wants to make sure that you are in-the-know about the best VR headsets and all that virtual gaming has to offer. Be sure to sign up for our VR email updates above to stay current on the latest on PlayStation VR, Oculus Rift, HTC Vive and more!

Read the original here:

Virtual Reality Headsets | GameStop

Virtual reality | computer science | Britannica.com

Virtual reality (VR), the use of computer modeling and simulation that enables a person to interact with an artificial three-dimensional (3-D) visual or other sensory environment. VR applications immerse the user in a computer-generated environment that simulates reality through the use of interactive devices, which send and receive information and are worn as goggles, headsets, gloves, or body suits. In a typical VR format, a user wearing a helmet with a stereoscopic screen views animated images of a simulated environment. The illusion of being there (telepresence) is effected by motion sensors that pick up the user's movements and adjust the view on the screen accordingly, usually in real time (the instant the user's movement takes place). Thus, a user can tour a simulated suite of rooms, experiencing changing viewpoints and perspectives that are convincingly related to his own head turnings and steps. Wearing data gloves equipped with force-feedback devices that provide the sensation of touch, the user can even pick up and manipulate objects that he sees in the virtual environment.

The term virtual reality was coined in 1987 by Jaron Lanier, whose research and engineering contributed a number of products to the nascent VR industry. A common thread linking early VR research and technology development in the United States was the role of the federal government, particularly the Department of Defense, the National Science Foundation, and the National Aeronautics and Space Administration (NASA). Projects funded by these agencies and pursued at university-based research laboratories yielded an extensive pool of talented personnel in fields such as computer graphics, simulation, and networked environments and established links between academic, military, and commercial work. The history of this technological development, and the social context in which it took place, is the subject of this article.


Artists, performers, and entertainers have always been interested in techniques for creating imaginative worlds, setting narratives in fictional spaces, and deceiving the senses. Numerous precedents for the suspension of disbelief in an artificial world in artistic and entertainment media preceded virtual reality. Illusionary spaces created by paintings or views have been constructed for residences and public spaces since antiquity, culminating in the monumental panoramas of the 18th and 19th centuries. Panoramas blurred the visual boundaries between the two-dimensional images displaying the main scenes and the three-dimensional spaces from which these were viewed, creating an illusion of immersion in the events depicted. This image tradition stimulated the creation of a series of media, from futuristic theatre designs, stereopticons, and 3-D movies to IMAX movie theatres, over the course of the 20th century to achieve similar effects. For example, the Cinerama widescreen film format, originally called Vitarama when invented for the 1939 New York World's Fair by Fred Waller and Ralph Walker, originated in Waller's studies of vision and depth perception. Waller's work led him to focus on the importance of peripheral vision for immersion in an artificial environment, and his goal was to devise a projection technology that could duplicate the entire human field of vision. The Vitarama process used multiple cameras and projectors and an arc-shaped screen to create the illusion of immersion in the space perceived by a viewer. Though Vitarama was not a commercial hit until the mid-1950s (as Cinerama), the Army Air Corps successfully used the system during World War II for anti-aircraft training under the name Waller Flexible Gunnery Trainer, an example of the link between entertainment technology and military simulation that would later advance the development of virtual reality.

Sensory stimulation was a promising method for creating virtual environments before the use of computers. After the release of a promotional film called This Is Cinerama (1952), the cinematographer Morton Heilig became fascinated with Cinerama and 3-D movies. Like Waller, he studied human sensory signals and illusions, hoping to realize a cinema of the future. By late 1960, Heilig had built an individual console with a variety of inputs (stereoscopic images, a motion chair, audio, temperature changes, odours, and blown air) that he patented in 1962 as the Sensorama Simulator, designed to stimulate the senses of an individual to simulate an actual experience realistically. During the work on Sensorama, he also designed the Telesphere Mask, a head-mounted stereoscopic 3-D TV display that he patented in 1960. Although Heilig was unsuccessful in his efforts to market Sensorama, in the mid-1960s he extended the idea to a multiviewer theatre concept patented as the Experience Theater and a similar system called Thrillerama for the Walt Disney Company.

The seeds for virtual reality were planted in several computing fields during the 1950s and 60s, especially in 3-D interactive computer graphics and vehicle/flight simulation. Beginning in the late 1940s, Project Whirlwind, funded by the U.S. Navy, and its successor project, the SAGE (Semi-Automated Ground Environment) early-warning radar system, funded by the U.S. Air Force, first utilized cathode-ray tube (CRT) displays and input devices such as light pens (originally called light guns). By the time the SAGE system became operational in 1957, air force operators were routinely using these devices to display aircraft positions and manipulate related data.

During the 1950s, the popular cultural image of the computer was that of a calculating machine, an automated electronic brain capable of manipulating data at previously unimaginable speeds. The advent of more affordable second-generation (transistor) and third-generation (integrated circuit) computers emancipated the machines from this narrow view, and in doing so it shifted attention to ways in which computing could augment human potential rather than simply substituting for it in specialized domains conducive to number crunching. In 1960 Joseph Licklider, a professor at the Massachusetts Institute of Technology (MIT) specializing in psychoacoustics, posited a man-computer symbiosis and applied psychological principles to human-computer interactions and interfaces. He argued that a partnership between computers and the human brain would surpass the capabilities of either alone. As founding director of the new Information Processing Techniques Office (IPTO) of the Defense Advanced Research Projects Agency (DARPA), Licklider was able to fund and encourage projects that aligned with his vision of human-computer interaction while also serving priorities for military systems, such as data visualization and command-and-control systems.

Another pioneer was electrical engineer and computer scientist Ivan Sutherland, who began his work in computer graphics at MIT's Lincoln Laboratory (where Whirlwind and SAGE had been developed). In 1963 Sutherland completed Sketchpad, a system for drawing interactively on a CRT display with a light pen and control board. Sutherland paid careful attention to the structure of data representation, which made his system useful for the interactive manipulation of images. In 1964 he was put in charge of IPTO, and from 1968 to 1976 he led the computer graphics program at the University of Utah, one of DARPA's premier research centres. In 1965 Sutherland outlined the characteristics of what he called the ultimate display and speculated on how computer imagery could construct plausible and richly articulated virtual worlds. His notion of such a world began with visual representation and sensory input, but it did not end there; he also called for multiple modes of sensory input. DARPA sponsored work during the 1960s on output and input devices aligned with this vision, such as the Sketchpad III system by Timothy Johnson, which presented 3-D views of objects; Larry Roberts's Lincoln Wand, a system for drawing in three dimensions; and Douglas Engelbart's invention of a new input device, the computer mouse.

Within a few years, Sutherland contributed the technological artifact most often identified with virtual reality, the head-mounted 3-D computer display. In 1967 Bell Helicopter (now part of Textron Inc.) carried out tests in which a helicopter pilot wore a head-mounted display (HMD) that showed video from a servo-controlled infrared camera mounted beneath the helicopter. The camera moved with the pilot's head, both augmenting his night vision and providing a level of immersion sufficient for the pilot to equate his field of vision with the images from the camera. This kind of system would later be called augmented reality because it enhanced a human capacity (vision) in the real world. When Sutherland left DARPA for Harvard University in 1966, he began work on a tethered display for computer images. This was an apparatus shaped to fit over the head, with goggles that displayed computer-generated graphical output. Because the display was too heavy to be borne comfortably, it was held in place by a suspension system. Two small CRT displays were mounted in the device, near the wearer's ears, and mirrors reflected the images to his eyes, creating a stereo 3-D visual environment that could be viewed comfortably at a short distance. The HMD also tracked where the wearer was looking so that correct images would be generated for his field of vision. The viewer's immersion in the displayed virtual space was intensified by the visual isolation of the HMD, yet other senses were not isolated to the same degree and the wearer could continue to walk around.

An important area of application for VR systems has always been training for real-life activities. The appeal of simulations is that they can provide training equal or nearly equal to practice with real systems, but at reduced cost and with greater safety. This is particularly the case for military training, and the first significant application of commercial simulators was pilot training during World War II. Flight simulators rely on visual and motion feedback to augment the sensation of flying while seated in a closed mechanical system on the ground. The Link Company, founded by former piano maker Edwin Link, began to construct the first prototype Link Trainers during the late 1920s, eventually settling on the "blue box" design acquired by the Army Air Corps in 1934. The first systems used motion feedback to increase familiarity with flight controls. Pilots trained by sitting in a simulated cockpit, which could be moved hydraulically in response to their actions. Later versions added a cyclorama scene painted on a wall outside the simulator to provide limited visual feedback. Not until the Celestial Navigation Trainer, commissioned by the British government in World War II, were projected film strips used in Link Trainers; still, these systems could only project what had been filmed along a correct flight or landing path, not generate new imagery based on a trainee's actions. By the 1960s, flight trainers were using film and closed-circuit television to enhance the visual experience of flying. The images could be distorted to generate flight paths that diverted slightly from what had been filmed; sometimes multiple cameras were used to provide different perspectives, or movable cameras were mounted over scale models to depict airports for simulated landings.

Inspired by the controls in the Link flight trainer, Sutherland suggested that such displays include multiple sensory outputs, force-feedback joysticks, muscle sensors, and eye trackers; a user would be fully immersed in the displayed environment and fly through concepts which never before had any visual representation. In 1968 he moved to the University of Utah, where he and his colleague David Evans founded Evans & Sutherland Computer Corporation. The new company initially focused on the development of graphics applications, such as scene generators for flight simulator systems. These systems could render scenes at roughly 20 frames per second in the early 1970s, about the minimum frame rate for effective flight training. General Electric Company constructed the first flight simulators with built-in, real-time computer image generation, first for the Apollo program in the 1960s, then for the U.S. Navy in 1972. By the mid-1970s, these systems were capable of generating simple 3-D models with a few hundred polygon faces; they utilized raster graphics (collections of dots) and could model solid objects with textures to enhance the sense of realism (see computer graphics). By the late 1970s, military flight simulators were also incorporating head-mounted displays, such as McDonnell Douglas Corporation's VITAL helmet, primarily because they required much less space than a projected display. A sophisticated head tracker in the HMD followed a pilot's eye movements to match computer-generated images (CGI) with his view and handling of the flight controls.

Advances in flight simulators, human-computer interfaces, and augmented reality systems pointed to the possibility of immersive, real-time control systems, not only for research or training but also for improved performance. Since the 1960s, electrical engineer Thomas Furness had been working on visual displays and instrumentation in cockpits for the U.S. Air Force. By the late 1970s, he had begun development of virtual interfaces for flight control, and in 1982 he demonstrated the Visually Coupled Airborne Systems Simulator, better known as the Darth Vader helmet after the armoured archvillain of the popular movie Star Wars. From 1986 to 1989, Furness directed the air force's Super Cockpit program. The essential idea of this project was that the capacity of human pilots to handle spatial information depended on these data being portrayed in a way that takes advantage of the human's natural perceptual mechanisms. Applying the HMD to this goal, Furness designed a system that projected information such as computer-generated 3-D maps, forward-looking infrared and radar imagery, and avionics data into an immersive, 3-D virtual space that the pilot could view and hear in real time. The helmet's tracking system, voice-actuated controls, and sensors enabled the pilot to control the aircraft with gestures, utterances, and eye movements, translating immersion in a data-filled virtual space into control modalities. The more natural perceptual interface also reduced the complexity and number of controls in the cockpit. The Super Cockpit thus realized Licklider's vision of man-machine symbiosis by creating a virtual environment in which pilots flew through data. Beginning in 1987, British Aerospace (now part of BAE Systems) also used the HMD as the basis for a similar training simulator, known as the Virtual Cockpit, that incorporated head, hand, and eye tracking, as well as speech recognition.

Sutherland and Furness brought the notion of simulator technology from real-world imagery to virtual worlds that represented abstract models and data. In these systems, visual verisimilitude was less important than immersion and feedback that engaged all the senses in a meaningful way. This approach had important implications for medical and scientific research. Project GROPE, started in 1967 at the University of North Carolina by Frederick Brooks, was particularly noteworthy for the advancements it made possible in the study of molecular biology. Brooks sought to enhance perception and comprehension of the interaction of a drug molecule with its receptor site on a protein by creating a window into the virtual world of molecular docking forces. He combined wire-frame imagery to represent molecules and physical forces with haptic (tactile) feedback mediated through special hand-grip devices to arrange the virtual molecules into a minimum binding energy configuration. Scientists using this system felt their way around the represented forces like flight trainees learning the instruments in a Link cockpit, grasping the physical situations depicted in the virtual world and hypothesizing new drugs based on their manipulations. During the 1990s, Brooks's laboratory extended the use of virtual reality to radiology and ultrasound imaging.

Virtual reality was extended to surgery through the technology of telepresence, the use of robotic devices controlled remotely through mediated sensory feedback to perform a task. The foundation for virtual surgery was the expansion during the 1970s and 80s of microsurgery and other less invasive forms of surgery. By the late 1980s, microcameras attached to endoscopic devices relayed images that could be shared among a group of surgeons looking at one or more monitors, often in diverse locations. In the early 1990s, a DARPA initiative funded research to develop telepresence workstations for surgical procedures. This was Sutherland's window into a virtual world, with the added dimension of a level of sensory feedback that could match a surgeon's fine motor control and hand-eye coordination. The first telesurgery equipment was developed at SRI International in 1993; the first robotic surgery was performed in 1998 at the Broussais Hospital in Paris.

As virtual worlds became more detailed and immersive, people began to spend time in these spaces for entertainment, aesthetic inspiration, and socializing. Research that conceived of virtual places as fantasy spaces, focusing on the activity of the subject rather than replication of some real environment, was particularly conducive to entertainment. Beginning in 1969, Myron Krueger of the University of Wisconsin created a series of projects on the nature of human creativity in virtual environments, which he later called artificial reality. Much of Krueger's work, especially his VIDEOPLACE system, processed interactions between a participant's digitized image and computer-generated graphical objects. VIDEOPLACE could analyze and process the user's actions in the real world and translate them into interactions with the system's virtual objects in various preprogrammed ways. Different modes of interaction with names like finger painting and digital drawing suggest the aesthetic dimension of this system. VIDEOPLACE differed in several aspects from training and research simulations. In particular, the system reversed the emphasis from the user perceiving the computer's generated world to the computer perceiving the user's actions and converting these actions into compositions of objects and space within the virtual world. With the emphasis shifted to responsiveness and interaction, Krueger found that fidelity of representation became less important than the interactions between participants and the rapidity of response to images or other forms of sensory input.

The ability to manipulate virtual objects and not just see them is central to the presentation of compelling virtual worlds; hence the iconic significance of the data glove in the emergence of VR in commerce and popular culture. Data gloves relay a user's hand and finger movements to a VR system, which then translates the wearer's gestures into manipulations of virtual objects. The first data glove, developed in 1977 at the University of Illinois for a project funded by the National Endowment for the Arts, was called the Sayre Glove after one of the team members. In 1982 Thomas Zimmerman invented the first optical glove, and in 1983 Gary Grimes at Bell Laboratories constructed the Digital Data Entry Glove, the first glove with sufficient flexibility and tactile and inertial sensors to monitor hand position for a variety of applications, such as providing an alternative to keyboard input for data entry.

Zimmerman's glove would have the greatest impact. He had been thinking for years about constructing an interface device for musicians based on the common practice of playing air guitar; in particular, a glove capable of tracking hand and finger movements could be used to control instruments such as electronic synthesizers. He patented an optical flex-sensing device (which used light-conducting fibres) in 1982, one year after Grimes patented his glove-based computer interface device. By then, Zimmerman was working at the Atari Research Center in Sunnyvale, California, along with Scott Fisher, Brenda Laurel, and other VR researchers who would be active during the 1980s and beyond. Jaron Lanier, another researcher at Atari, shared Zimmerman's interest in electronic music. Beginning in 1983, they worked together on improving the design of the data glove, and in 1985 they left Atari to start up VPL Research; its first commercial product was the VPL DataGlove.

By 1985, Fisher had also left Atari to join NASA's Ames Research Center at Moffett Field, California, as founding director of the Virtual Environment Workstation (VIEW) project. The VIEW project put together a package of objectives that summarized previous work on artificial environments, ranging from creation of multisensory and immersive virtual environment workstations to telepresence and teleoperation applications. Influenced by a range of prior projects that included Sensorama, flight simulators, and arcade rides, and surprised by the expense of the air force's Darth Vader helmets, Fisher's group focused on building low-cost, personal simulation environments. While the objective of NASA was to develop telerobotics for automated space stations in future planetary exploration, the group also considered the workstation's use for entertainment, scientific, and educational purposes. The VIEW workstation, called the Virtual Visual Environment Display when completed in 1985, established a standard suite of VR technology that included a stereoscopic head-coupled display, head tracker, speech recognition, computer-generated imagery, data glove, and 3-D audio technology.

The VPL DataGlove was brought to market in 1987, and in October of that year it appeared on the cover of Scientific American. VPL also spawned a full-body, motion-tracking system called the DataSuit, a head-mounted display called the EyePhone, and a shared VR system for two people called RB2 (Reality Built for Two). VPL declared June 7, 1989, Virtual Reality Day. On that day, both VPL and Autodesk publicly demonstrated the first commercial VR systems. The Autodesk VR CAD (computer-aided design) system was based on VPL's RB2 technology but was scaled down for operation on personal computers. The marketing splash introduced Lanier's new term virtual reality as a realization of cyberspace, a concept introduced in science fiction writer William Gibson's Neuromancer in 1984. Lanier, the dreadlocked chief executive officer of VPL, became the public celebrity of the new VR industry, while announcements by Autodesk and VPL let loose a torrent of enthusiasm, speculation, and marketing hype. Soon it seemed that VR was everywhere, from the Mattel/Nintendo PowerGlove (1989) to the HMD in the movie The Lawnmower Man (1992), the Nintendo VirtualBoy game system (1995), and the television series VR5 (1995).

Numerous VR companies were founded in the early 1990s, most of them in Silicon Valley, but by mid-decade most of the energy unleashed by the VPL and Autodesk marketing campaigns had dissipated. The VR configuration that took shape over a span of projects leading from Sutherland to Lanier (HMD, data gloves, multimodal sensory input, and so forth) failed to have a broad appeal as quickly as the enthusiasts had predicted. Instead, the most visible and successfully marketed products were location-based entertainment systems rather than personal VR systems. These VR arcades and simulators, designed by teams from the game, movie, simulation, and theme park industries, combined the attributes of video games, amusement park rides, and highly immersive storytelling. Perhaps the most important of the early projects was Disneyland's Star Tours, an immersive flight simulator ride based on the Star Wars movie series and designed in collaboration with producer George Lucas's Industrial Light & Magic. Disney had long built themed rides utilizing advanced technology, such as animatronic characters, notably in Pirates of the Caribbean, an attraction originally installed at Disneyland in 1967. Star Tours utilized simulated motion and special-effects technology, mixing techniques learned from Hollywood films and military flight simulators with strong story lines and architectural elements that shaped the viewers' experience from the moment they entered the waiting line for the attraction. After the opening of Star Tours in 1987, Walt Disney Imagineering embarked on a series of projects to apply interactive technology and immersive environments to ride systems, including 3-D motion-picture photography used in Honey, I Shrunk the Audience (1995), the DisneyQuest indoor interactive theme park (1998), and the multiplayer-gaming virtual world, Toontown Online (2001).

In 1990, Virtual World Entertainment opened the first BattleTech emporium in Chicago. Modeled loosely on the U.S. military's SIMNET system of networked training simulators, BattleTech centres put players in individual pods, essentially cockpits that served as immersive, interactive consoles for both narrative and competitive game experiences. All the vehicles represented in the game were controlled by other players, each in his own pod and linked to a high-speed network set up for a simultaneous multiplayer experience. The players' immersion in the virtual world of the competition resulted from a combination of elements, including a carefully constructed story line, the physical architecture of the arcade space and pod, and the networked virtual environment. During the 1990s, BattleTech centres were constructed in other cities around the world, and the BattleTech franchise also expanded to home electronic games, books, toys, and television.

While the Disney and Virtual World Entertainment projects were the best-known instances of location-based VR entertainments, other important projects included Iwerks Entertainment's Turbo Tour and Turboride 3-D motion simulator theatres, first installed in San Francisco in 1992; motion-picture producer Steven Spielberg's Gameworks arcades, the first of which opened in 1997 as a joint project of Universal Studios, Sega Corporation, and Dreamworks SKG; many individual VR arcade rides, beginning with Sega's R360 gyroscope flight simulator, released in 1991; and, finally, Visions of Reality's VR arcades, the spectacular failure of which contributed to the bursting of the investment bubble for VR ventures in the mid-1990s.

Continued here:

Virtual reality | computer science | Britannica.com