

What is Virtual Reality? – Virtual Reality Society

The definition of virtual reality comes, naturally, from the definitions for both 'virtual' and 'reality'. The definition of 'virtual' is near, and 'reality' is what we experience as human beings. So the term 'virtual reality' basically means 'near-reality'. This could, of course, mean anything, but it usually refers to a specific type of reality emulation.

We know the world through our senses and perception systems. In school we all learned that we have five senses: taste, touch, smell, sight and hearing. These are, however, only our most obvious sense organs. The truth is that humans have many more senses than this, such as a sense of balance. These other sensory inputs, plus some special processing of sensory information by our brains, ensure that we have a rich flow of information from the environment to our minds.

Everything that we know about our reality comes by way of our senses. In other words, our entire experience of reality is simply a combination of sensory information and our brain's sense-making mechanisms for that information. It stands to reason, then, that if you can present your senses with made-up information, your perception of reality will also change in response to it. You would be presented with a version of reality that isn't really there, but from your perspective it would be perceived as real: something we would refer to as a virtual reality.

So, in summary, virtual reality entails presenting our senses with a computer-generated virtual environment that we can explore in some fashion.

Answering 'what is virtual reality?' in technical terms is straightforward. Virtual reality is the term used to describe a three-dimensional, computer-generated environment which can be explored and interacted with by a person. That person becomes part of this virtual world, or is immersed within this environment, and whilst there is able to manipulate objects or perform a series of actions.

Although we talk about a few historical early forms of virtual reality elsewhere on the site, today virtual reality is usually implemented using computer technology. A range of systems are used for this purpose, such as headsets, omni-directional treadmills and special gloves. Together, these stimulate our senses in order to create the illusion of reality.

This is more difficult than it sounds, since our senses and brains have evolved to provide us with a finely synchronised and mediated experience. If anything is even a little off, we can usually tell. This is where you'll hear terms such as immersiveness and realism enter the conversation. The issues that divide convincing or enjoyable virtual reality experiences from jarring or unpleasant ones are partly technical and partly conceptual. Virtual reality technology needs to take our physiology into account. For example, the human visual field does not look like a video frame. We have (more or less) 180 degrees of vision, and although you are not always consciously aware of your peripheral vision, if it were gone you'd notice. Similarly, when what your eyes and the vestibular system in your ears tell you are in conflict, it can cause motion sickness, which is what happens to some people on boats or when they read in a car.

If an implementation of virtual reality manages to get the combination of hardware, software and sensory synchronicity just right, it achieves something known as a sense of presence, where the subject really feels like they are present in that environment.

This may seem like a lot of effort, and it is! What makes the development of virtual reality worthwhile? The potential entertainment value is clear; immersive films and video games are good examples. The entertainment industry is, after all, a multi-billion-dollar one, and consumers are always keen on novelty. Virtual reality has many other, more serious, applications as well.

There are a wide variety of applications for virtual reality, from entertainment and education to medicine and military training. Virtual reality can lead to new and exciting discoveries in these areas, discoveries which impact upon our day-to-day lives.

Wherever it is too dangerous, expensive or impractical to do something in reality, virtual reality is the answer. From trainee fighter pilots to trainee surgeons, virtual reality allows us to take virtual risks in order to gain real-world experience. As the cost of virtual reality goes down and it becomes more mainstream, you can expect more serious uses, such as education or productivity applications, to come to the fore. Virtual reality and its cousin augmented reality could substantively change the way we interface with our digital technologies, continuing the trend of humanising our technology.

There are many different types of virtual reality systems, but they all share the same characteristics, such as the ability to allow the person to view three-dimensional images. These images appear life-sized to the person.

These images also change as the person moves around their environment, corresponding with the change in their field of vision. The aim is a seamless join between the person's head and eye movements and the appropriate response, e.g. a change in perception. This ensures that the virtual environment is both realistic and enjoyable.

A virtual environment should provide the appropriate responses in real time as the person explores their surroundings. Problems arise when there is a delay between the person's actions and the system's response (latency), which disrupts their experience. The person becomes aware that they are in an artificial environment and adjusts their behaviour accordingly, resulting in a stilted, mechanical form of interaction.
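To make 'real time' concrete, here is a minimal Python sketch of a motion-to-photon latency budget, i.e. the delay between a movement and the matching change on the display. The component timings and the rough 20 ms comfort threshold are illustrative assumptions, not figures from the article.

```python
def motion_to_photon_ms(tracking_ms, render_ms, display_hz):
    """Sum the main contributors to the lag between a head movement
    and the matching change on the headset's screens."""
    scanout_ms = 1000.0 / display_hz  # worst case: wait one full refresh
    return tracking_ms + render_ms + scanout_ms

# Hypothetical example: a 1 ms tracker, 8 ms of rendering, a 90 Hz display.
total = motion_to_photon_ms(tracking_ms=1.0, render_ms=8.0, display_hz=90)
print(f"motion-to-photon latency: {total:.1f} ms")  # ~20.1 ms
```

If that total creeps much higher, the delay described above becomes noticeable, which is why VR systems fight for every millisecond of tracking, rendering and display time.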

The aim is for a natural, free-flowing form of interaction which will result in a memorable experience.

Virtual reality is the creation of a virtual environment presented to our senses in such a way that we experience it as if we were really there. It uses a host of technologies to achieve this goal and is a technically complex feat that has to account for our perception and cognition. It has both entertainment and serious uses. The technology is becoming cheaper and more widespread, and we can expect to see many more innovative uses for it in the future, perhaps even a fundamental shift in the way we communicate and work thanks to the possibilities of virtual reality.

See the rest here:

What is Virtual Reality? – Virtual Reality Society

VR Porn: Virtual Reality Sex Videos & Porno Movies | YouPorn

Virtual reality is the newest frontier to explore in porn and once you begin your exploration you won’t ever want to go back to the standard way of watching. Masturbate and get off to the hottest VR porn online when you are browsing our selection of free XXX content. Next time you need your reality augmented, cum check out the virtual sex videos on YouPorn.

See the original post:

VR Porn: Virtual Reality Sex Videos & Porno Movies | YouPorn

12 Amazing Uses of Virtual Reality – Entrepreneur

Virtual reality technology holds enormous potential to change the future for a number of fields, from medicine and business to architecture and manufacturing.

Psychologists and other medical professionals are using VR to heighten traditional therapy methods and find effective treatments for PTSD, anxiety and social disorders. Doctors are employing VR to train medical students in surgery, treat patients' pain and even help paraplegics regain body functions.

In business, a variety of industries are benefiting from VR. Carmakers are creating safer vehicles, architects are constructing stronger buildings and even travel agencies are using it to simplify vacation planning.

Read this article:

12 Amazing Uses of Virtual Reality – Entrepreneur

Virtual reality | computer science | Britannica.com

Virtual reality (VR), the use of computer modeling and simulation that enables a person to interact with an artificial three-dimensional (3-D) visual or other sensory environment. VR applications immerse the user in a computer-generated environment that simulates reality through the use of interactive devices, which send and receive information and are worn as goggles, headsets, gloves, or body suits. In a typical VR format, a user wearing a helmet with a stereoscopic screen views animated images of a simulated environment. The illusion of "being there" (telepresence) is effected by motion sensors that pick up the user's movements and adjust the view on the screen accordingly, usually in real time (the instant the user's movement takes place). Thus, a user can tour a simulated suite of rooms, experiencing changing viewpoints and perspectives that are convincingly related to his own head turnings and steps. Wearing data gloves equipped with force-feedback devices that provide the sensation of touch, the user can even pick up and manipulate objects that he sees in the virtual environment.

The term virtual reality was coined in 1987 by Jaron Lanier, whose research and engineering contributed a number of products to the nascent VR industry. A common thread linking early VR research and technology development in the United States was the role of the federal government, particularly the Department of Defense, the National Science Foundation, and the National Aeronautics and Space Administration (NASA). Projects funded by these agencies and pursued at university-based research laboratories yielded an extensive pool of talented personnel in fields such as computer graphics, simulation, and networked environments and established links between academic, military, and commercial work. The history of this technological development, and the social context in which it took place, is the subject of this article.

Artists, performers, and entertainers have always been interested in techniques for creating imaginative worlds, setting narratives in fictional spaces, and deceiving the senses. Numerous precedents for the suspension of disbelief in an artificial world in artistic and entertainment media preceded virtual reality. Illusionary spaces created by paintings or views have been constructed for residences and public spaces since antiquity, culminating in the monumental panoramas of the 18th and 19th centuries. Panoramas blurred the visual boundaries between the two-dimensional images displaying the main scenes and the three-dimensional spaces from which these were viewed, creating an illusion of immersion in the events depicted. This image tradition stimulated the creation of a series of media, from futuristic theatre designs, stereopticons, and 3-D movies to IMAX movie theatres, over the course of the 20th century to achieve similar effects. For example, the Cinerama widescreen film format, originally called Vitarama when invented for the 1939 New York World's Fair by Fred Waller and Ralph Walker, originated in Waller's studies of vision and depth perception. Waller's work led him to focus on the importance of peripheral vision for immersion in an artificial environment, and his goal was to devise a projection technology that could duplicate the entire human field of vision. The Vitarama process used multiple cameras and projectors and an arc-shaped screen to create the illusion of immersion in the space perceived by a viewer. Though Vitarama was not a commercial hit until the mid-1950s (as Cinerama), the Army Air Corps successfully used the system during World War II for anti-aircraft training under the name Waller Flexible Gunnery Trainer, an example of the link between entertainment technology and military simulation that would later advance the development of virtual reality.

Sensory stimulation was a promising method for creating virtual environments before the use of computers. After the release of a promotional film called This Is Cinerama (1952), the cinematographer Morton Heilig became fascinated with Cinerama and 3-D movies. Like Waller, he studied human sensory signals and illusions, hoping to realize a "cinema of the future." By late 1960, Heilig had built an individual console with a variety of inputs (stereoscopic images, motion chair, audio, temperature changes, odours, and blown air) that he patented in 1962 as the Sensorama Simulator, designed to stimulate the senses of an individual to simulate an actual experience realistically. During the work on Sensorama, he also designed the Telesphere Mask, a head-mounted stereoscopic 3-D TV display that he patented in 1960. Although Heilig was unsuccessful in his efforts to market Sensorama, in the mid-1960s he extended the idea to a multiviewer theatre concept patented as the Experience Theater and a similar system called Thrillerama for the Walt Disney Company.

The seeds for virtual reality were planted in several computing fields during the 1950s and 60s, especially in 3-D interactive computer graphics and vehicle/flight simulation. Beginning in the late 1940s, Project Whirlwind, funded by the U.S. Navy, and its successor project, the SAGE (Semi-Automated Ground Environment) early-warning radar system, funded by the U.S. Air Force, first utilized cathode-ray tube (CRT) displays and input devices such as light pens (originally called light guns). By the time the SAGE system became operational in 1957, air force operators were routinely using these devices to display aircraft positions and manipulate related data.

During the 1950s, the popular cultural image of the computer was that of a calculating machine, an automated electronic brain capable of manipulating data at previously unimaginable speeds. The advent of more affordable second-generation (transistor) and third-generation (integrated circuit) computers emancipated the machines from this narrow view, and in doing so it shifted attention to ways in which computing could augment human potential rather than simply substituting for it in specialized domains conducive to number crunching. In 1960 Joseph Licklider, a professor at the Massachusetts Institute of Technology (MIT) specializing in psychoacoustics, posited a man-computer symbiosis and applied psychological principles to human-computer interactions and interfaces. He argued that a partnership between computers and the human brain would surpass the capabilities of either alone. As founding director of the new Information Processing Techniques Office (IPTO) of the Defense Advanced Research Projects Agency (DARPA), Licklider was able to fund and encourage projects that aligned with his vision of human-computer interaction while also serving priorities for military systems, such as data visualization and command-and-control systems.

Another pioneer was electrical engineer and computer scientist Ivan Sutherland, who began his work in computer graphics at MIT's Lincoln Laboratory (where Whirlwind and SAGE had been developed). In 1963 Sutherland completed Sketchpad, a system for drawing interactively on a CRT display with a light pen and control board. Sutherland paid careful attention to the structure of data representation, which made his system useful for the interactive manipulation of images. In 1964 he was put in charge of IPTO, and from 1968 to 1976 he led the computer graphics program at the University of Utah, one of DARPA's premier research centres. In 1965 Sutherland outlined the characteristics of what he called the "ultimate display" and speculated on how computer imagery could construct plausible and richly articulated virtual worlds. His notion of such a world began with visual representation and sensory input, but it did not end there; he also called for multiple modes of sensory input. DARPA sponsored work during the 1960s on output and input devices aligned with this vision, such as the Sketchpad III system by Timothy Johnson, which presented 3-D views of objects; Larry Roberts's Lincoln Wand, a system for drawing in three dimensions; and Douglas Engelbart's invention of a new input device, the computer mouse.

Within a few years, Sutherland contributed the technological artifact most often identified with virtual reality, the head-mounted 3-D computer display. In 1967 Bell Helicopter (now part of Textron Inc.) carried out tests in which a helicopter pilot wore a head-mounted display (HMD) that showed video from a servo-controlled infrared camera mounted beneath the helicopter. The camera moved with the pilot's head, both augmenting his night vision and providing a level of immersion sufficient for the pilot to equate his field of vision with the images from the camera. This kind of system would later be called augmented reality because it enhanced a human capacity (vision) in the real world. When Sutherland left DARPA for Harvard University in 1966, he began work on a tethered display for computer images. This was an apparatus shaped to fit over the head, with goggles that displayed computer-generated graphical output. Because the display was too heavy to be borne comfortably, it was held in place by a suspension system. Two small CRT displays were mounted in the device, near the wearer's ears, and mirrors reflected the images to his eyes, creating a stereo 3-D visual environment that could be viewed comfortably at a short distance. The HMD also tracked where the wearer was looking so that correct images would be generated for his field of vision. The viewer's immersion in the displayed virtual space was intensified by the visual isolation of the HMD, yet other senses were not isolated to the same degree and the wearer could continue to walk around.

An important area of application for VR systems has always been training for real-life activities. The appeal of simulations is that they can provide training equal or nearly equal to practice with real systems, but at reduced cost and with greater safety. This is particularly the case for military training, and the first significant application of commercial simulators was pilot training during World War II. Flight simulators rely on visual and motion feedback to augment the sensation of flying while seated in a closed mechanical system on the ground. The Link Company, founded by former piano maker Edwin Link, began to construct the first prototype Link Trainers during the late 1920s, eventually settling on the "blue box" design acquired by the Army Air Corps in 1934. The first systems used motion feedback to increase familiarity with flight controls. Pilots trained by sitting in a simulated cockpit, which could be moved hydraulically in response to their actions. Later versions added a cyclorama scene painted on a wall outside the simulator to provide limited visual feedback. Not until the Celestial Navigation Trainer, commissioned by the British government in World War II, were projected film strips used in Link Trainers; still, these systems could only project what had been filmed along a correct flight or landing path, not generate new imagery based on a trainee's actions. By the 1960s, flight trainers were using film and closed-circuit television to enhance the visual experience of flying. The images could be distorted to generate flight paths that diverted slightly from what had been filmed; sometimes multiple cameras were used to provide different perspectives, or movable cameras were mounted over scale models to depict airports for simulated landings.

Inspired by the controls in the Link flight trainer, Sutherland suggested that such displays include multiple sensory outputs, force-feedback joysticks, muscle sensors, and eye trackers; a user would be fully immersed in the displayed environment and "fly through concepts which never before had any visual representation." In 1968 he moved to the University of Utah, where he and his colleague David Evans founded Evans & Sutherland Computer Corporation. The new company initially focused on the development of graphics applications, such as scene generators for flight simulator systems. These systems could render scenes at roughly 20 frames per second in the early 1970s, about the minimum frame rate for effective flight training. General Electric Company constructed the first flight simulators with built-in, real-time computer image generation, first for the Apollo program in the 1960s, then for the U.S. Navy in 1972. By the mid-1970s, these systems were capable of generating simple 3-D models with a few hundred polygon faces; they utilized raster graphics (collections of dots) and could model solid objects with textures to enhance the sense of realism. By the late 1970s, military flight simulators were also incorporating head-mounted displays, such as McDonnell Douglas Corporation's VITAL helmet, primarily because they required much less space than a projected display. A sophisticated head tracker in the HMD followed a pilot's eye movements to match computer-generated images (CGI) with his view and handling of the flight controls.

Advances in flight simulators, human-computer interfaces, and augmented reality systems pointed to the possibility of immersive, real-time control systems, not only for research or training but also for improved performance. Since the 1960s, electrical engineer Thomas Furness had been working on visual displays and instrumentation in cockpits for the U.S. Air Force. By the late 1970s, he had begun development of virtual interfaces for flight control, and in 1982 he demonstrated the Visually Coupled Airborne Systems Simulator, better known as the Darth Vader helmet, for the armoured archvillain of the popular movie Star Wars. From 1986 to 1989, Furness directed the air force's Super Cockpit program. The essential idea of this project was that the capacity of human pilots to handle spatial information depended on these data being "portrayed in a way that takes advantage of the human's natural perceptual mechanisms." Applying the HMD to this goal, Furness designed a system that projected information such as computer-generated 3-D maps, forward-looking infrared and radar imagery, and avionics data into an immersive, 3-D virtual space that the pilot could view and hear in real time. The helmet's tracking system, voice-actuated controls, and sensors enabled the pilot to control the aircraft with gestures, utterances, and eye movements, translating immersion in a data-filled virtual space into control modalities. The more natural perceptual interface also reduced the complexity and number of controls in the cockpit. The Super Cockpit thus realized Licklider's vision of man-machine symbiosis by creating a virtual environment in which pilots flew through data. Beginning in 1987, British Aerospace (now part of BAE Systems) also used the HMD as the basis for a similar training simulator, known as the Virtual Cockpit, that incorporated head, hand, and eye tracking, as well as speech recognition.

Sutherland and Furness brought the notion of simulator technology from real-world imagery to virtual worlds that represented abstract models and data. In these systems, visual verisimilitude was less important than immersion and feedback that engaged all the senses in a meaningful way. This approach had important implications for medical and scientific research. Project GROPE, started in 1967 at the University of North Carolina by Frederick Brooks, was particularly noteworthy for the advancements it made possible in the study of molecular biology. Brooks sought to enhance perception and comprehension of the interaction of a drug molecule with its receptor site on a protein by creating a "window into the virtual world" of molecular docking forces. He combined wire-frame imagery to represent molecules and physical forces with haptic (tactile) feedback mediated through special hand-grip devices to arrange the virtual molecules into a minimum binding energy configuration. Scientists using this system felt their way around the represented forces like flight trainees learning the instruments in a Link cockpit, grasping the physical situations depicted in the virtual world and hypothesizing new drugs based on their manipulations. During the 1990s, Brooks's laboratory extended the use of virtual reality to radiology and ultrasound imaging.

Virtual reality was extended to surgery through the technology of telepresence, the use of robotic devices controlled remotely through mediated sensory feedback to perform a task. The foundation for virtual surgery was the expansion during the 1970s and '80s of microsurgery and other less invasive forms of surgery. By the late 1980s, microcameras attached to endoscopic devices relayed images that could be shared among a group of surgeons looking at one or more monitors, often in diverse locations. In the early 1990s, a DARPA initiative funded research to develop telepresence workstations for surgical procedures. This was Sutherland's window into a virtual world, with the added dimension of a level of sensory feedback that could match a surgeon's fine motor control and hand-eye coordination. The first telesurgery equipment was developed at SRI International in 1993; the first robotic surgery was performed in 1998 at the Broussais Hospital in Paris.

As virtual worlds became more detailed and immersive, people began to spend time in these spaces for entertainment, aesthetic inspiration, and socializing. Research that conceived of virtual places as fantasy spaces, focusing on the activity of the subject rather than replication of some real environment, was particularly conducive to entertainment. Beginning in 1969, Myron Krueger of the University of Wisconsin created a series of projects on the nature of human creativity in virtual environments, which he later called "artificial reality." Much of Krueger's work, especially his VIDEOPLACE system, processed interactions between a participant's digitized image and computer-generated graphical objects. VIDEOPLACE could analyze and process the user's actions in the real world and translate them into interactions with the system's virtual objects in various preprogrammed ways. Different modes of interaction with names like "finger painting" and "digital drawing" suggest the aesthetic dimension of this system. VIDEOPLACE differed in several aspects from training and research simulations. In particular, the system reversed the emphasis from the user perceiving the computer's generated world to the computer perceiving the user's actions and converting these actions into compositions of objects and space within the virtual world. With the emphasis shifted to responsiveness and interaction, Krueger found that fidelity of representation became less important than the interactions between participants and the rapidity of response to images or other forms of sensory input.

The ability to manipulate virtual objects and not just see them is central to the presentation of compelling virtual worlds, hence the iconic significance of the data glove in the emergence of VR in commerce and popular culture. Data gloves relay a user's hand and finger movements to a VR system, which then translates the wearer's gestures into manipulations of virtual objects. The first data glove, developed in 1977 at the University of Illinois for a project funded by the National Endowment for the Arts, was called the Sayre Glove after one of the team members. In 1982 Thomas Zimmerman invented the first optical glove, and in 1983 Gary Grimes at Bell Laboratories constructed the Digital Data Entry Glove, the first glove with sufficient flexibility and tactile and inertial sensors to monitor hand position for a variety of applications, such as providing an alternative to keyboard input for data entry.

Zimmerman's glove would have the greatest impact. He had been thinking for years about constructing an interface device for musicians based on the common practice of playing "air guitar"; in particular, a glove capable of tracking hand and finger movements could be used to control instruments such as electronic synthesizers. He patented an optical flex-sensing device (which used light-conducting fibres) in 1982, one year after Grimes patented his glove-based computer interface device. By then, Zimmerman was working at the Atari Research Center in Sunnyvale, California, along with Scott Fisher, Brenda Laurel, and other VR researchers who would be active during the 1980s and beyond. Jaron Lanier, another researcher at Atari, shared Zimmerman's interest in electronic music. Beginning in 1983, they worked together on improving the design of the data glove, and in 1985 they left Atari to start up VPL Research; its first commercial product was the VPL DataGlove.

By 1985, Fisher had also left Atari to join NASA's Ames Research Center at Moffett Field, California, as founding director of the Virtual Environment Workstation (VIEW) project. The VIEW project put together a package of objectives that summarized previous work on artificial environments, ranging from creation of multisensory and immersive virtual environment workstations to telepresence and teleoperation applications. Influenced by a range of prior projects that included Sensorama, flight simulators, and arcade rides, and surprised by the expense of the air force's Darth Vader helmets, Fisher's group focused on building low-cost, personal simulation environments. While the objective of NASA was to develop telerobotics for automated space stations in future planetary exploration, the group also considered the workstation's use for entertainment, scientific, and educational purposes. The VIEW workstation, called the Virtual Visual Environment Display when completed in 1985, established a standard suite of VR technology that included a stereoscopic head-coupled display, head tracker, speech recognition, computer-generated imagery, data glove, and 3-D audio technology.

The VPL DataGlove was brought to market in 1987, and in October of that year it appeared on the cover of Scientific American. VPL also spawned a full-body, motion-tracking system called the DataSuit, a head-mounted display called the EyePhone, and a shared VR system for two people called RB2 (Reality Built for Two). VPL declared June 7, 1989, "Virtual Reality Day." On that day, both VPL and Autodesk publicly demonstrated the first commercial VR systems. The Autodesk VR CAD (computer-aided design) system was based on VPL's RB2 technology but was scaled down for operation on personal computers. The marketing splash introduced Lanier's new term "virtual reality" as a realization of "cyberspace," a concept introduced in science fiction writer William Gibson's Neuromancer in 1984. Lanier, the dreadlocked chief executive officer of VPL, became the public celebrity of the new VR industry, while announcements by Autodesk and VPL let loose a torrent of enthusiasm, speculation, and marketing hype. Soon it seemed that VR was everywhere, from the Mattel/Nintendo PowerGlove (1989) to the HMD in the movie The Lawnmower Man (1992), the Nintendo Virtual Boy game system (1995), and the television series VR5 (1995).

Numerous VR companies were founded in the early 1990s, most of them in Silicon Valley, but by mid-decade most of the energy unleashed by the VPL and Autodesk marketing campaigns had dissipated. The VR configuration that took shape over a span of projects leading from Sutherland to Lanier (HMD, data gloves, multimodal sensory input, and so forth) failed to have a broad appeal as quickly as the enthusiasts had predicted. Instead, the most visible and successfully marketed products were location-based entertainment systems rather than personal VR systems. These VR arcades and simulators, designed by teams from the game, movie, simulation, and theme park industries, combined the attributes of video games, amusement park rides, and highly immersive storytelling. Perhaps the most important of the early projects was Disneyland's Star Tours, an immersive flight simulator ride based on the Star Wars movie series and designed in collaboration with producer George Lucas's Industrial Light & Magic. Disney had long built themed rides utilizing advanced technology, such as animatronic characters, notably in Pirates of the Caribbean, an attraction originally installed at Disneyland in 1967. Star Tours utilized simulated motion and special-effects technology, mixing techniques learned from Hollywood films and military flight simulators with strong story lines and architectural elements that shaped the viewers' experience from the moment they entered the waiting line for the attraction. After the opening of Star Tours in 1987, Walt Disney Imagineering embarked on a series of projects to apply interactive technology and immersive environments to ride systems, including 3-D motion-picture photography used in Honey, I Shrunk the Audience (1995), the DisneyQuest indoor interactive theme park (1998), and the multiplayer-gaming virtual world, Toontown Online (2001).

In 1990, Virtual World Entertainment opened the first BattleTech emporium in Chicago. Modeled loosely on the U.S. military's SIMNET system of networked training simulators, BattleTech centres put players in individual pods, essentially cockpits that served as immersive, interactive consoles for both narrative and competitive game experiences. All the vehicles represented in the game were controlled by other players, each in his own pod and linked to a high-speed network set up for a simultaneous multiplayer experience. The players' immersion in the virtual world of the competition resulted from a combination of elements, including a carefully constructed story line, the physical architecture of the arcade space and pod, and the networked virtual environment. During the 1990s, BattleTech centres were constructed in other cities around the world, and the BattleTech franchise also expanded to home electronic games, books, toys, and television.

While the Disney and Virtual World Entertainment projects were the best-known instances of location-based VR entertainments, other important projects included Iwerks Entertainment's Turbo Tour and Turboride 3-D motion simulator theatres, first installed in San Francisco in 1992; motion-picture producer Steven Spielberg's Gameworks arcades, the first of which opened in 1997 as a joint project of Universal Studios, Sega Corporation, and Dreamworks SKG; many individual VR arcade rides, beginning with Sega's R360 gyroscope flight simulator, released in 1991; and, finally, Visions of Reality's VR arcades, the spectacular failure of which contributed to the bursting of the investment bubble for VR ventures in the mid-1990s.

Here is the original post:

Virtual reality | computer science | Britannica.com


What is virtual reality? – A simple introduction

by Chris Woodford. Last updated: March 3, 2017.

You'll probably never go to Mars, swim with dolphins, run an Olympic 100 meters, or sing onstage with the Rolling Stones. But if virtual reality ever lives up to its promise, you might be able to do all these things (and many more) without even leaving your home. Unlike real reality (the actual world in which we live), virtual reality means simulating bits of our world (or completely imaginary worlds) using high-performance computers and sensory equipment, like headsets and gloves. Apart from games and entertainment, it's long been used for training airline pilots and surgeons and for helping scientists to figure out complex problems such as the structure of protein molecules. How does it work? Let's take a closer look!

Photo: Virtual reality means blocking yourself off from the real world and substituting a computer-generated alternative. Often, it involves wearing a wraparound headset called a head-mounted display, clamping stereo headphones over your ears, and touching or feeling your way around your imaginary home using datagloves (gloves with built-in sensors). Picture by Wade Sisler courtesy of NASA Ames Research Center.

Virtual reality (VR) means experiencing things through our computers that don't really exist. From that simple definition, the idea doesn't sound especially new. When you look at an amazing Canaletto painting, for example, you're experiencing the sights and sounds of Italy as it was about 250 years ago, so that's a kind of virtual reality. In the same way, if you listen to ambient instrumental or classical music with your eyes closed, and start dreaming about things, isn't that an example of virtual reality: an experience of a world that doesn't really exist? What about losing yourself in a book or a movie? Surely that's a kind of virtual reality?

If we're going to understand why books, movies, paintings, and pieces of music aren't the same thing as virtual reality, we need to define VR fairly clearly. For the purposes of this simple, introductory article, I'm going to define it as a believable, interactive, computer-created world that you can explore so convincingly that you feel you are really there. Putting it another way, virtual reality is essentially: believable, interactive, computer-created, explorable, and immersive.

Artwork: This Canaletto painting of Venice, Italy is believable and in some sense explorable (you can move your eyes around and think about different parts of the picture), but it's not interactive, computer-generated, or immersive, so it doesn't meet our definition of virtual reality: looking at this picture is not like being there. There's nothing to stop us making an explorable equivalent in VR, but we need CGI, not oil paints, to do it. Picture courtesy of Wikimedia Commons.

We can see from this why reading a book, looking at a painting, listening to a classical symphony, or watching a movie don't qualify as virtual reality. All of them offer partial glimpses of another reality, but none are interactive, explorable, or fully believable. If you're sitting in a movie theater looking at a giant picture of Mars on the screen, and you suddenly turn your head too far, you'll see and remember that you're actually on Earth and the illusion will disappear. If you see something interesting on the screen, you can't reach out and touch it or walk towards it; again, the illusion will simply disappear. So these forms of entertainment are essentially passive: however plausible they might be, they don't actively engage you in any way.

VR is quite different. It makes you think you are actually living inside a completely believable virtual world (one in which, to use the technical jargon, you are partly or fully immersed). It is two-way interactive: as you respond to what you see, what you see responds to you. If you turn your head around, what you see or hear in VR changes to match your new perspective.

"Virtual reality" has often been used as a marketing buzzword for compelling, interactive video games or even 3D movies and television programs, none of which really count as VR because they don't immerse you either fully or partially in a virtual world. Search for "virtual reality" in your cellphone app store and you'll find hundreds of hits, even though a tiny cellphone screen could never get anywhere near producing the convincing experience of VR. Nevertheless, things like interactive games and computer simulations would certainly meet parts of our definition up above, so there's clearly more than one approach to building virtual worlds, and more than one flavor of virtual reality. Here are a few of the bigger variations:

For the complete VR experience, we need three things. First, a plausible and richly detailed virtual world to explore; a computer model or simulation, in other words. Second, a powerful computer that can detect what we're doing and adjust our experience accordingly, in real time (so what we see or hear changes as fast as we move, just like in real reality). Third, hardware linked to the computer that fully immerses us in the virtual world as we roam around. Usually, we'd need to put on what's called a head-mounted display (HMD) with two screens and stereo sound, and wear one or more sensory gloves. Alternatively, we could move around inside a room, fitted out with surround-sound loudspeakers, onto which changing images are projected from outside. We'll explore VR equipment in more detail in a moment.
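To see how those three pieces interact, here is a minimal Python sketch of the sense-redraw-show cycle a VR system repeats for every frame. All the names here (StubTracker, read_head_pose, and so on) are hypothetical placeholders rather than any real VR API.

```python
class StubTracker:
    """Stand-in for head-tracking hardware; a real tracker would return
    a fresh pose from accelerometers and position sensors each frame."""
    def read_head_pose(self):
        return {"yaw": 0.0, "pitch": 0.0, "position": (0.0, 1.7, 0.0)}

def vr_frame_loop(render_world, tracker, frames=3):
    """Run the core VR cycle: sense the user, redraw the model, show it."""
    for _ in range(frames):               # a real loop runs until the app quits
        pose = tracker.read_head_pose()   # 1. detect what the user is doing
        image = render_world(pose)        # 2. redraw the world from that viewpoint
        print("presenting:", image)       # 3. a real system pushes this to the HMD

vr_frame_loop(lambda pose: f"view at yaw={pose['yaw']}", StubTracker())
```

The whole illusion rests on this loop finishing fast enough that the new image reaches the display before the user's head has moved on, which is the latency problem discussed earlier on this page.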

A highly realistic flight simulator on a home PC might qualify as nonimmersive virtual reality, especially if it uses a very wide screen, with headphones or surround sound, and a realistic joystick and other controls. Not everyone wants or needs to be fully immersed in an alternative reality. An architect might build a detailed 3D model of a new building to show to clients that can be explored on a desktop computer by moving a mouse. Most people would classify that as a kind of virtual reality, even if it doesn't fully immerse you. In the same way, computer archaeologists often create engaging 3D reconstructions of long-lost settlements that you can move around and explore. They don't take you back hundreds or thousands of years or create the sounds, smells, and tastes of prehistory, but they give a much richer experience than a few pastel drawings or even an animated movie.

What about "virtual world" games like Second Life and Minecraft? Do they count as virtual reality? Although they meet the first four of our criteria (believable, interactive, computer-created and explorable), they don't really meet the fifth: they don't fully immerse you. But one thing they do offer that cutting-edge VR typically doesn't is collaboration: the idea of sharing an experience in a virtual world with other people, often in real time or something very close to it. Collaboration and sharing are likely to become increasingly important features of VR in future.

Virtual reality was one of the hottest, fastest-growing technologies in the late 1980s and early 1990s, but the rapid rise of the World Wide Web largely killed off interest after that. Even though computer scientists developed a way of building virtual worlds on the Web (using a technology analogous to HTML called Virtual Reality Modeling Language, VRML), ordinary people were much more interested in the way the Web gave them new ways to access real reality: new ways to find and publish information, shop, and share thoughts, ideas, and experiences with friends through social media. With Facebook's growing interest in the technology, the future of VR seems likely to be both Web-based and collaborative.

Photo: Augmented reality: A heads-up display, like this one used by the US Air Force, superimposes useful, computer-based information on top of the things you see with your own eyes. Picture by Major Chad E. Gibson courtesy of US Air Force.

Mobile devices like smartphones and tablets have put what used to be supercomputer power in our hands and pockets. If we're wandering round the world, maybe visiting a heritage site like the pyramids or a fascinating foreign city we've never been to before, what we want is typically not virtual reality but an enhanced experience of the exciting reality we can see in front of us. That's spawned the idea of augmented reality (AR), where, for example, you point your smartphone at a landmark or a striking building and interesting information about it pops up automatically. Augmented reality is all about connecting the real world we experience to the vast virtual world of information that we've collectively created on the Web. Neither of these worlds is virtual reality, but the idea of exploring and navigating the two simultaneously does, nevertheless, have things in common with virtual reality. For example, how can a mobile device figure out its precise location in the world? How do the things you see on the screen of your tablet change as you wander round a city? Technically, these problems are similar to the ones developers of VR systems have to solve, so there are close links between AR and VR.

Close your eyes and think of virtual reality and you probably picture something like our top photo: a geek wearing a wraparound headset (HMD) and datagloves, wired into a powerful workstation or supercomputer. What differentiates VR from an ordinary computer experience (using your PC to write an essay or play games) is the nature of the input and output. Where an ordinary computer uses things like a keyboard, mouse, or (more exotically) speech recognition for input, VR uses sensors that detect how your body is moving. And where a PC displays output on a screen (or a printer), VR uses two screens (one for each eye), stereo or surround-sound speakers, and maybe some forms of haptic (touch and body perception) feedback as well. Let's take a quick tour through some of the more common VR input and output devices.

Photo: The view from inside. A typical HMD has two tiny screens that show different pictures to each of your eyes, so your brain produces a combined 3D (stereoscopic) image. Picture courtesy of US Air Force.

There are two big differences between VR and looking at an ordinary computer screen: in VR, you see a 3D image that changes smoothly, in real time, as you move your head. That's made possible by wearing a head-mounted display, which looks like a giant motorbike helmet or welding visor but consists of two small screens (one in front of each eye), a blackout blindfold that blocks out all other light (eliminating distractions from the real world), and stereo headphones. The two screens display slightly different, stereoscopic images, creating a realistic 3D perspective of the virtual world. HMDs usually also have built-in accelerometers or position sensors so they can detect exactly how your head and body are moving (both position and orientation, which way they're tilting or pointing) and adjust the picture accordingly. The trouble with HMDs is that they're quite heavy, so they can be tiring to wear for long periods; some of the really heavy ones are even mounted on stands with counterweights. But HMDs don't have to be so elaborate and sophisticated: at the opposite end of the spectrum, Google has developed an affordable, low-cost pair of cardboard goggles with built-in lenses that convert an ordinary smartphone into a crude HMD.
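As a rough illustration of how those two slightly different pictures come about, here is a minimal Python sketch that offsets a single head position into two eye positions, one per screen. It assumes NumPy, and the 64 mm interpupillary distance is just a typical adult average used for illustration.

```python
import numpy as np

def stereo_eye_positions(head_pos, right_dir, ipd_m=0.064):
    """Shift the head position half the interpupillary distance (IPD)
    along the head's local 'right' axis to get each eye's viewpoint."""
    half = 0.5 * ipd_m * np.asarray(right_dir, dtype=float)
    head_pos = np.asarray(head_pos, dtype=float)
    return head_pos - half, head_pos + half

# A head 1.7 m off the floor whose local 'right' axis points along +x:
left_eye, right_eye = stereo_eye_positions([0.0, 1.7, 0.0], [1.0, 0.0, 0.0])
print(left_eye, right_eye)  # [-0.032  1.7  0.] and [0.032  1.7  0.]
```

Rendering the scene once from each of these viewpoints yields the stereoscopic pair described above; the brain fuses them into a single 3D image.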

An alternative to putting on an HMD is to sit or stand inside a room onto whose walls changing images are projected from outside. As you move in the room, the images change accordingly. Flight simulators use this technique, often with images of landscapes, cities, and airport approaches projected onto large screens positioned just outside a mockup of a cockpit. A famous 1990s VR experiment called CAVE (Cave Automatic Virtual Environment), developed at the University of Illinois by Thomas DeFanti, also worked this way. People moved around inside a large cube-shaped room with semi-transparent walls onto which stereo images were back-projected from outside. Although they didn't have to wear HMDs, they did need stereo glasses to experience full 3D perception.

See something amazing and your natural instinct is to reach out and touch it; even babies do that. So giving people the ability to handle virtual objects has always been a big part of VR. Usually, this is done using datagloves, which are ordinary gloves with sensors wired to the outside to detect hand and finger motions. One technical method of doing this uses fiber-optic cables stretched the length of each finger. Each cable has tiny cuts in it so, as you flex your fingers back and forth, more or less light escapes. A photocell at the end of the cable measures how much light reaches it and the computer uses this to figure out exactly what your fingers are doing. Other gloves use strain gauges, piezoelectric sensors, or electromechanical devices (such as potentiometers) to measure finger movements.

Photos: Left/above: EXOS datagloves produced by NASA in the 1990s had very intricate external sensors to detect finger movements with high precision. Picture courtesy of NASA Marshall Space Flight Center (NASA-MSFC). Right/below: This more elaborate EXOS glove had separate sensors on each finger segment, wired up to a single ribbon cable connected to the main VR computer. Picture by Wade Sisler courtesy of NASA Ames Research Center.

Artwork: How a fiber-optic dataglove works. Each finger has a fiber-optic cable stretched along its length. (1) At one end of the finger, a light-emitting diode (LED) shines light into the cable. (2) Light rays shoot down the cable, bouncing off the sides. (3) There are tiny abrasions in the top of each fiber through which some of the rays escape. The more you flex your fingers, the more light escapes. (4) The amount of light arriving at a photocell at the end gives a rough indication of how much you’re flexing your finger. (5) A cable carries this signal off to the VR computer. This is a simplified version of the kind of dataglove VPL patented in 1992, and you’ll find the idea described in much more detail in US Patent 5,097,252.
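Following steps 4 and 5 of the artwork above, here is a minimal Python sketch of how the computer might turn a photocell reading into a finger-flex estimate. The calibration numbers are hypothetical; a real glove would be calibrated per user and per finger.

```python
def flex_fraction(photocell_reading, straight_level=1.00, full_flex_level=0.40):
    """Map the measured light level onto 0.0 (finger straight) .. 1.0
    (fully bent). More flex lets more light escape through the cuts in
    the fiber, so less light reaches the photocell."""
    span = straight_level - full_flex_level
    fraction = (straight_level - photocell_reading) / span
    return min(1.0, max(0.0, fraction))  # clamp noisy readings into range

print(flex_fraction(0.70))  # a reading midway between the extremes -> 0.5
```

The VR computer would sample a value like this for every finger on every frame and use the results to pose the virtual hand.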

Even simpler than a dataglove, a wand is a stick you can use to touch, point to, or otherwise interact with a virtual world. It has position or motion sensors (such as accelerometers) built in, along with mouse-like buttons or scroll wheels. Originally, wands were clumsily wired into the main VR computer; increasingly, they're wireless.

Photo: A typical handheld virtual reality controller (complete with elastic bands), looking not so different from a video game controller. Photo courtesy of NASA Ames Research Center.

VR has always suffered from the perception that it's little more than a glorified arcade game, literally a "dreamy escape" from reality. In that sense, "virtual reality" can be an unhelpful misnomer; "alternative reality," "artificial reality," or "computer simulation" might be better terms. The key thing to remember about VR is that it really isn't a fad or fantasy waiting in the wings to whistle people off to alternative worlds; it's a hard-edged practical technology that's been routinely used by scientists, doctors, dentists, engineers, architects, archaeologists, and the military for about the last 30 years. What sorts of things can we do with it?

Photo: Flight training is a classic application of virtual reality, though it doesn’t use HMDs or datagloves. Instead, you sit in a pretend cockpit with changing images projected onto giant screens to give an impression of the view you’d see from your plane. The cockpit is a meticulous replica of the one in a real airplane with exactly the same instruments and controls. Photo by Javier Garcia courtesy of US Air Force.

Difficult and dangerous jobs are hard to train for. How can you safely practice taking a trip to space, landing a jumbo jet, making a parachute jump, or carrying out brain surgery? All these things are obvious candidates for virtual reality applications. As we've seen already, flight cockpit simulators were among the earliest VR applications; they can trace their history back to mechanical simulators developed by Edwin Link in the 1920s. Just like pilots, surgeons are now routinely trained using VR. In a 2008 study of 735 surgical trainees from 28 different countries, 68 percent said the opportunity to train with VR was "good" or "excellent" for them, and only 2 percent rated it useless or unsuitable.

Anything that happens at the atomic or molecular scale is effectively invisible unless you’re prepared to sit with your eyes glued to an electron microscope. But suppose you want to design new materials or drugs and you want to experiment with the molecular equivalent of LEGO. That’s another obvious application for virtual reality. Instead of wrestling with numbers, equations, or two-dimensional drawings of molecular structures, you can snap complex molecules together right before your eyes. This kind of work began in the 1960s at the University of North Carolina at Chapel Hill, where Frederick Brooks launched GROPE, a project to develop a VR system for exploring the interactions between protein molecules and drugs.

Photo: If you’re heading to Mars, a trip in virtual reality could help you visualize what you’ll find when you get there. Picture courtesy of NASA Ames Research Center.

Apart from its use in things like surgical training and drug design, virtual reality also makes possible telemedicine (monitoring, examining, or operating on patients remotely). A logical extension of this has a surgeon in one location hooked up to a virtual reality control panel and a robot in another location (maybe an entire continent away) wielding the knife. The best-known example of this is the da Vinci surgical robot, first cleared for use in 2000, of which several thousand have now been installed in hospitals worldwide. Introduce collaboration and there’s the possibility of a whole group of the world’s best surgeons working together on a particularly difficult operation: a kind of WikiSurgery, if you like!

Architects used to build models out of card and paper; now they’re much more likely to build virtual reality computer models you can walk through and explore. By the same token, it’s generally much cheaper to design cars, airplanes, and other complex, expensive vehicles on a computer screen than to model them in wood, plastic, or other real-world materials. This is an area where virtual reality overlaps with computer modeling: instead of simply making an immersive 3D visual model for people to inspect and explore, you’re creating a mathematical model that can be tested for its aerodynamic, safety, or other qualities.

From flight simulators to race-car games, VR has long hovered on the edges of the gaming world, never quite good enough to revolutionize the experience of gamers, largely due to computers being too slow, displays lacking full 3D, and the lack of decent HMDs and datagloves. All that may be about to change with the development of affordable new peripherals like the Oculus Rift.

Like any technology, virtual reality has both good and bad points. How many of us would rather have a complex brain operation carried out by a surgeon trained in VR, compared to someone who has merely read books or watched over the shoulders of their peers? How many of us would rather practice our driving on a car simulator before we set foot on the road? Or sit back and relax in a jumbo jet, confident in the knowledge that our pilot practiced landing at this very airport, dozens of times, in a VR simulator before she ever set foot in a real cockpit?

Critics always raise the risk that people may be seduced by alternative realities to the point of neglecting their real-world lives, but that criticism has been leveled at everything from radio and TV to computer games and the Internet. And, at some point, it becomes a philosophical and ethical question: What is real anyway? And who is to say which is the better way to pass your time? Like many technologies, VR takes little or nothing away from the real world: you don’t have to use it if you don’t want to.

The promise of VR has loomed large over the world of computing for at least the last quarter century, but remains largely unfulfilled. While science, architecture, medicine, and the military all rely on VR technology in different ways, mainstream adoption remains virtually nonexistent; we’re not routinely using VR the way we use computers, smartphones, or the Internet. But Facebook’s 2014 acquisition of VR company Oculus greatly renewed interest in the area and could change everything. Facebook’s basic idea is to let people share things with their friends using the Internet and the Web. What if you could share not simply a photo or a link to a Web article but an entire experience? Instead of sharing photos of your wedding with your Facebook friends, what if you could make it possible for people to attend your wedding remotely, in virtual reality, in perpetuity? What if we could record historical events in such a way that people could experience them again and again, forever more? These are the sorts of social, collaborative virtual reality sharing that (we might guess) Facebook is thinking about exploring right now. If so, the future of virtual reality looks very bright indeed!

So much for the future, but what of the past? Virtual reality has a long and very rich history. Here are a few of the more interesting highlights…

Artwork: The first virtual reality machine? Morton Heilig’s 1962 Sensorama. Picture courtesy US Patent and Trademark Office.

Excerpt from:

What is virtual reality? – A simple introduction

(VIDEO) Cleburne ISD board experiences virtual reality …

With construction progressing steadily on the new Cleburne High School project, Corgan Architects are finding new ways to update the district on its progress.

Using virtual reality, the Cleburne ISD board of trustees walked the halls of the new school during Monday night’s special called work session.

As the plans for the school are created and updated, Corgan Vice President and Project Manager Doug Koehne said they will update the VR experience so others can see what the school will look like and how the space will be used.

Local community members, parents and students were able to participate in the VR experience during the second annual CHS Career & Technical Education showcase on Feb. 15, and Koehne said they received some great feedback.

After experiencing the virtual reality program, board Vice President John Finnell said the artist renderings looked great and he can’t wait until the school opens in December 2019.

While interviewing district staff before the project began, Koehne said, some of the things that were important to include in the schematics were circulation throughout the building and natural light.

“There is more than one way to get from one area of the school to the other,” he said. “There will also be lots of windows for more natural light to come in.”

Also throughout the school, there will be opportunities for students to showcase their work in their various classes, he said.

CISD Superintendent Kyle Heath said the school is not only for the students to enjoy but also for the teachers and staff.

Other business

With half the school year already completed, district officials updated the board on several goals and projects they hope to accomplish over the next few years.

CISD CTE Director Mark McClure gave trustees a continuous improvement plan for his department.

He said his goal is to develop more business relationships with local industry partners to provide students internship opportunities when they take CTE courses so they can succeed in the workplace after graduating.

McClure has visited other school districts in the area to see their CTE programs and how they partner with local industries.

Whether the students want to go into construction, public safety or the health sciences field, he said it’s important for them to have internship opportunities, paid or unpaid.

In other news, two other district officials updated trustees on the progress theyve made in their respective departments.

CISD Assistant Superintendent of Curriculum and Instruction Andrea Hensley gave trustees an update on instruction within the district and on the goal of implementing a district-wide curriculum to ensure students are learning what they are supposed to learn in the classroom in every grade.

CISD District Operations Executive Director Barry Hipp gave trustees an update on the department’s long-range facilities plan, which was established in 2012, to ensure they kept up with maintenance needs throughout the district.

Since the plan was created, Hipp said they’ve scheduled maintenance in many areas, including HVAC systems, roofing, painting, flooring and parking lots.

To view a video of Finnell participating in the VR experience, visit http://www.cleburnetimesreview.com. For more information and updates on the CHS project, visit http://www.c-isd.com and click on 2016 Bond.

Administrators give department updates

Go here to read the rest:

(VIDEO) Cleburne ISD board experiences virtual reality …

Welsh police force is first in UK to use virtual reality to …

Police in the United Kingdom have started taking advantage of virtual reality technology to train officers. Gwent Police, located in Wales, recently launched the new VR training system, becoming the first police force in the U.K. to do so.

The technology makes it possible to train officers to deal with situations they may encounter on the streets, and to test how they react to various scenarios. Such reactions are difficult to gauge under routine training conditions, but they can be replicated, or replicated as closely as possible, using immersive VR.

The scenario used for training involves a 280-degree VR scene in which the officer moves an avatar around, interacts with other characters, uses handcuffs, carries out arrests, and enters properties in a branching narrative.

“[Virtual reality] provides the ability of a safe learning environment, which promotes open conversations about opportunities for options for action, investigation and safeguarding,” Superintendent Vicki Townsend told Digital Trends. “Often within policing, there is no right or wrong answer to how a situation is managed. It’s about understanding what you would do, the power and legislation you utilize to take that action, and why you have done it. The scenarios provide the opportunity as a group to maximize this learning by focusing on the decision-making model, and allows the development of officers from peers with more or different experiences.”

The use of virtual reality as a training technique is something that has already been explored by military medics, astronauts, surgeons, and a range of other professions where it’s important to get hands-on experience. VR enables them to test skills in a safe environment, where the chance of physical risk (to themselves or others) is lessened.

As VR technology matures further, more and more sectors and professions will likely adapt these tools to their own purposes and requirements.

“We are currently delivering the training as part of the force training days to frontline officers,” Townsend said. “Forty officers get an input [each] week. This started in January and is due to finish in May. This is the first scenario that we built. We have planned to build 10 scenarios… We are also hoping to build multi-agency-based scenarios.”

Continued here:

Welsh police force is first in UK to use virtual reality to …

How and why our experiments with virtual reality motion made …

Experiments with VR motion controllers show that improving immersion increases the risk of VR sickness, and that the ill effects are a varied and complex matter.

One of the joys of working in the R&D Labs at Tapptic is the excuse to spend a week playing with new gadgets, but all good things can turn sour. And our experiments testing new hand-held motion controller systems and pushing the boundaries of virtual reality motion to the limits aren’t something we’d recommend, because they caused some of our human guinea pigs severe discomfort and nausea.

This article will describe our experiments, explain how they made us feel sick, and show how we tried to reduce nausea and other ill effects. It will also outline our subsequent analysis of VR sickness syndrome and our conclusions that both the causes and the symptoms of VR sickness are more complex, profound and varied than many VR motion studies suggest.

While wandering around the Electronic Entertainment Expo in Los Angeles last year, a motion controller system caught our attention. People were testing a VR game called Sprint Vector, clutching special handheld controllers that enabled them to run and jump in a VR simulation by swinging their arms back and forth, like a soldier on a speed march, and throwing both arms up in the air to make their avatar jump while the player’s real legs remain stationary.

For those who haven’t played a VR game before, the usual method of in-game locomotion is teleportation, where the player looks and points with the controller toward where they want to go and presses a button to move there. Teleportation came about as a way to avoid motion sickness, commonly experienced when using a joystick or keyboard directional buttons in VR games, but this lacks the immersive nature of being able to actually walk or run and to move forward while looking right or left in your virtual world.
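Under the hood, teleportation is usually little more than intersecting a pointing ray with the floor and, on a button press, snapping the player to that spot. Here is a minimal Python sketch of that idea; all names and the 10-meter range limit are assumed for illustration:

```python
import math

def teleport_target(controller_pos, controller_dir, max_distance=10.0):
    """Intersect the controller's pointing ray with the floor plane (y == 0).
    Returns the aim point, or None if the user is pointing at or above the
    horizon, or beyond the allowed teleport range."""
    x, y, z = controller_pos
    dx, dy, dz = controller_dir
    if dy >= 0:
        return None                    # pointing level or upward: no floor hit
    t = -y / dy                        # ray parameter where height reaches 0
    target = (x + t * dx, 0.0, z + t * dz)
    if math.dist(controller_pos, target) > max_distance:
        return None                    # too far away to allow
    return target

# Controller held 1.2 m up, angled 45 degrees downward and forward:
print(teleport_target((0.0, 1.2, 0.0), (0.0, -0.7071, 0.7071)))
# -> roughly (0.0, 0.0, 1.2): press the button and the player snaps there.
```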

Above: HTC Vive factory

So let’s teleport to Poland. Michał Owsianko, the VR expert at Tapptic, built a simulation to test the HTC Vive headset with the handheld motion controllers. We conducted a number of experiments including walking and running using the handheld controllers, with the avatar’s direction of movement determined by, first, the way the user’s head was facing, and subsequently, the way the torso was facing. We also examined what happens when the avatar walks/runs through virtual objects such as walls.

Ten people took part in the study. All were affected in some way, but some were unconcerned, while for others the effects of VR sickness were severe and prolonged, and in one case did not kick in until long after the experiment had finished.

Before we discuss the experiments, let’s explore the causes and symptoms of VR sickness.

The common symptoms of VR sickness are disorientation, lack of balance, headaches, and eye fatigue, as well as feeling sick, even retching and vomiting. These are similar to the symptoms of motion sickness, such as carsickness and seasickness, and of simulator sickness (a long-time problem with Air Force flight simulators). Some of the causes are also similar, but with one major difference: you don’t need physical motion to experience VR sickness.

Like motion sickness, VR can cause nausea when there is a disconnection between your external sensory information (what you see and hear) and your internal sensors, known as the vestibular system. This means that if what you see and what you feel don’t match, you will feel ill and can actually vomit. Not everyone will be affected in this way, but it’s one of the main reasons why VR sickness happens.

But there are other causes of VR sickness that have nothing to do with motion. One of these involves the eyes. Serious gamers claim that a higher frame rate, such as 60 frames per second (fps), delivers a much better gaming experience than 30 fps (for reference, a standard movie is shot at 24 fps; high-definition formats at least double that).

Perhaps there is a biological reason for this: in order to minimize eye fatigue and disorientation, you need a smooth and consistently high frame rate. Expert opinion varies on what frame rate is acceptable for VR, but at Tapptic we believe 60 fps per eye is the minimum requirement (and 120 fps for full HD resolution). This means you need powerful machines to run VR, or you have to settle for simpler simulations.
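It's worth seeing what those frame rates mean in practice: the rendering budget per frame is simply the reciprocal of the rate, which is why every doubling of frame rate is so demanding. A quick Python illustration:

```python
def frame_budget_ms(fps):
    """Milliseconds available to draw one frame at a given refresh rate."""
    return 1000.0 / fps

for fps in (24, 30, 60, 90, 120):
    print(f"{fps:>3} fps -> {frame_budget_ms(fps):5.1f} ms per frame")
# 60 fps leaves about 16.7 ms to render the scene (for both eyes, in VR);
# at 120 fps the budget halves to about 8.3 ms, hence the powerful hardware.
```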

Another ingredient in visual disorientation is field of view. Interestingly, this is more acute for women than men. Did you know that women tend to have better peripheral vision than men? So women see a more panoramic view, while men tend to have better straight-ahead distance vision. This means that women need a bigger field of view in VR to avoid feeling nauseous.

Then there’s the full array of proprioceptors in our body. These are muscle spindles located in muscle fibres throughout the body. They inform us where each limb is, how the joints are positioned and how much pressure each part of the body is experiencing, without the eyes needing to see them. If the messages stop, or if the eyes and proprioceptors tell you different things, it may result in an out-of-body experience.

This mismatch between what the proprioceptors tell you is happening (real world) and what your eyes tell you is happening (VR world) can cause sickness. Our studies reveal this is particularly likely to occur when the VR simulation allows you to walk through objects. So if the VR avatar walks through a wall, the brain expects proprioceptors to report that you have hit a wall. And, we suspect, prior to impact the brain may warn the body to brace and/or prevent impact. When your real body feels no impact from the VR collision because there is no haptic feedback, it does funny things to your brain and stomach.
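One obvious mitigation, though it trades away some freedom, is simply not letting the avatar pass through solid geometry in the first place. Here is a minimal Python sketch of that idea, using made-up axis-aligned walls on a 2-D floor plan; real engines do this with proper collision systems:

```python
def clamped_move(position, step, walls):
    """Advance the avatar by `step` unless that would end up inside a wall.

    `walls` is a list of boxes on the floor plane, each given as
    ((min_x, min_z), (max_x, max_z)). Movement into a wall is cancelled,
    so the avatar stops at the surface instead of passing through it.
    """
    new_x = position[0] + step[0]
    new_z = position[1] + step[1]
    for (min_x, min_z), (max_x, max_z) in walls:
        if min_x <= new_x <= max_x and min_z <= new_z <= max_z:
            return position  # blocked: stay put this frame
    return (new_x, new_z)

# A wall segment occupying x in [2.0, 2.2], z in [0, 5]:
walls = [((2.0, 0.0), (2.2, 5.0))]
print(clamped_move((1.9, 1.0), (0.2, 0.0), walls))   # blocked -> (1.9, 1.0)
print(clamped_move((1.9, 1.0), (-0.2, 0.0), walls))  # free    -> (1.7, 1.0)
```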

Most studies focus on the frame rate (vision) and motion orientation (vestibular system), suggesting that the impact of proprioceptors is not fully appreciated.

Above: HTC Vive refinery

The psychological implications of VR sickness are often overlooked.

When writing his 1987 paper on flight simulator sickness, J.S. Crowley identified that airmen who had experienced physical symptoms of simulator sickness feared repeating training sessions in the simulator. While flight simulators are different to modern VR headsets, the physical symptoms are very similar to those of VR sickness, i.e. eye fatigue, disorientation, nausea, vomiting, etc.

My own experience suggests there are psychological implications of VR also. It might sound silly, but after my severe and delayed reaction to the VR experiments, even a week on, I felt some fear about taking part in more VR testing.

Despite prolonged experimentation with the Vive VR system, the effects didn’t hit me right away. They appeared several hours after I finished, and when they kicked in, I felt terrible. I could no longer work, had to leave the office early, go home, and sleep off the effects for a couple of hours. This delayed reaction is concerning: if the negative effects aren’t triggered until after the simulation, it becomes extremely difficult to judge when to stop.

A generally good rule of thumb is this: if your face or ears are getting hot or you get disoriented, stop right away. If you ignore these warning signs and continue using VR, then you risk going into a deep sickness state, which can last for hours and give you a headache that strong painkillers won’t shift.

It took Michał, our VR expert, a couple of hours to put together a demo for the HTC Vive system, and a few more hours of refinement, polishing and testing to have a working system. This enabled us to move around the VR world, holding the Vive handheld controllers, while swinging our arms to simulate the walking of the virtual avatar. The faster you swung your arms, the faster the avatar moved.
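The core of such a demo is a small mapping from controller motion to avatar velocity. The Python sketch below is our guess at the general shape of that logic, not Tapptic's actual code; the scale and speed-cap constants are invented for illustration:

```python
def swing_speed(hand_heights, dt):
    """Average vertical hand speed over one frame, in meters per second.
    `hand_heights` holds a (previous, current) height pair for each hand."""
    return sum(abs(curr - prev) / dt for prev, curr in hand_heights) / len(hand_heights)

def locomotion_step(avatar_pos, facing_dir, hand_heights, dt,
                    speed_scale=1.5, max_speed=4.0):
    """Advance the avatar along its facing direction (taken from the head
    or the torso, depending on the variant being tested) at a speed
    proportional to how vigorously the arms are swinging."""
    speed = min(max_speed, speed_scale * swing_speed(hand_heights, dt))
    return (avatar_pos[0] + facing_dir[0] * speed * dt,
            avatar_pos[1] + facing_dir[1] * speed * dt)

# Both hands moved about 4 cm vertically during one 60 fps frame (a brisk swing):
pos = locomotion_step((0.0, 0.0), (0.0, 1.0), [(1.00, 1.04), (1.30, 1.26)], 1 / 60)
print(pos)  # the avatar steps a few centimeters forward this frame
```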

The two of us tested it for some time with no obvious ill effects. Then we invited some colleagues to take part, several of whom began to feel nauseous very quickly.

We tried a number of refinements:


See the original post here:

How and why our experiments with virtual reality motion made …

Virtual Reality and the Future of Journalism | Chicago …

Virtual reality is taking journalism and storytelling to a new level by giving consumers the sensation of being in the middle of an event or story.

During the Winter Olympics in South Korea, NBC featured VR for 30 events, giving audiences 360-degree views of competitions.

The New York Times has also integrated VR into its journalism, not only giving readers a new perspective on the Olympics and its athletes, but also putting them in the middle of a battle with ISIS in Iraq, for example.

The Times has also launched what it is calling an augmented reality experience, which it describes as a bridge between the physical and digital world. The first experiment puts a New York Times honor box within your physical space via an app and your cellphone camera. The digital reproduction allows you to walk around it as well as to look at it from above and behind.

Our WTTW colleague and filmmaker Barbara Allen worked with Stanford’s virtual reality lab to build a VR experience around the flooding and devastation of Hurricane Katrina.

“I think people saw what happened, but didn’t really understand the feeling of what happened to those people,” Allen said. “With the virtual reality experience, it allows you to have a more empathetic feeling and understanding of what those people went through.”

(HammerandTusk / Pixabay)

Allen also created a VR experience with the Joffrey Ballet for BuzzFeed. It puts the consumer in the middle of dance rehearsals, allowing them to move the camera around the space to follow any and all of the dancers.

Besides storytelling, VR has many practical applications, including training pilots, surgeons and firefighters, and even interior decorating. It even helped Allen conquer her fear of heights.

Allen joins us in discussion.

Related stories:

From Virtual Reality to Physical Barriers: Building a Safer School

Feb. 26: What can school districts do to prevent a mass shooting? While the gun debate rages on, schools have to come up with other ways to make sure students are safe.

Virtual Reality Submarine to Set Sail at Lincoln Park Zoo

Oct. 18, 2017: A new experience coming this fall to Lincoln Park Zoo will allow visitors to dive into the ocean and explore landscapes and wildlife at the North and South Poles or in deep ocean waters.

Augmented Reality App Sharpens Focus on St. Valentine’s Day Massacre

Feb. 17, 2017: Last fall, 21st century technology was used to tell the story of a 20th century tragedy: the Eastland Disaster. The team behind that project is set to launch a second installment of its augmented reality app. Learn more.

Read the original here:

Virtual Reality and the Future of Journalism | Chicago …

Virtual Reality – Stanford Children’s Health

A new study is being conducted on using a VR program as a tool for stress inoculation therapy, which aims to help patients mitigate anxiety through cognitive behavioral therapy techniques, including relaxation and exposure. The study includes sending a VR headset home with patients who have a cardiac catheterization procedure scheduled so they can learn about the procedure and practice relaxation techniques. Although catheterizations are outpatient procedures, catheterization patients must undergo general anesthesia. Doctors find the experience can cause stress and anxiety for patients, especially if they’re young.

More:

Virtual Reality – Stanford Children’s Health

Virtual Reality – Latest Virtual Reality News Headset Reviews

Virtual Reality: what is it and why is it important to know about?

Virtual reality is essentially the use of technology to create the illusion of presence in an environment that isn’t really there. It works by sending information to various senses, such as sight and hearing, that fool our brains into experiencing something virtual. The illusion is often completed by the presence of interactivity; in other words, the virtual world responds in some way to your presence.

Read more here:

Virtual Reality – Latest Virtual Reality News Headset Reviews

Virtual reality | computer science | Britannica.com

Virtual reality (VR), the use of computer modeling and simulation that enables a person to interact with an artificial three-dimensional (3-D) visual or other sensory environment. VR applications immerse the user in a computer-generated environment that simulates reality through the use of interactive devices, which send and receive information and are worn as goggles, headsets, gloves, or body suits. In a typical VR format, a user wearing a helmet with a stereoscopic screen views animated images of a simulated environment. The illusion of “being there” (telepresence) is effected by motion sensors that pick up the user’s movements and adjust the view on the screen accordingly, usually in real time (the instant the user’s movement takes place). Thus, a user can tour a simulated suite of rooms, experiencing changing viewpoints and perspectives that are convincingly related to his own head turnings and steps. Wearing data gloves equipped with force-feedback devices that provide the sensation of touch, the user can even pick up and manipulate objects that he sees in the virtual environment.
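The head-tracking loop described here boils down to measuring the head pose each frame and counter-rotating the world so the scene appears to stand still while the viewpoint moves. A toy Python sketch of just the horizontal (yaw) component, with sign conventions chosen arbitrarily for illustration:

```python
import math

def view_yaw_matrix(head_yaw_radians):
    """2-D rotation for the horizontal part of the view: as the head turns
    by some angle, the world is rotated the opposite way, so the scene
    appears fixed in space while the viewpoint turns within it."""
    c, s = math.cos(-head_yaw_radians), math.sin(-head_yaw_radians)
    return [[c, -s], [s, c]]

def world_to_view(point_xz, head_yaw_radians):
    """Transform a world-space floor point into view space."""
    m = view_yaw_matrix(head_yaw_radians)
    x, z = point_xz
    return (m[0][0] * x + m[0][1] * z, m[1][0] * x + m[1][1] * z)

# Turn the head 90 degrees to the left: a point that was dead ahead
# (z = 1) should now appear off to the viewer's side.
print(world_to_view((0.0, 1.0), math.pi / 2))
```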

The term virtual reality was coined in 1987 by Jaron Lanier, whose research and engineering contributed a number of products to the nascent VR industry. A common thread linking early VR research and technology development in the United States was the role of the federal government, particularly the Department of Defense, the National Science Foundation, and the National Aeronautics and Space Administration (NASA). Projects funded by these agencies and pursued at university-based research laboratories yielded an extensive pool of talented personnel in fields such as computer graphics, simulation, and networked environments and established links between academic, military, and commercial work. The history of this technological development, and the social context in which it took place, is the subject of this article.

Artists, performers, and entertainers have always been interested in techniques for creating imaginative worlds, setting narratives in fictional spaces, and deceiving the senses. Numerous precedents for the suspension of disbelief in an artificial world in artistic and entertainment media preceded virtual reality. Illusionary spaces created by paintings or views have been constructed for residences and public spaces since antiquity, culminating in the monumental panoramas of the 18th and 19th centuries. Panoramas blurred the visual boundaries between the two-dimensional images displaying the main scenes and the three-dimensional spaces from which these were viewed, creating an illusion of immersion in the events depicted. This image tradition stimulated the creation of a series of media, from futuristic theatre designs, stereopticons, and 3-D movies to IMAX movie theatres, over the course of the 20th century to achieve similar effects. For example, the Cinerama widescreen film format, originally called Vitarama when invented for the 1939 New York World’s Fair by Fred Waller and Ralph Walker, originated in Waller’s studies of vision and depth perception. Waller’s work led him to focus on the importance of peripheral vision for immersion in an artificial environment, and his goal was to devise a projection technology that could duplicate the entire human field of vision. The Vitarama process used multiple cameras and projectors and an arc-shaped screen to create the illusion of immersion in the space perceived by a viewer. Though Vitarama was not a commercial hit until the mid-1950s (as Cinerama), the Army Air Corps successfully used the system during World War II for anti-aircraft training under the name Waller Flexible Gunnery Trainer, an example of the link between entertainment technology and military simulation that would later advance the development of virtual reality.

Sensory stimulation was a promising method for creating virtual environments before the use of computers. After the release of a promotional film called This Is Cinerama (1952), the cinematographer Morton Heilig became fascinated with Cinerama and 3-D movies. Like Waller, he studied human sensory signals and illusions, hoping to realize a “cinema of the future.” By late 1960, Heilig had built an individual console with a variety of inputs (stereoscopic images, motion chair, audio, temperature changes, odours, and blown air) that he patented in 1962 as the Sensorama Simulator, designed to stimulate the senses of an individual to simulate an actual experience realistically. During the work on Sensorama, he also designed the Telesphere Mask, a head-mounted stereoscopic 3-D TV display that he patented in 1960. Although Heilig was unsuccessful in his efforts to market Sensorama, in the mid-1960s he extended the idea to a multiviewer theatre concept patented as the Experience Theater and a similar system called Thrillerama for the Walt Disney Company.

The seeds for virtual reality were planted in several computing fields during the 1950s and 60s, especially in 3-D interactive computer graphics and vehicle/flight simulation. Beginning in the late 1940s, Project Whirlwind, funded by the U.S. Navy, and its successor project, the SAGE (Semi-Automated Ground Environment) early-warning radar system, funded by the U.S. Air Force, first utilized cathode-ray tube (CRT) displays and input devices such as light pens (originally called light guns). By the time the SAGE system became operational in 1957, air force operators were routinely using these devices to display aircraft positions and manipulate related data.

During the 1950s, the popular cultural image of the computer was that of a calculating machine, an automated electronic brain capable of manipulating data at previously unimaginable speeds. The advent of more affordable second-generation (transistor) and third-generation (integrated circuit) computers emancipated the machines from this narrow view, and in doing so it shifted attention to ways in which computing could augment human potential rather than simply substituting for it in specialized domains conducive to number crunching. In 1960 Joseph Licklider, a professor at the Massachusetts Institute of Technology (MIT) specializing in psychoacoustics, posited a man-computer symbiosis and applied psychological principles to human-computer interactions and interfaces. He argued that a partnership between computers and the human brain would surpass the capabilities of either alone. As founding director of the new Information Processing Techniques Office (IPTO) of the Defense Advanced Research Projects Agency (DARPA), Licklider was able to fund and encourage projects that aligned with his vision of human-computer interaction while also serving priorities for military systems, such as data visualization and command-and-control systems.

Another pioneer was electrical engineer and computer scientist Ivan Sutherland, who began his work in computer graphics at MIT’s Lincoln Laboratory (where Whirlwind and SAGE had been developed). In 1963 Sutherland completed Sketchpad, a system for drawing interactively on a CRT display with a light pen and control board. Sutherland paid careful attention to the structure of data representation, which made his system useful for the interactive manipulation of images. In 1964 he was put in charge of IPTO, and from 1968 to 1976 he led the computer graphics program at the University of Utah, one of DARPA’s premier research centres. In 1965 Sutherland outlined the characteristics of what he called the “ultimate display” and speculated on how computer imagery could construct plausible and richly articulated virtual worlds. His notion of such a world began with visual representation and sensory input, but it did not end there; he also called for multiple modes of sensory input. DARPA sponsored work during the 1960s on output and input devices aligned with this vision, such as the Sketchpad III system by Timothy Johnson, which presented 3-D views of objects; Larry Roberts’s Lincoln Wand, a system for drawing in three dimensions; and Douglas Engelbart’s invention of a new input device, the computer mouse.

Within a few years, Sutherland contributed the technological artifact most often identified with virtual reality, the head-mounted 3-D computer display. In 1967 Bell Helicopter (now part of Textron Inc.) carried out tests in which a helicopter pilot wore a head-mounted display (HMD) that showed video from a servo-controlled infrared camera mounted beneath the helicopter. The camera moved with the pilot’s head, both augmenting his night vision and providing a level of immersion sufficient for the pilot to equate his field of vision with the images from the camera. This kind of system would later be called augmented reality because it enhanced a human capacity (vision) in the real world. When Sutherland left DARPA for Harvard University in 1966, he began work on a tethered display for computer images (see photograph). This was an apparatus shaped to fit over the head, with goggles that displayed computer-generated graphical output. Because the display was too heavy to be borne comfortably, it was held in place by a suspension system. Two small CRT displays were mounted in the device, near the wearer’s ears, and mirrors reflected the images to his eyes, creating a stereo 3-D visual environment that could be viewed comfortably at a short distance. The HMD also tracked where the wearer was looking so that correct images would be generated for his field of vision. The viewer’s immersion in the displayed virtual space was intensified by the visual isolation of the HMD, yet other senses were not isolated to the same degree and the wearer could continue to walk around.

An important area of application for VR systems has always been training for real-life activities. The appeal of simulations is that they can provide training equal or nearly equal to practice with real systems, but at reduced cost and with greater safety. This is particularly the case for military training, and the first significant application of commercial simulators was pilot training during World War II. Flight simulators rely on visual and motion feedback to augment the sensation of flying while seated in a closed mechanical system on the ground. The Link Company, founded by former piano maker Edwin Link, began to construct the first prototype Link Trainers during the late 1920s, eventually settling on the “blue box” design acquired by the Army Air Corps in 1934. The first systems used motion feedback to increase familiarity with flight controls. Pilots trained by sitting in a simulated cockpit, which could be moved hydraulically in response to their actions (see photograph). Later versions added a cyclorama scene painted on a wall outside the simulator to provide limited visual feedback. Not until the Celestial Navigation Trainer, commissioned by the British government in World War II, were projected film strips used in Link Trainers; still, these systems could only project what had been filmed along a correct flight or landing path, not generate new imagery based on a trainee’s actions. By the 1960s, flight trainers were using film and closed-circuit television to enhance the visual experience of flying. The images could be distorted to generate flight paths that diverted slightly from what had been filmed; sometimes multiple cameras were used to provide different perspectives, or movable cameras were mounted over scale models to depict airports for simulated landings.

Inspired by the controls in the Link flight trainer, Sutherland suggested that such displays include multiple sensory outputs, force-feedback joysticks, muscle sensors, and eye trackers; a user would be fully immersed in the displayed environment and fly through concepts which never before had any visual representation. In 1968 he moved to the University of Utah, where he and his colleague David Evans founded Evans & Sutherland Computer Corporation. The new company initially focused on the development of graphics applications, such as scene generators for flight simulator systems. These systems could render scenes at roughly 20 frames per second in the early 1970s, about the minimum frame rate for effective flight training. General Electric Company constructed the first flight simulators with built-in, real-time computer image generation, first for the Apollo program in the 1960s, then for the U.S. Navy in 1972. By the mid-1970s, these systems were capable of generating simple 3-D models with a few hundred polygon faces; they utilized raster graphics (collections of dots) and could model solid objects with textures to enhance the sense of realism (see computer graphics). By the late 1970s, military flight simulators were also incorporating head-mounted displays, such as McDonnell Douglas Corporation’s VITAL helmet, primarily because they required much less space than a projected display. A sophisticated head tracker in the HMD followed a pilot’s eye movements to match computer-generated images (CGI) with his view and handling of the flight controls.

Advances in flight simulators, human-computer interfaces, and augmented reality systems pointed to the possibility of immersive, real-time control systems, not only for research or training but also for improved performance. Since the 1960s, electrical engineer Thomas Furness had been working on visual displays and instrumentation in cockpits for the U.S. Air Force. By the late 1970s, he had begun development of virtual interfaces for flight control, and in 1982 he demonstrated the Visually Coupled Airborne Systems Simulator, better known as the Darth Vader helmet, for the armoured archvillain of the popular movie Star Wars. From 1986 to 1989, Furness directed the air force’s Super Cockpit program. The essential idea of this project was that the capacity of human pilots to handle spatial information depended on these data being portrayed in a way that takes advantage of the human’s natural perceptual mechanisms. Applying the HMD to this goal, Furness designed a system that projected information such as computer-generated 3-D maps, forward-looking infrared and radar imagery, and avionics data into an immersive, 3-D virtual space that the pilot could view and hear in real time. The helmet’s tracking system, voice-actuated controls, and sensors enabled the pilot to control the aircraft with gestures, utterances, and eye movements, translating immersion in a data-filled virtual space into control modalities. The more natural perceptual interface also reduced the complexity and number of controls in the cockpit. The Super Cockpit thus realized Licklider’s vision of man-machine symbiosis by creating a virtual environment in which pilots flew through data. Beginning in 1987, British Aerospace (now part of BAE Systems) also used the HMD as the basis for a similar training simulator, known as the Virtual Cockpit, that incorporated head, hand, and eye tracking, as well as speech recognition.

Sutherland and Furness brought the notion of simulator technology from real-world imagery to virtual worlds that represented abstract models and data. In these systems, visual verisimilitude was less important than immersion and feedback that engaged all the senses in a meaningful way. This approach had important implications for medical and scientific research. Project GROPE, started in 1967 at the University of North Carolina by Frederick Brooks, was particularly noteworthy for the advancements it made possible in the study of molecular biology. Brooks sought to enhance perception and comprehension of the interaction of a drug molecule with its receptor site on a protein by creating a window into the virtual world of molecular docking forces. He combined wire-frame imagery to represent molecules and physical forces with haptic (tactile) feedback mediated through special hand-grip devices to arrange the virtual molecules into a minimum binding energy configuration. Scientists using this system felt their way around the represented forces like flight trainees learning the instruments in a Link cockpit, grasping the physical situations depicted in the virtual world and hypothesizing new drugs based on their manipulations. During the 1990s, Brooks’s laboratory extended the use of virtual reality to radiology and ultrasound imaging.

Virtual reality was extended to surgery through the technology of telepresence, the use of robotic devices controlled remotely through mediated sensory feedback to perform a task. The foundation for virtual surgery was the expansion during the 1970s and 80s of microsurgery and other less invasive forms of surgery. By the late 1980s, microcameras attached to endoscopic devices relayed images that could be shared among a group of surgeons looking at one or more monitors, often in diverse locations. In the early 1990s, a DARPA initiative funded research to develop telepresence workstations for surgical procedures. This was Sutherland’s window into a virtual world, with the added dimension of a level of sensory feedback that could match a surgeon’s fine motor control and hand-eye coordination. The first telesurgery equipment was developed at SRI International in 1993; the first robotic surgery was performed in 1998 at the Broussais Hospital in Paris.

As virtual worlds became more detailed and immersive, people began to spend time in these spaces for entertainment, aesthetic inspiration, and socializing. Research that conceived of virtual places as fantasy spaces, focusing on the activity of the subject rather than replication of some real environment, was particularly conducive to entertainment. Beginning in 1969, Myron Krueger of the University of Wisconsin created a series of projects on the nature of human creativity in virtual environments, which he later called “artificial reality.” Much of Krueger’s work, especially his VIDEOPLACE system, processed interactions between a participant’s digitized image and computer-generated graphical objects. VIDEOPLACE could analyze and process the user’s actions in the real world and translate them into interactions with the system’s virtual objects in various preprogrammed ways. Different modes of interaction with names like “finger painting” and “digital drawing” suggest the aesthetic dimension of this system. VIDEOPLACE differed in several aspects from training and research simulations. In particular, the system reversed the emphasis from the user perceiving the computer’s generated world to the computer perceiving the user’s actions and converting these actions into compositions of objects and space within the virtual world. With the emphasis shifted to responsiveness and interaction, Krueger found that fidelity of representation became less important than the interactions between participants and the rapidity of response to images or other forms of sensory input.

The ability to manipulate virtual objects and not just see them is central to the presentation of compelling virtual worlds; hence the iconic significance of the data glove in the emergence of VR in commerce and popular culture. Data gloves relay a user’s hand and finger movements to a VR system, which then translates the wearer’s gestures into manipulations of virtual objects. The first data glove, developed in 1977 at the University of Illinois for a project funded by the National Endowment for the Arts, was called the Sayre Glove after one of the team members. In 1982 Thomas Zimmerman invented the first optical glove, and in 1983 Gary Grimes at Bell Laboratories constructed the Digital Data Entry Glove, the first glove with sufficient flexibility and tactile and inertial sensors to monitor hand position for a variety of applications, such as providing an alternative to keyboard input for data entry.

Zimmerman’s glove would have the greatest impact. He had been thinking for years about constructing an interface device for musicians based on the common practice of playing “air guitar”; in particular, a glove capable of tracking hand and finger movements could be used to control instruments such as electronic synthesizers. He patented an optical flex-sensing device (which used light-conducting fibres) in 1982, one year after Grimes patented his glove-based computer interface device. By then, Zimmerman was working at the Atari Research Center in Sunnyvale, California, along with Scott Fisher, Brenda Laurel, and other VR researchers who would be active during the 1980s and beyond. Jaron Lanier, another researcher at Atari, shared Zimmerman’s interest in electronic music. Beginning in 1983, they worked together on improving the design of the data glove, and in 1985 they left Atari to start up VPL Research; its first commercial product was the VPL DataGlove.

By 1985, Fisher had also left Atari to join NASA’s Ames Research Center at Moffett Field, California, as founding director of the Virtual Environment Workstation (VIEW) project. The VIEW project put together a package of objectives that summarized previous work on artificial environments, ranging from creation of multisensory and immersive virtual environment workstations to telepresence and teleoperation applications. Influenced by a range of prior projects that included Sensorama, flight simulators, and arcade rides, and surprised by the expense of the air force’s Darth Vader helmets, Fisher’s group focused on building low-cost, personal simulation environments. While the objective of NASA was to develop telerobotics for automated space stations in future planetary exploration, the group also considered the workstation’s use for entertainment, scientific, and educational purposes. The VIEW workstation, called the Virtual Visual Environment Display when completed in 1985, established a standard suite of VR technology that included a stereoscopic head-coupled display, head tracker, speech recognition, computer-generated imagery, data glove, and 3-D audio technology.

The VPL DataGlove was brought to market in 1987, and in October of that year it appeared on the cover of Scientific American (see photograph). VPL also spawned a full-body, motion-tracking system called the DataSuit, a head-mounted display called the EyePhone, and a shared VR system for two people called RB2 (Reality Built for Two). VPL declared June 7, 1989, Virtual Reality Day. On that day, both VPL and Autodesk publicly demonstrated the first commercial VR systems. The Autodesk VR CAD (computer-aided design) system was based on VPL’s RB2 technology but was scaled down for operation on personal computers. The marketing splash introduced Lanier’s new term virtual reality as a realization of cyberspace, a concept introduced in science fiction writer William Gibson’s Neuromancer in 1984. Lanier, the dreadlocked chief executive officer of VPL, became the public celebrity of the new VR industry, while announcements by Autodesk and VPL let loose a torrent of enthusiasm, speculation, and marketing hype. Soon it seemed that VR was everywhere, from the Mattel/Nintendo PowerGlove (1989) to the HMD in the movie The Lawnmower Man (1992), the Nintendo VirtualBoy game system (1995), and the television series VR5 (1995).

Numerous VR companies were founded in the early 1990s, most of them in Silicon Valley, but by mid-decade most of the energy unleashed by the VPL and Autodesk marketing campaigns had dissipated. The VR configuration that took shape over a span of projects leading from Sutherland to Lanier (HMD, data gloves, multimodal sensory input, and so forth) failed to have a broad appeal as quickly as the enthusiasts had predicted. Instead, the most visible and successfully marketed products were location-based entertainment systems rather than personal VR systems. These VR arcades and simulators, designed by teams from the game, movie, simulation, and theme park industries, combined the attributes of video games, amusement park rides, and highly immersive storytelling. Perhaps the most important of the early projects was Disneyland’s Star Tours, an immersive flight simulator ride based on the Star Wars movie series and designed in collaboration with producer George Lucas’s Industrial Light & Magic. Disney had long built themed rides utilizing advanced technology, such as animatronic characters, notably in Pirates of the Caribbean, an attraction originally installed at Disneyland in 1967. Star Tours utilized simulated motion and special-effects technology, mixing techniques learned from Hollywood films and military flight simulators with strong story lines and architectural elements that shaped the viewers’ experience from the moment they entered the waiting line for the attraction. After the opening of Star Tours in 1987, Walt Disney Imagineering embarked on a series of projects to apply interactive technology and immersive environments to ride systems, including 3-D motion-picture photography used in Honey, I Shrunk the Audience (1995), the DisneyQuest indoor interactive theme park (1998), and the multiplayer-gaming virtual world, Toontown Online (2001).

In 1990, Virtual World Entertainment opened the first BattleTech emporium in Chicago. Modeled loosely on the U.S. military’s SIMNET system of networked training simulators, BattleTech centres put players in individual pods, essentially cockpits that served as immersive, interactive consoles for both narrative and competitive game experiences. All the vehicles represented in the game were controlled by other players, each in his own pod and linked to a high-speed network set up for a simultaneous multiplayer experience. The players’ immersion in the virtual world of the competition resulted from a combination of elements, including a carefully constructed story line, the physical architecture of the arcade space and pod, and the networked virtual environment. During the 1990s, BattleTech centres were constructed in other cities around the world, and the BattleTech franchise also expanded to home electronic games, books, toys, and television.

While the Disney and Virtual World Entertainment projects were the best-known instances of location-based VR entertainments, other important projects included Iwerks Entertainment’s Turbo Tour and Turboride 3-D motion simulator theatres, first installed in San Francisco in 1992; motion-picture producer Steven Spielberg’s Gameworks arcades, the first of which opened in 1997 as a joint project of Universal Studios, Sega Corporation, and Dreamworks SKG; many individual VR arcade rides, beginning with Sega’s R360 gyroscope flight simulator, released in 1991; and, finally, Visions of Reality’s VR arcades, the spectacular failure of which contributed to the bursting of the investment bubble for VR ventures in the mid-1990s.

Continue reading here:

Virtual reality | computer science | Britannica.com

Virtual Reality on Steam

Rec Room – Early Access, VR, Multiplayer, Sports

The Lab – Free to Play, VR, Action, Singleplayer

Arizona Sunshine – Zombies, VR, Action, Adventure

Onward – Early Access, VR, Simulation, Action

SUPERHOT VR – Action, Indie, VR, Bullet Time

Job Simulator – Simulation, VR, Funny, Singleplayer

Tilt Brush – Design & Illustration, VR

Serious Sam VR: The Last Hope – Early Access, Action, Indie, VR

GORN – Early Access, Violent, VR, Gore

Hot Dogs, Horseshoes & Hand Grenades – Early Access, Simulation, Action, Indie

Bigscreen Beta – Simulation, VR, Utilities

Google Earth VR – VR, Free to Play, Simulation, Casual

Space Pirate Trainer – Early Access, Action, VR, Space

Visit link:

Virtual Reality on Steam

Virtual Reality | FOX Sports

FOX Sports VR App

FOX Sports VR lets you watch top live sports events in Virtual Reality from your own VIP stadium suite or from on-the-field camera positions.

We’re constantly adding new events to the schedule, check back here for the latest updates!

Gold Cup

UEFA Champions League Final

Super Bowl LI

The BIG EAST Tournament

MLS Cup

The Big Ten Championship Game

The Battle of Bedlam

The Red River Rivalry

Mexico vs. Venezuela

Read more:

Virtual Reality | FOX Sports

Gear VR with Controller Virtual Reality – SM-R324NZAAXAR …

Compatible with the following Samsung Galaxy smartphones. USB Type-C: Galaxy S8, Galaxy S8+. microUSB: Galaxy S7, Galaxy S7 edge, Galaxy Note 5, Galaxy S6, Galaxy S6 edge and Galaxy S6 edge+. Galaxy smartphone sold separately.


See the original post here:

Gear VR with Controller Virtual Reality – SM-R324NZAAXAR …

3 Ways Virtual Reality Is Transforming Medical Care | NBC News – NBCNews.com

Aug. 22, 2017 / 2:13 PM ET


Think virtual reality is just about gaming and the world of make-believe? Get real. From product design to real estate, many industries have adopted VR and related technologies, and nowhere are the benefits of VR greater than in healthcare.

“We are seeing more and more of this incorporated faster than ever before,” said Dr. Ajit Sachdeva, Director of Education with the American College of Surgeons. “VR has reached a tipping point in medicine.”

As NBC News MACH reported previously, psychologists have found VR to be good for treating post-traumatic stress disorder. And stroke doctors, pain specialists, surgeons, and other medical practitioners have found their own uses for VR. In some cases, medical VR involves the familiar headsets; in others, 3D glasses and special video screens give a VR-like experience.

The use of VR and 3D visualization technology in medicine isn’t brand-new. Medical researchers have been exploring ways to create 3D models of patients’ internal organs using VR since the 1990s. But advances in computing power have made simulated images much more realistic and much faster to create.

X-rays, CT scans, and MRI scans can now be turned into high-resolution 3D images in under a minute, said Sergio Agirre, chief technology officer of EchoPixel, a Mountain View, California, firm whose visualization software is being used in hospitals across the U.S. “Twenty years ago, it would probably take them a week to be able to do that.”

These days, common surgical procedures like appendectomies or cesarean sections are often pretty routine; one case is similar to the next. But some especially complicated procedures, including the separation of conjoined twins, present unique challenges that can be met only with meticulous planning. For these, 3D visualization is proving to be a game-changer.

Recently, VR played a vital role in the successful separation of conjoined twins at Masonic Children’s Hospital in Minneapolis. The three-month-old twins were joined far more extensively than some other conjoined twins, with intricate connections between their hearts and livers. That meant the surgery to separate the twins would be unusually complicated and potentially very dangerous for the twins.

Before surgery, the surgical team took CT, ultrasound, and MRI scans and created a super-detailed virtual model of the twins’ bodies and then ventured inside their organs to identify potential pitfalls and plan how these would be avoided during surgery.

“You look through the 3D glasses, and you can basically walk through the structure, peeling apart parts so you can look at exactly what you want to,” said Dr. Anthony Azakie, one of the surgeons who separated the twins. He said the high-resolution visualization helped minimize “the number of surprises that we were potentially dealing with.”

VR technology is also being used by vascular specialists like Dr. In Sup Choi, director of interventional neuroradiology at Lahey Hospital & Medical Center in Burlington, Massachusetts. When he uses interactive 3D visualizations to prepare for procedures to fix aneurysms and blocked arteries, he said, he gets “a better idea of what types of devices we have to use” and what approach might work best.

If doctors are donning VR gear, so are their patients. They’re using the headsets to immerse themselves in a peaceful virtual world that takes their focus off discomfort associated with medical problems and treatments.

Because anesthesia and sedation can be risky for some patients, including those who are frail or very elderly, some hospitals are offering these patients VR headsets as a way to help control pain during minimally invasive procedures. It’s still experimental at this point, but the results so far have been successful.

Similarly, VR has been shown to reduce anxiety in cancer patients undergoing chemotherapy infusions. VR is even making injections and other painful or potentially frightening procedures less distressing to children.

But burn patients may be some of the biggest beneficiaries of VR technology. From daily cleaning and bandaging of burns to skin grafts, severe burn patients experience “some of the most painful procedures in medicine,” said Dr. Hunter Hoffman, a University of Washington scientist with expertise in the use of VR for pain relief. Pain medications help, but they’re often not strong enough.

For these patients, Hoffman helped create the VR game SnowWorld, which features imagery designed specifically to distract burn patients from pain. Patients who play the game during treatment report up to 50 percent less pain than similar patients not playing the game, according to preliminary research. Other research suggests that patients playing the game actually show changes in the brain that indicate theyre feeling less pain.

SnowWorld is now being evaluated in clinical trials at four sites in the U.S. and at two international sites.

VR shouldn’t be considered a replacement for pain-killing medication, Hoffman said, adding that combining drugs and VR could be especially effective.

VR is also helping patients overcome balance and mobility problems resulting from stroke or head injury.

“Using VR, I can control what’s going on around the patient and measure what kind of impact it’s having on that patient’s ability to change,” said Emily Keshner, a professor of physical therapy at Temple University in Philadelphia. “We expose them to this repeatedly and we give them feedback about how they can respond to prevent themselves from falling.”

Research has shown that VR-mediated rehabilitation can speed the pace at which these patients regain physical abilities. There’s a long way to go in conducting all the research needed to validate these results and make these techniques part of routine practice, Keshner said, but it’s on the way.

One study of stroke patients showed that VR rehab led to more improvements in arm and hand movement compared to conventional rehab after four weeks of therapy. The VR-assisted patients had better mobility when the doctors checked in two months later. Other research has shown similarly successful outcomes for patients with cerebral palsy undergoing rehab for balance problems.

“The power of VR [for therapy] is that you’re really changing the way people perceive the world,” Keshner said. “They learn how to respond. And after practicing in that virtual world, they are much more confident and capable.”




Read more here:

3 Ways Virtual Reality Is Transforming Medical Care | NBC News – NBCNews.com

