Exploring Virtual Worlds with Head-Mounted Displays
Joseph C. Chung
Rodney A. Brooks
M. R. Harris
M. T. Kelley
R. L. Holloway
For nearly a decade the University of North Carolina at Chapel Hill has been conducting research in the use of simple head-mounted displays in "real-world" applications. Such units provide the user with non-holographic true three-dimensional information, since the kinetic depth effect, stereoscopy, and other visual cues combine to immerse the user in a "virtual world" which behaves, in some respects, like the real world.
In 1965 Ivan Sutherland (1) first proposed the Ultimate Display – a display in which computer-generated images would behave exactly as their real-world analogs do. Computer-generated chairs could be sat upon. Computer-generated apple pies would smell and taste just like Mom's. And computer-generated bullets would be fatal. Fans of the television series "Star Trek: The Next Generation" may recognize that such a display exists on the latest version of the starship Enterprise in the form of the "holodeck". While Sutherland's Ultimate Display may indeed be 400 years away, we in the 20th century can at least begin to investigate more feasible versions of it as our current technology allows.
Even for displays less fantastic than the Ultimate Display, Sutherland recognized the need for as complete a sensory input as possible. Most important is kinetic feedback – the response of the computer display to the user's movement. The senses of sight, sound, and feeling lend themselves most easily to this effect, as objects can be moved out of sight, apparent sound sources can shift their relative position when the user's head is turned, and force feedback mechanisms can respond to hand and arm movements. Such display responses are under complete computer control and may or may not be limited to familiar real-world behaviors. This display with its computer-controlled objects and their computer-generated behaviors comprises what has come to be known as a virtual world. It is the basis for a representation-rich approach to problems that previously may have been limited to pencil-and-paper representation.
Here at UNC-Chapel Hill, the application of the virtual world approach to various problems has become a major research focus, and the use of head-mounted displays (HMDs) is an important component of this research. To be honest, our head-mounted displays are nothing new. The technology we use is all commercially available and has been used in other head-mounted display efforts. What is new, however, is the application of the head-mounted display to the problems of molecular structure, architecture, and in the future, medical imaging. We hope to demonstrate the diversity of the problems in which the head-mounted display can be effectively used.
The use of head-mounted displays in the exploration of computer-generated virtual worlds is a step towards a completely natural interface between man and machine. We observe users' appreciation of complex spatial interrelationships develop more quickly and with less effort with 3-D dynamic displays and 3-D interaction devices. It is much easier to change one's view of a scene by walking around it or stooping to look up at it than to decompose the desired change into a series of axis rotations which are effected by turning knobs (the "Etch-a-Sketch constraint"). And, without being distracted by such superfluous tasks, the user is less likely to become confused and lose his orientation. Clearly, the ideal head-mounted display would be a much preferable alternative to conventional displays. Because the head-mounted display is still a relatively new technology, however, what comprises an ideal head-mounted display cannot be indisputably defined, nor, as yet, does one exist. First, we review the better-known head-mounted displays of the past 20 years.
Sutherland himself took the first step towards the Ultimate Display by building a head-mounted display at Harvard University which he took with him to the University of Utah. (2) This unit used a pair of small CRTs to display stereoscopic images, and also allowed the wearer to see his real surroundings. Special hardware was designed and built to generate the wire frame images presented to the user. Tracking of the user's head position and orientation was accomplished either with direct mechanical linkage between the HMD and an encoding device attached to the ceiling, or with an ultrasonic head position sensor. Sutherland achieved good results with this device.
In 1983 Mark Callahan of M.I.T.'s Architecture Machine Group produced an updated version of Sutherland's HMD using the then available improved display and computing engines. (3)
At the NASA Ames Research Center, Fisher et al. (4) developed the next step in head-mounted displays, intended for telerobotics and space station information management applications. This unit was capable of displaying computer-generated images or video from remote cameras, and of mixing either kind with frames stored on optical video disk. Breaking with the tradition of previous systems, the NASA unit positioned its liquid crystal display screens directly in front of the wearer's eyes. Fisher also enhanced the user's interaction with the virtual world through the use of the gesture-sensing DataGlove* from VPL Research and through the incorporation of speech recognition into the system.
* DataGlove™ is a registered trademark of VPL Research, Inc., Redwood City, California.
CAE Electronics Ltd. of Quebec has developed a fiber-optic helmet-mounted display system (FOHMD) (5), intended for use with air combat flight simulators and other such applications as remotely piloted vehicles. In the FOHMD system, four light-valve projectors transmit the two eyes' images through fiber-optic cables to the helmet display, where they are viewed through wide-angle optics. The beam splitters on the helmet display still permit the pilot to view cockpit indicators and head-up displays. The FOHMD provides a viewing field of 64° vertically by 135° horizontally, including a high-resolution inset field (25° by 19°). Head tracking is achieved through two solid-state sensors, each capable of reporting the two-dimensional position of an infrared LED within its field of view. By flashing the helmet-mounted LEDs in sequence, the helmet position and orientation can be computed from the information supplied by the sensors.
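The paper does not describe CAE's algorithm, but the underlying geometry can be sketched: each sensor's 2-D reading defines a sight ray toward the currently flashing LED, and the LED's 3-D position is the least-squares intersection of the two rays. The function below is our illustrative reconstruction under an ideal-sensor assumption, not CAE's method:

```python
import numpy as np

def triangulate_led(p1, d1, p2, d2):
    """Least-squares midpoint of two sensor sight rays.

    p1, p2: known sensor positions; d1, d2: unit direction vectors toward
    the flashing LED, derived from each sensor's 2-D image coordinates.
    Minimizes |(p1 + t1*d1) - (p2 + t2*d2)| over t1, t2.
    """
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = p1 - p2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b            # zero only if the rays are parallel
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    # Midpoint of the closest approach between the two rays
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))
```

Repeating this for each LED in the flash sequence yields several known helmet points in sensor coordinates, from which the helmet's position and orientation follow by a rigid-body fit.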
Under the direction of Dr. Thomas Furness, an experimental head-mounted display was developed at the Armstrong Aerospace Medical Research Laboratory at Wright-Patterson Air Force Base. (6) The Visually Coupled Airborne Systems Simulator (VCASS) was designed as an inexpensive platform with which new cockpit configurations could be evaluated. The VCASS uses miniature television tubes and an innovative optical system to present a 120° three-dimensional scene to the pilot. It also features gesture and voice communication with the host, and three-dimensional sound display.
HEAD-MOUNTED DISPLAY RESEARCH AT UNC
See-through HMD:
Borrowing ideas from the Sutherland and Callahan units, we constructed a see-through head-mounted display cheaply and simply from off-the-shelf commercially available products. (See Figure 1.) The unit was built on plastic suspension straps from a pilot's instrument-training hood, onto which was mounted a horizontal shelf located at the wearer's eyebrows. Two Seiko color liquid crystal television sets were dismantled to provide the 2-inch-diagonal display screens and driving circuits. These screens have a resolution of 220 vertical by 320 horizontal pixels. Half-silvered mirrors at a 45° angle enable the wearer to view the screens while still being able to see his physical surroundings. Plastic lenses between the half-silvered mirrors and the screens adjust the focal length to a comfortable value, and an electroluminescent panel backlights the liquid crystal screens. The field of view presented by this unit was approximately 25° horizontally.
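The quoted 25° figure can be roughly checked from the screen geometry: a flat screen of width w viewed at an effective optical distance d subtends a horizontal field of 2*atan(w/2d). The numbers below are our illustrative assumptions (square pixels, a single effective optical distance standing in for the lens system), not measurements of the actual unit:

```python
import math

def horizontal_fov_deg(screen_width, optical_distance):
    """Horizontal field of view, in degrees, of a flat screen of the
    given width viewed at the given effective optical distance."""
    return math.degrees(2 * math.atan(screen_width / (2 * optical_distance)))

# Assuming square pixels, a 2-inch-diagonal screen with a 320 x 220 grid
# is about 2 * 320 / math.hypot(320, 220) ~= 1.65 inches wide.  The
# reported ~25 degree field then implies an effective optical distance
# (as set by the plastic lenses) of roughly 3.7 inches.
fov = horizontal_fov_deg(1.65, 3.72)
```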
We have recently begun collaborating with a group at the Air Force Institute of Technology, located at Wright-Patterson Air Force Base and under the leadership of Major Phil Amburn, a recent UNC student. Amburn's group has designed and constructed a head-mounted display similar to the NASA Ames unit. (See Figure 2.) It is built on a bicycling helmet, and although one is able to strap it on quite securely, it is much heavier than our see-through HMD. The moment of inertia of the unit is very high, and we feel that removing a couple of pounds will greatly reduce user fatigue. Since television technology is constantly improving, Amburn was able to use larger 3-inch-diagonal color television screens, and with simple magnifying optics, these screens provide a horizontal viewing field of approximately 55°. The AFIT unit uses fluorescent backlighting, which provides brighter images than our see-through HMD's electroluminescent panel. We have also found that with very little modification, the LEEP optics can be easily incorporated, and in first experiments, even with no correction for the optical distortion, the wide-angle optics enhanced the visual effect.
FUTURE DIRECTIONS
Display of choice:
We have two goals for our work in head-mounted displays. For the short term we would like to make the HMD the "display of choice" in our graphics lab. This means that when somebody has some three-dimensional data which he would like to examine quickly, he would be able to load it into the head-mounted display system with minimal effort and then explore it with the HMD. In its current state, with its many wires and gadgets and its difficult adjustments, all excursions with the HMD must be supervised by experienced personnel.
For the longer term, we would like to research the proposition that the head-mounted display can be a useful means of visualizing virtual worlds. This may seem obvious, but it has been our experience that user preferences cannot always be predicted correctly. If a molecular modeling package comes with the option of using a head-mounted display, will the average chemist, who will not necessarily be as thrilled with new whiz-bang gadgets as we are, really choose to use it? In its current state, our HMD is not ready for such an evaluation. Much work is still required to bring the head-mounted display up to a level where it has a fighting chance of acceptance.
Our current system configuration should permit us to get down to a delay of 100 milliseconds, which is on the borderline of human perception of "instantaneous." As better computing and tracking systems become available, this problem may not be as troublesome as it has been. Another approach is being taken by our collaborators at AFIT, who are attempting to use predictive tracking techniques to reduce the lag effect.
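As a concrete illustration of the predictive idea (our sketch; AFIT's actual predictor is not described here), the simplest scheme extrapolates recent tracker samples forward by the known system lag, so that the image drawn now matches where the head will be when the image appears:

```python
def predict_pose(samples, lag):
    """Linearly extrapolate the two most recent tracker samples ahead
    by `lag` seconds to compensate for display latency.

    samples: list of (time, value) pairs for one tracked coordinate,
    e.g. one Polhemus position or orientation component.
    """
    (t0, x0), (t1, x1) = samples[-2], samples[-1]
    velocity = (x1 - x0) / (t1 - t0)   # rate of change between samples
    return x1 + velocity * lag         # value expected `lag` seconds ahead
```

Such a first-order predictor helps during smooth head motion but can overshoot on abrupt reversals, which is why more sophisticated filtering is an active topic.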
The Pixel-Planes (4) graphics processor does not yet have the capability of generating two separate stereo images, although development of the hardware has begun. We have been getting by with mono images (no stereo disparity or convergence cues), and this has proven to be satisfactory for our Walkthrough application, where interposition, linear perspective, and head motion parallax provide strong depth cueing. The effect is less satisfactory in our molecule docking application, where disparity and convergence would provide valuable cues.
Following the NASA Ames group, we plan to replace our pool-ball mouse with a VPL DataGlove. As many of our anticipated applications are menu-driven to some extent, we are developing pop-up menus using Bezier-defined fonts which can exist in eye space or in model space. Menu selection could be done manually with the DataGlove or through voice input. Audio feedback will also be added, and should prove extremely useful in bump checking of virtual objects.
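A Bezier-defined outline is evaluated by repeated linear interpolation of its control points (de Casteljau's algorithm), which is why such fonts scale cleanly whether placed in eye space or model space. The sketch below is illustrative only; it is not the actual font machinery in our menu code:

```python
def bezier_point(ctrl, t):
    """Evaluate a Bezier curve at parameter t (0 <= t <= 1) by
    de Casteljau's algorithm: repeatedly interpolate between adjacent
    control points until a single point remains.

    ctrl: list of (x, y) control points, any degree.
    """
    pts = list(ctrl)
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]
```

A glyph outline is then a chain of such curves, each sampled at a handful of t values to produce line segments for display.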
The head-mounted display sits differently on different heads. This means that there are interuser variations in gaze direction relative to the Polhemus sensor on the HMD, interocular separation, field of view perceived by the user, and registration between real-world objects and their computer-generated counterparts (e.g. hand / cursor). These variations can be adjusted for in software for each user, but simple, effective calibration schemes must be developed.
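A minimal sketch of how two of these per-user parameters (interocular separation and a gaze-direction correction) might enter the view computation; the parameter names and structure here are ours, not those of the UNC software:

```python
import numpy as np

def eye_offsets(interocular, gaze_correction):
    """Per-user eye viewpoints relative to the tracked head point.

    Each eye is displaced half the interocular distance along the
    user's left-right axis, and a small rotation (gaze_correction,
    a 3x3 rotation matrix) compensates for how the display happens
    to sit on this particular user's head.
    """
    half = interocular / 2.0
    left = gaze_correction @ np.array([-half, 0.0, 0.0])
    right = gaze_correction @ np.array([half, 0.0, 0.0])
    return left, right
```

Calibration then amounts to estimating `interocular` and `gaze_correction` for each wearer, for instance by having the user align a cursor with known real-world points.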
We eagerly await the completion of the next-generation Pixel-Planes 5 graphics engine. The anticipated twenty-fold speed increase and the ability to work with multiple frame buffers will allow us to explore much more complex virtual worlds in real time.
Research in our department is aimed at the development of real, working systems. As an intuitive and natural means of exploring virtual worlds, the head-mounted display holds great promise for improving human-computer interaction. Much work lies ahead, however, before the head-mounted display can become a commonplace tool in the repertoire of problem-solving aids provided by computer graphics.
ACKNOWLEDGEMENT
Our appreciation goes to David Lines for his editorial assistance in the preparation of this paper.
This research was supported by the National Institutes of Health (Grant RR 02170-05), and the Office of Naval Research (Contract No. N00014-860680).
Appeared in Non-Holographic True 3-Dimensional Display Technologies, SPIE Proceedings, Vol. 1083, Los Angeles, CA, January 15-20, 1989.
UNC is an Equal Opportunity/Affirmative Action Institution.
REFERENCES
1. I.E. Sutherland, "The ultimate display," Proceedings of the IFIP Congress 2, 506-508 (1965).
2. I.E. Sutherland, "A head-mounted three-dimensional display," 1968 Fall Joint Computer Conference, AFIPS Conference Proceedings, 33, 757-764 (1968).
3. M.A. Callahan, A 3-D display head-set for personalized computing, M.S. thesis, Dept. of Architecture, Massachusetts Institute of Technology, 110 pp. (1983).
4. S.S. Fisher, M. McGreevy, J. Humphries, and W. Robinett, "Virtual environment display system," Proc. 1986 Workshop on Interactive 3D Graphics, 77-87 (1986).
5. CAE Electronics, Ltd., Introducing the virtual display system you wear, 4 pp. (1986).
6. C.V. Glines, "Brainbuckets," Air Force Magazine, 690, 86-90 (1986).