
7: Augmented Virtuality: Bringing The Material Into The Virtual

The author describes a feature used by a greeting card company that enables the user to play an animation: when the user holds the card in front of a Web camera, the front of the card is replaced with an animated scene on the computer screen, while the rest of the scene remains unchanged. That is, the user sees on the computer screen himself and his real environment, but the paper card he is holding in the image is replaced by an animation (a character who performs a musical number), as if he were holding a small video screen.
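(EN: The book doesn't say how the card company implemented this, but the general mechanism - detect a printed marker in the webcam image, then warp an animation frame over the card's outline while leaving the rest of the scene untouched - can be sketched with OpenCV's ArUco marker module. The marker dictionary, file name, and single-frame overlay below are illustrative assumptions, and the ArUco API differs somewhat between OpenCV versions.)

    import cv2
    import numpy as np

    cap = cv2.VideoCapture(0)                      # the user's web camera
    overlay = cv2.imread("animation_frame.png")    # one frame of the animated scene (hypothetical file)

    # Detect a printed fiducial marker standing in for the front of the card.
    aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    detector = cv2.aruco.ArucoDetector(aruco_dict, cv2.aruco.DetectorParameters())

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        corners, ids, _ = detector.detectMarkers(frame)
        if ids is not None:
            # Use the first detected marker's corners as the card's outline.
            card = corners[0][0].astype(np.float32)
            h, w = overlay.shape[:2]
            src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
            m = cv2.getPerspectiveTransform(src, card)
            size = (frame.shape[1], frame.shape[0])
            warped = cv2.warpPerspective(overlay, m, size)
            mask = cv2.warpPerspective(np.full((h, w), 255, np.uint8), m, size)
            frame[mask > 0] = warped[mask > 0]     # replace only the card region
        cv2.imshow("augmented virtuality", frame)
        if cv2.waitKey(1) == 27:                   # Esc to quit
            break

    cap.release()
    cv2.destroyAllWindows()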

The key quality of this concept of "augmented virtuality" is that the user remains in his real environment, but digital technology projects some element into his view of that reality.

While both the greeting card company and, in another example he provides, a sports card company that does something similar call their products "augmented reality," the author suggests they are not - in fact, he maintains that augmented virtuality is the "exact opposite" of augmented reality.

(EN: I'm not sure I quite get the distinction yet. The notion of augmented reality involves overlaying data onto a real-world depiction, and these examples seem to fit that definition as well. The chief difference I see is that AR deals with depicting real data for the purposes of knowing more than can be seen while AV depicts something fictional and reality is merely a setting for it. Perhaps the difference will become clearer as the chapter progresses?)

Bodily Engagement

Nintendo's concept for its "Wii" gaming console (later mimicked by other game producers) was to provide the player with a stick-like controller that could be physically manipulated - to swing a golf club, the user didn't make some artificial gesture using a joystick and buttons, but swung the controller itself as if it were a club. The idea, which worked brilliantly, was to appeal to the non-gamer by giving them a more intuitive way to interact with the game.

Looking backward, the "Guitar Hero" game did something similar by providing the user a device that could be held and manipulated in a way similar to a guitar; as did "Dance Dance Revolution" by using a physical platform that enables the user to interact with the game by stepping on specific spots of the platform itself. And looking forward, Sony offers a "Move" system that eliminates the physical controller by using a camera to recognize the player's physical body and interpret movements.

There's a side note about the use of technology to stimulate physical activity rather than substitute for it - a player can actually work up quite a sweat - and a number of fitness applications are being developed so that the game can serve as a tutor or coach for people who wish to exercise in their own homes.

(EN: I have serious doubts about whether a person gets quite the same workout playing golf or tennis in their living room as they do from the actual activity - but I would concede that anything is better than nothing in terms of physical activity, and there is some health benefit even in a purely calisthenic routine.)

The technologies we are seeing today seem fairly basic, but they are the first steps toward more wondrous things to come: freeing games from the joystick and computers from the mouse and keyboard, allowing a person greater latitude in interacting with a smarter device, has a great deal of potential.

From the Body to the Mind

While the previous examples point to use of technology focused on the body, other firms are leveraging similar technology in a learning environment to focus primarily upon the mind. Especially for children, using a game-like interface and physical activity takes some of the tedium out of learning basic reading and math skills that have traditionally been taught by rote.

In one example, similar to the greeting card example that opened the chapter, students hold a card over their bodies and see, on a computer screen, internal organs projected over the card, based on where it is held.

Some work-related applications either replace the mouse with physical gestures, or enable the user to manipulate a box-like controller to move and rotate a 3D model that appears on screen.

Another example shows the latter technology used in an art museum, to enable the user to manipulate objects that are in display cases and see them at various angles without touching the actual artifact.

Another example enables doctors to interact with medical imaging, turning x-rays, MRI scans, and CAT scans into three-dimensional models rather than two-dimensional pictures, and in that way getting a better look at what is going on inside the body.

Expanding the Senses

Previous examples described innovative methods for providing data to the computer, but in each case data is provided from computer to user through only one sense: sight.

Firms are experimenting with haptic interfaces, which use a device such as a glove or body suit not only for input to the computer but also for feedback to the user: the device enables the user to "feel" pressure as if they were physically touching the object in question.

(EN: console-based video games have leveraged this as far back as the 1980s, using steering wheels that would vibrate to simulate the conditions of a road surface, or seats that would tilt to mimic the centrifugal forces as the user takes a curve in the road. The "motion rides" in amusement parks and "Sensurround" theaters also use physical sensation to enhance the experience; if memory serves, the latter was in use back in the mid-1970s.)

There is much experimentation but little commercialization, except in the medical field, where equipment for both training and practice provides touch feedback for procedures that use laparoscopes or robotics inside the patient's body, where the instruments are invisible to the surgeon.

There's also a niche use for training race car drivers, using a driving simulator that enables the driver to pilot, and engineers to tune, a virtual vehicle on a simulated version of a real track in preparation for a race. A few venues use the same equipment for entertainment purposes.

Eliminating the Device

The author considers common sensors as ways to eliminate the device itself: using RFID tags, Bluetooth, or GPS positioning to locate a given object within a physical space. These can be attached to existing objects or built into new ones: for example, a house key that sends a message to a parent to let them know their child (and which of their children) has arrived home.
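(EN: The book doesn't describe the plumbing behind the house-key example, but the pattern is simple: a reader at the door reports a tag ID, a lookup table maps tags to children, and a message goes out. A minimal sketch in Python, in which the tag IDs, phone number, and the notify() stand-in are all hypothetical:)

    # Hypothetical sketch: an RFID reader at the front door reports tag IDs,
    # and each tag is registered to a specific child's house key.
    # handle_tag() and notify() stand in for whatever reader library and
    # messaging service would actually be used.

    KEY_OWNERS = {
        "04:A2:3F:11": "Alice",
        "04:B7:90:C4": "Ben",
    }

    def notify(parent_phone: str, message: str) -> None:
        # Placeholder: in practice this would call an SMS or push-notification API.
        print(f"to {parent_phone}: {message}")

    def handle_tag(tag_id: str) -> None:
        child = KEY_OWNERS.get(tag_id)
        if child is None:
            return  # unknown tag: ignore
        notify("+1-555-0100", f"{child} has arrived home.")

    # Example: the reader at the door sees Ben's key.
    handle_tag("04:B7:90:C4")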

One technology provides very small sensors that can be attached to any object: "we can make anything into a 3D interaction device," they claim. The author marvels that you "can't get more general-purpose" than that.

Camera-based systems eliminate even the need for a physical tag. The author mentions Sony's PSP console, to which a camera can be attached that the player can calibrate to recognize his body, then gesture and pose to control on-screen elements of the game.

Microsoft's Kinect system for their Xbox console mimics this, and extends it by using a second infrared camera to sense depth as well as a microphone to process voice commands from the player. A few comments are provided to show how awestruck people are by its ability to accurately recognize vocal commands and fine gestures.
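(EN: The book describes the result rather than the programming model, but the underlying pattern for any such camera-based system is to read tracked joint positions each frame, compare them, and fire a command when a pose is recognized. The sketch below is a generic illustration with made-up skeleton data, not the actual Kinect SDK.)

    from dataclasses import dataclass

    # Hypothetical skeleton data: any depth-camera SDK ultimately yields 3D joint
    # positions per frame; only a few joints are needed for a simple gesture.

    @dataclass
    class Joint:
        x: float
        y: float  # up is positive
        z: float  # distance from the camera

    def hand_raised(joints: dict[str, Joint]) -> bool:
        """A toy gesture: the right hand held above the head."""
        return joints["hand_right"].y > joints["head"].y

    # One frame of (made-up) tracking data.
    frame = {
        "head":       Joint(0.00, 1.60, 2.0),
        "hand_right": Joint(0.25, 1.75, 1.9),
    }

    if hand_raised(frame):
        print("gesture recognized: select")   # would trigger an on-screen action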

(EN: there's an extended first-person account from the co-author that I will skip, as it merely illustrates what's already been described. That's followed with some optimistic statements about the potential for this type of interface to revolutionize the way we interact with technology, which is too vague to annotate.)

Applying Augmented Virtuality

The author at last points out that there is a dearth of viable augmented virtuality innovations to date. Many are gimmicky, relying on fascination with the technical implementation while providing little value to the user that couldn't easily (and better) be delivered otherwise. As such, this realm "begs" for further innovation.