The Use of Immersive Virtual Reality in the Learning Sciences: Digital Transformations of Teachers, Students, and Social Context
Jeremy N. Bailenson and Nick Yee
Department of Communication
Stanford University
Jim Blascovich and Andrew C. Beall
Department of Psychology
Stanford University
Nicole Lundblad
Department of Symbolic Systems
Stanford University
Michael Jin
Department of Computer Science
Stanford University
The article primarily shows how virtual environments can help teachers engage students more, for example by giving teachers visual cues about which students are not receiving enough eye gaze, and by placing each student in the virtual front and center of the classroom via a head-mounted display. More importantly, this article provides definitions for the various types of environments we will be considering for our project. Below is an excerpt from the article; I will first highlight in italics the most important portion, which appears later in the article. Feel free to read the full excerpt as well:
Virtual environments (VEs) are distinct from other types of multimedia learning environments (e.g., Mayer, 2001). In this article, we define VEs as “synthetic sensory information that leads to perceptions of environments and their contents as if they were not synthetic” (Blascovich et al., 2002, p. 105). Typically, digital computers are used to generate these images and to enable real-time interaction between users and VEs. In principle, people can interact with a VE by using any perceptual channel, including visual (e.g., by wearing a head-mounted display [HMD] with digital displays that project VEs), auditory (e.g., by wearing earphones that help localize sound in VEs), haptic (e.g., by wearing gloves that use mechanical feedback or air blast systems that simulate contact with object VEs), or olfactory (e.g., by wearing a nose piece or collar that releases different smells when a person approaches different objects in VEs).
An immersive virtual environment (IVE) is one that perceptually surrounds the user, increasing his or her sense of presence or actually being within it. Consider a child’s video game; playing that game using a joystick and a television set is a VE. However, if the child were to have special equipment that allowed him or her to take on the actual point of view of the main character of the video game, that is, to control that character’s movements with his or her own movements such that the child were actually inside the video game, then the child would be in an IVE. In other words, in an IVE, the sensory information of the VE is more psychologically prominent and engaging than the sensory information of the outside physical world. For this to occur, IVEs typically include two characteristic systems. First, the users are unobtrusively tracked physically as they interact with the IVE. User actions such as head orientation and body position (e.g., the direction of the gaze) are automatically and continually recorded, and the IVE, in turn, is updated to reflect the changes resulting from these actions. In this way, as a person in the IVE moves, the tracking technology senses this movement and renders the virtual scene to match the user’s position and orientation. Second, sensory information from the physical world is kept to a minimum. For example, in an IVE that relies on visual images, the user wears an HMD or sits in a dedicated projection room. By doing so, the user cannot see objects from the physical world, and consequently it is easier for him or her to become enveloped by the synthetic information.
There are two important features of IVEs that will continually surface in later discussions. The first is that IVEs necessarily track a user’s movements, including body position, head direction, as well as facial expressions and gestures, thereby providing a wealth of information about where in the IVE the user is focusing his or her attention, what he or she observes from that specific vantage point, and what are his or her reactions to the environment. The second is that the designer of an IVE has tremendous control over the user’s experience and can alter the appearance and design of the virtual world to fit experimental goals, providing a wealth of real-time adjustments to specific user actions.
Collaborative virtual environments (CVEs) involve more than a single user. CVE users interact via avatars. For example, while in a CVE, as Person A communicates verbally and nonverbally in one location, the CVE technology can nearly instantaneously track his or her movements, gestures, expressions, and sounds. Person B, in another location, sees and hears Person A’s avatar exhibiting these behaviors in his or her own version of the CVE when it is networked to Person A’s CVE.
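The track-then-render cycle the excerpt describes for IVEs can be sketched in a few lines of Python. This is my own toy model, not code from the article; all class and function names here are invented, and a real system would, of course, rasterize actual imagery rather than log poses:

```python
from dataclasses import dataclass

@dataclass
class HeadPose:
    """Position (x, y, z) and orientation (yaw, pitch, roll) of the user's head."""
    position: tuple
    orientation: tuple

class ImmersiveLoop:
    """Minimal model of the IVE cycle: sense the user's head movement,
    then re-render the virtual scene to match the new vantage point."""
    def __init__(self):
        self.rendered_frames = []

    def render(self, pose):
        # A real renderer would draw the scene from this viewpoint;
        # here we just record which pose each frame was rendered from.
        self.rendered_frames.append(pose)

    def run(self, tracked_poses):
        # Every tracked movement triggers a scene update, keeping the
        # imagery locked to the user's position and orientation.
        for pose in tracked_poses:
            self.render(pose)

# Simulated tracker samples: the user gradually turns his or her head.
samples = [HeadPose((0, 1.7, 0), (yaw, 0, 0)) for yaw in (0, 15, 30)]
loop = ImmersiveLoop()
loop.run(samples)
print(len(loop.rendered_frames))  # one rendered frame per tracked sample
```

The point of the sketch is the coupling the article emphasizes: tracking data drives rendering on every update, which is what makes the synthetic world perceptually surround the user.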
An affordance of virtual environments in learning is that they can be tailored to the audience to encourage learning. One example featured in the article is a dollhouse that encouraged children to tell stories, where a same-age virtual avatar was the teaching agent rather than an authoritative teacher figure. This opens up the potential in our virtual environment to encourage users to participate with others via dancing, jumping, chanting, etc. Imagine a virtual item that encourages jumping: sensing the jumping moves a fireplace bellows visual up and down, pumping air into a balloon that explodes when full, creating particles that float to the beat of the music. This could encourage participation more than traditional ‘dancing’ and draw in those who don’t normally move around otherwise.
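The jump-to-balloon mechanic above could be prototyped with very little state. This is a toy sketch of my own idea, with invented names and thresholds; a real implementation would hook `on_jump_detected` to the headset's tracker (e.g., firing when head height spikes upward):

```python
class BalloonGame:
    """Toy model of the jump-driven balloon: each detected jump pumps
    air into a virtual balloon; at capacity it pops into particles."""
    def __init__(self, capacity=5, particles_on_pop=100):
        self.capacity = capacity            # jumps needed to pop the balloon
        self.particles_on_pop = particles_on_pop
        self.air = 0
        self.particles = 0
        self.popped = False

    def on_jump_detected(self):
        # Called by the tracking system whenever the user jumps.
        if self.popped:
            return
        self.air += 1                       # bellows pushes air into the balloon
        if self.air >= self.capacity:
            self.popped = True
            self.particles = self.particles_on_pop  # burst into floating particles

game = BalloonGame(capacity=3)
for _ in range(3):
    game.on_jump_detected()
print(game.popped, game.particles)  # True 100
```

Making the pop threshold explicit also makes the reward tunable: a low capacity gives quick payoff for shy participants, while a higher one sustains group jumping longer.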
According to this article, co-learners enhance learning through dialogue and shared experience.
VEs offer the ability to provide multiple perspectives of the same scene. Visualizations in VEs can also act as visual cues when integrated with other sensory technology. Another advantage is that, with user testing, behavioral profiles can be built once the user type is analyzed, enhancing usability.