[MUSIC] Hi, I'm David Swapp, I'm the manager of the Immersive Virtual Reality Lab here at University College London, and I'm just going to show you our CAVE facility. This is a four-wall projected VR facility, a bit of a contrast to head-mounted displays in a number of ways. Crucially, the difference here is that instead of making you wear the displays, we just project an environment around you. So you can see an environment here: it's a room, it has some furniture in it, tables, chairs, a clock on the wall, that kind of thing. And people not wearing a headset can see it. So although it is still a single-person, single-user facility, more than one person can see the environment in here.

The way it works is that three of these walls are back-projected, which means they don't cast a shadow; the projections come from behind the wall. The floor is projected from above, just because we don't have room here to back-project it. It's the only one that casts a shadow, and the shadow is cast slightly behind you, so people tend not to see it.

For the 3D, we have these tracked active stereo glasses. What I mean by active stereo is that instead of using the polarizing technology you might get with 3D cinema, where you have two projectors with different polarizations, we use a single projector for each screen, and it alternately puts out a left-eye image and a right-eye image very, very quickly, at 100 frames a second. What these glasses do is use liquid crystal lenses that alternately block out the left and then the right eye, so that each eye only sees the appropriate view.

As well as that, of course, they have to display a perspective-correct view, so for that the glasses are tracked. You'll see that as I move them around, we get a different perspective view from the floor down there. So the person wearing these has a perspective-correct view of the environment all the time. The other aspect of this, of course, is that the person needs an accurate perspective of the world, and for this we need a tracked point of view for each of the eyes.

There are lots of different ways you can do this tracking, of course. Here we have camera-based optical tracking. The six cameras you see around here with the red LEDs are bathing this environment in near-infrared light, and the near-infrared cameras are picking up the positions of these retro-reflective markers. They're arranged in a pattern that the system can recognize, so from the six different views you can very accurately get the position and orientation of this device in space. That information is fed back to the computer, and it will draw the scene from the correct orientation.

This differs from what you'd normally do if you were just setting up 3D graphics on a desktop computer. There you'd set up your camera with a viewing frustum that sits about 40 centimeters in front of the screen, and everything will look about right. Here, we've got eight different viewing frustums, one for each eye for each of these screens, and they're constantly changing shape. These are off-axis projections that all have to be recalculated every frame as the viewpoint changes (a sketch of this calculation appears at the end of this segment). So you can see how the point of view moves through the scene: if I put this down on the floor, we have a view from the floor.

We also have, for interaction, another tracked device, and you can have as many of these as you like. If you wanted to do full body motion capture, you would put these markers on a motion capture suit all over your body.
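To make those constantly changing off-axis frustums concrete, here is a minimal sketch of the standard generalized perspective projection construction, written in Python with NumPy. It is not the lab's own code: the function names, wall dimensions and near/far values are all illustrative assumptions. Each wall is described by three of its corners in tracker coordinates, and the tracked eye position is supplied afresh every frame.

```python
import numpy as np

def frustum(l, r, b, t, n, f):
    """Standard OpenGL-style asymmetric (off-axis) frustum matrix."""
    return np.array([
        [2*n/(r-l), 0.0,        (r+l)/(r-l),  0.0],
        [0.0,       2*n/(t-b),  (t+b)/(t-b),  0.0],
        [0.0,       0.0,       -(f+n)/(f-n), -2*f*n/(f-n)],
        [0.0,       0.0,       -1.0,          0.0],
    ])

def cave_wall_projection(pa, pb, pc, pe, near=0.1, far=100.0):
    """Projection matrix for one CAVE wall and one tracked eye.

    pa, pb, pc: world-space corners of the wall (lower-left, lower-right,
    upper-left). pe: world-space eye position reported by the tracker.
    This has to be recomputed every frame as the head moves.
    """
    pa, pb, pc, pe = (np.asarray(v, dtype=float) for v in (pa, pb, pc, pe))
    # Orthonormal basis of the screen plane.
    vr = pb - pa; vr /= np.linalg.norm(vr)            # screen right
    vu = pc - pa; vu /= np.linalg.norm(vu)            # screen up
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)   # screen normal, towards the viewer
    # Vectors from the eye to the wall corners, and the eye-to-wall distance.
    va, vb, vc = pa - pe, pb - pe, pc - pe
    d = -np.dot(va, vn)
    # Extents of the asymmetric frustum on the near plane.
    l = np.dot(vr, va) * near / d
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d
    P = frustum(l, r, b, t, near, far)
    # Rotate the world so the wall lies in the xy-plane, then put the eye at the origin.
    M = np.eye(4); M[:3, :3] = np.vstack((vr, vu, vn))
    T = np.eye(4); T[:3, 3] = -pe
    return P @ M @ T

# Hypothetical example: the front wall of a 3 m cube, one eye of the tracked glasses.
# A four-wall, two-eye setup needs eight of these matrices every frame.
P_front_left = cave_wall_projection(pa=(-1.5, 0.0, -1.5), pb=(1.5, 0.0, -1.5),
                                    pc=(-1.5, 3.0, -1.5), pe=(-0.03, 1.6, 0.3))
```

The left-eye and right-eye positions are typically the tracked glasses' position offset sideways by half the interpupillary distance, so each eye gets its own frustum per wall.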
Here, we've just got a tracking device that's tracking the position of my hand, so you can see this pointing device here. It also has some buttons on it, so we can program them to do different things. In this environment we've programmed it to let us unlock this box and open its lid, but you might use it for navigation, for selecting objects, for making things happen, depending on the application you have (a sketch of this kind of button mapping appears at the end of this segment).

One of the main contrasts with head-mounted displays, and this is a consequence of not wearing the displays, is that here we're capable of providing a much wider field of view than you'll get with a head-mounted display. Typical head-mounted displays that you have now will give you between 100 and 120 degrees field of view. In here, if you stand in the middle, you have 270 degrees horizontal field of view. Of course, head-mounted displays will give you a complete range of view: no matter where you look, you're looking at the environment. In here, just because of the design of this, if I look up I'm looking out of the environment, and if I look back this way, I'm looking out of the environment.

Another consequence of this is that you can get away with lower frame rates in here. With a head-mounted display you really want to be running at a minimum of 60 hertz, probably 90 hertz, to have smooth-looking graphics as you look around. In here, because you're not wearing it, when you do rotations especially, turning your head, what you're going to look at is already there, so you have a greater tolerance for lower frame rates.

Another consequence of projection setups, compared to head-mounted displays, is that in here you can see your own body. So when I look around in here I can see my own hands. Depending on your application, this can be an advantage. Sometimes with head-mounted displays it's better not to be able to see your own body. For example, if you want a virtual body, an avatar, you obviously have the overhead of having to create that and having to track it. On the other hand, if you want to do interactions with virtual objects, it's very much easier to trick people into thinking that the positioning is accurate when they can't see their real body; the overhead of lining your real body up with virtual objects, as you have to in here, is far greater. You also run into what we call accommodation-vergence problems. For example, if I look out here at the edge of this table, to me it looks like my hand is in the same place as it. However, I can't quite focus on both my hand and the edge of the table at the same time, because physically they're at different distances.

Having said that this is a single-user experience, in that only one person can be wearing a tracked pair of glasses and getting the correct perspective view, you can of course have small groups of people in here. They can wear the stereo glasses, and if they stand near to the person being tracked, they will have an almost perspective-correct view. This is quite good for things like design review in engineering or architecture, where you might have groups of people who want to talk to each other. And of course, they don't have the isolation of wearing head-mounted displays; they can see each other in here.
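As an aside on the programmable wand buttons mentioned above: binding buttons to application actions is usually a small piece of application code. The sketch below is purely illustrative Python; the class and method names are invented here and are not part of the lab's actual software.

```python
from typing import Callable, Dict

class Wand:
    """A tracked hand-held device: a 6-DOF pose from the optical tracker
    plus a few buttons that the application can program."""

    def __init__(self) -> None:
        self.position = (0.0, 0.0, 0.0)                  # updated from the tracker each frame
        self.orientation = (0.0, 0.0, 0.0, 1.0)          # quaternion, likewise
        self._actions: Dict[int, Callable[[], None]] = {}

    def bind(self, button: int, action: Callable[[], None]) -> None:
        """Program a button to trigger an application-specific action."""
        self._actions[button] = action

    def on_press(self, button: int) -> None:
        """Called by the input loop when the tracker reports a button press."""
        if button in self._actions:
            self._actions[button]()

# In the demo environment the buttons unlock the box and open its lid;
# in another application the same buttons might drive navigation or selection.
wand = Wand()
wand.bind(0, lambda: print("unlock the box"))
wand.bind(1, lambda: print("open the lid"))
wand.on_press(0)   # -> unlock the box
```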
One other aspect, which plays out in different ways, is the problem of simulator sickness. This affects all modes of virtual reality, head-mounted displays and CAVEs. In here, you have the advantage on the one hand that, because you're not wearing the screens, you can cope better with the latency of rotation, so you don't get this big wash of graphics when you turn your head, which can be quite disorienting, especially if there's some latency there. On the other hand, you also have a very, very wide field of view, so you get that stimulation to the periphery of your eye where all the motion sensitivity is. So actually, when you do a lot of navigation in here, you can make motion sickness problems worse. So again, this comes down to good application design to avoid these problems. [MUSIC]