[MUSIC] In a previous lesson, we learned how to create and draw 3D objects in a virtual environment using OpenGL ES. However, as most Android devices only have 2D screens, the 3D objects have to be projected onto a 2D plane and displayed on the screen. A user cannot really see the 3D shape, so we have to use shading, color, and motion effects to convey it. In this lesson, I'm going to show you how to enable virtual reality with a binocular view using OpenGL ES. Before we look into the binocular view, let me explain briefly how we perceive 3D effects. We perceive 3D, or depth, through a number of visual cues, such as perspective, where an object appears smaller when it is further away; occlusion, where a closer object occludes objects that are further away; detail, where more detail of an object can be seen when it is closer; shading, where the shading of an object, or light cast on it, reveals its 3D shape; shadow, where the shadow of an object closer to the light source appears bigger; motion, where an object closer to the viewer appears to move faster than objects further away; and binocular disparity, or stereo vision, where the disparity between the views from the left and right eyes can be used to infer depth. This is how our visual system perceives depth or 3D, and it is also how all virtual reality and augmented reality goggles work. Let's focus on binocular disparity, or stereo vision. The distance between the left and right eyes is called the interocular distance. The typical interocular distance for adults is about 6.3 centimeters. To create binocular views of a scene, we can use a simple offset technique, creating two views where one is shifted by the interocular distance. In other words, the projection matrices are the same for both views.
However, this will result in only part of the object being seen in one of the two views, which leads to discomfort for the user. Another approach to create binocular vision is the toe-in approach, where the views are shifted as in the offset technique mentioned before, but are also rotated towards the focus point. Although this seems to provide more accurate binocular views, it introduces a slight vertical parallax and causes discomfort when looking at the scene for long periods of time. The best method to create binocular vision is the off-axis approach, where the projection matrix of each view is adjusted so that each eye sees the screen or object from a different viewpoint, creating a binocular disparity effect. As shown in this figure, IOD stands for the interocular distance, and the frustum shift refers to the shifting of the views towards the screen or object. The frustum shift can be calculated as -IOD/2 × nearZ/screenZ, where nearZ is the z-coordinate, or depth, of the near clipping plane of the frustum, and screenZ is the z-coordinate of the screen. The variable aspect is the aspect ratio of the screen, which is the width of the screen divided by its height. The left and right boundaries of the left eye's frustum are then frustumShift - aspect and frustumShift + aspect, with the top and bottom boundaries at 1 and -1. The left and right boundaries of the right eye's frustum are -aspect - frustumShift and aspect - frustumShift, again with top and bottom at 1 and -1. We can use these to create the projection matrices for the two views. For the view matrix, we need to set the locations of the eyes. The coordinates of the left eye can be set to (-IOD/2, 0, 0.1), where 0.1 is the z-location of the eye, and the coordinates of the right eye to (IOD/2, 0, 0.1). This means that both eyes are located at the same depth, side by side. To create a binocular or stereo view of the scene, we can use the frame buffer technique.
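The frustum-shift formula and the per-eye boundaries above can be sketched in plain Java. This is only a sketch of the arithmetic; the class and method names are my own, while the variable names (IOD, nearZ, screenZ, aspect, frustumShift) follow the lesson.

```java
// Off-axis stereo frustum arithmetic, following the lesson's formulas.
public class StereoFrustum {

    // frustumShift = -IOD/2 * nearZ / screenZ
    public static double frustumShift(double iod, double nearZ, double screenZ) {
        return -(iod / 2.0) * nearZ / screenZ;
    }

    // Near-plane boundaries {left, right, bottom, top} for the left eye.
    public static double[] leftEyeFrustum(double shift, double aspect) {
        return new double[] { shift - aspect, shift + aspect, -1.0, 1.0 };
    }

    // Near-plane boundaries {left, right, bottom, top} for the right eye.
    public static double[] rightEyeFrustum(double shift, double aspect) {
        return new double[] { -aspect - shift, aspect - shift, -1.0, 1.0 };
    }

    public static void main(String[] args) {
        // Example values only (world units, not centimeters).
        double iod = 0.8, nearZ = 3.0, screenZ = 10.0, aspect = 1.0;
        double shift = frustumShift(iod, nearZ, screenZ);  // -0.12
        double[] left = leftEyeFrustum(shift, aspect);     // {-1.12, 0.88, -1, 1}
        double[] right = rightEyeFrustum(shift, aspect);   // {-0.88, 1.12, -1, 1}
        System.out.println(shift + " " + left[0] + " " + right[1]);
    }
}
```

Note how the two frustums are mirror images of each other: the left eye's frustum is shifted one way and the right eye's the other, which is what produces the horizontal disparity.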
Hence, we can create two frame buffers, one for each eye, and then project the 3D object onto each frame buffer using the calculated projection matrix of each eye. Let's look into how to create such binocular views in Android. I first create a new Java class called StereoView and define the variables: depthZ, which is the z-location of the object; aspect, the screen aspect ratio; nearZ and farZ, the near and far clipping planes of the frustum; screenZ, the z-location of the screen; the interocular distance IOD; the frustumShift; and modelTranslation, which is the shift of the object model. To draw a 3D object onto the frame buffer, we need to calculate the projection matrix and the view matrix. As mentioned earlier, the left and right boundaries of the left eye's frustum are frustumShift - aspect and frustumShift + aspect, and I use these to set the ProjectionMatrix for the left view in the constructor of the StereoView class. Then I set the left eye's location to (-IOD/2, 0, 0.1) for the ViewMatrix of the left view. I set modelTranslation to IOD/2 to shift the model along the x-axis. The same method applies to calculate the ProjectionMatrix and the ViewMatrix for the right view. To show the frame buffer, we need another projection matrix, which I name mDisplayProjectionMatrix. This projection matrix is mainly used to project the 2D texture of the frame buffer onto the screen. Hence, I use an orthographic projection matrix. As in the frame buffer examples before, I set the ViewMatrix and scale the ModelMatrix according to the aspect ratio. For the left view, I translate the ModelMatrix to the left by 1, and for the right view, I translate the model to the right. Then I use the ProjectionMatrix, ViewMatrix, and ModelMatrix to calculate the mMVPMatrix for drawing the 2D plane. For the shaders of the StereoView object, as in the previous mirror or reflection example, a simple texture-mapping vertex and fragment shader is used to create the StereoView.
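To make the effect of the off-axis boundaries concrete, here is a plain-Java version of the perspective projection matrix that Android's Matrix.frustumM builds from those boundaries (column-major, standard OpenGL glFrustum formula). Only one entry, m[8] = (right + left)/(right - left), differs between the two eyes; every other entry is identical. The class name is my own, and the values in main are example numbers, not the lesson's actual settings.

```java
// Plain-Java equivalent of the glFrustum / Matrix.frustumM projection matrix,
// shown to highlight which entry the off-axis stereo shift actually changes.
public class OffAxisProjection {

    // Column-major 4x4 perspective projection from near-plane boundaries.
    public static float[] frustum(float l, float r, float b, float t,
                                  float n, float f) {
        float[] m = new float[16];           // all other entries stay 0
        m[0]  = 2f * n / (r - l);
        m[5]  = 2f * n / (t - b);
        m[8]  = (r + l) / (r - l);           // horizontal skew: the stereo term
        m[9]  = (t + b) / (t - b);
        m[10] = -(f + n) / (f - n);
        m[11] = -1f;
        m[14] = -2f * f * n / (f - n);
        return m;
    }

    public static void main(String[] args) {
        float aspect = 1f, shift = -0.12f;   // example frustumShift value
        float[] leftEye  = frustum(shift - aspect, shift + aspect, -1f, 1f, 3f, 100f);
        float[] rightEye = frustum(-aspect - shift, aspect - shift, -1f, 1f, 3f, 100f);
        // The two eyes get equal and opposite horizontal skews.
        System.out.println(leftEye[8] + " vs " + rightEye[8]);
    }
}
```

With aspect = 1, the skew term works out to exactly frustumShift for the left eye and -frustumShift for the right, which is why adjusting only the frustum boundaries is enough to create the disparity.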
To simplify the programming, I create a function called getModelMatrix. This function is used to calculate the model matrix for drawing the object in the frame buffer of the left or right view. It takes rotatex, rotatey, and rotatez as parameters, which are the rotation angles about the x, y, and z-axes set by the user. I then create the rotation matrices accordingly and multiply them with the mFrameModelMatrix, which I defined earlier, to calculate the pModelMatrix. The function then returns the pModelMatrix for drawing the object. In the draw function, the program simply draws a 2D plane with the frame buffer texture mapped onto it to show the projected left or right view. In the MyRenderer Java class, I add two StereoView objects, namely mleftview and mrightview. I create these two objects in the onSurfaceChanged function. In the onDrawFrame function, similar to the previous mirror or reflection example, for the left view I first bind the frame buffer, set the viewport, calculate the model-view-projection matrix, draw the object, and then unbind the frame buffer. Notice that I use the getModelMatrix function of mleftview to calculate the ModelMatrix. The same method applies to draw the object in the frame buffer for the right view. After I finish drawing the object onto the frame buffers, I set the viewport back to the size of the screen and call the draw functions of mleftview and mrightview to draw the projected views onto the screen. As an example, I use a 360 camera image taken at Imperial College London as the texture of a sphere. When running the program to draw this 3D sphere, two views are shown on the screen, where the left view shows slightly more on the left and the right view shows slightly more on the right. As you can see, a small part of our blue faculty building is only shown in the top-right corner of the right view. This disparity between the left and right views enables a user to see 3D.
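The onDrawFrame flow described above can be outlined roughly as follows. This is a minimal sketch, not the lesson's actual code: mleftview, mrightview, and getModelMatrix come from the lesson, but the frame buffer IDs, the getViewMatrix/getProjectionMatrix accessors, mObject, and the size fields are hypothetical helpers assumed to exist in the style of the earlier frame buffer examples.

```java
// Sketch of the stereo render pass inside MyRenderer.onDrawFrame().
@Override
public void onDrawFrame(GL10 unused) {
    // --- Left eye: render the scene into the left view's frame buffer ---
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, mLeftFboId);
    GLES20.glViewport(0, 0, mFboWidth, mFboHeight);
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
    float[] model = mleftview.getModelMatrix(mRotateX, mRotateY, mRotateZ);
    Matrix.multiplyMM(mMVMatrix, 0, mleftview.getViewMatrix(), 0, model, 0);
    Matrix.multiplyMM(mMVPMatrix, 0, mleftview.getProjectionMatrix(), 0, mMVMatrix, 0);
    mObject.draw(mMVPMatrix);
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);   // unbind

    // --- Right eye: repeat the same steps with mrightview and mRightFboId ---

    // --- Compose: show both frame buffer textures side by side on screen ---
    GLES20.glViewport(0, 0, mScreenWidth, mScreenHeight);
    mleftview.draw();    // 2D plane translated left by 1
    mrightview.draw();   // 2D plane translated right by 1
}
```

The key point is the order: each eye's scene is drawn into its own frame buffer first, and only then are the two resulting textures drawn side by side with the orthographic display projection.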
To see the 3D effects of a binocular view, you need to use VR goggles like these. You can run your program on an Android phone and insert the phone into the VR goggles. You will then be able to see the 3D effect through them. The VR goggles basically help separate and align the left and right views of the screen for your left and right eyes. In the next video, I'll show you how to implement this in an example program. [MUSIC]