I have talked about how to use the inertial sensors to rotate the view of a 3D scene. Such techniques are commonly used in creating virtual reality apps. Before we look into methods to create virtual reality apps on Android, I'll first talk about the frame buffer, which is an essential tool for creating virtual reality effects.

The OpenGL pipeline outlines how graphics are processed in the GPU of your Android device. This module assumes you understand how to program the vertex and fragment shaders with OpenGL to draw 3D graphics into the frame buffer for display on the screen. If you are not sure about this or would like a refresher, please refer to my OpenGL course, which covers all of this in detail. To quickly recap how the pipeline works: we first define the vertices of an object and pass them to the vertex shader. The vertex shader transforms the vertices, the primitives are then assembled and rasterized, and the fragment shader computes the color of each fragment. After depth testing and blending, the object is drawn into the frame buffer, which is then displayed on the screen. However, instead of displaying onto the screen, we can also use a frame buffer to draw objects off-screen for generating multiple views, shadows, and other visual effects.

Let's take a closer look at the frame buffer. To draw an object accurately in 3D, we need both color and depth information. So in this case the character A will be shown in front of the S: part of the yellowish surface of the character S is hidden behind the bluish-green surface of the character A. To create a frame buffer, we have to attach a color buffer and a depth buffer. The color buffer can be in the form of a texture or a render buffer, while the depth will be stored in a render buffer.

To create a new frame buffer, we first call glGenFramebuffers to generate frame buffers. Bear in mind that we create this new frame buffer for drawing objects off-screen, so it will not be shown on the screen. However, we can project its contents onto a surface to be drawn on the screen. In this case, I generate one frame buffer. For storing the color information in the frame buffer, we can use a texture, so I first generate one texture and store its handle in frameBufferTextureID. Then I initialize the texture; I will explain the initializeTexture function later. Basically, I initialize the texture with the specified width and height and the pixel format RGBA. I then bind the frame buffer, telling the graphics processor that I'm going to use this newly generated frame buffer instead of the one for displaying on the screen, and call glFramebufferTexture2D to attach the newly created texture to the frame buffer as color attachment zero. For the depth information, I generate a render buffer, bind it, and set its storage format to a 24-bit depth component with the specified width and height. Then I call glFramebufferRenderbuffer to attach the render buffer to the frame buffer. After attaching the texture and render buffer, I check the status of the frame buffer to see whether it has been created successfully. Once the frame buffer is created, we can unbind the texture and the frame buffer so that subsequent drawing goes to the default frame buffer for display on the screen. After generating the texture, we need to bind it and set its parameters, so I'll define a function called initializeTexture.
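Putting these steps together, here is a minimal sketch of the setup code. It assumes an Android OpenGL ES 3.0 context, since GL_DEPTH_COMPONENT24 requires ES 3.0 (on ES 2.0 you would use GL_DEPTH_COMPONENT16 instead). Apart from frameBufferTextureID and initializeTexture, which the lecture names, the class and field names here are hypothetical, not the course's exact code.

```java
import android.opengl.GLES20;
import android.opengl.GLES30;

public class FrameBufferHelper {
    // Hypothetical field names for the GL object handles.
    private final int[] frameBufferID = new int[1];
    private final int[] frameBufferTextureID = new int[1];
    private final int[] renderBufferID = new int[1];

    public void createFrameBuffer(int width, int height) {
        // Generate one off-screen frame buffer.
        GLES20.glGenFramebuffers(1, frameBufferID, 0);

        // Generate a texture to hold the color information.
        GLES20.glGenTextures(1, frameBufferTextureID, 0);
        initializeTexture(GLES20.GL_TEXTURE1, frameBufferTextureID[0], width, height);

        // Bind the new frame buffer so the attachments apply to it,
        // not to the default frame buffer shown on screen.
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, frameBufferID[0]);
        GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER,
                GLES20.GL_COLOR_ATTACHMENT0, GLES20.GL_TEXTURE_2D,
                frameBufferTextureID[0], 0);

        // Attach a render buffer with a 24-bit depth component.
        GLES20.glGenRenderbuffers(1, renderBufferID, 0);
        GLES20.glBindRenderbuffer(GLES20.GL_RENDERBUFFER, renderBufferID[0]);
        GLES20.glRenderbufferStorage(GLES20.GL_RENDERBUFFER,
                GLES30.GL_DEPTH_COMPONENT24, width, height);
        GLES20.glFramebufferRenderbuffer(GLES20.GL_FRAMEBUFFER,
                GLES20.GL_DEPTH_ATTACHMENT, GLES20.GL_RENDERBUFFER,
                renderBufferID[0]);

        // Check that the frame buffer was created successfully.
        if (GLES20.glCheckFramebufferStatus(GLES20.GL_FRAMEBUFFER)
                != GLES20.GL_FRAMEBUFFER_COMPLETE) {
            throw new RuntimeException("Frame buffer is not complete");
        }

        // Unbind so subsequent draws target the default (on-screen) frame buffer.
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 0);
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
    }

    private void initializeTexture(int textureUnit, int textureID, int width, int height) {
        GLES20.glActiveTexture(textureUnit);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureID);
        // Filter parameters set to GL_NEAREST, wrap parameters to clamp to edge.
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
        // Allocate storage: RGBA format, unsigned-byte pixel type, no initial data.
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height,
                0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
    }
}
```

Using a texture rather than a render buffer for the color attachment is what lets us sample the rendered image later, when we map it onto a 2D plane.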
It basically calls glActiveTexture to activate the texture, binds the texture, sets the filter parameters to GL_NEAREST, and sets the texture wrap parameters to clamp to edge. After that, it calls glTexImage2D to set the pixel format, width, height, and pixel type of the texture.

To draw the 3D objects onto the frame buffer in the onDrawFrame function of my renderer, we simply call glBindFramebuffer to bind the new frame buffer. Then we draw the objects as if they were drawn on the screen. Afterwards, we have to unbind the frame buffer so that the renderer draws objects onto the default frame buffer to be displayed on the screen.

To show the content of the frame buffer, we just have to project it onto a 2D plane. As I mentioned previously, the frame buffer uses a texture object to store the color components of the 3D objects, so to view the content of the frame buffer, we just map that color texture onto the 2D plane. I first create a new class called FrameBufferDisplay. It basically draws a 2D plane in 3D space and maps a texture onto the 2D surface. The vertices of the 2D plane are set to the coordinates (-1,-1,1), (1,-1,1), (1,1,1), and (-1,1,1), and the texture coordinates to (0,0), (1,0), (1,1), and (0,1). Vertex, index, and texture buffers are then created for uploading the coordinates to the shaders.

In the vertex shader for the frame buffer display, I just set gl_Position to the model-view-projection matrix times the vertex position, and set the values of the varying variable for the texture coordinates. In the fragment shader, I first define a four-dimensional vector variable fragmentColor and use the texture2D function to get the color of the fragment from the texture image. Then I check whether the fragment has any color or is just black. If it's black or a very dim color, I discard the fragment; otherwise I set gl_FragColor equal to fragmentColor. Note that I used the discard keyword: this allows me to make those black fragments transparent.

When drawing this 2D plane, I bind texture unit GL_TEXTURE1 to the color texture attached to the frame buffer, frameBufferTextureID[0]. Then I set the texture handle uniform to 1 to select texture unit 1. I pass the vertices to the shaders and draw the 2D plane in the virtual environment.

For drawing the 2D plane in 3D space, we don't need to use perspective projection; we can simply use an orthogonal projection. Therefore, in the onSurfaceChanged function, I use the Matrix.orthoM function to set mProjectionMatrix to an orthogonal projection matrix. Note that, depending on the aspect ratio of the screen, the projection matrix is set so that the 2D plane takes up the full screen. In addition, I create a FrameBufferDisplay object called mSphericalMirror and pass the screen width and height to create the 2D plane. Then, in the onDrawFrame function, I first set the viewport to the actual viewport width and height of the screen. I then reset mModelMatrix to an identity matrix, scale its x dimension by the frame buffer display's aspect ratio, and update the model-view-projection matrix using matrix multiplications. This allows me to scale the 2D plane to fit the screen. Finally, I call the draw function to draw the plane and display the content of the frame buffer.
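As a concrete illustration of the display pass, here is a sketch of the two shaders and the draw call. It assumes GLES20 (import android.opengl.GLES20); the attribute, uniform, handle, and buffer names, as well as the 0.1 brightness threshold for discarding dim fragments, are hypothetical stand-ins for however FrameBufferDisplay stores them.

```java
// Sketch of the FrameBufferDisplay shaders and draw pass.
private static final String VERTEX_SHADER =
        "uniform mat4 uMVPMatrix;\n" +
        "attribute vec4 aPosition;\n" +
        "attribute vec2 aTextureCoordinate;\n" +
        "varying vec2 vTextureCoordinate;\n" +
        "void main() {\n" +
        // MVP matrix times the vertex position.
        "    gl_Position = uMVPMatrix * aPosition;\n" +
        "    vTextureCoordinate = aTextureCoordinate;\n" +
        "}\n";

private static final String FRAGMENT_SHADER =
        "precision mediump float;\n" +
        "uniform sampler2D uTextureSampler;\n" +
        "varying vec2 vTextureCoordinate;\n" +
        "void main() {\n" +
        "    vec4 fragmentColor = texture2D(uTextureSampler, vTextureCoordinate);\n" +
        // Discard black or very dim fragments so they appear transparent.
        "    if (fragmentColor.r + fragmentColor.g + fragmentColor.b < 0.1) discard;\n" +
        "    gl_FragColor = fragmentColor;\n" +
        "}\n";

public void draw(float[] mvpMatrix) {
    GLES20.glUseProgram(program);
    // Bind texture unit 1 to the frame buffer's color texture and select it.
    GLES20.glActiveTexture(GLES20.GL_TEXTURE1);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, frameBufferTextureID[0]);
    GLES20.glUniform1i(textureSamplerHandle, 1);
    GLES20.glUniformMatrix4fv(mvpMatrixHandle, 1, false, mvpMatrix, 0);
    // Pass the plane's vertex and texture coordinates to the shaders.
    GLES20.glEnableVertexAttribArray(positionHandle);
    GLES20.glVertexAttribPointer(positionHandle, 3, GLES20.GL_FLOAT, false, 0, vertexBuffer);
    GLES20.glEnableVertexAttribArray(textureCoordinateHandle);
    GLES20.glVertexAttribPointer(textureCoordinateHandle, 2, GLES20.GL_FLOAT, false, 0, textureBuffer);
    // Draw the quad as two triangles through the index buffer.
    GLES20.glDrawElements(GLES20.GL_TRIANGLES, 6, GLES20.GL_UNSIGNED_SHORT, indexBuffer);
}
```

And here, under the same assumptions (plus import android.opengl.Matrix and javax.microedition.khronos.opengles.GL10), is a sketch of how the render pass in onDrawFrame might fit together, with the view matrix omitted for brevity:

```java
// Render the scene off-screen, then show it on the 2D plane.
@Override
public void onDrawFrame(GL10 unused) {
    // Redirect drawing into the off-screen frame buffer.
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, frameBufferID[0]);
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
    // ... draw the 3D objects here exactly as if drawing to the screen ...

    // Unbind: subsequent draws go to the default, on-screen frame buffer.
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
    GLES20.glViewport(0, 0, viewportWidth, viewportHeight);
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);

    // Scale the plane's x dimension by the display's aspect ratio, then
    // draw it with the orthogonal projection set in onSurfaceChanged.
    Matrix.setIdentityM(mModelMatrix, 0);
    Matrix.scaleM(mModelMatrix, 0, aspectRatio, 1f, 1f);
    Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mModelMatrix, 0);
    mSphericalMirror.draw(mMVPMatrix);
}
```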
We can use the frame buffer to create many interesting effects like this one. The sphere in the middle appears to be a shiny, reflective metal ball, but it is actually a 2D plane mapped with the texture of the frame buffer. In the next video, I'll show you how to implement this in our example program. We have covered a lot of complex content in this video, so please feel free to review it.