The Open Graphics Library (OpenGL) can be simply defined as a software interface to the graphics hardware. It is a 3D graphics and modeling library that is highly portable and extremely fast. Using the OpenGL graphics API, you can create brilliant graphics that are capable of representing both 2D and 3D data.
The OpenGL library is a multi-purpose, open-source graphics library that supports applications for 2D and 3D digital content creation, mechanical and architectural design, virtual prototyping, flight simulation, and video games. It allows application developers to configure a 3D graphics pipeline and submit data to it.
An object is defined by connected vertices. The vertices of the object are transformed, lit, and assembled into primitives, and then rasterized to create a 2D image that can be sent directly to the underlying graphics hardware for rendering. This is typically very fast, because the hardware is dedicated to processing graphics commands.
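To make the idea of vertices and primitives a little more concrete, here is a minimal, hedged sketch (it is not part of the template project) that defines a single triangle from three connected vertices and submits it to the pipeline using standard OpenGL ES 2.0 calls. The positionAttribute parameter is an assumption: it stands for a valid attribute location obtained from an already compiled and linked shader program.

#import <OpenGLES/ES2/gl.h>

// A minimal sketch: three connected vertices (x, y, z) defining one triangle.
static const GLfloat triangleVertices[] = {
     0.0f,  0.5f, 0.0f,   // top
    -0.5f, -0.5f, 0.0f,   // bottom left
     0.5f, -0.5f, 0.0f,   // bottom right
};

void drawTriangle(GLuint positionAttribute)
{
    // Point the shader's position attribute at our vertex data, then ask
    // OpenGL ES to assemble and rasterize the primitive.
    glEnableVertexAttribArray(positionAttribute);
    glVertexAttribPointer(positionAttribute, 3, GL_FLOAT, GL_FALSE, 0, triangleVertices);
    glDrawArrays(GL_TRIANGLES, 0, 3);
}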
We have some fantastic stuff to cover in this article, so let's get started.
In this section, we will be taking a look at the improvements that have been made to the Xcode 4 development environment, and how these enable us to debug OpenGL ES applications much more easily than in previous versions of Xcode.
We will look at how we can use the frame capture feature of the debugger to capture all frame objects that are included within an OpenGL ES application. This tool enables you to list all the frame objects that are being used by your application at a given point in time.
We will familiarize ourselves with the new OpenGL ES debugger within Xcode, to enable us to track down specific issues relating to OpenGL ES within the code.
Before we can proceed, we first need to create our OpenGLESExample project.
Once your project has been created, you will be presented with the Xcode development interface, along with the project files that the template created for you within the Project Navigator window.
Now that we have our project created, we need to configure our project to enable us to debug the state of the objects.
To detect and monitor the state of the objects within our application, we need to enable this feature through the Edit Scheme… section of our project, as shown in the following screenshot:
From the Edit Scheme section, as shown in the following screenshot, select the Run OpenGLESExampleDebug action, then click on the Options tab, and then select the OpenGL ES Enable frame capture checkbox.
For this feature to work, you must run the application on an iOS device running iOS 5.0 or later; it will not work within the iOS Simulator. Note that after you have attached your device, you will need to restart Xcode for this option to become available.
When you have configured your project correctly, click on the OK button to accept the changes, and close the dialog box. Next, build and run your OpenGL ES application. When you run your application, you will see two colored, three-dimensional cubes.
When you run your application on the iOS device, you will notice that the frame capture appears within the Xcode 4 debug bar, as shown in the following screenshot:
The OpenGL ES support in Xcode 4.2 provides a number of debugging capabilities, which we will walk through next.
The following screenshot displays the captured frame of our sample application. The debug navigator contains a list of every draw call and state call associated with that particular frame.
The buffers that are associated with the frame are shown within the editor pane, and the state information is shown in the debug pane. When the OpenGL ES frame capture is launched, the default view is the Auto view. This view displays the color portion of the image, which is Renderbuffer #1, as well as its grayscale equivalent, which is Renderbuffer #2.
You can also toggle the visibility of each of the red, green, blue, and alpha channels, and use the Range control to adjust the color range. This can be done easily by selecting each of the cog buttons shown in the previous screenshot.
You also have the ability to step through each of the draw calls in the debug navigator, or by using the double arrows and slider in the debug bar.
When using the draw call arrows or the slider, you can have Xcode select the stepped-to draw call in the debug navigator. This can be achieved by Control-clicking below the captured frame, and choosing Reveal in Debug Navigator from the shortcut menu.
You can also use the shortcut menu to toggle between the standard rendered view of the image and a wireframe view of the object, by selecting the Show Wireframe option from the pop-up menu, as shown in the previous screenshot.
In the wireframe view of an object, the element being drawn by the selected draw call is highlighted. To turn off the wireframe feature and return the image to its normal state, select the Hide Wireframe option from the pop-up menu, as shown in the following screenshot:
Now that you have a reasonable understanding of debugging through an OpenGL ES application and its draw calls, let's take a look at how we can view the textures associated with an OpenGL ES application.
In OpenGL ES 2.0, a texture is basically an image that can be sampled by the graphics pipeline, and is used to map color detail onto a surface; a minimal code sketch illustrating this at the API level follows. After the sketch, we will look at how to view the texture objects that have been captured with the frame capture button.
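As a rough illustration only, the following hedged sketch shows what creating such a texture looks like at the API level: it uploads a tiny 2x2 RGBA image, using standard OpenGL ES calls, that a fragment shader could later sample. The pixel values and the function name are purely illustrative and are not taken from the template project.

#import <OpenGLES/ES2/gl.h>

// A minimal sketch: create a 2x2 RGBA texture from raw pixel data.
GLuint createTinyTexture(void)
{
    static const GLubyte pixels[] = {
        255, 255, 255, 255,    0,   0,   0, 255,   // white, black
          0,   0,   0, 255,  255, 255, 255, 255,   // black, white
    };

    GLuint texture;
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 2, 2, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    return texture;
}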
To see details about any object contained within the OpenGL ES assistant editor, double-click on the object, or choose the item from the pop-up list, as shown in the next screenshot.
It is worth mentioning that, from within this view, you have the ability to change the orientation of any object that has been captured and rendered to the view. To change the orientation, locate the Orientation options shown at the bottom-right of the screen. Objects can be changed to appear in one or more views as needed.
For example, if you want to see information about the vertex array object (VAO), you would double-click on it to see it in more detail, as shown in the following screenshot.
This displays all of the X, Y, and Z-axis data required to construct each of our objects. Next, we will take a look at how shaders are constructed.
There are two types of shaders that you can write for OpenGL ES; these are Vertex shaders and Fragment shaders.
These two shaders make up what is known as the Programmable portion of the OpenGL ES 2.0 programmable pipeline, and are written in a C-like language syntax, called the OpenGL ES Shading Language (GLSL).
The following screenshot outlines the OpenGL ES 2.0 programmable pipeline. OpenGL ES 2.0 uses a version of the OpenGL Shading Language, adapted for embedded platforms such as iOS devices, for programming the vertex shader and the fragment shader:
Shaders are not new; they have been used in a variety of games that use OpenGL, such as Doom 3 and Quake 4, as well as in several flight simulators, such as Microsoft's Flight Simulator X.
One thing to note about shaders is that they are not compiled when your application is built. The source code of the shader gets stored within your application bundle as a text file, or defined within your code as a string literal, for example:
vertShaderPathname = [[NSBundle mainBundle] pathForResource:@"Shader" ofType:@"vsh"];
Before you can use your shaders, your application has to load and compile each of them. This is done to preserve device independence.
Take, for example, a situation where Apple decided to switch to a different GPU manufacturer for a future release of the iPhone; shaders compiled for the current GPU might not work on the new one. Having your application defer compilation to runtime avoids this problem, and any newer GPU will be fully supported without any need for you to rebuild your application.
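To make the load-and-compile-at-runtime step concrete, the following is a minimal, hedged sketch of how an application might compile the template's Shader.vsh at startup. It mirrors the general approach used by the Xcode OpenGL template, but the function name and the bare-bones error handling are illustrative assumptions rather than the template's exact code.

#import <Foundation/Foundation.h>
#import <OpenGLES/ES2/gl.h>

// A minimal sketch: load the vertex shader source from the application bundle
// and compile it at runtime, so the binary is built for the device's actual GPU.
GLuint compileVertexShader(void)
{
    NSString *vertShaderPathname = [[NSBundle mainBundle] pathForResource:@"Shader"
                                                                   ofType:@"vsh"];
    NSString *sourceString = [NSString stringWithContentsOfFile:vertShaderPathname
                                                        encoding:NSUTF8StringEncoding
                                                           error:nil];
    const GLchar *source = (const GLchar *)[sourceString UTF8String];

    GLuint shader = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(shader, 1, &source, NULL);
    glCompileShader(shader);

    GLint compiled = 0;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
    if (compiled == 0) {
        glDeleteShader(shader);   // compilation failed on this device's GPU
        return 0;
    }
    return shader;
}

The fragment shader would be compiled in exactly the same way, using GL_FRAGMENT_SHADER and the corresponding source file, before both are linked into a program object.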
The following table explains the differences between the two shaders.
Shader type | Description |
Vertex shaders | These are programs that get called once per vertex in your scene. For example, if you were rendering a simple scene with a single square, with one vertex at each corner, the vertex shader would be called four times. Its job is to perform calculations such as lighting and geometry transforms (moving, scaling, and rotating objects) to simulate realism. |
Fragment shaders | These are programs that get called once per pixel in your scene. So, if you are rendering that same simple scene with a single square, the fragment shader would be called once for each pixel that the square covers. Fragment shaders can also perform lighting calculations, and so on, but their most important job is to set the final color for the pixel. |
Next, we will examine the implementation of the vertex shader that the OpenGL template created for us. You will notice that these shaders are code files implemented using C-like syntax. Let's start by examining each section of the vertex shader file:
// Shader.vsh - the vertex shader, called once per vertex
attribute vec4 position;
attribute vec3 normal;

varying lowp vec4 colorVarying;

uniform mat4 modelViewProjectionMatrix;
uniform mat3 normalMatrix;

void main()
{
    // Transform the vertex normal into eye space and light it with a single
    // directional light to produce the per-vertex color.
    vec3 eyeNormal = normalize(normalMatrix * normal);
    vec3 lightPosition = vec3(0.0, 0.0, 1.0);
    vec4 diffuseColor = vec4(0.4, 0.4, 1.0, 1.0);

    float nDotVP = max(0.0, dot(eyeNormal, normalize(lightPosition)));

    colorVarying = diffuseColor * nDotVP;

    // Transform the vertex position into clip space.
    gl_Position = modelViewProjectionMatrix * position;
}
// The fragment shader, called once per pixel
varying lowp vec4 colorVarying;
void main()
{
    // Output the interpolated color calculated by the vertex shader.
    gl_FragColor = colorVarying;
}
We will now take a look at this code snippet, and explain what is actually going on here.
You will notice that within the fragment shader, the varying variable colorVarying, as highlighted in the code, is declared with the same name as it had in the vertex shader. This is very important; if these names were different, OpenGL ES wouldn't realize that it is the same variable, and your program would produce unexpected results.
The type is also very important; it has to be the same data type as was declared within the vertex shader. The lowp qualifier is a GLSL keyword that is used to specify the precision with which a value is stored, that is, roughly how many bits are used to represent a number.
From a programming point of view, the more bits that are used to represent a number, the fewer problems you are likely to have with the rounding of floating-point calculations. GLSL allows you to specify a precision modifier any time a variable is declared, and a precision must be declared within this file; failure to declare it within the fragment shader will result in your shader failing to compile.
The lowp keyword will give you the best performance with the least accuracy during interpolation. This is the better option when dealing with colors, where small rounding errors don't matter. Should you find the need to increase the precision, it is better to use mediump or highp if the lack of precision causes problems within your application.
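To round off the picture, here is a minimal, hedged sketch of how the two compiled shaders might be linked into a program object, which is the point at which the vertex and fragment shaders are tied together. The shader handles passed in are assumed to come from a compile step like the one sketched earlier, and the function name and error handling are illustrative only.

#import <OpenGLES/ES2/gl.h>

// A minimal sketch: link a compiled vertex and fragment shader into one program.
GLuint linkShaders(GLuint vertexShader, GLuint fragmentShader)
{
    GLuint program = glCreateProgram();
    glAttachShader(program, vertexShader);
    glAttachShader(program, fragmentShader);

    // Bind the "position" attribute declared in the vertex shader before linking.
    glBindAttribLocation(program, 0, "position");

    glLinkProgram(program);

    GLint linked = 0;
    glGetProgramiv(program, GL_LINK_STATUS, &linked);
    if (linked == 0) {
        glDeleteProgram(program);
        return 0;   // linking failed
    }
    return program;
}

At draw time, the application would then look up the modelViewProjectionMatrix uniform with glGetUniformLocation and supply it with glUniformMatrix4fv before issuing its draw calls.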
For more information on the OpenGL ES Shading Language (GLSL) or the precision modifiers, refer to the following documentation, located at: http://www.khronos.org/registry/gles/specs/2.0/GLSL_ES_Specification_1.0.17.pdf.