
How-To Tutorials - Game Development

368 Articles

GLSL 4.0: Discarding Fragments to Create a Perforated Look

Packt
10 Aug 2011
4 min read
OpenGL 4.0 Shading Language Cookbook

The result will look like the following image:

Getting ready

The vertex position, normal, and texture coordinates must be provided to the vertex shader from the OpenGL application. The position should be provided at location 0, the normal at location 1, and the texture coordinates at location 2. As in previous examples, the lighting parameters must be set from the OpenGL application via the appropriate uniform variables.

How to do it...

To create a shader program that discards fragments based on a square lattice (as in the preceding image), use the following code.

Use the following code for the vertex shader:

```glsl
#version 400

layout (location = 0) in vec3 VertexPosition;
layout (location = 1) in vec3 VertexNormal;
layout (location = 2) in vec2 VertexTexCoord;

out vec3 FrontColor;
out vec3 BackColor;
out vec2 TexCoord;

struct LightInfo {
    vec4 Position;   // Light position in eye coords.
    vec3 La;         // Ambient light intensity
    vec3 Ld;         // Diffuse light intensity
    vec3 Ls;         // Specular light intensity
};
uniform LightInfo Light;

struct MaterialInfo {
    vec3 Ka;         // Ambient reflectivity
    vec3 Kd;         // Diffuse reflectivity
    vec3 Ks;         // Specular reflectivity
    float Shininess; // Specular shininess factor
};
uniform MaterialInfo Material;

uniform mat4 ModelViewMatrix;
uniform mat3 NormalMatrix;
uniform mat4 ProjectionMatrix;
uniform mat4 MVP;

void getEyeSpace( out vec3 norm, out vec4 position )
{
    norm = normalize( NormalMatrix * VertexNormal );
    position = ModelViewMatrix * vec4(VertexPosition,1.0);
}

vec3 phongModel( vec4 position, vec3 norm )
{
    // The ADS shading calculations go here (see: "Using
    // functions in shaders," and "Implementing
    // per-vertex ambient, diffuse and specular (ADS) shading")
    ...
}

void main()
{
    vec3 eyeNorm;
    vec4 eyePosition;
    TexCoord = VertexTexCoord;

    // Get the position and normal in eye space
    getEyeSpace(eyeNorm, eyePosition);

    FrontColor = phongModel( eyePosition, eyeNorm );
    BackColor = phongModel( eyePosition, -eyeNorm );

    gl_Position = MVP * vec4(VertexPosition,1.0);
}
```

Use the following code for the fragment shader:

```glsl
#version 400

in vec3 FrontColor;
in vec3 BackColor;
in vec2 TexCoord;

layout( location = 0 ) out vec4 FragColor;

void main()
{
    const float scale = 15.0;
    bvec2 toDiscard = greaterThan( fract(TexCoord * scale), vec2(0.2,0.2) );

    if( all(toDiscard) )
        discard;

    if( gl_FrontFacing )
        FragColor = vec4(FrontColor, 1.0);
    else
        FragColor = vec4(BackColor, 1.0);
}
```

Compile and link both shaders within the OpenGL application, and install the shader program prior to rendering.

How it works...

Since we will be discarding some parts of the teapot, we will be able to see through the teapot to the other side. This will cause the back sides of some polygons to become visible. Therefore, we need to compute the lighting equation appropriately for both sides of each face. We'll use the same technique presented earlier in the two-sided shading recipe.

The vertex shader is essentially the same as in the two-sided shading recipe, with the main difference being the addition of the texture coordinate. To manage the texture coordinate, we have an additional input variable, VertexTexCoord, that corresponds to attribute location 2. The value of this input variable is passed directly on to the fragment shader unchanged via the output variable TexCoord.
The ADS shading model is calculated twice: once using the given normal vector, storing the result in FrontColor, and again using the reversed normal, storing that result in BackColor.

In the fragment shader, we calculate whether or not the fragment should be discarded based on a simple technique designed to produce the lattice-like pattern shown in the preceding image. We first scale the texture coordinate by the arbitrary scaling factor scale. This corresponds to the number of lattice rectangles per unit (scaled) texture coordinate. We then compute the fractional part of each component of the scaled texture coordinate using the built-in function fract. Each component is compared to 0.2 using the built-in function greaterThan, and the result is stored in the bool vector toDiscard. The greaterThan function compares the two vectors component-wise, and stores the Boolean results in the corresponding components of the return value.

If both components of the vector toDiscard are true, then the fragment lies within the inside of each lattice frame, and therefore we wish to discard this fragment. We can use the built-in function all to help with this check. The function all will return true if all of the components of the parameter vector are true. If the function returns true, we execute the discard statement to reject the fragment.

In the else branch, we color the fragment based on the orientation of the polygon, as in the two-sided shading recipe presented earlier.

Summary

This recipe showed us how to use the discard keyword to "throw away" fragments and create a perforated look.

Further resources on this subject:

  • Tips and Tricks for Getting Started with OpenGL and GLSL 4.0 [Article]
  • OpenGL 4.0: Using Uniform Blocks and Uniform Buffer Objects [Article]
  • OpenGL 4.0: Building a C++ Shader Program Class [Article]
  • The Basics of GLSL 4.0 Shaders [Article]
  • GLSL 4.0: Using Subroutines to Select Shader Functionality [Article]
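As a small variation on this recipe (not part of the original article), the lattice density is hard-coded in the fragment shader as the constant scale. A minimal sketch of exposing it as a uniform instead, so the perforation density can be adjusted from the application, might look like the following; the uniform name LatticeScale and the programHandle variable are illustrative assumptions:

```cpp
// Hypothetical variation: in the fragment shader, replace
//     const float scale = 15.0;
// with
//     uniform float LatticeScale;
// and set the value from the OpenGL application:
GLint loc = glGetUniformLocation(programHandle, "LatticeScale");
if( loc >= 0 )
    glUniform1f(loc, 15.0f);  // larger values give smaller, denser holes
```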


The basics of GLSL 4.0 shaders

Packt
10 Aug 2011
11 min read
Shaders were first introduced into OpenGL in version 2.0, introducing programmability into the formerly fixed-function OpenGL pipeline. Shaders are implemented using the OpenGL Shading Language (GLSL). The GLSL is syntactically similar to C, which should make it easier for experienced OpenGL programmers to learn. Due to the nature of this text, I won't present a thorough introduction to GLSL here. Instead, if you're new to GLSL, reading through these recipes should help you learn the language by example. If you are already comfortable with GLSL, but don't have experience with version 4.0, you'll see how to implement these techniques utilizing the newer API. However, before we jump into GLSL programming, let's take a quick look at how vertex and fragment shaders fit within the OpenGL pipeline.

Vertex and fragment shaders

In OpenGL version 4.0, there are five shader stages: vertex, geometry, tessellation control, tessellation evaluation, and fragment. In this article we'll focus only on the vertex and fragment stages. Shaders replace parts of the OpenGL pipeline. More specifically, they make those parts of the pipeline programmable. The following block diagram shows a simplified view of the OpenGL pipeline with only the vertex and fragment shaders installed.

Vertex data is sent down the pipeline and arrives at the vertex shader via shader input variables. The vertex shader's input variables correspond to vertex attributes (see Sending data to a shader using per-vertex attributes and vertex buffer objects). In general, a shader receives its input via programmer-defined input variables, and the data for those variables comes either from the main OpenGL application or from previous pipeline stages (other shaders). For example, a fragment shader's input variables might be fed from the output variables of the vertex shader. Data can also be provided to any shader stage using uniform variables (see Sending data to a shader using uniform variables). These are used for information that changes less often than vertex attributes (for example, matrices, light position, and other settings). The following figure shows a simplified view of the relationships between input and output variables when there are two shaders active (vertex and fragment).

The vertex shader is executed once for each vertex, possibly in parallel. The data corresponding to the vertex position must be transformed into clip coordinates and assigned to the output variable gl_Position before the vertex shader finishes execution. The vertex shader can send other information down the pipeline using shader output variables. For example, the vertex shader might also compute the color associated with the vertex. That color would be passed to later stages via an appropriate output variable.

Between the vertex and fragment shaders, the vertices are assembled into primitives, clipping takes place, and the viewport transformation is applied (among other operations). The rasterization process then takes place and the polygon is filled (if necessary). The fragment shader is executed once for each fragment (pixel) of the polygon being rendered (typically in parallel). Data provided from the vertex shader is (by default) interpolated in a perspective-correct manner, and provided to the fragment shader via shader input variables. The fragment shader determines the appropriate color for the pixel and sends it to the frame buffer using output variables. The depth information is handled automatically.
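To make the correspondence between vertex attributes and shader input variables concrete, here is a minimal C++ sketch (not from the original article) of feeding position data to an input variable declared at location 0; the vbo handle and positionData array are illustrative assumptions, and a current GL context is presumed:

```cpp
// GLSL side, for reference:  layout (location = 0) in vec3 VertexPosition;

GLfloat positionData[] = { -0.8f, -0.8f, 0.0f,
                            0.8f, -0.8f, 0.0f,
                            0.0f,  0.8f, 0.0f };  // one triangle

GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(positionData), positionData,
             GL_STATIC_DRAW);

glEnableVertexAttribArray(0);                           // attribute index 0
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);  // 3 floats per vertex
```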
Replicating the old fixed functionality

Programmable shaders give us tremendous power and flexibility. However, in some cases we might just want to re-implement the basic shading techniques that were used in the default fixed-function pipeline, or perhaps use them as a basis for other shading techniques. Studying the basic shading algorithm of the old fixed-function pipeline can also be a good way to get started when learning about shader programming.

In this article, we'll look at the basic techniques for implementing shading similar to that of the old fixed-function pipeline. We'll cover the standard ambient, diffuse, and specular (ADS) shading algorithm, the implementation of two-sided rendering, and flat shading. In the next article, we'll also see some examples of other GLSL features such as functions, subroutines, and the discard keyword.

Implementing diffuse, per-vertex shading with a single point light source

One of the simplest shading techniques is to assume that the surface exhibits purely diffuse reflection. That is to say that the surface is one that appears to scatter light in all directions equally, regardless of direction. Incoming light strikes the surface and penetrates slightly before being re-radiated in all directions. Of course, the incoming light interacts with the surface before it is scattered, causing some wavelengths to be fully or partially absorbed and others to be scattered. A typical example of a diffuse surface is a surface that has been painted with a matte paint. The surface has a dull look with no shine at all. The following image shows a torus rendered with diffuse shading.

The mathematical model for diffuse reflection involves two vectors: the direction from the surface point to the light source (s), and the normal vector at the surface point (n). The vectors are represented in the following diagram.

The amount of incoming light (or radiance) that reaches the surface is partially dependent on the orientation of the surface with respect to the light source. The physics of the situation tells us that the amount of radiation that reaches a point on a surface is maximal when the light arrives along the direction of the normal vector, and zero when the light is perpendicular to the normal. In between, it is proportional to the cosine of the angle between the direction towards the light source and the normal vector. So, since the dot product is proportional to the cosine of the angle between two vectors, we can express the amount of radiation striking the surface as the product of the light intensity and the dot product of s and n:

Ld (s · n)

where Ld is the intensity of the light source, and the vectors s and n are assumed to be normalized. You may recall that the dot product of two unit vectors is equal to the cosine of the angle between them.

As stated previously, some of the incoming light is absorbed before it is re-emitted. We can model this interaction by using a reflection coefficient (Kd), which represents the fraction of the incoming light that is scattered. This is sometimes referred to as the diffuse reflectivity, or the diffuse reflection coefficient. The diffuse reflectivity becomes a scaling factor for the incoming radiation, so the intensity of the outgoing light can be expressed as follows:

I = Kd Ld (s · n)

Because this model depends only on the direction towards the light source and the normal to the surface, not on the direction towards the viewer, we have a model that represents uniform (omnidirectional) scattering.
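Before moving to the shader version, a quick CPU-side evaluation of the same equation can help build intuition. This is an illustrative sketch (not from the original article) using GLM, which is introduced in the Tips and Tricks article in this collection; the numeric values are arbitrary examples:

```cpp
#include <glm/glm.hpp>

// Evaluate Ld * Kd * max(dot(s, n), 0) once, on the CPU, with sample values.
glm::vec3 Ld(1.0f, 1.0f, 1.0f);                             // light intensity
glm::vec3 Kd(0.8f, 0.2f, 0.2f);                             // diffuse reflectivity
glm::vec3 s = glm::normalize(glm::vec3(1.0f, 1.0f, 0.0f));  // toward the light
glm::vec3 n(0.0f, 1.0f, 0.0f);                              // surface normal

float sDotN = glm::max(glm::dot(s, n), 0.0f);               // cosine of the angle
glm::vec3 intensity = Ld * Kd * sDotN;                      // component-wise RGB
```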
In this recipe, we'll evaluate this equation at each vertex in the vertex shader and interpolate the resulting color across the face. In this and the following recipes, light intensities and material reflectivity coefficients are represented by 3-component (RGB) vectors. Therefore, the equations should be treated as component-wise operations, applied to each of the three components separately. Luckily, GLSL will make this nearly transparent because the needed operators operate component-wise on vector variables.

Getting ready

Start with an OpenGL application that provides the vertex position in attribute location 0, and the vertex normal in attribute location 1 (see Sending data to a shader using per-vertex attributes and vertex buffer objects). The OpenGL application should also provide the standard transformation matrices (projection, modelview, and normal) via uniform variables. The light position (in eye coordinates), Kd, and Ld should also be provided by the OpenGL application via uniform variables. Note that Kd and Ld are of type vec3. We can use a vec3 to store an RGB color as well as a vector or point.

How to do it...

To create a shader pair that implements diffuse shading, use the following code.

Use the following code for the vertex shader:

```glsl
#version 400

layout (location = 0) in vec3 VertexPosition;
layout (location = 1) in vec3 VertexNormal;

out vec3 LightIntensity;

uniform vec4 LightPosition; // Light position in eye coords.
uniform vec3 Kd;            // Diffuse reflectivity
uniform vec3 Ld;            // Light source intensity

uniform mat4 ModelViewMatrix;
uniform mat3 NormalMatrix;
uniform mat4 ProjectionMatrix;
uniform mat4 MVP;           // Projection * ModelView

void main()
{
    // Convert normal and position to eye coords
    vec3 tnorm = normalize( NormalMatrix * VertexNormal );
    vec4 eyeCoords = ModelViewMatrix * vec4(VertexPosition,1.0);
    vec3 s = normalize(vec3(LightPosition - eyeCoords));

    // The diffuse shading equation
    LightIntensity = Ld * Kd * max( dot( s, tnorm ), 0.0 );

    // Convert position to clip coordinates and pass along
    gl_Position = MVP * vec4(VertexPosition,1.0);
}
```

Use the following code for the fragment shader:

```glsl
#version 400

in vec3 LightIntensity;

layout( location = 0 ) out vec4 FragColor;

void main()
{
    FragColor = vec4(LightIntensity, 1.0);
}
```

Compile and link both shaders within the OpenGL application, and install the shader program prior to rendering. See Tips and Tricks for Getting Started with OpenGL and GLSL 4.0 for details about compiling, linking, and installing shaders.

How it works...

The vertex shader does all of the work in this example. The diffuse reflection is computed in eye coordinates by first transforming the normal vector using the normal matrix, normalizing, and storing the result in tnorm. Note that the normalization here may not be necessary if your normal vectors are already normalized and the normal matrix does not do any scaling.

The normal matrix is typically the inverse transpose of the upper-left 3x3 portion of the model-view matrix. We use the inverse transpose because normal vectors transform differently than the vertex position. For a more thorough discussion of the normal matrix, and the reasons why, see any introductory computer graphics textbook. (A good choice would be Computer Graphics with OpenGL by Hearn and Baker.) If your model-view matrix does not include any non-uniform scalings, then one can use the upper-left 3x3 of the model-view matrix in place of the normal matrix to transform your normal vectors.
However, if your model-view matrix does include (uniform) scalings, you'll still need to (re)normalize your normal vectors after transforming them.

The next step converts the vertex position to eye (camera) coordinates by transforming it via the model-view matrix. Then we compute the direction towards the light source by subtracting the vertex position from the light position and storing the result in s.

Next, we compute the scattered light intensity using the equation described above and store the result in the output variable LightIntensity. Note the use of the max function here. If the dot product is less than zero, then the angle between the normal vector and the light direction is greater than 90 degrees. This means that the incoming light is coming from inside the surface. Since such a situation is not physically possible (for a closed mesh), we use a value of 0.0. However, you may decide that you want to properly light both sides of your surface, in which case the normal vector needs to be reversed for those situations where the light is striking the back side of the surface (see Implementing two-sided shading).

Finally, we convert the vertex position to clip coordinates by multiplying by the model-view projection matrix (which is projection * view * model) and store the result in the built-in output variable gl_Position:

```glsl
gl_Position = MVP * vec4(VertexPosition,1.0);
```

The subsequent stage of the OpenGL pipeline expects that the vertex position will be provided in clip coordinates in the output variable gl_Position. This variable does not directly correspond to any input variable in the fragment shader, but is used by the OpenGL pipeline in the primitive assembly, clipping, and rasterization stages that follow the vertex shader. It is important that we always provide a valid value for this variable.

Since LightIntensity is an output variable from the vertex shader, its value is interpolated across the face and passed into the fragment shader. The fragment shader then simply assigns the value to the output fragment.

There's more...

Diffuse shading is a technique that models only a very limited range of surfaces. It is best used for surfaces that have a "matte" appearance. Additionally, with the technique used above, the dark areas may look a bit too dark. In fact, those areas that are not directly illuminated are completely black. In real scenes, there is typically some light that has been reflected about the room that brightens these surfaces. In the following recipes, we'll look at ways to model more surface types, as well as provide some light for those dark parts of the surface.
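As a supplement to the normal matrix discussion above, here is a hedged sketch of computing it with GLM rather than by hand; the view and model matrices are assumed to be set up as in the GLM examples elsewhere in this collection:

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_inverse.hpp>  // glm::inverseTranspose

glm::mat4 mv = view * model;           // model-view matrix

// General case: inverse transpose of the upper-left 3x3 of the model-view.
glm::mat3 normalMatrix = glm::inverseTranspose(glm::mat3(mv));

// If the model-view contains no non-uniform scaling, the cheaper form
// mentioned in the text is simply:
//     glm::mat3 normalMatrix(mv);
// (renormalize the transformed normals if any uniform scaling is present)
```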


OpenGL 4.0: Building a C++ Shader Program Class

Packt
03 Aug 2011
5 min read
OpenGL 4.0 Shading Language Cookbook: Over 60 highly focused, practical recipes to maximize your OpenGL Shading Language use

Getting ready

There's not much to prepare for with this one; you just need to build an environment that supports C++. Also, I'll assume that you are using GLM for matrix and vector support. If not, just leave out the functions involving the GLM classes. The reader would benefit from the previous articles on Tips and Tricks for Getting Started with OpenGL and GLSL 4.0 and OpenGL 4.0: Using Uniform Blocks and Uniform Buffer Objects.

How to do it...

We'll use the following header file for our C++ class:

```cpp
namespace GLSLShader {
    enum GLSLShaderType {
        VERTEX, FRAGMENT, GEOMETRY,
        TESS_CONTROL, TESS_EVALUATION
    };
}

class GLSLProgram
{
private:
    int    handle;
    bool   linked;
    string logString;

    int  getUniformLocation( const char * name );
    bool fileExists( const string & fileName );

public:
    GLSLProgram();

    bool compileShaderFromFile( const char * fileName,
                                GLSLShader::GLSLShaderType type );
    bool compileShaderFromString( const string & source,
                                  GLSLShader::GLSLShaderType type );
    bool link();
    void use();

    string log();

    int  getHandle();
    bool isLinked();

    void bindAttribLocation( GLuint location, const char * name );
    void bindFragDataLocation( GLuint location, const char * name );

    void setUniform( const char *name, float x, float y, float z );
    void setUniform( const char *name, const vec3 & v );
    void setUniform( const char *name, const vec4 & v );
    void setUniform( const char *name, const mat4 & m );
    void setUniform( const char *name, const mat3 & m );
    void setUniform( const char *name, float val );
    void setUniform( const char *name, int val );
    void setUniform( const char *name, bool val );

    void printActiveUniforms();
    void printActiveAttribs();
};
```

The techniques involved in the implementation of these functions are covered in previous recipes. We'll discuss some of the design decisions in the next section.

How it works...

The state stored within a GLSLProgram object includes the handle to the OpenGL shader program object (handle), a Boolean variable indicating whether or not the program has been successfully linked (linked), and a string for storing the most recent log produced by a compile or link action (logString).

The two private functions are utilities used by other public functions. The getUniformLocation function is used by the setUniform functions to find the location of a uniform variable, and the fileExists function is used by compileShaderFromFile to check for file existence.

The constructor simply initializes linked to false, handle to zero, and logString to the empty string. The variable handle will be initialized by calling glCreateProgram when the first shader is compiled.

The compileShaderFromFile and compileShaderFromString functions attempt to compile a shader of the given type (the type is provided as the second argument). They create the shader object, load the source code, and then attempt to compile the shader. If successful, the shader object is attached to the OpenGL program object (by calling glAttachShader) and a value of true is returned. Otherwise, the log is retrieved and stored in logString, and a value of false is returned.

The link function simply attempts to link the program by calling glLinkProgram. It then checks the link status, and if successful, sets the variable linked to true and returns true. Otherwise, it gets the program log (by calling glGetProgramInfoLog), stores it in logString, and returns false.
The use function simply calls glUseProgram if the program has already been successfully linked; otherwise, it does nothing. The log function returns the contents of logString, which should contain the log of the most recent compile or link action.

The functions getHandle and isLinked are simply "getter" functions that return the handle to the OpenGL program object and the value of the linked variable. The functions bindAttribLocation and bindFragDataLocation are wrappers around glBindAttribLocation and glBindFragDataLocation. Note that these functions should only be called prior to linking the program.

The setUniform overloaded functions are straightforward wrappers around the appropriate glUniform functions. Each of them calls getUniformLocation to query for the variable's location before calling the glUniform function.

Finally, the printActiveUniforms and printActiveAttribs functions are useful mainly for debugging purposes. They simply display a list of the active uniforms/attributes to standard output.

The following is a simple example of the use of the GLSLProgram class:

```cpp
GLSLProgram prog;

if( ! prog.compileShaderFromFile("myshader.vert", GLSLShader::VERTEX) )
{
    printf("Vertex shader failed to compile!\n%s", prog.log().c_str());
    exit(1);
}
if( ! prog.compileShaderFromFile("myshader.frag", GLSLShader::FRAGMENT) )
{
    printf("Fragment shader failed to compile!\n%s", prog.log().c_str());
    exit(1);
}

// Possibly call bindAttribLocation or bindFragDataLocation
// here...

if( ! prog.link() )
{
    printf("Shader program failed to link!\n%s", prog.log().c_str());
    exit(1);
}

prog.use();
prog.printActiveUniforms();
prog.printActiveAttribs();

prog.setUniform("ModelViewMatrix", matrix);
prog.setUniform("LightPosition", 1.0f, 1.0f, 1.0f);
...
```

Summary

This article covered the topic of building a C++ shader program class.

Further resources on this subject:

  • OpenGL 4.0: Using Uniform Blocks and Uniform Buffer Objects [Article]
  • Tips and Tricks for Getting Started with OpenGL and GLSL 4.0 [Article]
  • The Basics of GLSL 4.0 Shaders [Article]
  • GLSL 4.0: Using Subroutines to Select Shader Functionality [Article]
  • GLSL 4.0: Discarding Fragments to Create a Perforated Look [Article]
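Since the article defers the implementation to the downloadable code, the following is a purely illustrative sketch (not the book's actual code) of how compileShaderFromString could implement the flow described above: create the shader object, load the source, compile, and attach on success or store the log on failure.

```cpp
bool GLSLProgram::compileShaderFromString( const string & source,
                                           GLSLShader::GLSLShaderType type )
{
    if( handle <= 0 ) handle = glCreateProgram();

    // Map our enum onto the corresponding OpenGL shader type (sketch only;
    // the tessellation cases are elided for brevity).
    GLenum glType = (type == GLSLShader::VERTEX)   ? GL_VERTEX_SHADER :
                    (type == GLSLShader::FRAGMENT) ? GL_FRAGMENT_SHADER :
                                                     GL_GEOMETRY_SHADER;

    GLuint shader = glCreateShader( glType );
    const char * src = source.c_str();
    glShaderSource( shader, 1, &src, NULL );
    glCompileShader( shader );

    GLint status;
    glGetShaderiv( shader, GL_COMPILE_STATUS, &status );
    if( status == GL_FALSE ) {
        // Store the compile log in logString and report failure.
        GLint length = 0;
        glGetShaderiv( shader, GL_INFO_LOG_LENGTH, &length );
        if( length > 0 ) {
            logString.resize( length );
            glGetShaderInfoLog( shader, length, NULL, &logString[0] );
        }
        return false;
    }

    glAttachShader( handle, shader );
    return true;
}
```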


OpenGL 4.0: Using Uniform Blocks and Uniform Buffer Objects

Packt
03 Aug 2011
10 min read
OpenGL 4.0 Shading Language Cookbook: Over 60 highly focused, practical recipes to maximize your OpenGL Shading Language use

If your OpenGL/GLSL program involves multiple shader programs that use the same uniform variables, one has to manage the variables separately for each program. Uniform blocks were designed to ease the sharing of uniform data between programs. In this article by David Wolff, author of OpenGL 4.0 Shading Language Cookbook, we will create a buffer object for storing the values of all the uniform variables, and bind the buffer to the uniform block. Then, when changing programs, the same buffer object need only be re-bound to the corresponding block in the new program.

Uniform locations are generated when a program is linked, so the locations of the uniforms may change from one program to the next. The data for those uniforms may have to be re-generated and applied to the new locations.

A uniform block is simply a group of uniform variables defined within a syntactical structure known as a uniform block. For example, in this recipe, we'll use the following uniform block:

```glsl
uniform BlobSettings {
    vec4 InnerColor;
    vec4 OuterColor;
    float RadiusInner;
    float RadiusOuter;
};
```

This defines a block with the name BlobSettings that contains four uniform variables. With this type of block definition, the variables within the block are still part of the global scope and do not need to be qualified with the block name.

The buffer object used to store the data for the uniforms is often referred to as a uniform buffer object. We'll see that a uniform buffer object is simply a buffer object that is bound to a certain location.

For this recipe, we'll use a simple example to demonstrate the use of uniform buffer objects and uniform blocks. We'll draw a quad (two triangles) with texture coordinates, and use our fragment shader to fill the quad with a fuzzy circle. The circle is a solid color in the center, but at its edge, it gradually fades to the background color, as shown in the following image.

Getting ready

Start with an OpenGL program that draws two triangles to form a quad. Provide the position at vertex attribute location 0, and the texture coordinate (0 to 1 in each direction) at vertex attribute location 1 (see Sending data to a shader using per-vertex attributes and vertex buffer objects).

We'll use the following vertex shader:

```glsl
#version 400

layout (location = 0) in vec3 VertexPosition;
layout (location = 1) in vec3 VertexTexCoord;

out vec3 TexCoord;

void main()
{
    TexCoord = VertexTexCoord;
    gl_Position = vec4(VertexPosition,1.0);
}
```

The fragment shader contains the uniform block, and is responsible for drawing our fuzzy circle:

```glsl
#version 400

in vec3 TexCoord;
layout (location = 0) out vec4 FragColor;

uniform BlobSettings {
    vec4 InnerColor;
    vec4 OuterColor;
    float RadiusInner;
    float RadiusOuter;
};

void main()
{
    float dx = TexCoord.x - 0.5;
    float dy = TexCoord.y - 0.5;
    float dist = sqrt(dx * dx + dy * dy);
    FragColor = mix( InnerColor, OuterColor,
                     smoothstep( RadiusInner, RadiusOuter, dist ) );
}
```

The uniform block is named BlobSettings. The variables within this block define the parameters of our fuzzy circle. The variable OuterColor defines the color outside of the circle. InnerColor is the color inside of the circle.
RadiusInner is the radius defining the part of the circle that is a solid color (inside the fuzzy edge); it is the distance from the center of the circle to the inner edge of the fuzzy boundary. RadiusOuter is the outer edge of the fuzzy boundary of the circle (where the color is equal to OuterColor).

The code within the main function computes the distance of the texture coordinate to the center of the quad, located at (0.5, 0.5). It then uses that distance to compute the color by using the smoothstep function. This function returns a value that smoothly varies between 0.0 and 1.0 when the value of the third argument is between the values of the first two arguments. Otherwise, it returns 0.0 or 1.0 depending on whether the third argument is less than the first or greater than the second, respectively. The mix function is then used to linearly interpolate between InnerColor and OuterColor based on the value returned by the smoothstep function.

How to do it...

In the OpenGL program, after linking the shader program, use the following steps to send data to the uniform block in the fragment shader:

1. Get the index of the uniform block using glGetUniformBlockIndex.

```cpp
GLuint blockIndex = glGetUniformBlockIndex(programHandle, "BlobSettings");
```

2. Allocate space for the buffer to contain the data for the uniform block. We get the size of the block using glGetActiveUniformBlockiv.

```cpp
GLint blockSize;
glGetActiveUniformBlockiv(programHandle, blockIndex,
                          GL_UNIFORM_BLOCK_DATA_SIZE, &blockSize);
GLubyte * blockBuffer = (GLubyte *) malloc(blockSize);
```

3. Query for the offset of each variable within the block. To do so, we first find the index of each variable within the block.

```cpp
// Query for the offsets of each block variable
const GLchar *names[] = { "InnerColor", "OuterColor",
                          "RadiusInner", "RadiusOuter" };
GLuint indices[4];
glGetUniformIndices(programHandle, 4, names, indices);

GLint offset[4];
glGetActiveUniformsiv(programHandle, 4, indices, GL_UNIFORM_OFFSET, offset);
```

4. Place the data into the buffer at the appropriate offsets.

```cpp
GLfloat outerColor[] = {0.0f, 0.0f, 0.0f, 0.0f};
GLfloat innerColor[] = {1.0f, 1.0f, 0.75f, 1.0f};
GLfloat innerRadius = 0.25f, outerRadius = 0.45f;

memcpy(blockBuffer + offset[0], innerColor, 4 * sizeof(GLfloat));
memcpy(blockBuffer + offset[1], outerColor, 4 * sizeof(GLfloat));
memcpy(blockBuffer + offset[2], &innerRadius, sizeof(GLfloat));
memcpy(blockBuffer + offset[3], &outerRadius, sizeof(GLfloat));
```

5. Create the OpenGL buffer object and copy the data into it.

```cpp
GLuint uboHandle;
glGenBuffers( 1, &uboHandle );
glBindBuffer( GL_UNIFORM_BUFFER, uboHandle );
glBufferData( GL_UNIFORM_BUFFER, blockSize, blockBuffer, GL_DYNAMIC_DRAW );
```

6. Bind the buffer object to the uniform block.

```cpp
glBindBufferBase( GL_UNIFORM_BUFFER, blockIndex, uboHandle );
```

How it works...

Phew! This seems like a lot of work! However, the real advantage comes when using multiple programs, where the same buffer object can be used for each program. Let's take a look at each step individually.

First, we get the index of the uniform block by calling glGetUniformBlockIndex, then we query for the size of the block by calling glGetActiveUniformBlockiv. After getting the size, we allocate a temporary buffer named blockBuffer to hold the data for our block.

The layout of data within a uniform block is implementation dependent, and implementations may use different padding and/or byte alignment. So, in order to accurately lay out our data, we need to query for the offset of each variable within the block. This is done in two steps.
First, we query for the index of each variable within the block by calling glGetUniformIndices. This accepts an array of variable names (third argument) and returns the indices of the variables in the array indices (fourth argument). Then we use the indices to query for the offsets by calling glGetActiveUniformsiv. When the fourth argument is GL_UNIFORM_OFFSET, this returns the offset of each variable in the array pointed to by the fifth argument. This function can also be used to query for the size and type; however, in this case we choose not to do so, to keep the code simple (albeit less general).

The next step involves filling our temporary buffer blockBuffer with the data for the uniforms at the appropriate offsets. Here we use the standard library function memcpy to accomplish this.

Now that the temporary buffer is populated with the data with the appropriate layout, we can create our buffer object and copy the data into it. We call glGenBuffers to generate a buffer handle, and then bind that buffer to the GL_UNIFORM_BUFFER binding point by calling glBindBuffer. The space is allocated within the buffer object and the data is copied when glBufferData is called. We use GL_DYNAMIC_DRAW as the usage hint here, because uniform data may be changed somewhat often during rendering. Of course, this is entirely dependent on the situation.

Finally, we associate the buffer object with the uniform block by calling glBindBufferBase. This function binds to an index within a buffer binding point. Certain binding points are so-called "indexed buffer targets". This means that the target is actually an array of targets, and glBindBufferBase allows us to bind to one index within the array.

There's more...

If the data for a uniform block needs to be changed at some later time, one can call glBufferSubData to replace all or part of the data within the buffer. If you do so, don't forget to first bind the buffer to the generic binding point GL_UNIFORM_BUFFER.

Using an instance name with a uniform block

A uniform block can have an optional instance name. For example, with our BlobSettings block, we could have used the instance name Blob, as shown here:

```glsl
uniform BlobSettings {
    vec4 InnerColor;
    vec4 OuterColor;
    float RadiusInner;
    float RadiusOuter;
} Blob;
```

In this case, the variables within the block are placed within a namespace qualified by the instance name. Therefore, our shader code needs to refer to them prefixed with the instance name. For example:

```glsl
FragColor = mix( Blob.InnerColor, Blob.OuterColor,
                 smoothstep( Blob.RadiusInner, Blob.RadiusOuter, dist ) );
```

Additionally, we need to qualify the variable names within the OpenGL code when querying for variable indices. The OpenGL specification says that they must be qualified with the block name (BlobSettings). However, my tests using the ATI Catalyst (10.8) drivers required me to use the instance name (Blob).

Using layout qualifiers with uniform blocks

Since the layout of the data within a uniform buffer object is implementation dependent, we had to query for the variable offsets. However, one can avoid this by asking OpenGL to use the standard layout std140. This is accomplished by using a layout qualifier when declaring the uniform block. For example:

```glsl
layout( std140 ) uniform BlobSettings {
    ...
};
```

The std140 layout is described in detail within the OpenGL specification document (available at http://www.opengl.org). Other options for the layout qualifier that apply to uniform block layouts include packed and shared.
The packed qualifier simply states that the implementation is free to optimize memory in whatever way it finds necessary (based on variable usage or other criteria). With the packed qualifier, we still need to query for the offsets of each variable. The shared qualifier guarantees that the layout will be consistent between multiple programs and program stages, provided that the uniform block declaration does not change. If you are planning to use the same buffer object between multiple programs and/or program stages, it is a good idea to use the shared option.

There are two other layout qualifiers worth mentioning: row_major and column_major. These define the ordering of data within the matrix-type variables of the uniform block.

One can use multiple qualifiers for a block. For example, to define a block with both the row_major and shared qualifiers, we would use the following syntax:

```glsl
layout( row_major, shared ) uniform BlobSettings {
    ...
};
```

Summary

This article covered the topic of using uniform blocks and uniform buffer objects.

Further resources on this subject:

  • OpenGL 4.0: Building a C++ Shader Program Class [Article]
  • Tips and Tricks for Getting Started with OpenGL and GLSL 4.0 [Article]
  • The Basics of GLSL 4.0 Shaders [Article]
  • GLSL 4.0: Using Subroutines to Select Shader Functionality [Article]
  • GLSL 4.0: Discarding Fragments to Create a Perforated Look [Article]
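To make the std140 option above concrete, here is a hedged sketch (not from the original article) of how the offset queries can be skipped once the block is declared with layout(std140): under std140, each vec4 is 16-byte aligned and the two floats pack directly after them, so a plain struct can mirror the block. The blockIndex and uboHandle variables correspond to those used in the recipe:

```cpp
// Mirrors: layout(std140) uniform BlobSettings { vec4; vec4; float; float; };
struct BlobSettingsStd140 {
    GLfloat innerColor[4];   // std140 offset 0
    GLfloat outerColor[4];   // std140 offset 16
    GLfloat radiusInner;     // std140 offset 32
    GLfloat radiusOuter;     // std140 offset 36
    // Note: some implementations round the reported block size up to a
    // vec4 boundary; padding the struct to 48 bytes is a safe option.
};

BlobSettingsStd140 blob = {
    { 1.0f, 1.0f, 0.75f, 1.0f },   // InnerColor
    { 0.0f, 0.0f, 0.0f,  0.0f },   // OuterColor
    0.25f,                         // RadiusInner
    0.45f                          // RadiusOuter
};

GLuint uboHandle;
glGenBuffers( 1, &uboHandle );
glBindBuffer( GL_UNIFORM_BUFFER, uboHandle );
glBufferData( GL_UNIFORM_BUFFER, sizeof(blob), &blob, GL_DYNAMIC_DRAW );
glBindBufferBase( GL_UNIFORM_BUFFER, blockIndex, uboHandle );
```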


Tips and Tricks for Getting Started with OpenGL and GLSL 4.0

Packt
03 Aug 2011
14 min read
OpenGL 4.0 Shading Language Cookbook: Over 60 highly focused, practical recipes to maximize your OpenGL Shading Language use

Introduction

The first step towards using the OpenGL Shading Language version 4.0 is to create a program that utilizes the latest version of the OpenGL API. GLSL programs don't stand on their own: they must be part of a larger OpenGL program. In this article, we will see some tips on getting a basic OpenGL/GLSL program up and running and some techniques for communication between the OpenGL application and the shader (GLSL) program. There isn't any GLSL programming in this article, but don't worry, we'll jump into GLSL with both feet in the next article. First, let's start with some background.

The OpenGL Shading Language

The OpenGL Shading Language (GLSL) is now a fundamental and integral part of the OpenGL API. Going forward, every program written using OpenGL will internally utilize one or several GLSL programs. These "mini-programs" written in GLSL are often referred to as shader programs, or simply shaders. A shader program is one that runs on the GPU, and as the name implies, it (typically) implements the algorithms related to the lighting and shading effects of a 3-dimensional image. However, shader programs are capable of doing much more than just implementing a shading algorithm. They are also capable of performing animation, tessellation, and even generalized computation. The field of study dubbed GPGPU (General Purpose Computing on Graphics Processing Units) is concerned with the utilization of GPUs (often using specialized APIs such as CUDA or OpenCL) to perform general purpose computations such as fluid dynamics, molecular dynamics, cryptography, and so on.

Shader programs are designed to be executed directly on the GPU, and often in parallel. For example, a fragment shader might be executed once for every pixel, with each execution running simultaneously on a separate GPU thread. The number of processors on the graphics card determines how many can be executed at one time. This makes shader programs incredibly efficient, and provides the programmer with a simple API for implementing highly parallel computation. The computing power available in modern graphics cards is impressive. The following table shows the number of shader processors available for several models in the NVIDIA GeForce 400 series cards (source: http://en.wikipedia.org/wiki/Comparison_of_Nvidia_graphics_processing_units).

Shader programs are intended to replace parts of the OpenGL architecture referred to as the fixed-function pipeline. The default lighting/shading algorithm was a core part of this fixed-function pipeline. When we, as programmers, wanted to implement more advanced or realistic effects, we used various tricks to force the fixed-function pipeline into being more flexible than it really was. The advent of GLSL helped by providing us with the ability to replace this "hard-coded" functionality with our own programs written in GLSL, thus giving us a great deal of additional flexibility and power. In fact, recent (core) versions of OpenGL not only provide this capability, but they require shader programs as part of every OpenGL program. The old fixed-function pipeline has been deprecated in favor of a new programmable pipeline, a key part of which is the shader program written in GLSL.

Profiles: Core vs. Compatibility

OpenGL version 3.0 introduced a deprecation model, which allowed for the gradual removal of functions from the OpenGL specification.
Functions or features can now be marked as deprecated, meaning that they are expected to be removed from a future version of OpenGL. For example, immediate mode rendering using glBegin/glEnd was marked deprecated in version 3.0 and removed in version 3.1.

In order to maintain backwards compatibility, the concept of compatibility profiles was introduced with OpenGL 3.2. A programmer who is writing code intended for a particular version of OpenGL (with older features removed) would use the so-called core profile. Someone who also wanted to maintain compatibility with older functionality could use the compatibility profile.

It may be somewhat confusing that there is also the concept of a full vs. forward compatible context, which is distinguished slightly from the concept of a core vs. compatibility profile. A context that is considered forward compatible basically indicates that all deprecated functionality has been removed. In other words, if a context is forward compatible, it only includes functions that are in the core, but not those that were marked as deprecated. A full context supports all features of the selected version. Some window APIs provide the ability to select full or forward compatible status along with the profile.

The steps for selecting a core or compatibility profile are window system API dependent. For example, in recent versions of Qt (at least version 4.7), one can select a 4.0 core profile using the following code:

```cpp
QGLFormat format;
format.setVersion(4,0);
format.setProfile(QGLFormat::CoreProfile);
QGLWidget *myWidget = new QGLWidget(format);
```

All programs in this article are designed to be compatible with an OpenGL 4.0 core profile.

Using the GLEW Library to access the latest OpenGL functionality

The OpenGL ABI (application binary interface) is frozen to OpenGL version 1.1 on Windows. Unfortunately for Windows developers, that means that it is not possible to link directly to functions that are provided in newer versions of OpenGL. Instead, one must get access to these functions by acquiring a function pointer at runtime. Getting access to the function pointers requires somewhat tedious work, and has a tendency to clutter your code. Additionally, Windows typically comes with a standard OpenGL header file that conforms to OpenGL 1.1. The OpenGL wiki states that Microsoft has no plans to update the gl.h and opengl32.lib that come with their compilers. Thankfully, others have provided libraries that manage all of this for us by probing your OpenGL libraries and transparently providing the necessary function pointers, while also exposing the necessary functionality in header files. One such library is called GLEW (OpenGL Extension Wrangler).

Getting ready

Download the GLEW distribution from http://glew.sourceforge.net. There are binaries available for Windows, but it is also a relatively simple matter to compile GLEW from source (see the instructions on the website). Place the header files glew.h and wglew.h from the GLEW distribution into a proper location for your compiler. If you are using Windows, copy glew32.lib to the appropriate library directory for your compiler, and place glew32.dll into a system-wide location, or the same directory as your program's executable. Full installation instructions for all operating systems and common compilers are available on the GLEW website.

How to do it...
To start using GLEW in your project, use the following steps:

1. Make sure that, at the top of your code, you include the glew.h header before you include the OpenGL header files:

```cpp
#include <GL/glew.h>
#include <GL/gl.h>
#include <GL/glu.h>
```

2. In your program code, somewhere just after the GL context is created (typically in an initialization function), and before any OpenGL functions are called, include the following code:

```cpp
GLenum err = glewInit();
if( GLEW_OK != err )
{
    fprintf(stderr, "Error initializing GLEW: %s\n",
            glewGetErrorString(err) );
}
```

That's all there is to it!

How it works...

Including the glew.h header file provides declarations for the OpenGL functions as function pointers, so all function entry points are available at compile time. At run time, the glewInit() function will scan the OpenGL library and initialize all available function pointers. If a function is not available, the code will compile, but the function pointer will not be initialized.

There's more...

GLEW includes a few additional features and utilities that are quite useful.

GLEW visualinfo

The command line utility visualinfo can be used to get a list of all available extensions and "visuals" (pixel formats, pbuffer availability, and so on). When executed, it creates a file called visualinfo.txt, which contains a list of all the available OpenGL, WGL, and GLU extensions, including a table of available visuals.

GLEW glewinfo

The command line utility glewinfo lists all available functions supported by your driver. When executed, the results are printed to stdout.

Checking for extension availability at runtime

You can also check for the availability of extensions by checking the status of GLEW global variables that use a particular naming convention. For example, to check for the availability of ARB_vertex_program, use something like the following:

```cpp
if ( ! GLEW_ARB_vertex_program )
{
    fprintf(stderr, "ARB_vertex_program is missing!\n");
    ...
}
```

See also

Another option for managing OpenGL extensions is the GLee library (GL Easy Extension). It is available from http://www.elf-stone.com/glee.php and is open source under the modified BSD license. It works in a similar manner to GLEW, but does not require runtime initialization.

Using the GLM library for mathematics

Mathematics is core to all of computer graphics. In earlier versions, OpenGL provided support for managing coordinate transformations and projections using the standard matrix stacks (GL_MODELVIEW and GL_PROJECTION). In core OpenGL 4.0, however, all of the functionality supporting the matrix stacks has been removed. Therefore, it is up to us to provide our own support for the usual transformation and projection matrices, and then to pass them into our shaders. Of course, we could write our own matrix and vector classes to manage this, but if you're like me, you prefer to use a ready-made, robust library.

One such library is GLM (OpenGL Mathematics), written by Christophe Riccio. Its design is based on the GLSL specification, so the syntax is very similar to the mathematical support in GLSL. For experienced GLSL programmers, this makes it very easy to use. Additionally, it provides extensions that include functionality similar to some of the much-missed OpenGL functions such as glOrtho, glRotate, or gluLookAt.

Getting ready

Download the latest GLM distribution from http://glm.g-truc.net. Unzip the archive file, and copy the glm directory contained inside to anywhere in your compiler's include path.

How to do it...
Using the GLM libraries is simply a matter of including the core header file and headers for any extensions. We'll include the matrix transform extension and the transform2 extension:

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtx/transform2.hpp>
```

The GLM classes are then available in the glm namespace. The following is an example of how you might go about making use of some of them:

```cpp
glm::vec4 position = glm::vec4( 1.0f, 0.0f, 0.0f, 1.0f );
glm::mat4 view = glm::lookAt( glm::vec3(0.0,0.0,5.0),
                              glm::vec3(0.0,0.0,0.0),
                              glm::vec3(0.0,1.0,0.0) );
glm::mat4 model = glm::mat4(1.0f);
model = glm::rotate( model, 90.0f, glm::vec3(0.0f,1.0f,0.0f) );
glm::mat4 mv = view * model;
glm::vec4 transformed = mv * position;
```

How it works...

The GLM library is a header-only library. All of the implementation is included within the header files. It doesn't require separate compilation and you don't need to link your program to it. Just placing the header files in your include path is all that's required!

The preceding example first creates a vec4 (four-coordinate vector) representing a position. Then it creates a 4x4 view matrix by using the glm::lookAt function from the transform2 extension. This works in a similar fashion to the old gluLookAt function. In this example, we set the camera's location at (0,0,5), looking towards the origin, with the "up" direction in the direction of the Y-axis. We then go on to create the modeling matrix by first storing the identity matrix in the variable model (via the constructor glm::mat4(1.0f)), and multiplying by a rotation matrix using the glm::rotate function. The multiplication here is implicitly done by the glm::rotate function. It multiplies its first parameter by the rotation matrix that is generated by the function. The second parameter is the angle of rotation (in degrees), and the third parameter is the axis of rotation. The net result is a rotation matrix of 90 degrees around the Y-axis.

Finally, we create our model view matrix (mv) by multiplying the view and model variables, and then use the combined matrix to transform the position. Note that the multiplication operator has been overloaded to behave in the expected way.

As stated above, the GLM library conforms as closely as possible to the GLSL specification, with additional features that go beyond what you can do in GLSL. If you are familiar with GLSL, GLM should be easy and natural to use.

Swizzle operators (selecting components using commands like foo.x, foo.xxy, and so on) are disabled by default in GLM. You can selectively enable them by defining GLM_SWIZZLE before including the main GLM header. The GLM manual has more detail. For example, to enable all swizzle operators you would do the following:

```cpp
#define GLM_SWIZZLE
#include <glm/glm.hpp>
```

There's more...

It is not recommended to import all of the GLM namespace using a command like:

```cpp
using namespace glm;
```

This will most likely cause a number of namespace clashes. Instead, it is preferable to import symbols one at a time, as needed. For example:

```cpp
#include <glm/glm.hpp>
using glm::vec3;
using glm::mat4;
```

Using the GLM types as input to OpenGL

GLM supports directly passing a GLM type to OpenGL using one of the OpenGL vector functions (with the suffix "v"). For example, to pass a mat4 named proj to OpenGL we can use the following code:

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
...
glm::mat4 proj = glm::perspective( viewAngle, aspect, nearDist, farDist );
glUniformMatrix4fv(location, 1, GL_FALSE, &proj[0][0]);
```

See also

The GLM website (http://glm.g-truc.net) has additional documentation and examples.

Determining the GLSL and OpenGL version

In order to support a wide range of systems, it is essential to be able to query for the supported OpenGL and GLSL version of the current driver. It is quite simple to do so, and there are two main functions involved: glGetString and glGetIntegerv.

How to do it...

The code shown below will print the version information to stdout:

```cpp
const GLubyte *renderer = glGetString( GL_RENDERER );
const GLubyte *vendor = glGetString( GL_VENDOR );
const GLubyte *version = glGetString( GL_VERSION );
const GLubyte *glslVersion = glGetString( GL_SHADING_LANGUAGE_VERSION );

GLint major, minor;
glGetIntegerv(GL_MAJOR_VERSION, &major);
glGetIntegerv(GL_MINOR_VERSION, &minor);

printf("GL Vendor            : %s\n", vendor);
printf("GL Renderer          : %s\n", renderer);
printf("GL Version (string)  : %s\n", version);
printf("GL Version (integer) : %d.%d\n", major, minor);
printf("GLSL Version         : %s\n", glslVersion);
```

How it works...

Note that there are two different ways to retrieve the OpenGL version: using glGetString and glGetIntegerv. The former can be useful for providing readable output, but may not be as convenient for programmatically checking the version because of the need to parse the string. The string provided by glGetString(GL_VERSION) should always begin with the major and minor versions separated by a dot; however, the minor version could be followed by a vendor-specific build number. Additionally, the rest of the string can contain additional vendor-specific information and may also include information about the selected profile. The integer queries GL_MAJOR_VERSION and GL_MINOR_VERSION are available in OpenGL 3.0 or greater.

The queries for GL_VENDOR and GL_RENDERER provide additional information about the OpenGL driver. The call glGetString(GL_VENDOR) returns the company responsible for the OpenGL implementation. The call to glGetString(GL_RENDERER) provides the name of the renderer, which is specific to a particular hardware platform (such as "ATI Radeon HD 5600 Series"). Note that both of these do not vary from release to release, so they can be used to determine the current platform.

Of more importance to us is the call to glGetString(GL_SHADING_LANGUAGE_VERSION), which provides the supported GLSL version number. This string should begin with the major and minor version numbers separated by a period but, similar to the GL_VERSION query, may include other vendor-specific information.

There's more...

It is often useful to query for the supported extensions of the current OpenGL implementation. In versions prior to OpenGL 3.0, one could retrieve a full, space-separated list of extension names with the following code:

```cpp
const GLubyte *extensions = glGetString(GL_EXTENSIONS);
```

The string that is returned can be extremely long, and parsing it can be susceptible to error if not done carefully. In OpenGL 3.0, a new technique was introduced, and the above functionality was deprecated (and finally removed in 3.1). Extension names are now indexed and can be individually queried by index. We use the glGetStringi variant for this. For example, to get the name of the extension stored at index i, we use glGetStringi(GL_EXTENSIONS, i).
To print a list of all extensions, we could use the following code:

```cpp
GLint nExtensions;
glGetIntegerv(GL_NUM_EXTENSIONS, &nExtensions);
for( int i = 0; i < nExtensions; i++ )
    printf("%s\n", glGetStringi( GL_EXTENSIONS, i ) );
```
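As a brief illustration (not part of the original article) of why the integer queries are convenient, a program that requires at least OpenGL 4.0 can check the version without any string parsing; this sketch assumes <cstdio> and <cstdlib> are available:

```cpp
// Require at least OpenGL 4.0 using the integer queries described above.
GLint major = 0, minor = 0;
glGetIntegerv(GL_MAJOR_VERSION, &major);
glGetIntegerv(GL_MINOR_VERSION, &minor);
if( major < 4 )
{
    fprintf(stderr, "OpenGL 4.0 or better is required (found %d.%d)\n",
            major, minor);
    exit(EXIT_FAILURE);
}
```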


CryENGINE 3: Fun Physics

Packt
12 Jul 2011
4 min read
CryENGINE 3 Cookbook: Over 90 recipes written by Crytek developers for creating third-generation real-time games

Low gravity

In this simple recipe, we will look at utilizing the GravityBox to set up a low gravity area within a level.

Getting ready

Have Sandbox open, then open My_Level.cry.

How to do it...

To start, first we must place down a GravityBox:

1. In the RollupBar, click on the Entities button.
2. Under the Physics section, select GravityBox.
3. Place the GravityBox on the ground.

Keeping the default dimensions (20, 20, 20 meters), the only property we want to change is the gravity. The default settings in this box set this entire area within the level to be a zero gravity zone. To adjust the up/down gravity, we need to change the value of gravity on the Z axis. To mimic normal gravity, this value would need to be set to the acceleration value of -9.81. To change this to a lower gravity value (something like the Moon's gravity), simply change it to a negative value of smaller magnitude, such as -1.62.

How it works...

The GravityBox is a simple bounding box which overrides the gravity defined in the code (-9.81) and sets its own gravity value within the bounding box. Anything physicalized and activated to receive physics updates will behave within the confines of these gravitational rules unless it falls outside of the bounding box.

There's more...

Here are some useful tips about the gravity objects.

Uniform property

The uniform property within the GravityBox defines whether the GravityBox should use its own local orientation or the world's. If true, the GravityBox will use its own local rotation for the gravitational direction. If false, it will use the world's direction. This is used when you wish to have the gravity directed sideways: set this value to True and then rotate the GravityBox onto its side.

Gravity sphere

Much like the GravityBox, the GravitySphere holds all the same principles, but in a radius instead of a bounding box. The only other difference with the GravitySphere is that a false uniform Boolean will cause any object within the sphere to be attracted to or repulsed from the center of the axis.

Hangman on a rope

In this recipe, we will look at how we can utilize a rope to hang a dead body from it.

Getting ready

Open Sandbox, then open My_Level.cry.

How to do it...

Begin by drawing out a rope:

1. Open the RollupBar.
2. From the Misc button, select Rope.
3. With Grid Snap on and set to 1 meter, draw out a straight rope in increments of one meter (by clicking once for every increment) up to four meters (double-click to finalize the rope).
4. Align the rope so that from end to end it is along the Z axis (up and down) and a few meters off the ground.

Next, we will need something solid to hang the rope from:

1. Place down a solid with dimensions of 1, 1, 1 meter.
2. Align the rope underneath the solid cube while keeping both off the ground. Make sure when aligning the rope that the end constraint turns from red to green. This means it is attached to a physical surface.

Lastly, we will need to hang a body from this rope. However, we will not hang him in the traditional manner, but rather by one of his feet:

1. In the RollupBar, click on the Entities button.
2. Under the Physics section, select DeadBody.
3. Rotate this body upside-down and align one of his feet to the bottom end of the rope.
4. Select the rope to make sure the bottom constraint turns green to signal that it is attached.

Verify that the Hangman on a rope recipe works by going into game mode and punching the dead body.

How it works...
The rope is a complicated cylinder that can contain as many bending segments as defined, and it is allowed to stretch and compress depending on the values defined. Tension and breaking strength can also be defined. But since ropes involve expensive physics properties, they should be used sparingly.
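As promised in the Low gravity recipe, here is the quick numeric comparison between the default -9.81 and the Moon-like -1.62. This is plain back-of-the-envelope Python, not CryENGINE script; it simply shows why the lower value reads as floaty in game:

import math

def fall_time(height_m, gravity):
    # For a body dropped from rest: h = 0.5 * |g| * t^2  =>  t = sqrt(2h / |g|)
    return math.sqrt(2.0 * height_m / abs(gravity))

for label, g in (("Earth (engine default)", -9.81), ("Moon (GravityBox Z)", -1.62)):
    print(f"{label}: a 10 m drop takes {fall_time(10.0, g):.2f} s")

At 10 meters, the drop takes roughly 1.43 seconds under the default gravity and about 3.51 seconds at the Moon value, which is more than twice as long in the air.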
Blender 2.5: Modeling a Basic Humanoid Character

Packt
01 Jul 2011
14 min read
Mission Briefing

Our objective is to create a basic model of a humanoid, with all the major parts included and correctly shaped: head, arms, torso, legs, and feet will be defined. We won't be creating the fine details of the model, but we will definitely pay attention to the process and the mindset necessary to achieve our goal.

What Does It Do?

We'll start by creating a very simple (and ugly) base mesh that we can tweak later to get a nice finished model. From a single cube, we will create an entire model of a basic humanoid character, and take the opportunity to follow our own "feelings" to create the finished model.

Why Is It Awesome?

This project will help you learn some good points that will be handy when working on future projects (even complex ones). First of all, we'll learn a basic procedure for applying the box modeling technique to create a base mesh. We'll then learn that our models don't have to look nice all the time to ensure a proper result; instead, we must have a proper understanding of where we are heading, to avoid getting lost along the way. Finally, we'll learn to separate the complexity of a modeling task into two different parts, using the best tools for each job (thus having a more enjoyable time and plenty of creative freedom).

The brighter side of this project will be working with the sculpting tools, since they give us a very cool way of tweaking meshes without having to handle individual vertices, edges, or faces. This advantage adds real value to our workflow: we can separate the boring technical parts of modeling (mostly extruding and defining topology) from the actual fine tweaking of the form. Moreover, if we have the possibility of using the sculpt tools with a pen tablet, the modeling experience will be greatly improved and will feel extremely intuitive.

Your Hotshot Objectives

Although this project is not really complex, we will separate it into seven tasks, to make it easier to follow. They are:

Creating the base mesh
Head
Arms
Torso
Legs
Feet and final tweaks
Scene setup

Creating the Base Mesh

Let's begin our project by creating the mesh that will be further tweaked to get our final model. For this project we'll apply a methodology (avoiding overly complicated, unintelligible, written descriptions) that will give us some freedom and allow us to explore our creativity without the rigidity of a strict blueprint to follow. There's a warning, though: our model will look ugly most of the time. This is because in the initial building process we're not going to put so much emphasis on how it looks, but on the structure of the mesh. Having said that, let's start with the first task of the project.

Prepare for Lift Off

Fire up Blender, delete the default lamp, set the name of the default cube to character (from the Item panel, Properties sidebar), and save the file as character.blend in the project's directory.

Engage Thrusters

First, we need to set the character object up with a Mirror modifier, so that we only need to work on one side of the character while the other side gets created automatically as we work.

Select the character object, switch to Edit Mode, and then switch to Front View (View | Front); then add a new edge loop running vertically by using the Loop Cut and Slide tool. Make sure that the new edge loop is not moved from the center of the cube, so that it separates the cube into two mirrored sides.
Now set the viewport shading to wireframe (press the Z key), select the vertices on the left-hand side of the cube, and delete them (press the X key). Now let's switch back to Object Mode, then go to the Modifiers tab in the Properties Editor and add a new Mirror modifier to the character object. On the settings panel for the Mirror modifier, let's enable the Clipping option. This will leave us with the object set up according to our needs (a Python version of this setup is sketched at the end of this article).

Switch to Edit Mode for the character object and then to Face Select mode. Select the upper face of the mesh, extrude it (E key), and then move the extrusion 1 unit upwards, along the Z axis. Now perform a second extrusion, this time on the face that remains selected from the previous one, and move it 1 unit upwards too; this will leave us with three sections (the lowest one being the biggest).

Follow along by switching to Right View (View | Right), extrude again, press Escape, and then move the extrusion 0.2 units upwards (press the G key, Z key, then type 0.2). With the upper face selected, let's scale it down by a factor of 0.3 (S key, then type 0.3) and then move it 0.6 units along the Y axis (G key, Y key, then type 0.6).

Continue by extruding again and moving the extrusion 0.5 units upwards (G key, Z key, then type 0.5). Then add another extrusion, moving it up by 0.1 units (G key, Z key, then type 0.1). With the last extrusion selected, perform a scale operation by a factor of 1.5 (S key, then type 1.5). Right after that, extrude again and move the extrusion 1.5 units upwards (G key, Z key, then type 1.5).

Now let's rotate the view freely, so that the face of the last extrusion that faces the front is selectable; select it and move it -0.5 units along the Y axis (press the G key, Y key, then type -0.5). Let's take a look at a screenshot to make sure that we are on the right path:

Note the (fairly noticeable) shapes showing the neck area, the head, and the torso of our model. Take a look at the face on the model's side, from where we'll later extrude the arm.

Now let's switch to Front View (View | Front), then select the upper face on the side of the torso of the model, extrude it, press Escape, and move it 0.16 units along the X axis (G key, X key, then type 0.16). Continue by scaling it down by a factor of 0.75 (S key, then type 0.75) and moving it up by 0.07 units (press the G key, Z key, then type 0.07). Then switch to Right View (View | Right) and move it 0.2 units along the Y axis (press the G key, Y key, then type 0.2). This gives us the starting point to extrude the arm.

Switch to Front View (View | Front) and perform another extrusion (on the face that remains selected from the previous extrusion), press Escape, and then move it 0.45 units along the X axis (press the G key, X key, then type 0.45). Then let's switch to Edge Select mode, deselect all edges (Select | Select/Deselect All), rotate the view so that the horizontal edges of the last extrusion are selectable, and then select the upper horizontal edge of the last extrusion; move it -0.16 units along the X axis (G key, X key, then type -0.16). Right after that, let's select the lower horizontal edge of the last extrusion and move it 0.66 units upwards (G key, Z key, then type 0.66). Finalize this tweak by selecting the last two edges that we worked with and moving them -0.15 units along the X axis (press the G key, X key, then type -0.15).
Let's also select the lower edge of the first extrusion that we made for the arm and move it 0.14 units along the X axis (press the G key, X key, then type 0.14). Since this process is a bit tricky, let's use a screenshot to help us ensure that we are performing it correctly:

The only reason to perform this weird tweaking of the base mesh is to ensure a proper topology (internal mesh structure) for the shoulder when the model is finished. Let's remember to take a look at the shoulder of the finished model and compare it with the previous screenshot to understand it.

Make sure to only select the face shown selected in the previous screenshot, and switch back to Front View (View | Front) to work on the arms. Extrude the selected face, press Escape, and then move it by 1.6 units along the X axis (press the G key, X key, then type 1.6). Then scale it down by a factor of 0.75 (press the S key, then type 0.75) and move it up 0.07 units (press the G key, Z key, then type 0.07). Continue by performing a second extrusion, press Escape, and then move it 1.9 units along the X axis (press the G key, X key, then type 1.9). Then let's perform a scale constrained to the Y axis, this time by a factor of 0.5 (press the S key, Y key, then type 0.5). To perform some tweaks, let's switch to Top View (View | Top) and move the selected face 0.17 units along the Y axis (press the G key, Y key, then type 0.17).

To model the simple shape that we will create for the hand, let's make sure that we have selected the rightmost face from the last extrusion, extrude it, and move it 0.25 units along the X axis (press the G key, X key, then type 0.25). Then perform a second extrusion and move it 0.25 units along the X axis as well, and finish the extrusions by adding a last one, this time moving it 0.6 units along the X axis (press the G key, X key, then type 0.6).

For the thumb, let's select the face pointing forwards in the second-last extrusion, extrude it, and move the extruded face to the right and down (remember we are in Top View) so that it looks well shaped with the rest of the hand. For this we can perform a rotation of the selected face to orient it better. To finish the hand, let's select the faces forming the thumb and the one between the thumb and the other "fingers", and move them -0.12 units along the Y axis (press the G key, Y key, then type -0.12). Also select the two faces on the other side and move them 0.08 units along the Y axis (press the G key, Y key, then type 0.08). The following screenshot should be very helpful to follow the process:

Now it's time to model the legs of our character. For that, let's pan the 3D View to get the lower face visible, select it, extrude it, and move it -0.4 units (press the G key, Z key, then type -0.4). Now switch to Edge Select mode, select the rightmost edge of the face we just extruded down, and move it -0.85 units along the X axis (G key, X key, then type -0.85). To extrude the thigh, let's first switch to Face Select mode, select the face that runs diagonally after we moved the edge in the previous step, then switch to Front View (View | Front), extrude the face, press Escape, and then apply a scale operation along the Z axis by a factor of 0 (press the S key, Z key, then type 0), to get it looking entirely flat. With the face from the last extrusion selected, let's move it -0.8 units along the Z axis (press the G key, Z key, then type -0.8).
Right after that, let's scale the selected face by a factor of 1.28 along the X axis (press the S key, X key, then type 1.28) and move it 0.06 units along the X axis (press the G key, X key, then type 0.06). Now switch to Right View (View | Right), scale it by a factor of 0.8 along the Y axis (press the S key, Y key, then type 0.8), and then move it -0.12 units along the Y axis (press the G key, Y key, then type -0.12).

Perform another extrusion, then press Escape and move it -2.2 units along the Z axis (press the G key, Z key, then type -2.2). To give it a better form, let's now scale the selected face by a factor of 0.8 along the Y axis (press the S key, Y key, then type 0.8) and move it 0.05 units along the Y axis (press the G key, Y key, then type 0.05). To complete the thigh, let's switch to Front View (View | Front), scale it by a factor of 0.7 along the X axis (press the S key, X key, then type 0.7), and then move it -0.18 units along the X axis (press the G key, X key, then type -0.18).

Right after the thigh, let's continue working on the leg. Make sure that the face at the tip of the previous extrusion is selected, extrude it, press Escape, then move it -2.3 units along the Z axis (press the G key, Z key, then type -2.3). Then let's switch to Right View (View | Right), scale it by a factor of 0.7 along the Y axis (press the S key, Y key, then type 0.7), and move it -0.02 units along the Y axis (press the G key, Y key, then type -0.02).

Now we just need to create the feet, by extruding the face selected previously and moving it -0.6 units along the Z axis (press the G key, Z key, then type -0.6). Then select the face of the last extrusion that faces the front, extrude it, press Escape, and move it -1.9 units along the Y axis. As a final touch, let's switch to Edge Select mode, then select the upper horizontal edge of the last extrusion and move it -0.3 units along the Z axis (press the G key, Z key, then type -0.3). Let's take a look at a couple of screenshots showing how our model should look by now:

In the previous screenshot, we can see the front part, whereas the back side of the model is seen in the next one. Let's take a couple of minutes to inspect the screenshots and compare them to our actual model, to be entirely sure that we have the correct mesh now. Notice that our model isn't looking especially nice yet; that's because we've just worked on creating the mesh, and the actual form will be worked out in the coming tasks.

Objective Complete - Mini Debriefing

In this task we just performed the very initial step of our modeling process: creating the base mesh to work with. In order to avoid overly complicated written explanations, we are using a modeling process that leaves the actual "shaping" for later, so we only worked out the topology of our mesh and laid out some simple foundations, such as general proportions. The good thing about this approach is that we put in effort where it is really required, saving some time and enjoying the process a lot more.

Classified Intel

There are two main methods for modeling: poly-to-poly modeling and box modeling. The poly-to-poly method is about working with very localized (often detailed) geometry, paying attention to how each polygon is laid out in the model. The box modeling method is about constructing the general form very fast, by using the extrude operation, while paying attention to general aspects such as proportions, deferring the detailed tweaks for later. In this project we apply the box modeling method.
We just worked out a very simple mesh, mostly by performing extrusions and very simple tweaks. Our main concern while doing this task was to keep proportions correct, forgetting about the fine details of the "form" that we are working out. The next tasks of this project will be about using Blender's sculpting tools to ease the tweaking job a lot, getting a very nice model in the end without having to tweak individual vertices!
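For readers who prefer scripting, here is a minimal Python sketch of two key setup steps from this task, written against Blender's 2.5-series bpy API. The object name character matches this project, but treat the property names as assumptions that may vary slightly between 2.5.x releases:

import bpy

obj = bpy.data.objects["character"]

# Add the Mirror modifier and enable Clipping, as done in the Modifiers tab
# ('use_clip' is the assumed property name for the UI's Clipping option)
mirror = obj.modifiers.new(name="Mirror", type='MIRROR')
mirror.use_clip = True

# One extrude-and-move step, assuming Edit Mode with the top face selected
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.extrude_region_move(
    TRANSFORM_OT_translate={"value": (0.0, 0.0, 1.0)})  # extrude 1 unit up
bpy.ops.transform.resize(value=(0.3, 0.3, 0.3))         # S, then 0.3
bpy.ops.transform.translate(value=(0.0, 0.6, 0.0))      # G, Y, then 0.6
bpy.ops.object.mode_set(mode='OBJECT')

Every numbered move in this task maps onto one of these operator calls, which is handy if you ever want to replay the whole base mesh from a script.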
How to Create a New Vehicle in CryENGINE 3

Packt
23 Jun 2011
12 min read
CryENGINE 3 Cookbook: Over 100 recipes written by Crytek developers for creating AAA games using the technology that created Crysis 2

Creating a new car mesh (CGA)

In this recipe, we will show you how to build the basic mesh structure for your car, to be used in the next recipe. This recipe is not to be viewed as a guide on how to model your own mesh, but rather as a template for how the mesh needs to be structured to work with the XML script of the vehicle. For this recipe, you will be using 3DSMax to create and export your .CGA.

.CGA (Crytek Geometry Animation): The .cga file is created in the 3D application and contains animated hard-body geometry data. It only supports directly-linked objects and does not support skeleton-based animation (bone animation) with weighted vertices. It works together with .anm files.

Getting ready

Create a box primitive and four cylinders within Max, and then create a new dummy helper.

How to do it...

After creating the basic primitives within Max, we need to rename these objects. Rename the primitives to match the following naming convention:

Helper = MyVehicle
Box = body
Front Left Wheel = wheel1
Front Right Wheel = wheel2
Rear Left Wheel = wheel3
Rear Right Wheel = wheel4

Remember that CryENGINE 3 assumes that Y is forward. Rotate and reset any X-forms if necessary.

From here you can now set up the hierarchy to match what we will build into the script:

In Max, link all the wheels to the body mesh.
Link the body mesh to the MyVehicle dummy helper.

Your hierarchy should look like the following screenshot in the Max schematic view:

Next, you will want to create a proxy mesh for each wheel and the body. Be sure to attach these proxies to each mesh. Proxy meshes can be a direct duplication of the simple primitive geometry we have created.

Before we export this mesh, make one final adjustment to the positioning of the vehicle:

Move the body and the wheels up on the Z axis to align the bottom surface of the wheels flush with 0 on the Z.
Without moving the body or the wheels, be sure that the MyVehicle helper is positioned at 0,0,0 (this is the origin of the vehicle).
Also, re-align the pivot of the body to 0,0,0.

Once complete, your left viewport should look something like the following screenshot (if you still have your body selected):

After setting up the materials, you are now ready to export the CGA:

Open the CryENGINE Exporter from the Utilities tab.
Select the MyVehicle dummy helper and click the Add Selected button.
Change the export to: Animated Geometry (*.cga).
Set Export File per Node to True.
Set Merge All Nodes to False.
Save this Max scene in the following directory: MyGameFolder\Objects\vehicles\MyVehicle.
Now, click on Export Nodes to export the CGA.

How it works...

This setup of the CGA is the basic setup of the majority of the four-wheeled vehicles used in CryENGINE 3. The same basic setup can also be seen in the HMMWV provided in the assets included with the SDK package of CryENGINE 3. Even though the complete HMMWV may seem to be a very complicated mesh for a vehicle, it can be broken down into the same basic structure as the vehicle we just created.

The main reason for the separation of the parts on the vehicles is that each part performs its own function. Since the physics of the vehicle code drives the vehicle forward in the engine, it actually controls each wheel independently, so it can animate them based on what they can do at that moment.
This means that you have the potential for four-wheel drive on all CryENGINE 3 vehicles, with each wheel animating at a different speed based on the friction it grips with. Since all of the wheels are parented to the body (or hull) mesh, they drive their parent (the body of the vehicle), but the body also handles where the wheels need to be offset from in order to stay aligned when driving. The body itself acts as the base mesh for all other extras put onto the vehicle: everything else, from turrets to doors to glass windows, branches out from the body. The dummy helper is only the parent of the body mesh because it makes it easier to export multiple LODs for that vehicle (for example, HMMWV, HMMWV_LOD1, HMMWV_LOD2, and so on). In the XML, this dummy helper is ignored in the hierarchy and the body is treated as the parent node.

There's more...

Here are some of the more advanced techniques used.

Dummy helpers for modification of the parts

A more advanced trick is the use of dummy helpers set inside the hierarchy, to be referenced later through the vehicle's mod system. How this works is that if you had a vehicle such as the basic car shown previously, but you wanted to add an additional mesh to create a modified variant of the same car (something like adding a spoiler to the back), then you can create a dummy helper and align it to the pivot of the object, so the mesh will line up with the body when added through the script later on. This same method was used in Crysis 2 with the Taxi signs on top of the taxi cars. The taxi itself was the same model used as the basic civilian car, but had an additional dummy helper where the sign needed to be placed. This allowed for a clever way to save memory when rendering multiple vehicle props within a single area, while making each car look different.

Parts for vehicles and their limitless possibilities

Adding the basic body and four wheels to make a basic car model is only the beginning. There are limitless possibilities as far as the parts on a vehicle are concerned: anything from a classic gunner turret, as seen on the HMMWV, or even tank turrets, all the way to arms for an articulated Battlemech, as seen in the Crysis 2 total conversion mod MechWarrior: Living Legends. Along with the modifications system, you have the ability to add a great deal of extra parts to be detached and exploded off through the damage scripts later on. The possibilities are limitless.

Creating a new car XML

In this recipe, we will show you how to build a new script for CryENGINE 3 to recognize your car model as a vehicle entity. For this recipe, you must have some basic knowledge of XML formatting.

Getting ready

Open DefaultVehicle.xml in the XML editor of your choice (Notepad, Notepad++, UltraEdit, and so on). This XML will be used as the basic template to construct our new vehicle XML. DefaultVehicle.xml is found at the following location: MyGameFolder\Scripts\Entities\Vehicles\Implementations\Xml.

Open the MyVehicle.max scene made in the previous recipe, to use as a reference for the parts section within this recipe.

How to do it...

Basic Properties: First, we will need to rename the file to the vehicle's name:

Delete filename = Objects/Default.cgf.
Rename name = DefaultVehicle to name = MyVehicle.
Add actionMap = landvehicle to the end of the cell.
Save the file as MyVehicle.XML.
Your first line should now look like the following:

<Vehicle name="MyVehicle" actionMap="landvehicle">

Downloading the example code: You can download the example code files for all Packt books you have purchased from your account at http://www.PacktPub.com. If you purchased this book elsewhere, you can visit http://www.PacktPub.com/support and register to have the files e-mailed directly to you.

Now we need to add some physics simulation to the vehicle, otherwise there might be some strange reactions with the vehicle. Insert the following after the third line (after the Buoyancy cell):

<Simulation maxTimeStep="0.02" minEnergy="0.002" maxLoggedCollisions="2"/>

Damages and Components: For now, we will skip the Damages and Components cells, as we will address them in a different recipe.

Parts: To associate the parts made in the Max file, the hierarchy of the geometry in 3DSMax needs to be exactly the same as is referenced in the XML. To do this, we will first clear out the class = Static cell and replace it with the following:

<Part name="body" class="Animated" mass="100" component="Hull">
  <Parts>
  </Parts>
  <Animated filename="objects/vehicles/MyVehicle/MyVehicle.cga"
    filenameDestroyed="objects/vehicles/HMMWV/HMMWV_damaged.cga"/>
</Part>

Now, within the <Parts> tag that is underneath the body, we will put in the wheels as its children:

<Parts>
  <Part name="wheel1" class="SubPartWheel" component="wheel_1" mass="80">
    <SubPartWheel axle="0" density="0" damping="-0.7" driving="1" lenMax="0.4"
      maxFriction="1" minFriction="1" slipFrictionMod="0.3" stiffness="0"
      suspLength="0.25" rimRadius="0.3" torqueScale="1.1"/>
  </Part>
</Parts>

Remaining within the <Parts> tag, add in wheels 2-4 using the same values as listed previously. The only difference is that you must change the axle property of wheels 3 and 4 to the value of 1 (vehicle physics has an easier time calculating what the wheels need to do if only two wheels are associated with a single axle).

The last part that needs to be added is the MassBox. This part isn't actually a mesh made in 3DSMax, but a bounding box generated by code, with its mass and size defined here in the XML. Write the following code snippet after the <body> tag:

<Part name="massBox" class="MassBox" mass="1500" position="0,0,1."
  disablePhysics="0" disableCollision="0" isHidden="0">
  <MassBox size="1.25,2,1" drivingOffset="-0.7"/>
</Part>

If scripted correctly, your script should look similar to the following for all of the parts on your vehicle:

<Parts>
  <Part name="body" class="Animated" mass="100" component="Hull">
    <Parts>
      <Part name="wheel1" class="SubPartWheel" component="wheel_1" mass="80">
        <SubPartWheel axle="0" density="0" damping="-0.7" driving="1" lenMax="0.4"
          maxFriction="1" minFriction="1" slipFrictionMod="0.3" stiffness="0"
          suspLength="0.25" rimRadius="0.3" torqueScale="1.1"/>
      </Part>
      <Part name="wheel2" class="SubPartWheel" component="wheel_2" mass="80">
        <SubPartWheel axle="0" density="0" damping="-0.7" driving="1" lenMax="0.4"
          maxFriction="1" minFriction="1" slipFrictionMod="0.3" stiffness="0"
          suspLength="0.25" rimRadius="0.3" torqueScale="1.1"/>
      </Part>
      <Part name="wheel3" class="SubPartWheel" component="wheel_3" mass="80">
        <SubPartWheel axle="1" density="0" damping="-0.7" driving="1" lenMax="0.4"
          maxFriction="1" minFriction="1" slipFrictionMod="0.3" stiffness="0"
          suspLength="0.25" rimRadius="0.3" torqueScale="1.1"/>
      </Part>
      <Part name="wheel4" class="SubPartWheel" component="wheel_4" mass="80">
        <SubPartWheel axle="1" density="0" damping="-0.7" driving="1" lenMax="0.4"
          maxFriction="1" minFriction="1" slipFrictionMod="0.3" stiffness="0"
          suspLength="0.25" rimRadius="0.3" torqueScale="1.1"/>
      </Part>
    </Parts>
    <Animated filename="objects/vehicles/MyVehicle/MyVehicle.cga"
      filenameDestroyed="objects/vehicles/HMMWV/HMMWV_damaged.cga"/>
  </Part>
  <Part name="massBox" class="MassBox" mass="1500" position="0,0,1."
    disablePhysics="0" disableCollision="0" isHidden="0">
    <MassBox size="1.25,2,1" drivingOffset="-0.7"/>
  </Part>
</Parts>

Movement Parameters: Finally, you will need to implement the MovementParams, so that the XML can access a particular movement behavior from the code that will propel your vehicle.
To get started right away, we have provided an example of the ArcadeWheeled parameters, which we can copy over to MyVehicle:

<MovementParams>
  <ArcadeWheeled>
    <Steering steerSpeed="45" steerSpeedMin="80" steerSpeedScale="1"
      steerSpeedScaleMin="1" kvSteerMax="26" v0SteerMax="40"
      steerRelaxation="130" vMaxSteerMax="12"/>
    <Handling>
      <RPM rpmRelaxSpeed="2" rpmInterpSpeed="4" rpmGearShiftSpeed="2"/>
      <Power acceleration="8" decceleration="0.1" topSpeed="32" reverseSpeed="5"
        pedalLimitMax="0.30000001"/>
      <WheelSpin grip1="5.75" grip2="6" gripRecoverSpeed="2" accelMultiplier1="1.2"
        accelMultiplier2="0.5"/>
      <HandBrake decceleration="15" deccelerationPowerLock="1" lockBack="1"
        lockFront="0" frontFrictionScale="1.1" backFrictionScale="0.1"
        angCorrectionScale="5" latCorrectionScale="1" isBreakingOnIdle="1"/>
      <SpeedReduction reductionAmount="0" reductionRate="0.1"/>
      <Friction back="10" front="6" offset="-0.2"/>
      <Correction lateralSpring="2" angSpring="10"/>
      <Compression frictionBoost="0" frictionBoostHandBrake="4"/>
    </Handling>
    <WheeledLegacy damping="0.11" engineIdleRPM="500" engineMaxRPM="5000"
      engineMinRPM="100" stabilizer="0.5" maxTimeStep="0.02" minEnergy="0.012"
      suspDampingMin="0" suspDampingMax="0" suspDampingMaxSpeed="3"/>
    <AirDamp dampAngle="0.001,0.001,0.001" dampAngVel="0.001,1,0"/>
    <Eject maxTippingAngle="110" timer="0.3"/>
    <SoundParams engineSoundPosition="engineSmokeOut" runSoundDelay="0"
      roadBumpMinSusp="10" roadBumpMinSpeed="6" roadBumpIntensity="0.3"
      maxSlipSpeed="11"/>
  </ArcadeWheeled>
</MovementParams>

After saving your XML, open the Sandbox Editor and place your vehicle down from the Entity types: Vehicles | MyVehicle. You should now be able to enter this vehicle (get close to it and press the F key) and drive around (W = accelerate, S = brake/reverse, A = turn left, D = turn right)! (A small validation sketch for this XML appears at the end of this article.)

How it works...

The parts defined here in the XML are usually an exact match to the Max scene in which the vehicle was created. As long as the names of the parts and the names of the sub-objects within Max are the same, the vehicle structure should work. The parts in the XML can themselves be broken down into their own properties:

Name: The name of the part.
Class: The classification of the part:
  Base (obsolete)
  Static: Static vehicle (should not be used).
  Animated: The main part for an active rigid body of a vehicle.
  AnimatedJoint: Used for any other part that is a child of the Animated part.
  EntityAttachment (obsolete)
  Light: Light parts for headlights, rear lights, and so on.
  SubPart (obsolete)
  SubPartWheel: Wheels.
  Tread: Used with tanks.
  MassBox: Driving mass box of the vehicle.
Mass: Mass of the part (usually used when the part is detached).
Component: The component this part is linked to. If the component uses useBoundsFromParts="1", then this part will also be included in the total bounding box size.
Filename: If a dummy helper is created in Max to be used as a part, then an external mesh can be referenced and used as this part.
DisablePhysics: Prevents the part from being physicalized as rigid.
DisableCollision: Disables all collision; useful, for example, for mass boxes.
isHidden: Hides the part from rendering.

There's more...

The def_vehicle.xml file, found in MyGameFolder\Scripts\Entities\Vehicles, holds all the property definitions that can be utilized in the XML of the vehicles. After following the recipes found in this article, you may want to review def_vehicle.xml for further, more advanced properties that you can add to your vehicles.
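Since a single misnamed part will silently break the vehicle, it can help to sanity-check the XML before loading it in the editor. The following is an illustrative plain-Python sketch (not a CryENGINE tool) using the standard library's ElementTree; the file path is an assumption, so adjust it to your project layout:

import xml.etree.ElementTree as ET

REQUIRED_WHEELS = {"wheel1", "wheel2", "wheel3", "wheel4"}

tree = ET.parse("MyVehicle.xml")  # assumed path to the XML written above
wheels = tree.getroot().findall(".//Part[@class='SubPartWheel']")

# Check that every wheel the physics expects is present and correctly named
missing = REQUIRED_WHEELS - {p.get("name") for p in wheels}
if missing:
    raise SystemExit(f"Missing wheel parts: {sorted(missing)}")

# Group wheels by axle: two wheels per axle keeps the vehicle physics happy
axles = {}
for part in wheels:
    axle = part.find("SubPartWheel").get("axle")
    axles.setdefault(axle, []).append(part.get("name"))

for axle, names in sorted(axles.items()):
    print(f"axle {axle}: {sorted(names)}")  # expect two entries per axle

Run against the listing above, this prints two wheels on axle 0 and two on axle 1; anything else means the XML and the Max hierarchy have drifted apart.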
Blender 2.5: Rigging the Torso

Packt
22 Jun 2011
9 min read
Blender 2.5 Character Animation Cookbook: 50 great recipes for giving soul to your characters by building high-quality rigs

This article is about our character's torso: we're going to see how to create hips, a spine, and a neck. Aside from what you'll learn here, it's important for you to take a look at how other rigs were built. You'll see some similarities, but also some new ideas to apply to your own characters. It's pretty rare to see two rigs built exactly the same way.

How to create a stretchy spine

A human spine, also called the vertebral column, is a bony structure that consists of several vertebrae (24, or 33 if you consider the pelvic region). It acts as our main axis and allows us a lot of flexibility to bend forward, sideways, and backward.

And why is this important to know? That number of vertebrae is useful information for us riggers. Not that we're going to create all those tiny bones to make our character's spine look real, but that information can be used within Blender. You can subdivide one physical bone into up to 32 logical segments (which can be seen in the B-Bone visualization mode), and the bone will make a curved deformation based on its parent and child bones. That allows us to get pretty good deformations on our character's spine while keeping the number of bones to a minimum.

This is good for realistic deformation, but in animation we often need the liberty to squash and stretch our character, and this is needed not only in cartoony animations, but to emphasize realistic poses too. We're going to see how to use some constraints to achieve that. We're going to talk about just the spine, without the pelvic region; the latter needs a different setup, which is out of the scope of this article.

How to do it...

Open the file 002-SpineStretch.blend from the support files. It's a mesh with some bones already set for the limbs, as you can see in the next screenshot. There's no weight painting yet, because it's waiting for you to create the stretchy spine.

Select the armature and enter its Edit Mode (Tab). Go to side view (Numpad 3); make sure the 3D cursor is located near the character's back, in line with what would be his belly button. Press Shift + A to add a new bone. Move its tip to a place near the character's eyes.

Go to the Properties window, under the Object Data tab, and switch the armature's display mode to B-Bone. You'll see that the bone you just created is a bit fat; let's make it thinner using the B-Bone scale tool (Ctrl + Alt + S).

With the bone still selected, press W and select Subdivide. Do the same to the remaining bones, so we end up with five bones. Still in side view, you can select and move (G) the individual joints to best fit the mesh, building that curved shape common in a human spine, ending with a bone to serve as the head, as seen in the next screenshot:

Name these bones D_Spine1, D_Spine2, D_Spine3, D_Neck, and D_Head.

You may think just five bones aren't enough to build a good spine, and here's where Blender's great rigging tools come to help us. Select the D_Neck bone, go to the Properties window, under the Bone tab, and increase the value of Segments in the Deform section to 6. You will not notice any difference yet. Below the Segments field are the Ease In and Ease Out sliders. These control the amount of curved deformation on the bone at its base and its tip, respectively, and can range from 0 (no curve) to 2. Select the next bone below in the chain (D_Spine3) and change its Segments value to 8.
Do the same to the remaining bones below, with values of 8 and 6, respectively. To see the results, go out of Edit Mode (Tab). You should end up with a nice curvy spine, as seen in the following screenshot:

Since these bones are already set to deform the mesh, we could just add some shapes to them and move our character's torso to get a nice spine movement. But that's not enough for us, since we also want the ability to make this character stretch.

Go back into Edit Mode, select the bones in this chain, press Shift + W, and select No Scale. This makes sure that the stretching of a parent bone is not transferred to its children. This can also be accomplished under the Properties window, by disabling the Inherit Scale option of each bone.

Still in Edit Mode, select all the spine bones and duplicate them (Shift + D). Press Esc to make them stay at the same location as the original chain, followed by Ctrl + Alt + S to make them fatter (to allow us to distinguish the two chains). In Pose Mode, these bones would also appear subdivided, which can make our view quite cluttered: change the Segments property of each one back to 1 and disable their Deform property on the same panel under the Properties window. Name these new bones Spine1, Spine2, Spine3, Neck, and Head, go out of Edit Mode (Tab), and you should have something similar to the next screenshot:

Now let's create the appropriate constraints. Enter Pose Mode (Ctrl + Tab), select the bone Spine1, hold Shift, and select D_Spine1. Press Shift + Ctrl + C to bring up the Constraints menu. Select the Copy Location constraint. This will make the deformation chain move when you move the Spine1 bone.

The Copy Location constraint here is added because there is no pelvic bone in this example, since its creation involves a different approach, which we'll see in the next recipe, Rigging the pelvis. With the pelvic bone below the first spinal bone, its location will drive the location of the rest of the chain, since it will be the chain's root bone. Thus, this constraint won't be needed once the pelvis is added. Make sure that you check out our next recipe, dedicated to creating the pelvic bone.

With those bones still selected, bring up the Constraints menu again and select the Stretch To constraint. You'll see that the deformation chain will seem to disappear, but don't panic. Go to the Properties panel, under the Bone Constraints tab, and look for the Stretch To constraint you have just created. Change the value of the Head or Tail slider to 1, so the constraint is evaluated considering the tip of the Spine1 bone instead of its base. Things will look different now, but not yet correct. Press the Reset button to recalculate the constraints and make things look normal again. This constraint causes the first deformation bone to be stretched when you scale (S) the Spine1 bone. Try it and see the results. The following screenshot shows the constraint values:

This constraint should be enough for stretching, and we may think it could replace the Copy Rotation constraint. That's not true, since the Stretch To constraint does not apply rotations on the bone's longitudinal Y axis. So, let's add a Copy Rotation constraint. In the 3D View, with Spine1 and D_Spine1 selected (in that order; that's important!), press Ctrl + Shift + C and choose the Copy Rotation constraint. Since the two bones have the exact same size and position in 3D space, you don't need to change any of the constraint's settings.
You should add the Stretch To and Copy Rotation constraints to the remaining controller bones in exactly the same way you did for the D_Spine1 bone in steps 9 to 12.

As the icing on the cake, disable the X and Z scaling transformations on the controller bones. Select each one, go to the Transform panel (N), and press the lock button near the X and Z sliders under Scale. Now, when you select any of these controller bones and press S, the scale is applied only on their Y axis, making the deforming bones stretch properly. Remember that the controller bones also work as expected when rotated (R). The next screenshot shows the locking applied:

Enter Edit Mode (Tab), select the Shoulder.L bone, hold Shift, and select both Shoulder.R and Spine3 (in this order; that's important). Press Ctrl + P and choose Keep Offset, to make both shoulder controllers children of the Spine3 bone, and disable their scale inheriting either through Shift + W or the Bone tab on the Properties panel.

When you finish setting these constraints and applying the rig to the mesh through weight painting, you can achieve something stretchy, as you can see in the next screenshot:

The file 002-SpineStretch-complete.blend has this complete recipe, for your reference in case of doubts.

How it works...

When creating spine rigs in Blender, there's no need to create lots of bones, since Blender allows us to logically subdivide each one to get soft, curved deformations. The amount of curved deformation can also be controlled through the Ease In and Ease Out sliders, and it works well with stretching. When you scale a bone on its local Y axis in Pose Mode, it doesn't retain its volume, so the mesh deformed by it would be scaled without the stretching feeling. You must create controller bones to act as targets for the Stretch To constraint, so that when they're scaled, the constrained bones stretch and deform the mesh with its volume preserved.

There's more...

You should notice that the spine controllers will be hidden inside the character's body when you turn off the armature's X-Ray property. Therefore, you need to create some custom shapes for these controller bones in order to make your rig more usable.
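If you'd rather script this constraint setup than repeat it by hand for every controller bone, a sketch along these lines should work under the 2.5-series Python API. The armature object name is an assumption, and in practice you would loop this over all the spine bones:

import bpy

arm = bpy.data.objects["Armature"]      # assumed name of the rig's object
pbone = arm.pose.bones["D_Spine1"]      # the deform bone being constrained

# Stretch To, evaluated at the controller's tip (Head or Tail slider = 1)
stretch = pbone.constraints.new('STRETCH_TO')
stretch.target = arm
stretch.subtarget = "Spine1"            # the fat controller bone
stretch.head_tail = 1.0

# Copy Rotation: same size and position, so the defaults are fine
copy_rot = pbone.constraints.new('COPY_ROTATION')
copy_rot.target = arm
copy_rot.subtarget = "Spine1"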
CryENGINE 3: Terrain Sculpting

Packt
21 Jun 2011
11 min read
CryENGINE 3 Cookbook: Over 100 recipes written by Crytek developers for creating AAA games using the technology that created Crysis 2

Creating a new level

Before we can do anything with the gameplay of the project that you are creating, we first need the foundation of a new level for the player to stand on. This recipe will cover how to create a new level from scratch.

Getting ready

Before we begin, you must have Sandbox 3 open.

How to do it...

At any point, with Sandbox open, you may create a new level by following these steps:

Click File (found at the top-left of the Sandbox's main toolbar).
Click New.

From here, you will see a new dialog screen that will prompt you for information on how you want to set up your level. The most important aspect of a level is naming it, as you will not be able to create a level without a proper name for the level's directory and its .cry file. You may name your level anything you wish, but for ease of instruction we shall refer to this level as My_Level:

In the Level Name dialog box, type in My_Level.
For the Terrain properties, use the following values:
  Use Custom Terrain Size: True
  Heightmap Resolution: 512x512
  Meters Per Unit: 1
Click OK.

Depending on your system specifications, creating a new level may take anywhere from a few seconds to a couple of minutes. Once finished, the Viewport should display a clear blue sky, with the dialog in your console reading the following three lines:

Finished synchronous pre-cache of render meshes for 0 CGF's
Finished pre-caching camera position (1024,1024,100) in 0.0 sec
Spawn player for channel 1

This means that the new level was created successfully.

How it works...

Let's take a closer look at each of the options used while creating this new level.

Using the Terrain option

This option allows the developer to control whether the level has any terrain to be manipulated by a heightmap. Terrain can be expensive for levels; if any of your future levels contain only interiors, or only placed objects for the player to navigate on, then setting this value to false is a good choice, saving a tremendous amount of memory and aiding the performance of the level later on.

Heightmap resolution

This drop-down controls the resolution of the heightmap and the base size of the play area defined. The settings range from the smallest resolution (128 x 128) all the way up to the largest supported resolution (8192 x 8192).

Meters per unit

If the Heightmap Resolution is looked at in terms of pixel size, then this dialog box can also be viewed as the Meters Per Pixel. This means that each pixel of the heightmap will be represented by this many meters. For example, if a heightmap's resolution has 4 Meters Per Unit (or Pixel), then each pixel on the generated heightmap will measure four meters in length and width on the level. Even though Meters Per Unit can be used to increase the size of your level, it will decrease the fidelity of the heightmap: you will notice that attempting to smooth out the terrain may be difficult, as there will be a wider minimum triangle size set by this value.

Terrain size

This is the resulting size of the level, given by the equation (Heightmap Resolution) x (Meters Per Unit). Here are some examples of the results you will see (m = meters):

(128x128) x 4m = 512x512m
(512x512) x 16m = 8192x8192m
(1024x1024) x 2m = 2048x2048m

There's more...
If you need to change your unit size after creating the map, you may change it by going into the Terrain Editor | Modify | Set Unit Size. This allows you to change the original Meters Per Unit to the size you want.

Generating a procedural terrain

This recipe deals with the procedural generation of a terrain. Although never good enough for a final product (you will want to fine-tune the heightmap to your specifications), these generated terrains are a great starting point for anyone new to creating levels, or for anyone who needs to set up a test level with the Sandbox. With different heightmap seeds and a couple of tweaks to the height of the level, you can quickly generate basic mountain ranges or islands that are instantly ready to use.

Getting ready

Have My_Level open inside of Sandbox.

How to do it...

At the top-middle of the Sandbox main toolbar, you will find a menu selection called Terrain. From there you should see a list of options; for now, click on Edit Terrain. This opens the Terrain Editor window.

The Terrain Editor window has a multitude of options that can be used to manipulate the heightmap in your level. But first we want to set up a basic generated heightmap to build a simple map with. Before we generate anything, we should first set the maximum height of the map to something more manageable:

Click Modify.
Then click Set Max Height.
Set your Max Terrain Height to 256 (these units are in meters).

Now we can generate the terrain:

Click Tools.
Then click Generate Terrain.
Modify the Variation (Random Base) to the value of 15.
Click OK.

After generating, you should be able to see a heightmap similar to the following screenshot:

This is a good time to generate the surface texture (File | Generate surface texture | OK), which allows you to see the heightmap with a basic texture in the Perspective View.

How it works...

The Maximum Height value is important, as it governs the maximum height to which you can raise your terrain. This does not mean that it is the maximum height of your level entirely, as you are still able to place objects well above this value. It is also important to note that if you import a grey-scale heightmap into CryENGINE, then this value will be used as the upper extreme of the heightmap (255,255,255 white), and the lower extreme will always be 0 (0,0,0 black). The heightmap will therefore be generated between a height of 0 m and the maximum height.

Problems such as the following are a common occurrence:

Tall spikes are everywhere on the map, or there are massive mountains and steep slopes. Solution: Reduce the Maximum Height to a value more suited to the mountains and slopes you want.

The map is very flat and has no hills or anything from my heightmap. Solution: Increase the Maximum Height to a value suitable for making the hills you want.

There's more...

Here are some other settings you might choose to use while generating the terrain.

Terrain generation settings

The following are the settings used to generate a procedural terrain (a toy Python sketch of the seed-and-blur idea appears at the end of this article):

Feature Size: This value handles the general height manipulations within the seed and the size of each mound within the seed. As the size of the feature depends greatly on rounded numbers, it is easy to end up with a perfectly rounded island, so it is best to leave this value at 7.0.

Bumpiness / Noise (Fade): Basically, this is a noise filter for the level. The greater the value, the more noise will appear on the heightmap.
Detail (Passes): This value controls how detailed the slopes will become. By default, this value is very high, so that individual bumps are visible on the slopes, giving a better impression of a rougher surface. Reducing this value will decrease the amount of detail/roughness seen in the slopes.

Variation: This controls the seed number used in the overall generation of the terrain heightmap. There are a total of 33 seeds, ranging from 0 to 32, to choose from as a starting base for a basic heightmap.

Blurring (Blur Passes): This is a blur filter. The higher the amount, the smoother the slopes on your heightmap will be.

Set Water Level: From the Terrain Editor window, you can adjust the water level from Modify | Set Water Level. This value changes the base height of the ocean level (in meters).

Make Isle: This tool allows you to take the heightmap from your level and automatically lower the border areas around the map to create an island. From the Terrain Editor window, select Modify | Make Isle.

Navigating a level with the Sandbox Camera

The ability to intuitively navigate levels is a basic skill that all developers should be familiar with. Thankfully, this interface is quite intuitive to anyone who is already familiar with the WASD control scheme popular in most First Person Shooter games developed on the PC.

Getting ready

You should have already opened a level from the CryENGINE 3 Software Development Kit content and seen a perspective viewport displaying the level. The window where you can see the level is called the Perspective Viewport window. It is used as the main window to view and navigate your level. This is where a large majority of your level will be created, and where common tasks such as object placement, terrain editing, and in-editor play testing will be performed.

How to do it...

The first step to interacting with the loaded level is to practice moving in the Perspective Viewport window. Sandbox is designed to be ergonomic for both left- and right-handed users. In this example, we use the WASD control scheme, but the arrow keys are also supported for movement of the camera:

Press W to move forwards.
Press S to move backwards.
Press A to move, or strafe, left.
Finally, press D to move, or strafe, right.

Now that you have learned to move the camera on its main axes, it's time to adjust the rotation of the camera:

When the viewport is the active window, hold down the right mouse button and move the mouse pointer to turn the view.
You can also hold down the middle mouse button and move the mouse pointer to pan the view.
Roll the middle mouse button wheel to move the view forward or backward.
Finally, you can hold down Shift to double the speed of the viewport movements.

How it works...

The Viewport allows for a huge diversity of views and layouts for you to view your level; the perspective view is just one of many. The perspective view is commonly used, as it displays the output of the render engine. It presents you with a view of your level using the standard camera perspective, showing all level geometry, lighting, and effects. To experiment further with the viewport, note that it can also render subsystems and their toolsets, such as the flow graph or the character editor.

There's more...

You will likely want to adjust the movement speed and customize the viewport to your individual use. You can also split the viewport into multiple different views, which is discussed further.
Viewport movement speed control

The Speed input is used to increase or decrease the movement speed of all the movements you make in the main Perspective Viewport. The three buttons to the right of the Speed input are quick links to the 0.1, 1, and 10 speeds.

Under Views, you can adjust the viewport to view different aspects of your level

Top View, Front, and Left views will show their respective aspects of your level, consisting of bounding boxes and line-based helpers. It should be noted that geometry is not drawn. Map view shows an overhead map of your level, with helper, terrain, and texture information pertaining to your level.

Splitting the main viewport into several subviewports

Individual users can customize the layout and set viewing options specific to their needs, using the viewport menu accessed by right-clicking on the viewport's header. The Layout Configuration window can be opened from the viewport header under Configure Layout. Once selected, you will be able to select one of the preset configurations to arrange the windows of the Sandbox editor into multiple viewport configurations. It should be recognized that in multiple viewport configurations some rendering effects may be disabled or performance may be reduced.
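To close out this article, here is the toy Python sketch promised in the terrain generation settings. It ties the two terrain recipes together: terrain_size restates the size equation from Creating a new level, and generate_heightmap loosely mimics the seed-and-blur idea behind Generate Terrain. This is an illustration only, not the Sandbox algorithm:

import random

def terrain_size(heightmap_resolution, meters_per_unit):
    # Terrain size = (Heightmap Resolution) x (Meters Per Unit)
    return heightmap_resolution * meters_per_unit

assert terrain_size(128, 4) == 512
assert terrain_size(512, 16) == 8192
assert terrain_size(1024, 2) == 2048

def generate_heightmap(size=64, variation=15, blur_passes=2, max_height=256.0):
    random.seed(variation)                 # 'Variation' behaves like a seed
    h = [[random.random() for _ in range(size)] for _ in range(size)]
    for _ in range(blur_passes):           # each pass smooths the slopes
        h = [[(h[y][x] + h[y][(x + 1) % size] + h[(y + 1) % size][x]) / 3.0
              for x in range(size)] for y in range(size)]
    # grey-scale 0..1 maps onto 0..Max Terrain Height, as with imported maps
    return [[v * max_height for v in row] for row in h]

heightmap = generate_heightmap()
print(len(heightmap), "x", len(heightmap[0]), "height samples")

Raising blur_passes flattens the result, just as the Blurring setting does, and a larger max_height exaggerates spikes, which matches the troubleshooting advice in the Generating a procedural terrain recipe.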
CryENGINE 3: Sandbox Basics

Packt
21 Jun 2011
9 min read
CryENGINE 3 Cookbook: Over 100 recipes written by Crytek developers for creating AAA games using the technology that created Crysis 2

Placing the objects in the world

Placing objects is a simple task; however, basic terrain snapping is not explained to most new developers. It is common for them to ask why, when dragging and dropping an object into the world, they cannot see the object. This section will teach you the easiest way to place an object into your map: the Follow Terrain method.

Getting ready

Have My_Level open inside of Sandbox (after completing either the Terrain sculpting or the Generating a procedural terrain recipe).
Review the Navigating a level with the Sandbox Camera recipe to get familiar with the Perspective View.
Have the Rollup Bar open and ready.
Make sure you have the EditMode ToolBar open (right-click the top main ToolBar and tick EditMode ToolBar).

How to do it...

First, select the Follow Terrain button.
Then open the Objects tab within the Rollup Bar.
Now, from the Brushes browser, select any object you wish to place down (for example, defaults/box).
You may either double-click the object, or drag-and-drop it onto the Perspective View.
Move your mouse anywhere there is visible terrain, and then click once more to confirm the position in which you wish to place the object.

How it works...

The Follow Terrain tool is a simple tool that allows the pivot of the object to match the exact height of the terrain at that location. This is best seen on objects that have a pivot point close to or near their bottom.

There's more...

You can also follow terrain and snap to objects. This method is very similar to the Follow Terrain method, except that it also includes objects when placing or moving your selected object. This method does not work on non-physicalized objects.

Refining the object placement

After placing objects in the world with just Follow Terrain or Snapping to Objects, you might find that you need to adjust the position, rotation, or scale of an object. In this recipe, we will show you the basics of how to do so, along with a few hotkey shortcuts to make the process a little faster. This works with any object placed in your level, from Entities to Solids.

Getting ready

Have My_Level open inside of Sandbox.
Review the Navigating a level with the Sandbox Camera recipe to get familiar with the Perspective View.
Make sure you have the EditMode ToolBar open (right-click on the top main ToolBar and tick EditMode ToolBar).
Place any object in the world.

How to do it...

In this recipe, we will call your object (the one whose location you wish to refine) Box, for ease of reference.

Select Box. After selecting Box, you should see a three-axis widget on it, representing each axis in 3D space. By default, these axes align to the world:

Y = Forward
X = Right
Z = Up

To move the Box in world space and change its position, proceed with the following steps:

Click on the Select and Move icon in the EditMode ToolBar (1 is the keyboard shortcut).
Click on the X arrow and drag your mouse up and down, relative to the arrow's direction.
Releasing the mouse button will confirm the location change.

You may move objects either on a single axis, or on two at once by clicking and dragging on the plane adjacent to any two axes: X + Y, X + Z, or Y + Z.

To rotate an object, do the following:

Select Box (if you haven't done so already).
Click on the Select and Rotate icon in the EditMode ToolBar (2 is the keyboard shortcut).
Click on the Z arrow (it now has a sphere at the end of it) and drag your mouse from side to side to roll the object relative to the axis.
Releasing the mouse button will confirm the rotation change.

You cannot rotate an object along multiple axes at once.

To scale an object, do the following:

Select Box (if you haven't done so already).
Click on the Select and Scale icon in the EditMode ToolBar (3 is the keyboard shortcut).
Click on the CENTER box and drag your mouse up and down to scale on all three axes at once.
Releasing the mouse button will confirm the scale change.

It is possible to scale on just one axis or two axes; however, this is highly discouraged, as non-uniform scaling will result in broken physical meshes for that object. If you require an object to be scaled up, we recommend you only scale uniformly on all three axes!

There's more...

Here are some additional ways to manipulate objects within the world.

Local position and rotation

To make position or rotation refinement a bit easier, you might want to change how the widget positions or rotates your object, by setting it to align itself relative to the object's pivot. To do this, there is a drop-down menu in the EditMode ToolBar with the option to select Local; this is called Local Direction. This setup might help you position your object after you have rotated it.

Grid and angle snaps

To aid in the positioning of non-organic objects, such as buildings or roads, you may wish to turn on the Snap to Grid option. Turning this feature on will allow you to move the object on a grid (relative to its current location). To change the grid spacing, click the drop-down arrow next to the number (grid spacing is in meters). Angle Snaps is found immediately to the right of Grid Snaps. Turning this feature on will allow you to rotate an object in increments of five degrees.

Ctrl + Shift + Click

Though just a hotkey, this is extremely handy to many developers for the initial placement of objects. It allows you to quickly move the object to any point on any physical surface relative to your Perspective View.

Utilizing the layers for multiple developer collaboration

A common question about the CryENGINE is how one developer can work on the same level as another at the same time. The answer is: Layers. In this recipe, we will show you how to utilize the layer system not only for your own organization, but also to set up external layers for other developers to work on in parallel.

Getting ready

Have My_Level open inside of Sandbox.
Review the Navigating a level with the Sandbox Camera recipe to get familiar with the Perspective View.
Have the Rollup Bar open and ready.
Review the Placing the objects in the world recipe (place at least two objects).

How to do it...

For this recipe, we will assume that you have your own repository for your project, or some other means to send your work to your team.

First, start by placing down two objects on the map. For the sake of the recipe, we shall refer to them as Box1 and Box2.
After you've placed both boxes, open the Rollup Bar and bring up the Layers tab.
Create a new layer by clicking the New Layer button (paper with a + symbol).
A New Layer dialog box will appear. Give it the following parameters:
  Name = ActionBubble_01
  Visible = True
  External = True
  Frozen = False
  Export To Game = True
Now select Box1 and open the Objects tab within the Rollup Bar.
6. From here, in the main rollup of this object, you will see values such as Name, Helper Size, MTL, and Minimal Spec. In this rollup, you will also see a button for layers (it should be labelled Main). Clicking on that button shows a list of all other available layers. Clicking on another layer that is not highlighted will move this object to that layer (do this now by clicking on ActionBubble_01).
7. Now save your level by clicking File | Save.
8. Now in your build folder, go to the following location: ...\Game\Levels\My_Level. Here you will notice a new folder called Layers. Inside that folder, you will see ActionBubble_01.lyr (we will recap this folder layout at the end of the recipe).

This layer shall be the layer that your other developers work on. For them to be able to do so, you must first commit My_Level.cry and the Layers folder to your repository (it is easiest to commit the entire level folder). After doing so, have another developer make a change to that layer by moving Box1 to another location. Then have them save the map and commit only ActionBubble_01.lyr to the repository. Once you have retrieved the updated layer from the repository and re-opened My_Level.cry in the Editor, you will notice that Box1 has moved.

How it works...

External layers are the key to this whole process. Once a .cry file has been saved to reference an external layer, it accesses the data inside those layers upon loading the level in Sandbox.

It is good practice to assign a map owner who takes care of the .cry file. As this is the master file, only one person should be in charge of maintaining it and creating new layers when necessary.

There's more...

Here is a list of limitations on what external layers can hold.

External layer limitations

Even though any entity/object you place in your level can be moved to an external layer, it is important to note that some items cannot be placed inside these layers. Here is a list of the common items that are owned solely by the .cry file:

Terrain
Heightmap
Unit Size
Max Terrain Height
Holes
Textures
Vegetation
Environment Settings (unless forced through Game Logic)
Ocean Height
Time of Day Settings (unless forced through Game Logic)
Baked AI Markup (the owner of the .cry file must regenerate AI if new markup is created on external layers)
Minimap Markers
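As a quick recap of the on-disk layout this workflow produces (a sketch; the path up to Game depends on where your build lives):

Game\
  Levels\
    My_Level\
      My_Level.cry            (master level file, maintained by the map owner)
      Layers\
        ActionBubble_01.lyr   (external layer, safe for parallel edits)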

Mastering the Blender 2.5 Basics

Packt
17 Jun 2011
13 min read
Blender 2.5 Character Animation Cookbook
50 great recipes for giving soul to your characters by building high-quality rigs

Adjusting and tracking the timing

Timing, by itself, is a subject that goes well beyond the scope of a simple recipe. It is, in fact, the main subject of a number of animation-related books. Strictly speaking, timing in animation is how long it takes (in frames or seconds) to move between two Extreme poses. You can have your character in great poses, but if the timing between them is not right, your shot may be ruined. Perhaps it is a difficult thing to master because there are no definite rules for it: everyone is born with a particular sense of timing. Despite that, it's enormously important to look at video and real-life references to understand the timing of different actions.

Imagine a tennis ball falling to the ground and bouncing. Think of the time between its first and second contact with the ground. Now replace it with a bowling ball and think of the time required for this bounce. You know, from your life experience, that the tennis ball bounces more slowly than the bowling ball: the timing of these two balls is different. The timing here (along with spacing, the subject of the next recipe) is the main factor that makes us perceive the different nature and weight of each ball. The "rules" of timing can also be broken for comedic effect: something that purposely moves faster or slower than usual may get a laugh from the audience.

We're going to see how different timings can change how we perceive a shot with the same poses.

How to do it...

Open the file 007-Timing.blend (go to Support to get the code). It has our character Otto in three poses, making him look from one side to the other.

Press Alt + A to play the animation. You may think the timing is acceptable for this head turn, but this method of checking the timing is not ideal. When you tell Blender to play the animation through Alt + A, you're relying on your computer's power to process all the information in your scene in real time. You'd probably end up seeing something slower than what you'll actually get after rendering the frames. When playing the animation inside the 3D View, you can see the actual playback frame rate in the top left corner of the window. If it's lower than the scene frame rate (in this case, 24 fps), the rendered animation will be faster than what you're seeing.

When adjusting the timing, we must be sure of the exact result of every keyframe we set. Even a one-frame change can make a huge impact on the scene, but rendering a complex scene just to test the timing is out of the question, because it takes too long to see the results. We need a quick way to check the timing precisely. Fortunately, Blender allows us to make a quick "render" of our 3D View with only the OpenGL information. This is also called a "playblast", and it is exactly what we need. Take a look at the header of our 3D View and find the button with a clapperboard icon, as seen in the next screenshot.

OpenGL stands for Open Graphics Library, and is a free cross-platform specification and API for writing 2D and 3D computer graphics. Not only are the objects inside Blender's 3D View drawn using this library, but the user interface, with all its buttons, icons, and text, is drawn on the screen with OpenGL too.
From OpenGL version 2.0 onward it's possible to use GLSL, a high-level shading language heavily used to create games and supported by Blender to enhance the way objects are displayed on the screen in real time. From Blender 2.5, GLSL is the default real-time rendering method when the user selects the Textured viewport shading mode, but that option has to be supported by your graphics card.

Click on that clapperboard button, and the active 3D View will be used for a quick OpenGL render of your scene. This preview rendering shares the Render panel settings in the Properties window, so the picture size, frame rate, output folder, file format, duration, and stamp will be the same. If you can't see the button in your 3D View header (it is available only in the header), it may be a lack of space; you can click with the middle button (or the scroll wheel) of your mouse over the header and drag it to the side to find it.

After the OpenGL rendering is complete, press Esc to go back to your scene and press Ctrl + F11 to preview the animation at the correct frame rate to check the timing. Starting with the Blender 2.5 series, there's no built-in player in the program, so you have to specify one in the User Preferences window (Ctrl + Alt + U), on the File tab. This player can even be a previous version of Blender from the 2.4 series, or any player you wish, such as DJV or MPlayer. With any of these options, you must tell Blender the file path where the player is installed.

Now that you can watch the animation at the correct frame rate, you'll notice that the head turns quite fast, since the turn takes only five frames to complete. This fast timing makes our action look as if it happens after the character hears an abrupt, loud noise coming from his left, so he has to turn his head quickly to see what happened.

Let's suppose our character is watching a tennis match in Wimbledon, and his seat is in line with the net, at the middle of the court (yep, lucky guy). Watching the ball from the serve until it reaches the other side of the court should take longer than what we have just set up, so let's adjust our keyframes now. In the DopeSheet window, leave the first keyframe at frame 1. Select the last column of keyframes by holding Alt and right-clicking on any keyframe set at frame 5. Move (G) the column to frame 15 (hold Ctrl to snap to the frames), so our action takes three times longer than the original (see the quick frames-to-seconds check below).

Another way of selecting a column of keyframes is through the DopeSheet Summary option on the window header. It creates an extra line above all channels. If you select the diamond on this line, all keyframes in that column will be selected. You can even collapse all channels and use only the DopeSheet Summary to move the keys along the timeline to make timing adjustments easily.

Now for the Breakdown, the intermediate position between two Extreme poses. It doesn't have to be at the exact middle of our action. Actually, it's good to avoid symmetry not only in our models and poses, but in our motions too. Move (G) the Breakdown to frame 6, and you'll have something similar to the next screenshot.

Now you can make another OpenGL render to preview the action with the new timing. You can disable the layer where the armature is located (the second) by holding Shift and clicking on it, so you don't have the bones in the preview.
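Before moving on, a quick sanity check on the numbers we just used. Converting frames to seconds at the scene's 24 fps is simple division:

\[ t = \frac{\text{frames}}{\text{fps}}, \qquad \frac{5}{24} \approx 0.21\ \text{s}, \qquad \frac{15}{24} \approx 0.63\ \text{s} \]

So the adjusted head turn lasts a bit over half a second instead of roughly a fifth of a second, which is why it reads as tracking a ball rather than reacting to a sudden noise.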
Of course, this is far from a finished shot: it's a good idea to make the character blink during the head turn, add some moving holds, animate the eyeballs, add some facial expressions, and so on. This rough example only shows how drastically the timing can change the feel of an action. If you set the timing between the positions even higher, our character may seem to be watching something slower (someone on a bike, maybe?) moving in front of him.

How it works...

Along with good posing, timing is crucial to making our actions vivid, believable, and weighty. Timing is also very important in helping your audience understand what is happening in the scene, so it must be carefully adjusted. To get a precise view of how the timing is working in an action within Blender, it's best to use the OpenGL preview, since the usual Alt + A shortcut to preview the animation inside the 3D View can be misleading.

There's more...

Depending on the complexity of your scene, you may be able to achieve the correct frame rate within the 3D View with Alt + A. You can disable the visibility of irrelevant objects or some modifiers to help speed up the real-time processing, such as lowering (or disabling) the Subdivision Surface modifier and hiding the armature and background layers.

Spacing: favoring and easing poses

The previous recipe showed us how to adjust the timing of our character's actions, which is extremely important if our audience is not only to understand what is happening on the screen, but also to sense the weight and forces involved in the motion. Since timing is closely related to spacing, the two concepts are often confused. Timing in animation is the number of frames between two Extreme poses; spacing is how the animated subject moves and shows variations of speed across those frames. Actions with the same timing and different spacing are perceived differently by the audience, and these principles combined are responsible for the feeling of weight in our actions. We're going to see how spacing works and how we can create eases and favoring poses to enhance movements.

How to do it...

Open the file 007-Spacing.blend. It has our character Otto turning his head from right to left, just as in the timing recipe. We don't have a Breakdown position defined yet, and this action has its timing set to 15 frames.

First, let's understand the most elementary type of spacing: linear, or even, spacing. This is when the calculated intermediate positions between two keyframes are the same distance apart, without any kind of acceleration. This isn't something we're used to seeing in nature, thus it's not the default interpolation mode in Blender. To use it, select the desired keyframes in a DopeSheet or Graph Editor window, press Shift + T, and choose the Linear interpolation mode. The curves between the keyframes will turn into straight lines, as you can see in the next screenshot showing the channels for the Head bone.

If you preview the animation with Alt + A, you'll see that the movement is very mechanical and unappealing, something we don't see in nature. That's why this interpolation mode isn't the default one. Movements in nature all have some variation in speed, going from a resting state, accelerating to a peak velocity, then slowing down into another resting state. These variations of speed are called eases, and they are represented by curved lines in the Graph Editor. When there is an increase in speed, we have an ease out.
When the movement slows down to a resting state, we have an ease in. This variation in speed is the default interpolation method in Blender, and you can enable it by selecting the desired keyframes in a DopeSheet or Graph Editor window, pressing Shift + T, and selecting the Bezier interpolation mode. The next screenshot shows the same keyframes with easing.

When we adjust the curve handles in the Graph Editor, we're actually defining the eases of that movement. When you insert keyframes, Blender automatically creates both eases, out and in, with the same speeds. Since not all movements have the same variation of speed at their beginning and end, it's a good idea to change the handles in the Graph Editor. This difference of speed between the start and end keyframes is called favoring. When the spacing between two poses has different eases, we say the movement "favors" one of the poses, namely the one with the bigger ease. In the next screenshot, the curves for the Head bone were adjusted so the movement favors the second pose. Note that there is a softer curve near the second pose, while the first has sharper lines near it. This will make the head leave the first pose very quickly and settle slowly into the second one.

In order to create sharp angles with the handles in the Graph Editor window, you need to select the desired curve channels, press V, and choose the Free handle type.

Open the video file 007-Spacing.mov in a video player that allows navigating through the frames (such as DJV) to watch the three actions at the same time. Although the timing of the action is unchanged, you can clearly see how the interpolation changes the motion. In the next screenshot, you can see that at frame 8, the Favoring version has the face closer to the second pose:

Now that you understand what spacing is, know the difference between the interpolation types, and can use eases to favor poses, let's add a Breakdown position. This action is pretty boring, since the head turn happens without any arcs. It's a good idea to tilt the head down a little during the turn, making an imaginary arc with the eyes. Especially during quick head turns, it's a good idea to make your character blink during the turn. Unless your character is following something with his eyes, such as in the tennis court in our example, a quick blink helps to make a "scene cut" in our minds from one subject to the other.

In the DopeSheet window, in the Action Editor, select the Favoring action. Go to frame 6, where the character looks at the camera. Select and rotate (R) the Head and Neck bones forward on their local X axis, as seen in the next screenshot, and insert a keyframe (I) for their rotation.

Since Blender automatically creates symmetrical eases at each new keyframe, it's time to adjust our spacing for the Head and Neck bones in the Graph Editor window. If you play the animation with Alt + A, you'll notice that the motion looks very odd because of that automatic ease. The F-Curves on the X axis of each bone for this motion are not soft. Ideally, since this is a Breakdown position, the curves between it and its surrounding Extreme poses should be smooth, regardless of the favoring. Select the curve handles on frames 1 and 6, and move (G) them in order to soften the curve peak at that Breakdown position. The next screenshot shows the curves before and after editing.
Notice how the curve peaks at the Breakdown in the middle get smoother after editing.

Now the action looks more natural, with a nice Breakdown and favoring created using the F-Curves. The file 007-Spacing-complete.blend has this finished example for your reference; you can play the animation with Alt + A to see the results.

How it works...

By understanding the principle of spacing, you can create eases and favoring in order to create snappy and interesting motions. Just like visible shapes, the pace of motion in nature is often asymmetrical. To make your motions not only more interesting but also more believable, with accents that reinforce the purpose behind the movements, you should master spacing. Be sure to check the interpolation curves in your animations: interesting movements normally have different eases between two Extreme positions.
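As a compact way to state the difference between even and eased spacing: an interpolated value between two keys \(p_0\) and \(p_1\) can be written as \(p(t) = p_0 + (p_1 - p_0)\,s(t)\), where \(s\) is the spacing function over the normalized time \(t\). Linear spacing uses \(s(t) = t\), while an ease starts and ends with zero slope; the classic smoothstep curve below is one illustration of that shape, not Blender's exact Bezier handle math:

\[ s_{\text{linear}}(t) = t, \qquad s_{\text{ease}}(t) = 3t^2 - 2t^3, \qquad t \in [0,1] \]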

Working with Away3D Cameras

Packt
06 Jun 2011
10 min read
Away3D 3.6 Cookbook
Over 80 practical recipes for creating stunning graphics and effects with the fascinating Away3D engine

Cameras are an absolutely essential part of the 3D world of computer graphics. In fact, no real-time 3D engine can exist without a camera object. Cameras are our eyes into the 3D world. Away3D has a decent set of cameras which, at the time of writing, consists of the Camera3D, TargetCamera3D, HoverCamera3D, and SpringCam classes. Although they share similar base features, each one has some additional functionality that makes it different.

Creating an FPS controller

There are different scenarios where you want first-person control of the camera, such as in FPS video games. Basically, we want to move and rotate the camera in any horizontal direction, with the rotation defined by the combination of the x and y movement of the user's mouse and the movement driven by keyboard input. In this recipe, you will learn how to develop such a class from scratch, which you can then reuse in subsequent projects where FPS behavior is needed.

Getting ready

Set up a basic Away3D scene extending AwayTemplate and give it the name FPSDemo. Then create one more class, which should extend Sprite, and give it the name FPSController.

How to do it...

The FPSController class encapsulates all the functionality of the FPS camera. It receives a reference to the scene camera and applies the FPS behavior "behind the curtain". The FPSDemo class is a basic Away3D scene setup in which we are going to test our FPSController:

FPSController.as

package utils
{
  import away3d.core.base.Object3D;

  import flash.display.Sprite;
  import flash.display.Stage;
  import flash.events.KeyboardEvent;
  import flash.events.MouseEvent;
  import flash.geom.Vector3D;
  import flash.ui.Keyboard;

  public class FPSController extends Sprite
  {
    private var _stg:Stage;
    private var _camera:Object3D;
    private var _moveLeft:Boolean=false;
    private var _moveRight:Boolean=false;
    private var _moveForward:Boolean=false;
    private var _moveBack:Boolean=false;
    private var _controllerHeigh:Number;
    private var _camSpeed:Number=0;
    private static const CAM_ACCEL:Number=2;
    private var _camSideSpeed:Number=0;
    private static const CAM_SIDE_ACCEL:Number=2;
    private var _forwardLook:Vector3D=new Vector3D();
    private var _sideLook:Vector3D=new Vector3D();
    private var _camTarget:Vector3D=new Vector3D();
    private var _oldPan:Number=0;
    private var _oldTilt:Number=0;
    private var _pan:Number=0;
    private var _tilt:Number=0;
    private var _oldMouseX:Number=0;
    private var _oldMouseY:Number=0;
    private var _canMove:Boolean=false;
    private var _gravity:Number;
    private var _jumpSpeed:Number=0;
    private var _jumpStep:Number;
    private var _defaultGrav:Number;
    private static const GRAVACCEL:Number=1.2;
    private static const MAX_JUMP:Number=100;
    private static const FRICTION_FACTOR:Number=0.75;
    private static const DEGStoRADs:Number=Math.PI/180;

    public function FPSController(camera:Object3D,stg:Stage,
      height:Number=20,gravity:Number=5,jumpStep:Number=5)
    {
      _camera=camera;
      _stg=stg;
      _controllerHeigh=height;
      _gravity=gravity;
      _defaultGrav=gravity;
      _jumpStep=jumpStep;
      init();
    }

    private function init():void{
      _camera.y=_controllerHeigh;
      addListeners();
    }

    private function addListeners():void{
      _stg.addEventListener(MouseEvent.MOUSE_DOWN,onMouseDown,false,0,true);
      _stg.addEventListener(MouseEvent.MOUSE_UP,onMouseUp,false,0,true);
      _stg.addEventListener(KeyboardEvent.KEY_DOWN,onKeyDown,false,0,true);
      _stg.addEventListener(KeyboardEvent.KEY_UP,onKeyUp,false,0,true);
    }

    private function onMouseDown(e:MouseEvent):void{
      _oldPan=_pan;
      _oldTilt=_tilt;
      _oldMouseX=_stg.mouseX+400;
      _oldMouseY=_stg.mouseY-300;
      _canMove=true;
    }

    private function onMouseUp(e:MouseEvent):void{
      _canMove=false;
    }
    private function onKeyDown(e:KeyboardEvent):void{
      switch(e.keyCode)
      {
        case 65:_moveLeft=true;break;
        case 68:_moveRight=true;break;
        case 87:_moveForward=true;break;
        case 83:_moveBack=true;break;
        case Keyboard.SPACE:
          if(_camera.y<MAX_JUMP+_controllerHeigh){
            _jumpSpeed=_jumpStep;
          }else{
            _jumpSpeed=0;
          }
          break;
      }
    }

    private function onKeyUp(e:KeyboardEvent):void{
      switch(e.keyCode)
      {
        case 65:_moveLeft=false;break;
        case 68:_moveRight=false;break;
        case 87:_moveForward=false;break;
        case 83:_moveBack=false;break;
        case Keyboard.SPACE:_jumpSpeed=0;break;
      }
    }

    public function walk():void{
      _camSpeed*=FRICTION_FACTOR;
      _camSideSpeed*=FRICTION_FACTOR;
      if(_moveForward){_camSpeed+=CAM_ACCEL;}
      if(_moveBack){_camSpeed-=CAM_ACCEL;}
      if(_moveLeft){_camSideSpeed-=CAM_SIDE_ACCEL;}
      if(_moveRight){_camSideSpeed+=CAM_SIDE_ACCEL;}
      if(_camSpeed<2 && _camSpeed>-2){
        _camSpeed=0;
      }
      if(_camSideSpeed<0.05 && _camSideSpeed>-0.05){
        _camSideSpeed=0;
      }
      _forwardLook=_camera.transform.deltaTransformVector(new Vector3D(0,0,1));
      _forwardLook.normalize();
      _camera.x+=_forwardLook.x*_camSpeed;
      _camera.z+=_forwardLook.z*_camSpeed;
      _sideLook=_camera.transform.deltaTransformVector(new Vector3D(1,0,0));
      _sideLook.normalize();
      _camera.x+=_sideLook.x*_camSideSpeed;
      _camera.z+=_sideLook.z*_camSideSpeed;
      _camera.y+=_jumpSpeed;
      if(_canMove){
        _pan=0.3*(_stg.mouseX+400-_oldMouseX)+_oldPan;
        _tilt=-0.3*(_stg.mouseY-300-_oldMouseY)+_oldTilt;
        if(_tilt>70){
          _tilt=70;
        }
        if(_tilt<-70){
          _tilt=-70;
        }
      }
      var panRADs:Number=_pan*DEGStoRADs;
      var tiltRADs:Number=_tilt*DEGStoRADs;
      _camTarget.x=100*Math.sin(panRADs)*Math.cos(tiltRADs)+_camera.x;
      _camTarget.z=100*Math.cos(panRADs)*Math.cos(tiltRADs)+_camera.z;
      _camTarget.y=100*Math.sin(tiltRADs)+_camera.y;
      if(_camera.y>_controllerHeigh){
        _gravity*=GRAVACCEL;
        _camera.y-=_gravity;
      }
      if(_camera.y<=_controllerHeigh){
        _camera.y=_controllerHeigh;
        _gravity=_defaultGrav;
      }
      _camera.lookAt(_camTarget);
    }
  }
}

Now let's put it to work in the main application:

FPSDemo.as

package
{
  import away3d.core.base.Object3D;
  import away3d.core.utils.Cast;
  import away3d.loaders.Max3DS;
  import away3d.materials.BitmapMaterial;

  import flash.events.Event;
  import flash.geom.Vector3D;

  import utils.FPSController;

  public class FPSDemo extends AwayTemplate
  {
    [Embed(source="assets/buildings/CityScape.3ds",
      mimeType="application/octet-stream")]
    private var City:Class;

    [Embed(source="assets/buildings/CityScape.png")]
    private var CityTexture:Class;

    private var _cityModel:Object3D;
    private var _fpsWalker:FPSController;

    public function FPSDemo()
    {
      super();
    }

    override protected function initGeometry():void{
      parse3ds();
    }

    private function parse3ds():void{
      var max3ds:Max3DS=new Max3DS();
      _cityModel=max3ds.parseGeometry(City);
      _view.scene.addChild(_cityModel);
      _cityModel.materialLibrary.getMaterial("bakedAll [Plane0").material=
        new BitmapMaterial(Cast.bitmap(new CityTexture()));
      _cityModel.scale(3);
      _cityModel.x=0;
      _cityModel.y=0;
      _cityModel.z=700;
      _cityModel.rotate(Vector3D.X_AXIS,-90);
      _cam.z=-1000;
      _fpsWalker=new FPSController(_cam,stage,20,12,250);
    }

    override protected function onEnterFrame(e:Event):void{
      super.onEnterFrame(e);
      _fpsWalker.walk();
    }
  }
}

How it works...

The FPSController class looks a tad scary, but only at first glance. First we pass the following arguments into the constructor:

camera: A Camera3D reference (here Camera3D, by the way, is the most appropriate camera type for FPS).
stg: A reference to the Flash stage, because we are going to assign listeners to it from within the class.
height: The camera's distance from the ground. We imply here that the ground is at 0,0,0.
gravity: The gravity force for the jump.
jumpStep: The jump altitude.
Next we define listeners for the mouse UP and DOWN states, as well as events for registering input from the A, W, D, and S keyboard keys, so we can move the FPSController in four different directions.

In the onMouseDown() event handler, we store the pan, tilt, mouseX, and mouseY values at the moment the mouse is pressed in the _oldPan, _oldTilt, _oldMouseX, and _oldMouseY variables accordingly. That is a widely used technique. We need this trick in order to get a nice, continuous transformation of the camera each time we start moving the FPSController.

In the onKeyDown() and onKeyUp() methods, we switch the flags that tell the main movement code (which we will see shortly) which way the camera should be moved for the relevant key press. The only part that is different here is the block of code inside the Keyboard.SPACE case, which activates the jump behavior when the Space key is pressed. On Space, the camera's jump speed (which is zero by default) receives the _jumpStep value, and this is added to the camera's height, as long as the camera has not already reached the maximum jump altitude defined by MAX_JUMP.

Now it's the walk() function's turn. This method is meant to be called on each frame in the main class:

_camSpeed*=FRICTION_FACTOR;
_camSideSpeed*=FRICTION_FACTOR;

The two preceding lines slow down, or in other words apply friction to, the front and side movements. Without applying the friction, it would take a long time for the controller to stop completely after each movement, as the velocity decrease due to the easing is very slow.

Next we want to accelerate the movements in order to get a more realistic result. Here is the acceleration implementation for the four possible walk directions:

if(_moveForward){_camSpeed+=CAM_ACCEL;}
if(_moveBack){_camSpeed-=CAM_ACCEL;}
if(_moveLeft){_camSideSpeed-=CAM_SIDE_ACCEL;}
if(_moveRight){_camSideSpeed+=CAM_SIDE_ACCEL;}

The problem is that because we slow down the movement by continuously scaling the current speed when applying the drag, the speed value never actually reaches zero. So we define a range of values close to zero, and reset the side and front speeds to 0 as soon as they enter this range:

if(_camSpeed<2 && _camSpeed>-2){
  _camSpeed=0;
}
if(_camSideSpeed<0.05 && _camSideSpeed>-0.05){
  _camSideSpeed=0;
}

Now we need the ability to move the camera in the direction it is looking. To achieve this, we have to transform the forward vector, which represents the forward look of the camera, into the camera space denoted by the _camera transformation matrix. We use the deltaTransformVector() method because we only need the rotation portion of the matrix, dropping the translation part:

_forwardLook=_camera.transform.deltaTransformVector(new Vector3D(0,0,1));
_forwardLook.normalize();
_camera.x+=_forwardLook.x*_camSpeed;
_camera.z+=_forwardLook.z*_camSpeed;

Here we do pretty much the same as before, but for the sideways movement, transforming the side vector by the camera's matrix:

_sideLook=_camera.transform.deltaTransformVector(new Vector3D(1,0,0));
_sideLook.normalize();
_camera.x+=_sideLook.x*_camSideSpeed;
_camera.z+=_sideLook.z*_camSideSpeed;

We also have to acquire the base values for the rotations from the mouse movement.
_pan handles the horizontal (x-axis) rotation and _tilt the vertical (y-axis) rotation:

if(_canMove){
  _pan=0.3*(_stg.mouseX+400-_oldMouseX)+_oldPan;
  _tilt=-0.3*(_stg.mouseY-300-_oldMouseY)+_oldTilt;
  if(_tilt>70){
    _tilt=70;
  }
  if(_tilt<-70){
    _tilt=-70;
  }
}

We also limit the tilt rotation so that the controller cannot rotate too far down into the ground or, conversely, too far up into the zenith. Notice that this entire block is wrapped in the _canMove Boolean flag, which is set to true only while the mouse button is down. We do this to prevent rotation when the user isn't interacting with the controller.

Finally, we need to incorporate the camera's local rotations into the movement process, so that you can rotate the camera view while moving:

var panRADs:Number=_pan*DEGStoRADs;
var tiltRADs:Number=_tilt*DEGStoRADs;
_camTarget.x=100*Math.sin(panRADs)*Math.cos(tiltRADs)+_camera.x;
_camTarget.z=100*Math.cos(panRADs)*Math.cos(tiltRADs)+_camera.z;
_camTarget.y=100*Math.sin(tiltRADs)+_camera.y;

The last thing is applying the gravity force each time the controller jumps up:

if(_camera.y>_controllerHeigh){
  _gravity*=GRAVACCEL;
  _camera.y-=_gravity;
}
if(_camera.y<=_controllerHeigh){
  _camera.y=_controllerHeigh;
  _gravity=_defaultGrav;
}

Here we first check whether the camera's y-position is still greater than its resting height, which means the camera is currently in the "air". If so, we apply the gravity acceleration to gravity because, as we know, in real life a falling body constantly accelerates over time. In the second statement, we check whether the camera has reached its default height. If it has, we reset the camera to its default y-position and also reset the gravity property, as it has grown significantly from the acceleration applied during the last jump.

To test it in a real application, we instantiate the FPSController class. Here is how it is done in FPSDemo.as:

_fpsWalker=new FPSController(_cam,stage,20,12,250);

We pass it our scene's Camera3D instance and the rest of the parameters discussed previously. The last thing to do is to call the walk() method on each frame:

override protected function onEnterFrame(e:Event):void{
  super.onEnterFrame(e);
  _fpsWalker.walk();
}

Now you can start developing the Away3D version of Unreal Tournament!
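As a side note on the target computation above: the three _camTarget lines are simply a spherical-to-Cartesian conversion. With pan angle \(\varphi\), tilt angle \(\theta\), and the fixed look-at distance of 100 units used in the code, they evaluate:

\[
\begin{aligned}
x_{\text{target}} &= 100\sin\varphi\cos\theta + x_{\text{cam}}\\
z_{\text{target}} &= 100\cos\varphi\cos\theta + z_{\text{cam}}\\
y_{\text{target}} &= 100\sin\theta + y_{\text{cam}}
\end{aligned}
\]

So the target always sits on a sphere of radius 100 around the camera, and lookAt() orients the camera toward that point.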

Away3D: Detecting Collisions

Packt
02 Jun 2011
6 min read
Away3D 3.6 Cookbook
Over 80 practical recipes for creating stunning graphics and effects with the fascinating Away3D engine

Introduction

In this article, you are going to learn how to check for intersections (collisions) between 3D objects.

Detecting collisions between objects in Away3D

This recipe will teach you the fundamentals of collision detection between objects in 3D space. We are going to learn how to perform a few types of intersection tests. These tests can hardly be called collision detection in the physical sense, as we are not going to deal with any simulation of a collision reaction between two bodies. Instead, the goal of the recipe is to understand intersection tests from a mathematical point of view. Once you are familiar with intersection test techniques, the road to creating physical collision simulations is much shorter.

There are many types of intersection tests in mathematics. These include simple tests such as AABB (axially aligned bounding box) and Sphere - Sphere, and more complex ones such as Triangle - Triangle, Ray - Plane, Line - Plane, and more. Here, we will cover only those we can achieve using built-in Away3D functionality: AABB and AABS (axially aligned bounding sphere) intersections, as well as Ray-AABS and the more complex Ray-Triangle. The rest of the methods are outside the scope of this article, and you can learn how to apply them from various 3D math resources.

Getting ready

Set up an Away3D scene in a new file extending AwayTemplate. Give the class the name CollisionDemo.

How to do it...

In the following example, we perform an intersection test between two spheres based on their bounding box volumes. You can move one of the spheres along X and Y with the arrow keys onto the second sphere. When the objects overlap, the intersected (static) sphere glows red.

AABB test:

CollisionDemo.as

package
{
  import away3d.materials.ColorMaterial;
  import away3d.primitives.Sphere;

  import flash.events.Event;
  import flash.events.KeyboardEvent;
  import flash.filters.GlowFilter;
  import flash.geom.Vector3D;

  public class CollisionDemo extends AwayTemplate
  {
    private var _objA:Sphere;
    private var _objB:Sphere;
    private var _matA:ColorMaterial;
    private var _matB:ColorMaterial;
    private var _gFilter:GlowFilter=new GlowFilter();

    public function CollisionDemo()
    {
      super();
      _cam.z=-500;
    }

    override protected function initMaterials():void{
      _matA=new ColorMaterial(0xFF1255);
      _matB=new ColorMaterial(0x00FF11);
    }

    override protected function initGeometry():void{
      _objA=new Sphere({radius:30,material:_matA});
      _objB=new Sphere({radius:30,material:_matB});
      _view.scene.addChild(_objA);
      _view.scene.addChild(_objB);
      _objB.ownCanvas=true;
      _objA.debugbb=true;
      _objB.debugbb=true;
      _objA.transform.position=new Vector3D(-80,0,400);
      _objB.transform.position=new Vector3D(80,0,400);
    }

    override protected function initListeners():void{
      super.initListeners();
      stage.addEventListener(KeyboardEvent.KEY_DOWN,onKeyDown);
    }

    override protected function onEnterFrame(e:Event):void{
      super.onEnterFrame(e);
      if(AABBTest()){
        _objB.filters=[_gFilter];
      }else{
        _objB.filters=[];
      }
    }

    private function AABBTest():Boolean{
      if(_objA.parentMinX>_objB.parentMaxX||_objB.parentMinX>_objA.parentMaxX){
        return false;
      }
      if(_objA.parentMinY>_objB.parentMaxY||_objB.parentMinY>_objA.parentMaxY){
        return false;
      }
      if(_objA.parentMinZ>_objB.parentMaxZ||_objB.parentMinZ>_objA.parentMaxZ){
        return false;
      }
      return true;
    }

    private function onKeyDown(e:KeyboardEvent):void{
      switch(e.keyCode){
        case 38:_objA.moveUp(5);break;
        case 40:_objA.moveDown(5);break;
        case 37:_objA.moveLeft(5);break;
        case 39:_objA.moveRight(5);break;
        case 65:_objA.rotationZ-=3;break;
        case 83:_objA.rotationZ+=3;break;
        default:
      }
    }
  }
}

In this screenshot, the green sphere's bounding box has a red glow while it is being intersected by the red sphere's bounding box:

How it works...

Testing for intersection between two AABBs is really simple. First, we need to acquire the boundaries of each object on each axis. The box boundaries on each axis of any Object3D are defined by a minimum and a maximum value for that axis. So let's look at the AABBTest() method. The axis boundaries are defined by parentMin and parentMax for each axis, which are accessible on any object extending Object3D. You can see that Object3D also has minX, minY, minZ and maxX, maxY, maxZ. These properties define the bounding box boundaries too, but in object space, and therefore they aren't helpful in AABB tests between two objects.

So, for a given bounding box to intersect the bounding box of another object, three conditions have to be met for each of them:

The minimum X coordinate of each object must be less than the maximum X of the other.
The minimum Y coordinate of each object must be less than the maximum Y of the other.
The minimum Z coordinate of each object must be less than the maximum Z of the other.

If any of the conditions is not met for either of the two AABBs, there is no intersection. The preceding algorithm is expressed in the AABBTest() function:

private function AABBTest():Boolean{
  if(_objA.parentMinX>_objB.parentMaxX||_objB.parentMinX>_objA.parentMaxX){
    return false;
  }
  if(_objA.parentMinY>_objB.parentMaxY||_objB.parentMinY>_objA.parentMaxY){
    return false;
  }
  if(_objA.parentMinZ>_objB.parentMaxZ||_objB.parentMinZ>_objA.parentMaxZ){
    return false;
  }
  return true;
}

As you can see, if all of the conditions listed previously are met, execution skips all the return false blocks and the function returns true, which means an intersection has occurred.

There's more...

Now let's take a look at the rest of the collision detection methods: AABS-AABS, Ray-AABS, and Ray-Triangle.

AABS test

The intersection test between two bounding spheres is even simpler to perform than the AABB one. The algorithm works as follows: if the distance between the centers of two spheres is less than the sum of their radii, the objects intersect. Piece of cake, isn't it? Let's implement it in code.

The AABS collision algorithm gives us the best performance. While there are many other, even more sophisticated, approaches, try to use this test if you are not after extreme precision. (Most casual games can live with this approximation.)

First, let's switch the debugging mode of _objA and _objB to bounding spheres. In the application we just built, go to the initGeometry() function and change:

_objA.debugbb=true;
_objB.debugbb=true;

To:

_objA.debugbs=true;
_objB.debugbs=true;

Next, we add a function to the class which implements the algorithm we just described:

private function AABSTest():Boolean{
  var dist:Number=Vector3D.distance(_objA.position,_objB.position);
  if(dist<=(_objA.radius+_objB.radius)){
    return true;
  }
  return false;
}

Finally, we add the call to the method inside onEnterFrame():

if(AABSTest()){
  _objB.filters=[_gFilter];
}else{
  _objB.filters=[];
}

Each time AABSTest() returns true, the intersected sphere is highlighted with a red glow:
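The Ray-AABS case mentioned above boils down to plain vector math: find the closest point on the ray to the sphere's center and compare that distance with the radius. Below is a minimal sketch of the idea using only flash.geom.Vector3D; the function name rayIntersectsSphere and its parameters are our own illustration (it assumes dir is normalized), not a built-in Away3D API:

private function rayIntersectsSphere(origin:Vector3D, dir:Vector3D,
  center:Vector3D, radius:Number):Boolean{
  // Vector from the ray origin to the sphere center
  var toCenter:Vector3D=center.subtract(origin);
  // Length of the projection of toCenter onto the (normalized) ray direction
  var proj:Number=toCenter.dotProduct(dir);
  if(proj<0){
    return false; // the sphere lies behind the ray origin
  }
  // Squared distance from the sphere center to the closest point on the ray
  var distSq:Number=toCenter.lengthSquared-proj*proj;
  return distSq<=radius*radius;
}

Working with squared distances avoids a square root per test, which matters when you cast rays every frame.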

Importing 3D Formats into Away3D

Packt
31 May 2011
5 min read
Away3D 3.6 Cookbook
Over 80 practical recipes for creating stunning graphics and effects with the fascinating Away3D engine

Introduction

The Away3D library contains a large set of 3D geometric primitives such as Cube, Sphere, Plane, and many more. Nevertheless, when we think of developing breathtaking and cutting-edge 3D applications, there is really no way to get it done using only basic primitives. Therefore, we need to use external 3D modeling programs such as Autodesk 3DsMax and Maya, or Blender, to create complex models. The power of Away3D is that it allows us to import a wide range of 3D formats for static meshes as well as for animations. Besides the models, a no less important part of the 3D world is textures. They are critical in making the model look good and in shaping the ultimate user experience. In this article, you will learn essential techniques for importing different 3D formats into Away3D.

Exporting models from 3DsMax/Maya/Blender

You can export the following modeling formats from 3D programs: Obj (Wavefront), DAE (Collada), 3ds, Ase (ASCII), MD2, and Kmz. 3DsMax and Maya can natively export Obj, DAE, 3ds, and ASCII.

One of the favorite 3D formats of Away3D developers is DAE (Collada), although it is not the best in terms of performance, because the file is basically XML, which becomes slow to parse when it contains a lot of data. The problem is that although 3DsMax and Maya have a built-in Collada exporter, the models it outputs do not work in Away3D. The workaround is to use open source Collada exporters such as ColladaMax/ColladaMaya or OpenCollada. The only difference between these two is the software versions they support.

Getting ready

Go to http://opencollada.org/download.html and download the OpenCollada plugin for the appropriate software (3DsMax or Maya).
Go to http://sourceforge.net/projects/colladamaya/files/ and download the ColladaMax or ColladaMaya plugin.
Follow the instructions in the plugin's installation dialog. The plugin will be installed automatically in the 3DsMax/Maya plugins directory (assuming the software was installed in the default path).

How to do it...

3DsMax: Here is how to export Collada using the OpenCollada plugin in 3DsMax 2011. In order to export Collada (DAE) from 3DsMax, you should do the following:

1. In 3DsMax, go to File and click on Export or Export Selected (with the target model selected).
2. Select the OpenCOLLADA(*.DAE) format from the formats drop-down list.

ColladaMax export settings (currently 3DsMax 2009 and lower): ColladaMax export settings are almost the same as those of OpenCollada. The only difference you can see in the exporting interface is the lack of the Copy Images and Export user defined properties checkboxes.

Select the checkboxes as shown in the previous screenshot:

Relative paths: Makes sure the texture paths are relative.
Normals: Exports the object's normals.
Copy Images: Optional. If we select this option, the exporter outputs a folder with the related textures into the same directory as the exported object.
Triangulate: In case some parts of the mesh consist of polygons with more than three sides, they get triangulated.
Animation settings: Away3D supports bone animations from external assets. If you have set up a bone animation and wish to export it, check Sample animation and set the begin and end frames for the animation span that you want to export from the 3DsMax animation timeline.
Maya: For showcase purposes, you can download a 30-day trial version of Autodesk Maya 2011. The installation process in Maya is slightly different:

1. Open Maya.
2. Go to the top menu bar and select Window. In the drop-down list, select Settings/Preferences, then, in the next drop-down list, select Plug-in Manager. Now you should see the Plug-in Manager interface.
3. Click on the Browse button and navigate to the directory where you extracted the OpenCollada ZIP archive. Select the COLLADAMaya.mll file and open it. You should now see the OpenCollada plugin under the Other Registered Plugins category. Check the AutoLoad checkbox if you wish the plugin to be loaded automatically the next time you start the program.
4. After your model is ready for export, click on File | Export All or Export Selected. The export settings for ColladaMaya are the same as for 3DsMax.

How it works...

A Collada file is just another XML file, but with a different extension (.dae). When exporting a model in the Collada format, the exporter writes into the XML node tree all the essential data describing the model's structure, as well as animation data when you export bone-based animated models.

When deploying your DAE models to the web hosting directory, don't forget to change the .DAE extension to .XML. Forgetting to do so will result in the file failing to load, because the .DAE extension is ignored by most servers by default.

There's more...

Besides Collada, you can also export OBJ, 3Ds, and ASE. Fortunately, for exporting these formats you don't need any third-party plugins, only those already included in the software.

Free programs such as Blender also serve as an alternative to expensive commercial software such as Maya or 3DsMax. Blender comes with an already built-in Collada exporter. Actually, it has two such exporters; at the time of this writing, these are 1.3 and 1.4. You should use 1.4, as 1.3 seems to output corrupted files that are not parsed in Away3D. The export process looks exactly like the one for 3DsMax:

1. Select your model.
2. Go to File, then Export. In the drop-down list of different formats, select Collada 1.4. The following interface opens.
3. Select Triangles, Only Export Selection (if you wish to export only the selected object), and Sample Animation.
4. Set the export destination path and click on Export and close. You are done.
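On the Away3D side, consuming the exported DAE follows the same parser pattern shown earlier for Max3DS in the FPS recipe. The following is a minimal sketch with assumed names (the ColladaDemo class and the MyModel embedded asset are our own illustration); verify the Collada class path and the parseGeometry call against your Away3D 3.6 build before relying on it:

package
{
  import away3d.containers.ObjectContainer3D;
  import away3d.loaders.Collada;

  public class ColladaDemo extends AwayTemplate
  {
    // Hypothetical embedded model; remember to rename .dae to .xml
    // before deploying, as noted above
    [Embed(source="assets/models/MyModel.xml",
      mimeType="application/octet-stream")]
    private var MyModel:Class;

    private var _model:ObjectContainer3D;

    override protected function initGeometry():void{
      // Same parser pattern as the Max3DS example in the FPS recipe
      var collada:Collada=new Collada();
      _model=collada.parseGeometry(MyModel) as ObjectContainer3D;
      _view.scene.addChild(_model);
    }
  }
}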