
How-To Tutorials - Game Development


OpenSceneGraph: advanced scene graph components

Packt
22 Dec 2010
12 min read
Creating billboards in a scene

In the 3D world, a billboard is a 2D image that always faces a designated direction. Applications can use billboard techniques to create many kinds of special effects, such as explosions, flares, sky, clouds, and trees. In fact, any object can be treated as a billboard with itself cached as the texture when viewed from a distance. This makes billboarding one of the most popular techniques in computer games and real-time visual simulation programs.

The osg::Billboard class is used to represent a list of billboard objects in a 3D scene. It is derived from osg::Geode, and can orient all of its children (osg::Drawable objects) to face the viewer's viewpoint. Its most important method, setMode(), determines the rotation behavior and must be given one of the following enumerations as its argument:

POINT_ROT_EYE: all drawables are rotated about the viewer position, with the object-coordinate Z axis constrained to the window-coordinate Y axis.
POINT_ROT_WORLD: drawables are rotated about the viewer directly from their original orientation to the current eye direction in world space.
AXIAL_ROT: drawables are rotated about an axis specified by setAxis().

Every drawable in the osg::Billboard node should have a pivot point position, which is specified via the overloaded addDrawable() method, for example:

billboard->addDrawable( child, osg::Vec3(1.0f, 0.0f, 0.0f) );

All drawables also need a unified initial front-face orientation, which is used for computing rotation values. The initial orientation is set by the setNormal() method, and each newly added drawable must ensure that its front face points in the same direction as this normal value; otherwise the billboard results may be incorrect.
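Before the step-by-step example, here is a minimal sketch of the AXIAL_ROT mode described above. It is my own illustration, not part of the book's listing, and it assumes the same headers as the example that follows: a billboard that only spins around the world Z axis, the way a textured tree impostor typically would.

osg::ref_ptr<osg::Billboard> tree = new osg::Billboard;
tree->setMode( osg::Billboard::AXIAL_ROT );
tree->setAxis( osg::Vec3(0.0f, 0.0f, 1.0f) );    // rotate only about the +Z axis
tree->setNormal( osg::Vec3(0.0f, -1.0f, 0.0f) ); // quads initially face -Y, the default normal
osg::ref_ptr<osg::Geometry> quad = osg::createTexturedQuadGeometry(
    osg::Vec3(-0.5f, 0.0f, 0.0f), osg::Vec3(1.0f, 0.0f, 0.0f), osg::Vec3(0.0f, 0.0f, 1.0f) );
tree->addDrawable( quad.get(), osg::Vec3(0.0f, 0.0f, 0.0f) ); // pivot at the origin

Because the rotation axis is fixed, the quad turns towards the viewer horizontally but never tilts backwards, which usually looks right for vegetation.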
Time for action – creating banners facing you

The prerequisite for implementing billboards in OSG is to create one or more quad geometries first. These quads are then managed by the osg::Billboard class, which forces all child drawables to automatically rotate around a specified axis, or to face the viewer. This is done by presetting a unified normal value and rotating each billboard according to the normal and the current rotation axis or viewing vector. We will create two banks of OSG banners, arranged in a V, to demonstrate the use of billboards in OSG. No matter where the viewer is or how the scene camera is manipulated, the front faces of the banners point at the viewer all the time. This feature can then be used to represent textured trees and particles in user applications.

Include the necessary headers:

#include <osg/Billboard>
#include <osg/Texture2D>
#include <osgDB/ReadFile>
#include <osgViewer/Viewer>

Create the quad geometry directly with the osg::createTexturedQuadGeometry() function. Every generated quad is of the same size and origin point, and uses the same image file. Note that the osg256.png file can be found in the data directory of your OSG installation path, but it requires the osgdb_png plugin for reading image data.

osg::Geometry* createQuad()
{
    osg::ref_ptr<osg::Texture2D> texture = new osg::Texture2D;
    osg::ref_ptr<osg::Image> image = osgDB::readImageFile( "Images/osg256.png" );
    texture->setImage( image.get() );

    osg::ref_ptr<osg::Geometry> quad = osg::createTexturedQuadGeometry(
        osg::Vec3(-0.5f, 0.0f, -0.5f),
        osg::Vec3(1.0f, 0.0f, 0.0f),
        osg::Vec3(0.0f, 0.0f, 1.0f) );

    osg::StateSet* ss = quad->getOrCreateStateSet();
    ss->setTextureAttributeAndModes( 0, texture.get() );
    return quad.release();
}

In the main entry, we first create the billboard node and set the mode to POINT_ROT_EYE. That is, the drawable will rotate to face the viewer and keep its Z axis upright in the rendering window. The default normal setting of the osg::Billboard class is the negative Y axis, so rotating it to the viewing vector will show the quads on the XOZ plane in the best appearance:

osg::ref_ptr<osg::Billboard> geode = new osg::Billboard;
geode->setMode( osg::Billboard::POINT_ROT_EYE );

Now let's create the banner quads and arrange them in a V formation:

osg::Geometry* quad = createQuad();
for ( unsigned int i=0; i<10; ++i )
{
    float id = (float)i;
    geode->addDrawable( quad, osg::Vec3(-2.5f+0.2f*id, id, 0.0f) );
    geode->addDrawable( quad, osg::Vec3( 2.5f-0.2f*id, id, 0.0f) );
}

All quad textures' backgrounds are automatically cleared because of the alpha test, which is performed internally in the osgdb_png plugin. That means we have to set the correct rendering order of all the drawables to ensure that the entire process works properly:

osg::StateSet* ss = geode->getOrCreateStateSet();
ss->setRenderingHint( osg::StateSet::TRANSPARENT_BIN );

It's time for us to start the viewer, as there are no important steps left to create and render billboards:

osgViewer::Viewer viewer;
viewer.setSceneData( geode.get() );
return viewer.run();

Try navigating in the scene: you will find that the billboard's children always rotate to face the viewer, but the images' Y directions never change (they point along the window's Y coordinate all along). Replace the mode POINT_ROT_EYE with POINT_ROT_WORLD and see if there is any difference.

What just happened?

This example shows the basic usage of billboards in an OSG scene graph, but it can still be improved. All the banner geometries here are created with the createQuad() function, which means that the same quad and the same texture are reallocated at least 20 times! Object sharing would certainly be an optimization here. Unfortunately, adding the same drawable object to osg::Billboard at different positions is not a clever way to do it, and could cause the node to work improperly. What we can do instead is create multiple quad geometries that share the same texture object. This greatly reduces the video card's texture memory occupancy and the rendering load. Another possible issue is that somebody may require loaded nodes, not only drawables, to be rendered as billboards. A node can consist of different kinds of child nodes, and is much richer than a basic shape or geometry mesh. OSG also provides the osg::AutoTransform class, which automatically rotates an object's children to be aligned with screen coordinates.
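As a sketch of the texture-sharing idea suggested above (my own variation, not code from the book), the quad-creation helper can take the texture as a parameter so that every banner references a single osg::Texture2D instance; the createSharedQuad() name is hypothetical:

osg::Geometry* createSharedQuad( osg::Texture2D* texture )
{
    // Each call builds a new geometry, but all of them reference the same texture object.
    osg::ref_ptr<osg::Geometry> quad = osg::createTexturedQuadGeometry(
        osg::Vec3(-0.5f, 0.0f, -0.5f),
        osg::Vec3(1.0f, 0.0f, 0.0f),
        osg::Vec3(0.0f, 0.0f, 1.0f) );
    quad->getOrCreateStateSet()->setTextureAttributeAndModes( 0, texture );
    return quad.release();
}

// Usage with the 'geode' billboard node from the example above:
// create the texture once, then a separate quad per banner position.
osg::ref_ptr<osg::Texture2D> sharedTexture = new osg::Texture2D;
sharedTexture->setImage( osgDB::readImageFile("Images/osg256.png") );
for ( unsigned int i=0; i<10; ++i )
{
    geode->addDrawable( createSharedQuad(sharedTexture.get()),
                        osg::Vec3(-2.5f+0.2f*(float)i, (float)i, 0.0f) );
}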
Have a go hero – planting massive trees on the ground

Billboards are widely used for simulating massive numbers of trees and plants. One or more tree pictures with transparent backgrounds are applied to quads of different sizes, and then added to the billboard node. These trees will automatically face the viewer or, to be more realistic, rotate about an axis so that their branches and leaves always appear at the front. Now let's try to create some simple billboard trees. We only need to prepare a nice enough image.

Creating texts

Text is one of the most important components in all kinds of virtual reality programs. It is used everywhere—for displaying stats on the screen, labeling 3D objects, logging, and debugging. Texts always have at least one font to specify the typeface and qualities, as well as other parameters, including size, alignment, layout (left-to-right or right-to-left), and resolution, to determine their display behavior. OpenGL doesn't directly support loading fonts and displaying text in 3D space, but OSG provides full support for rendering high-quality text and configuring different text attributes, which makes it much easier to develop related applications.

The osgText library actually implements all font and text functionalities. It requires the osgdb_freetype plugin to work properly. This plugin can load and parse TrueType fonts with the help of FreeType, a famous third-party dependency. After that, it returns an osgText::Font instance, which is made up of a complete set of texture glyphs. The entire process can be performed with the osgText::readFontFile() function.

The osgText::TextBase class is the pure base class of all OSG text types. It is derived from osg::Drawable, but doesn't support display lists by default. Its subclass, osgText::Text, is used to manage flat characters in world coordinates. Important methods include setFont(), setPosition(), setCharacterSize(), and setText(), each of which is easy to understand and use, as shown in the following example.

Time for action – writing descriptions for the Cessna

This time we are going to display a Cessna in 3D space and provide descriptive texts in front of the rendered scene. A heads-up display (HUD) camera can be used here, which is rendered after the main camera and only clears the depth buffer, so that the texts are drawn directly over the frame buffer. The HUD camera will then render its child nodes in a way that is always visible.

Include the necessary headers:

#include <osg/Camera>
#include <osgDB/ReadFile>
#include <osgText/Font>
#include <osgText/Text>
#include <osgViewer/Viewer>

The osgText::readFontFile() function is used for reading a suitable font file, for instance, an undistorted TrueType font. The OSG data paths (specified with OSG_FILE_PATH) and the Windows system path will be searched to see if the specified file exists:

osg::ref_ptr<osgText::Font> g_font = osgText::readFontFile("fonts/arial.ttf");

Create a standard HUD camera and set a 2D orthographic projection matrix for the purpose of drawing 3D texts in two dimensions. The camera should not receive any user events, and should never be affected by any parent transformations. These are guaranteed by the setAllowEventFocus() and setReferenceFrame() methods:

osg::Camera* createHUDCamera( double left, double right, double bottom, double top )
{
    osg::ref_ptr<osg::Camera> camera = new osg::Camera;
    camera->setReferenceFrame( osg::Transform::ABSOLUTE_RF );
    camera->setClearMask( GL_DEPTH_BUFFER_BIT );
    camera->setRenderOrder( osg::Camera::POST_RENDER );
    camera->setAllowEventFocus( false );
    camera->setProjectionMatrix( osg::Matrix::ortho2D(left, right, bottom, top) );
    return camera.release();
}

The text is created by a separate global function, too.
It defines a font object describing every character's glyph, as well as the size and position parameters in world space, and the content of the text. In the HUD text implementation, texts should always align with the XOY plane:

osgText::Text* createText( const osg::Vec3& pos, const std::string& content, float size )
{
    osg::ref_ptr<osgText::Text> text = new osgText::Text;
    text->setFont( g_font.get() );
    text->setCharacterSize( size );
    text->setAxisAlignment( osgText::TextBase::XY_PLANE );
    text->setPosition( pos );
    text->setText( content );
    return text.release();
}

In the main entry, we create a new osg::Geode node and add multiple text objects to it. These introduce the leading features of a Cessna. Of course, you can add your own explanations about this type of monoplane by using additional osgText::Text drawables:

osg::ref_ptr<osg::Geode> textGeode = new osg::Geode;
textGeode->addDrawable( createText(
    osg::Vec3(150.0f, 500.0f, 0.0f),
    "The Cessna monoplane",
    20.0f) );
textGeode->addDrawable( createText(
    osg::Vec3(150.0f, 450.0f, 0.0f),
    "Six-seat, low-wing and twin-engined",
    15.0f) );

The node including all texts should be added to the HUD camera. To ensure that the texts won't be affected by OpenGL normals and lights (they are textured geometries, after all), we have to disable lighting for the camera node:

osg::Camera* camera = createHUDCamera(0, 1024, 0, 768);
camera->addChild( textGeode.get() );
camera->getOrCreateStateSet()->setMode( GL_LIGHTING, osg::StateAttribute::OFF );

The last step is to add the Cessna model and the camera to the scene graph, and start the viewer as usual:

osg::ref_ptr<osg::Group> root = new osg::Group;
root->addChild( osgDB::readNodeFile("cessna.osg") );
root->addChild( camera );

osgViewer::Viewer viewer;
viewer.setSceneData( root.get() );
return viewer.run();

In the rendering window, you will see two lines of text over the Cessna model. No matter how you translate, rotate, or scale the view matrix, the HUD texts will never be covered. Thus, users can always read the most important information directly, without looking away from their usual perspective.

What just happened?

To build the example code with CMake or other native compilers, you should add the osgText library as a dependency, and include the osgParticle, osgShadow, and osgFX libraries. Here we specify the font from the arial.ttf file. This is a default font on most Windows and UNIX systems, and it can also be found in the OSG data paths. As you can see, this kind of font offers developers highly precise characters regardless of the font size setting. This is because the outlines of TrueType fonts are made of mathematical line segments and Bezier curves, which makes them vector rather than bitmap fonts. Bitmap (raster) fonts don't have such features and may sometimes look ugly when resized. Remove the setFont() call here to force osgText to use a default 12x12 bitmap font. Can you figure out the difference between these two fonts?

Have a go hero – using wide characters to support more languages

The setText() method of osgText::Text accepts std::string variables directly. Meanwhile, it also accepts wide characters as the input argument. For example:

wchar_t* wstr = …;
text->setText( wstr );

This makes it possible to support multiple languages, for instance, Chinese and Japanese characters.
Now, try obtaining a sequence of wide characters either by defining them directly or converting from multi-byte characters, and apply them to the osgText::Text object, to see if the language that you are interested in can be rendered. Please note that the font should also be changed to support the corresponding language.
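As a starting point for that exercise, here is a small sketch of my own (not from the book) that feeds wide characters to an osgText::Text object and adds it to the textGeode from the example above; the font file name is only a placeholder for any TrueType font that actually contains the required glyphs:

osg::ref_ptr<osgText::Text> text = new osgText::Text;
text->setFont( osgText::readFontFile("fonts/some_cjk_font.ttf") ); // hypothetical font file
text->setCharacterSize( 20.0f );
text->setPosition( osg::Vec3(150.0f, 400.0f, 0.0f) );

const wchar_t* wstr = L"\u4F60\u597D";  // two Chinese characters given as Unicode escapes
text->setText( wstr );                  // setText() also accepts wide strings
textGeode->addDrawable( text.get() );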


Advanced Lighting in 3D Graphics with XNA Game Studio 4.0

Packt
22 Dec 2010
9 min read
Implementing a point light with HLSL

A point light is just a light that shines equally in all directions around itself (like a light bulb) and falls off over a given distance. In this case, a point light is simply modeled as a directional light that slowly fades to darkness over a given distance. To achieve a linear attenuation, we would simply divide the distance between the light and the object by the attenuation distance, invert the result (subtract it from 1), and then multiply the lambertian lighting by the result. This would cause an object directly next to the light source to be fully lit, and an object at the maximum attenuation distance to be completely unlit. However, in practice, we will raise the result of the division to a given power before inverting it, to achieve a more exponential falloff:

Katt = 1 - (d / a)^f

In the previous equation, Katt is the brightness scalar that we will multiply the lighting amount by, d is the distance between the vertex and the light source, a is the distance at which the light should stop affecting objects, and f is the falloff exponent that determines the shape of the curve.

We can implement this easily with HLSL and a new Material class. The new Material class is similar to the material for a directional light, but specifies a light position rather than a light direction. For the sake of simplicity, the effect we will use will not calculate specular highlights, so the material does not include a "specularity" value. It also includes two new values, LightAttenuation and LightFalloff, which specify the distance at which the light is no longer visible and the power to raise the division to.
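Before looking at the new material class, a quick numeric sanity check of the falloff formula may help. The following standalone snippet is my own illustration (plain C++, not part of the XNA project) and simply evaluates Katt = 1 - (d / a)^f with the default values used by PointLightMaterial below (a = 5000, f = 2):

#include <cmath>
#include <cstdio>

int main()
{
    const float a = 5000.0f;   // attenuation distance
    const float f = 2.0f;      // falloff exponent
    const float distances[] = { 0.0f, 2500.0f, 5000.0f };
    for (float d : distances)
    {
        float katt = 1.0f - std::pow(d / a, f);
        std::printf("d = %6.0f -> Katt = %.2f\n", d, katt);
    }
    return 0;
}

A vertex at the light's position is fully lit (Katt = 1.00), one halfway to the attenuation distance still receives 75 percent of the light (Katt = 0.75), and one at the attenuation distance receives none (Katt = 0.00), which is the gentle-then-steep falloff described above. The new PointLightMaterial class itself looks like this: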
public class PointLightMaterial : Material { public Vector3 AmbientLightColor { get; set; } public Vector3 LightPosition { get; set; } public Vector3 LightColor { get; set; } public float LightAttenuation { get; set; } public float LightFalloff { get; set; } public PointLightMaterial() { AmbientLightColor = new Vector3(.15f, .15f, .15f); LightPosition = new Vector3(0, 0, 0); LightColor = new Vector3(.85f, .85f, .85f); LightAttenuation = 5000; LightFalloff = 2; } public override void SetEffectParameters(Effect effect) { if (effect.Parameters["AmbientLightColor"] != null) effect.Parameters["AmbientLightColor"].SetValue( AmbientLightColor); if (effect.Parameters["LightPosition"] != null) effect.Parameters["LightPosition"].SetValue(LightPosition); if (effect.Parameters["LightColor"] != null) effect.Parameters["LightColor"].SetValue(LightColor); if (effect.Parameters["LightAttenuation"] != null) effect.Parameters["LightAttenuation"].SetValue( LightAttenuation); if (effect.Parameters["LightFalloff"] != null) effect.Parameters["LightFalloff"].SetValue(LightFalloff); } } The new effect has parameters to reflect those values: float4x4 World; float4x4 View; float4x4 Projection; float3 AmbientLightColor = float3(.15, .15, .15); float3 DiffuseColor = float3(.85, .85, .85); float3 LightPosition = float3(0, 0, 0); float3 LightColor = float3(1, 1, 1); float LightAttenuation = 5000; float LightFalloff = 2; texture BasicTexture; sampler BasicTextureSampler = sampler_state { texture = <BasicTexture>; }; bool TextureEnabled = true; The vertex shader output struct now includes a copy of the vertex's world position that will be used to calculate the light falloff (attenuation) and light direction. struct VertexShaderInput { float4 Position : POSITION0; float2 UV : TEXCOORD0; float3 Normal : NORMAL0; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 UV : TEXCOORD0; float3 Normal : TEXCOORD1; float4 WorldPosition : TEXCOORD2; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; float4 worldPosition = mul(input.Position, World); float4 viewPosition = mul(worldPosition, View); output.Position = mul(viewPosition, Projection); output.WorldPosition = worldPosition; output.UV = input.UV; output.Normal = mul(input.Normal, World); return output; } Finally, the pixel shader calculates the light much the same way that the directional light did, but uses a per-vertex light direction rather than a global light direction. It also determines how far along the attenuation value the vertex's position is and darkens it accordingly. 
The texture, ambient light, and diffuse color are calculated as usual:

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    float3 diffuseColor = DiffuseColor;
    if (TextureEnabled)
        diffuseColor *= tex2D(BasicTextureSampler, input.UV).rgb;

    float3 totalLight = float3(0, 0, 0);
    totalLight += AmbientLightColor;

    float3 lightDir = normalize(LightPosition - input.WorldPosition);
    float diffuse = saturate(dot(normalize(input.Normal), lightDir));

    float d = distance(LightPosition, input.WorldPosition);
    float att = 1 - pow(clamp(d / LightAttenuation, 0, 1), LightFalloff);

    totalLight += diffuse * att * LightColor;
    return float4(diffuseColor * totalLight, 1);
}

We can now achieve the above image using the following scene setup from the Game1 class:

models.Add(new CModel(Content.Load<Model>("teapot"), new Vector3(0, 60, 0), Vector3.Zero, new Vector3(60), GraphicsDevice));
models.Add(new CModel(Content.Load<Model>("ground"), Vector3.Zero, Vector3.Zero, Vector3.One, GraphicsDevice));

Effect simpleEffect = Content.Load<Effect>("PointLightEffect");
models[0].SetModelEffect(simpleEffect, true);
models[1].SetModelEffect(simpleEffect, true);

PointLightMaterial mat = new PointLightMaterial();
mat.LightPosition = new Vector3(0, 1500, 1500);
mat.LightAttenuation = 3000;
models[0].Material = mat;
models[1].Material = mat;

camera = new FreeCamera(new Vector3(0, 300, 1600),
    MathHelper.ToRadians(0), // No yaw
    MathHelper.ToRadians(5), // Pitched up 5 degrees
    GraphicsDevice);

Implementing a spot light with HLSL

A spot light is similar in theory to a point light, in that it fades out after a given distance. However, the fading is not done around the light source; it is based on the angle between the direction from the light to the object and the light's actual direction. If that angle is larger than the light's "cone angle", we will not light the vertex.

Katt = (dot(p - lp, ld) / cos(a))^f

In the previous equation, Katt is still the scalar that we will multiply our diffuse lighting by, p is the position of the vertex, lp is the position of the light, ld is the direction of the light, a is the cone angle, and f is the falloff exponent.
Our new spot light material reflects these values: public class SpotLightMaterial : Material { public Vector3 AmbientLightColor { get; set; } public Vector3 LightPosition { get; set; } public Vector3 LightColor { get; set; } public Vector3 LightDirection { get; set; } public float ConeAngle { get; set; } public float LightFalloff { get; set; } public SpotLightMaterial() { AmbientLightColor = new Vector3(.15f, .15f, .15f); LightPosition = new Vector3(0, 3000, 0); LightColor = new Vector3(.85f, .85f, .85f); ConeAngle = 30; LightDirection = new Vector3(0, -1, 0); LightFalloff = 20; } public override void SetEffectParameters(Effect effect) { if (effect.Parameters["AmbientLightColor"] != null) effect.Parameters["AmbientLightColor"].SetValue( AmbientLightColor); if (effect.Parameters["LightPosition"] != null) effect.Parameters["LightPosition"].SetValue(LightPosition); if (effect.Parameters["LightColor"] != null) effect.Parameters["LightColor"].SetValue(LightColor); if (effect.Parameters["LightDirection"] != null) effect.Parameters["LightDirection"].SetValue(LightDirection); if (effect.Parameters["ConeAngle"] != null) effect.Parameters["ConeAngle"].SetValue( MathHelper.ToRadians(ConeAngle / 2)); if (effect.Parameters["LightFalloff"] != null) effect.Parameters["LightFalloff"].SetValue(LightFalloff); } } Now we can create a new effect that will render a spot light. We will start by copying the point light's effect and making the following changes to the second block of effect parameters: float3 AmbientLightColor = float3(.15, .15, .15); float3 DiffuseColor = float3(.85, .85, .85); float3 LightPosition = float3(0, 5000, 0); float3 LightDirection = float3(0, -1, 0); float ConeAngle = 90; float3 LightColor = float3(1, 1, 1); float LightFalloff = 20; Finally, we can update the pixel shader to perform the lighting calculations: float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { float3 diffuseColor = DiffuseColor; if (TextureEnabled) diffuseColor *= tex2D(BasicTextureSampler, input.UV).rgb; float3 totalLight = float3(0, 0, 0); totalLight += AmbientLightColor; float3 lightDir = normalize(LightPosition - input.WorldPosition); float diffuse = saturate(dot(normalize(input.Normal), lightDir)); // (dot(p - lp, ld) / cos(a))^f float d = dot(-lightDir, normalize(LightDirection)); float a = cos(ConeAngle); float att = 0; if (a < d) att = 1 - pow(clamp(a / d, 0, 1), LightFalloff); totalLight += diffuse * att * LightColor; return float4(diffuseColor * totalLight, 1); } If we were to then set up the material as follows and use our new effect, we would see the following result: SpotLightMaterial mat = new SpotLightMaterial(); mat.LightDirection = new Vector3(0, -1, -1); mat.LightPosition = new Vector3(0, 3000, 2700); mat.LightFalloff = 200; Drawing multiple lights Now that we can draw one light, the natural question to ask is how to draw more than one light. Well this, unfortunately, is not simple. There are a number of approaches—the easiest of which is to simply loop through a certain number of lights in the pixel shader and sum a total lighting value. Let's create a new shader based on the directional light effect that we created in the last chapter to do just that. We'll start by copying that effect, then modifying some of the effect parameters as follows. 
Notice that instead of a single light direction and color, we instead have an array of three of each, allowing us to draw up to three lights: #define NUMLIGHTS 3 float3 DiffuseColor = float3(1, 1, 1); float3 AmbientColor = float3(0.1, 0.1, 0.1); float3 LightDirection[NUMLIGHTS]; float3 LightColor[NUMLIGHTS]; float SpecularPower = 32; float3 SpecularColor = float3(1, 1, 1); Second, we need to update the pixel shader to do the lighting calculations one time per light: float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { // Start with diffuse color float3 color = DiffuseColor; // Texture if necessary if (TextureEnabled) color *= tex2D(BasicTextureSampler, input.UV); // Start with ambient lighting float3 lighting = AmbientColor; float3 normal = normalize(input.Normal); float3 view = normalize(input.ViewDirection); // Perform lighting calculations per light for (int i = 0; i < NUMLIGHTS; i++) { float3 lightDir = normalize(LightDirection[i]); // Add lambertian lighting lighting += saturate(dot(lightDir, normal)) * LightColor[i]; float3 refl = reflect(lightDir, normal); // Add specular highlights lighting += pow(saturate(dot(refl, view)), SpecularPower) * SpecularColor; } // Calculate final color float3 output = saturate(lighting) * color; return float4(output, 1); } We now need a new Material class to work with this shader: public class MultiLightingMaterial : Material { public Vector3 AmbientColor { get; set; } public Vector3[] LightDirection { get; set; } public Vector3[] LightColor { get; set; } public Vector3 SpecularColor { get; set; } public MultiLightingMaterial() { AmbientColor = new Vector3(.1f, .1f, .1f); LightDirection = new Vector3[3]; LightColor = new Vector3[] { Vector3.One, Vector3.One, Vector3.One }; SpecularColor = new Vector3(1, 1, 1); } public override void SetEffectParameters(Effect effect) { if (effect.Parameters["AmbientColor"] != null) effect.Parameters["AmbientColor"].SetValue(AmbientColor); if (effect.Parameters["LightDirection"] != null) effect.Parameters["LightDirection"].SetValue(LightDirection); if (effect.Parameters["LightColor"] != null) effect.Parameters["LightColor"].SetValue(LightColor); if (effect.Parameters["SpecularColor"] != null) effect.Parameters["SpecularColor"].SetValue(SpecularColor); } } If we wanted to implement the three directional light systems found in the BasicEffect class, we would now just need to copy the light direction values over to our shader: Effect simpleEffect = Content.Load<Effect>("MultiLightingEffect"); models[0].SetModelEffect(simpleEffect, true); models[1].SetModelEffect(simpleEffect, true); MultiLightingMaterial mat = new MultiLightingMaterial(); BasicEffect effect = new BasicEffect(GraphicsDevice); effect.EnableDefaultLighting(); mat.LightDirection[0] = -effect.DirectionalLight0.Direction; mat.LightDirection[1] = -effect.DirectionalLight1.Direction; mat.LightDirection[2] = -effect.DirectionalLight2.Direction; mat.LightColor = new Vector3[] { new Vector3(0.5f, 0.5f, 0.5f), new Vector3(0.5f, 0.5f, 0.5f), new Vector3(0.5f, 0.5f, 0.5f) }; models[0].Material = mat; models[1].Material = mat;
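One detail worth noticing is the saturate() around the summed lighting in the pixel shader above: with several lights, the accumulated value can easily exceed 1 and is simply clamped. The following standalone snippet (plain C++, my own illustration, not part of the XNA project) runs the numbers for the hypothetical worst case of three head-on lights with the half-intensity colors used above, ignoring specular:

#include <algorithm>
#include <cstdio>

int main()
{
    const float ambient = 0.1f;                        // AmbientColor per channel
    const float lightColor[3] = { 0.5f, 0.5f, 0.5f };  // per-light color used in the example
    const float lambert = 1.0f;                        // worst case: every light hits head-on

    float sum = ambient;
    for (float c : lightColor)
        sum += c * lambert;

    float saturated = std::min(std::max(sum, 0.0f), 1.0f); // what saturate() does in HLSL
    std::printf("summed lighting = %.2f, after saturate = %.2f\n", sum, saturated);
    return 0;
}

The sum comes out at 1.60 and is clamped back to 1.00, so keeping the per-light colors at half intensity is one way to leave some headroom before the clamp washes the surface out to plain white.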


Introduction to HLSL in 3D Graphics with XNA Game Studio 4.0

Packt
21 Dec 2010
16 min read
Getting started

The vertex shader and pixel shader are contained in the same code file, called an Effect. The vertex shader is responsible for transforming geometry from object space into screen space, usually using the world, view, and projection matrices. The pixel shader's job is to calculate the color of every pixel onscreen. It is given information about the geometry visible at whatever point onscreen it is being run for, and takes lighting, texturing, and so on into account. For your convenience, I've provided the starting code for this article below.

public class Game1 : Microsoft.Xna.Framework.Game
{
    GraphicsDeviceManager graphics;
    SpriteBatch spriteBatch;

    List<CModel> models = new List<CModel>();
    Camera camera;
    MouseState lastMouseState;

    public Game1()
    {
        graphics = new GraphicsDeviceManager(this);
        Content.RootDirectory = "Content";
        graphics.PreferredBackBufferWidth = 1280;
        graphics.PreferredBackBufferHeight = 800;
    }

    // Called when the game should load its content
    protected override void LoadContent()
    {
        spriteBatch = new SpriteBatch(GraphicsDevice);

        models.Add(new CModel(Content.Load<Model>("ship"),
            new Vector3(0, 400, 0), Vector3.Zero, new Vector3(1f), GraphicsDevice));
        models.Add(new CModel(Content.Load<Model>("ground"),
            Vector3.Zero, Vector3.Zero, Vector3.One, GraphicsDevice));

        camera = new FreeCamera(new Vector3(1000, 500, -2000),
            MathHelper.ToRadians(153), // Turned around 153 degrees
            MathHelper.ToRadians(5),   // Pitched up 5 degrees
            GraphicsDevice);

        lastMouseState = Mouse.GetState();
    }

    // Called when the game should update itself
    protected override void Update(GameTime gameTime)
    {
        updateCamera(gameTime);
        base.Update(gameTime);
    }

    void updateCamera(GameTime gameTime)
    {
        // Get the new keyboard and mouse state
        MouseState mouseState = Mouse.GetState();
        KeyboardState keyState = Keyboard.GetState();

        // Determine how much the camera should turn
        float deltaX = (float)lastMouseState.X - (float)mouseState.X;
        float deltaY = (float)lastMouseState.Y - (float)mouseState.Y;

        // Rotate the camera
        ((FreeCamera)camera).Rotate(deltaX * .005f, deltaY * .005f);

        Vector3 translation = Vector3.Zero;

        // Determine in which direction to move the camera
        if (keyState.IsKeyDown(Keys.W)) translation += Vector3.Forward;
        if (keyState.IsKeyDown(Keys.S)) translation += Vector3.Backward;
        if (keyState.IsKeyDown(Keys.A)) translation += Vector3.Left;
        if (keyState.IsKeyDown(Keys.D)) translation += Vector3.Right;

        // Move 4 units per millisecond, independent of frame rate
        translation *= 4 * (float)gameTime.ElapsedGameTime.TotalMilliseconds;

        // Move the camera
        ((FreeCamera)camera).Move(translation);

        // Update the camera
        camera.Update();

        // Update the mouse state
        lastMouseState = mouseState;
    }

    // Called when the game should draw itself
    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.CornflowerBlue);

        foreach (CModel model in models)
            if (camera.BoundingVolumeIsInView(model.BoundingSphere))
                model.Draw(camera.View, camera.Projection, ((FreeCamera)camera).Position);
base.Draw(gameTime); }} Assigning a shader to a model In order to draw a model with XNA, it needs to have an instance of the Effect class assigned to it. Recall from the first chapter that each ModelMeshPart in a Model has its own Effect. This is because each ModelMeshPart may need to have a different appearance, as one ModelMeshPart may, for example, make up armor on a soldier while another may make up the head. If the two used the same effect (shader), then we could end up with a very shiny head or a very dull piece of armor. Instead, XNA provides us the option to give every ModelMeshPart a unique effect. In order to draw our models with our own effects, we need to replace the BasicEffect of every ModelMeshPart with our own effect loaded from the content pipeline. For now, we won't worry about the fact that each ModelMeshPart can have its own effect; we'll just be assigning one effect to an entire model. Later, however, we will add more functionality to allow different effects on each part of a model. However, before we start replacing the instances of BasicEffect assigned to our models, we need to extract some useful information from them, such as which texture and color to use for each ModelMeshPart. We will store this information in a new class that each ModelMeshPart will keep a reference to using its Tag properties: public class MeshTag{ public Vector3 Color; public Texture2D Texture; public float SpecularPower; public Effect CachedEffect = null; public MeshTag(Vector3 Color, Texture2D Texture, float SpecularPower) { this.Color = Color; this.Texture = Texture; this.SpecularPower = SpecularPower; }} This information will be extracted using a new function in the CModel class: private void generateTags(){ foreach (ModelMesh mesh in Model.Meshes) foreach (ModelMeshPart part in mesh.MeshParts) if (part.Effect is BasicEffect) { BasicEffect effect = (BasicEffect)part.Effect; MeshTag tag = new MeshTag(effect.DiffuseColor, effect.Texture, effect.SpecularPower); part.Tag = tag; }} This function will be called along with buildBoundingSphere() in the constructor: ...buildBoundingSphere();generateTags();... Notice that the MeshTag class has a CachedEffect variable that is not currently used. We will use this value as a location to store a reference to an effect that we want to be able to restore to the ModelMeshPart on demand. This is useful when we want to draw a model using a different effect temporarily without having to completely reload the model's effects afterwards. The functions that will allow us to do this are as shown: // Store references to all of the model's current effectspublic void CacheEffects(){ foreach (ModelMesh mesh in Model.Meshes) foreach (ModelMeshPart part in mesh.MeshParts) ((MeshTag)part.Tag).CachedEffect = part.Effect;}// Restore the effects referenced by the model's cachepublic void RestoreEffects(){ foreach (ModelMesh mesh in Model.Meshes) foreach (ModelMeshPart part in mesh.MeshParts) if (((MeshTag)part.Tag).CachedEffect != null) part.Effect = ((MeshTag)part.Tag).CachedEffect;} We are now ready to start assigning effects to our models. We will look at this in more detail in a moment, but it is worth noting that every Effect has a dictionary of effect parameters. These are variables that the Effect takes into account when performing its calculations—the world, view, and projection matrices, or colors and textures, for example. 
We modify a number of these parameters when assigning a new effect, so that each texture of ModelMeshPart can be informed of its specific properties: public void SetModelEffect(Effect effect, bool CopyEffect){foreach(ModelMesh mesh in Model.Meshes)foreach (ModelMeshPart part in mesh.MeshParts){Effect toSet = effect;// Copy the effect if necessaryif (CopyEffect)toSet = effect.Clone();MeshTag tag = ((MeshTag)part.Tag);// If this ModelMeshPart has a texture, set it to the effectif (tag.Texture != null){setEffectParameter(toSet, "BasicTexture", tag.Texture);setEffectParameter(toSet, "TextureEnabled", true);}elsesetEffectParameter(toSet, "TextureEnabled", false);// Set our remaining parameters to the effectsetEffectParameter(toSet, "DiffuseColor", tag.Color);setEffectParameter(toSet, "SpecularPower", tag.SpecularPower);part.Effect = toSet;}}// Sets the specified effect parameter to the given effect, if it// has that parametervoid setEffectParameter(Effect effect, string paramName, object val){ if (effect.Parameters[paramName] == null) return; if (val is Vector3) effect.Parameters[paramName].SetValue((Vector3)val); else if (val is bool) effect.Parameters[paramName].SetValue((bool)val); else if (val is Matrix) effect.Parameters[paramName].SetValue((Matrix)val); else if (val is Texture2D) effect.Parameters[paramName].SetValue((Texture2D)val);} The CopyEffect parameter, an option that this function has, is very important. If we specify false—telling the CModel not to copy the effect per ModelMeshPart—any changes made to the effect will be reflected any other time the effect is used. This is a problem if we want each ModelMeshPart to have a different texture, or if we want to use the same effect on multiple models. Instead, we can specify true to have the CModel copy the effect for each mesh part so that they can set their own effect parameters: Finally, we need to update the Draw() function to handle Effects other than BasicEffect: public void Draw(Matrix View, Matrix Projection, Vector3 CameraPosition){ // Calculate the base transformation by combining // translation, rotation, and scaling Matrix baseWorld = Matrix.CreateScale(Scale) * Matrix.CreateFromYawPitchRoll(Rotation.Y, Rotation.X, Rotation.Z) * Matrix.CreateTranslation(Position);foreach (ModelMesh mesh in Model.Meshes){ Matrix localWorld = modelTransforms[mesh.ParentBone.Index] * baseWorld; foreach (ModelMeshPart meshPart in mesh.MeshParts) { Effect effect = meshPart.Effect; if (effect is BasicEffect) { ((BasicEffect)effect).World = localWorld; ((BasicEffect)effect).View = View; ((BasicEffect)effect).Projection = Projection; ((BasicEffect)effect).EnableDefaultLighting(); } else { setEffectParameter(effect, "World", localWorld); setEffectParameter(effect, "View", View); setEffectParameter(effect, "Projection", Projection); setEffectParameter(effect, "CameraPosition", CameraPosition); } } mesh.Draw(); }} Creating a simple effect We will create our first effect now, and assign it to our models so that we can see the result. To begin, right-click on the content project, choose Add New Item, and select Effect File. Call it something like SimpleEffect.fx: The code for the new file is as follows. 
Don't worry, we'll go through each piece in a moment: float4x4 World;float4x4 View;float4x4 Projection;struct VertexShaderInput{ float4 Position : POSITION0;};struct VertexShaderOutput{ float4 Position : POSITION0;};VertexShaderOutput VertexShaderFunction(VertexShaderInput input){ VertexShaderOutput output; float4 worldPosition = mul(input.Position, World); float4x4 viewProjection = mul(View, Projection); output.Position = mul(worldPosition, viewProjection); return output;}float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0{ return float4(.5, .5, .5, 1);}technique Technique1{ pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); }} To assign this effect to the models in our scene, we need to first load it in the game's LoadContent() function, then use the SetModelEffect() function to assign the effect to each model. Add the following to the end of the LoadContent function: Effect simpleEffect = Content.Load<Effect>("SimpleEffect");models[0].SetModelEffect(simpleEffect, true);models[1].SetModelEffect(simpleEffect, true); If you were to run the game now, you would notice that the models appear both flat and gray. This is the correct behavior, as the effect doesn't have the code necessary to do anything else at the moment. After we break down each piece of the shader, we will add some more exciting behavior: Let's begin at the top. The first three lines in this effect are its effect paremeters. These three should be familiar to you—they are the world, view, and projection matrices (in HLSL, float4x4 is the equivelant of XNA's Matrix class). There are many types of effect parameters and we will see more later. float4x4 World;float4x4 View;float4x4 Projection; The next few lines are where we define the structures used in the shaders. In this case, the two structs are VertexShaderInput and VertexShaderOutput. As you might guess, these two structs are used to send input into the vertex shader and retrieve the output from it. The data in the VertexShaderOutput struct is then interpolated between vertices and sent to the pixel shader. This way, when we access the Position value in the pixel shader for a pixel that sits between two vertices, we will get the actual position of that location instead of the position of one of the two vertices. In this case, the input and output are very simple: just the position of the vertex before and after it has been transformed using the world, view, and projection matrices: struct VertexShaderInput{ float4 Position : POSITION0;};struct VertexShaderOutput{ float4 Position : POSITION0;}; You may note that the members of these structs are a little different from the properties of a class in C#—in that they must also include what are called semantics. Microsoft's definition for shader semantics is as follows (http://msdn.microsoft.com/en-us/library/bb509647%28VS.85%29.aspx): A semantic is a string attached to a shader input or output that conveys information about the intended use of a parameter. Basically, we need to specify what we intend to do with each member of our structs so that the graphics card can correctly map the vertex shader's outputs with the pixel shader's inputs. For example, in the previous code, we use the POSITION0 semantics to tell the graphics card that this value is the one that holds the position at which to draw the vertex. The next few lines are the vertex shader itself. 
Basically, we are just multiplying the input (object space or untransformed) vertex position by the world, view, and projection matrices (the mul function is part of HLSL and is used to multiply matrices and vertices) and returning that value in a new instance of the VertexShaderOutput struct: VertexShaderOutput VertexShaderFunction(VertexShaderInput input){ VertexShaderOutput output; float4 worldPosition = mul(input.Position, World); float4x4 viewProjection = mul(View, Projection); output.Position = mul(worldPosition, viewProjection); return output;} The next bit of code makes up the pixel shader. It accepts a VertexShaderOutput struct as its input (which is passed from the vertex shader), and returns a float4—equivelent to XNA's Vector4 class, in that it is basically a set of four floating point (decimal) numbers. We use the COLOR0 semantic for our return value to let the pipeline know that this function is returning the final pixel color. In this case, we are using those numbers to represent the red, green, blue, and transparency values respectively of the pixel that we are shading. In this extremely simple pixel shader, we are just returning the color gray (.5, .5, .5), so any pixel covered by the model we are drawing will be gray (like in the previous screenshot). float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0{ return float4(.5, .5, .5, 1);} The last part of the shader is the shader definition. Here, we tell the graphics card which vertex and pixel shader versions to use (every graphics card supports a different set, but in this case we are using vertex shader 1.1 and pixel shader 2.0) and which functions in our code make up the vertex and pixel shaders: technique Technique1{ pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); }} Texture mapping Let's now improve our shader by allowing it to render the textures each ModelMeshPart has assigned. As you may recall, the SetModelEffect function in the CModel class attempts to set the texture of each ModelMeshPart to its respective effect. However, it attempts to do so only if it finds the BasicTexture parameter on the effect. Let's add this parameter to our effect now (under the world, view, and projection properties): texture BasicTexture; We need one more parameter in order to draw textures on our models, and that is an instance of a sampler. The sampler is used by HLSL to retrieve the color of the pixel at a given position in a texture—which will be useful later on—in our pixel shader where we will need to retrieve the pixel from the texture corresponding the point on the model we are shading: sampler BasicTextureSampler = sampler_state { texture = <BasicTexture>;}; A third effect parameter will allow us to turn texturing on and off: bool TextureEnabled = false; Every model that has a texture should also have what are called texture coordinates. The texture coordinates are basically two-dimensional coordinates called UV coordinates that range from (0, 0) to (1, 1) and that are assigned to every vertex in the model. These coordinates correspond to the point on the texture that should be drawn onto that vertex. A UV coordinate of (0, 0) corresponds to the top-left of the texture and (1, 1) corresponds to the bottom-right. The texture coordinates allow us to wrap two-dimensional textures onto the three-dimensional surfaces of our models. 
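To make the UV convention concrete, here is a tiny standalone check of my own (plain C++, unrelated to the XNA/HLSL code) showing roughly which texel a UV coordinate refers to in a hypothetical 256x256 texture, ignoring filtering:

#include <cstdio>

int main()
{
    const int width = 256, height = 256;   // hypothetical texture size
    const float u = 0.25f, v = 0.75f;      // hypothetical UV coordinate from a vertex
    int texelX = static_cast<int>(u * (width - 1));
    int texelY = static_cast<int>(v * (height - 1));
    std::printf("UV (%.2f, %.2f) -> texel (%d, %d)\n", u, v, texelX, texelY);
    return 0;
}

A UV of (0, 0) therefore lands on the top-left texel and (1, 1) on the bottom-right one, matching the convention described above.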
We need to include the texture coordinates in the input and output of the vertex shader, and add the code to pass the UV coordinates through the vertex shader to the pixel shader: struct VertexShaderInput{ float4 Position : POSITION0; float2 UV : TEXCOORD0;};struct VertexShaderOutput{ float4 Position : POSITION0; float2 UV : TEXCOORD0;};VertexShaderOutput VertexShaderFunction(VertexShaderInput input){ VertexShaderOutput output; float4 worldPosition = mul(input.Position, World); float4x4 viewProjection = mul(View, Projection); output.Position = mul(worldPosition, viewProjection); output.UV = input.UV; return output;} Finally, we can use the texture sampler, the texture coordinates (also called UV coordinates), and HLSL's tex2D function to retrieve the texture color corresponding to the pixel we are drawing on the model: float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0{ float3 output = float3(1, 1, 1); if (TextureEnabled) output *= tex2D(BasicTextureSampler, input.UV); return float4(output, 1);} If you run the game now, you will see that the textures are properly drawn onto the models: Texture sampling The problem with texture sampling is that we are rarely able to simply copy each pixel from a texture directly onto the screen because our models bend and distort the texture due to their shape. Textures are distorted further by the transformations we apply to our models—rotation and other transformations. This means that we almost always have to calculate the approximate position in a texture to sample from and return that value, which is what HLSL's sampler2D does for us. There are a number of considerations to make when sampling. How we sample from our textures can have a big impact on both our game's appearance and performance. More advanced sampling (or filtering) algorithms look better but slow down the game. Mip mapping refers to the use of multiple sizes of the same texture. These multiple sizes are calculated before the game is run and stored in the same texture, and the graphics card will swap them out on the fly, using a smaller version of the texture for objects in the distance, and so on. Finally, the address mode that we use when sampling will affect how the graphics card handles UV coordinates outside the (0, 1) range. For example, if the address mode is set to "clamp", the UV coordinates will be clamped to (0, 1). If the address mode is set to "wrap," the coordinates will be wrapped through the texture repeatedly. This can be used to create a tiling effect on terrain, for example. For now, because we are drawing so few models, we will use anisotropic filtering. We will also enable mip mapping and set the address mode to "wrap". sampler BasicTextureSampler = sampler_state { texture = <BasicTexture>; MinFilter = Anisotropic; // Minification Filter MagFilter = Anisotropic; // Magnification Filter MipFilter = Linear; // Mip-mapping AddressU = Wrap; // Address Mode for U Coordinates AddressV = Wrap; // Address Mode for V Coordinates}; This will give our models a nice, smooth appearance in the foreground and a uniform appearance in the background:
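To make the difference between the two address modes concrete, here is a small standalone check of my own (plain C++, independent of the sampler state above) of what happens to UV values outside the [0, 1] range:

#include <algorithm>
#include <cmath>
#include <cstdio>

int main()
{
    const float samples[] = { 2.3f, -0.25f };  // hypothetical out-of-range U coordinates
    for (float u : samples)
    {
        float wrapped = u - std::floor(u);                 // wrap: repeat the texture
        float clamped = std::min(std::max(u, 0.0f), 1.0f); // clamp: pin to the edge
        std::printf("u = %5.2f -> wrap = %.2f, clamp = %.2f\n", u, wrapped, clamped);
    }
    return 0;
}

With wrap, the texture repeats across the surface (the tiling effect mentioned above); with clamp, anything outside the range is pinned to the edge texels.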


The Ogre 3D scene graph

Packt
20 Dec 2010
13 min read
Creating a scene node We will learn how to create a new scene node and attach our 3D model to it. How to create a scene node with Ogre 3D We will follow these steps: In the old version of our code, we had the following two lines in the createScene() function: Ogre::Entity* ent = mSceneMgr->createEntity("MyEntity","Sinbad.mesh"); mSceneMgr->getRootSceneNode()->attachObject(ent); Replace the last line with the following: Ogre::SceneNode* node = mSceneMgr->createSceneNode("Node1"); Then add the following two lines; the order of those two lines is irrelevant forthe resulting scene: mSceneMgr->getRootSceneNode()->addChild(node); node->attachObject(ent); Compile and start the application. What just happened? We created a new scene node named Node 1. Then we added the scene node to the root scene node. After this, we attached our previously created 3D model to the newly created scene node so it would be visible. How to work with the RootSceneNode The call mSceneMgr->getRootSceneNode() returns the root scene node. This scene node is a member variable of the scene manager. When we want something to be visible, we need to attach it to the root scene node or a node which is a child or a descendent in any way. In short, there needs to be a chain of child relations from the root node to the node; otherwise it won't be rendered. As the name suggests, the root scene node is the root of the scene. So the entire scene will be, in some way, attached to the root scene node. Ogre 3D uses a so-called scene graph to organize the scene. This graph is like a tree, it has one root, the root scene node, and each node can have children. We already have used this characteristic when we called mSceneMgr->getRootSceneNode()->addChild(node);. There we added the created scene node as a child to the root. Directly afterwards, we added another kind of child to the scene node with node->attachObject(ent);. Here, we added an entity to the scene node. We have two different kinds of objects we can add to a scene node. Firstly, we have other scene nodes, which can be added as children and have children themselves. Secondly, we have entities that we want rendered. Entities aren't children and can't have children themselves. They are data objects which are associated with the node and can be thought of as leaves of the tree. There are a lot of other things we can add to a scene, like lights, particle systems, and so on. We will later learn what these things are and how to use them. Right now, we only need entities. Our current scene graph looks like this: The first thing we need to understand is what a scene graph is and what it does. A scene graph is used to represent how different parts of a scene are related to each other in 3D space. 3D space Ogre 3D is a 3D rendering engine, so we need to understand some basic 3D concepts. The most basic construct in 3D is a vector, which is represented by an ordered triple (x,y,z). Each position in a 3D space can be represented by such a triple using the Euclidean coordination system for three dimensions. It is important to know that there are different kinds of coordinate systems in 3D space. The only difference between the systems is the orientation of the axis and the positive rotation direction. There are two systems that are widely used, namely, the left-handed and the right-handed versions. In the following image, we see both systems—on the left side, we see the left-handed version; and on the right side, we see the right-handed one. 
Source:http://en.wikipedia.org/wiki/File:Cartesian_coordinate_system_handedness.svg The names left-and right-handed are based on the fact that the orientation of the axis can be reconstructed using the left and right hand. The thumb is the x-axis, the index finger the y-axis, and the middle finger the z-axis. We need to hold our hands so that we have a ninety-degree angle between thumb and index finger and also between middle and index finger. When using the right hand, we get a right-handed coordination system. When using the left hand, we get the left-handed version. Ogre uses the right-handed system, but rotates it so that the positive part of the x-axis is pointing right and the negative part of the x-axis points to the left. The y-axis is pointing up and the z-axis is pointing out of the screen and it is known as the y-up convention. This sounds irritating at first, but we will soon learn to think in this coordinate system. The website http://viz.aset.psu.edu/gho/sem_notes/3d_fundamentals/html/3d_coordinates.html contains a rather good picture-based explanation of the different coordination systems and how they relate to each other. Scene graph A scene graph is one of the most used concepts in graphics programming. Simply put, it's a way to store information about a scene. We already discussed that a scene graph has a root and is organized like a tree. But we didn't touch on the most important function of a scene graph. Each node of a scene graph has a list of its children as well as a transformation in the 3D space. The transformation is composed of three aspects, namely, the position, the rotation, and the scale. The position is a triple (x,y,z), which obviously describes the position of the node in the scene. The rotation is stored using a quaternion, a mathematical concept for storing rotations in 3D space, but we can think of rotations as a single floating point value for each axis, describing how the node is rotated using radians as units. Scaling is quite easy; again, it uses a triple (x,y,z), and each part of the triple is simply the factor to scale the axis with. The important thing about a scene graph is that the transformation is relative to the parent of the node. If we modify the orientation of the parent, the children will also be affected by this change. When we move the parent 10 units along the x-axis, all children will also be moved by 10 units along the x-axis. The final orientation of each child is computed using the orientation of all parents. This fact will become clearer with the next diagram. The position of MyEntity in this scene will be (10,0,0) and MyEntity2 will be at (10,10,20). Let's try this in Ogre 3D. Pop quiz – finding the position of scene nodes Look at the following tree and determine the end positions of MyEntity and MyEntity2: MyEntity(60,60,60) and MyEntity2(0,0,0) MyEntity(70,50,60) and MyEntity2(10,-10,0) MyEntity(60,60,60) and MyEntity2(10,10,10) Setting the position of a scene node Now, we will try to create the setup of the scene from the diagram before the previous image. Time for action – setting the position of a scene node Add this new line after the creation of the scene node: node->setPosition(10,0,0); To create a second entity, add this line at the end of the createScene() function: Ogre::Entity* ent2 = mSceneMgr->createEntity("MyEntity2","Sinbad. 
mesh");

Then create a second scene node:

Ogre::SceneNode* node2 = mSceneMgr->createSceneNode("Node2");

Add the second node to the first one:

node->addChild(node2);

Set the position of the second node:

node2->setPosition(0,10,20);

Attach the second entity to the second node:

node2->attachObject(ent2);

Compile the program and you should see two instances of Sinbad:

What just happened?

We created a scene which matches the preceding diagram. The first new function we used was in step 1. As the name suggests, setPosition(x,y,z) sets the position of the node to the given triple. Keep in mind that this position is relative to the parent. We wanted MyEntity2 to be at (10,10,20); because we added node2, which holds MyEntity2, to a scene node which was already at the position (10,0,0), we only needed to set the position of node2 to (0,10,20). When both positions combine, MyEntity2 ends up at (10,10,20).

Pop quiz – playing with scene nodes

We have the scene node node1 at (0,20,0) and a child scene node node2, which has an entity attached to it. If we want the entity to be rendered at (10,10,10), at which position would we need to set node2? (10,10,10) (10,-10,10) (-10,10,-10)

Have a go hero – adding a Sinbad

Add a third instance of Sinbad and let it be rendered at the position (10,10,30).

Rotating a scene node

We already know how to set the position of a scene node. Now, we will learn how to rotate a scene node and another way to modify the position of a scene node.

Time for action – rotating a scene node

We will use the previous code, but create completely new code for the createScene() function. Remove all code from the createScene() function. First create an instance of Sinbad.mesh and then create a new scene node. Set the position of the scene node to (10,10,0), attach the entity to the node at the end, and add the node to the root scene node as a child:

Ogre::Entity* ent = mSceneMgr->createEntity("MyEntity","Sinbad.mesh");
Ogre::SceneNode* node = mSceneMgr->createSceneNode("Node1");
node->setPosition(10,10,0);
mSceneMgr->getRootSceneNode()->addChild(node);
node->attachObject(ent);

Again, create a new instance of the model and a new scene node, and set the position to (10,0,0):

Ogre::Entity* ent2 = mSceneMgr->createEntity("MyEntity2","Sinbad.mesh");
Ogre::SceneNode* node2 = mSceneMgr->createSceneNode("Node2");
node->addChild(node2);
node2->setPosition(10,0,0);

Now add the following two lines to rotate the model and attach the entity to the scene node:

node2->pitch(Ogre::Radian(Ogre::Math::HALF_PI));
node2->attachObject(ent2);

Do the same again, but this time use the function yaw instead of the function pitch and the translate function instead of the setPosition function:

Ogre::Entity* ent3 = mSceneMgr->createEntity("MyEntity3","Sinbad.mesh");
Ogre::SceneNode* node3 = mSceneMgr->createSceneNode("Node3");
node->addChild(node3);
node3->translate(20,0,0);
node3->yaw(Ogre::Degree(90.0f));
node3->attachObject(ent3);

And the same again with roll instead of yaw or pitch:

Ogre::Entity* ent4 = mSceneMgr->createEntity("MyEntity4","Sinbad.mesh");
Ogre::SceneNode* node4 = mSceneMgr->createSceneNode("Node4");
node->addChild(node4);
node4->setPosition(30,0,0);
node4->roll(Ogre::Radian(Ogre::Math::HALF_PI));
node4->attachObject(ent4);

Compile and run the program, and you should see the following screenshot:

What just happened?

We repeated the code we had before four times and always changed some small details. The first repeat is nothing special.
It is just the code we had before and this instance of the model will be our reference model to see what happens to the other three instances we made afterwards. In step 4, we added the following additional line: node2->pitch(Ogre::Radian(Ogre::Math::HALF_PI)); The function pitch(Ogre::Radian(Ogre::Math::HALF_PI)) rotates a scene node around the x-axis. As said before, this function expects a radian as a parameter and we used half of pi, which means a rotation of ninety degrees. In step 5, we replaced the function call setPosition(x,y,z) with translate(x,y,z). The difference between setPosition(x,y,z) and translate(x,y,z) is that setPosition sets the position—no surprises here. translate adds the given values to the position of the scene node, so it moves the node relative to its current position. If a scene node has the position (10,20,30) and we call setPosition(30,20,10), the node will then have the position (30,20,10). On the other hand, if we call translate(30,20,10), the node will have the position (40,40,40). It's a small, but important, difference. Both functions can be useful if used in the correct circumstances: when we want to place a node at a specific position in the scene, we would use the setPosition(x,y,z) function. However, when we want to move a node already positioned in the scene, we would use translate(x,y,z). Also, we replaced pitch(Ogre::Radian(Ogre::Math::HALF_PI)) with yaw(Ogre::Degree(90.0f)). The yaw() function rotates the scene node around the y-axis. Instead of Ogre::Radian(), we used Ogre::Degree(). Of course, pitch() and yaw() still need a radian to be used. However, Ogre 3D offers the class Degree(), which has a cast operator so the compiler can automatically cast it into a Radian(). Therefore, the programmer is free to use a radian or degree to rotate scene nodes. The mandatory use of the classes makes sure that it's always clear which is used, to prevent confusion and possible error sources. Step 6 introduces the last of the three different rotate functions a scene node has, namely, roll(). This function rotates the scene node around the z-axis. Again, we could use roll(Ogre::Degree(90.0f)) instead of roll(Ogre::Radian(Ogre::Math::HALF_PI)). When run, the program shows a non-rotated model and all three possible rotations. The left model isn't rotated, the model to the right of the left model is rotated around the x-axis, the model to the left of the right model is rotated around the y-axis, and the right model is rotated around the z-axis. Each of these instances shows the effect of a different rotate function. In short, pitch() rotates around the x-axis, yaw() around the y-axis, and roll() around the z-axis. We can either use Ogre::Degree(degree) or Ogre::Radian(radian) to specify how much we want to rotate. Pop quiz – rotating a scene node Which are the three functions to rotate a scene node? pitch, yawn, roll pitch, yaw, roll pitching, yaw, roll Have a go hero – using Ogre::Degree Remodel the code we wrote for the previous section in such a way that each occurrence of Ogre::Radian is replaced with an Ogre::Degree and vice versa, and the rotation is still the same. Scaling a scene node We have already covered two of the three basic operations we can use to manipulate our scene graph. Now it's time for the last one, namely, scaling. Time for action – scaling a scene node Once again, we start with the same code block we used before. Remove all code from the createScene() function and insert the following code block: Ogre::Entity* ent = mSceneMgr->createEntity("MyEntity","Sinbad.
mesh"); Ogre::SceneNode* node = mSceneMgr->createSceneNode("Node1"); node->setPosition(10,10,0); mSceneMgr->getRootSceneNode()->addChild(node); node->attachObject(ent); Again, create a new entity: Ogre::Entity* ent2 = mSceneMgr->createEntity("MyEntity2","Sinbad. mesh"); Now we use a function that creates the scene node and adds it automatically as a child. Then we do the same thing we did before: Ogre::SceneNode* node2 = node->createChildSceneNode("node2"); node2->setPosition(10,0,0); node2->attachObject(ent2); Now, after the setPosition() function, call the following line to scale the model: node2->scale(2.0f,2.0f,2.0f); Create a new entity: Ogre::Entity* ent3 = mSceneMgr->createEntity("MyEntity3","Sinbad. mesh"); Now we call the same function as in step 3, but with an additional parameter: Ogre::SceneNode* node3 = node->createChildSceneNode("node3",Ogre:: Vector3(20,0,0)); After the function call, insert this line to scale the model: node3->scale(0.2f,0.2f,0.2f); Compile the program and run it, and you should see the following image:
Environmental Effects in 3D Graphics with XNA Game Studio 4.0

Packt
16 Dec 2010
10 min read
3D Graphics with XNA Game Studio 4.0 A step-by-step guide to adding the 3D graphics effects used by professionals to your XNA games. Improve the appearance of your games by implementing the same techniques used by professionals in the game industry Learn the fundamentals of 3D graphics, including common 3D math and the graphics pipeline Create an extensible system to draw 3D models and other effects, and learn the skills to create your own effects and animate them We will look at a technique called region growing to add plants and trees to the terrain's surface, and finish by combining the terrain with our sky box, water, and billboarding effects to create a mountain scene: Building a terrain from a heightmap A heightmap is a 2D image that stores, in each pixel, the height of the corresponding point on a grid of vertices. The pixel values range from 0 to 1, so in practice we will multiply them by the maximum height of the terrain to get the final height of each vertex. We build a terrain out of vertices and indices as a large rectangular grid with the same number of vertices as the number of pixels in the heightmap. Let's start by creating a new Terrain class. This class will keep track of everything needed to render our terrain: textures, the effect, vertex and index buffers, and so on. public class Terrain { VertexPositionNormalTexture[] vertices; // Vertex array VertexBuffer vertexBuffer; // Vertex buffer int[] indices; // Index array IndexBuffer indexBuffer; // Index buffer float[,] heights; // Array of vertex heights float height; // Maximum height of terrain float cellSize; // Distance between vertices on x and z axes int width, length; // Number of vertices on x and z axes int nVertices, nIndices; // Number of vertices and indices Effect effect; // Effect used for rendering GraphicsDevice GraphicsDevice; // Graphics device to draw with Texture2D heightMap; // Heightmap texture } The constructor will initialize many of these values: public Terrain(Texture2D HeightMap, float CellSize, float Height, GraphicsDevice GraphicsDevice, ContentManager Content) { this.heightMap = HeightMap; this.width = HeightMap.Width; this.length = HeightMap.Height; this.cellSize = CellSize; this.height = Height; this.GraphicsDevice = GraphicsDevice; effect = Content.Load<Effect>("TerrainEffect"); // 1 vertex per pixel nVertices = width * length; // (Width-1) * (Length-1) cells, 2 triangles per cell, 3 indices per // triangle nIndices = (width - 1) * (length - 1) * 6; vertexBuffer = new VertexBuffer(GraphicsDevice, typeof(VertexPositionNormalTexture), nVertices, BufferUsage.WriteOnly); indexBuffer = new IndexBuffer(GraphicsDevice, IndexElementSize.ThirtyTwoBits, nIndices, BufferUsage.WriteOnly); } Before we can generate any normals or indices, we need to know the dimensions of our grid. We know that the width and length are simply the width and height of our heightmap, but we need to extract the height values from the heightmap. 
We do this with the getHeights() function: private void getHeights() { // Extract pixel data Color[] heightMapData = new Color[width * length]; heightMap.GetData<Color>(heightMapData); // Create heights[,] array heights = new float[width, length]; // For each pixel for (int y = 0; y < length; y++) for (int x = 0; x < width; x++) { // Get color value (0 - 255) float amt = heightMapData[y * width + x].R; // Scale to (0 - 1) amt /= 255.0f; // Multiply by max height to get final height heights[x, y] = amt * height; } } This will initialize the heights[,] array, which we can then use to build our vertices. When building vertices, we simply lay out a vertex for each pixel in the heightmap, spaced according to the cellSize variable. Note that this will create (width – 1) * (length – 1) "cells"—each with two triangles: The function that does this is as shown: private void createVertices() { vertices = new VertexPositionNormalTexture[nVertices]; // Calculate the position offset that will center the terrain at (0, 0, 0) Vector3 offsetToCenter = -new Vector3(((float)width / 2.0f) * cellSize, 0, ((float)length / 2.0f) * cellSize); // For each pixel in the image for (int z = 0; z < length; z++) for (int x = 0; x < width; x++) { // Find position based on grid coordinates and height in // heightmap Vector3 position = new Vector3(x * cellSize, heights[x, z], z * cellSize) + offsetToCenter; // UV coordinates range from (0, 0) at grid location (0, 0) to // (1, 1) at grid location (width, length) Vector2 uv = new Vector2((float)x / width, (float)z / length); // Create the vertex vertices[z * width + x] = new VertexPositionNormalTexture( position, Vector3.Zero, uv); } } When we create our terrain's index buffer, we need to lay out two triangles for each cell in the terrain. All we need to do is find the indices of the vertices at each corner of each cell, and create the triangles by specifying those indices in clockwise order for two triangles. For example, to create the triangles for the first cell in the preceding screenshot, we would specify the triangles as [0, 1, 4] and [4, 1, 5]. private void createIndices() { indices = new int[nIndices]; int i = 0; // For each cell for (int x = 0; x < width - 1; x++) for (int z = 0; z < length - 1; z++) { // Find the indices of the corners int upperLeft = z * width + x; int upperRight = upperLeft + 1; int lowerLeft = upperLeft + width; int lowerRight = lowerLeft + 1; // Specify upper triangle indices[i++] = upperLeft; indices[i++] = upperRight; indices[i++] = lowerLeft; // Specify lower triangle indices[i++] = lowerLeft; indices[i++] = upperRight; indices[i++] = lowerRight; } } The last thing we need to calculate for each vertex is the normals. Because we are creating the terrain from scratch, we will need to calculate all of the normals based only on the height data that we are given. This is actually much easier than it sounds: to calculate the normals we simply calculate the normal of each triangle of the terrain and add that normal to each vertex involved in the triangle. Once we have done this for each triangle, we simply normalize again, averaging the influences of each triangle connected to each vertex. 
private void genNormals() { // For each triangle for (int i = 0; i < nIndices; i += 3) { // Find the position of each corner of the triangle Vector3 v1 = vertices[indices[i]].Position; Vector3 v2 = vertices[indices[i + 1]].Position; Vector3 v3 = vertices[indices[i + 2]].Position; // Cross the vectors between the corners to get the normal Vector3 normal = Vector3.Cross(v1 - v2, v1 - v3); normal.Normalize(); // Add the influence of the normal to each vertex in the // triangle vertices[indices[i]].Normal += normal; vertices[indices[i + 1]].Normal += normal; vertices[indices[i + 2]].Normal += normal; } // Average the influences of the triangles touching each // vertex for (int i = 0; i < nVertices; i++) vertices[i].Normal.Normalize(); } We'll finish off the constructor by calling these functions in order and then setting the vertices and indices that we created into their respective buffers: createVertices(); createIndices(); genNormals(); vertexBuffer.SetData<VertexPositionNormalTexture>(vertices); indexBuffer.SetData<int>(indices); Now that we've created the framework for this class, let's create the TerrainEffect.fx effect. This effect will, for the moment, be responsible for some simple directional lighting and texture mapping. We'll need a few effect parameters: float4x4 View; float4x4 Projection; float3 LightDirection = float3(1, -1, 0); float TextureTiling = 1; texture2D BaseTexture; sampler2D BaseTextureSampler = sampler_state { Texture = <BaseTexture>; AddressU = Wrap; AddressV = Wrap; MinFilter = Anisotropic; MagFilter = Anisotropic; }; The TextureTiling parameter will determine how many times our texture is repeated across the terrain's surface—simply stretching it across the terrain would look bad because it would need to be stretched to a very large size. "Tiling" it across the terrain will look much better. We will need a very standard vertex shader: struct VertexShaderInput { float4 Position : POSITION0; float2 UV : TEXCOORD0; float3 Normal : NORMAL0; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 UV : TEXCOORD0; float3 Normal : TEXCOORD1; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; output.Position = mul(input.Position, mul(View, Projection)); output.Normal = input.Normal; output.UV = input.UV; return output; } The pixel shader is also very standard, except that we multiply the texture coordinates by the TextureTiling parameter. This works because the texture sampler's address mode is set to "wrap", and thus the sampler will simply wrap the texture coordinates past the edge of the texture, creating the tiling effect. 
float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { float light = dot(normalize(input.Normal), normalize(LightDirection)); light = clamp(light + 0.4f, 0, 1); // Simple ambient lighting float3 tex = tex2D(BaseTextureSampler, input.UV * TextureTiling); return float4(tex * light, 1); } The technique definition is the same as our other effects: technique Technique1 { pass Pass1 { VertexShader = compile vs_2_0 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } In order to use the effect with our terrain, we'll need to add a few more member variables to the Terrain class: Texture2D baseTexture; float textureTiling; Vector3 lightDirection; These values will be set from the constructor: public Terrain(Texture2D HeightMap, float CellSize, float Height, Texture2D BaseTexture, float TextureTiling, Vector3 LightDirection, GraphicsDevice GraphicsDevice, ContentManager Content) { this.baseTexture = BaseTexture; this.textureTiling = TextureTiling; this.lightDirection = LightDirection; // etc... Finally, we can simply set these effect parameters along with the View and Projection parameters in the Draw() function: effect.Parameters["BaseTexture"].SetValue(baseTexture); effect.Parameters["TextureTiling"].SetValue(textureTiling); effect.Parameters["LightDirection"].SetValue(lightDirection); Let's now add the terrain to our game. We'll need a new member variable in the Game1 class: Terrain terrain; We'll need to initialize it in the LoadContent() method: terrain = new Terrain(Content.Load<Texture2D>("terrain"), 30, 4800, Content.Load<Texture2D>("grass"), 6, new Vector3(1, -1, 0), GraphicsDevice, Content); Finally, we can draw it in the Draw() function: terrain.Draw(camera.View, camera.Projection); Multitexturing Our terrain looks pretty good as it is, but to make it more believable the texture applied to it needs to vary—snow and rocks at the peaks, for example. To do this, we will use a technique called multitexturing, which uses the red, blue, and green channels of a texture as a guide as to where to draw textures that correspond to those channels. For example, sand may correspond to red, snow to blue, and rock to green. Adding snow would then be as simple as painting blue onto the areas of this "texture map" that correspond with peaks on the heightmap. We will also have one extra texture that fills in the area where no colors have been painted onto the texture map—grass, for example. To begin with, we will need to modify our texture parameters on our effect from one texture to five: the texture map, the base texture, and the three color channel mapped textures. 
texture RTexture; sampler RTextureSampler = sampler_state { texture = <RTexture>; AddressU = Wrap; AddressV = Wrap; MinFilter = Anisotropic; MagFilter = Anisotropic; }; texture GTexture; sampler GTextureSampler = sampler_state { texture = <GTexture>; AddressU = Wrap; AddressV = Wrap; MinFilter = Anisotropic; MagFilter = Anisotropic; }; texture BTexture; sampler BTextureSampler = sampler_state { texture = <BTexture>; AddressU = Wrap; AddressV = Wrap; MinFilter = Anisotropic; MagFilter = Anisotropic; }; texture BaseTexture; sampler BaseTextureSampler = sampler_state { texture = <BaseTexture>; AddressU = Wrap; AddressV = Wrap; MinFilter = Anisotropic; MagFilter = Anisotropic; }; texture WeightMap; sampler WeightMapSampler = sampler_state { texture = <WeightMap>; AddressU = Clamp; AddressV = Clamp; MinFilter = Linear; MagFilter = Linear; }; Second, we need to update our pixel shader to draw these textures onto the terrain: float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { float light = dot(normalize(input.Normal), normalize( LightDirection)); light = clamp(light + 0.4f, 0, 1); float3 rTex = tex2D(RTextureSampler, input.UV * TextureTiling); float3 gTex = tex2D(GTextureSampler, input.UV * TextureTiling); float3 bTex = tex2D(BTextureSampler, input.UV * TextureTiling); float3 base = tex2D(BaseTextureSampler, input.UV * TextureTiling); float3 weightMap = tex2D(WeightMapSampler, input.UV); float3 output = clamp(1.0f - weightMap.r - weightMap.g - weightMap.b, 0, 1); output *= base; output += weightMap.r * rTex + weightMap.g * gTex + weightMap.b * bTex; return float4(output * light, 1); } We'll need to add a way to set these values to the Terrain class: public Texture2D RTexture, BTexture, GTexture, WeightMap; All we need to do now is set these values to the effect in the Draw() function: effect.Parameters["RTexture"].SetValue(RTexture); effect.Parameters["GTexture"].SetValue(GTexture); effect.Parameters["BTexture"].SetValue(BTexture); effect.Parameters["WeightMap"].SetValue(WeightMap); To use multitexturing in our game, we'll need to set these values in the Game1 class: terrain.WeightMap = Content.Load<Texture2D>("weightMap"); terrain.RTexture = Content.Load<Texture2D>("sand"); terrain.GTexture = Content.Load<Texture2D>("rock"); terrain.BTexture = Content.Load<Texture2D>("snow");
OpenSceneGraph: Managing Scene Graph

Packt
15 Dec 2010
9 min read
  OpenSceneGraph 3.0: Beginner's Guide Create high-performance virtual reality applications with OpenSceneGraph, one of the best 3D graphics engines. The Group interface The osg::Group type represents the group nodes of an OSG scene graph. It can have any number of child nodes, including the osg::Geode leaf nodes and other osg::Group nodes. It is the most commonly-used base class of the various NodeKits—that is, nodes with various functionalities. The osg::Group class derives from osg::Node, and thus indirectly derives from osg::Referenced. The osg::Group class contains a children list with each child node managed by the smart pointer osg::ref_ptr<>. This ensures that there will be no memory leaks whenever deleting a set of cascading nodes in the scene graph. The osg::Group class provides a set of public methods for defining interfaces for handling children. These are very similar to the drawable managing methods of osg::Geode, but most of the input parameters are osg::Node pointers. The public method addChild() attaches a node to the end of the children list. Meanwhile, there is an insertChild() method for inserting nodes to osg::Group at a specific location, which accepts an integer index and a node pointer as parameters. The public methods removeChild() and removeChildren() will remove one or more child nodes from the current osg::Group object. The latter uses two parameters: the zero-based index of the start element, and the number of elements to be removed. The getChild() returns the osg::Node pointer stored at a specified zero-based index. The getNumChildren() returns the total number of children. You will be able to handle the child interface of osg::Group with ease because of your previous experience of handling osg::Geode and drawables. Managing parent nodes We have already learnt that osg::Group is used as the group node, and osg::Geode as the leaf node of a scene graph. Additionally, both classes should have an interface for managing parent nodes. OSG allows a node to have multiple parents. In this section, we will first have a glimpse of parent management methods, which are declared in the osg::Node class directly: The method getParent() returns an osg::Group pointer as the parent node. It requires an integer parameter that indicates the index in the parent's list. The method getNumParents() returns the total number of parents. If the node has a single parent, this method will return 1, and only getParent(0) is available at this time. The method getParentalNodePaths() returns all possible paths from the root node of the scene to the current node (but excluding the current node). It returns a list of osg::NodePath variables. The osg::NodePath is actually a std::vector object of node pointers, for example, assuming we have a graphical scene: The following code snippet will find the only path from the scene root to the node child3: osg::NodePath& nodePath = child3->getParentalNodePaths()[0]; for ( unsigned int i=0; i<nodePath.size(); ++i ) { osg::Node* node = nodePath[i]; // Do something... } You will successively receive the nodes Root, Child1, and Child2 in the loop. We don't need to use the memory management system to reference a node's parents. When a parent node is deleted, it will be automatically removed from its child nodes' records as well. A node without any parents can only be considered as the root node of the scene graph. In that case, the getNumParents() method will return 0 and no parent node can be retrieved. 
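To make the parent interface concrete, here is a small sketch (not taken from the book) that walks over all parents of a node using only the methods described above; it assumes node points to a valid osg::Node inside an already-built scene graph.

#include <osg/Node>
#include <osg/Group>
#include <iostream>

void printParents( osg::Node* node )
{
    // getNumParents() returns 0 for the root node, so the loop simply does nothing there.
    for ( unsigned int i = 0; i < node->getNumParents(); ++i )
    {
        osg::Group* parent = node->getParent( i );
        std::cout << "Parent " << i << ": " << parent->getName() << std::endl;
    }
}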
Time for action – adding models to the scene graph In the past examples, we always loaded a single model, like the Cessna, by using the osgDB::readNodeFile() function. This time we will try to import and manage multiple models. Each model will be assigned to a node pointer and then added to a group node. The group node, which is defined as the scene root, is going to be used by the program to render the whole scene graph at last: Include the necessary headers: #include <osg/Group> #include <osgDB/ReadFile> #include <osgViewer/Viewer> In the main function, we will first load two different models and assign them to osg::Node pointers. A loaded model is also a sub-scene graph constructed with group and leaf nodes. The osg::Node class is able to represent any kind of sub graphs, and if necessary, it can be converted to osg::Group or osg::Geode with either the C++ dynamic_cast<> operator, or convenient conversion methods like asGroup() and asGeode(), which are less time-costly than dynamic_cast<>. osg::ref_ptr<osg::Node> model1 = osgDB::readNodeFile( "cessna.osg" ); osg::ref_ptr<osg::Node> model2 = osgDB::readNodeFile( "cow.osg" ); Add the two models to an osg::Group node by using the addChild() method: osg::ref_ptr<osg::Group> root = new osg::Group; root->addChild( model1.get() ); root->addChild( model2.get() ); Initialize and start the viewer: osgViewer::Viewer viewer; viewer.setSceneData( root.get() ); return viewer.run(); Now you will see a cow getting stuck in the Cessna model! It is a little incredible to see that in reality, but in a virtual world, these two models just belong to uncorrelated child nodes managed by a group node, and then rendered separately by the scene viewer. What just happened? Both osg::Group and osg::Geode are derived from the osg::Node base class. The osg::Group class allows the addition of any types of child nodes, including the osg::Group itself. However, the osg::Geode class contains no group or leaf nodes. It only accepts drawables for rendering purposes. It is convenient if we can find out whether the type of a certain node is osg::Group, osg::Geode, or other derived type especially those read from files and managed by ambiguous osg::Node classes, such as: osg::ref_ptr<osg::Node> model = osgDB::readNodeFile( "cessna.osg" ); Both the dynamic_cast<> operator and the conversion methods like asGroup(), asGeode(), among others, will help to convert from one pointer or reference type to another. Firstly, we take the dynamic_cast<> operator as an example. This can be used to perform downcast conversions of the class inheritance hierarchy, such as: osg::ref_ptr<osg::Group> model = dynamic_cast<osg::Group*>( osgDB::readNodeFile("cessna.osg") ); The return value of the osgDB::readNodeFile() function is always osg::Node*, but we can also try to manage it with an osg::Group pointer. If, the root node of the Cessna sub graph is a group node, then the conversion will succeed, otherwise it will fail and the variable model will be NULL.   You may also perform an upcast conversion, which is actually an implicit conversion: osg::ref_ptr<osg::Group> group = ...; osg::Node* node1 = dynamic_cast<osg::Node*>( group.get() ); osg::Node* node2 = group.get(); On most compilers, both node1 and node2 will compile and work fine. The conversion methods will do a similar job. Actually, it is preferable to use those methods instead of dynamic_cast<> if one exists for the type you need, especially in a performance-critical section of code: // Assumes the Cessna's root node is a group node. 
osg::ref_ptr<osg::Node> model = osgDB::readNodeFile("cessna.osg"); osg::Group* convModel1 = model->asGroup(); // OK! osg::Geode* convModel2 = model->asGeode(); // Returns NULL. Traversing the scene graph A typical traversal consists of the following steps: First, start at an arbitrary node (for example, the root node). Move down (or sometimes up) the scene graph recursively to the child nodes, until a leaf node is reached, or a node with no children is reached. Backtrack to the most recent node that doesn't finish exploring, and repeat the above steps. This can be called a depth-first search of a scene graph. Different updating and rendering operations will be applied to all scene nodes during traversals, which makes traversing a key feature of scene graphs. There are several types of traversals, with different purposes: An event traversal firstly processes mouse and keyboard inputs, and other user events, while traversing the nodes. An update traversal (or application traversal) allows the user application to modify the scene graph, such as setting node and geometry properties, applying node functionalities, executing callbacks, and so on. A cull traversal tests whether a node is within the viewport and worthy of being rendered. It culls invisible and unavailable nodes, and outputs the optimized scene graph to an internal rendering list. A draw traversal (or rendering traversal) issues low-level OpenGL API calls to actually render the scene. Note that it has no correlation with the scene graph, but only works on the rendering list generated by the cull traversal. In the common sense, these traversals should be executed per frame, one after another. But for systems with multiple processors and graphics cards, OSG can process them in parallel and therefore improve the rendering efficiency. The visitor pattern can be used to implement traversals. Transformation nodes The osg::Group nodes do nothing except for traversing down to their children. However, OSG also supports the osg::Transform family of classes, which is created during the traversal-concatenated transformations to be applied to geometry. The osg::Transform class is derived from osg::Group. It can't be instantiated directly. Instead, it provides a set of subclasses for implementing different transformation interfaces. When traversing down the scene graph hierarchy, the osg::Transform node always adds its own transformation to the current transformation matrix, that is, the OpenGL model-view matrix. It is equivalent to concatenating OpenGL matrix commands such as glMultMatrix(), for instance: This example scene graph can be translated into following OpenGL code: glPushMatrix(); glMultMatrix( matrixOfTransform1 ); renderGeode1(); // Assume this will render Geode1 glPushMatrix(); glMultMatrix( matrixOfTransform2 ); renderGeode2(); // Assume this will render Geode2 glPopMatrix(); glPopMatrix(); To describe the procedure using the concept of coordinate frame, we could say that Geode1 and Transform2 are under the relative reference frame of Transform1, and Geode2 is under the relative frame of Transform2. However, OSG also allows the setting of an absolute reference frame instead, which will result in the behavior equivalent to the OpenGL command glLoadMatrix(): transformNode->setReferenceFrame( osg::Transform::ABSOLUTE_RF ); And to switch back to the default coordinate frame: transformNode->setReferenceFrame( osg::Transform::RELATIVE_RF );
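To illustrate the relative reference frames described above, the following sketch (not part of the book's example) rebuilds the Transform1/Transform2 hierarchy with osg::MatrixTransform, one of the osg::Transform subclasses; root, geode1, and geode2 are assumed to be nodes you have already created.

#include <osg/MatrixTransform>

// Transform1 carries Geode1 and Transform2; Transform2 carries Geode2, so Geode2
// ends up under both matrices, just like the glPushMatrix()/glMultMatrix() sequence above.
osg::ref_ptr<osg::MatrixTransform> transform1 = new osg::MatrixTransform;
transform1->setMatrix( osg::Matrix::translate(5.0f, 0.0f, 0.0f) );
transform1->addChild( geode1.get() );

osg::ref_ptr<osg::MatrixTransform> transform2 = new osg::MatrixTransform;
transform2->setMatrix( osg::Matrix::translate(0.0f, 0.0f, 5.0f) );
transform2->addChild( geode2.get() );

transform1->addChild( transform2.get() );
root->addChild( transform1.get() );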
Starting Ogre 3D

Packt
25 Nov 2010
7 min read
OGRE 3D 1.7 Beginner's Guide Create real time 3D applications using OGRE 3D from scratch Easy-to-follow introduction to OGRE 3D Create exciting 3D applications using OGRE 3D Create your own scenes and monsters, play with the lights and shadows, and learn to use plugins Get challenged to be creative and make fun and addictive games on your own A hands-on do-it-yourself approach with over 100 examples Introduction Up until now, the ExampleApplication class has started and initialized Ogre 3D for us; now we are going to do it ourselves. Time for action – starting Ogre 3D This time we are working on a blank sheet. Start with an empty code file, include Ogre.h, and create an empty main function: #include "Ogre\Ogre.h" int main (void){ return 0;} Create an instance of the Ogre 3D Root class; this class needs the name of the "plugin.cfg": Ogre::Root* root = new Ogre::Root("plugins_d.cfg"); If the config dialog can't be shown or the user cancels it, close the application: if(!root->showConfigDialog()){ return -1;} Create a render window: Ogre::RenderWindow* window = root->initialise(true,"Ogre3D Beginners Guide"); Next create a new scene manager: Ogre::SceneManager* sceneManager = root->createSceneManager(Ogre::ST_GENERIC); Create a camera and name it camera: Ogre::Camera* camera = sceneManager->createCamera("Camera");camera->setPosition(Ogre::Vector3(0,0,50));camera->lookAt(Ogre::Vector3(0,0,0));camera->setNearClipDistance(5); With this camera, create a viewport and set the background color to black: Ogre::Viewport* viewport = window->addViewport(camera);viewport->setBackgroundColour(Ogre::ColourValue(0.0,0.0,0.0)); Now, use this viewport to set the aspect ratio of the camera: camera->setAspectRatio(Ogre::Real(viewport->getActualWidth())/Ogre::Real(viewport->getActualHeight())); Finally, tell the root to start rendering: root->startRendering(); Compile and run the application; you should see the normal config dialog and then a black window. This window can't be closed by pressing Escape because we haven't added key handling yet. You can close the application by pressing CTRL+C in the console the application has been started from. What just happened? We created our first Ogre 3D application without the help of the ExampleApplication. Because we aren't using the ExampleApplication any longer, we had to include Ogre.h ourselves, which was previously included by ExampleApplication.h. Before we can do anything with Ogre 3D, we need a root instance. The root class is a class that manages the higher levels of Ogre 3D, creates and saves the factories used for creating other objects, loads and unloads the needed plugins, and a lot more. We gave the root instance one parameter: the name of the file that defines which plugins to load. The following is the complete signature of the constructor: Root(const String& pluginFileName = "plugins.cfg", const String& configFileName = "ogre.cfg", const String& logFileName = "Ogre.log") Besides the name for the plugin configuration file, the function also needs the name of the Ogre configuration and the log file. We needed to change the first file name because we are using the debug version of our application and therefore want to load the debug plugins. The default value is plugins.cfg, which is true for the release folder of the Ogre 3D SDK, but our application is running in the debug folder where the filename is plugins_d.cfg.
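For clarity, the same constructor call with all three file names written out explicitly would look like the following sketch; only the plugin file needs the _d suffix in our debug build, while the other two values match the defaults and are explained next.

// Equivalent Root creation with every parameter spelled out.
Ogre::Root* root = new Ogre::Root("plugins_d.cfg", "ogre.cfg", "Ogre.log");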
ogre.cfg contains the settings for starting the Ogre application that we selected in the config dialog. This saves the user from making the same changes every time he/she start our application. With this file Ogre 3D can remember his choices and use them as defaults for the next start. This file is created if it didn't exist, so we don't append an _d to the filename and can use the default; the same is true for the log file. Using the root instance, we let Ogre 3D show the config dialog to the user in step 3. When the user cancels the dialog or anything goes wrong, we return -1 and with this the application closes. Otherwise, we created a new render window and a new scene manager in step 4. Using the scene manager, we created a camera, and with the camera we created the viewport; then, using the viewport, we calculated the aspect ratio for the camera. After creating all requirements, we tell the root instance to start rendering, so our result would be visible. Following is a diagram showing which object was needed to create the other: Adding resources We have now created our first Ogre 3D application, which doesn't need the ExampleApplication. But one important thing is missing: we haven't loaded and rendered a model yet. Time for action – loading the Sinbad mesh We have our application, now let's add a model. After setting the aspect ratio and before starting the rendering, add the zip archive containing the Sinbad model to our resources: Ogre::ResourceGroupManager::getSingleton().addResourceLocation("../../Media/packs/Sinbad.zip","Zip"); We don't want to index more resources at the moment, so index all added resources now: Ogre::ResourceGroupManager::getSingleton().initialiseAllResourceGroups(); Now create an instance of the Sinbad mesh and add it to the scene: Ogre::Entity* ent = sceneManager->createEntity("Sinbad.mesh");sceneManager->getRootSceneNode()->attachObject(ent); Compile and run the application; you should see Sinbad in the middle of the screen: What just happened? We used the ResourceGroupManager to index the zip archive containing the Sinbad mesh and texture files, and after this was done, we told it to load the data with the createEntity() call in step 3. Using resources.cfg Adding a new line of code for each zip archive or folder we want to load is a tedious task and we should try to avoid it. The ExampleApplication used a configuration file called resources.cfg in which each folder or zip archive was listed, and all the content was loaded using this file. Let's replicate this behavior. Time for action – using resources.cfg to load our models Using our previous application, we are now going to parse the resources.cfg. 
Replace the loading of the zip archive with an instance of a config file pointing at the resources_d.cfg: Ogre::ConfigFile cf; cf.load("resources_d.cfg"); First get the iterator, which goes over each section of the config file: Ogre::ConfigFile::SectionIterator sectionIter = cf.getSectionIterator(); Define three strings to save the data we are going to extract from the config file and iterate over each section: Ogre::String sectionName, typeName, dataname; while (sectionIter.hasMoreElements()){ Get the name of the section: sectionName = sectionIter.peekNextKey(); Get the settings contained in the section and, at the same time, advance the section iterator; also create an iterator for the settings itself: Ogre::ConfigFile::SettingsMultiMap *settings = sectionIter.getNext();Ogre::ConfigFile::SettingsMultiMap::iterator i; Iterate over each setting in the section: for (i = settings->begin(); i != settings->end(); ++i){ Use the iterator to get the name and the type of the resources: typeName = i->first; dataname = i->second; Use the resource name, type, and section name to add it to the resource index: Ogre::ResourceGroupManager::getSingleton().addResourceLocation(dataname, typeName, sectionName); Compile and run the application, and you should see the same scene as before.
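The listing above leaves the two loops open; to round the sketch off, they have to be closed and the added locations indexed once, just as in the previous section. The closing lines below are an assumption based on that earlier initialiseAllResourceGroups() call, not a quoted listing.

    } // closes the settings loop
} // closes the section loop

// Index everything we just added, then load the model as before.
Ogre::ResourceGroupManager::getSingleton().initialiseAllResourceGroups();
Ogre::Entity* ent = sceneManager->createEntity("Sinbad.mesh");
sceneManager->getRootSceneNode()->attachObject(ent);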
Ogre 3D: Fixed Function Pipeline and Shaders

Packt
25 Nov 2010
13 min read
  OGRE 3D 1.7 Beginner's Guide Create real time 3D applications using OGRE 3D from scratch Easy-to-follow introduction to OGRE 3D Create exciting 3D applications using OGRE 3D Create your own scenes and monsters, play with the lights and shadows, and learn to use plugins Get challenged to be creative and make fun and addictive games on your own A hands-on do-it-yourself approach with over 100 examples Introduction Fixed Function Pipeline is the rendering pipeline on the graphics card that produces those nice shiny pictures we love looking at. As the prefix Fixed suggests, there isn't a lot of freedom to manipulate the Fixed Function Pipeline for the developer. We can tweak some parameters using the material files, but nothing fancy. That's where shaders can help fill the gap. Shaders are small programs that can be loaded onto the graphics card and then function as a part of the rendering process. These shaders can be thought of as little programs written in a C-like language with a small, but powerful, set of functions. With shaders, we can almost completely control how our scene is rendered and also add a lot of new effects that weren't possible with only the Fixed Function Pipeline. Render Pipeline To understand shaders, we need to first understand how the rendering process works as a whole. When rendering, each vertex of our model is translated from local space into camera space, then each triangle gets rasterized. This means, the graphics card calculates how to represent the model in an image. These image parts are called fragments. Each fragment is then processed and manipulated. We could apply a specific part of a texture to this fragment to texture our model or we could simply assign it a color when rendering a model in only one color. After this processing, the graphics card tests if the fragment is covered by another fragment that is nearer to the camera or if it is the fragment nearest to the camera. If this is true, the fragment gets displayed on the screen. In newer hardware, this step can occur before the processing of the fragment. This can save a lot of computation time if most of the fragments won't be seen in the end result. The following is a very simplified graph showing the pipeline: With almost each new graphics card generation, new shader types were introduced. It began with vertex and pixel/fragment shaders. The task of the vertex shader is to transform the vertices into camera space, and if needed, modify them in any way, like when doing animations completely on the GPU. The pixel/fragment shader gets the rasterized fragments and can apply a texture to them or manipulate them in other ways, for example, for lighting models with an accuracy of a pixel. Time for action – our first shader application Let's write our first vertex and fragment shaders: In our application, we only need to change the used material. Change it to MyMaterial13. Also remove the second quad: manual->begin("MyMaterial13", RenderOperation::OT_TRIANGLE_LIST); Now we need to create this material in our material file. First, we are going to define the fragment shader. 
Ogre 3D needs five pieces of information about the shader: The name of the shader In which language it is written In which source file it is stored How the main function of this shader is called In what profile we want the shader to be compiled All this information should be in the material file: fragment_program MyFragmentShader1 cg { source Ogre3DBeginnersGuideShaders.cg entry_point MyFragmentShader1 profiles ps_1_1 arbfp1 } The vertex shader needs the same parameters, but we also have to define a parameter that is passed from Ogre 3D to our shader. This contains the matrix that we will use for transforming our quad into camera space: vertex_program MyVertexShader1 cg { source Ogre3DBeginnersGuideShaders.cg entry_point MyVertexShader1 profiles vs_1_1 arbvp1 default_params { param_named_auto worldViewMatrix worldviewproj_matrix } } The material itself just uses the vertex and fragment shader names to reference them: material MyMaterial13 { technique { pass { vertex_program_ref MyVertexShader1 { } fragment_program_ref MyFragmentShader1 { } } } } Now we need to write the shader itself. Create a file named Ogre3DBeginnersGuideShaders.cg in the media\materials\programs folder of your Ogre 3D SDK. Each shader looks like a function. One difference is that we can use the out keyword to mark a parameter as an outgoing parameter instead of the default incoming parameter. The out parameters are used by the rendering pipeline for the next rendering step. The out parameters of a vertex shader are processed and then passed into the pixel shader as in parameters. The out parameter from a pixel shader is used to create the final render result. Remember to use the correct name for the function; otherwise, Ogre 3D won't find it. Let's begin with the fragment shader because it's easier: void MyFragmentShader1(out float4 color: COLOR) The fragment shader will return the color blue for every pixel we render: { color = float4(0,0,1,0); } That's all for the fragment shader; now we come to the vertex shader. The vertex shader has three parameters—the position of the vertex, the translated position of the vertex as an out variable, and a uniform variable for the matrix we are using for the translation: void MyVertexShader1( float4 position : POSITION, out float4 oPosition : POSITION, uniform float4x4 worldViewMatrix) Inside the shader, we use the matrix and the incoming position to calculate the outgoing position: { oPosition = mul(worldViewMatrix, position); } Compile and run the application. You should see our quad, this time rendered in blue. What just happened? Quite a lot happened here; we will start with step 2. Here we defined the fragment shader we are going to use. Ogre 3D needs five pieces of information for a shader. We define a fragment shader with the keyword fragment_program, followed by the name we want the fragment program to have, then a space, and at the end, the language in which the shader will be written. As with ordinary programs, shader code used to be written in assembly; in the early days, programmers had to write shader code in assembly because there wasn't another language to be used. But just as with general programming languages, high-level languages soon appeared to ease the pain of writing shader code. At the moment, there are three different languages that shaders can be written in: HLSL, GLSL, and CG. The shader language HLSL is used by DirectX and GLSL is the language used by OpenGL. CG was developed by NVidia in cooperation with Microsoft and is the language we are going to use.
This language is compiled during the start up of our application to their respective assembly code. So shaders written in HLSL can only be used with DirectX and GLSL shaders with OpenGL. But CG can compile to DirectX and OpenGL shader assembly code; that's the reason why we are using it to be truly cross platform. That's two of the five pieces of information that Ogre 3D needs. The other three are given in the curly brackets. The syntax is like a property file—first the key and then the value. One key we use is source followed by the file where the shader is stored. We don't need to give the full path, just the filename will do, because Ogre 3D scans our directories and only needs the filename to find the file. Another key we are using is entry_point followed by the name of the function we are going to use for the shader. In the code file, we created a function called MyFragmentShader1 and we are giving Ogre 3D this name as the entry point for our fragment shader. This means, each time we need the fragment shader, this function is called. The function has only one parameter out float4 color : COLOR. The prefix out signals that this parameter is an out parameter, meaning we will write a value into it, which will be used by the render pipeline later on. The type of this parameter is called float4, which simply is an array of four float values. For colors, we can think of it as a tuple (r,g,b,a) where r stands for red, g for green, b for blue, and a for alpha: the typical tuple to description colors. After the name of the parameter, we got a : COLOR. In CG, this is called a semantic describing for what the parameter is used in the context of the render pipeline. The parameter :COLOR tells the render pipeline that this is a color. In combination with the out keyword and the fact that this is a fragment shader, the render pipeline can deduce that this is the color we want our fragment to have. The last piece of information we supply uses the keyword profiles with the values ps_1_1 and arbfp1. To understand this, we need to talk a bit about the history of shaders. With each generation of graphics cards, a new generation of shaders have been introduced. What started as a fairly simple C-like programming language without even IF conditions are now really complex and powerful programming languages. Right now, there are several different versions for shaders and each with a unique function set. Ogre 3D needs to know which of these versions we want to use. ps_1_1 means pixel shader version 1.1 and arbfp1 means fragment program version 1. We need both profiles because ps_1_1 is a DirectX specific function set and arbfp1 is a function subset for OpenGL. We say we are cross platform, but sometimes we need to define values for both platforms. All subsets can be found at http://www.ogre3d.org/docs/manual/manual_18.html. That's all needed to define the fragment shader in our material file. In step 3, we defined our vertex shader. This part is very similar to the fragment shader definition code; the main difference is the default_params block. This block defines parameters that are given to the shader during runtime. param_named_auto defines a parameter that is automatically passed to the shader by Ogre 3D. After this key, we need to give the parameter a name and after this, the value keyword we want it to have. We name the parameter worldViewMatrix; any other name would also work, and the value we want it to have has the key worldviewproj_matrix. 
This key tells Ogre 3D we want our parameter to have the value of the WorldViewProjection matrix. This matrix is used for transforming vertices from local into camera space. A list of all keyword values can be found at http://www.ogre3d.org/docs/manual/manual_23.html#SEC128. How we use these values will be seen shortly. Step 4 used the work we did before. As always, we defined our material with one technique and one pass; we didn't define a texture unit but used the keyword vertex_program_ref. After this keyword, we need to put the name of a vertex program we defined, in our case, this is MyVertexShader1. If we wanted, we could have put some more parameters into the definition, but we didn't need to, so we just opened and closed the block with curly brackets. The same is true for fragment_program_ref. Writing a shader Now that we have defined all necessary things in our material file, let's write the shader code itself. Step 6 defines the function head with the parameter we discussed before, so we won't go deeper here. Step 7 defines the function body; for this fragment shader, the body is extremely simple. We created a new float4 tuple (0,0,1,0), describes the color blue and assigns this color to our out parameter color. The effect is that everything that is rendered with this material will be blue. There isn't more to the fragment shader, so let's move on to the vertex shader. Step 8 defines the function header. The vertex shader has 3 parameters— two are marked as positions using CG semantics and the other parameter is a 4x4 matrix using float4 as values named worldViewMatrix. Before the parameter type definition, there is the keyword uniform. Each time our vertex shader is called, it gets a new vertex as the position parameter input, calculates the position of this new vertex, and saves it in the oPosition parameter. This means with each call, the parameter changes. This isn't true for the worldViewMatrix. The keyword uniform denotes parameters that are constant over one draw call. When we render our quad, the worldViewMatrix doesn't change while the rest of the parameters are different for each vertex processed by our vertex shader. Of course, in the next frame, the worldViewMatrix will probably have changed. Step 9 creates the body of the vertex shader. In the body, we multiply the vertex that we got with the world matrix to get the vertex translated into camera space. This translated vertex is saved in the out parameter to be processed by the rendering pipeline. We will look more closely into the render pipeline after we have experimented with shaders a bit more. Texturing with shaders We have painted our quad in blue, but we would like to use the previous texture. Time for action – using textures in shaders Create a new material named MyMaterial14. Also create two new shaders named MyFragmentShader2 and MyVertexShader2. Remember to copy the fragment and vertex program definitions in the material file. Add to the material file a texture unit with the rock texture: texture_unit { texture terr_rock6.jpg } We need to add two new parameters to our fragment shader. The first is a two tuple of floats for the texture coordinates. Therefore, we also use the semantic to mark the parameter as the first texture coordinates we are using. The other new parameter is of type sampler2D, which is another name for texture. Because the texture doesn't change on a per fragment basis, we mark it as uniform. 
This keyword indicates that the parameter value comes from outside the CG program and is set by the rendering environment, in our case, by Ogre 3D: void MyFragmentShader2(float2 uv : TEXCOORD0, out float4 color : COLOR, uniform sampler2D texture) In the fragment shader, replace the color assignment with the following line: color = tex2D(texture, uv); The vertex shader also needs some new parameters—one float2 for the incoming texture coordinates and one float2 as the outgoing texture coordinates. Both are our TEXCOORD0 because one is the incoming and the other is the outgoing TEXCOORD0: void MyVertexShader2( float4 position : POSITION, out float4 oPosition : POSITION, float2 uv : TEXCOORD0, out float2 oUv : TEXCOORD0, uniform float4x4 worldViewMatrix) In the body, we calculate the outgoing position of the vertex: oPosition = mul(worldViewMatrix, position); For the texture coordinates, we assign the incoming value to the outgoing value: oUv = uv; Remember to change the used material in the application code, and then compile and run it. You should see the quad with the rock texture.
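If you later want to feed your own values into a shader from C++ code rather than through default_params, Ogre exposes the pass parameters at runtime. The snippet below is only a sketch of that idea, not part of this example: the tintColour constant is hypothetical and would only work if you first added a matching uniform float4 parameter to the fragment shader.

// Sketch: fetch MyMaterial14 and set a hypothetical named constant on its fragment program.
Ogre::MaterialPtr mat = Ogre::MaterialManager::getSingleton().getByName("MyMaterial14");
Ogre::GpuProgramParametersSharedPtr fragParams =
    mat->getTechnique(0)->getPass(0)->getFragmentProgramParameters();
fragParams->setNamedConstant("tintColour", Ogre::Vector4(1.0f, 0.8f, 0.8f, 1.0f)); // "tintColour" is assumed, not defined above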
Materials with Ogre 3D

Packt
25 Nov 2010
7 min read
OGRE 3D 1.7 Beginner's Guide Create real time 3D applications using OGRE 3D from scratch Easy-to-follow introduction to OGRE 3D Create exciting 3D applications using OGRE 3D Create your own scenes and monsters, play with the lights and shadows, and learn to use plugins Get challenged to be creative and make fun and addictive games on your own A hands-on do-it-yourself approach with over 100 examples Creating a white quad We will use this to create a sample quad that we can experiment with. Time for action – creating the quad We will start with an empty application and insert the code for our quad into the createScene() function: Begin with creating the manual object: Ogre::ManualObject* manual = mSceneMgr->createManualObject("Quad"); manual->begin("BaseWhiteNoLighting", RenderOperation::OT_TRIANGLE_LIST); Create four points for our quad: manual->position(5.0, 0.0, 0.0); manual->textureCoord(0,1); manual->position(-5.0, 10.0, 0.0); manual->textureCoord(1,0); manual->position(-5.0, 0.0, 0.0); manual->textureCoord(1,1); manual->position(5.0, 10.0, 0.0);manual->textureCoord(0,0); Use indices to describe the quad: manual->index(0); manual->index(1); manual->index(2); manual->index(0); manual->index(3); manual->index(1); Finish the manual object and convert it to a mesh: manual->end(); manual->convertToMesh("Quad"); Create an instance of the entity and attach it to the scene using a scene node: Ogre::Entity * ent = mSceneMgr->createEntity("Quad"); Ogre::SceneNode* node = mSceneMgr->getRootSceneNode()->createChildSceneNode("Node1"); node->attachObject(ent); Compile and run the application. You should see a white quad. What just happened? We used our knowledge to create a quad and attach to it a material that simply renders everything in white. The next step is to create our own material. Creating our own material Always rendering everything in white isn't exactly exciting, so let's create our first material. Time for action – creating a material Now, we are going to create our own material using the white quad we created. Change the material name in the application from BaseWhiteNoLighting to MyMaterial1: manual->begin("MyMaterial1", RenderOperation::OT_TRIANGLE_LIST); Create a new file named Ogre3DBeginnersGuide.material in the media\materials\scripts folder of our Ogre3D SDK. Write the following code into the material file: material MyMaterial1 { technique { pass { texture_unit { texture leaf.png } } } } Compile and run the application. You should see a white quad with a plant drawn onto it. What just happened? We created our first material file. In Ogre 3D, materials can be defined in material files. To be able to find our material files, we need to put them in a directory listed in the resources.cfg, like the one we used. We also could give the path to the file directly in code using the ResourceManager. To use our material defined in the material file, we just had to use the name during the begin call of the manual object. The interesting part is the material file itself. Materials Each material starts with the keyword material, the name of the material, and then an open curly bracket. To end the material, use a closed curly bracket—this technique should be very familiar to you by now. Each material consists of one or more techniques; a technique describes a way to achieve the desired effect.
Because there are a lot of different graphic cards with different capabilities, we can define several techniques and Ogre 3D goes from top to bottom and selects the first technique that is supported by the user's graphic cards. Inside a technique, we can have several passes. A pass is a single rendering of your geometry. For most of the materials we are going to create, we only need one pass. However, some more complex materials might need two or three passes, so Ogre 3D enables us to define several passes per technique. In this pass, we only define a texture unit. A texture unit defines one texture and its properties. This time the only property we define is the texture to be used. We use leaf.png as the image used for our texture. This texture comes with the SDK and is in a folder that gets indexed by resources.cfg, so we can use it without any work from our side. Have a go hero – creating another material Create a new material called MyMaterial2 that uses Water02.jpg as an image. Texture coordinates take two There are different strategies used when texture coordinates are outside the 0 to 1 range. Now, let's create some materials to see them in action. Time for action – preparing our quad We are going to use the quad from the previous example with the leaf texture material: Change the texture coordinates of the quad from range 0 to 1 to 0 to 2. The quad code should then look like this: manual->position(5.0, 0.0, 0.0); manual->textureCoord(0,2); manual->position(-5.0, 10.0, 0.0); manual->textureCoord(2,0); manual->position(-5.0, 0.0, 0.0); manual->textureCoord(2,2); manual->position(5.0, 10.0, 0.0); manual->textureCoord(0,0); Now compile and run the application. Just as before, we will see a quad with a leaf texture, but this time we will see the texture four times. What just happened? We simply changed our quad to have texture coordinates that range from zero to two. This means that Ogre 3D needs to use one of its strategies to render texture coordinates that are larger than 1. The default mode is wrap. This means each value over 1 is wrapped to be between zero and one. The following is a diagram showing this effect and how the texture coordinates are wrapped. Outside the corners, we see the original texture coordinates and inside the corners, we see the value after the wrapping. Also for better understanding, we see the four texture repetitions with their implicit texture coordinates. We have seen how our texture gets wrapped using the default texture wrapping mode. Our plant texture shows the effect pretty well, but it doesn't show the usefulness of this technique. Let's use another texture to see the benefits of the wrapping mode. Using the wrapping mode with another texture Time for action – adding a rock texture For this example, we are going to use another texture. Otherwise, we wouldn't see the effect of this texture mode: Create a new material similar to the previous one, except change the used texture to: terr_rock6.jpg: material MyMaterial3 { technique { pass { texture_unit { texture terr_rock6.jpg } } } } Change the used material from MyMaterial1 to MyMaterial3: manual->begin("MyMaterial3", RenderOperation::OT_TRIANGLE_LIST) Compile and run the application. You should see a quad covered in a rock texture. What just happened? This time, the quad seems like it's covered in one single texture. We don't see any obvious repetitions like we did with the plant texture. The reason for this is that, like we already know, the texture wrapping mode repeats. 
The texture was created in such a way that at the left end of the texture, the texture is started again with its right side and the same is true for the lower end. This kind of texture is called seamless. The texture we used was prepared so that the left and right side fit perfectly together. The same goes for the upper and lower part of the texture. If this wasn't the case, we would see instances where the texture is repeated.
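The same material/technique/pass/texture_unit hierarchy described above can also be built from C++ at runtime instead of a .material script. What follows is only a sketch, not code from the book: it creates a material for the Have a go hero exercise (the name MyMaterial2 and the texture Water02.jpg come from that exercise), and it shows where the texture addressing (wrapping) mode is set in code.

// Sketch only: building a material in code instead of a .material script.
// Assumes Ogre.h is included and the resource groups have been initialised,
// so Water02.jpg can be found by the resource system.
Ogre::MaterialPtr mat = Ogre::MaterialManager::getSingleton().create(
    "MyMaterial2",
    Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME);

// A newly created material already contains one technique with one pass.
Ogre::Pass* pass = mat->getTechnique(0)->getPass(0);

// This corresponds to the texture_unit block of the script.
Ogre::TextureUnitState* tex = pass->createTextureUnitState("Water02.jpg");

// TAM_WRAP is the default; TAM_CLAMP or TAM_MIRROR would change how
// texture coordinates outside the 0 to 1 range are treated.
tex->setTextureAddressingMode(Ogre::TextureUnitState::TAM_WRAP);

The quad would then use this material exactly like the script-defined ones, for example manual->begin("MyMaterial2", RenderOperation::OT_TRIANGLE_LIST).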

Ogre 3D: Double Buffering

Packt
25 Nov 2010
5 min read
Introduction When a scene is rendered, it isn't normally rendered directly to the buffer that is displayed on the monitor. Instead, the scene is rendered to a second buffer and, when the rendering is finished, the buffers are swapped. This is done to prevent the artifacts that can appear if we render into the same buffer that is being displayed on the monitor. The FrameListener function frameRenderingQueued is called after the scene has been rendered to the back buffer—the buffer that isn't displayed at the moment. Before the buffers are swapped, the rendering result has already been created but not yet displayed. Directly after the frameRenderingQueued function is called, the buffers get swapped, and then the application gets the return value and closes itself. That's the reason why we see an image this time. Now, we will see what happens when frameRenderingQueued also returns true.

Time for action – returning true in the frameRenderingQueued function Once again, we modify the code to test the behavior of the FrameListener.

Change frameRenderingQueued to return true:

bool frameRenderingQueued (const Ogre::FrameEvent& evt)
{
  std::cout << "Frame queued" << std::endl;
  return true;
}

Compile and run the application. You should see Sinbad for a short period of time before the application closes, and the following three lines should be in the console output:

Frame started
Frame queued
Frame ended

What just happened? Now that the frameRenderingQueued handler returns true, it will let Ogre 3D continue to render until the frameEnded handler returns false. Like in the last example, the render buffers were swapped, so we saw the scene for a short period of time. After the frame was rendered, the frameEnded function returned false, which closes the application and, in this case, doesn't change anything from our perspective.

Time for action – returning true in the frameEnded function Now let's test the last of the three possibilities.

Change frameEnded to return true:

bool frameEnded (const Ogre::FrameEvent& evt)
{
  std::cout << "Frame ended" << std::endl;
  return true;
}

Compile and run the application. You should see the scene with Sinbad and an endless repetition of the following three lines:

Frame started
Frame queued
Frame ended

What just happened? Now all event handlers return true and, therefore, the application will never close itself; it would run forever unless we close the application ourselves.

Adding input We have an application that runs forever and has to be forced to close; that's not neat. Let's add input and the possibility to close the application by pressing Escape.

Time for action – adding input Now that we know how the FrameListener works, let's add some input.
We need to include the OIS header file to use OIS:

#include "OIS/OIS.h"

Remove all functions from the FrameListener and add two private members to store the InputManager and the Keyboard:

OIS::InputManager* _InputManager;
OIS::Keyboard* _Keyboard;

The FrameListener needs a pointer to the RenderWindow to initialize OIS, so we need a constructor that takes the window as a parameter:

MyFrameListener(Ogre::RenderWindow* win)
{

OIS will be initialized using a list of parameters; we also need the window handle in string form for the parameter list. Create the three variables needed to store the data:

OIS::ParamList parameters;
unsigned int windowHandle = 0;
std::ostringstream windowHandleString;

Get the handle of the RenderWindow and convert it into a string:

win->getCustomAttribute("WINDOW", &windowHandle);
windowHandleString << windowHandle;

Add the string containing the window handle to the parameter list using the key "WINDOW":

parameters.insert(std::make_pair("WINDOW", windowHandleString.str()));

Use the parameter list to create the InputManager:

_InputManager = OIS::InputManager::createInputSystem(parameters);

With the manager, create the keyboard:

_Keyboard = static_cast<OIS::Keyboard*>(_InputManager->createInputObject( OIS::OISKeyboard, false ));

What we created in the constructor, we need to destroy in the destructor:

~MyFrameListener()
{
  _InputManager->destroyInputObject(_Keyboard);
  OIS::InputManager::destroyInputSystem(_InputManager);
}

Create a new frameStarted function which captures the current state of the keyboard; if Escape is pressed, it returns false, otherwise it returns true:

bool frameStarted(const Ogre::FrameEvent& evt)
{
  _Keyboard->capture();
  if(_Keyboard->isKeyDown(OIS::KC_ESCAPE))
  {
    return false;
  }
  return true;
}

The last thing to do is to change the instantiation of the FrameListener to use a pointer to the render window in the startup function:

_listener = new MyFrameListener(window);
_root->addFrameListener(_listener);

Compile and run the application. You should see the scene and now be able to close it by pressing the Escape key.

What just happened? We added input processing capabilities to our FrameListener without using any example classes—only our own versions.
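Put together, the steps above produce a FrameListener whose only job is to pump OIS once per frame and stop the render loop when Escape goes down. The following is a consolidated sketch of those same snippets (not additional functionality from the book); the class and member names are the ones used in the steps above:

#include <Ogre.h>
#include <OIS/OIS.h>
#include <sstream>

class MyFrameListener : public Ogre::FrameListener
{
public:
    MyFrameListener(Ogre::RenderWindow* win)
    {
        // OIS needs the native window handle, passed as a string parameter
        OIS::ParamList parameters;
        unsigned int windowHandle = 0;
        std::ostringstream windowHandleString;
        win->getCustomAttribute("WINDOW", &windowHandle);
        windowHandleString << windowHandle;
        parameters.insert(std::make_pair("WINDOW", windowHandleString.str()));

        _InputManager = OIS::InputManager::createInputSystem(parameters);
        _Keyboard = static_cast<OIS::Keyboard*>(
            _InputManager->createInputObject(OIS::OISKeyboard, false));
    }

    ~MyFrameListener()
    {
        _InputManager->destroyInputObject(_Keyboard);
        OIS::InputManager::destroyInputSystem(_InputManager);
    }

    // Returning false from any handler ends Ogre's render loop
    bool frameStarted(const Ogre::FrameEvent& evt)
    {
        _Keyboard->capture();
        return !_Keyboard->isKeyDown(OIS::KC_ESCAPE);
    }

private:
    OIS::InputManager* _InputManager;
    OIS::Keyboard* _Keyboard;
};

The application class then only needs the two lines from the last step (_listener = new MyFrameListener(window); _root->addFrameListener(_listener);) to wire it up.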

Introduction to Blender 2.5 Color Grading - A Sequel

Packt
18 Nov 2010
2 min read
Colorizing with hue adjustment For a quick and dirty colorization of images, hue adjustment is your best friend. However, the danger with using hue adjustment is that you don't have much control over your tones compared to using color curves. To add the hue adjustment node in Blender's Node Editor Window, press SHIFT A, then choose Color, then finally Hue Saturation Value. This will add the Hue Saturation Value Node, which is used to adjust the image's tint (hue), saturation (grayscale versus vibrant colors), and value (brightness). Later on in this article, you'll see just how useful this node will be, but for now, let's stick with just the hue adjustment aspect of this node.

(Adding the Hue Saturation Value Node)

(Hue Saturation Value Node)

To colorize your images, simply slide the Hue slider. When using the hue slider, it's a good rule of thumb to keep the adjustments to a minimum, but for special purposes, you can set them the way you want to. Below are some examples of different values of the hue adjustment:

(Hue at 0.0)
(Hue at 0.209)
(Hue at 0.333)
(Hue at 0.431)
(Hue at 0.671)
(Hue at 0.853)
(Hue at 1.0)

Introduction to Blender 2.5: Color Grading

Packt
11 Nov 2010
11 min read
I would like to thank a few people who have made this all possible and without whose great aid I wouldn't be inspired to do this now: to Francois Tarlier (http://www.francois-tarlier.com) for patiently bearing with my questions, for sharing his thoughts on color grading with Blender, and for simply developing things to make these features exist in Blender—a clear example would be the addition of the Color Balance Node in Blender 2.5's Node Compositor (which I couldn't live without); to Matt Ebb (http://mke3.net/) for creating tools to make Blender's Compositor better and for supporting the efforts of making one; and lastly, to Stu Maschwitz (http://www.prolost.com) for his amazing tips and tricks on color grading.

Now, for some explanation. Color grading is usually defined as the process of altering and/or enhancing the colors of a motion picture or a still image. Traditionally, this happens by altering the subject photo-chemically (color timing) in a laboratory. But with modern tools and techniques, color grading can now be achieved digitally, with software such as Apple's Final Cut Pro, Adobe's After Effects, and Red Giant Software's Magic Bullet Looks. Luckily, the latest version of Blender has support for color grading through a plethora of nodes that process our input accordingly. However, I really want to stress that it often doesn't matter what tools you use; it all really depends on how crafty and artistic you are, regardless of whatever features your application has.

Color grading is also related to color correction in some ways. Strictly speaking, though, color correction deals mainly with a "correctional" aspect (white balancing, temperature changes, and so on) rather than the specific alterations that are achieved with color grading. With color grading, we can give a motion picture or still image a different mood or time of day, fake lens filters and distortions, highlight part of an image via bright spotting, remove red-eye effects, denoise an image, add glares, and a lot more. The things mentioned above can be grouped into four major categories, namely:

Color Balancing
Contrasting
Stylization
Material Variation Compensation

With Color Balancing, we are trying to fix tint errors and colorizations that occurred during hardware post-production, something that can happen when recording the data into, say, a camera's memory right after it has been internally processed. Sometimes this is also applied to fix white balance errors that were overlooked while shooting or recording. These are, however, not hard rules that are followed all the time. We can also use color balancing simply to correct the tones of an image or frame so that human skin looks more natural with respect to the scene it is located in.

Contrasting deals with how subjects are emphasized with respect to the scene they are located in. It could also refer to vibrance and high dynamic range imaging.
It could also be just a general method of “popping out” necessary details present in a frame. Stylization refers to effects that are added on top of the original footage/image after applying color correction, balancing, etc. Some examples would be: dreamy effect, day to night conversion, retro effect, sepia, and many more. And last but not the least is Material Variation Compensation. Often, as artists, there will come a point in time that after hours and hours of waiting for your renders to finish, you will realize at the last minute that something is just not right with how the materials are set up. If you're on a tight deadline, rerendering the entire sequence or frame is not an option. Thankfully, but not absolute all the time, we can compensate this by using color grading techniques to specifically tell Blender to adjust just a portion of an image that looks wrong and save us a ton of time if we were to rerender again. However, with the vast topics that Color Grading has, I can only assume that I will only be leading you to the introductory steps to get you started and for you to have a basis for your own experiments. To have a view of what we could possibly discuss, you can check some of the videos I've done here: http://vimeo.com/13262256 http://vimeo.com/13995077 And to those of you interested with some presets, Francois Tarlier has provided some in this page http://code.google.com/p/ft-projects/downloads/list. Outlining some of the aspects that we'll go through in Part 1 of this article, here's a list of the things we will be doing: Loading Image Files in the Compositor Loading Sequence Files in the Compositor Loading Movie Files in the Compositor Contrasting with Color Curves Colorizing with Color Curves Color Correcting with Color Curves And before we start, here are some prerequisites that you should have: Latest Blender 2.5 version (grab one from http://www.graphicall.org or from the latest svn updates) Movies, Footages, Animations (check http://www.stockfootageforfree.com for free stock footages) Still Images Intermediate Blender skill level Initialization With all the prerequisites met and before we get our hands dirty, there are some things we need to do. Fire up Blender 2.5 and you'll notice (by default) that Blender starts with a cool splash screen and with it on the upper right hand portion, you can see the Blender version number and the revision number. As much as possible, you would want to have a similar revision number as what we'll be using here, or better yet, a newer one. This will ensure that tools we'll be using are up to date, bug free, and possibly feature-pumped. Move the mouse over the image to enlarge it. (Blender 2.5 Initial Startup Screen) After we have ensured we have the right version (and revision number) of Blender, it's time to set up our scenes and screens accordingly to match our ideal workflow later on. Before starting any color grading session, make sure you have a clear plan of what you want to achieve and to do with your footages and images. This way you can eliminate the guessing part and save a lot of time in the process. Next step is to make sure we are in the proper screen for doing color grading. You'll see in the menu bar at the top that we are using the “Default” screen. This is useful for general-purpose Blender workflow like Modeling, Lighting, and Shading setup. To harness Blender's intuitive interface, we'll go ahead and change this screen to something more obvious and useful. 
(Screen Selection Menu) Click the button on the left of the screen selection menu and you'll see a list of screens to choose from. For this purpose, we'll choose “Compositing”. After enabling the screen, you'll notice that Blender's default layout has been changed to something more varied, but not very dramatic. (Choosing the Compositing Screen) The Compositing Screen will enable us to work seamlessly with color grading in that, by default, it has everything we need to start our session. By default, the compositing screen has the Node Editor on top, the UV/Image Editor on the lower left hand side, the 3D View on the lower right hand side. On the far right corner, equaling the same height as these previous three windows, is the Properties Window, and lastly (but not so obvious) is the Timeline Window which is just below the Properties Window as is situated on the far lower right corner of your screen. Since we won't be digging too much on Blender's 3D aspect here, we can go ahead and ignore the lower right view (3D View), or better yet, let's merge the UV/Image Editor to the 3D View such that the UV/Image Editor will encompass mostly the lower half of the screen (as seen below). You could also merge the Properties Window and the Timeline Window such that the only thing present on the far right hand side is the Properties Window. (Merging the Screen Windows) (Merged Screens) (Merged Screens) Under the Node Editor Window, click on and enable Use Nodes. This will tell Blender that we'll be using the node system in conjunction with the settings we'll be enabling later on. (Enabling “Use Nodes”) After clicking on Use Nodes, you'll notice nodes start appearing on the Node Editor Window, namely the Render Layer and Composite nodes. This is one good hint that Blender now recognizes the nodes as part of its rendering process. But that's not enough yet. Looking on the far right window (Properties Window), look for the Shading and Post Processing tabs under Render. If you can't see some parts, just scroll through until you do. (Locating the Shading and Post Processing Tabs) Under the Shading tab, disable all check boxes except for Texture. This will ensure that we won't get any funny output later on. It will also eliminate the error debugging process, if we do encounter some. (Disabling Shading Options) Next, let's proceed to the Post Processing tab and disable Sequencer. Then let's make sure that Compositing is enabled and checked. (Disabling Post Processing Options) Thats it for now, but we'll get back to the Properties Window whenever necessary. Let's move our attention back to the Node Editor Window above. Same keyboard shortcuts apply here compared to the 3D Viewport. To review, here are the shortcuts we might find helpful while working on the Node Editor Window:   Select Node Right Mouse Button Confirm Left Mouse Button Zoom In Mouse Wheel Up/CTRL + Mouse Wheel Drag Zoom Out Mouse Wheel Down/CTRL + Mouse Wheel Drag Pan Screen Middle Mouse Drag Move Node G Box Selection B Delete Node X Make Links F Cut Links CTRL Left Mouse Button Hide Node H Add Node SHIFT A Toggle Full Screen SHIFT SPACE Now, let's select the Render Layer Node and delete it. We won't be needing it now since we're not directly working with Blender's internal render layer system yet, since we'll be solely focusing our attention on uploading images and footages for grading work. Select the Composite Node and move it far right, just to get it out of view for now. 
(Deleting the Render Layer Node and Moving the Composite Node) Loading image files in the compositor Blender's Node Compositor can upload pretty much any image format you have. Most of the time, you might want only to work with JPG, PNG, TIFF, and EXR file formats. But choose what you prefer, just be aware though of the image format's compression features. For most of my compositing tasks, I commonly use PNG, it being a lossless type of image, meaning, even after processing it a few times, it retains its original quality and doesn't compress which results in odd results, like in a JPG file. However, if you really want to push your compositing project and use data such as z-buffer (depth), etc. you'll be good with EXR, which is one of the best out there, but it creates such huge file sizes depending on the settings you have. Play around and see which one is most comfortable with you. For ease, we'll load up JPG images for now. With the Node Editor Window active, left click somewhere on an empty space on the left side, imagine placing an imaginative cursor there with the left mouse button. This will tell Blender to place here the node we'll be adding. Next, press SHIFT A. This will bring up the add menu. Choose Input then click on Image. (Adding an Image Node) Most often, when you have the Composite Node selected before performing this action, Blender will automatically connect and link the newly added node to the composite node. If not, you can connect the Image Node's image output node to the Composite Node's image input node. (Image Node Connected to Composite Node) To load images into the Compositor, simply click on Open on the the Image Node and this will bring up a menu for you to browse on. Once you've chosen the desired image, you can double left click on the image or single click then click on Open. After that is done, you'll notice the Image Node's and the Composite Node's preview changed accordingly. (Image Loaded in the Compositor) This image is now ready for compositing work.

Blender 2.5: creating a UV texture

Packt
21 Oct 2010
4 min read
Before we can create a custom UV texture, we need to export our current UV map from Blender to a file that an image manipulation program, such as GIMP or Photoshop, can read. Exporting our UV map If we have GIMP downloaded, we can export our UV map from Blender to a format that GIMP can read. To do this, make sure we can view our UV map in the Image Editor. Then, go to UVs | Export UV Layout. Then save the file in a folder you can easily get to, naming it UV_layout or whatever you like. (Move the mouse over the image to enlarge.) Now it's time to open GIMP! Downloading GIMP Before we begin, we need to first get an image manipulation program. If you don't have one of the high-end programs, such as Photoshop, there still is hope. There's a wonderful free (and open source) program called GIMP, which parallels Photoshop in functionality. For the sake of creating our textures, we will be using GIMP, but feel free to use whatever you are personally most comfortable with. To download GIMP, visit the program's website at http://www.gimp.org and download the right version for your operating system. Mac Users will need to install X11 so GIMP will run. Consult your Mac OS installation guide for instructions on how to install. Windows users, you will need to install the GTK+ Runtime Environment to run GIMP—the download installer should warn you about this during installation. To install GTK+, visit http://www.gtk.org. Hello GIMP! When we open GIMP for the first time, we should have a 3-window layout, similar to the following screen: Create a new document by selecting File | New. You can also use the Ctrl+N keyboard shortcut. This should bring up a dialog box with a list of settings we can use to customize our new document. Because Blender exported our UV map as an SVG file, we can choose any size image we want, because we can scale the image to fit our document. SVG stands for Scalable Vector Graphic. Vector graphics are images defined by mathematically calculated paths, allowing them to be scaled infinitely without the pixilation caused when raster images are enlarged beyond a certain point. Change the Width and Height attributes to 2000 each. This will create a texture image 2000 pixels wide by 2000 pixels high. Click on OK to create our new document. Getting reference images Before we can create a UV texture for our wine bottle, which will primarily define the bottle's label, we need to know what is typically on a wine bottle's label. If you search the web for any wine bottle, you'll get a pretty good idea of what a wine bottle label looks like. However, for our purposes, we're going to use the following image: Notice how there's typically the name of the wine company, the type of wine, and the year it was made. We're going to use all of these in our own wine bottle label. Importing our UV map A nice thing about GIMP is that we can import images as layers into our current file. We're going to do just this with our UV map. Go to File | Open as Layers... to bring up the file selection dialog box. Navigate to the UV map we saved earlier and open it. Another dialog box will pop up—we can use this to tell GIMP how we want our SVG to appear in our document. Change the Width and Height attributes to match our working document—2000px by 2000px. Click on OK to confirm. Not every file type will bring up this dialog box—it's specific to SVG files only. We should now see our UV map in the document as a new layer. Before we continue, we should change the background color of our texture. 
Our label is going to be white, so we are going to need to distinguish our label from the rest of the wine bottle's material. With our background layer selected, fill the layer with a black color using the Fill tool. Next, we can create the background color of the label. Create a new layer by clicking on the New Layer button and name it label_background. Using the Marquee Selection tool, make a selection similar to the following image, then fill it, using the Fill tool, with white. This will be the background for our label—everything else we add will be made in relation to this layer. Keep the UV map layer on top as often as possible. This will help us keep a clear view of where our graphics are in relation to our UV map at all times.

Lighting an Outdoor Scene in Blender

Packt
19 Oct 2010
7 min read
Getting the right files Before we get started, we need a scene to work with. There are three scenes provided for our use—an outdoor scene, an indoor scene, and a hybrid scene that incorporates elements found both inside and outside. All of these files can be downloaded from http://www.cgshark.com/lighting-and-rendering/. The file we are going to use for this scene is called exterior.blend. This scene contains a tricycle, which we will light as if it were a product being promoted for a company. To download the files for this tutorial, visit http://www.cgshark.com/lighting-and-rendering/ and select exterior.blend.

Blender render settings In computer graphics, a two-dimensional image is created from three-dimensional data through a computational process known as rendering. It's important to understand how to customize Blender's internal renderer settings to produce a final result that's optimized for our project, be it a single image or a full-length film. With the settings Blender provides, we can set frame rates for animation, image quality, image resolution, and many other essential parts needed to produce that optimized final result.

The Scene menu We can access these render settings through the Scene menu. Here, we can adjust a myriad of settings. For the sake of these projects, we are only going to be concerned with which window Blender will render our image in, how render layers are set up, the image dimensions, and the output location and file type.

Render settings The first settings we see when we look at the Scene menu are the Render settings. Here, we can tell Blender to render the current frame or an animation using the render buttons. We can also choose what type of window we want Blender to render our image in using the Display options. The first option (and the one chosen by default) is Full Screen. This renders our image in a window that overlaps the three-dimensional window in our scene. To restore the three-dimensional view, select the Back to Previous button at the top of the window. The next option is the Image Editor, which Blender uses both for rendering and for UV editing. This is especially useful when using the Compositor, allowing us to see our result alongside our composite node setup. By default, Blender replaces the three-dimensional window with the Image Editor. The last option is the one Blender has used, by default, since day one—New Window. This means that Blender will render the image in a newly created window, separate from the rest of the program's interface. For the sake of these projects, we're going to keep this setting at the default—Full Screen.

Dimensions settings These are some of the most important settings when it comes to optimizing our project output. We can set the image size, frame rate, frame range, and aspect ratio of our render.
Luckily for us, Blender provides us with preset render settings, common in the film industry: HDTV 1080P HDTV 720P TV NTSC TV PAL TV PAL 16:9 Because we want to keep our render times relatively low for our projects, we're going to set our preset dimensions to TV NTSC, which results in an image 720 pixels wide by 480 pixels high. If you're interested in learning more about how the other formats behave, feel free to visit http://en.wikipedia.org/wiki/Display_resolution. Output settings These settings are an important factor when determining how we want our final product to be viewed. Blender provides us with numerous image and video types to choose from. When rendering an animation or image sequence, it's always easier to manually set the folder we want Blender to save to. We can tell Blender where we want it to save by establishing the path in the output settings. By default on Macintosh, Blender saves to the /tmp/ folder. Now that we understand how Blender's renderer works, we can start working with our scene! Establishing a workflow The key to constantly producing high-quality work is to establish a well-tested and efficient workflow. Everybody's workflow is different, but we are going to follow this series of steps: Evaluate what the scene we are lighting will require. Plan how we want to lay out the lamps in our scene. Set lamp positions, intensities, colors, and shadows, if applicable. Add materials and textures. Tweak until we're satisfied. Evaluating our scene Before we even begin to approach a computer, we need to think about our scene from a conceptual perspective. This is important, because knowing everything about our scene and the story that's taking place will help us produce a more realistic result. To help kick start this process, we can ask ourselves a series of questions that will get us thinking about what's happening in our scene. These questions can pertain to an entire array of possibilities and conditions, including: Weather What is the weather like on this particular day? What was it like the day before or the day after? Is it cloudy, sunny, or overcast? Did it rain or snow? Source of light Where is the light coming from? Is it in front of, to the side, or even behind the object? Remember, light is reflected and refracted until all energy is absorbed; this not only affects the color of the light, but the quality as well. Do we need to add additional light sources to simulate this effect? Scale of light sources What is the scale of our light sources in relation to our three-dimensional scene? Believe it or not, this factor carries a lot of weight when it comes to the quality of the final render. If any lights feel out of place, it could potentially affect the believability of the final product. The goal of these questions is to prove to ourselves that the scene we're lighting has the potential to exist in real life. It's much harder, if not impossible, to light a scene if we don't know how it could possibly act in the real world. Let's take a look at these questions. What is the weather like? In our case, we're not concerned with anything too challenging, weather wise. The goal of this tutorial is to depict our tricycle in an environment that reflects the effects of a sunny, cloudless day. To achieve this, we are going to use lights with blue and yellow hues for simulating the effect the sun and sky will have on our tricycle. What are the sources of our light and where are they coming from in relation to our scene? 
In a real situation, the sun would provide most of the light, so we'll need a key light that simulates how the sun works. In our case, we can use a Sun lamp. The key to positioning light sources within a three-dimensional scene is to find a compromise between achieving the desired mood of the image and effectively illuminating the object being presented. What is the scale of our light sources? The sun is rather large, but because of the nature of the Sun lamp in Blender, we don't have to worry about the scale of the lamp in our three-dimensional scene. Sometimes—more commonly when working with indoor scenes, such as the scene we'll approach later—certain light sources need to be of certain sizes in relation to our scene, otherwise the final result will feel unnatural. Although we will be using a realistic approach to materials, textures, and lighting, we are going to present this scene as a product visualization. This means that we won't explicitly show a ground plane, allowing the viewer to focus on the product being presented, in this case, our tricycle.

Using Animated Pieces in a Board-based Game with XNA 4.0

Packt
30 Sep 2010
14 min read
Animated pieces We will define three different types of animated pieces: rotating, falling, and fading. The animation for each of these types will be accomplished by altering the parameters of the SpriteBatch.Draw() call.

Classes for animated pieces In order to represent the three types of animated pieces, we will create three new classes. Each of these classes will inherit from the GamePiece class, meaning they will contain all of the methods and members of the GamePiece class, but add additional information to support the animation.

Child classes Child classes inherit all of their parent's members and methods. The RotatingPiece class can refer to the pieceType and suffix of the piece without recreating them within RotatingPiece itself. Additionally, child classes can extend the functionality of their base class, adding new methods and properties or overriding old ones. In fact, Game1 itself is a child of the Microsoft.Xna.Framework.Game class, which is why all of the methods we use (Update(), Draw(), LoadContent(), and so on) are declared as "override".

Let's begin by creating the class we will use for rotating pieces.

Time for action – rotating pieces

Open your existing Flood Control project in Visual C# Express if it is not already active.

Add a new class to the project called "RotatingPiece".

Add "using Microsoft.Xna.Framework;" to the using area at the top of the class.

Update the declaration of the class to read class RotatingPiece : GamePiece.

Add the following declarations to the RotatingPiece class:

public bool clockwise;
public static float rotationRate = (MathHelper.PiOver2 / 10);
private float rotationAmount = 0;
public int rotationTicksRemaining = 10;

Add a property to retrieve the current rotation amount:

public float RotationAmount
{
    get
    {
        if (clockwise)
            return rotationAmount;
        else
            return (MathHelper.Pi * 2) - rotationAmount;
    }
}

Add a constructor for the RotatingPiece class:

public RotatingPiece(string pieceType, bool clockwise)
    : base(pieceType)
{
    this.clockwise = clockwise;
}

Add a method to update the piece:

public void UpdatePiece()
{
    rotationAmount += rotationRate;
    rotationTicksRemaining = (int)MathHelper.Max(0, rotationTicksRemaining - 1);
}

What just happened? When we updated the class declaration, we added : GamePiece to the end of it. This indicates to Visual C# that the RotatingPiece class is a child of the GamePiece class. The clockwise variable stores a "true" value if the piece will be rotating clockwise and "false" if the rotation is counter-clockwise. When a game piece is rotated, it will turn a total of 90 degrees (or pi/2 radians) over 10 animation frames.
The MathHelper class provides a number of constants to represent commonly used numbers, with MathHelper.PiOver2 being equal to the number of radians in a 90 degree angle. We divide this constant by 10 and store the result as the rotationRate for use later. This number will be added to the rotationAmount float, which will be referenced when the animated piece is drawn. Working with radians All angular math is handled in radians from XNA's point of view. A complete (360 degree) circle contains 2*pi radians. In other words, one radian is equal to about 57.29 degrees. We tend to relate to circles more often in terms of degrees (a right angle being 90 degrees, for example), so if you prefer to work with degrees, you can use the MathHelper.ToRadians() method to convert your values when supplying them to XNA classes and methods. The final declaration, rotationTicksRemaining, is reduced by one each time the piece is updated. When this counter reaches zero, the piece has finished animating. When the piece is drawn, the RotationAmount property is referenced by a spriteBatch. Draw() call and returns either the rotationAmount property (in the case of a clockwise rotation) or 2*pi (a full circle) minus the rotationAmount if the rotation is counter-clockwise. The constructor in step 7 illustrates how the parameters passed to a constructor can be forwarded to the class' parent constructor via the :base specification. Since the GamePiece class has a constructor that accepts a piece type, we can pass that information along to its constructor while using the second parameter (clockwise) to update the clockwise member that does not exist in the GamePiece class. In this case, since both the clockwise member and the clockwise parameter have identical names, we specify this.clockwise to refer to the clockwise member of the RotatingPiece class. Simply clockwise in this scope refers only to the parameter passed to the constructor. this notationYou can see that it is perfectly valid C# code to have method parameter names that match the names of class variables, thus potentially hiding the class variables from being used in the method (since referring to the name inside the method will be assumed to refer to the parameter). To ensure that you can always access your class variables even when a parameter name conflicts, you can preface the variable name with this. when referring to the class variable. this. indicates to C# that the variable you want to use is part of the class, and not a local method parameter. Lastly, the UpdatePiece() method simply increases the rotationAmount member while decreasing the rotationTicksRemaining counter (using MathHelper.Max() to ensure that the value does not fall below zero). Time for action – falling pieces Add a new class to the Flood Control project called "FallingPiece". Add using Microsoft.Xna.Framework; to the using area at the top of the class. Update the declaration of the class to read class FallingPiece : GamePiece Add the following declarations to the FallingPiece class: public int VerticalOffset;public static int fallRate = 5; Add a constructor for the FallingPiece class: public FallingPiece(string pieceType, int verticalOffset) : base(pieceType){ VerticalOffset = verticalOffset;} Add a method to update the piece: public void UpdatePiece(){ VerticalOffset = (int)MathHelper.Max( 0, VerticalOffset - fallRate);} What just happened? Simpler than a RotatingPiece, a FallingPiece is also a child of the GamePiece class. 
A falling piece has an offset (how high above its final destination it is currently located) and a falling speed (the number of pixels it will move per update). As with a RotatingPiece, the constructor passes the pieceType parameter to its base class constructor and uses the verticalOffset parameter to set the VerticalOffset member. Note that the capitalization on these two items differs. Since VerticalOffset is declared as public and therefore capitalized by common C# convention, there is no need to use the "this" notation, since the two variables technically have different names. Lastly, the UpdatePiece() method subtracts fallRate from VerticalOffset, again using the MathHelper.Max() method to ensure the offset does not fall below zero. Time for action – fading pieces Add a new class to the Flood Control project called "FadingPiece". Add using Microsoft.Xna.Framework; to the using area at the top of the class. Update the declaration of the class to read class FadingPiece : GamePiece Add the following declarations to the FadingPiece class: public float alphaLevel = 1.0f;public static float alphaChangeRate = 0.02f; Add a constructor for the FadingPiece class: public FadingPiece(string pieceType, string suffix) : base(pieceType, suffix){} Add a method to update the piece: public void UpdatePiece(){alphaLevel = MathHelper.Max( 0, alphaLevel - alphaChangeRate);} What just happened? The simplest of our animated pieces, the FadingPiece only requires an alpha value (which always starts at 1.0f, or fully opaque) and a rate of change. The FadingPiece constructor simply passes the parameters along to the base constructor. When a FadingPiece is updated, alphaLevel is reduced by alphaChangeRate, making the piece more transparent. Managing animated pieces Now that we can create animated pieces, it will be the responsibility of the GameBoard class to keep track of them. In order to do that, we will define a Dictionary object for each type of piece. A Dictionary is a collection object similar to a List, except that instead of being organized by an index number, a dictionary consists of a set of key and value pairs. In an array or a List, you might access an entity by referencing its index as in dataValues[2] = 12; With a Dictionary, the index is replaced with your desired key type. Most commonly this will be a string value. 
This way, you can do something like fruitColors["Apple"] = "red";

Time for action – updating GameBoard to support animated pieces

In the declarations section of the GameBoard class, add three dictionaries:

public Dictionary<string, FallingPiece> fallingPieces =
    new Dictionary<string, FallingPiece>();
public Dictionary<string, RotatingPiece> rotatingPieces =
    new Dictionary<string, RotatingPiece>();
public Dictionary<string, FadingPiece> fadingPieces =
    new Dictionary<string, FadingPiece>();

Add methods to the GameBoard class to create new animated piece entries in the dictionaries:

public void AddFallingPiece(int X, int Y, string PieceName, int VerticalOffset)
{
    fallingPieces[X.ToString() + "_" + Y.ToString()] =
        new FallingPiece(PieceName, VerticalOffset);
}

public void AddRotatingPiece(int X, int Y, string PieceName, bool Clockwise)
{
    rotatingPieces[X.ToString() + "_" + Y.ToString()] =
        new RotatingPiece(PieceName, Clockwise);
}

public void AddFadingPiece(int X, int Y, string PieceName)
{
    fadingPieces[X.ToString() + "_" + Y.ToString()] =
        new FadingPiece(PieceName, "W");
}

Add the ArePiecesAnimating() method to the GameBoard class:

public bool ArePiecesAnimating()
{
    if ((fallingPieces.Count == 0) &&
        (rotatingPieces.Count == 0) &&
        (fadingPieces.Count == 0))
    {
        return false;
    }
    else
    {
        return true;
    }
}

Add the UpdateFadingPieces() method to the GameBoard class:

private void UpdateFadingPieces()
{
    Queue<string> RemoveKeys = new Queue<string>();
    foreach (string thisKey in fadingPieces.Keys)
    {
        fadingPieces[thisKey].UpdatePiece();
        if (fadingPieces[thisKey].alphaLevel == 0.0f)
            RemoveKeys.Enqueue(thisKey.ToString());
    }
    while (RemoveKeys.Count > 0)
        fadingPieces.Remove(RemoveKeys.Dequeue());
}

Add the UpdateFallingPieces() method to the GameBoard class:

private void UpdateFallingPieces()
{
    Queue<string> RemoveKeys = new Queue<string>();
    foreach (string thisKey in fallingPieces.Keys)
    {
        fallingPieces[thisKey].UpdatePiece();
        if (fallingPieces[thisKey].VerticalOffset == 0)
            RemoveKeys.Enqueue(thisKey.ToString());
    }
    while (RemoveKeys.Count > 0)
        fallingPieces.Remove(RemoveKeys.Dequeue());
}

Add the UpdateRotatingPieces() method to the GameBoard class:

private void UpdateRotatingPieces()
{
    Queue<string> RemoveKeys = new Queue<string>();
    foreach (string thisKey in rotatingPieces.Keys)
    {
        rotatingPieces[thisKey].UpdatePiece();
        if (rotatingPieces[thisKey].rotationTicksRemaining == 0)
            RemoveKeys.Enqueue(thisKey.ToString());
    }
    while (RemoveKeys.Count > 0)
        rotatingPieces.Remove(RemoveKeys.Dequeue());
}

Add the UpdateAnimatedPieces() method to the GameBoard class:

public void UpdateAnimatedPieces()
{
    if (fadingPieces.Count == 0)
    {
        UpdateFallingPieces();
        UpdateRotatingPieces();
    }
    else
    {
        UpdateFadingPieces();
    }
}

What just happened? After declaring the three Dictionary objects, we have three methods used by the GameBoard class to create entries in them when necessary. In each case, the key is built in the form "X_Y", so an animated piece in column 5 on row 4 will have a key of "5_4". Each of the three Add... methods simply passes its parameters along to the constructor of the appropriate piece type after determining the key to use. When we begin drawing the animated pieces, we want to be sure that animations finish playing before responding to other input or taking other game actions (like creating new pieces). The ArePiecesAnimating() method returns "true" if any of the Dictionary objects contain entries. If they do, we will not process any more input or fill empty holes on the game board until they have completed.
The UpdateAnimatedPieces() method will be called from the game's Update() method and is responsible for calling the three different update methods above (UpdateFadingPiece(), UpdateFallingPiece(), and UpdateRotatingPiece()) for any animated pieces currently on the board. The first line in each of these methods declares a Queue object called RemoveKeys. We will need this because C# does not allow you to modify a Dictionary (or List, or any of the similar "generic collection" objects) while a foreach loop is processing them. A Queue is yet another generic collection object that works like a line at the bank. People stand in a line and await their turn to be served. When a bank teller is available, the first person in the line transacts his/her business and leaves. The next person then steps forward. This type of processing is known as FIFO, or First In, First Out. Using the Enqueue() and Dequeue() methods of the Queue class, objects can be added to the Queue (Enqueue()) where they await processing. When we want to deal with an object, we Dequeue() the oldest object in the Queue and handle it. Dequeue() returns the first object waiting to be processed, which is the oldest object added to the Queue. Collection classes C# provides a number of different "collection" classes, such as the Dictionary, Queue, List, and Stack objects. Each of these objects provides different ways to organize and reference the data in them. For information on the various collection classes and when to use each type, see the following MSDN entry: http://msdn.microsoft.com/en-us/library/6tc79sx1(VS.80).aspx Each of the update methods loops through all of the keys in its own Dictionary and in turn calls the UpdatePiece() method for each key. Each piece is then checked to see if its animation has completed. If it has, its key is added to the RemoveKeys queue. After all of the pieces in the Dictionary have been processed, any keys that were added to RemoveKeys are then removed from the Dictionary, eliminating those animated pieces. If there are any FadingPieces currently active, those are the only animated pieces that UpdateAnimatedPieces() will update. When a row is completed, the scoring tiles fade out, the tiles above them fall into place, and new tiles fall in from above. We want all of the fading to finish before the other tiles start falling (or it would look strange as the new tiles pass through the fading old tiles). Fading pieces In the discussion of UpdateAnimatedPieces(), we stated that fading pieces are added to the board whenever the player completes a scoring chain. Each piece in the chain is replaced with a fading piece. Time for action – generating fading pieces In the Game1 class, modify the CheckScoringChain() method by adding the following call inside the foreach loop before the square is set to "Empty": gameBoard.AddFadingPiece( (int)ScoringSquare.X, (int)ScoringSquare.Y, gameBoard.GetSquare( (int)ScoringSquare.X, (int)ScoringSquare.Y)); What just happened? Adding fading pieces is simply a matter of getting the square (before it is replaced with an empty square) and adding it to the FadingPieces dictionary. We need to use the (int) typecasts because the ScoringSquare variable is a Vector2 value, which stores its X and Y components as floats. 
Falling pieces Falling pieces are added to the game board in two possible locations: From the FillFromAbove() method when a piece is being moved from one location on the board to another, and in the GenerateNewPieces() method, when a new piece falls in from the top of the game board. Time for action – generating falling pieces Modify the FillFromAbove() method of the GameBoard class by adding a call to generate falling pieces right before the rowLookup = -1; line: AddFallingPiece(x, y, GetSquare(x, y), GamePiece.PieceHeight *(y-rowLookup)); Update the GenerateNewPieces() method by adding the following call right after the RandomPiece(x,y) line: AddFallingPiece(x, y, GetSquare(x, y), GamePiece.PieceHeight * GameBoardHeight); What just happened? When FillFromAbove() moves a piece downward, we now create an entry in the FallingPieces dictionary that is equivalent to the newly moved piece. The vertical offset is set to the height of a piece (40 pixels) times the number of board squares the piece was moved. For example, if the empty space was at location 5,5 on the board, and the piece above it (5,4) is being moved down one block, the animated piece is created at 5,5 with an offset of 40 pixels (5-4 = 1, times 40). When new pieces are generated for the board, they are added with an offset equal to the height (in pixels) of the game board, determined by multiplying the GamePiece.PieceHeight value by the GameBoardHeight. This means they will always start above the playing area and fall into it.
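One detail from the update methods above is worth calling out: because the dictionaries cannot be modified while a foreach loop is iterating over them, finished keys are first collected in a Queue and only removed afterwards. That collect-then-remove idea is not specific to XNA or to this project. Purely as an illustration in C++ terms (none of this code belongs to the Flood Control project, and the simplified FallingPiece struct here is hypothetical), the same two-phase pattern with std::map and std::queue looks like this; C++ maps would also allow careful in-loop erasure via iterators, but the queue keeps the two phases as cleanly separated as the C# version does:

#include <map>
#include <queue>
#include <string>

// Hypothetical stand-in for the article's FallingPiece, with a fall rate of 5
struct FallingPiece {
    int verticalOffset = 0;
    void updatePiece() {
        verticalOffset -= 5;
        if (verticalOffset < 0) verticalOffset = 0;
    }
};

// Update every animated piece, then remove the ones that have finished
void updateFallingPieces(std::map<std::string, FallingPiece>& fallingPieces)
{
    std::queue<std::string> removeKeys;

    for (auto& entry : fallingPieces) {
        entry.second.updatePiece();
        if (entry.second.verticalOffset == 0)
            removeKeys.push(entry.first);   // collect finished keys first
    }

    while (!removeKeys.empty()) {           // remove them after the loop
        fallingPieces.erase(removeKeys.front());
        removeKeys.pop();
    }
}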