

Introduction to HLSL language

Packt
28 Jun 2013
8 min read
(For more resources related to this topic, see here.)

Distance/Height-based fog

Distance/height-based fog is an approximation of the fog you would normally see outdoors. Even on the clearest of days, you should be able to see some fog far in the distance. The main benefit of adding the fog effect is that it helps the viewer estimate how far away different elements in the scene are, based on the amount of fog covering them. In addition to the realism this effect adds, it has the further benefit of hiding the end of the visible range. Without fog to cover the far plane, it becomes easy to notice when distant scene elements are clipped by the camera's far plane. By tuning the height of the fog, you can also give your scene a darker atmosphere, as demonstrated by the following image:

This recipe will demonstrate how distance/height-based fog can be added to our deferred directional light calculation. See the How it works… section for details about adding the effect to other elements of your rendering code.

Getting ready

We will be passing additional fog-specific parameters to the directional light's pixel shader through a new constant buffer. The reason for separating the fog values into their own constant buffer is to allow the same parameters to be used by any other shader that takes fog into account. To create the new constant buffer, use the following buffer descriptor values:

Usage: D3D11_USAGE_DYNAMIC
BindFlags: D3D11_BIND_CONSTANT_BUFFER
CPUAccessFlags: D3D11_CPU_ACCESS_WRITE
ByteWidth: 48

The rest of the descriptor fields should be set to zero. All the fog calculations will be handled in the deferred directional light pixel shader.

How to do it... 
Our new fog constant buffer is declared in the pixel shader as follows:

```hlsl
cbuffer cbFog : register( b2 )
{
    float3 FogColor          : packoffset( c0 );
    float  FogStartDist      : packoffset( c0.w );
    float3 FogHighlightColor : packoffset( c1 );
    float  FogGlobalDensity  : packoffset( c1.w );
    float3 FogSunDir         : packoffset( c2 );
    float  FogHeightFalloff  : packoffset( c2.w );
}
```

The helper function used for calculating the fog is as follows:

```hlsl
float3 ApplyFog(float3 originalColor, float eyePosY, float3 eyeToPixel)
{
    float pixelDist = length( eyeToPixel );
    float3 eyeToPixelNorm = eyeToPixel / pixelDist;

    // Find the distance from the fog starting distance to the pixel
    float fogDist = max(pixelDist - FogStartDist, 0.0);

    // Distance-based fog intensity
    float fogHeightDensityAtViewer = exp( -FogHeightFalloff * eyePosY );
    float fogDistInt = fogDist * fogHeightDensityAtViewer;

    // Height-based fog intensity
    float eyeToPixelY = eyeToPixel.y * ( fogDist / pixelDist );
    float t = FogHeightFalloff * eyeToPixelY;
    const float thresholdT = 0.01;
    float fogHeightInt = abs( t ) > thresholdT ?
        ( 1.0 - exp( -t ) ) / t : 1.0;

    // Combine both factors to get the final factor
    float fogFinalFactor = exp( -FogGlobalDensity * fogDistInt * fogHeightInt );

    // Find the sun highlight and use it to blend the fog color
    float sunHighlightFactor = saturate(dot(eyeToPixelNorm, FogSunDir));
    sunHighlightFactor = pow(sunHighlightFactor, 8.0);
    float3 fogFinalColor = lerp(FogColor, FogHighlightColor, sunHighlightFactor);

    return lerp(fogFinalColor, originalColor, fogFinalFactor);
}
```

The ApplyFog function takes the color without fog, along with the camera height and the vector from the camera to the pixel the color belongs to, and returns the pixel color with fog applied. 
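To make the fog factor easier to inspect outside a shader, here is a small Python sketch of the same math. The function mirrors the distance and height terms of ApplyFog; the default parameter values are illustrative assumptions, not values from the recipe:

```python
import math

def apply_fog_factor(pixel_dist, eye_pos_y, eye_to_pixel_y,
                     fog_start_dist=10.0, fog_global_density=0.02,
                     fog_height_falloff=0.05):
    """Return the fog blend factor (1.0 = no fog, 0.0 = fully fogged),
    mirroring the distance and height terms of the HLSL ApplyFog helper.
    The default constants are illustrative assumptions."""
    fog_dist = max(pixel_dist - fog_start_dist, 0.0)

    # Distance-based intensity, scaled by the fog density at the viewer's height
    fog_dist_int = fog_dist * math.exp(-fog_height_falloff * eye_pos_y)

    # Height-based intensity; the (1 - e^-t) / t term accounts for the ray's
    # vertical travel through the fog, guarded near t = 0
    t = fog_height_falloff * eye_to_pixel_y * (fog_dist / pixel_dist) if pixel_dist > 0 else 0.0
    fog_height_int = (1.0 - math.exp(-t)) / t if abs(t) > 0.01 else 1.0

    return math.exp(-fog_global_density * fog_dist_int * fog_height_int)

# A pixel closer than the start distance is unaffected by fog
assert apply_fog_factor(5.0, 5.0, 0.0) == 1.0

# The factor decreases (more fog) as the pixel moves further away
near = apply_fog_factor(20.0, 5.0, -2.0)
far = apply_fog_factor(200.0, 5.0, -20.0)
print(near > far)  # True: the distant pixel receives more fog
```

A factor of 1.0 means no fog; the closer it drops to 0.0, the more the fog color replaces the lit color, matching the final lerp in the shader.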
To add fog to the deferred directional light, change the directional entry point to the following code:

```hlsl
float4 DirLightPS(VS_OUTPUT In) : SV_TARGET
{
    // Unpack the GBuffer
    float2 uv = In.Position.xy; //In.UV.xy;
    SURFACE_DATA gbd = UnpackGBuffer_Loc(int3(uv, 0));

    // Convert the data into the material structure
    Material mat;
    MaterialFromGBuffer(gbd, mat);

    // Reconstruct the world position
    float2 cpPos = In.UV.xy * float2(2.0, -2.0) - float2(1.0, -1.0);
    float3 position = CalcWorldPos(cpPos, gbd.LinearDepth);

    // Get the AO value
    float ao = AOTexture.Sample(LinearSampler, In.UV);

    // Calculate the light contribution
    float4 finalColor;
    finalColor.xyz = CalcAmbient(mat.normal, mat.diffuseColor.xyz) * ao;
    finalColor.xyz += CalcDirectional(position, mat);
    finalColor.w = 1.0;

    // Apply the fog to the final color
    float3 eyeToPixel = position - EyePosition;
    finalColor.xyz = ApplyFog(finalColor.xyz, EyePosition.y, eyeToPixel);

    return finalColor;
}
```

With this change, we apply the fog on top of the lit pixel's color and return it to the light accumulation buffer.

How it works…

Fog was probably the first volumetric effect implemented using a programmable pixel shader once those became commonly supported by GPUs. Originally, fog was implemented in hardware (the fixed pipeline) and only took distance into account. As GPUs became more powerful, the hardware distance-based fog was replaced by a programmable version that also takes into account things such as height and a sun highlight.

In reality, fog is just particles in the air that absorb and reflect light. A ray of light traveling from a position in the scene to the camera interacts with the fog particles and gets changed by those interactions. The further this ray has to travel before it reaches the camera, the larger the chance is that it will get either partially or fully absorbed. In addition to absorption, a ray traveling in a different direction may get reflected towards the camera and add to the intensity of the original ray. 
Based on the amount of particles in the air and the distance a ray has to travel, the light reaching our camera may contain more reflection and less of the original ray, which leads to the homogenous color we perceive as fog.

The parameters used in the fog calculation are:

FogColor: The fog's base color (this color's brightness should match the overall scene intensity so it won't get blown out by the bloom)
FogStartDist: The distance from the camera at which the fog starts to blend in
FogHighlightColor: The color used for highlighting pixels whose pixel-to-camera vector is close to parallel with the camera-to-sun vector
FogGlobalDensity: Density factor for the fog (the higher this value, the denser the fog)
FogSunDir: Normalized sun direction
FogHeightFalloff: Height falloff value (the higher this value, the lower the height at which the fog disappears)

When tuning the fog values, make sure the ambient colors match the fog. This type of fog is designed for outdoor environments, so you should probably disable it when lighting interiors.

You may have noticed that the fog requires the sun direction. We already store the inverted sun direction for the directional light calculation. You can remove that value from the directional light constant buffer and use the fog vector instead to avoid duplicating values.

This recipe implements the fog using the exponential function. The reason for choosing the exponential function is the asymptote on the negative side of its graph. Our fog implementation uses that asymptote to blend the fog in from the starting distance. As a reminder, the exponential function's graph is as follows:

The ApplyFog function starts off by finding the distance our ray traveled in the fog (fogDist). In order to take the fog's height into account, we also look for the lowest height between the camera and the pixel we apply the fog to, which we then use to find how far our ray travels vertically inside the fog (eyeToPixelY). 
Both travel distances end up in the exponent negated and scaled by the fog density. The reason we negate them is that it's more convenient to use the negative side of the exponential function's graph, which is limited to the range 0 to 1. At this point we have one factor that gets larger the further the ray travels vertically into the fog (fogHeightInt) and one that gets larger the further the ray travels through the fog in any direction (fogDistInt). By multiplying both factors with each other, we get the combined fog effect on the ray: the higher the product, the more the original ray got absorbed and the more light got reflected towards the camera in its direction. Feeding the negated, density-scaled product into the exponential gives the final blend factor (stored in fogFinalFactor), which equals 1 where the fog has no effect and approaches 0 as the fog thickens.

Before we can compute the final color value, we need to find the fog's color based on the camera and sun directions. We assume that the sun's intensity is high enough that more of its light rays get reflected towards the camera when the camera direction and sun direction are close to parallel. We use the dot product between the two to determine the angle and narrow the result by raising it to the power of 8 (the result is stored in sunHighlightFactor). This result is used to lerp between the fog's base color and the fog color highlighted by the sun. Finally, we use the fog factor to linearly interpolate between the fog color and the input color. The resulting color is then returned from the helper function and stored in the light accumulation buffer.

As you can see, the changes to the directional light entry point are very minor, as most of the work is handled inside the helper function ApplyFog. Adding the fog calculation to the rest of the deferred and forward light sources should be pretty straightforward. One thing to take into consideration is that fog also has to be applied to scene elements that don't get lit, such as the sky or emissive elements. 
Again, all you have to do is call ApplyFog to get the final color with the fog effect.

Summary

In this article, we learned how to apply a fog effect and give our scenes a more atmospheric look.


Introduction to 3D Design using AutoCAD

Packt
06 Jun 2013
10 min read
The Z coordinate

3D is all about the third, Z, coordinate. In 2D, we only care about the X and Y axes and never use the Z axis. Most of the time, we don't even use coordinates, just the top-twenty AutoCAD commands, the Ortho tool, and so on. But in 3D, the correct use of coordinates can substantially accelerate our work. We will first briefly cover how to introduce points by coordinates and how this extrapolates to the third dimension.

Absolute coordinates

The location of all entities in AutoCAD is related to a coordinate system. Any coordinate system is characterized by an origin and positive directions for the X and Y axes. The Z axis is obtained directly from the X and Y axes by the right-hand rule: if we rotate the right hand from the X axis to the Y axis, the thumb indicates the positive Z direction.

When prompting for a point, besides specifying it in the drawing area with a pointing device such as a mouse, we can enter coordinates using the keyboard. The format for absolute Cartesian coordinates, related to the origin, is defined by the values of the three orthogonal coordinates, namely X, Y, and Z, separated by commas:

X coordinate, Y coordinate, Z coordinate

The Z coordinate can be omitted. For instance, if we define a point with the absolute coordinates 30, 20, and 10, this means 30 in the X direction, 20 in the Y direction, and 10 in the Z direction.

Relative coordinates

Frequently, we want to specify a point in coordinates that are related to the previous point. The format for relative Cartesian coordinates is defined by the at symbol (@), followed by increment values in the three directions, separated by commas:

@X increment, Y increment, Z increment

Of course, one or more increments can be 0. The Z increment can be omitted. 
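Both entry formats can be sketched numerically (plain Python, not AutoCAD; the resolve helper is our own illustration):

```python
def resolve(previous, entry):
    """Resolve a typed coordinate entry to an absolute (X, Y, Z) point.
    Entries starting with '@' are relative to the previous point;
    an omitted Z coordinate defaults to 0."""
    relative = entry.startswith("@")
    parts = [float(v) for v in entry.lstrip("@").split(",")]
    parts += [0.0] * (3 - len(parts))          # fill in an omitted Z
    if relative:
        return tuple(p + d for p, d in zip(previous, parts))
    return tuple(parts)

p1 = resolve((0, 0, 0), "30,20,10")   # absolute -> (30.0, 20.0, 10.0)
p2 = resolve(p1, "@0,20,10")          # relative -> (30.0, 40.0, 20.0)
print(p1, p2)
```

The same helper handles both formats: an absolute entry ignores the previous point, while a relative one adds each increment to it.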
For instance, if we define a point with the relative coordinates @0,20,10, this means that in relation to the previous point, 0 is in the X, 20 is in the Y, and 10 is in the Z direction.

Point filters

When we want to specify a point but decompose it step by step, that is, separate its coordinates based on different locations, we may use filters. When prompting for a point, we access filters by digitizing .X, .Y, or .Z for individual coordinates, or .XY, .YZ, or .ZX for pairs of coordinates. Another way is from the osnap menu (Ctrl + right-click), and then Point Filters. AutoCAD requests the remaining coordinates until the point definition is complete.

Imagine that we want to specify a point, for instance, the center of a circle, where its X coordinate is given by the midpoint of an edge, its Y coordinate by the midpoint of another edge, and finally its Z coordinate by any point on a top face. Assuming that the Midpoint osnap is predefined, the dialog should be:

Command: CIRCLE
Specify center point for circle or [3P/2P/Ttr (tan tan radius)]: .X
of midpoint of edge
(need YZ): .Y
of midpoint of edge
(need Z): any point on top face
Specify radius of circle or [Diameter]: value

Workspaces

AutoCAD comes with several workspaces. It's up to each of us to choose a workspace based on a classic environment or the ribbon. To change workspaces, we can pick the workspace switching button on the status bar:

There are other processes for accessing this command, such as the workspaces list on the Quick Access Toolbar (title bar), the Workspaces toolbar, or digitizing WSCURRENT, but the access shown is consistent among all versions and always available.

Classic environment

The classic environment is based on the toolbars and the menu bar and doesn't use the ribbon. AutoCAD comes with the AutoCAD Classic workspace, but it's very simple to adapt it and display the toolbars suitable for 3D. The advantages of using this environment are speed and consistency. 
To show another toolbar, we right-click over any toolbar and choose it. Typically, besides Standard and Layers, we want to have the following toolbars visible: Layers II, Modeling, Solid Editing, and Render.

Ribbon environment

Since the 2009 version, AutoCAD has also allowed for a ribbon-based environment. Normally, this environment uses neither toolbars nor the menu bar. AutoCAD comes with two ribbon workspaces, namely 3D Basics and 3D Modeling, the first being less useful than the second. The advantages are that we have consistency with other software, commands are divided into panels and tabs, the ribbon can be collapsed to a single line, and it includes some commands not available on the toolbars. The disadvantage is that, as it's a dynamic environment, we frequently have to activate other panels to access commands, and some important commands and functions are not always visible.

When modeling in 3D, the visibility of the layers list is almost mandatory. We may add this list to the Quick Access Toolbar by applying the CUI command or by right-clicking above the command icon we want to add. Another way is to pull the Layers panel into the drawing area, thus making it permanently visible.

Layers, transparency, and other properties

When we are modeling in AutoCAD, the ability to control object properties is essential. After some hours spent on a new 3D model, we can have hundreds of objects that overlap and obscure the model's visibility. Here are the most important properties.

Layers

If a correct application of layers is fundamental in 2D, in 3D it assumes extreme importance. Each type of 3D object should be on a proper layer, thus allowing us to control its properties:

Name: A good piece of advice is to not mix 2D and 3D objects on the same layers. Layers for 3D objects must be easily identified, for instance, by adding a 3D prefix.

Freeze/Thaw: In 3D, the density of screen information can be huge, so freezing and unfreezing layers is a permanent process. 
It's better to freeze layers than to turn them off, because objects on frozen layers are not processed (for instance, when regenerating or counting for ZOOM Extents), thus accelerating the 3D process.

Lock/Unlock: It's quite annoying to notice at an advanced phase of our project that our walls moved and caused several errors. If we need that information visible, the best way to avoid these errors is to lock the layers.

Color: A good, logical color palette assigned to our layers can improve our understanding while modeling.

Transparency: If we want to see through walls or other objects during the creation process, we may give the layers a transparency value between 0 and 90 percent.

Last but not least, the best and easiest way to assign rendering materials to objects is by layer, so that is another good reason to apply a correct and detailed layer scheme.

Transparency

Transparency, as a property of layers or objects, has been available since version 2011. Besides its utility for layers, it can also be applied directly to objects. For instance, we may have a layer called 3D-SLAB and just want to see through the upper slab. We can change an object's transparency with PROPERTIES (Ctrl + 1). To see transparencies in the drawing area, the TPY button (on the status bar) must be on.

Visibility

Another recent improvement in AutoCAD is the ability to hide or isolate objects without changing layer properties. We select the objects to hide or isolate and right-click on them. On the cursor menu, we choose Isolate and then:

Isolate Objects: All objects not selected become invisible, using the ISOLATEOBJECTS command

Hide Objects: The selected objects become invisible, using the HIDEOBJECTS command

End Object Isolation: All objects are turned on, using the UNISOLATEOBJECTS command

There is a small lamp icon on the status bar, the second icon from the right. 
If the lamp is red, it means that there are hidden objects; if it is yellow, all objects are visible. Shown in the following image is the application of transparency and hidden objects to the left wall and the upper slab:

Auxiliary tools

AutoCAD software is very precise, and the correct application of these auxiliary tools is a key factor in good projects. All users should be familiar with at least the Ortho and Osnap tools. What follows is the application of auxiliary tools in 3D projects, complemented by the first exercise.

OSNAP, ORTHO, POLAR, and OTRACK auxiliary tools

Let's start with object snapping, probably the most frequently used tool for precision. Every time AutoCAD prompts for a point, we can access predefined object snaps (also known as osnaps) if the OSNAP button on the status bar is on. To toggle it, we only have to click on the OSNAP button or press F3. If we want an individual osnap, we can, among other ways, digitize the first three letters (for instance, MID for midpoint) or use the osnap menu (Ctrl + right-click).

Osnaps work everywhere in 3D (which is great). Especially useful is the Extension osnap mode, which allows us to specify a point at a distance in the direction of any edge. But what if we want to specify the projection of 3D points onto the working XY plane? Easy! If the OSNAPZ variable is set to 1, all specified points are projected onto the plane. This variable is not saved, and 0 is assigned as the initial value.

More great news is that ORTHO (F8) and POLAR (F10) work in 3D. That is, we can specify points by directing the cursor along the Z axis and assigning distances. Lots of @ spared, no? OTRACK (F11), used to derive points from predefined osnaps, also works along the Z-axis direction. We pause over an osnap and can assign a distance along a specific direction or just obtain a crossing:

3DOsnap tool

Starting with version 2011, AutoCAD allows you to specify 3D object snaps. 
Here too, we can access predefined 3D osnaps by keeping 3DOSNAP (F4) on, or we can access them individually. There are osnaps for vertices, midpoints on edges, centers of faces, knots (spline points), points perpendicular to faces, and points nearest to faces.

Exercise 1.1

Using the LINE command, coordinates, and auxiliary tools, let's create a cabinet skeleton. All dimensions are in meters, and we start from the lower-left corner. The ORTHO or POLAR button must be on, along with the OTRACK and OSNAP buttons with Endpoint and Midpoint predefined.

As in 2D, rotating the mouse wheel forward zooms in and rotating it backward zooms out, both related to the cursor position. To orbit around the model, we hold down Shift and the wheel simultaneously. The cursor changes to two small ellipses, and then we drag the mouse to orbit around the model. Visualization is the subject of the next article.

We run the LINE command, start at any point, block the X direction (POLAR or ORTHO), and assign the distance:

Command: LINE
Specify first point: any point
Specify next point or [Undo]: 0.6

We block the Z direction and assign the distance:

Specify next point or [Undo]: 0.7

The best way to specify this point is with relative coordinates:

Specify next point or [Close/Undo]: @-0.3,0,0.4

We block the Z direction and assign the distance:

Specify next point or [Close/Undo]: 0.7

The best way to close the left polygon is to pause over the first point, move the cursor up to find the crossing with Polar or Ortho coming from the last point, and then apply the Close option:

Specify next point or [Close/Undo]: point with OTRACK
Specify next point or [Close/Undo]: C

We copy all lines 1 meter in the Y direction:

Command: COPY
Select objects: Specify opposite corner: 6 found
Select objects: Enter
Current settings: Copy mode = Multiple
Specify base point or [Displacement/mOde] <Displacement>: point
Specify second point or [Array] <use first point as displacement>: 1
Specify second point or [Array/Exit/Undo] <Exit>: Enter

We complete the cabinet skeleton by drawing lines between endpoints:

Command: LINE
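Assuming both 0.7 segments run upward along Z and the tracked crossing sits directly above the first point (our reading of the exercise, not stated explicitly), the typed entries trace a closed profile whose arithmetic can be checked in plain Python:

```python
def walk(start, moves):
    """Accumulate relative (dx, dy, dz) moves into absolute vertices."""
    verts = [start]
    for dx, dy, dz in moves:
        x, y, z = verts[-1]
        verts.append((x + dx, y + dy, z + dz))
    return verts

# Typed entries from Exercise 1.1 (directions are our assumption)
profile = walk((0.0, 0.0, 0.0), [
    (0.6, 0.0, 0.0),    # 0.6 along X
    (0.0, 0.0, 0.7),    # 0.7 along Z
    (-0.3, 0.0, 0.4),   # @-0.3,0,0.4
    (0.0, 0.0, 0.7),    # 0.7 along Z
    (-0.3, 0.0, 0.0),   # OTRACK crossing above the start point
])
x, y, z = profile[-1]
print(round(x, 6), round(z, 6))  # 0.0 1.8 -- Close then drops back to the start
```

Under these assumptions, the cabinet side is 0.6 m deep and 1.8 m tall, with a 0.3 m setback starting 0.7 m up.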


Playing with Particles

Packt
06 Jun 2013
19 min read
Introducing particle effects

Particle effects are the decorative flourishes used in games to represent dynamic and complex phenomena, such as fire, smoke, and rain. Creating a particle effect requires three elements: a system, emitters, and the particles themselves.

Understanding particle systems

Particle systems are the universe in which the particles and emitters live. Much like the universe, we cannot define the size, but we can define a point of origin relative to which all emitters and particles are placed. We can also have multiple particle systems in existence at any given time, and they can be set to draw their particles at different depths. While we can have as many particle systems as we want, it is best to have as few as possible in order to prevent possible memory leaks. The reason for this is that once a particle system is created, it will remain in existence forever unless it is manually destroyed. Destroying the instance that spawned it or changing rooms will not remove the system, so make sure it is removed when it is no longer needed. Destroying a particle system removes all the emitters and particles in that system along with it.

Utilizing particle emitters

Particle emitters are defined areas within a system from which particles will spawn. There are two types of emitters to choose from: Burst emitters, which spawn particles a single time, and Stream emitters, which spew particles continuously over time. We can define the size and shape of each emitter's region in space, as well as how the particles should be distributed within the region.

Image

When defining the region in space, there are four Shape options: DIAMOND, ELLIPSE, LINE, and RECTANGLE. An example of each can be seen in the preceding diagram, all using exactly the same dimensions, amount of particles, and distribution. 
While there is no functional difference between using any one of these shapes, the effect itself can benefit from a properly chosen shape. For example, only a LINE can make an effect appear to be angled at 30 degrees.

Image

The distribution of the particles also affects how the particles are expelled from the emitter. As can be seen in the preceding diagram, there are three different distributions. LINEAR spawns particles with an equal random distribution throughout the emitter region. GAUSSIAN spawns particles more towards the center of the region. INVGAUSSIAN is the inverse of GAUSSIAN, wherein the particles spawn closer to the edges of the emitter.

Applying particles

Particles are the graphic resources that are spawned from the emitters. There are two types of particles that can be created: Shapes and Sprites. Shapes are a collection of 64 x 64 pixel sprites that come built in with GameMaker: Studio for use as particles. The shapes, as seen in the next diagram, are suitable for the majority of the most common effects, such as fireworks and flames. When we want to create something more specialized for a game, we can use any Sprite in the Resource tree.

Image

There are a lot of things we can do with particles by adjusting the many attributes available. We can define ranges for how long a particle lives, the color it should be, and how it moves. We can even spawn more particles at the point of death of each particle. There are, however, some things that we cannot do. In order to keep the graphics processing costs low, there is no ability to manipulate individual particles within an effect. Also, particles cannot interact with objects in any way, so there is no way to know if a particle has collided with an instance in the world. If we need this kind of control, we need to build objects instead.

Designing the look of a particle effect is generally a trial and error process that can take a very long time. 
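Part of that tuning is choosing an emitter distribution; the three options described above can be sketched outside GameMaker (plain Python; the sampling below is our approximation of the behavior, not the engine's actual code):

```python
import random

def sample(lo, hi, distribution, rng=random):
    """Pick a coordinate in [lo, hi] with one of the three particle
    distributions: linear (uniform), gaussian (biased toward the
    center), or invgaussian (biased toward the edges)."""
    if distribution == "linear":
        return rng.uniform(lo, hi)
    mid, half = (lo + hi) / 2.0, (hi - lo) / 2.0
    # Clamped normal centered on the region's midpoint
    g = min(max(rng.gauss(mid, half / 3.0), lo), hi)
    if distribution == "gaussian":
        return g
    # Inverse: reflect the center-biased sample toward the edges
    return mid + (half - abs(g - mid)) * (1 if g >= mid else -1)

random.seed(1)
xs = [sample(0.0, 64.0, "invgaussian") for _ in range(1000)]
print(min(xs) >= 0.0 and max(xs) <= 64.0)  # all samples stay inside the region
```

The clamped-normal and reflection tricks are just one way to get center- and edge-biased samples; GameMaker's exact curves may differ.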
To speed things up, try using one of the many particle effect generators available on the Internet, such as Particle Designer 2.5 by Alert Games, found here: http://alertgames.net/index.php?page=s/pd2.

HTML5 limitations

Using particle effects can really improve the visual quality of a game, but when developing a game intended to be played in a browser, we need to be careful. Before implementing a particle effect, it is important to understand the potential problems we may encounter. The biggest issue surrounding particles is that, in order for them to be rendered smoothly without any lag, they need to be rendered by the graphics processor instead of the main CPU. Most browsers allow this through a JavaScript API called WebGL. It is not, however, an HTML5 standard, and Microsoft has stated that it has no plans for Internet Explorer to support it for the foreseeable future. This means a potentially significant portion of the game's audience could suffer poor gameplay if particles are used. Additionally, even with WebGL enabled, the functionality for particles to have additive blending and advanced color blending cannot be used, as none of the browsers currently support this feature. Now that we know this, we are ready to make some effects!

Adding particle effects to the game

We are going to build a few different particle effects to demonstrate the various ways effects can be implemented in a game, and to look into some of the issues that might arise. To keep things straightforward, all of the effects we create will be part of a single, global particle system. We will use both types of emitters, and utilize both shape- and sprite-based particles. We will start with a Dust Cloud that will be seen any time a Pillar is broken or destroyed. We will then add a system to create a unique Shrapnel effect for each Pillar type. Finally, we will create some fire and smoke effects for the TNT explosion to demonstrate moving emitters. 
Creating a Dust Cloud

The first effect we are going to create is a simple Dust Cloud. It will burst outwards upon the destruction of each Pillar and dissolve away over time. As this effect will be used in every level of the game, we will make all of its elements global, so they only need to be declared once.

Open the Tower Toppling project we were previously working on, if it is not already open.

We need to make sure that WebGL is enabled when we build the game. Navigate to Resources | Change Global Game Settings and click on the HTML5 tab. On the left-hand side, click on the tab for Graphics.

As seen in the following screenshot, there are three options under WebGL in Options. If WebGL is Disabled, the game will not be able to use the GPU, and all browsers will suffer from any potential lag. If WebGL is Required, any browser that does not have this capability will be prevented from running the game. The final option is Auto-Detect, which will use WebGL if the browser supports it but will allow all browsers to play the game no matter what. Select Auto-Detect and then click on OK.

Image

Now that we have WebGL activated, we can build our effects. We will start by defining our particle system as a global variable by creating a new script called scr_Global_Particles.

code

The first effect we are going to make is the Dust Cloud, which will be attached to the Pillars. For this we only need a single emitter, which we will move to the appropriate position when it is needed. Create a global variable for the emitter and add it to the particle system with the following code at the end of the script:

code

For this particle, we are going to use one of the built-in shapes, pt_shape_explosion, which looks like a little thick cloud of dust. Add the following code to the end of the script:

code

Once again we have made this a global variable, so that we have to create this Dust Cloud particle only once. We have declared only the shape attribute of this particle at this time. 
We will add more to this later, once we can see what the effect looks like in the game. We need to initialize the particle system with the other global variables. Reopen scr_Global_GameStart and call the particles script.

code

With everything initialized, we can now create a new script, scr_Particles_DustCloud, which we can use to set the region of the emitter and have it activate a burst of particles.

code

We start by defining a small area for the emitter based on the position of the instance that calls this script. The region itself will be circular, with a Gaussian distribution so that the particles shoot out from the center. We then activate a single burst of 10 dust particles from the emitter.

All we need to do now is execute this script upon the destruction of a Pillar. Reopen scr_Pillar_Destroy and insert the following line of code on the line before the instance is destroyed:

code

We need to add this effect to the breaking of the Pillars as well. Reopen scr_Pillar_BreakApart and insert the same code in the same spot.

Save the game and then play it. When the glass Pillars are destroyed, we should see thick white clouds appearing, as shown in the following screenshot:

Image

The particles are boring and static at this point, because we have not told the particles to do anything other than to look like the shape of a cloud. Let's fix this by adding some attributes to the particle. Reopen scr_Global_Particles and add the following code at the end of the script:

code

The first attribute we add is how long we want the particle to live, a range between 15 and 30 steps, or at the speed of our rooms, half a second to a whole second. Next, we want the particles to explode outwards, so we set the angle and add some velocity. Both functions that we are using have similar parameters. The first value is the particle type this is to be applied to. The next two parameters are the minimum and maximum values from which a number will be randomly chosen. 
The fourth parameter sets an incremental value every step. Finally, the last parameter is a wiggle value that will randomly be applied throughout the particle's lifetime. For the Dust Cloud, we are setting the direction to be any angle and the speed to be fairly slow, ranging only a few pixels per step. We also want to change the size of the particles and their transparency, so that the dust appears to dissipate. Save the game and run it again. This time the effect appears much more natural, with the clouds exploding outwards, growing slightly larger, and fading out. It should look something like the next screenshot. The Dust Cloud is now complete. Image Adding in Shrapnel The Dust Cloud effect helps the Pillar destruction appear more believable, but it lacks the bigger chunks of material one would expect to see. We want some Shrapnel of various shapes and sizes to explode outwards for each of the different types of Pillars. We will start with the Glass particles. Create a new Sprite, spr_Particle_Glass, and with Remove Background checked, load Chapter 8/Sprites/Particle_Glass.gif. This sprite is not meant to be animated, though it does have several frames within it. Each frame represents a different shape of particle that will be randomly chosen when the particle is spawned. We will want the particles to rotate as they move outwards, so we need to center the origin. Click on OK. Reopen scr_Global_Particles and initialize the Glass particle at the end of the script. code Once we have created the global variable and the particle, we set the particle type to be a Sprite. When assigning Sprites there are a few extra parameters beyond which resources should be used. The third and fourth parameters are for whether it should be animated, and if so, whether the animation should stretch for the duration of the particle's life. In our case we are not using animation, so it has been set to false.
The last parameter is for whether we want it to choose a random subimage of the Sprite, which is what we do want it to do. We also need to add some attributes to this particle for life and movement. Add the following code at the end of the script: code When compared with the Dust Cloud, this particle will have a shorter lifespan but will move at a much higher velocity. This will make the effect more intense while keeping the general area small. We have also added some rotational movement through part_type_orientation. The particles can be set to any angle and will rotate 20 degrees per frame with a variance of up to four degrees. This will give us a nice variety in the spin of each particle. There is one additional parameter for orientation, which is whether the angle should be relative to the particle's movement. We have set it to false as we just want the particles to spin freely. To test this effect out, open up scr_Particles_DustCloud and insert a burst emitter before the Dust Cloud is emitted, so that the Glass particles appear behind the other effect. code Save the game and then play it. When the Pillars break apart, there should be shards of Glass exploding out along with the Dust Cloud. The effect should look something like the following screenshot: Image Next we need to create Shrapnel for the Wood and Steel particles. Create new Sprites for spr_Particle_Wood and spr_Particle_Steel with the supplied images in Chapter 8/Sprites/ in the same manner as we did for Glass. As these particles are global, we cannot just swap the Sprite out dynamically. We need to create new particles for each type. In scr_Global_Particles, add particles for both Wood and Steel with the same attributes as Glass. Currently the effect always creates Glass particles, which is not what we want. To fix this we are going to add a variable, myParticle, to each of the different Pillars to allow us to spawn the appropriate particle.
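Gathering the Glass Shrapnel calls just described into one hedged GML sketch (the sprite flags follow the prose; the life and speed ranges are our own guesses, not the book's values):

```gml
// scr_Global_Particles (continued): the Glass Shrapnel particle
globalvar particle_Glass;
particle_Glass = part_type_create();

// sprite-based: not animated, not stretched, random subimage per spawn
part_type_sprite(particle_Glass, spr_Particle_Glass, false, false, true);

// shorter life and higher velocity than the Dust Cloud
part_type_life(particle_Glass, 10, 15);
part_type_direction(particle_Glass, 0, 359, 0, 0);
part_type_speed(particle_Glass, 4, 8, 0, 0);

// spin freely: 20 degrees per step with up to 4 degrees of wiggle,
// and the angle is not relative to the direction of movement
part_type_orientation(particle_Glass, 0, 359, 20, 4, false);
```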
Open scr_Pillar_Glass_Create and add the following code at the end of the script: code Repeat the last step for Wood and Steel with the appropriate particle assigned. In order to have the proper particle spawn, all we need to do is reopen scr_Particles_DustCloud and change the variable particle_Glass to myParticle as in the following code: code Save the game and play until you have destroyed all three types of Pillars to see the effect. It should look something like the following screenshot, where each Pillar spawns its own Shrapnel: Image Making the TNT explosion When the TNT explodes, it shoots out some TNT Fragments which are currently bland-looking Sprites. We want the Fragments to be on fire as they streak across the scene. We also want a cloud of smoke to rise up from the explosion to indicate that the explosion we see is actually on fire. This is going to cause some complications. In order to make something appear to be on fire, it will need to change color, say from white to yellow to orange. As we have already mentioned, because WebGL is not supported by all browsers, we cannot utilize any of the functions that allow us to blend colors together. This means that we need to work around this issue. The solution is to use several particles instead of one. We will start by creating some custom colors so that we can achieve the look of fire and smoke that we want. Open scr_Global_Colors and add the following colors: code We already have a nice yellow color, so we add an orange, a slightly yellow-tinted white, and a partially orange black color. In order to achieve the fake blending effect we will need to spawn one particle type, and upon its death, have it spawn the next particle type. For this to work properly, we need to construct the creation of the particles in the opposite order from that in which they will be seen. In this case, we need to start by building the smoke particle.
In scr_Global_Particles add a new particle for the smoke with the following attributes: code We start by adding the particle and using the built-in smoke shape. We want the smoke to linger for a while, so we set its life to range from a minimum of one second to almost two full seconds. We then set the direction and speed to be more or less upwards so that the smoke rises. Next, we set the size and have it grow over time. With the alpha values, we don't want the smoke to be completely opaque, so we set it to start half transparent and fade away over time. Next, we are using part_type_color1 which allows us to tint the particle without affecting performance very much. Finally, we apply some gravity to the particles so that any angled particles float slowly upwards. The smoke is the final step of our effect and it will be spawned from an orange flame that precedes it. code Once again we set up the particle using the built-in smoke shape, this time with a much shorter lifespan. The general direction is still mainly upwards, though there is more spread than the smoke. These particles are slightly smaller, tinted orange, and will be partially transparent for their entire lives. We have added a little bit of upward gravity, as this particle is in between fire and smoke. Finally, we are using a function that will spawn a single particle of smoke upon the death of each orange particle. The next particle in the chain for this effect is a yellow particle. This time we are going to use the FLARE shape, which will give a better appearance of fire. It will also be a bit smaller, live slightly longer than the orange particle, and move faster, spreading in all directions. We will not add any transparency to this particle so that it appears to burn bright. code We have only one more particle to create for this effect, which is the hottest and brightest white particle. Its construction is the same as the yellow particle, except it is smaller and faster.
code We now have all the particles we need for this particle effect; we just need to add an emitter to spawn them. This time we are going to use a stream emitter, so that the fire continuously flows out of each Fragment. Since the Fragments are moving, we will need a unique emitter for each Fragment we create. This means it cannot be a global emitter, but rather a local one. Open scr_TNT_Fragment_Create and add the following code at the end of the script: code We create an emitter with a fairly small spawning area and balanced distribution. At every step, the emitter will create five new Fire particles as long as the emitter exists. The emitter is now created at the same time as the Fragment, but we need the emitter to move along with it. Open scr_TNT_Fragment_Step and add the following code: code As already mentioned, we need to destroy the emitter, otherwise it will never stop streaming particles. For this we will need to open obj_TNT_Fragment and add a destroy event with a new Script, scr_TNT_Fragment_Destroy, which removes the attached emitter. code This function will remove the emitter from the system without removing any of the particles that have been spawned. One last thing we need to do is to uncheck the Visible checkbox, as we don't want to see the Fragment sprite, but just the particles. Save the game and detonate the TNT. Instead of just seeing a few Fragments, there are now streaks of fire jetting out of the explosion that turn into dark clouds of smoke that float up. It should look something like the following screenshot: Image Cleaning up the particles At this point, we have built a good variety of effects using various particles and emitters. The effects have added a lot of polish to the game, but there is a flaw with the particles. If the player decides to restart the room or go to the SHOP immediately after the explosion has occurred, the emitters will not be destroyed.
This means that they will continue to spawn particles forever, and we will lose all references to those emitters. The game will end up looking like the following screenshot: Image The first thing we need to do is to destroy the emitters when we leave the room. Luckily, we have already written a script that does exactly this. Open obj_TNT_Fragment, add a Room End event, and attach scr_TNT_Fragment_Destroy to it. Even if we destroy the emitters before changing rooms, any particles remaining in the game will still appear in the next room, if only briefly. What we need to do is clear all the particles from the system. While this might sound like a lot of work, it is actually quite simple. As Overlord is in every level, but not in any other room, we can use it to clean up the scene. Open obj_Overlord, add a Room End event, and attach a new Script, scr_Overlord_RoomEnd, with the following line of code: part_particles_clear(system); This function will remove any particle that exists within the system, but will not remove the particle type from memory. It is important that we do not destroy the particle type, as we would not be able to use a particle again if its type no longer exists. Save the game, explode some TNT, and restart the room immediately. You should no longer see any particles in the scene. Summary In this article, we added some polish to the game to really make it shine. We delved into the world of particles and created a variety of effects that add impact to the TNT and Pillar destruction.
Creating Our First Game
Packt
31 May 2013
(For more resources related to this topic, see here.) Let's get serious – the game The game we will implement now is inspired by Frogger. In this old school arcade game, you played the role of a frog trying to cross the screen by jumping on logs and avoiding cars. In our version, the player is a developer who has to cross the network cable by jumping on packets and then cross the browser "road" by avoiding bugs. To sum up, the game specifications are as follows: If the player presses the up arrow key once, the "frog" will go forward one step. By pressing the right and left arrow key, the player can move horizontally. In the first part (the network cable) the player has to jump on packets coming from the left of the screen and moving to the right. The packets are organized in lines where packets of each line travel at different speeds. Once the player is on a packet, he/she will move along with it. If a packet drives the player outside of the screen, or if the player jumps on the cable without reaching a packet, he/she will die and start at the beginning of the same level once again. In the second part (the browser part) the player has to cross the browser screen by avoiding the bugs coming from the left. If the player gets hit by a bug he/she will start at the beginning of the same level once again. These are very simple rules, but as you will see they will already give us plenty of things to think about. Learning the basics Throughout this article, we will use DOM elements to render game elements. Another popular solution would be to use the Canvas element. There are plus and minus points for both technologies, and there are a few effects that are simply not possible to produce with only DOM elements. However, for the beginner, the DOM offers the advantage of being easier to debug, working on almost all existing browsers (yes, even on Internet Explorer 6), and in most cases offering reasonable speed for games.
The DOM also abstracts the dirty business of having to target individual pixels and tracking which part of the screen has to be redrawn. Even though Internet Explorer supports most of the features we will see in this book, I would not recommend creating a game that supports it. Indeed, its market share is negligible nowadays (http://www.ie6countdown.com/) and you will encounter some performance issues. Now for some game terminology: sprites are the moving parts of a game. They may be animated or nonanimated (in the sense of changing their aspect versus simply moving around). Other parts of the game may include the background, the UI, and tiles. Framework During this article, we will write some code; part of the code belongs to an example game and is used to describe scenes or logic that are specific to it. Some code, however, is very likely to be reused in each of your games. For this reason, we will regroup some of those functions into a framework that we will cleverly call gameFramework, or gf in short. A very simple way to define a namespace in JavaScript is to create an object and add all your functions directly to it. The following code gives you an example of what this might look like for two functions, shake and stir, in the namespace cocktail.

// define the namespace
var cocktail = {};

// add the function shake to the namespace
cocktail.shake = function(){...}

// add the function stir to the namespace
cocktail.stir = function(){...}

This has the advantage of avoiding collision with other libraries that use similar names for their objects or functions. Therefore, from now on when you see any function added to the namespace, it will mean that we think those functions will be used by the other games we will create later in this article or that you might want to create yourself. The following code is another notation for namespace. Which one you use is a personal preference and you should really use the one that feels right to you!
var cocktail = {
    // add the function shake to the namespace
    shake: function(){...},
    // add the function stir to the namespace
    stir: function(){...}
};

Typically, you would keep the code of the framework in a JS file (let's say gameFramework.js) and the code of the game in another JS file. Once your game is ready to be published, you may want to regroup all your JavaScript code into one file (including jQuery if you wish so) and minimize it. However, for the whole development phase it will be way more convenient to keep them separate. Sprites Sprites are the basic building blocks of your game. They are basically images that can be animated and moved around the screen. To create them you can use any image editor. If you work on OS X, there is a free one that I find particularly well done, Pixen (http://pixenapp.com/). You can use animated gifs. With this method you have no way to access the index of the current frame through JavaScript, and no control over when the animation starts to play or when it ends. Furthermore, having many animated GIFs tends to slow things down a lot. You can change the source of the image. This is already a better solution, but it performs worse and requires a large number of individual images. Another disadvantage is that you cannot choose to display only one part of the image; you have to show the entire image each time. Finally, if you want to have a sprite made of a repeating image, you will have to use many img elements. For the sake of completeness, we should mention here one advantage of img: it's really easy to scale an img element; just adjust the width and height. The proposed solution uses simple divs of defined dimensions and sets an image in the background. To generate animated sprites, you could change the background image, but instead we use the background-position CSS property.
The image used in this situation is called a sprite sheet and typically looks something like the following screenshot: The mechanism by which the animation is generated is shown in the following screenshot: Another advantage is that you can use a single sprite sheet to hold multiple animations. This way you will avoid having to load many different images. Depending on the situation, you may still want to use more than one sprite sheet, but it's a good thing to try to minimize their number. Implementing animations It's very simple to implement this solution. We will use .css() to change the background properties and a simple setInterval to change the current frame of the animation. Therefore, let's say that we have a sprite sheet containing 4 frames of a walk cycle where each frame measures 64 by 64 pixels. First, we simply have to create a div with the sprite sheet as its background. This div should measure 64 by 64 pixels, otherwise the next frame would leak onto the current one. In the following example, we add the sprite to a div with the ID mygame.

$("#mygame").append("<div id='sprite1'>");
$("#sprite1").css("backgroundImage", "url('spritesheet1.png')");

As the background image is by default aligned with the upper-left corner of the div, we will only see the first frame of the walk-cycle sprite sheet. What we want is to be able to change what frame is visible. The following function changes the background position to the correct position based on the argument passed to it. Take a look at the following code for the exact meaning of the arguments:

/**
 * This function sets the current frame.
 * -divId: the id of the div whose frame you want to change
 * -frameNumber: the frame number
 * -frameDimension: the width of a frame
 **/
gameFramework.setFrame = function(divId, frameNumber, frameDimension) {
    // the offset is negative so the sheet shifts left,
    // revealing the requested frame
    $("#" + divId)
        .css("backgroundPosition", "-" + frameNumber * frameDimension + "px 0px");
}

Now we have to call this at regular intervals to produce the animation.
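Before wiring up the timer, the offset arithmetic behind setFrame can be checked on its own. This tiny helper is our own illustration, not part of the framework; note that the horizontal offset must be negative so the sheet shifts left and frame n becomes visible:

```javascript
// computes the backgroundPosition value for a given frame,
// assuming frames are laid out horizontally on the sprite sheet
function backgroundPositionFor(frameNumber, frameDimension) {
  // negative offset: the sheet moves left so frame n shows through the div
  return -(frameNumber * frameDimension) + "px 0px";
}

console.log(backgroundPositionFor(2, 64)); // → "-128px 0px"
```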
We will use setInterval with an interval of 60 milliseconds, that is, around 17 frames per second. This should be enough to give the impression of walking; however, this really has to be fine-tuned to match your sprite sheet. To do this we use an anonymous function that we pass to setInterval, which will in turn call our function with the correct parameter.

var totalNumberOfFrame = 4;
var frameNumber = 0;
setInterval(function(){
    gameFramework.setFrame("sprite1", frameNumber, 64);
    frameNumber = (frameNumber + 1) % totalNumberOfFrame;
}, 60);

You probably noticed that we're doing something special to compute the current frame. The goal is to cover values from 0 to 3 (as there are 4 frames) and to loop back to 0 when we reach 4. The operation we use for this is called modulo (%) and it's the rest of the integer division (also known as Euclidean division). For example, at the third frame we have 3 / 4 which is equal to 0 plus a remainder of 3, so 3 % 4 = 3. When the frame number reaches 4 we have 4 / 4 = 1 plus a remainder of 0, so 4 % 4 = 0. This mechanism is used in a lot of situations. Adding animations to our framework As you can see there are more and more variables needed to generate an animation: the URL of the image, the number of frames, their dimension, the rate of the animation, and the current frame. Furthermore, all those variables are associated with one animation, so if we need a second one we have to define twice as many variables. The obvious solution is to use objects. We will create an animation object that will hold all the variables we need (for now, it won't need any method). This object, like all the things belonging to our framework, will be in the gameFramework namespace. Instead of giving all the values of each of the properties of the animation as an argument, we will use a single object literal, and all the properties that aren't defined will default to some well-thought-out values. To do this, jQuery offers a very convenient method: $.extend.
This is a very powerful method and you should really take a look at the API documentation (http://api.jquery.com/) to see everything that it can do. Here we will pass to it three arguments: the first one will be extended with the values of the second one, and the resulting object will be extended with the values of the third.

/**
 * Animation Object.
 **/
gf.animation = function(options) {
    var defaultValues = {
        url: false,
        width: 64,
        numberOfFrames: 1,
        currentFrame: 0,
        rate: 30
    };
    $.extend(this, defaultValues, options);
}

To use this function we will simply create a new instance of it with the desired values. Here you can see the values used in the preceding examples:

var firstAnim = new gameFramework.animation({
    url: "spritesheet1.png",
    numberOfFrames: 4,
    rate: 60
});

As you can see, we didn't need to specify width: 64 because it's the default value! This pattern is very convenient and you should keep it in mind each time you need default values and also the flexibility to override them. We can rewrite the function to use the animation object:

gf.setFrame = function(divId, animation) {
    $("#" + divId)
        .css("backgroundPosition", "-" + animation.currentFrame * animation.width + "px 0px");
}

Now we will create a function for our framework based on the technique we've already seen, but this time it will use the new animation object. This function will start animating a sprite, either once or in a loop. There is one thing we have to be careful about: if we define an animation for a sprite that is already animated, we need to deactivate the current animation and replace it with the new one. To do this we will need an array to hold the list of all intervals' handles. Then we'll only need to check if one exists for this sprite and clear it, then define it again.

gf.animationHandles = {};

/**
 * Sets the animation for the given sprite.
 **/
gf.setAnimation = function(divId, animation, loop){
    if(gf.animationHandles[divId]){
        clearInterval(gf.animationHandles[divId]);
    }
    if(animation.url){
        $("#"+divId).css("backgroundImage","url('"+animation.url+"')");
    }
    if(animation.numberOfFrames > 1){
        gf.animationHandles[divId] = setInterval(function(){
            animation.currentFrame++;
            if(!loop && animation.currentFrame >= animation.numberOfFrames){
                clearInterval(gf.animationHandles[divId]);
                gf.animationHandles[divId] = false;
            } else {
                animation.currentFrame %= animation.numberOfFrames;
                gf.setFrame(divId, animation);
            }
        }, animation.rate);
    }
}

This will provide a convenient, flexible, and quite high-level way to set an animation for a sprite.
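The two mechanisms this section relies on, default-value merging and modulo frame wrapping, can be exercised outside the browser. The sketch below is our own stand-in for the framework: Object.assign mimics the three-argument $.extend call, and the loop simulates the interval ticks.

```javascript
// mimic gf.animation's defaults, with later sources overriding earlier ones
function makeAnimation(options) {
  var defaultValues = {
    url: false,
    width: 64,
    numberOfFrames: 1,
    currentFrame: 0,
    rate: 30
  };
  return Object.assign({}, defaultValues, options);
}

var anim = makeAnimation({ url: "spritesheet1.png", numberOfFrames: 4, rate: 60 });
console.log(anim.width); // → 64, the default survived

// simulate eight interval ticks of the looping animation
var frames = [];
for (var tick = 0; tick < 8; tick++) {
  frames.push(anim.currentFrame);
  anim.currentFrame = (anim.currentFrame + 1) % anim.numberOfFrames;
}
console.log(frames.join(",")); // → "0,1,2,3,0,1,2,3"
```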
Making Money with Your Game
Packt
17 May 2013
(For more resources related to this topic, see here.) Your game development strategy If you want to build a game to make some money, it is imperative you take a few things into consideration before you start building one. The first question you will need to ask yourself is probably this: who am I going to build a game for? Are you aiming at everyone capable of playing games or do you want to target a very specific segment of people and meet their gaming needs? This is the difference between broad and niche targeting. An example of very broadly targeted games are most tower defense games, in which you need to build towers with diverse properties to repel an army. Other examples include Tetris, Bejeweled, Minesweeper, and most light puzzle games in general. Angry Birds is another example of a game that is popular with a broad audience because of its simplicity, likeable graphics, and incredible amount of clever marketing. Casual games in general seem to appeal to the masses because of the following few factors: Simplicity prevails: most gamers get used to the game in mere minutes. There are little or no knowledge prerequisites: you are not expected to already know some of the background story or have experience in these types of games. Casual gamers tend to do well even though they put in less time to practice. Even if you do well from the start, you can still become better at it. A game at which you cannot become better by replaying doesn't hold up for long. Notable exceptions are games of chance like roulette and the slots, which do prove to be addictive; but that is for other reasons, such as the chance to win money. The main advantage to building casual games is that practically everyone is a potential user of your game. As such, the achievable success can be enormous. World of Warcraft is a game which has moved from rather hardcore and niche to more casual over the years.
They did this because they had already reached most regular gamers out there, and decided to convince the masses that you can play World of Warcraft even if you don't play a lot in general. The downside of trying to please everyone is the amount of competition. Sticking out as a unique game among numerous games out there is tremendously difficult. This is especially true if you don't have an impressive marketing machine to back it up. A good example of a niche game is any game based on a movie. Games based on Star Trek, Star Wars, The Lord of the Rings, and so on, are mostly aimed at people who have seen and liked those movies. Games can also be niche because they are targeted solely at a specific group of gamers. For example, people who prefer playing FPS (First Person Shooter) games, and do so every single day. In essence, niche games have the following properties (note that they are the opposites of casual or broadly targeted games): Steep learning curve: mastery requires many hours of dedicated gaming. Some knowledge or experience with games is required. An online shooter game such as PlanetSide 2 requires you to have at least some experience with shooter games in the past, since you are pitted against people who know what they are doing. The more you play the game, the more are the useful rewards you get. Playing a lot is often rewarded with items that make you even stronger in the game, thus reinforcing the fact that you already became better by playing more. StarCraft is a game released by Blizzard in 1998 and is still played in tournaments today, even though there is a follow-up: StarCraft 2. Games such as the original StarCraft are perfectly feasible to build in HTML5 and run in a browser or on a smartphone. When StarCraft was released, the average desktop computer had less power than many smartphones have today. Technically, the road is open; replicating the same level of success is another matter though.
The advantage of aiming at a niche of gamers is the unique position you can take in their lives. Maybe there are not many people in your target group, but it will be easier to get and keep their attention with your game since it is specifically built for them. In addition, it doesn't mean that because you have a well-defined aim, gamers can't drop in from unexpected corners. People you would have never thought would play your game can still like what you did. It is for this reason that knowing your gamers is so important, and why tools such as Playtomic exist. The disadvantage of niche marketing is obvious: your game is very unlikely to grow beyond a certain point; it will probably never rank among the most played games in the world. The type of games you are going to develop is one choice; the amount of detail you put into each of them is another. You can put as much effort into building a single game as you want. In essence, a game is never finished. A game can always have an extra level, Easter egg, or another nice little detail. When scoping your game, you must decide whether you will use a shotgun or a sniper development strategy. In a shotgun strategy, you develop and release games quickly. Each game still has a unique element that should distinguish it from other games: the UGP (Unique Gaming Proposition). But games released under a shotgun strategy don't deliver a lot of details; they are not polished. The advantages of adopting a shotgun strategy are plenty: Low development cost; thus every game represents a low risk The short time to market allows for using world events as game settings You have several games on the market at once, but often you only need a single one to succeed in order to cover the expenses incurred for the others Short games can be given to the public for free but monetized by selling add-ons, such as levels However, it's not just rainbows and sunshine when you adopt this strategy.
There are several reasons why you wouldn't choose the shotgun strategy: A game that doesn't feel finished has less chance of being successful than a perfected one. Throwing your game on the market not only tests whether a certain concept works, but also exposes it to the competitors, who can now start building a copy. Of course, you have the first mover advantage, but it's not as big as it could have been. You must always be careful not to throw garbage on the market either, or you might ruin your name as a developer. However, don't get confused. The shotgun strategy is not an excuse for building mediocre games. Every game you release should have an original touch to it— something that no other game has. If a game doesn't have that new flavor, why would anyone prefer it over all the others? Then, of course, there is the sniper strategy, which involves building a decent and well-thought-out game and releasing it to the market with the utmost care and support. This is what distributors such as Apple urge developers to do, and for good reason—you wouldn't want your app store full of crappy games, would you? Some other game distributors, such as Steam, are even pickier about the games they allow to be distributed, making the shotgun strategy nearly impossible. But it is also the strategy most successful game developers use. Take a look at developers such as Rockstar (developer of the GTA series), Bethesda (developer of the Elder Scrolls series), Bioware (developer of the Mass Effect series), Blizzard (developer of the Warcraft series), and many others. These are no small fries, yet they don't have that many games on the market. This tactic of developing high quality games and hoping they will pay off is obviously not without risk. In order to develop a truly amazing game, you also need the time and money to do so. If your game fails to sell, this can be a real problem for you or your company.
Even for HTML5 games, this can be the case, especially since devices and browsers keep getting more and more powerful. When the machines running the games get more powerful, the games themselves often become more complex and take longer to develop. We have taken a look at two important choices one needs to make when going into the game-developing business. Let's now look at the distribution channels that allow you to make money with your game, but before that let's summarize the topic we have just covered: Deciding who you want to target is extremely important even before you start developing your game. Broad targeting is barely targeting at all. It is about making a game accessible and likeable by as many people as possible. Niche targeting is taking a deeper look and interest in a certain group of people and building a game to suit their specific gaming needs. In developing and releasing games, there are two big strategies: shotgun and sniper. In the shotgun strategy you release games rapidly. Each game still has unique elements that no other games possess, but they are not as elaborate or polished as they could be. With the sniper strategy you build only a few games, but each one is already perfected at the time of release and only needs slight polishing when patches are released for it. Making money with game apps If you built your game into an app, you have several distribution channels you can turn to, such as Firefox Marketplace, the Intel AppUp center, Windows Phone Store, Amazon Appstore, SlideMe, Mobango, Getjar, and Apple Appsfire. But the most popular players on the market currently are Google Play and the iOS App Store. The iOS App Store is not to be confused with the Mac App Store. iPad and Mac have two different operating systems, iOS and Mac OS, so they have separate stores. Games can be released both in the iOS and the Mac store. There could also be some confusion between Google Play and the Chrome Web Store.
Google Play contains all the apps available for smartphones that run Google's Android operating system. The Chrome Web Store lets you add apps to your Google Chrome browser. So there are quite a few distribution channels to pick from, and here we will have a quick look at Google Play, the iOS App Store, and the Chrome Web Store.

Google Play

Google Play is Android's default app shop and the biggest competitor of the iOS App Store. If you wish to become an Android app developer, there is a $25 fee and a developer distribution agreement that you must read. In return for the entrance fee and signing this agreement, they allow you to make use of their virtual shelves and have all their benefits. You can set your price as you please, but for every game you sell, Google will cash in about 30 percent. It is possible to do some geographical price discrimination. So you could set your price at, let's say, €1 in Belgium, while charging €2 in Germany. You can change your price at any time; however, if you release a game for free, there is no turning back. The only way to monetize the app afterwards is by allowing in-game advertising, selling add-ons, or creating an in-game currency that can be bought with real money. Introducing an in-game currency that is bought with real money can be a very appealing format. A really successful example of this monetizing scheme can be found in The Smurfs games. In this game you build your own Smurf village, complete with Big Smurf, Smurfette, and a whole lot of mushrooms. Your city gets bigger as you plant more crops and build new houses, but it is a slow process. To speed it up a bit you can buy special berries in exchange for real money, which in turn allow you to build exclusive mushrooms and other things. This monetization scheme has become very popular, as shown by games such as League of Legends, PlanetSide 2, World of Tanks, and many others.
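The premium-currency flow described above, buying currency with real money and spending it on exclusive items, can be sketched in a few lines of JavaScript. Everything here (the Wallet object, the "berries" name, the costs) is a hypothetical illustration, not any store's actual billing API; the real-money transaction itself would be handled by the platform's billing service.

```javascript
// Minimal sketch of a premium-currency wallet. The real-money purchase
// (Google Checkout, in-app billing, etc.) is assumed to happen elsewhere
// and simply credits the wallet on success.
function Wallet(berries) {
  this.berries = berries || 0; // premium currency bought with real money
}

// Credit currency after a successful real-money transaction.
Wallet.prototype.credit = function (amount) {
  this.berries += amount;
};

// Spend currency on an exclusive item; returns true when affordable.
Wallet.prototype.buy = function (item) {
  if (this.berries < item.cost) {
    return false; // player must purchase more berries first
  }
  this.berries -= item.cost;
  return true;
};

var wallet = new Wallet(0);
wallet.credit(50); // e.g. what a €2 purchase might grant
var bought = wallet.buy({ name: "exclusive mushroom", cost: 30 });
```

The design point is that the game only ever reasons about the virtual currency; whether a player earned it slowly or bought it instantly is invisible to the rest of the code.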
For Google Play apps, this in-app payment system is supported by Android's Google Checkout. In addition, Google gives you access to some basic statistics for your game, such as the number of players and the devices on which they play, as shown in the following diagram. Information such as this allows you to redesign your game to boost your success. For example, you might notice that a certain device doesn't have that many unique users, even though it is a very popular device bought by many people. If this is the case, maybe your game doesn't look nice on this particular smartphone or tablet and you should optimize for it. The biggest competitor, and the initiator of all app stores, is the iOS App Store, so let's have a look at it.

iOS App Store

The iOS App Store was the first of its kind and, at the time of writing this book, it still has the biggest revenue. In order to publish apps in the iOS App Store, you need to subscribe to the iOS Developer Program, which costs $99 annually, almost four times the subscription fee of Google Play. In effect, they offer about the same thing as Google Play does, as you can see in this short list:

You pick your own prices and get 70 percent of sales revenue.
You receive monthly payments without credit card, hosting, or marketing fees.
There is support and adequate documentation to get you started.

More importantly, here are the differences between Google Play and the iOS App Store:

As mentioned earlier, signing up for Google Play is cheaper.
The screening process of Apple seems to be stricter than that of Google Play, which results in a longer time to reach the market and a higher chance of never even reaching it.
Google Play incorporates a refund option that allows the buyer of your app to be refunded if he or she uninstalls the app or game within 24 hours.
If you want your game to make use of some Android core functionalities, this is possible since the platform is open source. Apple, on the other hand, is very protective of its iOS platform and doesn't allow the same level of flexibility for apps. This element might not seem that important for games just yet, but it might be for very innovative games that do want to make use of this freedom.
iOS reaches more people than Android does, though the current trend indicates that this might change in the near future.
There seem to be significant differences between the kind of people buying Apple devices and the users of smartphones or tablets with Android OS. Apple fans tend to have a lower barrier towards spending money on their apps than Android users do. In general, iPads and iPhones are more expensive than other tablets and smartphones, attracting people who have no problem spending even more money on the device. This difference in target group seems to make it more difficult for Android game developers to make money from their games.

The last option for selling apps that we will discuss here is the Chrome Web Store.

The Chrome Web Store

The Chrome Web Store differs from Google Play and the iOS App Store in that it provides apps specifically for the Chrome browser and not for mobile devices. The Chrome Store offers web apps. Web apps are like applications you would install on your PC, except that web apps are installed in your browser and are mostly written using HTML, CSS, and JavaScript, like our ImpactJS games. The first thing worth noticing about the Chrome Store is the one-time $5 entrance fee for posting apps. If this in itself is not good enough, the transaction fee for selling an app is only 5 percent. This is a remarkable difference from both Google Play and the iOS App Store. If you are already developing a game for your own website and have packaged it as an app for Android and/or Apple, you can just as well launch it on the Chrome Web Store.
Turning your ImpactJS game into a web app for the Chrome Store can be done using AppMobi, yet Google itself provides detailed documentation on how to do this manually. One of the biggest benefits of the web app is the facilitation of the permission process. Let's say your web app needs the location of the user in order to work. While iPad apps ask for permission every time they need location data, the web app asks permission only once: at installation time. Furthermore, you have the same functionalities and payment modalities as in Google Play, give or take. For instance, there is also the option to incorporate a free trial version, also known as a freemium model. In a freemium model, the demo version can be downloaded for free, with the option of upgrading it to the full version for a price. The Smurfs game also uses the freemium model, albeit with a difference. The entire game is free, but players can opt to pay real money to buy things that would otherwise cost them a lot of time to acquire. In this freemium model, you pay for convenience and unique items. For instance, in PlanetSide 2, acquiring a certain sniper rifle might take you several days or $10, depending on how you choose to play the freemium game. If you plan on releasing an ImpactJS game for Android, there is no real reason why you wouldn't do so for the Chrome Web Store. That being said, let's have a quick recap:

The time when the iOS App Store was the only app store out there is long gone; there is an impressive repertoire of app stores to choose from, including Firefox Marketplace, Intel AppUp Center, Windows Phone Store, Amazon Appstore, SlideME, Mobango, Getjar, Appsfire, and Google Play, among others.
The biggest app stores are currently Google Play and the iOS App Store.
They differ greatly on several fronts, of which the most important ones are:

Subscription fee
Screening process
Type of audience they attract

The Chrome Web Store sells web apps that act like normal apps but are available in the Chrome browser.
The Chrome Web Store is cheap and easy to subscribe to. You should definitely have a go at releasing your game on this platform.

In-game advertising

In-game advertising is another way to make money with your game. It is a growing market and is already being used by major companies; it was even used by Barack Obama in both his 2008 and 2012 campaigns, as shown in the following in-game screenshot. There is a trend towards more dynamic in-game advertising. The game manufacturers make sure there is space for advertising in the game, but the actual ads themselves are decided upon later. Depending on what is known about you, these can then change to become relevant to you as a player and a real-life consumer. When you are just starting out building games, though, in-game advertising isn't that spectacular. Most of the best-known in-game advertisers for online games don't even want their ads in startup games. The requirements for Google AdSense are the following:

Game plays: Minimum 500,000 per day
Game types: Web-based Flash only
Integration: Must be technically capable of SDK integration
Traffic source: Eighty percent of traffic must be from the US and the UK
Content: Family-safe and targeted at users aged 13 and over
Distribution: Must be able to report embedded destinations and have control over where the games are distributed

The requirements for another big competitor, Ad4Game, aren't mild either:

At least 10,000 daily unique visitors
Sub-domains and blogs are not accepted
The Alexa rank should be less than 400,000
Adult/violent/racist content is not allowed

If you are just starting out, these prerequisites are not good news.
Not only because you need such a large number of players before even starting the advertising, but also because currently all support goes to Flash games. HTML5 games are not fully supported yet, though that will probably change. Luckily, there are companies out there that do allow you to start using advertising even though you don't have 10,000 visitors a day. Tictacti is one of those companies. Once again, almost all support goes to Flash games, but they do have one option available for an HTML5 game: pre-roll. Pre-roll simply means that a screen with an ad appears before you can start the game. Integration of the pre-roll advertising is rather straightforward and does not require a change to your game, but to your index.html file, as in the example Tictacti provides. When adding their snippet to your game's index.html file, you fill out your own publisher ID and you are basically ready to go. Tictacti is similar to Google Analytics in that it also provides you with some relevant information about the ads on your game's website, as shown in the following diagram. Be careful, however: pre-roll advertising is one of the most intrusive and annoying kinds of advertising. Technically, it is not even in-game advertising at all, since it runs before you play the game. If your game is not yet well established enough to convince the player to endure the advertising before being able to play, don't choose this option. Give your game some time to build a reputation before putting your gamers through this. As the last option, we will have a look at selling your actual distribution rights with MarketJS. But let's first briefly recap in-game advertising:

In-game advertising is a growing market. Even Barack Obama made use of in-game billboards to support his campaigns.
There is a trend towards more dynamic in-game advertising: using your location and demographic information to adapt the ads in a game.
Currently, even the most accessible companies that offer online in-game advertising are focused on Flash games and require many unique visitors before even allowing you to show their ads.
Tictacti is a notable exception, as it has low prerequisites and an easy implementation, though advertising is currently limited to pre-roll ads.
Always take care to first build a positive reputation for your game and allow advertisements later.

Selling distribution rights with MarketJS

The last option we will investigate in this chapter is selling your game's distribution rights. You can still make money by just posting your game to all the app stores and on your own website, but it becomes increasingly difficult to be noticed. Quality can only prevail if people know it is out there; thus, making a good game is sometimes not enough: you need marketing. If you are a beginner game builder with great game ideas and the skills to back them up, that's great, but marketing may not be your cup of tea. This is where MarketJS comes into play. MarketJS acts as an intermediary between you as the game developer and the game publishers. The procedure is simple once you have a game:

You sign up on their website, http://www.marketjs.com.
Upload the game on your own website or directly to the MarketJS server.
Post your game for publishers to see. You set several options, such as the price and the contract type that would suit you best.

You have five contract options:

Complete distribution contract: Sell all your distribution rights to the game.
Exclusive distribution partner contract: Here you restrict yourself to working with one distributor but still retain the rights to the game.
Non-exclusive contract: Here, any distributor can buy the rights to use your game, but you can go on selling rights as long as you want.
Revenue share: Here you negotiate how to split the revenues derived from the game.
Customized contract: This can basically have any terms.
You can choose this option if you are not sure yet what you want out of your game. A part of the webpage on which you fill out your contracting preferences is shown in the following screenshot. After you have posted a demo, it is a matter of waiting for a publisher to spot it, get stunned by its magnificence, and offer to work with you. The big contribution of MarketJS to the gaming field is this ability to let the game developer focus on developing games. Someone else takes care of the marketing aspect, which is a totally different ballgame. MarketJS also offers a few interesting statistics, such as the average price of a game on their website, as shown in the following diagram. It grants you some insight into whether you should take up game development as a living or keep doing it as a hobby. According to MarketJS, prices for non-exclusive rights average between $500 and $1000, while exclusive rights to a game sell for somewhere between $1500 and $2000. If you can build a decent game within this price range, you are more than ready to go:

MarketJS is a company that brings game distributors and developers closer together.
Their focus is on HTML5 games, so they are great if you are a startup ImpactJS game developer.
They require no subscription fee and have a straightforward process to turn your game into a showcase with a price tag.

Summary

In this article we have taken a look at some important elements to consider in your game development strategy. Do you wish to adopt a shotgun approach and develop a lot of games in a short time span? Or will you use the sniper strategy and build only a few, but very polished, games? You also need to decide upon the audience you wish to reach with your game. You have the option of building a game that is liked by everyone, but the competition is steep. Making money on the application stores is possible, but for Android and Apple there are registration fees.
If you decide to develop apps, it is worth giving the Chrome Web Store (which runs web apps) a try. In-game advertising is another way to fund your efforts, though most companies offering this service for online games have high prerequisites and support Flash games more than they do the newer HTML5 games. One of the most promising monetization schemes is the freemium model. Players are allowed to play your game for free but pay real money for extras. This is an easily tolerated model, since the game is essentially free to play for anyone not willing to spend money, and no annoying advertising is present either. A combination of in-game advertising and freemium is possible as well: people annoyed by the advertising pay a fee and in return are not bothered by it anymore. A final option is leaving the marketing aspect to someone else by selling your distribution rights with the help of MarketJS. They aim for HTML5 games, and this option is especially useful for the beginner game developer who has difficulty marketing his or her game.

Resources for Article:

Further resources on this subject:
HTML5: Developing Rich Media Applications using Canvas [Article]
Flash 10 Multiplayer Game: The Lobby and New Game Screen Implementation [Article]
Building HTML5 Pages from Scratch [Article]
Creating a Custom HUD

Packt
29 Apr 2013
5 min read
(For more resources related to this topic, see here.)

Mission Briefing

In this project we will be creating a HUD that can be used within a medieval RPG and that will fit nicely into the provided Epic Citadel map, making use of Scaleform and ActionScript 3.0 in Adobe Flash CS6. As usual, we will be following a simple step-by-step process from beginning to end to complete the project. Here is the outline of our tasks:

Setting up Flash
Creating our HUD
Importing Flash files into UDK

Setting up Flash

Our first step will be setting up Flash in order for us to create our HUD. In order to do this, we must first install the Scaleform Launcher.

Prepare for Lift Off

At this point, I will assume that you have run Adobe Flash CS6 at least once beforehand. If not, you can skip this section and go to where we actually import the .swf file into UDK. Alternatively, you can try to use some other way to create a Flash animation, such as FlashDevelop, Flash Builder, or SlickEdit; but that will have to be done on your own.

Engage Thrusters

The first step will be to install the Scaleform Launcher. The launcher will make it very easy for us to test our Flash content using the GFx hardware-accelerated Flash Player, which is what UDK will use to play it. Let's get started:

Open up Adobe Flash CS6 Professional.
Once the program starts up, open up Adobe Extension Manager by going to Help | Manage Extensions.... You may see the menu say Performing configuration tasks, please wait.... This is normal; just wait for it to bring up the menu as shown in the following screenshot.
Click on the Install option from the top menu on the right-hand side of the screen.
In the file browser, locate the path of your UDK installation and then go into the Binaries\GFx\CLIK Tools folder. Once there, select the ScaleformExtensions.mxp file and then select OK.
When the agreement comes up, press the Accept button; then select whether you want the program to be installed for just you or everyone on your computer.
If Flash is currently running, you should get a window popping up telling you that the extension will not be ready until you restart the program. Close the manager and restart the program. In your reopened version of Flash, start up the Scaleform Launcher by clicking on Window | Other Panels | Scaleform Launcher. At this point you should see the Scaleform Launcher panel come up as shown in the following screenshot. All of the options are grayed out because the launcher doesn't yet know how to access the GFx player, so let's set that up now:

Click on the + button to add a new profile.
In the profile name section, type in GFXMediaPlayer.
Next, we need to reference the GFx player. Click on the + button in the player EXE section. Go to your UDK directory, Binaries\GFx, and then select GFxMediaPlayerD3d9.exe. It will then ask you to give a name for the Player Name field with the value already filled in; just hit the OK button.

UDK by default uses DirectX 9 for rendering. However, since GDC 2011, it has been possible for users to use DirectX 11. If your project is using 11, feel free to check out http://udn.epicgames.com/Three/DirectX11Rendering.html and use DX11.

In order to test our game, we will need to hit the button that says Test with: GFxMediaPlayerD3d9, as shown in the following screenshot.

If you know the resolution in which you want your final game to be, you can set up multiple profiles to preview how your UI will look at a specific resolution. For example, if you'd like to see something at a resolution of 960 x 720, you can do so by altering the command params field after %SWF PATH% to include the text -res 960:720.

Now that we have the player loaded, we need to install the CLIK library for our usage:

Go to the Preferences menu by selecting Edit | Preferences.
Click on the ActionScript tab and then click on the ActionScript 3.0 Settings... button.
From there, add a new entry to our Source path section by clicking on the + button.
After that, click on the folder icon to browse to the folder we want. Add an additional path to our CLIK directory in the file explorer by first going to your UDK installation directory and then going to Development\Flash\AS3\CLIK. Click on the OK button and drag-and-drop the newly created Scaleform Launcher to the bottom-right corner of the interface.

Objective Complete — Mini Debriefing

Alright, Flash is now set up for us to work with Scaleform within it, which for all intents and purposes is probably the hardest part of working with Scaleform. Now that we have taken care of it, let's get started on the HUD! As long as you have administrator access to your computer, these settings should persist whenever you are working with Flash. However, if you do not, you will have to run through all of these settings every time you want to work on Scaleform projects.
Creating weapons for your game using UnrealScript

Packt
23 Apr 2013
18 min read
(For more resources related to this topic, see here.)

Creating a gun that fires homing missiles

UDK already has a homing rocket launcher packaged with the dev kit (UTWeap_RocketLauncher). The problem, however, is that it isn't documented well; it has a ton of excess code only necessary for multiplayer games played over a network, and can only lock on when you have loaded three rockets. We're going to change all of that, and allow our homing weapon to lock onto a pawn and fire any projectile of our choice. We also need to change a few functions so that our weapon fires from the correct location and uses the pawn's rotation and not the camera's. In this recipe, we'll be creating a base weapon which extends from one of Unreal Tournament's most commonly used weapons, the shock rifle, and base all of our weapons on that. I've gone ahead and removed unnecessary information, added comments, and altered functionality so that we can lock onto pawns with any weapon, and fire only one missile while doing so.

Getting ready

As I mentioned earlier, our main weapon for this article will extend from UTWeap_ShockRifle, as that gun offers a ton of great base functionality which we can build from. Let's start by opening your IDE and creating a new weapon called MyWeapon, and have it extend from UTWeap_ShockRifle, as follows:

class MyWeapon extends UTWeap_ShockRifle;

How to do it...

We need to start by adding all of the variables that we'll need for our lock-on feature. There are quite a few here, but they're all commented in pretty great detail. Much of this code is straight from UDK's rocket launcher, which is why it looks familiar.
/*********************************************************
* Weapon lock on support
********************************************************/
/** Class of the rocket to use when seeking */
var class<UTProjectile> SeekingRocketClass;
/** The frequency with which we will check for a lock */
var(Locking) float LockCheckTime;
/** How far out should we be considering actors for a lock */
var float LockRange;
/** How long does the player need to target an actor to lock on to it */
var(Locking) float LockAcquireTime;
/** Once locked, how long can the player go without painting the object before they lose the lock */
var(Locking) float LockTolerance;
/** When true, this weapon is locked on target */
var bool bLockedOnTarget;
/** What "target" is this weapon locked on to */
var Actor LockedTarget;
var PlayerReplicationInfo LockedTargetPRI;
/** What "target" is currently pending to be locked on to */
var Actor PendingLockedTarget;
/** How long since the Lock Target has been valid */
var float LastLockedOnTime;
/** When did the pending Target become valid */
var float PendingLockedTargetTime;
/** When was the last time we had a valid target */
var float LastValidTargetTime;
/** angle for locking for lock targets */
var float LockAim;
/** angle for locking for lock targets when on Console */
var float ConsoleLockAim;
/** Sound Effects to play when Locking */
var SoundCue LockAcquiredSound;
var SoundCue LockLostSound;
/** If true, weapon will try to lock onto targets */
var bool bTargetLockingActive;
/** Last time target lock was checked */
var float LastTargetLockCheckTime;

With our variables in place, we can now move on to the weapon's functionality. The InstantFireStartTrace() function is the same function we added in our weapon. It allows our weapon to start its trace from the correct location using the GetPhysicalFireStartLoc() function.
As mentioned before, this simply grabs the rotation of the weapon's muzzle flash socket and tells the weapon to fire projectiles from that location, using the socket's rotation. The same goes for GetEffectLocation(), which is where our muzzle flash will occur. The v in vector for the InstantFireStartTrace() function is not capitalized. The reason is that vector is actually a struct type, not a function, and that is standard procedure in UDK.

/*********************************************************
* Overriden to use GetPhysicalFireStartLoc() instead of
* Instigator.GetWeaponStartTraceLocation()
* @returns position of trace start for instantfire()
********************************************************/
simulated function vector InstantFireStartTrace()
{
    return GetPhysicalFireStartLoc();
}

/*********************************************************
* Location that projectiles will spawn from. Works for
* secondary fire on third person mesh
********************************************************/
simulated function vector GetPhysicalFireStartLoc(optional vector AimDir)
{
    local SkeletalMeshComponent AttachedMesh;
    local vector SocketLocation;
    local TutorialPawn TutPawn;

    TutPawn = TutorialPawn(Owner);
    AttachedMesh = TutPawn.CurrentWeaponAttachment.Mesh;
    /** Check to prevent log spam, and the odd situation in which a cast to type TutPawn can fail */
    if (TutPawn != none)
    {
        AttachedMesh.GetSocketWorldLocationAndRotation(MuzzleFlashSocket, SocketLocation);
    }
    return SocketLocation;
}

/*********************************************************
* Overridden from UTWeapon.uc
* @return the location + offset from which to spawn effects (primarily tracers)
********************************************************/
simulated function vector GetEffectLocation()
{
    local SkeletalMeshComponent AttachedMesh;
    local vector SocketLocation;
    local TutorialPawn TutPawn;

    TutPawn = TutorialPawn(Owner);
    AttachedMesh = TutPawn.CurrentWeaponAttachment.Mesh;
    if (TutPawn != none)
    {
        AttachedMesh.GetSocketWorldLocationAndRotation(MuzzleFlashSocket, SocketLocation);
    }
    return SocketLocation;
}

Now we're ready to dive into the parts of code that are applicable to the actual homing of the weapon. Let's start by adding our debug info, which allows us to troubleshoot any issues we may have along the way.

/*********************************************************
* Prints debug info for the weapon
********************************************************/
simulated function GetWeaponDebug( out Array<String> DebugInfo )
{
    Super.GetWeaponDebug(DebugInfo);
    DebugInfo[DebugInfo.Length] = "Locked:"@bLockedOnTarget@LockedTarget@LastLockedOnTime@(WorldInfo.TimeSeconds - LastLockedOnTime);
    DebugInfo[DebugInfo.Length] = "Pending:"@PendingLockedTarget@PendingLockedTargetTime@WorldInfo.TimeSeconds;
}

Here we are simply stating which target our weapon is currently locked onto, in addition to the pending target. It does this by grabbing the variables we've listed before, after they've returned from their functions, which we'll add in the next part. We need to have a default state for our weapon to begin with, so we mark it as inactive.

/*********************************************************
* Default state. Go back to prev state, and don't use our
* current tick
********************************************************/
auto simulated state Inactive
{
    ignores Tick;

    simulated function BeginState(name PreviousStateName)
    {
        Super.BeginState(PreviousStateName);
        // not looking to lock onto a target
        bTargetLockingActive = false;
        // Don't adjust our target lock
        AdjustLockTarget(None);
    }

We ignore the tick, which tells the weapon to stop updating any of its homing functions. Additionally, we tell it not to look for an active target or adjust its current target, if we did have one at the moment.
While on the topic of states, if we finish our current one, then it's time to move on to the next:

    /*********************************************************
    * Finish current state, & prepare for the next one
    ********************************************************/
    simulated function EndState(Name NextStateName)
    {
        Super.EndState(NextStateName);
        // If true, weapon will try to lock onto targets
        bTargetLockingActive = true;
    }
}

If our weapon is destroyed or we are destroyed, then we want to prevent the weapon from continuing to lock onto a target.

/*********************************************************
* If the weapon is destroyed, cancel any target lock
********************************************************/
simulated event Destroyed()
{
    // Used to adjust the LockTarget.
    AdjustLockTarget(none);
    // Calls the previously defined Destroyed function
    super.Destroyed();
}

Our next chunk of code is pretty large, but don't let it intimidate you. Take your time and read it through to have a thorough understanding of what is occurring. When it all boils down, the CheckTargetLock() function verifies that we've actually locked onto our target. We start by checking that we have a pawn, a player controller, and that we are using a weapon which can lock onto a target. We then check if we can lock onto the target, and if it is possible, we do it. At the moment we only have the ability to lock onto pawns.
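Before wading into CheckTargetLock(), it helps to see its timing rules in isolation: a pending target must stay under the crosshair for LockAcquireTime seconds before it becomes the locked target, and an existing lock survives for at most LockTolerance seconds without a valid sighting. The following JavaScript sketch models just that bookkeeping; the names mirror the UnrealScript variables, but this is an illustration of the logic, not engine code.

```javascript
// Timing-only model of the lock-on bookkeeping described above.
function LockTracker(lockAcquireTime, lockTolerance) {
  this.acquire = lockAcquireTime;   // seconds a target must be painted
  this.tolerance = lockTolerance;   // seconds a lock survives unseen
  this.lockedTarget = null;
  this.pendingTarget = null;
  this.pendingSince = 0;
  this.lastLockedOnTime = 0;
}

// Called on every lock check with the current time and the best visible
// target (or null when nothing valid is under the crosshair).
LockTracker.prototype.update = function (now, bestTarget) {
  if (bestTarget !== null) {
    if (bestTarget === this.lockedTarget) {
      this.lastLockedOnTime = now;        // refresh the existing lock
    } else if (bestTarget !== this.pendingTarget) {
      this.pendingTarget = bestTarget;    // new candidate: start acquire timer
      this.pendingSince = now;
    } else if (now - this.pendingSince >= this.acquire) {
      this.lockedTarget = bestTarget;     // painted long enough: lock on
      this.lastLockedOnTime = now;
      this.pendingTarget = null;
    }
  }
  // Drop a stale lock once it has gone unseen for too long.
  if (this.lockedTarget !== null && now - this.lastLockedOnTime > this.tolerance) {
    this.lockedTarget = null;
  }
};
```

With LockAcquireTime of one second and LockTolerance of two, a target painted continuously becomes locked after a second, and the lock drops if the player looks away for more than two.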
/******************************************************************
* Have we locked onto our target?
******************************************************************/
function CheckTargetLock()
{
    local Actor BestTarget, HitActor, TA;
    local UDKBot BotController;
    local vector StartTrace, EndTrace, Aim, HitLocation, HitNormal;
    local rotator AimRot;
    local float BestAim, BestDist;

    if ( (Instigator == None) || (Instigator.Controller == None) || (self != Instigator.Weapon) )
    {
        return;
    }

    if ( Instigator.bNoWeaponFiring ) // TRUE indicates that weapon firing is disabled for this pawn
    {
        // Used to adjust the LockTarget.
        AdjustLockTarget(None);
        // "target" is currently pending to be locked on to
        PendingLockedTarget = None;
        return;
    }

    // We don't have a target
    BestTarget = None;
    BotController = UDKBot(Instigator.Controller);

    // If there is a BotController...
    if ( BotController != None )
    {
        // only try locking onto the bot's target
        if ( (BotController.Focus != None) && CanLockOnTo(BotController.Focus) )
        {
            // make sure the bot can hit it
            BotController.GetPlayerViewPoint( StartTrace, AimRot );
            Aim = vector(AimRot);
            if ( (Aim dot Normal(BotController.Focus.Location - StartTrace)) > LockAim )
            {
                HitActor = Trace(HitLocation, HitNormal, BotController.Focus.Location, StartTrace, true,,, TRACEFLAG_Bullet);
                if ( (HitActor == None) || (HitActor == BotController.Focus) )
                {
                    // Actor being looked at
                    BestTarget = BotController.Focus;
                }
            }
        }
    }

Immediately after that, we do a trace to see if our missile can hit the target, and check for anything that may be in the way. If we determine that we can't hit our target, then it's time to start looking for a new one.

    else
    {
        // Trace the shot to see if it hits anyone
        Instigator.Controller.GetPlayerViewPoint( StartTrace, AimRot );
        Aim = vector(AimRot);
        // Where our trace stops
        EndTrace = StartTrace + Aim * LockRange;
        HitActor = Trace(HitLocation, HitNormal, EndTrace, StartTrace, true,,, TRACEFLAG_Bullet);

        // Check for a hit
        if ( (HitActor == None) || !CanLockOnTo(HitActor) )
        {
            /** We didn't hit a valid target? Controller attempts to pick a good target */
            BestAim = ( (UDKPlayerController(Instigator.Controller) != None)
                && UDKPlayerController(Instigator.Controller).bConsolePlayer )
                ? ConsoleLockAim : LockAim;
            BestDist = 0.0;
            TA = Instigator.Controller.PickTarget(class'Pawn', BestAim, BestDist, Aim, StartTrace, LockRange);
            if ( TA != None && CanLockOnTo(TA) )
            {
                /** Best target is the target we've locked */
                BestTarget = TA;
            }
        }
        // We hit a valid target
        else
        {
            // Best Target is the one we've done a trace on
            BestTarget = HitActor;
        }
    }

If we have a possible target, then we note its time mark for locking onto it. If we can lock onto it, then we start the timer. The timer can be adjusted in the default properties and determines how long we need to track our target before we have a solid lock.

    // If we have a "possible" target, note its time mark
    if ( BestTarget != None )
    {
        LastValidTargetTime = WorldInfo.TimeSeconds;

        // If we're locked onto our best target
        if ( BestTarget == LockedTarget )
        {
            /** Set the LLOT to the time in seconds since level began play */
            LastLockedOnTime = WorldInfo.TimeSeconds;
        }

Once we have a good target, it should turn into our current one, and we start our lock on it. If we've been tracking it for enough time with our crosshair (PendingLockedTargetTime), then we lock onto it.

        else
        {
            if ( LockedTarget != None
                && ((WorldInfo.TimeSeconds - LastLockedOnTime > LockTolerance) || !CanLockOnTo(LockedTarget)) )
            {
                // Invalidate the current locked Target
                AdjustLockTarget(None);
            }

            /** We have our best target, see if they should become our
                current target. Check for a new pending lock */
            if ( PendingLockedTarget != BestTarget )
            {
                PendingLockedTarget = BestTarget;
                PendingLockedTargetTime = ( (Vehicle(PendingLockedTarget) != None)
                    && (UDKPlayerController(Instigator.Controller) != None)
                    && UDKPlayerController(Instigator.Controller).bConsolePlayer )
                    ? WorldInfo.TimeSeconds + 0.5 * LockAcquireTime
                    : WorldInfo.TimeSeconds + LockAcquireTime;
            }
            /** Otherwise check to see if we have been tracking the pending lock long enough */
            else if ( PendingLockedTarget == BestTarget
                && WorldInfo.TimeSeconds >= PendingLockedTargetTime )
            {
                AdjustLockTarget(PendingLockedTarget);
                LastLockedOnTime = WorldInfo.TimeSeconds;
                PendingLockedTarget = None;
                PendingLockedTargetTime = 0.0;
            }
        }
    }

Otherwise, if we can't lock onto our current or our pending target, then we cancel our current target, along with our pending target.

    else
    {
        if ( LockedTarget != None
            && ((WorldInfo.TimeSeconds - LastLockedOnTime > LockTolerance) || !CanLockOnTo(LockedTarget)) )
        {
            // Invalidate the current locked Target
            AdjustLockTarget(None);
        }

        // Next attempt to invalidate the Pending Target
        if ( PendingLockedTarget != None
            && ((WorldInfo.TimeSeconds - LastValidTargetTime > LockTolerance) || !CanLockOnTo(PendingLockedTarget)) )
        {
            // We are not pending another target to lock onto
            PendingLockedTarget = None;
        }
    }
}

That was quite a bit to digest. Don't worry, because the functions from here on out are pretty simple and straightforward. As with most other classes, we need a Tick() function to check for something in each frame. Here, we'll be checking whether or not we have a target locked in each frame, as well as setting our LastTargetLockCheckTime to the number of seconds passed during game-time.

/*********************************************************
* Check target locking with each update
*********************************************************/
event Tick( Float DeltaTime )
{
    if ( bTargetLockingActive
        && ( WorldInfo.TimeSeconds > LastTargetLockCheckTime + LockCheckTime ) )
    {
        // Time, in seconds, since level began play
        LastTargetLockCheckTime = WorldInfo.TimeSeconds;
        // Checks to see if we are locked on a target
        CheckTargetLock();
    }
}

As I mentioned earlier, we can only lock onto pawns. Therefore, we need a function to check whether or not our target is a pawn.
/*********************************************************
* Given a potential target TA, determine if we can lock on to it.
* By default, we can only lock on to pawns.
*********************************************************/
simulated function bool CanLockOnTo(Actor TA)
{
    if ( (TA == None) || !TA.bProjTarget || TA.bDeleteMe
        || (Pawn(TA) == None) || (TA == Instigator)
        || (Pawn(TA).Health <= 0) )
    {
        return false;
    }

    return ( (WorldInfo.Game == None) || !WorldInfo.Game.bTeamGame
        || (WorldInfo.GRI == None) || !WorldInfo.GRI.OnSameTeam(Instigator, TA) );
}

Once we have a locked target we need to trigger a sound, so that the player is aware of the lock. The whole first half of this function simply sets two variables to not have a target, and also plays a sound cue to notify the player that we've lost track of our target.

/*********************************************************
* Used to adjust the LockTarget.
*********************************************************/
function AdjustLockTarget(actor NewLockTarget)
{
    if ( LockedTarget == NewLockTarget )
    {
        // No need to update
        return;
    }

    if ( NewLockTarget == None )
    {
        // Clear the lock
        if ( bLockedOnTarget )
        {
            // No target
            LockedTarget = None;
            // Not locked onto a target
            bLockedOnTarget = false;
            if ( LockLostSound != None && Instigator != None
                && Instigator.IsHumanControlled() )
            {
                // Play the LockLostSound if we lost track of the target
                PlayerController(Instigator.Controller).ClientPlaySound(LockLostSound);
            }
        }
    }
    else
    {
        // Set the lock
        bLockedOnTarget = true;
        LockedTarget = NewLockTarget;
        LockedTargetPRI = (Pawn(NewLockTarget) != None)
            ? Pawn(NewLockTarget).PlayerReplicationInfo : None;
        if ( LockAcquiredSound != None && Instigator != None
            && Instigator.IsHumanControlled() )
        {
            PlayerController(Instigator.Controller).ClientPlaySound(LockAcquiredSound);
        }
    }
}

Once it looks like everything has checked out, we can fire our ammo!
We're just setting everything back to 0 at this point; as our projectile is seeking our target, it's time to start over and see whether we will use the same target or find another one.

/*********************************************************
* Everything looks good, so fire our ammo!
*********************************************************/
simulated function FireAmmunition()
{
    Super.FireAmmunition();
    AdjustLockTarget(None);
    LastValidTargetTime = 0;
    PendingLockedTarget = None;
    LastLockedOnTime = 0;
    PendingLockedTargetTime = 0;
}

With all of that out of the way, we can finally work on firing our projectile, or in our case, our missile. ProjectileFire() tells our missile to go after our currently locked target, by setting the SeekTarget variable to our currently locked target.

/*********************************************************
* If locked on, we need to set the Seeking projectile's
* LockedTarget.
*********************************************************/
simulated function Projectile ProjectileFire()
{
    local Projectile SpawnedProjectile;

    SpawnedProjectile = super.ProjectileFire();
    if ( bLockedOnTarget && UTProj_SeekingRocket(SpawnedProjectile) != None )
    {
        /** Go after the target we are currently locked onto */
        UTProj_SeekingRocket(SpawnedProjectile).SeekTarget = LockedTarget;
    }
    return SpawnedProjectile;
}

Really though, our projectile could be anything at this point. We need to tell our weapon to actually use our missile (or rocket; the terms are used interchangeably), which we will define in our defaultproperties block.
/*********************************************************
* We override GetProjectileClass to swap in a Seeking Rocket
* if we are locked on.
*********************************************************/
function class<Projectile> GetProjectileClass()
{
    // if we're locked on...
    if ( bLockedOnTarget )
    {
        // use our homing rocket
        return SeekingRocketClass;
    }
    // Otherwise...
    else
    {
        // Use our default projectile
        return WeaponProjectiles[CurrentFireMode];
    }
}

If we don't have a SeekingRocketClass defined, then we just use the projectile defined in our WeaponProjectiles array for the current fire mode. The last part of this class involves the defaultproperties block. This is the same thing we saw in our Camera class. We're setting our muzzle flash socket, which is used for not only firing effects, but also weapon traces, to actually use our muzzle flash socket.

defaultproperties
{
    // Forces the secondary fire projectile to fire from the weapon attachment
    MuzzleFlashSocket=MuzzleFlashSocket
}

Our MyWeapon class is complete. We don't want to clog our defaultproperties block and we have some great base functionality, so from here on out our weapon classes will generally be only changes to the defaultproperties block. Simplicity! Create a new class called MyWeapon_HomingRocket. Have it extend from MyWeapon.

class MyWeapon_HomingRocket extends MyWeapon;

In our defaultproperties block, let's add our skeletal and static meshes. We're just going to keep using the shock rifle mesh. Although it's not necessary to do this, as we're already a child class of (that is, inheriting from) UTWeap_ShockRifle, I still want you to see where you would change the mesh if you ever wanted to.
defaultproperties
{
    // Weapon SkeletalMesh
    Begin Object class=AnimNodeSequence Name=MeshSequenceA
    End Object

    // Weapon SkeletalMesh
    Begin Object Name=FirstPersonMesh
        SkeletalMesh=SkeletalMesh'WP_ShockRifle.Mesh.SK_WP_ShockRifle_1P'
        AnimSets(0)=AnimSet'WP_ShockRifle.Anim.K_WP_ShockRifle_1P_Base'
        Animations=MeshSequenceA
        Rotation=(Yaw=-16384)
        FOV=60.0
    End Object

    // Pickup Mesh
    Begin Object Name=PickupMesh
        SkeletalMesh=SkeletalMesh'WP_ShockRifle.Mesh.SK_WP_ShockRifle_3P'
    End Object

    // Attachment class
    AttachmentClass=class'UTGameContent.UTAttachment_ShockRifle'

Next, we want to declare the type of projectile, the type of damage it does, and the frequency at which it can be fired. Moreover, we want to declare that each shot fired will only deplete one round from our inventory. We can declare how much ammo the weapon starts with too.

    // Defines the type of fire for each mode
    WeaponFireTypes(0)=EWFT_InstantHit
    WeaponFireTypes(1)=EWFT_Projectile
    WeaponProjectiles(1)=class'UTProj_Rocket'

    // Damage types
    InstantHitDamage(0)=45
    FireInterval(0)=+1.0
    FireInterval(1)=+1.3
    InstantHitDamageTypes(0)=class'UTDmgType_ShockPrimary'
    InstantHitDamageTypes(1)=None // Not an instant hit weapon, so set to "None"

    // How much ammo will each shot use?
    ShotCost(0)=1
    ShotCost(1)=1

    // # of ammo gun should start with
    AmmoCount=20
    // Initial ammo count when taken from a weapon locker
    LockerAmmoCount=20
    // Max ammo count
    MaxAmmoCount=40

Our weapon will use a number of sounds that we didn't previously need, such as locking onto a pawn, as well as losing the lock. So let's add those now.
    // Sound effects
    WeaponFireSnd[0]=SoundCue'A_Weapon_ShockRifle.Cue.A_Weapon_SR_FireCue'
    WeaponFireSnd[1]=SoundCue'A_Weapon_RocketLauncher.Cue.A_Weapon_RL_Fire_Cue'
    WeaponEquipSnd=SoundCue'A_Weapon_ShockRifle.Cue.A_Weapon_SR_RaiseCue'
    WeaponPutDownSnd=SoundCue'A_Weapon_ShockRifle.Cue.A_Weapon_SR_LowerCue'
    PickupSound=SoundCue'A_Pickups.Weapons.Cue.A_Pickup_Weapons_Shock_Cue'
    LockAcquiredSound=SoundCue'A_Weapon_RocketLauncher.Cue.A_Weapon_RL_SeekLock_Cue'
    LockLostSound=SoundCue'A_Weapon_RocketLauncher.Cue.A_Weapon_RL_SeekLost_Cue'

We won't be the only one to use this weapon, as bots will be picking it up during Deathmatch style games as well. Therefore, we want to declare some logic for the bots, such as how strongly they will desire it, and whether or not they can use it for things like sniping.

    // AI logic
    MaxDesireability=0.65 // Max desireability for bots
    AIRating=0.65
    CurrentRating=0.65
    bInstantHit=false // Is it an instant hit weapon?
    bSplashJump=false
    // Can a bot use this for splash damage?
    bRecommendSplashDamage=true
    // Could a bot snipe with this?
    bSniping=false
    // Should it fire when the mouse is released?
    ShouldFireOnRelease(0)=0
    // Should it fire when the mouse is released?
    ShouldFireOnRelease(1)=0

We need to create an offset for the camera too, otherwise the weapon wouldn't display correctly as we switch between first and third person cameras.

    // Holds an offset for spawning projectile effects
    FireOffset=(X=20,Y=5)
    // Offset from view center (first person)
    PlayerViewOffset=(X=17,Y=10.0,Z=-8.0)

Our homing properties section is the bread and butter of our class. This is where you'll alter the default values for anything to do with locking onto pawns.

    // Homing properties
    /** Angle for locking onto lock targets when on console */
    ConsoleLockAim=0.992
    /** How far out should we be before considering actors for a lock?
    */
    LockRange=9000
    // Angle for locking, for lock targets
    LockAim=0.997
    // How often we check for lock
    LockChecktime=0.1
    // How long does the player need to hover over an actor to lock?
    LockAcquireTime=.3
    // How close does the trace need to be to the actual target
    LockTolerance=0.8
    SeekingRocketClass=class'UTProj_SeekingRocket'

Animations are an essential part of realism, so we want the camera to shake when firing a weapon, in addition to an animation for the weapon itself.

    // Camera anim to play when firing (for camera shakes)
    FireCameraAnim(1)=CameraAnim'Camera_FX.ShockRifle.C_WP_ShockRifle_Alt_Fire_Shake'
    // Animation to play when the weapon is fired
    WeaponFireAnim(1)=WeaponAltFire

While we're on the topic of visuals, we may as well add the flashes at the muzzle, as well as the crosshairs for the weapon.

    // Muzzle flashes
    MuzzleFlashPSCTemplate=WP_ShockRifle.Particles.P_ShockRifle_MF_Alt
    MuzzleFlashAltPSCTemplate=WP_ShockRifle.Particles.P_ShockRifle_MF_Alt
    MuzzleFlashColor=(R=200,G=120,B=255,A=255)
    MuzzleFlashDuration=0.33
    MuzzleFlashLightClass=class'UTGame.UTShockMuzzleFlashLight'
    CrossHairCoordinates=(U=256,V=0,UL=64,VL=64)
    LockerRotation=(Pitch=32768,Roll=16384)
    // Crosshair
    IconCoordinates=(U=728,V=382,UL=162,VL=45)
    IconX=400
    IconY=129
    IconWidth=22
    IconHeight=48
    /** The color used when drawing the weapon's name on the HUD */
    WeaponColor=(R=160,G=0,B=255,A=255)

Since weapons are part of a pawn's inventory, we need to declare which slot this weapon will fall into (from one to nine).

    // Inventory
    InventoryGroup=4 // The weapon/inventory set, 0-9
    GroupWeight=0.5  // Position within inventory group (used by prevweapon and nextweapon)

Our final piece of code has to do with rumble feedback with the Xbox gamepad. This is not only used on consoles, but also it is generally reserved for it.

    /** Manages the waveform data for a forcefeedback device, specifically for the xbox gamepads.
    */
    Begin Object Class=ForceFeedbackWaveform Name=ForceFeedbackWaveformShooting1
        Samples(0)=(LeftAmplitude=90,RightAmplitude=40,LeftFunction=WF_Constant,RightFunction=WF_LinearDecreasing,Duration=0.1200)
    End Object
    // Controller rumble to play when firing
    WeaponFireWaveForm=ForceFeedbackWaveformShooting1
}

All that's left to do is to add the weapon to your pawn's default inventory. You can easily do this by adding the following line to your TutorialGame class's defaultproperties block:

defaultproperties
{
    DefaultInventory(0)=class'MyWeapon_HomingRocket'
}

Load up your map with a few bots on it, hold your aiming reticule over one for a brief moment, and when you hear the lock sound, fire away!

How it works...

To keep things simple we extend from UTWeap_ShockRifle, which gave us a great bit of base functionality to work from. We created a MyWeapon class that offers everything the shock rifle does, plus the ability to lock onto targets. When we aim our target reticule over an enemy bot, the weapon checks for a number of things. First, it verifies that the target is an enemy and whether or not the target can be reached. It does this by drawing a trace, which returns any actors that fall in our weapon's path. If all of these things check out, then it begins to lock onto our target after we've held the reticule over the enemy for a set period of time. We then fire our projectile, which is either the projectile for the current fire mode or, when locked on, our seeking rocket. We didn't want to clutter the defaultproperties block for MyWeapon, so we created a child class called MyWeapon_HomingRocket that makes use of all the functionality and only changes the defaultproperties block, which influences the weapon's aesthetics, sound effects, and even some functionality related to the target lock.
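The pending-lock timing used by CheckTargetLock() is easier to see in isolation. The following is a simplified, hypothetical Python model of the same rules, not engine code; the class and method names are invented for illustration. A candidate must stay under the crosshair for the acquire time (LockAcquireTime) before it becomes a lock, and an established lock is dropped once it has gone unseen for longer than the tolerance period (LockTolerance).

```python
class LockOnTracker:
    """Simplified model of the weapon's pending-lock timing (illustrative only)."""

    def __init__(self, acquire_time=0.3, tolerance=0.8):
        self.acquire_time = acquire_time  # like LockAcquireTime
        self.tolerance = tolerance        # like LockTolerance
        self.locked = None                # like LockedTarget
        self.pending = None               # like PendingLockedTarget
        self.pending_ready_at = 0.0       # like PendingLockedTargetTime
        self.last_locked_seen = 0.0       # like LastLockedOnTime

    def update(self, now, target):
        """Call once per lock check with the target under the crosshair (or None).

        Returns the currently locked target, if any.
        """
        if target is None:
            # Nothing under the crosshair: drop a stale lock and any candidate
            if self.locked is not None and now - self.last_locked_seen > self.tolerance:
                self.locked = None
            self.pending = None
            return self.locked
        if target is self.locked:
            # Still looking at our locked target; refresh the sighting time
            self.last_locked_seen = now
        elif target is not self.pending:
            # New candidate: start the acquire timer
            self.pending = target
            self.pending_ready_at = now + self.acquire_time
        elif now >= self.pending_ready_at:
            # Tracked the candidate long enough: promote it to a full lock
            self.locked = target
            self.last_locked_seen = now
            self.pending = None
        return self.locked
```

A candidate seen at t=0.0 with the default acquire time of 0.3 seconds is still only pending at t=0.1, becomes locked at t=0.35, and the lock survives brief gaps in sighting but not a gap longer than the tolerance.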

Packt
29 Mar 2013
5 min read

Running a simple game using Pygame

How to do it...

Imports: First we will import the required Pygame modules. If Pygame is installed properly, we should get no errors:

import pygame, sys
from pygame.locals import *

Initialization: We will initialize Pygame by creating a display of 400 by 300 pixels and setting the window title to Hello World!:

pygame.init()
screen = pygame.display.set_mode((400, 300))
pygame.display.set_caption('Hello World!')

The main game loop: Games usually have a game loop, which runs forever until, for instance, a quit event occurs. In this example, we will only set a label with the text Hello World at coordinates (100, 100). The text has a font size of 19, a red color, and falls back to the default font:

while True:
    sys_font = pygame.font.SysFont("None", 19)
    rendered = sys_font.render('Hello World', 0, (255, 100, 100))
    screen.blit(rendered, (100, 100))
    for event in pygame.event.get():
        if event.type == QUIT:
            pygame.quit()
            sys.exit()
    pygame.display.update()

We get the following screenshot as the end result:

The following is the complete code for the Hello World example:

import pygame, sys
from pygame.locals import *

pygame.init()
screen = pygame.display.set_mode((400, 300))
pygame.display.set_caption('Hello World!')

while True:
    sys_font = pygame.font.SysFont("None", 19)
    rendered = sys_font.render('Hello World', 0, (255, 100, 100))
    screen.blit(rendered, (100, 100))
    for event in pygame.event.get():
        if event.type == QUIT:
            pygame.quit()
            sys.exit()
    pygame.display.update()

How it works...

It might not seem like much, but we learned a lot in this recipe. The functions we used are summarized in the following table:

pygame.init() — Performs the initialization; needs to be called before any other Pygame functions.
pygame.display.set_mode((400, 300)) — Creates a so-called Surface object to draw on. We give this function a tuple representing the width and height of the surface.
pygame.display.set_caption('Hello World!') — Sets the window title to a specified string value.
pygame.font.SysFont("None", 19) — Creates a system font from a comma-separated list of fonts (in this case none) and a font size parameter.
sys_font.render('Hello World', 0, (255, 100, 100)) — Draws text on a surface. The second parameter indicates whether anti-aliasing is used. The last parameter is a tuple representing the RGB values of a color.
screen.blit(rendered, (100, 100)) — Draws one surface onto another.
pygame.event.get() — Gets a list of Event objects. Events represent some special occurrence in the system, such as a user quitting the game.
pygame.quit() — Cleans up resources used by Pygame. Call this function before exiting the game.
pygame.display.update() — Refreshes the surface.

Drawing with Pygame

Before we start creating cool games, we need an introduction to the drawing functionality of Pygame. As we noticed in the previous section, in Pygame we draw on Surface objects. There is a myriad of drawing options: different colors, rectangles, polygons, lines, circles, ellipses, animation, and different fonts.

How to do it...
The following steps will help you explore the different drawing options you can use with Pygame:

Imports: We will need the NumPy library to randomly generate RGB values for the colors, so we will add an extra import for that:

import numpy

Initializing colors: Generate four tuples containing three RGB values each with NumPy:

colors = numpy.random.randint(0, 255, size=(4, 3))

Then define the white color as a variable:

WHITE = (255, 255, 255)

Set the background color: We can make the whole screen white with the following code:

screen.fill(WHITE)

Drawing a circle: Draw a circle in the center of the window using the first color we generated:

pygame.draw.circle(screen, colors[0], (200, 200), 25, 0)

Drawing a line: To draw a line we need a start point and an end point. We will use the second random color and give the line a thickness of 3:

pygame.draw.line(screen, colors[1], (0, 0), (200, 200), 3)

Drawing a rectangle: When drawing a rectangle, we are required to specify a color, the coordinates of the upper-left corner of the rectangle, and its dimensions:

pygame.draw.rect(screen, colors[2], (200, 0, 100, 100))

Drawing an ellipse: You might be surprised to discover that drawing an ellipse requires similar parameters as for rectangles.
The parameters actually describe an imaginary rectangle that can be drawn around the ellipse:

pygame.draw.ellipse(screen, colors[3], (100, 300, 100, 50), 2)

The resulting window shows a circle, line, rectangle, and ellipse drawn with random colors.

The code for the drawing demo is as follows:

import pygame, sys
from pygame.locals import *
import numpy

pygame.init()
screen = pygame.display.set_mode((400, 400))
pygame.display.set_caption('Drawing with Pygame')

colors = numpy.random.randint(0, 255, size=(4, 3))
WHITE = (255, 255, 255)

# Make screen white
screen.fill(WHITE)

# Circle in the center of the window
pygame.draw.circle(screen, colors[0], (200, 200), 25, 0)

# Half diagonal from the upper-left corner to the center
pygame.draw.line(screen, colors[1], (0, 0), (200, 200), 3)

pygame.draw.rect(screen, colors[2], (200, 0, 100, 100))
pygame.draw.ellipse(screen, colors[3], (100, 300, 100, 50), 2)

while True:
    for event in pygame.event.get():
        if event.type == QUIT:
            pygame.quit()
            sys.exit()
    pygame.display.update()

Summary

Here we saw how to create a basic game to get us started. The game demonstrated fonts and screen management in the time-honored tradition of Hello World examples. The section on drawing with Pygame taught us how to draw basic shapes such as rectangles, ellipses, circles, and lines. We also learned important information about colors and color management.

Resources for Article:

Further resources on this subject: Using Execnet for Parallel and Distributed Processing with NLTK [Article] TortoiseSVN: Getting Started [Article] Python 3 Object Oriented Programming: Managing objects [Article]

Packt
07 Mar 2013
22 min read

Thumping Moles for Fun

(For more resources related to this topic, see here.)

The project is…

In this article, we will be building a mole thumping game. Inspired by mechanical games of the past, we will build molehills on the screen and randomly cause animated moles to pop their heads out. The player taps them to score. Simple in concept, but there are a few challenging design considerations in this deceptively easy game. To make this game a little unusual, we will be using a penguin instead of a mole for the graphics, but we will continue to use the mole terminology throughout, since a molehill is easier to consider than a penguin-hill.

Design approach

Before diving into the code, let's start with a discussion of the design of the game. First, we will need to have molehills on the screen. To be aesthetically pleasing, the molehills will be in a 3 x 4 grid. Another approach would be to use random molehill positions, but that doesn't really work well on the limited screen space of the iPhone. Moles will randomly spawn from the molehills. Each mole will rise up, pause, and drop down. We will need touch handling to detect when a mole has been touched, and that mole will need to increase the player's score and then go away. How do we make the mole come up from underground? If we assume the ground is a big sprite with the molehills drawn on it, we would need to determine where to make the "slot" from which the mole emerges, and somehow make the mole disappear when it is below that slot. One approach is to adjust the size of the mole's displayed frame by clipping the bottom of the image so that the part below the ground is not visible. This needs to be done as a part of every update cycle for every mole for the entire game. From a programming standpoint this will work, but you may experience performance issues. Another consideration is that this usually means the hole in the molehill will always appear to be a straight-edged hole, if we trim the sprite with a straight line.
This lacks the organic feel we want for this game. The approach we will take is to use Z-ordering to trick the eye into seeing a flat playfield when everything is really on staggered Z-orders. We will create a "stair step" board, with multiple "sandwiches" of graphics for every row of molehills on the board. For each "step" of the "stair step", we have a sandwich of Z-ordered elements in this order, from back to front: molehill top, mole, ground, and molehill bottom. We need to have everything aligned so that the molehill top graphic overlaps the ground of the next "step" further towards the top of the screen. This will visually contain the mole, so it appears to be emerging from inside the molehill. We intentionally skipped the Z value of 1, to provide an extra expansion space if we later decide that we need another element in the "sandwich". It is easier to leave little holes like this than to worry about changing everything later, if we enhance our design. So throughout our layout, we will consider it as a sandwich of five Z values, even though we only use four elements in the sandwich. As we said, we need this to be a "stair step" board. So for each row of molehills, from the top of the screen to the bottom, we will need to increase the Z-ordering between layers to complete the illusion. This is needed so that each mole will actually pass in front of the ground layer that is closer to the top of the screen, yet will hide completely behind the ground layer in its own sandwich of layers.

Designing the spawn

That covers the physical design of the game, but there is one additional design aspect we need to discuss: spawning moles. We need to spawn the moles whenever we need one to be put into play. Just as we reviewed two approaches to the hiding mole problem earlier, we will touch on two approaches to mole spawning. The first approach (and most common) is to create a new mole from scratch each time you need one. When you are done with it, you destroy it.
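Before moving on to spawning, the stair-step Z-ordering described above can be sketched numerically. The following Python sketch is purely illustrative: the hill-top (2) and hill-bottom (5) offsets match the drawHills code shown later in this article, while the mole (3) and ground (4) offsets are assumptions inferred from the back-to-front sandwich order described above.

```python
# Five-slot Z "sandwich" for one row of molehills, back to front.
# Slot 1 is intentionally skipped to leave room for expansion.
SANDWICH = {"hillTop": 2, "mole": 3, "ground": 4, "hillBottom": 5}

def z_for(row, element):
    """Z value for a sandwich element in a given row (row 1 = bottom of screen).

    The base value mirrors newHillZ in drawHills: it starts at 5 for row 1
    and drops to 2 for row 4, so rows nearer the top of the screen render
    further back.
    """
    base = 6 - row
    return SANDWICH[element] + base * 5

# A mole in row 2 renders in front of the ground one step up-screen (row 3),
# yet stays hidden behind the ground of its own row:
row2_mole = z_for(2, "mole")      # 3 + 4*5 = 23
row3_ground = z_for(3, "ground")  # 4 + 3*5 = 19
row2_ground = z_for(2, "ground")  # 4 + 4*5 = 24
```

This reproduces the illusion described above: 23 is greater than 19 (the mole passes in front of the up-screen ground) but less than 24 (it hides behind its own ground layer).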
This works fine for games with a small number of objects or games of more limited complexity, but there is a performance penalty to create and destroy a lot of objects in a short amount of time. Strictly speaking, our mole thumping game would likely work fine with this approach. Even though we will be creating and destroying quite a few moles all the time, we only have a dozen possible moles, not hundreds. The other approach is to create a spawning pool. This is basically a set number of the objects that are created when you start up. When you need a mole, in our case, you ask the pool for an unused "blank mole", set any parameters that are needed, and use it. When you are done with it, you reset it back to the "blank mole" state, and it goes back into the pool. For our game the spawning pool might be a little more heavily coded than needed, as it is doubtful that we would run into any performance issues with this relatively simple game. Still, if you are willing to build the additional code as we are doing here, it does provide a strong foundation to add more performance-heavy effects later on. To clarify our design approach, we will actually implement a variation of the traditional spawning pool. Instead of a general pool of moles, we will build our "blank mole" objects attached to their molehills. A more traditional spawning pool might have six "blank moles" in the pool, and they are assigned to a molehill when they are needed. Both approaches are perfectly valid.

Portrait mode

The default orientation supported by cocos2d is landscape mode, which is more commonly used in games. However, we want our game to be in portrait mode. The changes are very simple to make this work. If you click once on the project name (and blue icon) in the Project Navigator pane (where all your files are listed), and then click on the name of your game under TARGETS, you will see the Summary pane.
Under the Supported Interface Orientations, select Portrait, and deselect Landscape Left and Landscape Right. That will change your project to portrait. The one adjustment to the cocos2d template code we need is in IntroLayer.m. After it sets the background to Default.png, there is a command to rotate the background. Remove or comment out this line, and everything will work correctly.

Custom TTF fonts

In this project we will be using a custom TTF font. In cocos2d 1.x, you could simply add the font to your project and use it. Under cocos2d 2.0, which we are using, we have to approach this a little differently. We add the font to our project (we are using anudrg.ttf). Then we edit the Info.plist for our project, and add a new key to the list, like this:

This tells the project that we need to know about this font. To actually use the font, we need to call it by the proper name for the font, not the filename. To find out this name, in Finder, select the file and choose File > Get Info. In the info box, there is an entry for Full Name. In our case, the full name is AnuDaw. Any time we create a label with CCLabelTTF, we simply need to use this as the font name, and everything works perfectly.

Defining a molehill

We have created a new subclass of CCNode to represent the MXMoleHill object. Yes, we will be using a subclass of CCNode, not a subclass of CCSprite. Even though we initially would consider the molehill to be a sprite, referring back to our design, it is actually made up of two sprites, one for the top of the hill and one for the bottom. We will use CCNode as a container that will then contain two CCSprite objects as variables inside the MXMoleHill class.
Filename: MXMoleHill.h

@interface MXMoleHill : CCNode {
    NSInteger moleHillID;
    CCSprite *moleHillTop;
    CCSprite *moleHillBottom;
    NSInteger moleHillBaseZ;
    MXMole *hillMole;
    BOOL isOccupied;
}

@property (nonatomic, assign) NSInteger moleHillID;
@property (nonatomic, retain) CCSprite *moleHillTop;
@property (nonatomic, retain) CCSprite *moleHillBottom;
@property (nonatomic, assign) NSInteger moleHillBaseZ;
@property (nonatomic, retain) MXMole *hillMole;
@property (nonatomic, assign) BOOL isOccupied;

@end

If this seems rather sparse to you, it is. As we will be using this as a container for everything that defines the hill, we don't need to override any methods from the standard CCNode class. Likewise, the @implementation file contains nothing but the @synthesize statements for these variables. It is worth pointing out that we could have used a CCSprite object for the hillTop sprite, with the hillBottom object as a child of that sprite, and achieved the same effect. However, we prefer consistency in our object structure, so we have opted to use the structure noted previously. This allows us to refer to the two sprites in exactly the same fashion, as they are both children of the same parent.

Building the mole

When we start building the playfield, we will be creating "blank mole" objects for each hill, so we need to look at the MXMole class before we build the playfield. Following the same design decision as we did with the MXMoleHill class, the MXMole class is also a subclass of CCNode.

Filename: MXMole.h

#import <Foundation/Foundation.h>
#import "cocos2d.h"
#import "MXDefinitions.h"
#import "SimpleAudioEngine.h"

// Forward declaration, since we don't want to import it here
@class MXMoleHill;

@interface MXMole : CCNode {
    CCSprite *moleSprite;    // The sprite for the mole
    MXMoleHill *parentHill;  // The hill for this mole
    float moleGroundY;       // Where "ground" is
    MoleState _moleState;    // Current state of the mole
    BOOL isSpecial;          // Is this a "special" mole?
}

@property (nonatomic, retain) MXMoleHill *parentHill;
@property (nonatomic, retain) CCSprite *moleSprite;
@property (nonatomic, assign) float moleGroundY;
@property (nonatomic, assign) MoleState moleState;
@property (nonatomic, assign) BOOL isSpecial;

-(void) destroyTouchDelegate;

@end

We see a forward declaration here (the @class statement). Use of a forward declaration avoids creating a circular import loop, because the MXMoleHill.h file needs to import MXMole.h. In our case, MXMole needs to know there is a valid class called MXMoleHill, so we can store a reference to an MXMoleHill object in the parentHill instance variable, but we don't actually need to import the class. The @class declaration is an instruction to the compiler that there is a valid class called MXMoleHill, but it doesn't actually import the header while compiling the MXMole class. If we needed to call the methods of MXMoleHill from the MXMole class, we could then put the actual #import "MXMoleHill.h" line in the MXMole.m file. For our current project, we only need to know the class exists, so we don't need that additional line in the MXMole.m file. We have built a simple state machine for MoleState. Now that we have reviewed the MXMole.h file, we have a basic idea of what makes up a mole. It tracks the state of the mole (dead, alive, and so on), it keeps a reference to its parent hill, and it has a CCSprite child where the actual mole sprite will be held. There are a couple of other variables (moleGroundY and isSpecial), but we will deal with these later.

Filename: MXDefinitions.h

typedef enum {
    kMoleDead = 0,
    kMoleHidden,
    kMoleMoving,
    kMoleHit,
    kMoleAlive
} MoleState;

#define SND_MOLE_NORMAL @"penguin_call.caf"
#define SND_MOLE_SPECIAL @"penguin_call_echo.caf"
#define SND_BUTTON @"button.caf"

Unlike in the previous article, we do not have the typedef enum that defines the MoleState type inside the class header file.
We have moved our definitions to the MXDefinitions.h file, which helps to maintain slightly cleaner code. You can store these "universal" definitions in a single header file, and include the header in any .h or .m files where they are needed, without needing to import classes just to gain access to these definitions. The MXDefinitions.h file only includes the definitions; there are no @interface or @implementation sections, nor a related .m file.
Making a molehill
We have our molehill class and we've seen the mole class, so now we can look at how we actually build the molehills in the MXPlayfieldLayer class: Filename: MXPlayfieldLayer.m -(void) drawHills { NSInteger hillCounter = 0; NSInteger newHillZ = 6; // We want to draw a grid of 12 hills for (NSInteger row = 1; row <= 4; row++) { // Each row reduces the Z order newHillZ--; for (NSInteger col = 1; col <= 3; col++) { hillCounter++; // Build a new MXMoleHill MXMoleHill *newHill = [[MXMoleHill alloc] init]; [newHill setPosition:[self hillPositionForRow:row andColumn:col]]; [newHill setMoleHillBaseZ:newHillZ]; [newHill setMoleHillTop:[CCSprite spriteWithSpriteFrameName:@"pileTop.png"]]; [newHill setMoleHillBottom:[CCSprite spriteWithSpriteFrameName:@"pileBottom.png"]]; [newHill setMoleHillID:hillCounter]; // We position the two moleHill sprites so // the "seam" is at the edge.
We use the // size of the top to position both, // because the bottom image // has some overlap to add texture [[newHill moleHillTop] setPosition: ccp(newHill.position.x, newHill.position.y + [newHill moleHillTop].contentSize.height / 2)]; [[newHill moleHillBottom] setPosition: ccp(newHill.position.x, newHill.position.y - [newHill moleHillTop].contentSize.height / 2)]; //Add the sprites to the batch node [molesheet addChild:[newHill moleHillTop] z:(2 + (newHillZ * 5))]; [molesheet addChild:[newHill moleHillBottom] z:(5 + (newHillZ * 5))]; //Set up a mole in the hill MXMole *newMole = [[MXMole alloc] init]; [newHill setHillMole:newMole]; [[newHill hillMole] setParentHill:newHill]; [newMole release]; // This flatlines the values for the new mole [self resetMole:newHill]; [moleHillsInPlay addObject:newHill]; [newHill release]; } } } This is a pretty dense method, so we'll walk through it one section at a time. We start by creating two nested for loops so we can iterate over every possible row and column position. For clarity, we named our loop variables as row and column, so we know what each represents. If you recall from the design, we decided to use a 3 x 4 grid, so we will have three columns and four rows of molehills. We create a new hill using an alloc/init, and then we begin filling in the variables. We set an ID number (1 through 12), and we build CCSprite objects to fill in the moleHillTop and moleHillBottom variables. Filename: MXPlayfieldLayer.m -(CGPoint) hillPositionForRow:(NSInteger)row andColumn:(NSInteger)col { float rowPos = row * 82; float colPos = 54 + ((col - 1) * 104); return ccp(colPos,rowPos); } We also set the position using the helper method, hillPositionForRow:andColumn:, that returns a CGPoint for each molehill. (It is important to remember that ccp is a cocos2d shorthand term for a CGPoint. They are interchangeable in your code.) 
These calculations are based on experimentation with the layout, to create a grid that is both easy to draw as well as being visually appealing. The one variable that needs a little extra explaining is moleHillBaseZ. This represents which "step" of the Z-order stair-step design this hill belongs to. We use this to aid in the calculations to determine the proper Z-ordering across the entire playfield. If you recall, we used Z-orders from 2 to 5 in the illustration of the stack of elements. When we add the moleHillTop and moleHillBottom as children of the molesheet (our CCSpriteBatchNode), we add the Z-order of that piece of the sandwich to the "base Z" times 5. We will use a "base Z" of 5 for the stack at the bottom of the screen, and a "base Z" of 2 at the top of the screen. The reason will be easier to understand if we look at the following chart, which shows the calculations we use for each row of molehills: As we start building our molehills at the bottom of the screen, we start with a higher Z-order first. In the preceding chart, you will see that the mole in hole 4 (second row of molehills from the bottom) will have a Z-order of 23. This will put it behind its own ground layer, which is at a Z-order of 24, but in front of the ground higher on the screen, which would be at a Z-order of 19. It is worth calling out that since we have a grid of molehills in our design, all Z-ordering will be identical for all molehills in the same row. This is why the decrement of the newHillZ variable occurs only when we are iterating through a new row. If we refer back to the drawHills method itself, we also see a big calculation for the actual position of the moleHillTop and moleHillBottom sprites. We want the "seam" between these two sprites to be at the top edge of the ground image of their stack, so we set the y position based on the position of the MXMoleHill object.
At first it may look like an error, because both setPosition statements use contentSize of the moleHillTop sprite as a part of the calculation. This is intentional, because we have a little jagged overlap between those two sprites to give it a more organic feel. To wrap up the drawHills method, we allocate a new MXMole, assign it to the molehill that was just created, and set the cross-referencing hillMole and parentHill variables in the objects themselves. We add the molehill to our moleHillsInPlay array, and we clean everything up by releasing both the newHill and the newMole objects. Because the array retains a reference to the molehill, and the molehill retains a reference to the mole, we can safely release both the newHill and newMole objects in this method. Drawing the ground Now that we have gone over the Z-ordering "trickery", we should look at the drawGround method to see how we accomplish the Z-ordering in a similar fashion: Filename: MXPlayfieldLayer.m -(void) drawGround { // Randomly select a ground image NSString *groundName; NSInteger groundPick = CCRANDOM_0_1() * 2; switch (groundPick) { case 1: groundName = @"ground1.png"; break; default: // Case 2 also falls through here groundName = @"ground2.png"; break; } // Build the strips of ground from the selected image for (int i = 0; i < 5; i++) { CCSprite *groundStrip1 = [CCSprite spriteWithSpriteFrameName:groundName]; [groundStrip1 setAnchorPoint:ccp(0.5,0)]; [groundStrip1 setPosition:ccp(size.width/2,i*82)]; [molesheet addChild:groundStrip1 z:4+((5-i) * 5)]; } // Build a skybox skybox = [CCSprite spriteWithSpriteFrameName:@"skybox1.png"]; [skybox setPosition:ccp(size.width/2,5*82)]; [skybox setAnchorPoint:ccp(0.5,0)]; [molesheet addChild:skybox z:1]; } This format should look familiar to you. We create five CCSprite objects for the five stripes of ground, tile them from the bottom of the screen to the top, and assign the Z-order as z:4+((5-i) * 5). 
We do include a randomizer with two different background images, and we also include a skybox image at the top of the screen, because we want some sense of a horizon line above the mole-thumping area. anchorPoint is the point that is basically "center" for the sprite. The acceptable values are floats between 0 and 1. For the x axis, an anchorPoint of 0 is the left edge, and 1 is the right edge (0.5 is centered). For the y axis, an anchorPoint of 0 is the bottom edge, and 1 is the top edge. This anchorPoint is important here because that anchorPoint is the point on the object to which the setPosition method will refer. So in our code, the first groundStrip1 created will be anchored at the bottom center. When we call setPosition, the coordinate passed to setPosition needs to relate to that anchorPoint; the position set will be the bottom center of the sprite. If this is still fuzzy for you, it is a great exercise to change anchorPoint of your own CCSprite objects and see what happens on the screen. 
Mole spawning The only piece of the "sandwich" of elements we haven't seen in detail is the mole itself, so let's visit the mole spawning method to see how the mole fits in with our design: Filename: MXPlayfieldLayer.m -(void) spawnMole:(id)sender { // Spawn a new mole from a random, unoccupied hill NSInteger newMoleHill; BOOL isApprovedHole = FALSE; NSInteger rand; if (molesInPlay == [moleHillsInPlay count] || molesInPlay == maxMoles) { // Holes full, cannot spawn a new mole } else { // Loop until we pick a hill that isn't occupied do { rand = CCRANDOM_0_1() * maxHills; if (rand > maxHills) { rand = maxHills; } MXMoleHill *testHill = [moleHillsInPlay objectAtIndex:rand]; // Look for an unoccupied hill if ([testHill isOccupied] == NO) { newMoleHill = rand; isApprovedHole = YES; [testHill setIsOccupied:YES]; } } while (isApprovedHole == NO); // Mark that we have a new mole in play molesInPlay++; // Grab a handle on the mole Hill MXMoleHill *thisHill = [moleHillsInPlay objectAtIndex:newMoleHill]; NSInteger hillZ = [thisHill moleHillBaseZ]; // Set up the mole for this hill CCSprite *newMoleSprite = [CCSprite spriteWithSpriteFrameName:@"penguin_forward.png"]; [[thisHill hillMole] setMoleSprite:newMoleSprite]; [[thisHill hillMole] setMoleState:kMoleAlive]; // We keep track of where the ground level is [[thisHill hillMole] setMoleGroundY: thisHill.position.y]; // Set the position of the mole based on the hill float newMolePosX = thisHill.position.x; float newMolePosY = thisHill.position.y - (newMoleSprite.contentSize.height/2); [newMoleSprite setPosition:ccp(newMolePosX, newMolePosY)]; // See if we need this to be a "special" mole NSInteger moleRandomizer = CCRANDOM_0_1() * 100; // If we randomized under 5, make this special if (moleRandomizer < 5) { [[thisHill hillMole] setIsSpecial:YES]; } //Trigger the new mole to raise [molesheet addChild:newMoleSprite z:(3 + (hillZ * 5))]; [self raiseMole:thisHill]; } } The first thing we check is to make sure we don't have active 
moles in every molehill, and that we haven't reached the maximum number of simultaneous moles we want on screen at the same time (the maxMoles variable). If we have enough moles, we skip the rest of the method. If we need a new mole, we enter a do…while loop that will randomly pick a molehill and check if it has the isOccupied variable set to NO (that is, no active mole in this molehill). If the randomizer picks a molehill that is already occupied, the do…while loop will pick another molehill and try again. When we find an unoccupied molehill, the code breaks out of the loop and starts to set up the mole. As we saw earlier, there is already a "blank mole" attached to every molehill. At this point we build a new sprite to attach to the moleSprite variable of MXMole, change the moleState to kMoleAlive, and set up the coordinates for the mole to start. We want the mole to start from underground (hidden by the ground image), so we set the mole's y position as the position of the molehill minus half the height of the mole. Once we have set up the mole, we assign our calculated Z-order for this mole (based on the moleHillBaseZ variable we stored earlier for each molehill), and call the raiseMole method, which controls the animation and movement of the mole.
Special moles
We have seen two references to the isSpecial variable from the MXMole class, so now is a good time to explain how it is used. In order to break the repetitive nature of the game, we have added a "special mole" feature. When a new mole is requested to spawn in the spawnMole method, we generate a random number between 1 and 100. If the resulting number is less than five, then we set the isSpecial flag for that mole. This means that roughly 5 percent of the time the player will get a special mole. Our special moles use the same graphics as the standard mole, but we will make them flash a rainbow of colors when they are in play.
It is a small difference, but enough to set up the scoring to give extra points for the "special mole". To implement this special mole, we only need to adjust code in three logic areas: When raiseMole is setting the mole's actions (to make it flashy) When we hit the mole (to play a different sound effect) When we score the mole (to score more points) This is a very small task, but it is the small variations in the gameplay that will draw the players in further. Let's see the game with a special mole in play:
Moving moles
When we call the raiseMole method, we build all of the mole's behavior. The absolute minimum we need is to raise the mole from the hill and lower it again. For our game, we want to add a little randomness to the behavior, so that we don't see exactly the same motions for every mole. We use a combination of pre-built animations with actions to achieve our result. As we haven't used any CCAnimate calls before, we should talk about them first.
The animation cache
Cocos2d has many useful caches to store frequently used data. When we use a CCSpriteBatchNode, we are using the CCSpriteFrameCache to store all of the sprites we need by name. There is an equally useful CCAnimationCache as well. It is simple to use. You build your animation as a CCAnimation, and then load it to the CCAnimationCache by whatever name you would like. When you want to use your named animation, you can create a CCAnimate action that loads directly from CCAnimationCache. The only caution is that if you load two animations with the same name to the cache, they will collide in the cache, and the second one will replace the first. For our project, we preload the animation during the init method by calling the buildAnimations method. We only use one animation here, but you could preload as many as you need to the cache ahead of time.
Filename: MXPlayfieldLayer.m -(void) buildAnimations { // Load the Animation to the CCSpriteFrameCache NSMutableArray *frameArray = [NSMutableArray array]; // Load the frames [frameArray addObject:[[CCSpriteFrameCache sharedSpriteFrameCache] spriteFrameByName:@"penguin_forward.png"]]; [frameArray addObject:[[CCSpriteFrameCache sharedSpriteFrameCache] spriteFrameByName:@"penguin_left.png"]]; [frameArray addObject:[[CCSpriteFrameCache sharedSpriteFrameCache] spriteFrameByName:@"penguin_forward.png"]]; [frameArray addObject:[[CCSpriteFrameCache sharedSpriteFrameCache] spriteFrameByName:@"penguin_right.png"]]; [frameArray addObject:[[CCSpriteFrameCache sharedSpriteFrameCache] spriteFrameByName:@"penguin_forward.png"]]; [frameArray addObject:[[CCSpriteFrameCache sharedSpriteFrameCache] spriteFrameByName:@"penguin_forward.png"]]; // Build the animation CCAnimation *newAnim = [CCAnimation animationWithSpriteFrames:frameArray delay:0.4]; // Store it in the cache [[CCAnimationCache sharedAnimationCache] addAnimation:newAnim name:@"penguinAnim"]; } We only have three unique frames of animation, but we load them multiple times into the frameArray to fit our desired animation. We create a CCAnimation object from the frameArray, and then commit it to CCAnimationCache under the name penguinAnim. Now that we have loaded it to the cache, we can reference it anywhere we want it, just by requesting it from CCAnimationCache, like this: [[CCAnimationCache sharedAnimationCache] animationByName:@"penguinAnim"]]

Packt
06 Mar 2013
9 min read

Creating Virtual Landscapes

Describing a world in data
Just like modern games, early games like Ant Attack required data that described in some meaningful way how the landscape was to appear. The eerie city landscape of "Antchester" (shown in the following screenshot) was constructed in memory as a 128 x 128 byte grid: the first 128 bytes defined the upper-left wall, then the 128 byte row below that, and so on. Each of these bytes described the vertical arrangement of blocks in its lower six bits; the upper two bits were reserved for game logic purposes, tracking the game sprites.
Heightmaps are common ground
The arrangement of numbers in a grid pattern is still extensively used to represent terrain. We call these grids "maps" and they are popular by virtue of being simple to use and manipulate. A long way from "Antchester", maps can now be measured in megabytes or gigabytes (around 20GB is needed for the whole earth at 30 meter resolution). Each value in the map represents the height of the terrain at that location. These kinds of maps are known as heightmaps. However, maps can hold any information that can be represented in a grid pattern. Additional maps can be used by 3D engines to tell the engine how to mix many textures together; this is a common terrain painting technique known as "splatting". Splats describe the amount of blending between texture layers. Another kind of map might be used for lighting, adding light or shadows to an area of the map. We also find in some engines something called visibility maps, which hide parts of the terrain; for example, we might want to add holes or caves into a landscape. Coverage maps might be used to represent objects such as grasses; different vegetation layers might have some kind of map the engine uses to draw 3D objects onto the terrain surface. GROME allows us to create and edit all of these kinds of maps and export them; with a little bit of manipulation we can port this information into most game engines.
Whatever the technique used by an engine to paint the terrain, heightmaps are fairly universal in how they are used to describe topography. The following is an example of a heightmap loaded into an image viewer. It appears as a gray scale image; the intensity of each pixel represents a height value at that location on the map. This map represents a 100 square kilometer area of north-west Afghanistan used in a flight simulation. GROME, like many other terrain editing tools, uses heightmaps to transport terrain information, typically importing the heightmap as a gray scale image using common file formats such as TIFF, PNG, or BMP. When it's time to export the terrain project you have similar options to save. This commonality is the basis of using GROME as a tool for many different engines. There's nothing to stop you from making changes to an exported heightmap using image editing software. The GROME plugin system and SDK permit you to make your own custom exporter for any unsupported formats. So long as we can deal with the material and texture format requirements for our host 3D engine, we can integrate GROME into the art pipeline.
Texture sizes
Using textures for heightmap information does have limitations. The largest "safe" size for a texture is considered 4096 x 4096, although some of the older 3D cards would have problems with anything higher than 2048 x 2048. Also, host 3D engines often require texture dimensions to be a power of 2. A table of recommended dimensions for images follows:
Safe texture dimensions
64 x 64
128 x 128
256 x 256
512 x 512
1024 x 1024
2048 x 2048
4096 x 4096
512 x 512 provides an efficient trade-off between resolution and performance and is the default value for GROME operations. If you're familiar with this already then great, but you might see questions posted on forums about texture corruption or materials not looking correct. Sometimes these problems are the result of not conforming to this arrangement.
Also, you'll see these numbers crop up a few times in GROME's drop-down property boxes. To avoid any potential problems it is wise to ensure any textures you use in your projects conform to these specifications. One exception is the Unreal Development Kit (UDK), in which you'll see numbers such as 257 x 257 used. If you have a huge amount of terrain data that you need to import for a project you can use the texture formats mentioned earlier, but I recommend using RAW formats if possible. If your project is based on real-world topography then importing DTED or GeoTIFF data will extract geographical information such as latitude, longitude, and the number of arc seconds represented by the terrain.
Digital Terrain Elevation Data (DTED)
A file format used by geographers and mappers to map the height of a terrain. Often used to import real-world topography into flight simulations. Shuttle Radar Topography Mission (SRTM) data is easily obtained and converted.
The huge world problem
Huge landscapes may require a lot of memory, potentially more than a 3D card can handle. On game consoles memory is a scarce resource, and on mobile devices the cost of transferring and storing the app is a factor. Even on a cutting edge PC, large datasets will eat into that onboard memory, especially when we get down to designing and building them using high-resolution data. Requesting actions that eat up your system memory may cause the application to fail. We can use GROME to create vast worlds without worrying too much about memory. This is done by taking advantage of how GROME manages data through a process of splitting terrain into "zones" and swapping it out to disk. This swapping is similar to how operating systems move memory to disk and reload it on demand. By default, whenever you import large DTED files GROME will break the region into multiple zones and hide them. Someone new to GROME might be confused by a lengthy file import operation only to be presented with a seemingly empty project space.
When creating terrain for engines such as Unity, UDK, Ogre3D, and others you should keep in mind their own technical limitations on what they can reasonably import. Most of these engines are built for small scale scenes. While GROME doesn't impose any specific unit of measure on your designs, one unit equals one meter is a good rule of thumb. Many third-party models are made to this scale. However, it's up to the artist to pick a unit of scale and, importantly, be consistent. Keep in mind many 3D engines are limited by two factors:
Floating point math precision
Z-buffer (depth buffer) precision
Floating point precision
As a general rule, anything larger than 20,000 units away from the world origin in any direction is going to exhibit precision errors. This manifests as vertex jitter whenever vertices are rotated and transformed by large values. The effects are not something you can easily work around; changing the scale of the object merely shifts the error to another decimal point. Engines that specialize in rendering large worlds normally use either camera-relative rendering or some kind of paging system. Unity and UDK are not inherently capable of camera-relative rendering, but a method of paging is possible to employ.
Depth buffer precision
The other issue associated with large scene rendering is z-fighting. The depth buffer is a normally invisible part of a scene used to determine which part is hidden by another; this is known as depth-testing. Whenever a pixel is written to a scene buffer, the z component is saved in the depth buffer. Typically this buffer has 16 bits of precision, giving 65,536 discrete depth values. This depth value is based on the 3D camera's view range (the difference between the camera near and far distance). Z-fighting occurs when co-planar (or nearly co-planar) polygons are written into the z-buffer with similar depth values, causing them to "fight" for visibility.
This flickering is an indicator that the scene and camera settings need to be rethought. Often the easy fix is to increase the z-buffer precision by increasing the camera's near distance. The downside is that this can clip very near objects. GROME will let you create such large worlds. Its own Graphite engine handles them well. Most 3D engines are designed for smaller first and third-person games, which will have a practical limit of around 10 to 25 square kilometers (1 meter = 1 unit). GROME can mix levels of detail quite easily; different regions of the terrain can have their own mesh density. If, for example, you have a map of an island, you will want lots of detail for the land and less in the sea region. However, game engines such as Unity, UDK, and Ogre3D are not easily adapted to deal with such variability in the terrain mesh, since they are optimized to render a large triangular grid of uniform size. Instead, we use techniques to fake extra detail and bake it into our terrain textures, dramatically reducing the triangle count in the process. Using a combination of Normal Maps and Mesh Layers in GROME we can create the illusion of more detail than there is at a distance.
Normal map
A Normal is a unit vector (a vector with a total length of one) perpendicular to a surface. When a texture is used as a Normal map, the red, green, and blue channels represent the vector (x,y,z). These are used to generate the illusion of more detail by creating a bumpy looking surface. Also known as bump-maps.
Summary
In this article we looked at heightmaps and how they allow us to import and export to other programs and engines. We touched upon world sizes and limitations commonly found in 3D engines.
Packt
04 Mar 2013
8 min read

2D Graphics

Adding content
Create a new project and call it Chapter2Demo. XNA Game Studio created a class called Game1. Rename it to MainGame so it has a proper name. When we take a look at our solution, we can see two projects: a game project called Chapter2Demo that contains all our code, and a content project called Chapter2DemoContent. This content project will hold all our assets, and compile them to an intermediate file format (xnb). This is often done in game development to make sure our games start faster. The resulting files are uncompressed, and thus larger, but can be read directly into memory without extra processing. Note that we can have more than one content project in a solution. We might add one per platform, but this is beyond the scope of this article. Navigate to the content project using Windows Explorer, and place the textures in there. The start files can be downloaded from the previously mentioned link. Then add the files to the content project by right-clicking on it in the Solution Explorer and choosing Add | Existing Item.... Make sure to place the assets in a folder called Game2D. When we click on the hero texture in the content project, we can see several properties. First of all, our texture has a name, Hero. We can use that name to load our texture in code. Note that this has no extension, because the files will be compiled to an intermediate format anyway. We can also specify a Content Importer and Content Processor. Our .png file gets recognized as a texture, so XNA Game Studio automatically selects the Texture importer and processor for us. An importer will convert our assets into the "Content Document Object Model", a format that can be read by the processor. The processor will compile the asset into a managed code object, which can then be serialized into the intermediate .xnb file. That file will then be loaded at runtime.
Drawing sprites
Everything is set up for us to begin.
Let's start drawing some images. We'll draw a background, an enemy, and our hero.
Adding fields
At the top of our MainGame, we need to add a field for each of our objects. The type used here is Texture2D. Texture2D _background, _enemy, _hero;
Loading textures
In the LoadContent method, we need to load our textures using the content manager. // TODO: use this.Content to load your game content here _background = Content.Load<Texture2D>("Game2D/Background"); _enemy = Content.Load<Texture2D>("Game2D/Enemy"); _hero = Content.Load<Texture2D>("Game2D/Hero"); The content manager has a generic method called Load. Generic means we can specify a type, in this case Texture2D. It has one argument, being the asset name. Note that you do not specify an extension; the asset name corresponds with the folder structure and then the name of the asset that you specified in the properties. This is because the content is compiled to .xnb format by our content project anyway, so the files we load with the content manager all have the same extension. Also note that we do not specify the root directory of our content, because we've set it in the game's constructor.
Drawing textures
Before we start drawing textures, we need to make sure our game runs in full screen. This is because the emulator has a bug and our sprites wouldn't show up correctly. You can enable full screen by adding the following code to the constructor: graphics.IsFullScreen = true; Now we can go to the Draw method. Rendering textures is always done in a specific way: First we call the SpriteBatch.Begin() method. This will make sure all the correct states necessary for drawing 2D images are set properly. Next we draw all our sprites using the Draw method of the sprite batch. This method has several overloads; the one we use takes three arguments. The first is the texture to draw. The second is an object of type Vector2 that will store the position of the object. And the last argument is a color that will tint your texture.
Specify Color.White if you don't want to tint your texture. Finally we call the SpriteBatch.End() method. This will sort all sprites we've rendered (according to the specified sort mode) and actually draw them. If we apply the previous steps, they result in the following code: // TODO: Add your drawing code here spriteBatch.Begin(); spriteBatch.Draw(_background, new Vector2(0, 0), Color.White); spriteBatch.Draw(_enemy, new Vector2(10, 10), Color.White); spriteBatch.Draw(_hero, new Vector2(10, 348), Color.White); spriteBatch.End(); Run the game by pressing F5. The result is shown in the following screenshot:
Refactoring our code
In the previous code, we've drawn three textures from our game class. We hardcoded the positions, something we shouldn't do. None of the textures were moving, but if we wanted to add movement now, our game class would get cluttered, especially if we had many sprites. Therefore we will refactor our code and introduce some classes. We will create two classes: a GameObject2D class that is the base class for all 2D objects, and a GameSprite class that will represent a sprite. We will also create a RenderContext class. This class will hold our graphics device, sprite batch, and game time objects. We will use all these classes even more extensively when we begin building our own framework.
Render context
Create a class called RenderContext. To create a new class, do the following: Right-click on your solution. Click on Add | New Item. Select the Code template on the left. Select Class and name it RenderContext. Click on OK. This class will contain three properties: SpriteBatch, GraphicsDevice, and GameTime. We will use an instance of this class to pass to the Update and Draw methods of all our objects. That way they can access the necessary information. Make sure the class has public as its access modifier.
The class is very simple: public class RenderContext { public SpriteBatch SpriteBatch { get; set; } public GraphicsDevice GraphicsDevice { get; set; } public GameTime GameTime { get; set; } } When you build this class, the compiler will not recognize the terms SpriteBatch, GraphicsDevice, and GameTime. This is because they are defined in certain namespaces and we haven't told the compiler where to look for them. Luckily, XNA Game Studio can find them for us automatically. If you hover over SpriteBatch, an icon like the one in the following screenshot will appear on the left-hand side. Click on it and choose the using Microsoft.Xna.Framework.Graphics; option. This will fix the using statement for you. Do it each time such a problem arises.
The base class
The base class is called GameObject2D. The only thing it does is store the position, scale, and rotation of the object and a Boolean that determines if the object should be drawn. It also contains four methods: Initialize, LoadContent, Draw, and Update. These methods currently have an empty body, but objects that inherit from this base class later on will add an implementation. We will also use this base class for our scene graph, so don't worry if it still looks a bit empty.
Properties
We need to create four automatic properties. The Position and Scale properties are of type Vector2. The rotation is a float, and the property that determines if the object should be drawn is a bool. public Vector2 Position { get; set; } public Vector2 Scale { get; set; } public float Rotation { get; set; } public bool CanDraw { get; set; }
Constructor
In the constructor, we will set the Scale property to one (no scaling) and set the CanDraw property to true. public GameObject2D() { Scale = Vector2.One; CanDraw = true; }
Methods
This class has four methods. Initialize: We will create all our new objects in this method. LoadContent: This method will be used for loading our content. It has one argument, being the content manager.
Update: This method shall be used for updating our positions and game logic. It has one argument, the render context.
Draw: We will use this method to draw our 2D objects. It has one argument, the render context.

public virtual void Initialize() { }
public virtual void LoadContent(ContentManager contentManager) { }
public virtual void Update(RenderContext renderContext) { }
public virtual void Draw(RenderContext renderContext) { }

Summary

In this article we have got used to the 2D coordinate system.

Resources for Article :

Further resources on this subject:

3D Animation Techniques with XNA Game Studio 4.0 [Article]
Advanced Lighting in 3D Graphics with XNA Game Studio 4.0 [Article]
Environmental Effects in 3D Graphics with XNA Game Studio 4.0 [Article]

Packt
01 Mar 2013
13 min read

Miscellaneous Gameplay Features

(For more resources related to this topic, see here.)

How to have a sprinting player use up energy

Torque 3D's Player class has three main modes of movement over land: sprinting, running, and crouching. Some games are designed to allow a player to sprint as much as they want, but perhaps with other limitations while sprinting. This is the default method of sprinting in the Torque 3D templates. Other game designs allow the player to sprint only for short bursts before the player becomes "tired". In this recipe, we will learn how to set up the Player class such that sprinting uses up a pool of energy that slowly recharges over time; and when that energy is depleted, the player is no longer able to sprint.

How to do it...

We are about to modify a PlayerData Datablock instance so that sprint uses up the player's energy as follows:

Open your player's Datablock in a text editor, such as Torsion. The Torque 3D templates have the DefaultPlayerData Datablock template in art/datablocks/player.cs.

Find the sprinting section of the Datablock instance and make the following changes:

sprintForce = 4320;
sprintEnergyDrain = 0.6;   // Sprinting now drains energy
minSprintEnergy = 10;      // Minimum energy to sprint
maxSprintForwardSpeed = 14;
maxSprintBackwardSpeed = 8;
maxSprintSideSpeed = 6;
sprintStrafeScale = 0.25;
sprintYawScale = 0.05;
sprintPitchScale = 0.05;
sprintCanJump = true;

Start up the game and have the player sprint. Sprinting should now be possible for about 5.5 seconds before the player falls back to a run. If the player stops sprinting for about 7.5 seconds, their energy will be fully recharged and they will be able to sprint again.

How it works...

The maxEnergy property on the PlayerData Datablock instance determines the maximum amount of energy a player has. All of Torque 3D's templates set it to a value of 60. This energy may be used for a number of different activities (such as jet jumping), and even certain weapons may draw from it.
By setting the sprintEnergyDrain property on the PlayerData Datablock instance to a value greater than zero, the player's energy will be drained every tick (about one-thirty-second of a second) by that amount. When the player's energy reaches zero they may no longer sprint, and revert back to running. Using our previous example, we have a value for the sprintEnergyDrain property of 0.6 units per tick. This works out to 19.2 units per second. Given that our DefaultPlayerData maxEnergy property is 60 units, we should run out of sprint energy in 3.125 seconds. However, we were able to sprint for about 5.5 seconds in our example before running out of energy. Why is this? A second PlayerData property affects energy use over time: rechargeRate. This property determines how much energy is restored to the player per tick, and is set to 0.256 units in DefaultPlayerData. When we take both the sprintEnergyDrain and rechargeRate properties into account, we end up with an effective drain of 0.6 - 0.256 = 0.344 units per tick while sprinting. Assuming the player begins with the maximum amount of energy allowed by DefaultPlayerData, this works out to 60 units / (0.344 units per tick * 32 ticks per second) = 5.45 seconds. The final PlayerData property that affects sprinting is minSprintEnergy. This property determines the minimum player energy level required before being able to sprint. When this property is greater than zero, it means that a player may continue to sprint until their energy is zero, but cannot sprint again until they have regained a minSprintEnergy amount of energy.

There's more...

Let's continue our discussion of player sprinting and energy use.

Balance energy drain versus recharge rate

With everything set up as described previously, every tick the player is sprinting, his energy pool will be reduced by the value of the sprintEnergyDrain property, and increased by the value of the rechargeRate property.
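The arithmetic above can be checked with a few lines of Python. This is just a sketch of the recipe's numbers (Torque ticks 32 times per second, as noted above), not engine code:

```python
TICKS_PER_SECOND = 32  # Torque's simulation tick rate

def sprint_duration(max_energy, drain_per_tick, recharge_per_tick):
    """Seconds of sprinting before energy hits zero, starting from a full pool."""
    net_drain = drain_per_tick - recharge_per_tick  # must be > 0 to ever deplete
    return max_energy / (net_drain * TICKS_PER_SECOND)

def recharge_time(max_energy, recharge_per_tick):
    """Seconds to refill from empty to full while not sprinting."""
    return max_energy / (recharge_per_tick * TICKS_PER_SECOND)

print(round(sprint_duration(60, 0.6, 0.256), 2))  # ~5.45 seconds of sprint
print(round(recharge_time(60, 0.256), 2))         # ~7.32 seconds to refill
```

The two results line up with the "about 5.5 seconds" of sprint and "about 7.5 seconds" of recharge observed in the recipe.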
This means that in order for the player's energy to actually drain, the sprintEnergyDrain property must be greater than the rechargeRate property. As a player's energy may be used for other game play elements (such as jet jumping or weapons fire), sometimes we may forget this relationship while tuning the rechargeRate property, and end up breaking a player's ability to sprint (or make them sprint far too long).

Modifying other sprint limitations

The way the DefaultPlayerData Datablock instance is set up in all of Torque 3D's templates, there are already limitations placed on sprinting without making use of an energy drain. This includes not being able to rotate the player as fast as when running, and limited strafing ability. Making sprinting rely on the amount of energy a player has is often enough of a limitation, and the other default limitations may be removed or reduced. In the end it depends on the type of game we are making. To change how much the player is allowed to rotate while sprinting, we modify the sprintYawScale and sprintPitchScale properties of the PlayerData Datablock instance. These two properties represent the fraction of rotation allowed while sprinting compared with running and default to 0.05 each. To change how much the player is allowed to strafe while sprinting, we modify the sprintStrafeScale property of the PlayerData Datablock instance. This property is the fraction of the amount of strafing movement allowed while running and defaults to 0.25.

Disabling sprint

During a game we may want to disable a player's sprinting ability. Perhaps they are too injured, or are carrying too heavy a load. To allow or disallow sprinting for a specific player we call the following Player class method on the server:

Player.allowSprinting( allow );

In the previous code, the allow parameter is set to true to allow a player the ability to sprint, and to false to not allow a player to sprint at all.
This method is used by the standard weapon mounting system in scripts/server/weapon.cs to disable sprinting. If the ShapeBaseImageData Datablock instance for the weapon has a dynamic property of sprintDisallowed set to true, the player may not sprint while holding that weapon. The DeployableTurretImage Datablock instance makes use of this by not allowing the player to sprint while holding a turret.

Enabling and disabling air control

Air control is a fictitious force used by a number of games that allows a player to control their trajectory while falling or jumping in the air. Instead of just falling or jumping and hoping for the best, this allows the player to change course as necessary and trades realism for playability. We can find this type of control in first-person shooters, platformers, and adventure games. In this recipe we will learn how to enable or disable air control for a player, as well as limit its effect while in use.

How to do it...

We are about to modify a PlayerData Datablock instance to enable complete air control as follows:

Open your player's Datablock in a text editor, such as Torsion. The Torque 3D templates have the DefaultPlayerData Datablock instance in art/datablocks/player.cs.

Find the section of the Datablock instance that contains the airControl property and make the following change:

jumpForce = "747";
jumpEnergyDrain = 0;
minJumpEnergy = 0;
jumpDelay = "15";
// Set to maximum air control
airControl = 1.0;

Start up the game and jump the player off of a building or a sand dune. While in the air press one of the standard movement keys: W, A, S, and D. We now have full trajectory control of the player while they are in the air as if they were running.

How it works...

If the player is not in contact with any surface and is not swimming, the airControl property of PlayerData is multiplied against the player's direction of requested travel. This multiplication only happens along the world's XY plane and does not affect vertical motion.
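The XY-only multiply is the key detail: airControl scales the requested horizontal movement but never the vertical component. A hedged Python sketch of that behaviour (my own illustration of the rule, not the engine's actual C++ code):

```python
def apply_air_control(move_request, air_control, on_ground):
    """Scale a requested (x, y, z) movement by airControl while airborne.

    Only the world XY plane is affected; vertical motion passes through
    untouched, matching the behaviour described in the recipe.
    """
    if on_ground:
        return move_request  # normal ground movement, no scaling
    x, y, z = move_request
    return (x * air_control, y * air_control, z)

# airControl = 0 removes all air steering; 1 gives run-like control in the air
print(apply_air_control((1.0, 0.5, -2.0), 0.0, on_ground=False))  # (0.0, 0.0, -2.0)
```

With a value between 0 and 1, the player keeps partial steering, which is the usual compromise between realism and playability.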
Setting the airControl property of PlayerData to a value of 0 will disable all air control. Setting the airControl property to a value greater than 1 will cause the player to move faster in the air than they can run.

How to jump jet

In game terms, a jump jet is often a backpack, a helicopter hat, or a similar device that a player wears, that provides them a short thrust upwards and often uses up a limited energy source. This allows a player to reach a height they normally could not, jump a canyon, or otherwise get out of danger or reach a reward. In this recipe we will learn how to allow a player to jump jet.

Getting ready

We will be making TorqueScript changes in a project based on the Torque 3D Full template using the Empty Terrain level. If you haven't already, use the Torque Project Manager (Project Manager.exe) to create a new project from the Full template. It will be found under the My Projects directory. Then start up your favorite script editor, such as Torsion, and let's get going!

How to do it...

We are going to modify the player's Datablock instance to allow for jump jetting and adjust how the user triggers the jump jet as follows:

Open the art/datablocks/player.cs file in your text editor. Find the DefaultPlayerData Datablock instance and just below the section on jumping and air control, add the following code:

// Jump jet
jetJumpForce = 500;
jetJumpEnergyDrain = 3;
jetMinJumpEnergy = 10;

Open scripts/main.cs and make the following addition to the onStart() function:

function onStart()
{
   // Change the jump jet trigger to match a regular jump
   $player::jumpJetTrigger = 2;

   // The core does initialization which requires some of
   // the preferences to be loaded... so do that first.
   exec( "./client/defaults.cs" );
   exec( "./server/defaults.cs" );

   Parent::onStart();
   echo("\n--------- Initializing Directory: scripts ---------");

   // Load the scripts that start it all...
   exec("./client/init.cs");
   exec("./server/init.cs");

   // Init the physics plugin.
   physicsInit();

   // Start up the audio system.
   sfxStartup();

   // Server gets loaded for all sessions, since clients
   // can host in-game servers.
   initServer();

   // Start up in either client, or dedicated server mode
   if ($Server::Dedicated)
      initDedicated();
   else
      initClient();
}

Start our Full template game and load the Empty Terrain level. Hold down the Space bar to cause the player to fly straight up for a few seconds. The player will then fall back to the ground. Once the player has regained enough energy it will be possible to jump jet again.

How it works...

The only property that is required to be set for jump jetting to work is the jetJumpForce property of the PlayerData Datablock instance. This property determines the amount of continuous force applied on the player object to have them flying up in the air. It takes some trial and error to determine what force works best. Other Datablock properties that are useful to set are jetJumpEnergyDrain and jetMinJumpEnergy. These two PlayerData properties make jet jumping use up a player's energy. When the energy runs out, the player may no longer jump jet until enough energy has recharged. The jetJumpEnergyDrain property is how much energy per tick is drained from the player's energy pool, and the jetMinJumpEnergy property is the minimum amount of energy the player needs in their energy pool before they can jump jet again. Please see the How to have a sprinting player use up energy recipe for more information on managing a player's energy use. Another change we made in our previous example is to define which move input trigger number will cause the player to jump jet. This is defined using the global $player::jumpJetTrigger variable. By default, this is set to trigger 1, which is usually the same as the right mouse button. However, all of the Torque 3D templates make use of the right mouse button for view zooming (as defined in scripts/client/default.bind.cs).
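The same drain-versus-recharge arithmetic from the sprinting recipe can be reused as a rough tuning aid. This Python sketch is my own estimate, not engine code; by this arithmetic the example values empty the 60-unit pool in well under a second of powered thrust, with the remainder of the flight presumably being ballistic coasting:

```python
TICKS_PER_SECOND = 32  # Torque's tick rate, as stated in the sprinting recipe

def jet_thrust_time(max_energy, jet_drain_per_tick, recharge_per_tick):
    """Estimated seconds of continuous jump jet thrust from a full energy pool."""
    net_drain = jet_drain_per_tick - recharge_per_tick
    return max_energy / (net_drain * TICKS_PER_SECOND)

# Example values: maxEnergy = 60, jetJumpEnergyDrain = 3, rechargeRate = 0.256
print(round(jet_thrust_time(60, 3, 0.256), 2))  # ~0.68 seconds of powered thrust
```

If you want longer bursts of thrust, lower jetJumpEnergyDrain or raise maxEnergy, keeping in mind that both values are shared with sprinting and other energy users.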
In our previous example, we modified the global $player::jumpJetTrigger variable to use trigger 2, which is usually the same as for regular jumping as defined in scripts/client/default.bind.cs:

function jump(%val)
{
   // Touch move trigger 2
   $mvTriggerCount2++;
}
moveMap.bind( keyboard, space, jump );

This means that we now have jump jetting working off of the same key binding as regular jumping, which is the Space bar. Now holding down the Space bar will cause the player to jump jet, unless they do not have enough energy to do so. Without enough energy, the player will just do a regular jump with their legs.

There's more...

Let's continue our discussion of using a jump jet.

Jump jet animation sequence

If the shape used by the Player object has a Jet animation sequence defined, it will play while the player is jump jetting. This sequence will play instead of all other action sequences. The hierarchy or order of action sequences that the Player class uses to determine which action sequence to play is as follows:

Jump jetting
Falling
Swimming
Running (known internally as the stand pose)
Crouching
Prone
Sprinting

Disabling jump jetting

During a game we may no longer want to allow a player to jump jet. Perhaps they have run out of fuel or they have removed the device that allowed them to jump jet. To allow or disallow jump jetting for a specific player, we call the following Player class method on the server:

Player.allowJetJumping( allow );

In the previous code, the allow parameter is set to true to allow a player to jump jet, and to false for not allowing him to jump jet at all.

More control over the jump jet

The PlayerData Datablock instance has some additional properties to fine tune a player's jump jet capability. The first is the jetMaxJumpSpeed property. This property determines the maximum vertical speed at which the player may use their jump jet. If the player is moving upwards faster than this, then they may not engage their jump jet.
The second is the jetMinJumpSpeed property. This property is the minimum vertical speed of the player before a speed multiplier is applied. If the player's vertical speed is between jetMinJumpSpeed and jetMaxJumpSpeed, the applied jump jet speed is scaled up by a relative amount. This helps ensure that the jump jet will always make the player move faster than their current speed, even if the player's current vertical speed is the result of some other event (such as being thrown by an explosion). Summary These recipes will help you to fully utilize the gameplay's features and make your game more interesting and powerful. The tips and tricks mentioned in the recipes will surely help you in making the game more real, more fun to play, and much more intriguing. Resources for Article : Further resources on this subject: Creating and Warping 3D Text with Away3D 3.6 [Article] Retopology in 3ds Max [Article] Applying Special Effects in 3D Game Development with Microsoft Silverlight 3: Part 1 [Article]

Packt
08 Feb 2013
11 min read

KeyShot's Overview

(For more resources related to this topic, see here.)

Introducing KeyShot

Formerly known as HyperShot, KeyShot is an application developed by the company Luxion that is used today by professionals in various disciplines to deliver images with hyperrealistic quality. KeyShot delivers physically accurate lighting and a library of materials that allow us to experiment and make changes all through our viewport in real time. Whether we are engineers, artists, or designers, time is a precious element that we are always racing against, and this is particularly true when it comes to rendering 3D data. On some occasions, the quality of our work is compromised as we need to spend time learning complex new software. KeyShot has been designed with simplicity in mind, allowing the user to create high-quality images while putting aside the technical details. Unlike other rendering packages on the market, KeyShot is a processor-based rendering program. All the rendering calculations are 100 percent CPU-based, which means we don't need a high-performance graphics card to get the job done. KeyShot utilizes all the cores and threads in your processor, and because it was built on 64-bit architecture, it also gives us more room to increase performance.

KeyShot versus traditional rendering programs

In order to work properly, it is important to have the right tools. KeyShot allows you to apply materials, set up the lighting, and obtain hyperrealistic images in a matter of minutes. Traditional rendering applications often have too many settings, each giving the user a different level of control over the appearance of the project. Although a large number of settings allows for more flexibility, understanding how each of them works can be a time-consuming process. In this section, we have laid out several points that we consider helpful when using KeyShot for your projects compared to other rendering applications.
The following are some basic points related to working with KeyShot:

Workflow: import your 3D data, apply and fine-tune your textures and materials, set up your lighting, find your preferred camera view, and then render.
KeyShot is fully integrated, just like any other rendering application, but it's been designed to be user friendly. You will find that most menu tabs and preferences are intuitive and easy to understand.
It offers different arrays of mapping options, such as cylindrical, box shaped, spherical, or using UV coordinates, depending on your preference.
It uses the high dynamic range imaging (HDRI) method to produce realistic lighting conditions.
It provides physically accurate materials based on real-world properties. Each material found in KeyShot's library has been set up to produce a specific type of look when applied. This allows you to save time fine-tuning your materials for that specific look.
It offers basic animation tools that allow you to set up professional presentations.

The following are a few basic points related to traditional rendering tools:

They require some experience in rendering techniques, and they often have a steep learning curve.
The user interfaces are cluttered with options and preferences and can be intimidating for first-time users.
They are more flexible in terms of controlling the look of each individual feature of your project. The settings are broken down and laid out separately, allowing you to control everything from the number of lights and shadows per scene to the look of a material. A consequence of this, however, is that there are more opportunities for errors and users are often overwhelmed by the amount of settings and controls.
Materials and lighting are not always physically accurate. Reproducing a particular type of material or lighting setup is often time-consuming.
They provide more robust animation tools and often include a rigging system, which allows for more complex animations.
KeyShot is a powerful rendering tool that is used in a variety of fields within the CG industry. However, it is important to remember that KeyShot has a limited set of animation tools, and I recommend using a different application such as Maya, 3ds Max, or Softimage if your project requires complex character animations or special effects. Getting started Now that we understand the fundamentals of KeyShot and its benefits, we will take a look at how to start using KeyShot for your projects, from the beginning to the end. If you do not have KeyShot, you can download a trial version from the website by performing the following steps: Go to http://www.keyshot.com/try/. Select your operating system (Windows 32-bit, Windows 64-bit, or Mac OS X) and download it. Install your trial version and select Continue without registering. Importing projects KeyShot supports a variety of file formats from third-party applications. A list of the files currently supported can be found on the KeyShots website. For our projects, we will be working with files with the OBJ (object file) extension. Let's go ahead and get started. Perform the following steps: Open KeyShot. At the bottom of your viewport, you will see six icons—Import, Library, Project, Animation, Screenshot, and Render, as shown in the following screenshot: Go ahead and click on Import. Let's choose our lesson file, Wacom_2, from the data folder. A new window for configuring imported files will appear. The new settings window allows you to choose the orientation or the direction in which your 3D object will be placed in the viewport. Depending on which application we are importing our files to, some of them have their Cartesian axis orientation set up differently. 
In this case, the file we will be working with is an OBJ file imported from Maya, and this file has Y Up as its Orientation, as shown in the following screenshot: When working with our project files in KeyShot, it is important to remember prior to importing any models that all parts of the model need to have their own material assigned to them. To do this, before exporting any of our 3D files from other applications, make sure that the option material is checked in the export options. Once all the pieces of our model have been assigned with their own material, KeyShot will be able to understand how to assign materials properly to all parts of the mesh. A new feature called Material Template, currently available in KeyShot v3.3 and later versions, allows us to link materials and parts of our models to the materials found inside KeyShot's library. For example, instead of copying and pasting materials from one object to another, we can create a template that automatically applies all the materials to the corresponding parts of a model when it is imported into the scene. When creating a template, we need to specify a source name and a destination name. The source name is essentially the name of the part or the material exported directly from a third-party application such as Maya or SolidWorks. Once it is added to the template list, KeyShot will search for any parts or materials associated with the names in the source list and apply any assigned materials in the destination list. We can see an example of a template list in the following screenshot, with the parts of our Wacom tablet listed on the left-hand side and the materials we assigned them with on the right-hand side:

Next, let's see how to move, rotate, or scale our model in the viewport. Perform the following steps to do so:

Right-click on the model.
A new selection box will appear; choose to either move a part of the object or the entire object.
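The Material Template described earlier is essentially a lookup from imported part or material names to KeyShot library materials. As a conceptual illustration only (a Python sketch of the mapping idea, not KeyShot's actual file format; the part and material names are made up):

```python
# Hypothetical source -> destination pairs, in the spirit of the template list:
# the source is the part/material name exported from Maya or SolidWorks, and
# the destination is the KeyShot library material to auto-apply on import.
material_template = {
    "tablet_body": "Hard Rough Plastic Black",
    "tablet_screen": "Glass",
    "tablet_buttons": "Soft Touch Plastic Grey",
}

def resolve_material(imported_name, template, default="Default Diffuse"):
    """Return the library material to apply for an imported part name."""
    return template.get(imported_name, default)

print(resolve_material("tablet_screen", material_template))  # Glass
print(resolve_material("stylus", material_template))         # Default Diffuse
```

The value of the template is that this lookup runs automatically on import, so a re-exported model picks up all its materials without any manual drag-and-drop.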
When working with a mesh that has multiple parts, it is good practice to hide the parts we don't currently need. To do this, simply right-click on the part we wish to hide and select the Hide Part option from the new menu. The interface Once our project model has been imported, our 3D file should be displayed in our viewport along with a new project window. This window contains five different tabs, of which we will discuss three in the following sections. Scene The Scene tab shows all the parts of our model. The left-hand side of our Project window shows the parts of our mesh under the Parts heading. The order and the name of each of the parts are listed according to the name of the material that was assigned to it by its original application. In this case, our 3D tablet was imported from Maya and all its parts were assigned with a specific material inside Maya. The right-hand side of our Project window shows the list of the current materials that have been applied to the parts inside KeyShot under the Materials heading, as shown in the following screenshot: Material The Material tab lists all the available materials in KeyShot according to their category. In the lower part of our Project window, we can see the materials that belong to the specific folder we have chosen from the list. To apply any material to our model, simply drag the material and drop it onto the part of our model where we wish to apply it. Another way of accessing the list of materials is by clicking on the Library tab in our main viewport. If we need to access the material properties of a specific part of our model, we can do so by double-clicking on any part of the model. To apply any material to our project, we perform the following steps: Open the Library window by clicking on the Library icon from the viewport menu. Drag the desired material and drop it onto our project. Double-click on our model with the applied material to open the Material Properties window. 
Material properties window The material properties window allows us to modify the attributes of the material we choose. Depending on the material, certain properties will be available for us to modify. For example, any glass models will have the refraction attribute, which won't be available to us if we choose a metallic shader. In general, we have to fine-tune the properties of the materials in order to reproduce the look of real-life materials for most 3D applications such as Maya or 3ds Max. In KeyShot, however, this is no longer necessary since all its materials have been configured to be physically accurate. When using materials in KeyShot, each time a new material is applied to our model, it will show up at the bottom of our material's property window. This is to allow us to recycle a material and use it again if needed. Environment Right next to the Material tab we will find the Environment tab, which contains HDRIs that come as part of KeyShot. Here, we will be able to drag-and-drop HDRIs as well as backplates onto our scene. The Environment tab, just like the Material tab, has its own property window, which has more advanced attributes that let us assume greater control of the appearance of our scene. In the Pro version of KeyShot, an HDRI editor preference is also available for further control of our HDRIs. Certain features allow us to control the saturation, hue, brightness, contrast, and even the shape of the HDRI. Environment properties window The environment's properties window houses the entire list of attributes that allow us to control the lighting of our scene. We will discuss this property window in more depth in the lighting section later in this book. To access the property window, perform the following steps: Double-click on any part of our model. 
Select the Environment tab from the property window, as follows:

Summary

In this article, we have learned how to import our models into KeyShot by clicking on the Import tab from the main viewport, and we have also taken a look at creating a material template, which is a newly added feature of KeyShot 3. We have also gone briefly over the three major tabs that can be found in the Project window, which are the Scene, Material, and Environment tabs. Lastly, we mentioned during the article that there is also a separate material properties window and an environment properties window, both of which are in charge of controlling the look of our materials and lighting.

Resources for Article :

Further resources on this subject:

3D Vector Drawing and Text with Papervision3D: Part 2 [Article]
Ogre 3D FAQs [Article]
Tips and Tricks on Away3D 3.6 [Article]
Packt
02 Jan 2013
4 min read

Getting Started with Marmalade

(For more resources related to this topic, see here.)

Installing the Marmalade SDK

The following sections will show you how to get your PC set up for development using Marmalade, from installing a suitable development environment through to licensing, downloading, and installing your copy of Marmalade.

Installing a development environment

Before we can start coding, we will first need to install a version of Microsoft's Visual C++, which is the Windows development environment that Marmalade uses. If you don't already have a version installed, you can download a copy for free. At the time of writing, the Express 2012 version had just been released, but the most recent free version directly supported by Marmalade was still Visual C++ 2010 Express, which can be downloaded from the following URL:

http://www.microsoft.com/visualstudio/en-us/products/2010-editions/visual-cpp-express

Follow the instructions on this web page to download and install the product. For the Apple Mac version of Marmalade, the supported development environment is Xcode, which is available as a free download from the Mac App Store. In this article, we will be assuming that the Windows version of Marmalade will be used, unless specifically stated otherwise.

Choosing your Marmalade license type

With a suitable development environment in place, we can now get on to downloading Marmalade itself. First, you need to head over to the Marmalade website using the following URL:

http://www.madewithmarmalade.com

At the top of the website are two buttons labeled Buy and Free Trial. Click on one of these (it doesn't matter which, as they both go to the same place!) and you'll see a page explaining the licensing options, which are also described in the following table:

Evaluation: This is free to use but is time limited (currently 45 days), and while you can deploy it to all supported platforms, you are not allowed to distribute the applications built with this version.
Community: This is the cheapest way of getting started with Marmalade, but you are limited to only being able to release it on iOS and Android, and your application will also feature a Marmalade splash screen on startup.
Indie: This version removes the limitations of the basic license, with no splash screen and the ability to target any supported platform.
Professional: This version adds dedicated support from Marmalade should you face any issues during development, and provides early access to the new versions of Marmalade.

When you have chosen the license level, you will first need to register with the Marmalade website by providing an e-mail address and password. The e-mail address you register will be linked to your license and will be used to activate it later. Make sure you use a valid e-mail address when registering. Once you are registered, you will be taken to a web page where you can choose the level of license you require. After confirming payment, you will be sent an e-mail that allows you to activate your license and download the Marmalade installer.
You may want to add the minor revision number back in, to make it easier to have multiple versions of Marmalade installed at the same time.

Once the installer has finished copying the files to your hard drive, it will display the Marmalade Configuration Utility, which is described in greater detail in the next section. Once the Configuration Utility has been closed, the installer will offer you the option of launching some useful resources, such as the SDK documentation, before it exits.

It is possible to have more than one version of the Marmalade SDK installed at a time and to switch between versions as you need, hence the advice regarding the installation directory. This becomes very useful when device-specific bugs are fixed in a new version of Marmalade, but you still need to support an older project that requires a different version of Marmalade.
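As a small illustration of why keeping the full version number in the directory name helps, the following Python sketch picks the newest SDK from a set of side-by-side, version-named install directories. The directory names here are made-up examples, not read from a real installation.

```python
# Sketch: choosing among side-by-side Marmalade installs by version number.
# The directory names below are illustrative; a real check would list the
# subdirectories of your chosen install location instead.

def parse_version(name):
    """Turn a directory name like '6.1.1' into a sortable tuple of ints."""
    return tuple(int(part) for part in name.split("."))

def newest_install(dir_names):
    """Pick the highest-versioned install directory name."""
    return max(dir_names, key=parse_version)

installs = ["6.1", "6.1.1", "6.2"]
print(newest_install(installs))  # -> 6.2
```

Comparing version tuples rather than raw strings matters here: plain string comparison would rank "6.10" below "6.2".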
Getting Started

Packt
26 Dec 2012
6 min read
(For more resources related to this topic, see here.)

System requirements

Before we take a look at how to download and install ShiVa3D, it might be a good idea to see if your system will handle it. The minimum requirements for the ShiVa3D editor are as follows:

Microsoft Windows XP and above, or Mac OS with Parallels
Intel Pentium IV 2 GHz or AMD Athlon XP 2600+
512 MB of RAM
3D accelerated graphics card with 64 MB RAM and 1440 x 900 resolution
Network interface

In addition to the minimum requirements, the following suggestions will give you the best experience:

Intel Core Duo 1.8 GHz or AMD Athlon 64 X2 3600+
1024 MB of RAM
Modern 3D accelerated graphics card with 256 MB RAM and 1680 x 1050 resolution
Sound card

Downloading ShiVa3D

Head over to http://www.stonetrip.com and get a copy of ShiVa3D Web Edition. Currently, there is a download link on the home page. Once you get to the Download page, enter your email address and click on the Download button. If everything goes right, you will be prompted for a save location; save it in a place that will be easy to find later.

That's it for the download, but you may want to take a second to look around Stonetrip's website. There are links to the documentation, forum, wiki, and news updates. It will be well worth your time to become familiar with the site now, since you will be using it frequently.

Installing ShiVa3D

Assuming your computer meets the minimum requirements, installation should be pretty easy. Simply find the installation file that you downloaded and run it. I recommend sticking with the default settings. If you do have issues getting it installed, it is most likely due to a technical problem, so head on over to the forums, and we will be more than glad to lend a helping hand.

The ShiVa editor

Several different applications were installed if you accepted the default installation choices.
The only one we are going to worry about for most of this book is the ShiVa Web Edition editor, so go ahead and open it now.

By default, ShiVa opens with a project named Samples loaded. You can tell by looking at the lower right-hand quadrant of the screen in the Data Explorer: the root folder is named Samples, as shown in the following screenshot:

This is actually a nice place to start, because there are all sorts of samples that we can play with. We'll come back to those once we have had a chance to make our own game.

We will cover the editor in more detail later, but for now it is important to notice that the default layout has four sections: Attributes Editor, Game Editor, Scene Viewer, and Data Explorer. Each of these sections represents a module within the editor. The Data Explorer window, for example, gives us access to all of the resources that can be used in our project, such as materials, models, fonts, and so on.

Creating a project

A project is the way by which we can group games that share the same resources. To create a new project, click on Main | Projects in the upper left-hand corner of the screen. The project window will open, as shown in the following screenshot:

In this window, we can see the Samples project along with its path. The green light next to the name indicates that Samples is the project currently loaded into the editor. If there were other projects listed, they would have red lights beside their names.

The steps for creating a new project are as follows:

1. Click on the Add button to create a new project.
2. Navigate to the location we want for our project, then right-click in the explorer area and select New | Folder.
3. Name the folder IntroToShiva, highlight the folder, and click on Select. The project window will now show that our new project has the green light and the Samples project has a red light.
4. Click on the Close button to finish.

Notice that the root folder in the Data Explorer window now says IntroToShiva.
Creating a game

Games are exactly what you would think they are, and it's time we created ours. The steps for creating our own game are as follows:

1. Go to the Game Editor window in the lower left-hand corner and click on Game | Create.
2. A window will pop up asking for the game name. We will be creating a game in which the player must fly a spaceship through a tunnel or cave and avoid obstacles, so let's call the game CaveRunner.
3. Click on the OK button, and the bottom half of our editor should look like the following screenshot:

Notice that there is now some information displayed in the Game Editor window and that the Data Explorer window shows the CaveRunner game in the Games folder. A game is simply the empty husk of what we are really trying to build. Next, we will begin building out our game by adding a scene.

Making a scene

We can think of a scene as a level in a game: it is the stage upon which we place our objects so that the player can interact with them. We can create a scene by performing the following steps:

1. Click on Edit | Scene | Create in the Game Editor window.
2. Name the scene Level1 and click on the OK button. The new scene is created and opened for immediate use, as shown in the following screenshot:

We can tell Level1 is open because the Game Editor window has switched to the Scenes tab and Level1 now has a green check mark next to it; we can also see a grid in the Scene Viewer window. Additionally, the scene information is displayed in the upper left-hand corner of the Scene Viewer window, and the Scene tag says Level1.

So we were able to get a scene created, but it is sadly empty; it's not much of a level in even the worst of games. If we want this game to be worth playing, we had better add something interesting. Let's start by importing a ship.
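The editor steps above build a simple containment hierarchy: a project groups games, a game groups scenes, and a scene holds the objects the player interacts with. As a rough illustration only (this is not ShiVa's actual API, which is driven through the editor), the relationship can be sketched in Python using the names from this article:

```python
# Sketch: the project > game > scene containment the editor steps build,
# modelled with plain dictionaries. Names follow the article's example.

def create_project(name):
    return {"name": name, "games": {}}

def create_game(project, name):
    project["games"][name] = {"name": name, "scenes": {}}
    return project["games"][name]

def create_scene(game, name):
    game["scenes"][name] = {"name": name, "objects": []}
    return game["scenes"][name]

project = create_project("IntroToShiva")
game = create_game(project, "CaveRunner")
scene = create_scene(game, "Level1")
print(game["scenes"]["Level1"]["name"])  # -> Level1
```

The freshly created scene holds an empty object list, which mirrors the situation at this point in the article: Level1 exists but has nothing in it yet.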