
How-To Tutorials - Game Development

368 Articles
Packt
18 Dec 2012
8 min read

MoGraph

(For more resources related to this topic, see here.)

Before we begin

Most of the tools we'll be featuring here are only available in the Broadcast and Studio installations of Cinema 4D. As discussed, this article covers the basics of MoGraph objects and introduces a couple of sample animation ideas, but as you continue to learn and grow as an animator, you'll most likely be taken aback by how many possibilities there are. MoGraph allows you to create objects with a basic set of parameters and combine them in endless ways to create unique animations. Let's dive in and start imagining!

Cloner objects

The backbone of MoGraph is the cloner object. At its most basic level, it creates multiple clones of an object in your scene. These clones can then be influenced by effectors, which we will discuss shortly. All MoGraph objects can be accessed through the MoGraph menu at the top of your screen. Your menu should look like the following screenshot:

Let's open a new scene to explore cloners. Create a sphere, make sure it is selected, then navigate to MoGraph | Cloner. You can parent the sphere to the cloner manually, or hold down the Alt key while creating the cloner to parent it automatically.

We've cloned our object, but it doesn't look much like clones so far: just a bumpy, vertical pill shape! This is because the default sizes of our objects don't play well together. Our sphere has a 100 cm radius, while our clones are set only 50 cm apart. Change the sphere's radius to 25 cm to start; you should now see three distinct spheres stacked on top of each other.

As we create more and more spheres to experiment with cloner settings, you may find that your computer gets bogged down. We're using a sphere here, but a cube works just as well and creates far less geometry. You can also reduce the sphere's segments if desired, but using a simpler form is usually the most effective approach.
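The sizing problem described above is simple arithmetic: adjacent clones visually merge whenever the sphere's diameter exceeds the clone spacing. A quick sketch in plain Python (the helper name is ours, not a Cinema 4D API), using the values from the text:

```python
def clones_overlap(radius_cm: float, spacing_cm: float) -> bool:
    """Adjacent clones visually merge when the diameter exceeds the spacing."""
    return 2 * radius_cm > spacing_cm

# Default cloner: a 100 cm sphere with clones only 50 cm apart
# merges into the "bumpy pill" described above.
print(clones_overlap(100, 50))  # True
# A 25 cm sphere with 50 cm spacing gives three distinct, touching spheres.
print(clones_overlap(25, 50))   # False
```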
Let's take a look at the cloner settings in the Attributes Manager. The Basic and Coordinates tabs follow the same standard as the other object types we've encountered so far, but the Object tab is where most of our work will happen. The first step in using a cloner is to choose a Mode.

Object mode arranges clones around any specified additional object in your scene. If you switch your cloner to Object mode, you'll see that you still have an object selected, but the clones have disappeared. This is because the cloner relies on an object to arrange the clones around, and we haven't specified one yet. Try creating any primitive (we'll use a Capsule for the following example), then drag it from the Objects Manager into the Object field in the Attributes Manager. Since our sphere is relatively large compared to the Capsule, let's change its radius to 4 cm for the moment. Your objects should be arranged as shown in the following screenshot:

By default, the clones are distributed at each vertex of the object (specified in the Distribution field). If you want more or fewer clones while in Vertex mode, select the capsule and change its height and cap segments accordingly. Also note that the visibility of the clones is linked only to the cloner, not to the original object: if we turn off visibility on the capsule, the clones stay where they are. The Distribution options are as follows:

Vertex: This aligns clones to all vertices (objects can be parametric or polygonal).

Edge: This aligns clones along edges. Edge will look relatively similar to Vertex but will most likely produce significantly more clones. It can also be used with selection sets to specify which edges should be used.

Polygon Center: This will look similar to Vertex, but with clones aligned to the center of each polygon. It can be used with selection sets to specify which polygons should be used.

Surface: This aligns clones randomly across the surface; the number of clones is determined by the Count value.
Volume: This fills the object with clones; it requires either a transparent material on the original object or turning off the original object's visibility.

Now that we've explored distribution, let's take a look at the different cloner modes. Linear mode arranges clones in a straight line, while Radial mode arranges clones in a circle; you can think of Radial as a more advanced version of the Array objects we used when creating our desk chair. Grid Array mode arranges clones in a 3D grid, filling a cube, sphere, or cylinder, as shown in the following screenshot.

Sounds simple, right? Grid Array, when partnered with effectors, is one of the most powerful tools in your MoGraph toolbox. Let's take a look at the settings. The Count field specifies how many clones there are in each of the three directions. The Size field specifies the dimensions of the container that the clones fill. This is the key difference from the Duplicate function we learned previously: Duplicate arranges instances that are spaced a fixed distance apart, while the Size field on cloners specifies the total distance between the outermost clones. Note that if you increase the count on any axis, additional clones are added inside the cube rather than as extra rows at the top or bottom, as shown in the following screenshot.

Cloners are incredibly versatile, and you may find yourself using them as a modeling tool as you become more comfortable with the software. Now that we've got the basics of cloners down, let's add an effector and see why this tool is so powerful!

Effectors

Effectors are, very simply, invisible objects in Cinema 4D that influence the behavior of other objects. The easiest way to learn how they work is to dive right in, so let's get started! With your cloner object selected (and set back to Grid Array, if you've been experimenting with the different modes), navigate to MoGraph | Effector | Random as shown in the previous screenshot.
You should see all of the clones move in random directions! If you did not select the cloner before creating the effector, the two will not be automatically linked. If the clones were unaffected, select the cloner, switch to the Effectors tab, and drag the Random effector from the Objects Manager into the open window as shown in the following screenshot:

The Random effector is set, by default, to move each clone a maximum of 50 cm in any direction. This takes our clones, which exist within a 200 cm cube, and shifts each of them up to an additional 50 cm at random. We're even given a good amount of control over that randomness, allowing for endless organic animations. Let's take a look at the settings for the Random effector.

Click-and-drag on the Strength slider. As you approach 0 percent, the spheres move back toward their original grid positions. The Strength field acts as a multiplier on the Transform parameters, so if you change the strength to 50 percent but leave the Transform values the same, your positions will be identical to a Random effector with 100 percent strength and 25 cm in all directions, as demonstrated in the following screenshot. The cloner on the left has 50 percent strength and 50 cm x 50 cm x 50 cm, while the cloner on the right has 100 percent strength and 25 cm x 25 cm x 25 cm.

The reason these appear identical is their Seed value. True randomness is nearly impossible to create, so random algorithms rely on a seed number to determine the positions of objects. Changing the seed value changes the random positions. If you create a Random effector and dislike the result, clicking through seed values until you find a more desirable configuration is a quick and easy way to completely change the scene. The seed can be keyframed as well, which can be combined with keyframed Transform values to create complicated organic animations very quickly. In addition to position, you can also randomize scale and rotation.
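The strength-versus-transform equivalence described above reduces to a simple product. A minimal Python sketch (a hypothetical helper, not Cinema 4D's API):

```python
def max_offset_cm(strength_pct: float, transform_cm: float) -> float:
    """Maximum random displacement per axis: strength scales the Transform value."""
    return strength_pct / 100.0 * transform_cm

# 50 percent strength with a 50 cm transform...
print(max_offset_cm(50, 50))   # 25.0
# ...allows exactly the same displacement as 100 percent strength with 25 cm.
print(max_offset_cm(100, 25))  # 25.0
```

This is why the two cloners in the screenshot look identical: with the same seed, every clone receives the same fraction of the same maximum offset.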
Scale values are multipliers rather than percentages: a Scale of 1 allows up to a 100 percent increase, meaning a 25 cm sphere may grow to as much as 50 cm, while a Scale of 2 allows up to a 200 percent increase. Enabling the Uniform Scale option prevents the spheres from being distorted. If you want to test the rotation option and are still using spheres, you may want to create a basic patterned material and apply it to your object as shown in the following screenshot; otherwise it will be impossible to tell that the clones are rotated!

Cloners can have multiple effectors as well. With the cloner selected, navigate to MoGraph | Effector | Time. In the Attributes Manager, choose the attributes you'd like the Time effector to manage over time (perhaps leave Position to the Random effector and give Scale and Rotation to Time), then scroll through the timeline to see how the objects are affected:
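The Scale arithmetic above can be sketched in two lines (again a hypothetical helper, assuming the "value s means up to s times growth" reading given in the text):

```python
def max_scaled_size_cm(base_cm: float, scale_value: float) -> float:
    """A Scale value of s allows growth of up to s times the base size."""
    return base_cm * (1 + scale_value)

print(max_scaled_size_cm(25, 1))  # 50.0 -> a 100 percent increase
print(max_scaled_size_cm(25, 2))  # 75.0 -> a 200 percent increase
```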

Packt
11 Dec 2012
4 min read

Adding Bodies to the World

(For more resources related to this topic, see here.)

Creating a fixture

A fixture is used to bind a shape to a body and to define the body's material settings: density, friction, and restitution. The first step is to create the fixture:

    var fixtureDef:b2FixtureDef = new b2FixtureDef();
    fixtureDef.shape = circleShape;

Once we have created the fixture with the constructor, we assign the previously created shape using the shape property. Finally, we are ready to add the ball to the world:

    var theBall:b2Body = world.CreateBody(bodyDef);
    theBall.CreateFixture(fixtureDef);

b2Body is the body itself: the physical, concrete body that has been created using the bodyDef attribute. To recap, use the following steps when you want to place a body in the world:

i. Create a body definition, which holds body information such as its position.
ii. Create a shape, which is how the body will look.
iii. Create a fixture to attach the shape to the body.
iv. Create the body itself in the world from the body definition, then attach the fixture to it.

Once you know the importance of each step, adding bodies to your Box2D world will be easy and fun. Back to our project.
The following is how the class should look now:

    package {
        import flash.display.Sprite;
        import flash.events.Event;
        import Box2D.Dynamics.*;
        import Box2D.Collision.*;
        import Box2D.Collision.Shapes.*;
        import Box2D.Common.Math.*;

        public class Main extends Sprite {
            private var world:b2World;
            private var worldScale:Number = 30;

            public function Main() {
                world = new b2World(new b2Vec2(0, 9.81), true);
                var bodyDef:b2BodyDef = new b2BodyDef();
                bodyDef.position.Set(320/worldScale, 30/worldScale);
                var circleShape:b2CircleShape;
                circleShape = new b2CircleShape(25/worldScale);
                var fixtureDef:b2FixtureDef = new b2FixtureDef();
                fixtureDef.shape = circleShape;
                var theBall:b2Body = world.CreateBody(bodyDef);
                theBall.CreateFixture(fixtureDef);
                addEventListener(Event.ENTER_FRAME, updateWorld);
            }

            private function updateWorld(e:Event):void {
                world.Step(1/30, 10, 10);
                world.ClearForces();
            }
        }
    }

Time to save the project and test it. Ready to see your first Box2D body in action? Run the movie! OK, it did not display anything. Before you throw this article away, let me tell you that Box2D only simulates the physics world; it does not display anything. This means your body is alive and kicking in your Box2D world, it's just that you can't see it.

Creating a box shape

Let's perform the following steps. First, the body and fixture definitions can be reused to define our new body. We don't need to declare another bodyDef variable; we just reuse the one we used for the creation of the sphere, changing only its position:

    bodyDef.position.Set(320/worldScale, 470/worldScale);

Now the body definition is located at the horizontal center, close to the bottom of the screen. To create a polygon shape, we will use the b2PolygonShape class:

    var polygonShape:b2PolygonShape = new b2PolygonShape();

This creates a polygon shape in the same way we created the circle shape earlier. Polygon shapes must follow some restrictions, but because at the moment we only need an axis-aligned box, the SetAsBox method is all we need.
    polygonShape.SetAsBox(320/worldScale, 10/worldScale);

The method requires two arguments: the half-width and the half-height of the box. In the end, our new polygon shape will have its center at pixel (320, 470), a width of 640 pixels, and a height of 20 pixels: just what we need to create a floor. Now we change the shape attribute of the fixture definition, attaching the new polygon shape:

    fixtureDef.shape = polygonShape;

Finally, we create the world body and embed the fixture in it, just like we did with the sphere:

    var theFloor:b2Body = world.CreateBody(bodyDef);
    theFloor.CreateFixture(fixtureDef);

The following is how your Main function should look now:

    public function Main() {
        world = new b2World(new b2Vec2(0, 9.81), true);
        var bodyDef:b2BodyDef = new b2BodyDef();
        bodyDef.position.Set(320/worldScale, 30/worldScale);
        var circleShape:b2CircleShape;
        circleShape = new b2CircleShape(25/worldScale);
        var fixtureDef:b2FixtureDef = new b2FixtureDef();
        fixtureDef.shape = circleShape;
        var theBall:b2Body = world.CreateBody(bodyDef);
        theBall.CreateFixture(fixtureDef);
        bodyDef.position.Set(320/worldScale, 470/worldScale);
        var polygonShape:b2PolygonShape = new b2PolygonShape();
        polygonShape.SetAsBox(320/worldScale, 10/worldScale);
        fixtureDef.shape = polygonShape;
        var theFloor:b2Body = world.CreateBody(bodyDef);
        theFloor.CreateFixture(fixtureDef);
        var debugDraw:b2DebugDraw = new b2DebugDraw();
        var debugSprite:Sprite = new Sprite();
        addChild(debugSprite);
        debugDraw.SetSprite(debugSprite);
        debugDraw.SetDrawScale(worldScale);
        debugDraw.SetFlags(b2DebugDraw.e_shapeBit);
        debugDraw.SetFillAlpha(0.5);
        world.SetDebugDraw(debugDraw);
        addEventListener(Event.ENTER_FRAME, updateWorld);
    }

Test the movie and you'll see the floor:
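Box2D works in meters while the Flash stage works in pixels, which is why every coordinate in the listings is divided by worldScale. The conversion is just a ratio; here is a sketch in Python rather than ActionScript (the helper names are ours):

```python
WORLD_SCALE = 30.0  # pixels per meter, matching worldScale in the class above

def to_meters(pixels: float) -> float:
    """Convert screen pixels to Box2D meters."""
    return pixels / WORLD_SCALE

def to_pixels(meters: float) -> float:
    """Convert Box2D meters back to screen pixels for rendering."""
    return meters * WORLD_SCALE

# The floor's 320-pixel half-width becomes roughly 10.67 meters in Box2D space:
print(round(to_meters(320), 2))  # 10.67
```

Keeping this scale in one place matters: if physics were run directly in pixel units, a 640-pixel floor would be a 640-meter object, far outside the size range Box2D is tuned for.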

Packt
09 Nov 2012
4 min read

Blender Engine: Characters

(For more resources related to this topic, see here.)

An example - save the whale!

The game can increase its level of difficulty as we develop our world using different environments, and we can always extend the character's capabilities with new keyboard functions. Obviously, this is just an example. Feel free to change the game, remake it, create a completely different character, and provide gameplay from another genre. There are thousands of possibilities, and it's fine if you deviate from our idea. What matters is that your design is clear before you start building your game library. That's all.

How to create a library

If we are going to work with the Blender Game Engine (BGE), we should build a library of all of the objects we use in our game: the basic character, and even the smallest details, such as the health indicators of our enemies. On the Internet we can find plenty of 3D objects that can be useful for our game. Make sure you use free models and read each model's usage instructions, and do not forget to credit the author of each object that you download.

Time for action - downloading models from the Internet

Let's go to one of the repositories for Blender, which can be found at http://www.opengameart.org/, and try to search for something close to our character. Write sea in the Search box, and choose 3D Art for Art Type, as shown in the following screenshot:

We have some interesting options. We see a shark, seaweed, and some icebergs to select from. Choose and click on the thumbnail with the name ICEBERGS IN 3D. At the bottom of the page you will find the downloadable .blend file; click on it with RMB to start the download. Now, let's try web pages with libraries that offer 3D models in other formats. An example of a very extensive library is http://sketchup.google.com/3dwarehouse/. Write trawler in the Search box, and choose the one that you like.
In our case, we decided to go for the Google 3D model with the title Trawler boat, 28'. Click on the Download model button and save the file on your hard disk, in a folder of your game.

What just happened?

We have searched the Internet for 3D models, which will let us start a library of game objects in Blender. Whether they are .blend files (the original Blender format) or another 3D-model format, you can import them and work with them. Don't download models that you will not use; the libraries on the Internet grow every day, and we don't need to save every model that we like. Remember that before downloading a model and using it, we need to check that it has a free license. If you are releasing your project under some other free and/or open source license, there could be licensing conflicts depending on what license the art is released under. It is your responsibility to verify the compatibility of the art license with the license you are using.

Importing other files into Blender

Before an imported mesh can be used, some scene and mesh prepping in Blender is usually required; this basically cleans up the imported model. Google SketchUp is another free 3D software option. You can build models from scratch, and you can upload or download what you need, as you have seen. People all over the world share what they've made on the Google 3D Warehouse. It's our time to do the same. Download the program from http://sketchup.com and install it. You can uninstall it later. Open the boat file in SketchUp, click on Save as, and export the 3D model using the COLLADA format. The *.dae COLLADA format is a common, cross-platform file format that can be imported directly into Blender.

Packt
06 Nov 2012
6 min read

Retopology in 3ds Max

(For more resources related to this topic, see here.)

High poly model import

Different applications are biased toward different file formats and may therefore have different import procedures. The Send To functionality transfers models between Mudbox and 3ds Max (which is possible as both are Autodesk products); this is essentially an .fbx transfer. If you are using ZBrush, you will want to get used to the equivalent GoZ transfer feature. Note that GoZ must be run from ZBrush to 3ds Max before it can go in the other direction. GoZ also works with the free mini-modeler tool from Pixologic (who make ZBrush too) called Sculptris, which is available at http://www.pixologic.com/sculptris/. In the following example, we'll directly export from Sculptris a model made from a sphere (so it needs retopology to get a clean base mesh). We'll export it as an .obj and import it into 3ds Max in order to show a few of the idiosyncrasies of this situation. With a model that is sculpted from a primitive base, such as a sphere or box, there are no meaningful texture coordinates, so it would be impossible to paint the model. Although many sculpting programs, including Sculptris, do automapping, the results are seldom optimal.

Importing a model into Sculptris

The following steps detail how to open a model in Sculptris:

Install Sculptris 6 Alpha and run it. Note that the default scene is a sphere made of triangle faces that dynamically subdivide where you paint. Use the brush tools to experiment with this for a while. Click on Open, browse the provided content for this article, and open Packt3dsMaxChapter 9Creature.sc1. The .sc1 file format is native to Sculptris. To get this model to work in 3ds Max, choose Export and save it instead as Sculptris.obj.

Importing the Sculptris.obj mesh into 3ds Max

Now that we have exported a model from Sculptris, let's see how to bring it into 3ds Max. The importing part is fairly easy.
In 3ds Max, choose File | Import and browse to Sculptris.obj, the mesh you just exported from Sculptris. You could also try the example .obj called Packt3dsMaxChapter 9RetopoBullStart.obj. The import options you set matter a lot. Make sure that the options Import as single mesh and Import as Editable Poly are on. This ensures that the symmetry object used in the Sculptris scene (actually a separate mesh that conforms to the model) doesn't prevent the import. While importing, you should also swap the Normals radio button from the From SM Group option to Auto Smooth, to avoid the triangulated mesh looking faceted. A model begun in Sculptris won't contain any smoothing information when sent to 3ds Max and will come in faceted if you don't choose Auto Smooth. Alternatively, you can apply a Smooth modifier after importing. The Auto Smooth value should be 90 or so, to reduce the likelihood of any sharp creases.

Finally, once the model is imported into the 3ds Max scene, move it along the Z axis so it stands on the ground plane, and make sure its pivot is at 0,0,0. This can be done by choosing Edit | Transform Toolbox and clicking on Origin in the Pivot section. Note that the model's edges are all tiny triangles; this is a result of the way Sculptris adds detail to a model. Retopology will help us get a quad-based model to continue working from. The idea of retopology is to build a new, well-constructed model on top of the high-resolution model, which serves as a guide surface. If you are curious, apply an Unwrap UVW modifier to the model and see how its UV mapping looks. Probably a bit scary. A high-resolution model such as this one (250K polys) is virtually impossible to UV map manually, at least not quickly, so we need to simplify the model. If you can't see the Ribbon, go to Customize | Show UI | Show Ribbon, or press the icon in the main toolbar. Then click on the Freeform tab.
With the creature mesh selected, click on Grid in the Freeform tab. This specifies the source to which we'll conform the new mesh that we're going to generate next. We don't want to conform to the grid, so change this to Draw On: Surface, and then assign the source mesh using the Pick button below the Surface button, shown in the following screenshot:

Each time you relaunch 3ds Max to keep working on the retopology, you'll have to reassign the high-resolution mesh as the source surface in the same way. You could also use Draw On: Selection, which would be handy if the source was, in fact, a bunch of different meshes. There is an Offset value you can adjust so that the mesh you generate sits slightly above the source mesh; this can help reduce frustration, as the lower-resolution mesh is likely to sink in places within the curvier high-resolution mesh. If you're just starting out, try leaving the setting alone and see how it turns out. An additional way to see what you are doing is to apply a semitransparent material or a low Visibility value to the high-resolution model (or press Alt + X while it is selected).

Next, in a nested part of the Ribbon, we have to set up a new object to work on (one that doesn't exist yet). Click on the PolyDraw rollout at the bottom of the Freeform tab. Having expanded PolyDraw, click on the New Object button, and we're ready to start retopologizing. I would strongly suggest raising the Min Distance value in the PolyDraw section, so that the first polygons you create aren't too small. When using the Strips brush, I usually set the Min Distance to around 25-35, but it depends on the model scale and the level of detail you want. Just as with modeling, when you retopologize it is best to move from large forms to small details. The object will be called something like Box001, an Editable Poly starting in the Vertex mode. You can rename it to Retopo or something more memorable.
Turn on the Strips mode and make sure Edged Faces is toggled on (F4) so you can see the high-resolution model's center line. Starting at the head, draw a strip of polygons along the symmetry line so that there's an edge on either side. As this model is symmetrical, we only have to work on half of it. If you hold the mouse over the Strips mode icon, you'll get a tool tip that explains how strips are made, and if you press Y, you can watch a video demo (albeit one drawn on the grid). Note that the size of the polygons as you draw is determined by the Min Distance value under PolyDraw. Bear in mind that, apart from the Min Distance value, the size of the drawn polygons also depends on the current viewport zoom. This is handy, because when working on tighter detail you'll tend to zoom in closer to the source mesh.

Packt
23 Oct 2012
13 min read

CryENGINE 3: Breaking Ground with Sandbox

The majority of games created using the CryENGINE SDK have historically been first-person shooters containing a mix of sandbox and directed gameplay. If you have gone so far as to purchase a book on the use of the CryENGINE 3 SDK, then I am certain that you have had some kind of idea for a game, or even improvements to existing games, that you might want to make. It has been my professional experience that if you want to share or sell such ideas, presenting them in a playable format, even in early prototype form, is far more effective and convincing than any PowerPoint presentation or 100-page design document.

Reducing, reusing, recycling

Good practice when creating prototypes and smaller-scale games, especially if you lack the expertise to create certain assets and code, is to reduce, reuse, and recycle. To break down what I mean:

Reduce the amount of new assets and new code you need to make
Reuse existing assets and code in new and unique ways
Recycle the sample assets and code provided, and convert them for your own uses

Developing with CryENGINE out of the box

As mentioned earlier, the CryENGINE 3 SDK has a huge number of out-of-the-box features for creating games. Let's begin by following a few simple steps to make our first game world. Before proceeding with this example, it's important to understand the features it demonstrates; the level we will have created by the end of this article will not be a full, playable game, but rather a unique creation of yours, constructed using the first major features we will need in our game. It will provide an environment into which we can design gameplay. With the ultimate goal of this article being to create our own level with the core features immediately available to us, keep in mind that these examples are oriented to complement a first-person shooter rather than other genres.
The first-person shooter genre is quite well defined, as new games come out every year within it, so it should be fairly easy for any developer to follow these examples. In my career, I have seen that you can indeed accomplish a good cross-section of different games with the CryENGINE 3 SDK. However, the third- and first-person genres are significantly easier to create immediately with the example content and features available right out of the box.

For the designers: This article is truly a must-have for designers working with the engine. I would also highly recommend that all users of Sandbox learn these features, as they are the principal features typically used within most levels of the different types of games made in CryENGINE.

Time for action - creating a new level

Let's follow a few simple steps to create our own level:

Start the Editor.exe application. Select File | New. This will present you with a New Level dialog box that allows you to adjust some principal properties of your masterpiece to come. The following screenshot shows the properties available in New Level:

Name this new level Book_Example_1. The name that you choose here will identify this level for loading later, and it is also used to create a folder and a .cry file of the same name. In the Terrain section of the dialog box, set Heightmap Resolution to 1024x1024, and Meters Per Unit to 1. Click on OK and your new level will begin to load. This should happen relatively fast, but will depend on your computer's specifications. You will know the level has loaded when you see Ready in the status bar. You will also see an ocean stretching out infinitely and some terrain slightly underneath the water. Maneuver your camera so that you have a good overall view of the map you will create, as seen in the following screenshot:

What just happened?

Congratulations! You now have an empty level to mold and modify at your will.
Before moving on, let's talk a little about the properties that we just set, as they are fundamental properties of levels within CryENGINE. It is important to understand these because, depending on the type of game you are creating, you may need bigger or smaller maps, or you may not need terrain at all.

Using the right Heightmap Resolution

When we created the new level, we chose a Heightmap Resolution of 1024x1024. Each pixel on the heightmap stores a grey level, which is applied to the terrain polygons: depending on the level of grey, the polygon at that point is moved to a certain height. This is called displacement. Heightmap values range from full black to full white, where full white is maximum displacement and full black is minimum (no) displacement. The higher the resolution of the heightmap, the more pixels are available to represent different features, so you can achieve more definition and a more accurate geometrical representation of your terrain. The settings range from the smallest resolution of 128x128 all the way to the largest supported resolution of 8192x8192. The following screenshot shows the difference between high-resolution and low-resolution heightmaps:

Scaling your level with Meters Per Unit

If the Heightmap Resolution parameter is thought of in terms of pixels, then this setting can also be viewed as Meters Per Pixel: each pixel of the heightmap is represented by this many meters in the level. For example, if a heightmap is set to 4 Meters Per Unit, then each pixel on the generated heightmap measures 4 meters in length and width in the level. Even though Meters Per Unit can be used to increase the size of your level, it will decrease the fidelity of the heightmap.
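The grey-to-height displacement described above is a linear mapping from black to white. A minimal sketch (assuming 8-bit grey values and the 256 m max height used later in this article; CryENGINE heightmaps can use higher bit depths):

```python
def pixel_to_height_m(grey: int, max_height_m: float = 256.0) -> float:
    """Linearly map an 8-bit grey value (0 = black, 255 = white) to terrain height."""
    return grey / 255.0 * max_height_m

print(pixel_to_height_m(0))    # 0.0   full black: no displacement
print(pixel_to_height_m(255))  # 256.0 full white: maximum displacement
```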
You will notice that attempting to smoothen out the terrain may be difficult, since this value sets a wider minimum triangle size. Keep in mind that you can adjust the unit size even after the map has been created; this is done through the Terrain Editor, which we will discuss shortly.

Calculating the real-world size of the terrain

The expected size of the terrain can easily be calculated before making the map, because the equation is simple: Heightmap Resolution x Meters Per Unit = Final Terrain Dimensions. For example:

(128x128) x 2m = 256x256m
(512x512) x 8m = 4096x4096m
(1024x1024) x 2m = 2048x2048m

Using or not using terrain

In most cases, levels in CryENGINE use some amount of terrain. The terrain itself is a highly optimized system with dynamic tessellation, which adjusts the density of polygons depending on the distance from the camera. Dynamic tessellation keeps areas of the terrain closer to the camera more defined and areas further away less defined, because the number of terrain polygons on screen has a significant impact on the performance of the level. In some cases, however, the terrain can be expensive in terms of performance, and if the game is set in an environment like space, or in interior corridors and rooms, then it might make sense to disable the terrain. Disabling the terrain in these cases saves an immense amount of memory and speeds up level loading and runtime performance. In this particular example we will use the terrain, but should you wish to disable it, simply go to the second tab in the RollupBar (usually called the environment tab) and set the ShowTerrainSurface parameter to false, as shown in the following screenshot:

Time for action - creating your own heightmap

You must have created a new map to follow this example.
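Before the hands-on steps, the terrain-size equation given earlier can be double-checked with a short function:

```python
def terrain_size_m(heightmap_resolution: int, meters_per_unit: float) -> float:
    """Edge length of the final terrain: resolution (pixels) times meters per pixel."""
    return heightmap_resolution * meters_per_unit

print(terrain_size_m(128, 2))   # 256  -> a 256 x 256 m level
print(terrain_size_m(512, 8))   # 4096 -> a 4096 x 4096 m level
print(terrain_size_m(1024, 2))  # 2048 -> a 2048 x 2048 m level
```

With the 1024x1024 resolution and 1 Meter Per Unit chosen for Book_Example_1, this gives a 1024 x 1024 m playable area.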
Having sufficiently beaten the terrain system to death through explanation, let's get on with what we are most interested in, which is creating our own heightmap to use for our game: As discussed in the previous example, you should now see a flat plane of terrain slightly submerged beneath the ocean. At the top of the Sandbox interface in the main toolbar, you will find a menu selection called Terrain; open this. The following screenshot shows the options available in the Terrain menu. As we want to adjust the terrain, we will select the Edit Terrain option. This will open the Terrain Editor window, which is shown in the following screenshot: You can zoom in and pan this window to further inspect areas within the map. Click-and-drag using the right mouse button to pan the view, and use the mouse wheel to zoom in and out. The Terrain Editor window has a multitude of options that can be used to manipulate the heightmap of your level. Before we start painting anything, we should first set the maximum height of the map to something more manageable: Click on Modify. Click on Set Max Height. Set your Max Terrain Height to 256. Note that the terrain height is measured in meters. Having now set the Max Height parameter, we are ready to paint! Using a second monitor: This is a good time to take advantage of a second monitor should you have one, as you can leave the perspective view on your primary monitor and view the changes made in the Terrain Editor on your second monitor, in real time. On the right-hand side of the Terrain Editor, you will see a rollout menu named Terrain Brush. We will first use this to flatten a section of the level. Change the Brush Settings to Flatten, and set the following values: Outside Radius = 100, Inside Radius = 100, Hardness = 1, Height = 20. NOTE: You can sample the terrain height in the Terrain Editor or the viewport using the Control shortcut when the Flatten brush is selected.
Now paint over the top half of the map. This will flatten the entire upper half of the terrain to 20 meters in height. You will end up with the following screenshot, where the dark portion represents the terrain; since it is relatively low compared to our max height, it will appear black: Note that, by default, the water is set to a height of 16 meters. Since we flattened our terrain to a height of 20 meters, we have a 4-meter difference from the terrain to the water in the center of the map. In the perspective viewport, this will look like a steep cliff going into the water. At the location where the terrain meets the water, it would make sense to turn this into a beach, as it's the most natural way to combine terrain and water. To do this, we will smooth the hard edge of the terrain along the water. As this is to become our beach area, let's now use the smooth tools to make it passable by the player: Change the Type of brush to Smooth and set the following parameters: Outside Radius = 50, Hardness = 1. I find it significantly easier to gauge the effects of the smooth brush in the perspective viewport. Paint the southern edge of the terrain, which will become our beach. It might be difficult to view the effects of the smooth brush in the Terrain Editor alone, so I recommend using the perspective viewport to paint your beach. Now that we have what will be our beach, let's sculpt some background terrain. Select the Rise/Lower brush and set the following parameters: Outside Radius = 75, Inside Radius = 50, Hardness = 0.8, Height = 1. Before painting, set the Noise Settings for the brush; to do so, set Enable Noise to true. Also set: Scale = 5, Frequency = 25. Paint the outer edges of the terrain while keeping an eye on the perspective viewport to check the actual height of the mountain-type structures this creates.
You can see the results in the Terrain Editor and perspective view, as seen in the following screenshots: While painting the terrain, this is a good time to use the shortcut for the smooth brush: in the perspective view, hold the Shift shortcut to switch to it. A good technique is to use the Rise/Lower brush and only click a few times, then use Shift to switch to the smooth brush and smooth the same area multiple times. This will give you some nice terrain variation, which will serve us nicely when we go to texture it. Don't forget the player's perspective: Remember to switch to game mode periodically to inspect your terrain from the player's level. It is often the case that we get caught up in the appearance of a map by looking at it from our point of view while building it, rather than from the point of view of the player, which is paramount for our game to be enjoyable to anyone playing it. Save this map as Book_Example_1_no_color.cry. What just happened? In this particular example, we used one of the three different techniques to create heightmaps within the CryENGINE Sandbox: The first technique, which we performed here, was manually painting the heightmap with a brush directly in Sandbox. The second technique, which we will explore later, is generating procedural terrain using the tools provided in Sandbox. Finally, the third technique is to import a previously created heightmap from another program. You now have a level with some terrain that looks somewhat like a beach, a flat land area, and some mountains. This is a great place to start for any outdoor map, as it allows us to use some powerful out-of-the-box engine features like the water and the terrain. Having the mountains surrounding the map also encourages the illusion of having more terrain behind them.
Have a go hero – using additional brush settings With the settings we just explored, try to add some more terrain variation into the map to customize it further, as per your game's needs. Try using different settings for the brushes we explored previously. You could try adding some islands out in the water off the coast of your beach, or some hills on the flat portion of the map. Use the Inside Radius and Outside Radius, which control the brush's falloff: the inner area has the strongest effect and the outer area the least. To create steeper hills or mountains, set the Inside Radius and Outside Radius to be relatively similar in size. To get a shallower and smoother hill, set the Inside Radius and Outside Radius further apart. Finally, try using the Hardness, which acts like the pressure applied to a brush by a painter on canvas. A good way to explain this is that if the Hardness is set to 1, then within one click you will have the desired height. If set to 0.01, then it will take 100 clicks to achieve an identical result. You can save these variations into different .cry files should you wish to do so.
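The brush behaviour described above can be modelled in a short sketch. This is a hypothetical illustration in Python, not engine code; the linear falloff between the radii and the click arithmetic for Hardness are assumptions inferred from the description in the text:

```python
import math

# Hypothetical model (not engine code) of the Terrain Brush settings described
# above. The linear falloff and the click arithmetic are assumptions inferred
# from the text, intended only to illustrate how the parameters interact.

def brush_weight(distance, inside_radius, outside_radius):
    """Falloff: full effect inside the inner radius, no effect beyond the
    outer radius, and a linear blend in between."""
    if distance <= inside_radius:
        return 1.0
    if distance >= outside_radius:
        return 0.0
    return (outside_radius - distance) / (outside_radius - inside_radius)

def clicks_to_reach_height(hardness):
    """Hardness acts like brush pressure: 1.0 reaches the desired height in a
    single click, while 0.01 needs about a hundred clicks for the same result."""
    return math.ceil(1.0 / hardness)

print(brush_weight(0, 50, 100))     # 1.0 (inside the inner radius)
print(brush_weight(75, 50, 100))    # 0.5 (halfway through the falloff)
print(clicks_to_reach_height(1.0))  # 1
print(clicks_to_reach_height(0.01)) # 100
```

Setting the two radii close together keeps the weight near full across the whole brush (steep hills), while spreading them apart produces the shallow, smooth slope described above.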
Mission Running in EVE Online
Packt
17 Oct 2012
14 min read
Mission types Missions in EVE fall into five general categories: Courier, Kill, Mining, Trade, and Special missions. Courier, Kill, Mining, and Trade missions are further categorized into five different levels: level I through level V. Let's take a closer look at each category to see what each type of mission entails. Courier missions Courier missions are simply delivery missions where you move items from one station to another. The type of items can range from common items found on the market to mission-specific cargo containers. The size and quantity of items to move can vary greatly, and they can either be moved using whatever ship you are currently flying or may require the use of an industrial ship. While the ISK reward for courier missions is a bit on the low side, the fact that they have no negative standing impact on opposing factions is a huge positive. Add to this that courier missions are quite easy and can often be completed very quickly, and they are generally worth running. Every so often you will receive a courier mission that takes you into Low-Sec space, and with the risks involved in Low-Sec, it is best to decline these missions. You are able to decline one mission every four hours per agent without taking a negative standing hit with that agent. Be very careful when declining missions, since if your standing falls too low with an agent you will no longer be able to receive mission offers from that agent. Kill missions By far the most common, the most profitable, and, let's be honest, the most fun missions to run are kill missions. The only thing better than flying beautiful spaceships is seeing beautiful spaceships blow up. There are currently two types of kill missions. In one, you warp to a set of coordinates in space where you engage NPC targets, and in the other you warp to an acceleration gate, which takes you into a series of pockets all connected by more acceleration gates.
Think of this second type of kill mission as a dungeon with multiple rooms, in which you fight NPC targets in each of the rooms. Kill missions are a great way to make ISK, especially when you are able to access higher-level agents. However, as you can guess, kill missions also get more and more difficult the higher the level of the agent. Another thing that you need to be very careful about when running kill missions is that you run the risk of a negative standing impact on factions opposed to the faction you are running missions for. For example, if you were running missions for the Caldari State and you completed a mission that had you destroy ships belonging to or friendly with the Gallente Federation, then you would lose standing with the Gallente. A great way to negate the standing loss is to run missions for as many agents as you can find and to decline any mission that would have you attack the ships of another empire. But remember, you can only decline one mission per agent every four hours, hence the value of having multiple agents. Agents and how they work Now would be a great time to take a closer look at the agent system of EVE. Each agent works in a specific division of an NPC corporation. The division that the agent works in determines the type of missions you will receive (more on this later), and the NPC corporation that the agent works for determines which faction's standing your rewards will affect. An agent's information sheet is shown in the following image: As you can see, the agent Aariwa Tenanochi works for the Home Guard in their Security division and is a level 1 agent. Aariwa can be found in the Nonni system at the Home Guard Assembly Plant located at planet III Moon 1. You can also see that the only requirement to receive missions from Aariwa is that you meet his standing requirements. Doing the tutorial missions will allow you to access all level 1 agents for your chosen faction. All agents are rated level 1 to level 5.
The higher the level of the agent, the harder their missions are, and the harder the mission, the better and bigger the reward. The difficulty of each agent level can best be described as follows: Level 1: Very Easy. Designed for new players to learn about mission running and PvE in general. Frigate or destroyer for kill missions, short-range and low-cost items for courier and trade missions, and small amounts of low-grade ore or minerals for mining missions. Level 2: Easy. Still designed for beginner players, but will require you to start thinking tactically. Destroyer or cruiser for kill missions, slightly longer range and higher-cost items for courier and trade missions, and higher amounts of low-grade ore or minerals for mining missions. Level 3: Medium. You will need to have a firm understanding of how your ship works and have solid tactics for it. Battlecruiser or Heavy Assault Cruiser for kill missions, much longer range and higher-cost items for courier and trade missions, and large amounts of low-grade ore or minerals for mining missions. Level 4: Hard. You will need to understand your ship inside out and have solid tactics and a solid fit for your ship. Battleship for kill missions, very long range and very costly items for courier and trade missions, and very large amounts of low-grade ore or minerals for mining missions. Level 5: Very Hard. The same as level 4 agents, but only found in Low-Sec systems, with all the risks that Low-Sec brings. It is a good idea to only do these missions in a fleet with a group of people. The standing required to access each level of agent is as follows: Level 1: none; Level 2: 1.00; Level 3: 3.00; Level 4: 5.00; Level 5: 7.00. While you can use larger ships to run lower-level kill missions, it is generally not a good idea.
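The standing thresholds just listed can be captured in a small helper. This is an illustrative Python sketch, not anything from the game client; the function and dictionary names are my own:

```python
# Illustrative helper (not game code): the standing thresholds listed above,
# and the highest agent level a given standing unlocks. Names are my own.
REQUIRED_STANDING = {2: 1.00, 3: 3.00, 4: 5.00, 5: 7.00}

def highest_agent_level(standing):
    """Level 1 agents have no requirement; each later level needs the
    standing shown above."""
    level = 1
    for agent_level in (2, 3, 4, 5):
        if standing >= REQUIRED_STANDING[agent_level]:
            level = agent_level
    return level

print(highest_agent_level(0.0))  # 1
print(highest_agent_level(3.2))  # 3
print(highest_agent_level(7.0))  # 5
```

A new character with no standing can only see level 1 agents, while grinding up to 7.00 opens the risky but lucrative level 5 missions.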
First, there can be ship limitations for a mission that do not allow you to use your larger ship, and second, while your larger ship will be able to handle the incoming damage much more easily, the larger weapons that your ship uses will have a harder time hitting the smaller targets. And since ammo costs ISK, it cuts into your profit per mission. I generally like to use the cheapest ship I can get away with when running missions. This ensures that if I ever get scanned down and killed by player pirates, my loss is minimal. Understanding your foe The biggest key to success when running kill missions is understanding who your targets will be and how they like to fight. Whenever you are offered a kill mission by an agent, you can find out which faction your target belongs to from the mission description, either in the text of the description or from an emblem representing your target's faction. It is important to know which faction your target belongs to because then you will know what kind of damage to expect from your target. In the mission offer shown in the following image, you can see that your targets for this mission are Rogue Drones: There are four types of damage: EM, Thermal, Kinetic, and Explosive, and each faction has a preference toward specific damage types. Once you know what kind of damage to expect, you can then tailor the defense of your ship for maximum protection against that damage. Knowing which faction your target belongs to will also tell you which type of damage you should be dealing to maximize your damage output. Each faction has weaknesses to specific damage types. Along with what types of damage to defend against and what types of damage to use, knowing which faction your target belongs to will also tell you what kind of special tactic ships you may encounter in your mission.
For example, the Caldari like to use ECM to jam your targeting systems, and the Amarr use energy neutralizers to drain your ship's capacitor, leaving you unable to use weapons or even to warp away. The following table shows all the factions along with the damage types to defend against and the damage types to deal:

Guristas: defend Kinetic, Thermal; deal Kinetic, Thermal
Serpentis: defend Thermal, Kinetic; deal Thermal
Blood Raider: defend EM, Thermal; deal EM, Thermal
Sansha's Nation: defend EM, Thermal; deal EM, Thermal
Angel Cartel: defend Explosive, Kinetic, Thermal, EM; deal Explosive
Mordu's Legion: defend Kinetic, Thermal, Explosive, EM; deal Thermal, Kinetic
Mercenary: defend Kinetic, Thermal; deal Thermal, Kinetic
Republic Fleet: defend Explosive, Thermal, Kinetic, EM; deal Explosive, Kinetic
Caldari Navy: defend Kinetic, Thermal; deal Kinetic, Thermal
Amarr Navy: defend EM, Thermal, Kinetic; deal EM, Thermal
Federation Navy: defend Thermal, Kinetic; deal Kinetic, Thermal
Rogue Drones: defend Explosive, Kinetic, EM, Thermal; deal EM
Thukker Tribe: defend Explosive, Thermal; deal EM
CONCORD: defend EM, Thermal, Kinetic, Explosive; deal Explosive, Kinetic
Equilibrium of Mankind: defend Kinetic, Thermal; deal Kinetic, EM

You will most likely have noticed that several factions use the same damage types, but listed in different orders. The damage types are listed this way because the damage output or weakness is not divided evenly. The Serpentis pirate faction, for example, prefers a combination of Kinetic and Thermal damage, with a higher percentage of Kinetic damage than Thermal damage. What ship to use to maximize earnings The first question that most new mission runners ask is "What race of ships should I use?" While each race has its own advantages and disadvantages and all are very good choices, in my opinion Gallente drone ships are the best ships to use for mission running, from a pure ISK-making standpoint. Gallente drone ships are the best when it comes to mission running because of the lack of ammo costs and the versatility they offer.
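A few rows of the faction table above can be expressed as a simple lookup. The data is copied directly from the table; the fit_advice helper itself is hypothetical and only illustrates how you might automate the "tank this, deal that" decision:

```python
# A lookup built from a few rows of the faction table above (data copied from
# the article; the fit_advice helper itself is hypothetical). Damage types are
# listed in descending order of importance, as the text explains.
FACTION_DAMAGE = {
    # faction: (damage to defend against, damage to deal)
    "Guristas":     (["Kinetic", "Thermal"], ["Kinetic", "Thermal"]),
    "Serpentis":    (["Thermal", "Kinetic"], ["Thermal"]),
    "Blood Raider": (["EM", "Thermal"], ["EM", "Thermal"]),
    "Angel Cartel": (["Explosive", "Kinetic", "Thermal", "EM"], ["Explosive"]),
    "Rogue Drones": (["Explosive", "Kinetic", "EM", "Thermal"], ["EM"]),
}

def fit_advice(faction):
    """One-line summary of how to tank and what damage to bring."""
    defend, deal = FACTION_DAMAGE[faction]
    return f"vs {faction}: tank {'/'.join(defend)}, deal {'/'.join(deal)} damage"

print(fit_advice("Serpentis"))
# vs Serpentis: tank Thermal/Kinetic, deal Thermal damage
```

Looking up the mission's faction before undocking tells you both which resistances to plug and which damage type to load.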
With drones being your primary method of dealing damage, you do not have to worry about the cost of a large supply of ammo like the Caldari or Minmatar do. What about the Amarr, you say? They don't need ammo. While it is true that the Amarr also do not require ammo, the heavy capacitor drain of lasers limits the amount of defense your ship can have. Drone ships do not need to fit weapons in their high slots, and therefore you can fit your ship with the maximum amount of defense possible. Drone ships can also use the speed and range of drones to their advantage. By using modules such as the Drone Link Augmentor, you can increase the range of your drones so that you can comfortably sit outside the attack range of your targets and let your drones do all the work. Being outside of the attack range also means that you are outside the range of electronic warfare, so it's a win-win situation. The best feature of drone ships is the ability to carry different types of drones: Light Drones for frigate-sized targets, Medium Drones for cruiser-sized targets, and Heavy Drones for battleship-sized targets. You can even carry Electronic Warfare Drones that can jam your targets, drain their capacitors, or even provide more defense for your ship by way of shield or armor repairs. How should I fit my ship? The second most common question is "How should I have my ship fitted for mission running?" Ship fitting is very subjective and almost an art form in itself. I could easily go on for pages about the different concepts behind ship fittings and why one way is better than another, but there will always be people who disagree, because what works for one person does not necessarily work for you. The best thing to do is to visit websites such as eve.battleclinic.com/browse_loadouts.php to get ideas on how ship fittings should work and to use third-party software such as Python Fitting Assistant to come up with fittings that suit your style of play best.
When browsing fittings on BattleClinic, it is a good idea to make sure the filters at the bottom right are set to the most recent expansions. You wouldn't want to use a completely out-of-date fitting that will only get you killed. Basic tactics When you start a kill mission, you will likely be faced with one of two scenarios. In the first, as soon as you come out of warp, targets will be on the offensive and immediately start attacking you. The second and more favorable scenario is to warp onto the field with multiple groups of targets in a semi-passive state. No matter which scenario you come across, the following few simple tactics will ensure everything goes that much more smoothly: Zoom out. I know it's hard because your ship is so pretty, but zoom the camera out. You need to have a high level of situational awareness at all times, and you can't do that if you're zoomed in on your ship all the time. Finish each wave or group of targets before engaging other targets. This will ensure that you are never too outnumbered and can more easily handle the incoming damage. Kill the tacklers first. Tacklers are ships that use electronic warfare to slow your ship or to prevent you from warping away. Since flying out of range or warping away is the best way to escape when things go bad, killing off the tacklers first will give you the best chance to escape intact. Kill the largest targets first. Taking out the largest targets first gives you two critical advantages: you take a lot of the damage against you off the field, and larger ships are much easier to hit. Save structures for last. If part of your mission is to shoot or destroy a structure, save that for last. Most of the time, firing on a structure will cause all the hostiles to attack you at once and may even spawn stationary defenses. This tactic does not apply to defensive towers, such as missile and tackling towers.
You should always kill these towers as soon as possible. Mining missions Mining missions come in two flavors. The first will have you travel to a set of coordinates given to you by your agent, mine a specific amount of a specific type of ore, and then return to your agent with the ore you have mined. The second will simply have you supply your agent with an amount of ore or mineral. For the second type of mining mission, you can either mine and refine the ore yourself or purchase it from the market. The ISK reward for mining missions is really bad, and in general you would make more ISK by simply spending the time mining, but once again, having no negative standing impact on opposing factions is a huge plus. So if you are a career miner, it can be worth running these missions for the standings gain. After all, you will have to increase your standing in order to maximize your refine yield. It is best to only accept the missions for which you already have the requested ore or mineral in stock and to decline the rest. Trade missions Trade missions are simple missions that require you to provide items to your agent at a specific station. These items can either be made by you or purchased off the market. Like courier and mining missions, the only upside to trade missions is that they can be completed without a negative standing impact on opposing factions. But with the high amount of ISK needed to complete these missions and the time involved, it is best to avoid them. If you choose to do these missions, check whether the required item is on sale at the destination station. If it is, you can often purchase it and complete the mission without ever leaving your current station.
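The engagement order from the kill-mission tactics earlier (tacklers first, then the largest hulls, with non-defensive structures saved for last) can be sketched as a simple sort. All ship names and the size ranking here are hypothetical example data, not from the game:

```python
# Illustrative sketch (all ship names and the size ranking are made-up example
# data) of the engagement order described earlier: tacklers first, then the
# largest hulls, with non-defensive structures saved for last.
SIZE_RANK = {"battleship": 0, "cruiser": 1, "frigate": 2}

def engagement_order(targets):
    """Each target is (name, kind, is_tackler). Lower sort keys are shot first."""
    def priority(target):
        name, kind, is_tackler = target
        if is_tackler:
            return (0, 0)            # tacklers before everything else
        if kind == "structure":
            return (2, 0)            # structures last
        return (1, SIZE_RANK[kind])  # bigger hulls before smaller ones
    return sorted(targets, key=priority)

wave = [
    ("Supply Depot", "structure", False),
    ("Pirate Battleship", "battleship", False),
    ("Webber Frigate", "frigate", True),
    ("Pirate Cruiser", "cruiser", False),
]
for name, _, _ in engagement_order(wave):
    print(name)
# Webber Frigate, Pirate Battleship, Pirate Cruiser, Supply Depot
```

Remember the caveat from the tactics section: defensive towers are not ordinary structures and should be destroyed as soon as possible rather than saved for last.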
Breaking Ground with Sandbox
Packt
12 Oct 2012
11 min read
What makes a game? We saw that the majority of games created on the CryENGINE SDK have historically been first-person shooters containing a mix of sandbox and directed gameplay. If you have gone so far as to purchase a book on the use of the CryENGINE 3 SDK, then I am certain that you have had some kind of idea for a game, or even improvements to existing games, that you might want to make. It has been my professional experience that if you have any of these ideas and want to share or sell them, ideas presented in a playable format, even in early prototype form, are far more effective and convincing than any PowerPoint presentation or 100-page design document. Reducing, reusing, recycling Good practice when creating prototypes and smaller-scale games, especially if you lack the expertise to create certain assets and code, is to reduce, reuse, and recycle. To break down what I mean: Reduce the amount of new assets and new code you need to make Reuse existing assets and code in new and unique ways Recycle the sample assets and code provided, and then convert them for your own uses Developing out of the box As mentioned earlier, the CryENGINE 3 SDK has a huge number of out-of-the-box features for creating games. Let's begin by following a few simple steps to make our first game world. Before proceeding with this example, it's important to understand the features it demonstrates; the level we will have created by the end of this article will not be a full, playable game, but rather a unique creation of yours, constructed using the first major features we will need in our game. It will provide an environment into which we can design gameplay. With the ultimate goal of this article being to create our own level with the core features immediately available to us, we must keep in mind that these examples are oriented to complement a first-person shooter and not other genres.
The first-person shooter genre is quite well defined, as new games come out every year within it. So, it should be fairly easy for any developer to follow these examples. In my career, I have seen that you can indeed accomplish a good cross-section of different games with the CryENGINE 3 SDK. However, the third- and first-person genres are significantly easier to create immediately with the example content and features available right out of the box. For the designers: This article is truly a must-have for designers working with the engine. I would, however, highly recommend that all users of Sandbox know how to use these features, as they are the principal features typically used within most levels of the different types of games in CryENGINE. Time for action - creating a new level Let's follow a few simple steps to create our own level: Start the Editor.exe application. Select File | New. This will present you with a New Level dialog box that allows you to adjust some principal properties of your masterpiece to come. The following screenshot shows the properties available in New Level: Name this New Level Book_Example_1. The name that you choose here will identify this level for loading later, as well as creating a folder and .cry file of the same name. In the Terrain section of the dialog box, set Heightmap Resolution to 1024x1024, and Meters Per Unit to 1. Click on OK and your New Level will begin to load. This should occur relatively fast, but will depend on your computer's specifications. You will know the level has been loaded when you see Ready in the status bar. You will also see an ocean stretching out infinitely and some terrain slightly underneath the water. Maneuver your camera so that you have a good, overall view of the map you will create, as seen in the following screenshot: What just happened? Congratulations! You now have an empty level to mold and modify at your will.
Before moving on, let's talk a little about the properties that we just set, as they are fundamental properties of the levels within CryENGINE. It is important to understand these, as depending on the type of game you are creating, you may need bigger or smaller maps, or you may not even need terrain at all. Using the right Heightmap Resolution When we created the New Level, we chose a Heightmap Resolution of 1024x1024. To explain this further, each pixel on the heightmap has a certain grey level. This pixel then gets applied to the terrain polygons, and depending on the level of grey, will move the polygon on the terrain to a certain height. This is called displacement. Heightmaps always have varying values from full white to full black, where full white is maximum displacement and full black is minimum or no displacement. The higher the resolution of the heightmap, the more the pixels that are available to represent different features on said heightmap. You can thus achieve more definition and a more accurate geometrical representation of your heightmap using higher resolutions. The settings can range from the smallest resolution of 128x128, all the way to the largest supported resolution of 8192x8192 . The following screenshot shows the difference between high resolution and low resolution heightmaps:   Scaling your level with Meters Per Unit If the Heightmap Resolution parameter is examined in terms of pixel size, then this dialog box can be viewed also as the Meters Per Pixel parameter . This means that each pixel of the heightmap will be represented by so many meters. For example, if a heightmap's resolution has 4 Meters Per Unit, then each pixel on the generated heightmap will measure to be 4 meters in length and width on the level. Even though Meters Per Unit can be used to increase the size of your level, it will decrease the fidelity of the heightmap. 
You will notice that attempting to smoothen out the terrain may be difficult, since there will be a wider, minimum triangle size set by this value. Keep in mind that you can adjust the unit size even after the map has been created. This is done through the terrain editor, which we will discuss shortly. Calculating the real-world size of the terrain The expected size of the terrain can easily be calculated before making the map, because the equation is not so complicated. The real-world size of the terrain can be calculated as: (Heightmap Resolution) x Meters Per Unit = Final Terrain Dimensions. For example: (128x128) x 2m = 256x256m (512x512) x 8m = 4096x4096m (1024x1024) x 2m = 2048x2048m Using or not using terrain In most cases, levels in CryENGINE will use some amount of the terrain. The terrain itself is a highly optimized system that has levels of dynamic tessellation, which adjusts the density of polygons depending on the distance from the camera to the player. Dynamic tessellation is used to make the more defined areas of the terrain closer to the camera and the less defined ones further away, as the amount of terrain polygons on the screen will have a significant impact on the performance of the level. In some cases, however, the terrain can be expensive in terms of performance, and if the game is made in an environment like space or interior corridors and rooms, then it might make sense to disable the terrain. Disabling the terrain in these cases will save an immense amount of memory, and speed up level loading and runtime performance. In this particular example, we will use the terrain, but should you wish to disable it, simply go to the second tab in the RollupBar (usually called the environment tab) and set the ShowTerrainSurface parameter to false , as shown in the following screenshot:   Time for action - creating your own heightmap You must have created a new map to follow this example. 
Having sufficiently beaten the terrain system to death through explanation, let's get on with what we are most interested in, which is creating our own heightmap to use for our game: As discussed in the previous example, you should now see a flat plane of terrain slightly submerged beneath the ocean. At the top of the Sandbox interface in the main toolbar, you will find a menu selection called Terrain; open this. The following screenshot shows the options available in the Terrain menu. As we want to adjust the terrain, we will select the Edit Terrain option. This will open the Terrain Editor window, which is shown in the following screenshot: You can zoom in and pan this window to further inspect areas within the map. Click-and-drag using the right mouse button to pan the view and use the mouse wheel to zoom in and zoom out. The Terrain Editor window has a multitude of options, which can be used to manipulate the heightmap of your level. Before we start painting anything, we should first set the maximum height of the map to something more manageable: Click on Modify. Click on Set Max Height. Set your Max Terrain Height to 256. Note that the terrain height is measured in meters.     Having now set the Max Height parameter, we are ready to paint! Using a second monitor: This is a good time to take advantage of a second monitor should you have one, as you can leave the perspective view on your primary monitor and view the changes made in the Terrain Editor on your second monitor, in real time. On the right-hand side of the Terrain Editor , you will see a rollout menu named Terrain Brush. We will first use this to flatten a section of the level. Change the Brush Settings to Flatten, and set the following values: Outside Radius = 100 Inside Radius = 100 Hardness = 1 Height = 20     NOTE: You can sample the terrain height in the Terrain Editor or the view port using the shortcut Control when the flatten brush is selected. Now paint over the top half of the map. 
This will flatten the entire upper half of the terrain to 20 meters in height. You will end up with the following screenshot, where the dark portion represents the terrain; since it is relatively low compared to our max height, it appears black:

Note that, by default, the water is set to a height of 16 meters. Since we flattened our terrain to a height of 20 meters, we have a 4-meter difference between the terrain and the water in the center of the map. In the perspective viewport, this will look like a steep cliff going into the water. Where the terrain meets the water, it would make sense to create a beach, as it's the most natural way to combine terrain and water. To do this, we will smooth the hard edge of the terrain along the water. As this is to become our beach area, let's now use the smooth tools to make it passable by the player:

1. Change the Type of brush to Smooth and set the following parameters:
   - Outside Radius = 50
   - Hardness = 1
2. Paint the southern edge of the terrain, which will become our beach. It might be difficult to gauge the effects of the smooth brush in the Terrain Editor alone, so I recommend using the perspective viewport to paint your beach; I find it significantly easier to judge the results there.
3. Now that we have what will be our beach, let's sculpt some background terrain. Select the Rise/Lower brush and set the following parameters:
   - Outside Radius = 75
   - Inside Radius = 50
   - Hardness = 0.8
   - Height = 1
4. Before painting, set the Noise Settings for the brush; to do so, set Enable Noise to true. Also set:
   - Scale = 5
   - Frequency = 25
5. Paint the outer edges of the terrain while keeping an eye, in the perspective viewport, on the actual height of the mountain-type structure this creates.
You can see the results in the Terrain Editor and perspective view, as seen in the following screenshots:

This is a good time to use the shortcut to switch to the smooth brush while painting the terrain. While in the perspective view, switch to the smooth brush using the Shift shortcut. A good technique is to use the Rise/Lower brush and click only a few times, then hold Shift to switch to the smooth brush and smooth the same area multiple times. This will give you some nice terrain variation, which will serve us well when we go to texture it.

Don't forget the player's perspective: Remember to switch to game mode periodically to inspect your terrain from the player's level. It is often the case that we get caught up in the appearance of a map from our point of view while building it, rather than from the point of view of the player, which is paramount if our game is to be enjoyable to anyone playing it.

Save this map as Book_Example_1_no_color.cry.


XNA 4-3D: Getting the battle-tanks into game world

Packt
20 Sep 2012
16 min read
Adding the tank model

For tank battles, we will be using a 3D model available for download from the App Hub website (http://create.msdn.com) in the Simple Animation code sample available at http://xbox.create.msdn.com/en-US/education/catalog/sample/simple_animation. Our first step will be to add the model to our content project in order to bring it into the game.

Time for action – adding the tank model

We can add the tank model to our project by following these steps:

1. Download the 7089_06_GRAPHICSPACK.ZIP file from the book's website and extract the contents to a temporary folder.
2. Select the .fbx file and the two .tga files from the archive and copy them to the Windows clipboard.
3. Switch to Visual Studio and expand the Tank BattlesContent (Content) project.
4. Right-click on the Models folder and select Paste to copy the files on the clipboard into the folder.
5. Right-click on engine_diff_tex.tga inside the Models folder and select Exclude From Project.
6. Right-click on turret_alt_diff_tex.tga inside the Models folder and select Exclude From Project.

What just happened?

Adding a model to our game is like adding any other type of content, though there are a couple of pitfalls to watch out for. Our model includes two image files (the .tga files, an image format commonly associated with 3D graphics because it is not encumbered by patents) that will provide texture maps for the tank's surfaces. Unlike the other textures we have used, we do not want to include them as part of our content project.

Why not? The content processor for models will parse the .fbx file (an Autodesk file format used by several 3D modeling packages) at compile time and look for the textures it references in the directory the model is in. It will automatically process these into .xnb files that are placed in the output folder (Models, for our game).

If we were to also include these textures in our content project, the standard texture processor would convert the images just as it does with the textures we normally use. When the model processor comes along and tries to convert a texture, an .xnb file with the same name will already exist in the Models folder, causing compile-time errors.

Incidentally, even though the images associated with our model are not included in our content project directly, they still get built by the content pipeline and stored in the output directory as .xnb files. They can be loaded just like any other Texture2D object with the Content.Load() method.

Free 3D modeling software

There are a number of freely available 3D modeling packages downloadable on the Web that you can use to create your own 3D content. Some of these include:

- Blender: A free, open source 3D modeling and animation package. Feature rich, and very powerful. Blender can be found at http://www.blender.org.
- Wings 3D: A free, open source 3D modeling package. It does not support animation, but includes many useful modeling features. Wings 3D can be found at http://wings3d.com.
- Softimage Mod Tool: A modeling and animation package from Autodesk. The Softimage Mod Tool is available freely for non-commercial use. A version with a commercial-friendly license is also available to XNA Creator's Club members at http://usa.autodesk.com/adsk/servlet/pc/item?id=13571257&siteID=123112.

Building tanks

Now that the model is part of our project, we need to create a class that will manage everything about a tank. While we could simply load the model in our TankBattlesGame class, we need more than one tank, and duplicating all of the items necessary to handle both tanks does not make sense.

Time for action – building the Tank class

We can build the Tank class using the following steps:

1. Add a new class file called Tank.cs to the Tank Battles project.
2. Add the following using directives to the top of the Tank.cs class file:

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
```

3. Add the following fields to the Tank class:

```csharp
#region Fields
private Model model;
private GraphicsDevice device;
private Vector3 position;
private float tankRotation;
private float turretRotation;
private float gunElevation;
private Matrix baseTurretTransform;
private Matrix baseGunTransform;
private Matrix[] boneTransforms;
#endregion
```

4. Add the following properties to the Tank class:

```csharp
#region Properties
public Vector3 Position
{
    get { return position; }
    set { position = value; }
}

public float TankRotation
{
    get { return tankRotation; }
    set { tankRotation = MathHelper.WrapAngle(value); }
}

public float TurretRotation
{
    get { return turretRotation; }
    set { turretRotation = MathHelper.WrapAngle(value); }
}

public float GunElevation
{
    get { return gunElevation; }
    set
    {
        gunElevation = MathHelper.Clamp(
            value,
            MathHelper.ToRadians(-90),
            MathHelper.ToRadians(0));
    }
}
#endregion
```

5. Add the Draw() method to the Tank class, as follows:

```csharp
#region Draw
public void Draw(ArcBallCamera camera)
{
    model.Root.Transform = Matrix.Identity *
        Matrix.CreateScale(0.005f) *
        Matrix.CreateRotationY(TankRotation) *
        Matrix.CreateTranslation(Position);

    model.CopyAbsoluteBoneTransformsTo(boneTransforms);

    foreach (ModelMesh mesh in model.Meshes)
    {
        foreach (BasicEffect basicEffect in mesh.Effects)
        {
            basicEffect.World = boneTransforms[mesh.ParentBone.Index];
            basicEffect.View = camera.View;
            basicEffect.Projection = camera.Projection;
            basicEffect.EnableDefaultLighting();
        }
        mesh.Draw();
    }
}
#endregion
```

6. In the declarations area of the TankBattlesGame class, add a new List object to hold a list of Tank objects, as follows:

```csharp
List<Tank> tanks = new List<Tank>();
```

7. Create a temporary tank so we can see it in action by adding the following to the end of the LoadContent() method of the TankBattlesGame class:

```csharp
tanks.Add(
    new Tank(
        GraphicsDevice,
        Content.Load<Model>(@"Models\tank"),
        new Vector3(61, 40, 61)));
```

8. In the Draw() method of the TankBattlesGame class, add a loop to draw all of the Tank objects in the tanks list after the terrain has been drawn, as follows:

```csharp
foreach (Tank tank in tanks)
{
    tank.Draw(camera);
}
```

9. Execute the game. Use your mouse to rotate and zoom in on the tank floating above the top of the central mountain in the scene, as shown in the following screenshot:

What just happened?

The Tank class stores the model that will be used to draw the tank in the model field. Just as with our terrain, we need a reference to the game's GraphicsDevice in order to draw our model when necessary. In addition to this information, we have fields (and corresponding properties) to represent the position of the tank, and the rotation angle of three components of the model.

The first, TankRotation, determines the angle at which the entire tank is rotated. As the turret of the tank can rotate independently of the direction in which the tank itself is facing, we store the rotation angle of the turret in TurretRotation. Both TankRotation and TurretRotation contain code in their property setters to wrap their angles around if we go past a full circle in either direction.

The last angle we want to track is the elevation angle of the gun attached to the turret. This angle can range from 0 degrees (pointing straight out from the side of the turret) to -90 degrees (pointing straight up). This angle is stored in the GunElevation property.
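The wrapping and clamping in those setters are worth seeing in isolation. The following is a language-agnostic sketch in Python of what XNA's MathHelper.WrapAngle and MathHelper.Clamp do for us (the function names here are stand-ins, not XNA API; WrapAngle's exact behavior at the boundary may differ slightly):

```python
import math

def wrap_angle(radians):
    """Wrap an angle to roughly (-pi, pi], like MathHelper.WrapAngle."""
    # math.remainder returns the IEEE remainder, which lands in [-pi, pi].
    return math.remainder(radians, 2 * math.pi)

def clamp(value, lo, hi):
    """Constrain value to [lo, hi], like MathHelper.Clamp."""
    return max(lo, min(hi, value))

# Turning the turret past a full circle wraps around:
print(wrap_angle(math.radians(370)))  # ≈ 0.1745 rad, i.e. 10 degrees
# Elevating the gun past vertical is clamped to -90 degrees:
print(clamp(math.radians(-120), math.radians(-90), 0.0))  # ≈ -1.5708 rad
```

Wrapping keeps repeated turret rotations from accumulating into huge angle values, while clamping stops the gun from rotating through the tank body.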
The last field added in step 3 is called boneTransforms, and is an array of matrices. We further define this array in the Tank class' constructor by creating an empty array with a number of elements equal to the number of bones in the model.

But what exactly are bones? When a 3D artist creates a model, they can define joints that determine how the various pieces of the model are connected. This process is referred to as "rigging" the model, and a model that has been set up this way is sometimes referred to as "rigged for animation". The bones in the model are defined with relationships to each other, so that when a bone higher up in the hierarchy moves, all of the lower bones are moved in relation to it.

Think for a moment of one of your fingers. It is composed of three distinct bones separated by joints. If you move the bone nearest to your palm, the other two bones move as well – they have to if your finger bones are going to stay connected! The same is true of the components in our tank. When the tank rotates, all of its pieces rotate as well. Rotating the turret moves the cannon, but has no effect on the body or the wheels. Moving the cannon has no effect on any other parts of the model, but it is hinged at its base, so that rotating the cannon joint makes the cannon appear to elevate up and down around one end instead of spinning around its center.

We will come back to these bones in just a moment, but let's first look at the current Draw() method before we expand it to account for bone-based animation. Model.Root refers to the highest-level bone in the model's hierarchy. Transforming this bone will transform the entire model, so our basic scaling, rotation, and positioning happen here. Notice that we are drastically scaling down the model of the tank, to a scale of 0.005f. The tank model is quite large in raw units, so we need to scale it to a size that is in line with the scale we used for our terrain.

Next, we use the boneTransforms array we created earlier by calling the model's CopyAbsoluteBoneTransformsTo() method. This method calculates the resultant transform for each of the bones in the model, taking into account all of the parent bones above it, and copies these values into the specified array.

We then loop through each mesh in the model. A mesh is an independent piece of the model, representing a movable part. Each of these meshes can have multiple effects tied to it, so we loop through those as well, rendering the mesh with its BasicEffect instances. In order to render each mesh, we establish the mesh's world location by looking up the mesh's parent bone transformation and storing it in the World matrix. We apply our View and Projection matrices just like before, and enable default lighting on the effect. Finally, we draw the mesh, which sends the triangles making up this portion of the model out to the graphics card.

The tank model

The tank model we are using is from the Simple Animation sample for XNA 4.0, available on Microsoft's MSDN website at http://xbox.create.msdn.com/en-US/education/catalog/sample/simple_animation.

Bringing things down to earth

You might have noticed that our tank is not actually sitting on the ground. In fact, we have set our terrain scaling so that the highest point in the terrain is at 30 units, while the tank is positioned 40 units above the X-Z plane. Given an (X, Z) coordinate pair, we need a way to determine the height at which we should place our tank, based on the terrain.

Time for action – terrain heights

To place our tank appropriately on the terrain, we first need to calculate the correct height, then place our tank there.
This is done in the following steps:

1. Add a helper method to the Terrain class to calculate the height based on a given coordinate, as follows:

```csharp
#region Helper Methods
public float GetHeight(float x, float z)
{
    int xmin = (int)Math.Floor(x);
    int xmax = xmin + 1;
    int zmin = (int)Math.Floor(z);
    int zmax = zmin + 1;

    if ((xmin < 0) || (zmin < 0) ||
        (xmax > heights.GetUpperBound(0)) ||
        (zmax > heights.GetUpperBound(1)))
    {
        return 0;
    }

    Vector3 p1 = new Vector3(xmin, heights[xmin, zmax], zmax);
    Vector3 p2 = new Vector3(xmax, heights[xmax, zmin], zmin);
    Vector3 p3;

    if ((x - xmin) + (z - zmin) <= 1)
    {
        p3 = new Vector3(xmin, heights[xmin, zmin], zmin);
    }
    else
    {
        p3 = new Vector3(xmax, heights[xmax, zmax], zmax);
    }

    Plane plane = new Plane(p1, p2, p3);
    Ray ray = new Ray(new Vector3(x, 0, z), Vector3.Up);
    float? height = ray.Intersects(plane);

    return height.HasValue ? height.Value : 0f;
}
#endregion
```

2. In the LoadContent() method of the TankBattlesGame class, modify the statement that adds a tank to the battlefield to utilize the GetHeight() method, as follows:

```csharp
tanks.Add(
    new Tank(
        GraphicsDevice,
        Content.Load<Model>(@"Models\tank"),
        new Vector3(61, terrain.GetHeight(61, 61), 61)));
```

3. Execute the game and view the tank, now placed on the terrain as shown in the following screenshot:

What just happened?

You might be tempted to simply grab the nearest (X, Z) coordinate from the heights[] array in the Terrain class and use that as the height for the tank. In fact, in many cases that might work. You could also average the four surrounding points and use that height, which would account for very steep slopes. The drawbacks of those approaches will not be entirely evident in Tank Battles, as our tanks are stationary. If the tanks were mobile, you would see the elevation of the tank jump jarringly between heights as the tank moved across the terrain, because each virtual square of terrain that the tank entered would have only one height.
In the GetHeight() method that we just saw, we take a different approach. Recall that the way our terrain is laid out, it grows along the positive X and Z axes. If we imagine looking down from a positive Y height onto our terrain with an orientation where the X axis grows to the right and the Z axis grows downward, we would have something like the following: As we discussed when we created our index buffer, our terrain is divided up into squares whose corners are exactly 1 unit apart. Unfortunately, these squares do not help us in determining the exact height of any given point, because each of the four points of the square can theoretically have any height from 0 to 30 in the case of our terrain scale. Remember though, that each square is divided into two triangles. The triangle is the basic unit of drawing for our 3D graphics. Each triangle is composed of three points, and we know that three points can be used to define a plane. We can use XNA's Plane class to represent the plane defined by an individual triangle on our terrain mesh. To do so, we just need to know which triangle we want to use to create the plane. In order to determine this, we first get the (X, Z) coordinates (relative to the view in the preceding figure) of the upper-left corner of the square our point is located in. We determine this point by dropping any fractional part of the x and z coordinates and storing the values in xmin and zmin for later use. We check to make sure that the values we will be looking up in the heights[] array are valid (greater than zero and less than or equal to the highest element in each direction in the array). This could happen if we ask for the height of a position that is outside the bounds of our map's height. Instead of crashing the game, we will simply return a zero. It should not happen in our code, but it is better to account for the possibility than be surprised later. We define three points, represented as Vector3 values p1, p2, and p3. 
We can see right away that no matter which of the two triangles we pick, the (xmax, zmin) and (xmin, zmax) points will be included in our plane, so their values are set right away. To decide which of the final two points to use, we need to determine which side of the central dividing line the point we are looking for lies in. This actually turns out to be fairly simple to do for the squares we are using. In the case of our triangle, if we eliminate the integer portion of our X and Z coordinates (leaving only the fractional part that tells us how far into the square we are), the sum of both of these values will be less than or equal to the size of one grid square (1 in our case) if we are in the upper left triangle. Otherwise our point is in the right triangle. The code if ((x - xmin) + (z - zmin) <= 1) performs this check, and sets the value of p3 to either (xmin, zmin) or (xmax, zmax) depending on the result. Once we have our three points, we ask XNA to construct a Plane using them, and then we construct another new type of object we have not yet used – an object of the Ray class. A Ray has a base point, represented by a Vector3, and a direction – also represented by a Vector3. Think of a Ray as an infinitely long arrow that starts somewhere in our world and heads off in a given direction forever. In the case of the Ray we are using, the starting point is at the zero point on the Y axis, and the coordinates we passed into the method for X and Z. We specify Vector3.Up as the direction the Ray is pointing in. Remember from the FPS camera that Vector3.Up has an actual value of (0, 1, 0), or pointing up along the positive Y axis. The Ray class has an Intersects() method that returns the distance from the origin point along the Ray where the Ray intersects a given Plane. We must assign the return value of this method to a float? instead of a normal float. 
You may not be familiar with this notation, but the question mark at the end of the type specifies that the value is nullable—that is, it might contain a value, but it could also just contain a null value. In the case of the Ray.Intersects() method, the method will return null if the object of Ray class does not intersect the object of the Plane class at any point. This should never happen with our terrain height code, but we need to account for the possibility. When using a nullable float, we need to check to make sure that the variable actually has a value before trying to use it. In this case, we use the HasValue property of the variable. If it does have one, we return it. Otherwise we return a default value of zero.
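The whole triangle-picking and ray-plane procedure can be reproduced outside of XNA. The following Python sketch mirrors the C# GetHeight() helper (here heights is a plain 2D list standing in for the Terrain class' array), reducing the ray-plane intersection to solving the plane equation for Y:

```python
def get_height(heights, x, z):
    """Interpolate terrain height at (x, z) on a grid of 1x1 cells,
    each split into two triangles along the diagonal."""
    xmin, zmin = int(x // 1), int(z // 1)
    xmax, zmax = xmin + 1, zmin + 1
    # Out-of-bounds lookups return a default height of zero.
    if xmin < 0 or zmin < 0 or xmax >= len(heights) or zmax >= len(heights[0]):
        return 0.0
    # The two corners shared by both triangles of the cell.
    p1 = (xmin, heights[xmin][zmax], zmax)
    p2 = (xmax, heights[xmax][zmin], zmin)
    # Pick the third corner based on which side of the diagonal (x, z) lies.
    if (x - xmin) + (z - zmin) <= 1:
        p3 = (xmin, heights[xmin][zmin], zmin)
    else:
        p3 = (xmax, heights[xmax][zmax], zmax)
    # Plane through p1, p2, p3: normal = (p2 - p1) x (p3 - p1).
    ax, ay, az = p2[0] - p1[0], p2[1] - p1[1], p2[2] - p1[2]
    bx, by, bz = p3[0] - p1[0], p3[1] - p1[1], p3[2] - p1[2]
    nx, ny, nz = ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx
    if ny == 0:
        return 0.0  # Vertical plane: the upward ray never intersects it.
    # Solve n . (P - p1) = 0 for y at the given (x, z).
    return p1[1] - (nx * (x - p1[0]) + nz * (z - p1[2])) / ny

# Flat terrain at height 5 everywhere:
print(get_height([[5, 5], [5, 5]], 0.5, 0.5))  # → 5.0
```

Where XNA fires a Ray upward and asks for the intersection distance, this sketch solves the same plane equation directly; both give the Y value of the triangle's surface above the query point.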


Getting Started with Mudbox 2013

Packt
18 Sep 2012
12 min read
(For more resources on Web Graphics and Videos, see here.)

Introduction

This article will help you get your preferences set up so that you can work in a way that is most intuitive and efficient for you. Whether you are a veteran or a newbie, it is always a good idea to establish a good workflow. It will speed up your production time, allowing you to get ideas out of your head before you forget them. This will also greatly aid you in meeting deadlines and producing more iterations of your work.

Installing Mudbox 2013 documentation

In addition to the recipes in this book, you may find yourself wanting to look through the Mudbox 2013 documentation for additional help. By default, when you navigate to Help through Mudbox 2013's interface, you will be sent to an online help page. If you have a slow Internet connection or lack a connection altogether, you may want to install a local copy of the documentation. After downloading and installing the local copy, it is a good idea to have Mudbox 2013 point to the right location when you navigate to Help from the menus. This will eliminate the need to navigate through your files in order to find the documentation. The following recipe will guide you through this process.

How to do it...

1. The first thing you will want to do is download the documentation from Autodesk's website. You can find the documentation for this version, as well as previous versions, at the following link: http://usa.autodesk.com/adsk/servlet/index?siteID=123112&id=17765502. Once you're on this page, scroll down and click on 2013 for the language and operating system that you are using. The following screenshot is what you should see:
2. Next, navigate to the location that you downloaded the file to, and run it.
3. Now follow the prompts by clicking Next until the installation is complete. This file will install the documentation into your Autodesk\Mudbox 2013 folder by default.
You can change this location during the installation process if you like, but I recommend leaving the default location.

4. After the local version of the Help files is installed, we need to point Mudbox 2013's Help menu to the local copy of the documentation. To do this, open Mudbox 2013, click on Windows in the top menu bar, and click on Preferences. The following screenshot shows how it should look:
5. Next, click on the small arrow next to Help so that more options open up. You will notice that next to Help Location it says Autodesk Web Site. We are going to change that to Installed Local Help by clicking on the small arrow next to (or directly on the text) Autodesk Web Site and choosing Installed Local Help from the drop-down menu. Then click on OK.

Take note that if you installed your documentation to a different directory, you will need to choose Custom instead of Installed Local Help. You will then need to copy and paste the directory location into the Help Path textbox.

Setting up hotkeys

The first thing you will want to do when you start using a new piece of software is either set up your own hotkeys or familiarize yourself with the default hotkeys. This is very important for speeding up your workflow. If you do not use hotkeys, you will have to constantly go through menus and scroll through windows to find the tools that you need, which will undoubtedly slow you down.

How to do it...

1. First, go into the Windows menu item on the top menu bar.
2. Next, click on Hotkeys to bring up the hotkey window, as shown in the next screenshot. You will notice a drop-down menu that reads Use keyboard shortcuts from, with a Restore Mudbox Defaults button next to it. Within this menu you can set your default hotkeys to resemble a 3D software package that you are accustomed to using. This will help you transition smoothly into using Mudbox.
If you are new to all 3D software, or use a software package that is not on this list, then the Mudbox hotkeys should suffice. The following screenshot shows the options available in Mudbox 2013:

3. After choosing a default set of keys, you can now go in and change any hotkeys that you would like to customize. Let's say I would like Eyedropper to activate when I press the E key and the left mouse button together. What I would do is change the current letter in the box next to Eyedropper to E and make sure there is a check in the box next to LMB (Left Mouse Button). It should look like the following screenshot:

How it works...

Once all your hotkeys are set up as desired, you will be able to use quick keystrokes to access a large number of tools without ever taking your eyes off your project. The more comfortable you get with your hotkeys, the faster you will get at switching between tools.

There's more...

When you first start using a particular piece of software, you probably won't know exactly which tools you will use most often. With that in mind, you will want to revisit your hotkey customization after getting a feel for your workflow and which tools you use the most.

Another thing you want to think about when setting up your hotkeys is how easy each hotkey is to use. For example, I tend to make hotkeys that relate to the tool in some way in order to make them easier to remember. The Create Curve tool has a good hotkey already set for it, Ctrl + C, for the following reasons:

- One reason it is a good hotkey is that the first letter of the tool is also the letter of the key being used for the hotkey: I can relate C to curve.
- Another reason is that if creating curves is something I find myself doing often, then all I have to do is use my pinky finger on the Ctrl key and my pointer finger on the C key. You may think "Yeah? So what?"
but if I were to set the hotkey to Ctrl + Alt + U, it's a bit more of a stretch on my fingers and I would not want to do that frequently.

The point is, key location and frequency of use are things you want to think about to speed up your workflow and stay comfortable while using your hotkeys.

Increasing the resolution on your model

Before you can get any fine details, or details that you would see while viewing from close up, into the surface of your model, you will need to subdivide your mesh to increase its resolution. In the same way that a computer monitor displays more pixels when its resolution is increased, a model has more points on its surface when its resolution is increased.

How to do it...

The hotkey for subdividing your surface is Shift + D, or you can alternatively go through the menus as shown in the following screenshot:

How it works...

Subdividing adds more polygons, which can be manipulated to add more detail. You will not want to subdivide your model too many times; otherwise, your computer will begin to slow down. The slowdown is exponential: for example, if you have a six-sided cube and you subdivide it once, it will become 24-sided. If you subdivide it one more time, it will become 96-sided, and so on. The following screenshot from Maya shows what the wireframe looks like from one level to the next:

The reason this image was created in Maya is that Mudbox will only show the proper wireframe when your model reaches 1000 polygons or more.

The more powerful your computer, the more smoothly Mudbox 2013 will run. More specifically, it's the RAM and the video memory that are important. The following are some explanations of how RAM and video memory affect your machine's performance. RAM is the most important of all. The more RAM you have, the more polygons Mudbox will be able to handle without taking a performance hit.
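The exponential growth described above is easy to quantify: each quad face splits into four at every subdivision level, so a cube's face count is 6 x 4^n. A quick sketch of the arithmetic (a hypothetical calculation, not Mudbox code):

```python
def faces_after_subdivision(base_faces, levels):
    """Each quad face splits into four quads per subdivision level."""
    return base_faces * 4 ** levels

# A six-sided cube, as in the text:
print(faces_after_subdivision(6, 1))  # → 24
print(faces_after_subdivision(6, 2))  # → 96
print(faces_after_subdivision(6, 10)) # → 6291456, over six million faces
```

This is why only a handful of subdivision levels separate a simple base mesh from a sculpt in the multi-million-polygon range mentioned below.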
Video memory increases the performance of your video card and allows high-resolution, high-speed color graphics; basically, it gives the Graphical User Interface (GUI) better performance.

So, now that you know RAM is important, how do you decide how much is needed to run Mudbox 2013 smoothly? One thing to consider is your operating system and the version of Mudbox 2013 you are running. If you have a 32-bit operating system and you are running the 32-bit Mudbox 2013, then the maximum RAM you can get is 4 GB. In reality, you are only getting about 3 GB of RAM, as the operating system needs around 1 GB of that memory. On the other hand, if you are using a 64-bit operating system and the 64-bit Mudbox 2013 version, then you are capped at about 8 TB (yes, I said TB, not GB). You will not need anywhere near that amount of RAM to run Mudbox 2013 smoothly. My recommendation is a minimum of 8 GB of RAM and 1 GB of video memory. With this amount of RAM and video memory, you should be able to work with around 10 million triangles at the top level of your sculpt.

There's more...

Notice the little white box next to Add New Subdivision Level in the following screenshot:

By clicking on this box, you will be given a few options for how Mudbox will handle the subdivision, as shown in the following screenshot. The options are explained as follows:

- Smooth Positions: This option will smooth out the edges by averaging the vertices that are added. The following screenshot shows the progression from Level 0 to Level 2 on a cube:
- Subdivide UVs: If this option is unchecked when you create a new subdivision level, you will lose the UVs on the object, and you will need to recreate the UVs for that level to get them back. If the Subdivide UVs option is turned on, it will just add subdivisions to your existing UVs.
- Smooth UVs: If this option is turned on, the UVs will be smoothed within the UV borders, as shown in the next screenshot:

If you want your borders to smooth along with the interior parts of the shell, as shown in the next screenshot, then you will need to take a few extra steps:

This is the method Mudbox used in the 2009 and earlier versions. In Mudbox 2010, the way this operation is handled was changed so that the borders do not smooth. Here is an excerpt from the Service Pack notes from 2010:

"A new environment variable now exists to alter how the Smooth UVs property works when subdividing a model: MUDBOX2009_SUBDIVIDE_SMOOTH_UV. When this environment variable is set, the Smooth UVs property works as it did in Mudbox 2009. That is, the entire UV shell, including its UV borders, are smoothed when subdividing a model whenever the Smooth UVs property is turned on. If this environment variable is not set, the default Mudbox 2010 UV smoothing behavior occurs. That is, smoothing only occurs for the interior UVs in a UV shell, leaving the UV shell border edges unsmoothed. Which UV smoothing method you choose to use is entirely dependent on your individual rendering pipeline requirements and render application used."

This has not changed since Mudbox 2010. So, what you need to do on a PC is add an environment variable MUDBOX2009_SUBDIVIDE_SMOOTH_UV with a value of 1:

1. Right-click on My Computer and click on Properties.
2. Choose Advanced system settings and, under the Advanced tab, click on Environment Variables....
3. Under System Variables, click on New....
4. Where it says Variable Name, enter MUDBOX2009_SUBDIVIDE_SMOOTH_UV, and under Variable Value, input 1.
5. Hit OK and it's all ready to go.
Moving up and down subdivision levels Once you create subdivision levels using Shift + D, or through the menus, you can move up and down the levels you have created by using the Page Up key to move up in levels, or the Page Down key to move down in levels. But keep in mind, you will not be able to go any higher than the highest level you created using Add New Subdivision Level and you will never be able to go below Level 0. Another thing to take into account is which model you are subdividing. If you have multiple objects in your scene, you need to make sure the correct mesh is active when subdividing. The following are a couple of ways to make sure you are subdividing the correct mesh: One way is to select the object in the Object List before hitting Shift + D. Another way is to hover your mouse cursor over the mesh that you want to subdivide and then hit Shift + D. This will subdivide the mesh that is directly underneath your cursor.
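The Page Up/Page Down behaviour described above boils down to a clamped counter. A minimal sketch of that logic (the function name is mine, not a Mudbox API):

```javascript
// Model of Mudbox's level navigation: Page Up (+1) and Page Down (-1)
// move the active subdivision level, clamped to [0, highest created].
function changeLevel(current, highest, delta) {
  const next = current + delta;
  if (next < 0) return 0;             // can never go below Level 0
  if (next > highest) return highest; // can never pass the highest level created
  return next;
}

// changeLevel(2, 3, 1)  -> 3
// changeLevel(3, 3, 1)  -> 3 (already at the top)
// changeLevel(0, 3, -1) -> 0 (already at Level 0)
```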
Getting Started on UDK with iOS

Packt
23 Aug 2012
5 min read
Defining UDK Unreal Development Kit (UDK) is a freely available version of Epic Games' very popular Unreal game development engine. First developed in 1998 to power the original Unreal game, the engine (which is C++ based) has gone from strength to strength, and has not only formed the backbone of household-name Epic Games titles, such as the very popular Gears of War series, but has also been very successfully licensed out to third-party developers. Consequent titles range from Batman: Arkham City to Mass Effect 3, many of which are equally as successful as the games developed by its parent company. Currently, the Unreal engine itself is in its fourth iteration and is considered a top-of-the-range visualization tool. It is not only used in the gaming industry, but has also been used for any kind of work that demands real-time computer graphics. UDK uses Unreal Engine 3, but that is still a powerhouse in itself and quite capable of delivering amazing experiences on iOS devices. UDK as a concept evolved from Epic Games' content generation tools, which shipped very early on with Unreal titles and spawned a very healthy and thriving modification (modding) community. Originally, tools such as UnrealEd (the editor with which a user can create their own in-game levels) were available to anyone who bought the game. In 2009, however, Epic turned that logic on its head by making their tools available to anybody, whether they owned an Unreal game or not. This proved to be a very successful move which expanded the Unreal developer user base considerably. More recently, Epic Games, as part of their constant and persistent updates of UDK, added iOS functionality (in 2010), making sure that UDK can provide games for the ever-expanding mobile customer base that Apple's iPhone/iPad/iPod Touch devices introduced. 
This was first demonstrated to the public by a live tech demo called Epic Citadel; a freely available download on the iTunes store which played like an interactive walkthrough of a medieval town. This attracted a record number of downloads as at that time it was truly groundbreaking to experience high quality real-time graphics on a mobile device. Take a look at the following screenshot: In fact, although it is not within the scope of this book, very recently certain demos have surfaced highlighting a potential UDK / Adobe Flash Player pipeline, showcasing the very impressive penetration this games development application has made to a number of different platforms. Of course, we are interested in iOS here, and we'll be covering that extensively in this book, starting from the bare basics and moving on to some more advanced concepts. So what is it that we need to know about UDK and its mobile iOS limitations? Does it have any? Don't expect to make Gears of War Let's start with a fairly realistic statement; we can't make an AAA gaming title as seen on a contemporary console or PC, such as Gears of War, on iOS using UDK! It is a general limitation of doing mobile development using UDK. The main reason for this is rather obvious; we just do not have access to the same hardware. The problem is not the software! UDK can deploy the same game on a PC or an iOS device, but it is the end-hardware specification that has the final say in what can be handled in real-time or not. Mobile devices (and this of course includes iOS devices) have progressed by leaps and bounds over the last few years, and after many false starts mobile gaming today is a force to be reckoned with, both commercially and technologically. That still, however, does not change the fact that as an iOS UDK developer, you will work with more restricted hardware as opposed to, for example, someone developing for an Xbox 360 platform. 
Let's look at this in more detail; these are some of the current iPhone 4S technical specifications:
960 x 640 pixel display
16 GB, 32 GB, or 64 GB Flash drive
Dual-core 1 GHz Cortex-A9 CPU with a PowerVR SGX543MP2 GPU
512 MB RAM
The newest iPad, released in early 2012, has the following specifications:
2048 x 1536 pixel display
16 GB, 32 GB, or 64 GB Flash drive
Quad-core PowerVR SGX543MP4 GPU
1 GB RAM
While these specs are fairly impressive, both compared to mobile devices of the past and given that we are talking about what is essentially a pocket gadget, they cannot really compare with the specification of a top-of-the-range gaming PC or even a current-generation console. This, of course, does not mean we can only develop poor or second-rate games with UDK. Indeed, some of its contemporary examples highlight the huge potential it has in delivering AAA gaming experiences, even within the limitations borne, as described previously, of the hardware of a mobile device. The best example by far has got to be Chair Entertainment's and Epic Games' Infinity Blade, an epic third-person sword-fighting game which came out in late 2010 and is considered the ideal showcase for UDK's technical prowess on Apple devices. Already spawning sequels, and with huge commercial success behind it from its iTunes Store business model, Infinity Blade was, and is, a big eye-opener for all aspiring games developers who want to use Unreal engine technology for a successful iOS title with a very modern feel and visuals. To see just how powerful an iOS game can be, just look at the following screenshot for a fine example:
Microsoft XNA 4.0 Game Development: Receiving Player Input

Packt
02 Jul 2012
11 min read
(For more resources on Microsoft XNA 4.0, see here.)

Adding text fields

I'm generally a big fan of having as few text fields in an application as possible, and this holds doubly true for games. There are occasions, however, when receiving some sort of textual information from the player is required, and in those regrettable occasions a textbox or field may be an appropriate choice. Unfortunately, a premade textbox isn't always available to us on any given gaming project, so sometimes we must create our own.

Getting ready

This recipe only relies upon the presence of a single SpriteFont file referring to any font at any desired size.

How to do it...

To start adding textboxes to your own games, add a SpriteFont to the solution named Text:

<?xml version="1.0" encoding="utf-8"?>
<XnaContent xmlns:Graphics="Microsoft.Xna.Framework.Content.Pipeline.Graphics">
  <Asset Type="Graphics:FontDescription">
    <FontName>Segoe UI Mono</FontName>
    <Size>28</Size>
    <Spacing>0</Spacing>
    <UseKerning>true</UseKerning>
    <Style>Regular</Style>
    <CharacterRegions>
      <CharacterRegion>
        <Start>&#32;</Start>
        <End>&#126;</End>
      </CharacterRegion>
    </CharacterRegions>
  </Asset>
</XnaContent>

Begin a new class to contain the text field logic, and embed a static mapping of the keyboard keys to their respective text characters:

class Textbox
{
    private static Dictionary<Keys, char> characterByKey;

Create a static constructor to load the mapping of keys to characters:

    static Textbox()
    {
        characterByKey = new Dictionary<Keys, char>()
        {
            {Keys.A, 'a'}, {Keys.B, 'b'}, {Keys.C, 'c'}, {Keys.D, 'd'},
            {Keys.E, 'e'}, {Keys.F, 'f'}, {Keys.G, 'g'}, {Keys.H, 'h'},
            {Keys.I, 'i'}, {Keys.J, 'j'}, {Keys.K, 'k'}, {Keys.L, 'l'},
            {Keys.M, 'm'}, {Keys.N, 'n'}, {Keys.O, 'o'}, {Keys.P, 'p'},
            {Keys.Q, 'q'}, {Keys.R, 'r'}, {Keys.S, 's'}, {Keys.T, 't'},
            {Keys.U, 'u'}, {Keys.V, 'v'}, {Keys.W, 'w'}, {Keys.X, 'x'},
            {Keys.Y, 'y'}, {Keys.Z, 'z'},
            {Keys.D0, '0'}, {Keys.D1, '1'}, {Keys.D2, '2'}, {Keys.D3, '3'},
            {Keys.D4, '4'}, {Keys.D5, '5'}, {Keys.D6, '6'}, {Keys.D7, '7'},
            {Keys.D8, '8'}, {Keys.D9, '9'},
            {Keys.NumPad0, '0'}, {Keys.NumPad1, '1'}, {Keys.NumPad2, '2'},
            {Keys.NumPad3, '3'}, {Keys.NumPad4, '4'}, {Keys.NumPad5, '5'},
            {Keys.NumPad6, '6'}, {Keys.NumPad7, '7'}, {Keys.NumPad8, '8'},
            {Keys.NumPad9, '9'},
            {Keys.OemPeriod, '.'}, {Keys.OemMinus, '-'}, {Keys.Space, ' '}
        };
    }

Add the public instance fields that will determine the look and content of the display:

    public StringBuilder Text;
    public Vector2 Position;
    public Color ForegroundColor;
    public Color BackgroundColor;
    public bool HasFocus;

Include instance-level references to the objects used in the rendering:

    GraphicsDevice graphicsDevice;
    SpriteFont font;
    SpriteBatch spriteBatch;
    RenderTarget2D renderTarget;
    KeyboardState lastKeyboard;
    bool renderIsDirty = true;

Begin the instance constructor by measuring the overall height of some key characters to determine the required height of the display, and create a render target to match:

    public Textbox(GraphicsDevice graphicsDevice, int width, SpriteFont font)
    {
        this.font = font;
        var fontMeasurements = font.MeasureString("dfgjlJL");
        var height = (int)fontMeasurements.Y;
        var pp = graphicsDevice.PresentationParameters;
        renderTarget = new RenderTarget2D(graphicsDevice,
            width,
            height,
            false, pp.BackBufferFormat, pp.DepthStencilFormat);

Complete the constructor by instantiating the text container and SpriteBatch:

        Text = new StringBuilder();
        this.graphicsDevice = graphicsDevice;
        spriteBatch = new SpriteBatch(graphicsDevice);
    }

Begin the Update() method by determining if we need to take any notice of the keyboard:

    public void Update(GameTime gameTime)
    {
        if (!HasFocus)
        {
            return;
        }

Retrieve all of the keys that are currently being depressed by the player and iterate through them, ignoring any that have been held down since the last update:

        var keyboard = Keyboard.GetState();
        foreach (var key in keyboard.GetPressedKeys())
        {
            if (!lastKeyboard.IsKeyUp(key))
            {
                continue;
            }

Add the logic to remove a character from the end of the text, if either the Backspace or Delete key has been pressed:

            if (key == Keys.Delete ||
                key == Keys.Back)
            {
                if (Text.Length == 0)
                {
                    continue;
                }
                Text.Length--;
                renderIsDirty = true;
                continue;
            }

Complete the loop and the method by adding the corresponding character for any keys we recognize, taking note of the case as we do so:

            char character;
            if (!characterByKey.TryGetValue(key, out character))
            {
                continue;
            }
            if (keyboard.IsKeyDown(Keys.LeftShift) ||
                keyboard.IsKeyDown(Keys.RightShift))
            {
                character = Char.ToUpper(character);
            }
            Text.Append(character);
            renderIsDirty = true;
        }
        lastKeyboard = keyboard;
    }

Add a new method to render the contents of the text field to RenderTarget if it has changed:

    public void PreDraw()
    {
        if (!renderIsDirty)
        {
            return;
        }
        renderIsDirty = false;
        var existingRenderTargets = graphicsDevice.GetRenderTargets();
        graphicsDevice.SetRenderTarget(renderTarget);
        spriteBatch.Begin();
        graphicsDevice.Clear(BackgroundColor);
        spriteBatch.DrawString(
            font, Text,
            Vector2.Zero, ForegroundColor);
        spriteBatch.End();
        graphicsDevice.SetRenderTargets(existingRenderTargets);
    }

Complete the class by adding a method to render the image of RenderTarget to the screen:

    public void Draw()
    {
        spriteBatch.Begin();
        spriteBatch.Draw(renderTarget, Position, Color.White);
        spriteBatch.End();
    }
}

In your game's LoadContent() method, create a new instance of the text field:

Textbox textbox;
protected override void LoadContent()
{
    textbox = new Textbox(
        GraphicsDevice,
        400,
        Content.Load<SpriteFont>("Text"))
    {
        ForegroundColor = Color.YellowGreen,
        BackgroundColor = Color.DarkGreen,
        Position = new Vector2(100,100),
        HasFocus = true
    };
}

Ensure that the text field is updated regularly via your game's Update() method:

protected override void Update(GameTime gameTime)
{
    textbox.Update(gameTime);
    base.Update(gameTime);
}

In your game's Draw() method, let the text field perform its RenderTarget updates prior to rendering the scene, including the text field:

protected override void Draw(GameTime gameTime)
{
    textbox.PreDraw();
    GraphicsDevice.Clear(Color.Black);
    textbox.Draw();
    base.Draw(gameTime);
}

Running the code should deliver a brand new textbox just waiting for some interesting text, like the following:

How it works...

In the Update() method, we retrieve a list of all of the keys that are being depressed by the player at that particular moment in time. Comparing this list to the list we captured in the previous update cycle allows us to determine which keys have only just been depressed. Next, via a dictionary, we translate the newly depressed keys into characters and append them onto a StringBuilder instance. We could have just as easily used a regular string, but due to the nature of string handling in .NET, the StringBuilder class is a lot more efficient in terms of memory use and garbage creation. We could have also rendered our text directly to the screen, but it turns out that drawing text is a mildly expensive process, with each letter being placed on the screen as an individual image. So in order to minimize the cost and give the rest of our game as much processing power as possible, we render the text to RenderTarget only when the text changes, and just keep on displaying RenderTarget on screen during all those cycles when no changes occur. There's more... 
If you're constructing a screen that has more than one text field on it, you'll find the HasFocus state of the text field implementation to be a handy addition. This will allow you to restrict the keyboard input only to one text field at a time. In the case of multiple text fields, I'd recommend taking a leaf from the operating system UI handbooks and adding some highlighting around the edges of a text field to clearly indicate which text field has focus. Also, the addition of a visible text cursor at the end of any text within a text field with focus may help draw the player's eyes to the correct spot. On the phone If you do have access to a built-in text field control such as the one provided in the "Windows Phone XNA and Silverlight" project type, but still wish to render the control yourself, I recommend experimenting with enabling the prebuilt control, making it invisible, and feeding the text from it into your own text field display.
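The core of the recipe, reacting only to keys that have just gone down, is independent of XNA. A minimal sketch of the same edge-detection idea, with plain string arrays standing in for KeyboardState:

```javascript
// Edge detection: report only the keys that are down this frame but
// were not down last frame, mirroring the lastKeyboard comparison in
// the recipe's Update() method.
function newlyPressed(current, previous) {
  const prev = new Set(previous);
  return current.filter((key) => !prev.has(key));
}

// newlyPressed(["A", "B"], ["A"]) -> ["B"]  (A is held, B just went down)
// newlyPressed(["A"], ["A"])      -> []     (nothing new this frame)
```

Without this comparison, a key held for half a second would insert the same character dozens of times, once per update cycle.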
Unity 3.x Scripting-Character Controller versus Rigidbody

Packt
22 Jun 2012
16 min read
Creating a controllable character There are two ways to create a controllable character in Unity: by using the Character Controller component or a physics-driven Rigidbody. Both of them have their pros and cons, and the choice to use one or the other is usually based on the needs of the project. For instance, if we want to create a basic role-playing game, where a character is expected to be able to walk, fight, run, and interact with treasure chests, we would recommend using the Character Controller component. The character is not going to be affected by physical forces, and the Character Controller component gives us the ability to go up slopes and stairs without the need to add extra code. Sounds amazing, doesn't it? There is one caveat. The Character Controller component becomes useless if we decide to make our character non-humanoid. If our character is a dragon, spaceship, ball, or a piece of gum, the Character Controller component won't know what to do with it; it's not programmed for those entities and their behavior. So, if we want our character to swing across a pit with his whip and dodge traps by rolling over his shoulder, the Character Controller component will cause us many problems. In this article, we will create a character that is greatly affected by physical forces, so we will build a custom Character Controller on top of a Rigidbody, as shown in the preceding screenshot. Custom Character Controller In this section, we will write a script that will take control of basic character manipulations. It will register a player's input and translate it into movement. We will talk about vectors and vector arithmetic, try out raycasting, make a character obey our controls, see different ways to register input, describe the purpose of the FixedUpdate function, and learn to control a Rigidbody. 
We shall start by teaching our character to walk in all directions, but before we start coding, there is a bit of theory that we need to know about character movement. Most game engines, if not all, use vectors to control the movement of objects. Vectors simply represent direction and magnitude, and they are usually used to define an object's position (specifically its pivot point) in 3D space. A vector is a structure that consists of three variables—X, Y, and Z. In Unity, this structure is called Vector3: To make an object move, knowing its vector is not enough. The length of a vector is known as its magnitude. In physics, speed is a pure scalar, or something with a magnitude but no direction. To give an object a direction, we use vectors. Greater magnitude means greater speed. By controlling vectors and magnitude, we can easily change our direction or increase speed at any time we want. Vectors are very important to understand if we want to create any movement in a game. Through the examples in this article, we will explain some basic vector manipulations and describe their influence on the character. It is recommended that you learn extra material about vectors to be able to perfect a Character Controller based on game needs. Setting up the project To start this section, we need an example scene. Perform the following steps: Select the Chapter 2 folder from the book assets, and click on the Unity_chapter2 scene inside the custom_scene folder. In the Custom scripts folder, create a new JavaScript file. Call it CH_Controller (we will reference this script in the future, so try to remember its name, if you choose a different one): In the Hierarchy view, click on the object called robot. Move the mouse to the Scene view and press F; the camera will focus on a funny-looking character that we will teach to walk, run, jump, and behave like a character from a video game. 
Creating movement The following is the theory of what needs to be done to make a character move: Register a player's input. Store the information in a vector variable. Use it to move the character. Sounds like a simple task, doesn't it? However, when it comes to moving a player-controlled character, there are a lot of things that we need to keep in mind, such as vector manipulation, registering input from the user, raycasting, Character Controller component manipulation, and so on. All these things are simple on their own, but when it comes to putting them all together, they might bring a few problems. To make sure that none of these problems will catch us by surprise, we will go through each of them step by step. Manipulating the character vector By receiving input from the player, we will be able to manipulate character movement. The following is the list of actions that we need to perform in Unity: Open the CH_Controller script. Declare public variables Speed and MoveDirection of types float and Vector3 respectively. Speed is self-explanatory; it will determine the speed at which our character moves. MoveDirection is a vector that will contain information about the direction in which our character will be moving. Declare a new function called Movement. It will check horizontal and vertical inputs from the player. Finally, we will use this information and apply movement to the character. An example of the code is as follows: public var Speed : float = 5.0; public var MoveDirection : Vector3 = Vector3.zero; function Movement (){ if (Input.GetAxis("Horizontal") || Input.GetAxis("Vertical")) MoveDirection = Vector3(Input.GetAxisRaw("Horizontal"), MoveDirection.y, Input.GetAxisRaw("Vertical")); this.transform.Translate(MoveDirection); } Register input from the user In order to move the character, we need to register input from the user. To do that, we will use the Input.GetAxis function. 
It registers input and returns values from -1 to 1 from the keyboard and joystick. Input.GetAxis can only register input that had been defined by passing a string parameter to it. To find out which options are available, we will go to Edit | Projectsettings | Input. In the Inspector view, we will see Input Manager. Click on the Axes drop-down menu and you will be able to see all available input information that can be passed to the Input.GetAxis function. Alternatively, we can use Input.GetAxisRaw. The only difference is that we aren't using Unity's built-in smoothing and processing data as it is, which allows us to have greater control over character movement. To create your own input axes, simply increase the size of the array by 1 and specify your preferences (later we will look into a better way of doing and registering input for different buttons). this.transform is an access to transformation of this particular object. transform contains all the information about translation, rotation, scale, and children of this object. Translate is a function inside Unity that translates GameObject to a specific direction based on a given vector. If we simply leave it as it is, our character will move with the speed of light. That happens because translation is being applied on character every frame. Relying on frame rate when dealing with translation is very risky, and as each computer has different processing power, execution of our function will vary based on performance. To solve this problem, we will tell it to apply movement based on a common factor—time: this.transform.Translate(MoveDirection * Time.deltaTime); This will make our character move one Unity unit every second, which is still a bit too slow. Therefore, we will multiply our movement speed by the Speed variable: this.transform.Translate((MoveDirection * Speed) * Time.deltaTime); Now, when the Movement function is written, we need to call it from Update. 
A word of warning though—controlling GameObject or Rigidbody from the usual Update function is not recommended since, as mentioned previously, that frame rate is unreliable. Thankfully, there is a FixedUpdate function that will help us by applying movement at every fixed frame. Simply change the Update function to FixedUpdate and call the Movement function from there: function FixedUpdate (){ Movement(); } The Rigidbody component Now, when our character is moving, take a closer look at the Rigidbody component that we have attached to it. Under the Constraints drop-down menu, we will notice that Freeze Rotation for X and Z axes is checked, as shown in the following screenshot: If we uncheck those boxes and try to move our character, we will notice that it starts to fall in the direction of the movement. Why is this happening? Well, remember, we talked about Rigidbody being affected by physics laws in the engine? That applies to friction as well. To avoid force of friction affecting our character, we forced it to avoid rotation along all axes but Y. We will use the Y axis to rotate our character from left to right in the future. Another problem that we will see when moving our character around is a significant increase in speed when walking in a diagonal direction. This is not an unusual bug, but an expected behavior of the MoveDirection vector. That happens because for directional movement we use vertical and horizontal vectors. As a result, we have a vector that inherits magnitude from both, in other words, its magnitude is equal to the sum of vertical and horizontal vectors. To prevent that from happening, we need to set the magnitude of the new vector to 1. This operation is called vector normalization. With normalization and speed multiplier, we can always make sure to control our magnitude: this.transform.Translate((MoveDirection.normalized * Speed) * Time. deltaTime); Jumping Jumping is not as hard as it seems. 
Thanks to Rigidbody, our character is already affected by gravity, so the only thing we need to do is to send it up in the air. Jump force is different from the speed that we applied to movement. To make a decent jump, we need to set it to 500.0). For this specific example, we don't want our character to be controllable in the air (as in real life, that is physically impossible). Instead, we will make sure that he preserves transition velocity when jumping, to be able to jump in different directions. But, for now, let's limit our movement in air by declaring a separate vector for jumping. User input verification In order to make a jump, we need to be sure that we are on the ground and not floating in the air. To check that, we will declare three variables—IsGrounded, Jumping, and inAir—of a type boolean. IsGrounded will check if we are grounded. Jumping will determine if we pressed the jump button to perform a jump. inAir will help us to deal with a jump if we jumped off the platform without pressing the jump button. In this case, we don't want our character to fly with the same speed as he walks; we need to add an airControl variable that will smooth our fall. Just as we did with movement, we need to register if the player pressed a jump button. To achieve this, we will perform a check right after registering Vertical and Horizontal inputs: public var jumpSpeed : float = 500.0; public var jumpDirection : Vector3 = Vector3.zero; public var IsGrounded : boolean = false; public var Jumping : boolean = false; public var inAir : boolean = false; public var airControl : float = 0.5; function Movement(){ if (Input.GetAxis("Horizontal") || Input.GetAxis("Vertical")) { MoveDirection = Vector3(Input.GetAxisRaw("Horizontal"),MoveDirection.y,Input.GetAxisRaw("Vertical")); } if (Input.GetButtonDown("Jump") && isGrounded) {} } GetButtonDown determines if we pressed a specific button (in this case, Space bar), as specified in Input Manager. 
We also need to check if our character is grounded to make a jump. We will apply vertical force to a rigidbody by using the AddForce function that takes the vector as a parameter and pushes a rigidbody in the specified direction. We will also toggle Jumping boolean to true, as we pressed the jump button and preserve velocity with JumpDirection: if (Input.GetButtonDown("Jump") &&isGrounded){ Jumping = true; jumpDirection = MoveDirection; rigidbody.AddForce((transform.up) * jumpSpeed); } if (isGrounded) this.transform.Translate((MoveDirection.normalized * Speed) * Time.deltaTime); else if (Jumping || inAir) this.transform.Translate((jumpDirection * Speed * airControl) * Time.deltaTime); To make sure that our character doesn't float in space, we need to restrict its movement and apply translation with MoveDirection only, when our character is on the ground, or else we will use jumpDirection. Raycasting The jumping functionality is almost written; we now need to determine whether our character is grounded. The easiest way to check that is to apply raycasting. Raycasting simply casts a ray in a specified direction and length, and returns if it hits any collider on its way (a collider of the object that the ray had been cast from is ignored): To perform a raycast, we will need to specify a starting position, direction (vector), and length of the ray. In return, we will receive true, if the ray hits something, or false, if it doesn't: function FixedUpdate () { if (Physics.Raycast(transform.position, -transform.up, collider. height/2 + 2)){ isGrounded = true; Jumping = false; inAir = false; } else if (!inAir){ inAir = true; JumpDirection = MoveDirection; } Movement(); } As we have already mentioned, we used transform.position to specify the starting position of the ray as a center of our collider. -transform.up is a vector that is pointing downwards and collider.height is the height of the attached collider. 
We are using half of the height, as the starting position is located in the middle of the collider and extended ray for two units, to make sure that our ray will hit the ground. The rest of the code is simply toggling state booleans. Improving efficiency in raycasting But what if the ray didn't hit anything? That can happen in two cases—if we walk off the cliff or are performing a jump. In any case, we have to check for it. If the ray didn't hit a collider, then obviously we are in the air and need to specify that. As this is our first check, we need to preserve our current velocity to ensure that our character doesn't drop down instantly. Raycasting is a very handy thing and being used in many games. However, you should not rely on it too often. It is very expensive and can dramatically drop down your frame rate. Right now, we are casting rays every frame, which is extremely inefficient. To improve our performance, we only need to cast rays when performing a jump, but never when grounded. To ensure this, we will put all our raycasting section in FixedUpdate to fire when the character is not grounded. function FixedUpdate (){ if (!isGrounded){ if (Physics.Raycast(transform.position, -transform.up, collider.height/2 + 0.2)){ isGrounded = true; Jumping = false; inAir = false; } else if (!inAir){ inAir = true; jumpDirection = MoveDirection; } } Movement(); } function OnCollisionExit(collisionInfo : Collision){ isGrounded = false; } To determine if our character is not on the ground, we will use a default function— OnCollisionExit(). Unlike OnControllerColliderHit(), which had been used with Character Controller, this function is only for colliders and rigidbodies. So, whenever our character is not touching any collider or rigidbody, we will expect to be in the air, therefore, not grounded. Let's hit Play and see our character jumping on our command. Additional jump functionality Now that we have our character jumping, there are a few issues that should be resolved. 
First of all, if we decide to jump on the sharp edge of the platform, we will see that our collider penetrates other colliders. Thus, our collider ends up being stuck in the wall without a chance of getting out: A quick patch to this problem will be pushing the character away from the contact point while jumping. We will use the OnCollisionStay() function that's called at every frame when we are colliding with an object. This function receives collision contact information that can help us determine who we are colliding with, its velocity, name, if it has Rigidbody, and so on. In our case we are interested in contact points. Perform the following steps: Declare a new private variable contact of a ContactPoint type that describes the collision point of colliding objects. Declare the OnCollisonStay function. Inside this function, we will take the first point of contact with the collider and assign it to our private variable. Add force to the contact position to reverse the character's velocity, but only if the character is not on the ground. Declare a new variable and call it jumpClimax of boolean type. Contacts is an array of all contact points. Finally, we need to move away from that contact point by reversing our velocity. The AddForceAtPosition function will help us here. It is similar to the one that we used for jumping, however, this one applies force at a specified position (contact point): public var jumpClimax :boolean = false; ... function OnCollisionStay(collisionInfo : Collision){ contact = collisionInfo.contacts[0]; if (inAir || Jumping) rigidbody.AddForceAtPosition(-rigidbody.velocity, contact.point); } The next patch will aid us in the future, when we will be adding animation to our character later in this article. To make sure that our jumping animation runs smoothly, we need to know when our character reaches jumping climax, in other words, when it stops going up and start a falling. 
In the FixedUpdate function, right after the last else if statement, put the following code snippet:

else if (inAir && rigidbody.velocity.y == 0.0){
    jumpClimax = true;
}

Nothing complex here. In theory, the moment we stop going up is the climax of our jump; that's why we check whether we are in the air (obviously we can't reach the jump climax while on the ground) and whether the vertical velocity of the rigidbody is 0.0. The last part is to set jumpClimax back to false, which we do at the moment we touch the ground:

if (Physics.Raycast(transform.position, -transform.up, collider.height/2 + 0.2)){
    isGrounded = true;
    Jumping = false;
    inAir = false;
    jumpClimax = false;
}

Running

We have taught our character to walk, jump, and stand aimlessly in one spot. The next logical step is to teach it to run. From a technical point of view, there is nothing hard about this: running is simply walking at a greater speed. Perform the following steps:

Declare a new boolean variable isRunning, which will be used to determine whether our character has been told to run.
Inside the Movement function, at the very top, check whether the player is pressing either Shift key and assign the appropriate value to isRunning:

public var isRunning : boolean = false;
...
function Movement(){
    if (Input.GetKey(KeyCode.LeftShift) || Input.GetKey(KeyCode.RightShift))
        isRunning = true;
    else
        isRunning = false;
    ...
}

Another way to get input from the user is to use KeyCode, an enumeration of all physical keys on the keyboard. For a complete list of available keys, see the KeyCode script reference on the official website: http://unity3d.com/support/documentation/ScriptReference/KeyCode

We will return to running later, in the animation section.
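The grounded/air/climax bookkeeping developed in this section can be sketched as a plain state update, independent of Unity. The function and field names below are our own illustration, not Unity's API:

```javascript
// Hedged sketch: one tick of the jump state machine described above.
// state: { isGrounded, jumping, inAir, jumpClimax }
// rayHit: did the downward ray hit the ground? velocityY: vertical velocity.
function updateJumpState(state, rayHit, velocityY) {
  if (!state.isGrounded) {
    if (rayHit) {
      state.isGrounded = true;
      state.jumping = false;
      state.inAir = false;
      state.jumpClimax = false;   // landed: clear the climax flag
    } else if (!state.inAir) {
      state.inAir = true;         // first airborne frame: remember it
    } else if (velocityY === 0.0) {
      state.jumpClimax = true;    // stopped rising: top of the jump
    }
  }
  return state;
}
```

Walking through it mirrors the chapter's logic: while airborne, a ray hit means we landed (all flags reset), a first miss marks us as in the air, and zero vertical velocity afterwards marks the jump climax.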
Packt
20 Jun 2012
10 min read

WebGL: Animating a 3D scene

We will discuss the following topics:

Global versus local transformations
Matrix stacks and using them to perform animation
Using JavaScript timers to do time-based animation
Parametric curves

Global transformation allows us to create two different kinds of cameras. Once we have applied the camera transform to all the objects in the scene, each one of them can update its position, representing, for instance, targets that are moving in a first-person shooting game, or the positions of other competitors in a car racing game. This can be achieved by modifying the current Model-View transform for each object. However, if we modify the Model-View matrix, how can we make sure that these modifications do not affect other objects? After all, we only have one Model-View matrix, right? The solution to this dilemma is to use matrix stacks.

Matrix stacks

A matrix stack provides a way to apply local transforms to individual objects in our scene while keeping the global transform (the camera transform) coherent for all of them. Let's see how it works. Each rendering cycle (each call to the draw function) requires calculating the scene matrices to react to camera movements. We are going to update the Model-View matrix for each object in our scene before passing the matrices to the shading program (as uniforms). We do this in three steps, as follows:

1. Once the global Model-View matrix (camera transform) has been calculated, we save it in a stack. This step allows us to recover the original matrix once we have applied any local transforms.
2. Calculate an updated Model-View matrix for each object in the scene. This update consists of multiplying the original Model-View matrix by a matrix that represents the rotation, translation, and/or scaling of each object in the scene.
3. Pass the updated Model-View matrix to the program; the respective object then appears in the location indicated by its local transform.
Finally, we recover the original matrix from the stack, and then we repeat steps 1 to 3 for the next object that needs to be rendered. The following diagram shows this three-step procedure for one object:

Animating a 3D scene

Animating a scene is nothing more than applying the appropriate local transformations to the objects in it. For instance, if we have a cone and a sphere and we want to move them, each of them will have a corresponding local transformation that describes its location, orientation, and scale. In the previous section, we saw that matrix stacks allow us to recover the original Model-View transform, so we can apply the correct local transform for the next object to be rendered. Knowing how to move objects with local transforms and matrix stacks, the question that needs to be addressed is: when? If we calculated the positions that we want to give to the cone and the sphere of our example every time we called the draw function, the animation rate would depend on how fast our rendering cycle goes. A slower rendering cycle would produce choppy animations, and an overly fast rendering cycle would create the illusion of objects jumping from one side to the other without smooth transitions. Therefore, it is important to make the animation independent of the rendering cycle. There are a couple of JavaScript mechanisms we can use to achieve this goal: the requestAnimFrame function and JavaScript timers.

requestAnimFrame function

The window.requestAnimFrame() function is currently being implemented in HTML5/WebGL-enabled Internet browsers. It is designed to call the rendering function (whatever function we indicate) in a safe way, and only when the browser/tab window is in focus; otherwise, there is no call. This saves precious CPU, GPU, and memory resources.
Using the requestAnimFrame function, we can obtain a rendering cycle that goes as fast as the hardware allows and, at the same time, is automatically suspended whenever the window is out of focus. If we used requestAnimFrame to implement our rendering cycle, we could then use a JavaScript timer that fires periodically, calculating the elapsed time and updating the animation time accordingly. However, the function is a feature that is still in development. To check on its status, please refer to the Mozilla Developer Network.

JavaScript timers

We can use two JavaScript timers to isolate the rendering rate from the animation rate. The rendering rate is controlled by the class WebGLApp, which periodically invokes the draw function, defined in our page, using a JavaScript timer. Unlike the requestAnimFrame function, JavaScript timers keep running in the background even when the page is not in focus. This is not optimal for performance, given that you are allocating resources to a scene that you are not even looking at. To mimic some of the intelligent behavior that requestAnimFrame provides for this purpose, we can use the onblur and onfocus events of the JavaScript window object.
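As a sketch of that idea, a small focus-aware controller can own the rendering timer. The controller shape, function names, and rate values here are assumptions for illustration, not WebGLApp's actual code:

```javascript
// Focus-aware rendering timer: full speed on focus, relaxed when blurred.
function makeRenderController(render, renderRate, relaxedRate) {
  var timer = null;
  function restart(rate) {
    if (timer !== null) clearInterval(timer); // never leave two timers running
    timer = setInterval(render, rate);
  }
  return {
    onfocus: function () { restart(renderRate); },  // resume at full speed
    onblur:  function () { restart(relaxedRate); }, // slow the rendering
    stop:    function () {                          // pause entirely
      if (timer !== null) { clearInterval(timer); timer = null; }
    }
  };
}

// Typical wiring in the page:
// var ctrl = makeRenderController(draw, 33, 500);
// window.onfocus = ctrl.onfocus;
// window.onblur  = ctrl.onblur;
```

Whether you slow the timer on blur or stop it outright is the trade-off the next table spells out.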
Let's see what we can do:

Pause the rendering. Goal: stop rendering until the window is back in focus. Method: clear the timer by calling clearInterval in the window.onblur handler.
Slow the rendering. Goal: reduce resource consumption while making sure that the 3D scene keeps evolving even when we are not looking at it. Method: clear the current timer by calling clearInterval in the window.onblur handler and create a new timer with a more relaxed (higher) interval.
Resume the rendering. Goal: reactivate the 3D scene at full speed when the browser window recovers its focus. Method: start a new timer with the original render rate in the window.onfocus handler.

By reducing the JavaScript timer rate or clearing the timer, we can handle hardware resources more efficiently. In WebGLApp, you can see how the onblur and onfocus events have been used to control the rendering timer as described previously.

Timing strategies

In this section, we will create the second JavaScript timer, the one that controls the animation. As previously mentioned, a second JavaScript timer provides independence between how fast your computer can render frames and how fast we want the animation to go. We have called this property the animation rate. However, before moving forward, you should know that there is a caveat when working with timers: JavaScript is not a multi-threaded language. This means that if several asynchronous events occur at the same time (blocking events), the browser queues them for later execution. Each browser has a different mechanism for dealing with blocking event queues. For the purpose of developing an animation timer, there are two alternatives for handling blocking events.

Animation strategy

The first alternative is to calculate the elapsed time inside the timer callback.
The pseudo-code looks like the following:

var initialTime = undefined;
var elapsedTime = undefined;
var animationRate = 30; //30 ms

function animate(deltaT){
    //calculate object positions based on deltaT
}

function onFrame(){
    elapsedTime = (new Date).getTime() - initialTime;
    if (elapsedTime < animationRate) return; //come back later
    animate(elapsedTime);
    initialTime = (new Date).getTime();
}

function startAnimation(){
    initialTime = (new Date).getTime();
    setInterval(onFrame, animationRate);
}

Doing so, we can guarantee that the animation time is independent of how often the timer callback is actually executed. If there are big delays (due to other blocking events), this method can result in dropped frames. This means that the objects' positions in our scene are immediately moved to the current positions they should have according to the elapsed time (between consecutive animation timer callbacks), and the intermediate positions are ignored. The motion on screen may jump, but a dropped animation frame is often an acceptable loss in a real-time application, for instance, when we move an object from point A to point B over a given period of time. However, if we were using this strategy when shooting a target in a 3D shooting game, we could quickly run into problems. Imagine that you shoot a target and then there is a delay; the next thing you know, the target is no longer there! Notice that in this case, where we need to calculate a collision, we cannot afford to miss frames, because the collision could occur in any of the frames that we would otherwise drop without analyzing them. The following strategy solves that problem.

Simulation strategy

There are several applications, such as the shooting game example, where we need all the intermediate frames to assure the integrity of the outcome: for example, when working with collision detection, physics simulations, or artificial intelligence for games. In these cases, we need to update the objects' positions at a constant rate.
We do so by directly calculating the next position for the objects inside the timer callback:

var animationRate = 30; //30 ms
var deltaPosition = 0.1;

function animate(deltaP){
    //calculate object positions based on deltaP
}

function onFrame(){
    animate(deltaPosition);
}

function startAnimation(){
    setInterval(onFrame, animationRate);
}

This may lead to frozen frames when there is a long list of blocking events, because the objects' positions would not be updated in time.

Combined approach: animation and simulation

Generally speaking, browsers are really efficient at handling blocking events, and in most cases the performance would be similar regardless of the chosen strategy. Deciding whether to calculate the elapsed time or the next position in timer callbacks will then depend on your particular application. Nonetheless, there are some cases where it is desirable to combine both the animation and simulation strategies. We can create a timer callback that calculates the elapsed time and updates the animation as many times as required per frame. The pseudo-code looks like the following:

var initialTime = undefined;
var elapsedTime = undefined;
var animationRate = 30; //30 ms
var deltaPosition = 0.1;

function animate(delta){
    //calculate object positions based on delta
}

function onFrame(){
    elapsedTime = (new Date).getTime() - initialTime;
    if (elapsedTime < animationRate) return; //come back later!
    var steps = Math.floor(elapsedTime / animationRate);
    while(steps > 0){
        animate(deltaPosition);
        steps -= 1;
    }
    initialTime = (new Date).getTime();
}

function startAnimation(){
    initialTime = (new Date).getTime();
    setInterval(onFrame, animationRate);
}

You can see from the preceding code snippet that the animation will always update at a fixed rate, no matter how much time elapses between frames.
If the app is running at 60 Hz, the animation will update once every other frame; if the app runs at 30 Hz, the animation will update once per frame; and if the app runs at 15 Hz, the animation will update twice per frame. The key point is that by always moving the animation forward by a fixed amount, it becomes far more stable and deterministic. The following diagram shows the responsibilities of each function in the call stack for the combined approach:

This approach can cause issues if, for whatever reason, an animation step actually takes longer to compute than the fixed step; but if that is occurring, you really ought to simplify your animation code or publish a recommended minimum system spec for your application.

Web Workers: Real multithreading in JavaScript

If performance is really critical to you and you need to ensure that a particular update loop always fires at a consistent rate, you could use Web Workers. Web Workers is an API that allows web applications to spawn background processes running scripts in parallel to their main page. This allows for thread-like operation with message passing as the coordination mechanism. You can find the Web Workers specification on the W3C website.
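The frame-rate arithmetic above can be made concrete by isolating the step count that the combined loop performs for a given elapsed time. The helper name is ours, not the book's:

```javascript
// How many fixed 'animationRate'-sized steps fit into the elapsed time.
function stepsForElapsed(elapsedMs, animationRate) {
  if (elapsedMs < animationRate) return 0; // not enough time yet: come back later
  return Math.floor(elapsedMs / animationRate);
}

// With a 30 ms animation rate:
// 60 Hz app: ~16 ms per frame, so 0 steps this frame, 1 step the next
// 30 Hz app: ~33 ms per frame, so 1 step per frame
// 15 Hz app: ~66 ms per frame, so 2 steps per frame
```

This is exactly the `Math.floor(elapsedTime / animationRate)` line from the combined-approach snippet, pulled out so it can be reasoned about (and tested) on its own.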
Packt
05 Jun 2012
11 min read

Construct Game Development: Platformer Revisited, a 2D Shooter

(For more resources on Game Development, see here.)

Before we start

As there is a large amount of ground to cover in this chapter, we'll be moving quickly through steps similar to those we've done before. If it's been a while since you've used Construct, then you may find it useful to read through a chapter or two again before continuing, because certain steps assume that you are able to complete actions we have performed in the past.

Multiplayer: getting your friends involved

We're ready to start our next game, a multiplayer side-scrolling shooter, but before we add any shooting, we'll need to have the multiplayer side-scroller part finished first. So let's get to it!

Time for action – creating the game assets and title screen

The first thing we will need to do is to create our game content and the first layout of the game, the title screen. First, draw up our player graphics and guns. We'll want the torso to be a separate object from the legs for easier animation. Use red dots where the legs will be attached as markers for image point placement later on. Also include drawings for three weapons: a pistol, an uzi, and a shotgun. Next, we can draw up our enemies for the game. In this case, we'll use an enemy robot with a turret arm that shoots balls of plasma. We'll also need some scenery and ground objects for the levels. Finally, we'll need a graphic to tell the player to go on to the next screen when no enemies are present.

Now we can move on to starting our game. Create a new project and set its Name to SideShooter, and enter your Creator name. Then set the window size to 800 by 600. Create the global variables CanGoNext, CurrentScreen, NumPlayers, P1Lives, P2Lives, GameWonLost, and RespawnY with values 0, 0, 1, 3, 3, 0, and 100 respectively. Following that, rename the first layout to Title and its event sheet to Title events. On this layout, create the layers Text, Buttons, and Background in top-to-bottom order.
Selecting the Background layer, create a Panel object and name it Background before setting its top corner filters to Green and bottom corner filters to DarkGreen. Stretch this object to cover the whole of the layout and lock the layer. Now, on the Buttons layer, create a sprite and draw a box with the word Play in it. Position this object in the center of the layout. This will be the start button for our game and should have the name btnPlay. Next, add the Mouse & Keyboard and XAudio2 objects into the layout and give them the global property. To finish the layout design, create a Text object on the Text layer, set its name to Title and its text to SideShooter, and position it above the btnPlay object. Switch over to the event sheet editor, add a Start of layout event, and use it to play a music file, Title.mp3, and loop it. This can be any title music you'd like, and it will also be played at the game's end screen. Next, create an event for the MouseKeyboard object with the condition Mouse is over object to check if the mouse overlaps btnPlay. Give this event the action Set colour filter for the btnPlay object to set the filter to Grey - 40. Now create an Else event to set the filter color back to White. To finish the event sheet, create an event with the condition On object clicked and select the object btnPlay. Add actions to this event to set the value of NumPlayers to 1 and the value of CurrentScreen to 0 before adding the final System action Next Layout.

Time for action – designing the level

Now that we have our graphics created, we can put them into our second layout and make the first playable level. Create a layout and name it Level1 and its event sheet Level1 events. Then create our main game event sheet, Game. For the layout, set its width to a multiple of the screen width (800), and check the box for Unbounded scrolling.
In the event sheet of this layout, include our main event sheet Game. Next, give this layout the layers HUD, Foreground, FrontScenery, Objects, LevelLights, ShadowCasters, BackScenery, and Walls in top-to-bottom order. After that, set the ScrollX and ScrollY rates of the HUD and Walls layers to 0%. On the Objects layer, create a Tiled Background object named Ground and give it the smaller light gray tile image. Ensure it has the Solid attribute, then stretch and place some of them to form a level design. Now create a Sprite object with the light green tile image and name it CrateBox. Give it the Solid attribute as well and place some around the level too. Have its collisions mode set to Bounding box. Next, create a Sprite named ExitLevel and fill it with a solid color. Give it a width of 32 and stretch it so that it's taller than the display height (600). Then finish the object by checking the box for Invisible on start and placing it at the end of the level. With the base layout complete, we can now add three more invisible objects to handle scrolling. These are going to be Box objects with the names BackStopper, FrontStopper, and GoNextScreen. Have the BackStopper and FrontStopper objects colored red and marked Solid, with a width of 120. Set the Hotspot property of the BackStopper object to Bottom-right, and the FrontStopper to Bottom-left, before positioning them at 0,600 and 800,600 respectively. Next, have the GoNextScreen box colored green, with a width of 32 and a Hotspot of Bottom-right. Position this object at 800,600.

Time for action – creating player characters and conveyor belt objects

Now we can create our player character objects and also add moving and static conveyor belts into our level. Start by inserting a Sprite and paste the standing image of the player character legs into it.
Have an image point named 1 at the red spot that we drew earlier, and then place the hotspot at the bottom-middle point of the image (num-pad 2), as shown in the following image. Name this sprite P1Legs, and for its Default animation, set the animation Tag to Stopped before checking the Auto Mirror checkbox in the main Appearance settings of the object. Next, give it the platform behavior with the settings shown as follows. Then scroll down to the Angle properties box and check the Auto mirror checkbox. Now we are ready to add the object to a family. Scroll up to the Groups sub-group Families and click on Add to bring up the Construct: New Family screen. Note that Construct Classic comes with some default families, but will also display any families that have been used previously. Click on Edit Families to bring up the Construct: Families editor screen. On this screen, enter the name Legs and click on the + button. We can now draw the family image using the image editor that appears, shown as follows. After finishing the image, save it by exiting the image editor and check that the family is now in the list. Once finished, click on Done to return to the family selection page. Now select the Legs family and click on Ok. Now we can add the Walk animation for our object. In this animation, set the Tag to Walking, and add the other leg sprites so they match the following image. Then change the settings of the animation to have an Animation speed of 5 and check the Loop checkbox. Next, copy the object and right-click on the layout to use the Paste Clone option, and name this clone P2Legs. In the Platform behavior for this object, change the Control setting to Player 2. Now go into the project properties and scroll to the bottom section, Controls. Click on the Add / Edit option for the Manage Controls setting at the bottom to bring up the controls box. Use the red X button to remove all of the controls below Jump.
Then click on the Green plus button to add a new control. Select this control and click on the Pencil and paper button to change its name to Move Left. Click on the Key name for this control to bring up a drop-down box and set it to A. Now, in the Player drop-down box, select Player 2 for this control. It should match the following screenshot: Now continue to add controls until it matches the settings in the following screenshot before clicking on Done: Next, we'll create the bodies of our player objects. Create a Sprite called P1Torso and paste the normal body image of our character into it. Then position the Hotspot in the bottom-middle of the body. Give this sprite an image point 1 in the centre of its hand: Rename the Default animation to Normal and set its speed to 0, and check the Auto Mirror checkbox for this object as well. Create two more animations, Up and Down respectively. Set their frames to match the following screenshot: Now give the object a new family and name it Body. Then create Private Variables of names Weapon, Ammo, and GunAngle . Set the starting values to 0, 99, and 0 respectively. Clone this object as well to create P2Torso, and replace the sprites with the second player graphics. Now select P1Legs and scroll down to the Groups | Container properties to click on Add object and select P1Torso. The properties box and sprites should match the following screenshot: Next, put the P2Torso object into the container P2Legs using the same method. Now, on the Walls layer, create a Tiled Background object named FactoryWall and paste the dark wall graphic into it. Then resize it to 800 in Width by 600 in Height and set its position to 0,0 . The layout should look similar to the following screenshot: Switch to the FrontScenery layer and create a Tiled Background object called ConveyorMiddle, and give it the center piece of the conveyor belt images: Give this object the Private Variables of Direction and Speed with starting values of 0. 
Place these around the map to act as scenery, as well as to move players at certain points. Set the Direction variable to 1 to have the conveyor belt move right, and to 2 to move left. Speed is the attribute used to determine how fast a player character is moved by the conveyor belt; a speed of 25 works well in this instance. The following screenshot shows a moving conveyor belt in the layout. On the same layer, create a Sprite with the name ConveyorStart and Collisions set to None. Use the starting conveyor belt image for this object and set the Hotspot to the middle-right (num-pad 6). Give this sprite the Attribute of Destroy on Startup. Create a second Sprite with the same settings called ConveyorEnd, with a Hotspot in the middle-left (num-pad 4). Both sprites are shown in the following screenshot.

Time for action – creating the HUD objects

Now we will move on to creating the Heads Up Display (HUD) objects for our game. Switch to the HUD layer and create a Sprite called P1Face; give it an image of the P1Torso head and set its Collisions mode to Bounding Box. Next, create a Text object called P1Info and set it up to match the following screenshot. Create similar objects, replacing P1 with P2, for the second player. In this case, have the objects match the following layout screenshot. Now create a Text object for when the second player is not playing, and call it P2Join. Set its text to Press left control to join and match it to the following screenshot. Give it a Sine behavior to make it fade in and out by matching its settings to those in the following screenshot. Now create the final HUD item, a Sprite called NextSign, and place the next arrow image into it. Set the Collisions of this object to None:
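The conveyor-belt convention described in this section (Direction 1 moves the player right, Direction 2 moves left, Speed controls how fast) can be sketched as plain logic. This is our illustration of the rule, not Construct's event system, and the assumption that Speed is in pixels per second is ours:

```javascript
// Horizontal displacement a conveyor applies to a standing player over dt seconds.
function conveyorShift(direction, speed, dt) {
  if (direction === 1) return speed * dt;  // 1: belt carries the player right
  if (direction === 2) return -speed * dt; // 2: belt carries the player left
  return 0;                                // anything else: decorative, no movement
}

// e.g. a belt with Speed 25 moving right shifts the player 25 units per second
```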
Packt
22 May 2012
9 min read

Building Objects in Inkscape

(For more resources on Inkscape, see here.) Working with objects Objects in Inkscape are any shapes that make up your overall drawing. This means that any text, path, or shape that you create is essentially an object. Let's start by making a simple object and then changing some of its attributes. Time for action – creating a simple object Inkscape can create predefined shapes that are part of the SVG standard. These include rectangles/squares, circles/ellipses/arcs, stars, polygons, and spirals. To create any of these shapes, you can select items from the toolbar: However, you can also create more freehand-based objects as well. Let's look at how we can create a simple freehand triangle: Select the Bezier tool: Click once where you want the first corner and then move the mouse/pointer to the next corner. A node appears with the click and then a freehand line: When you have the length of the first side of the triangle estimated, click for the second corner: Move the mouse to form the second side and click for the third corner: Move the mouse back to the first corner node and click it to form the triangle, shown as follows: Now save the file. From the main menu, select File and then Save. We will use this triangle to build a graphic later in this book, so choose a location to save so that you will know where to find the file. Now that the basic triangle is saved, let's also experiment with how we can manipulate the shape itself and/or the shape's position on the canvas. Let's start with manipulating the triangle. Select the triangle and drag a handle to a new location. You have essentially skewed the triangle, as shown in the following diagram: To change the overall shape of the triangle, select the triangle, then click the Edit path by Nodes tool (or press F2 ): Now the nodes of the triangle are displayed as follows: Nodes are points on a path that define the path's shape. 
Click a node and you can drag it to another location to manipulate the triangle's overall shape as follows: Double-click between two nodes to add another node and change the shape: If you decide that you don't want the extra node, click it (the node turns red), press Delete on your keyboard and it disappears. You can also use the control bar to add, delete, or manipulate the path/shape and nodes: If you want to change the position of the shape on the canvas by choosing the Select tool in the toolbox, click and drag the shape and move it where you need it to be. Change the size of the shape by also choosing the Select tool from the toolbox, clicking and holding the edge of the shape at the handle (small square or circles at edges), and dragging it outward to grow larger or inward to shrink until the shape is of the desired size. You can also rotate an object. Choose the Select tool from the toolbox and single-click the shape until the nodes turn to arrows with curves (this might require you to click the object a couple of times). When you see the curved arrow nodes, click-and-drag on a corner node to rotate the object until it is rotated and positioned correctly. No need to save this file again after we have manipulated it—unless you want to reference this new version of the triangle for future projects. What just happened? We created a free-form triangle and saved it for a future project. We also manipulated the shape in a number of ways—used the nodes to change the skew of the overall shape, added nodes to change the shape completely, and also how to move the shape around on the canvas. Fill and Stroke As you've already noticed, when creating objects in Inkscape they have color associated with them. You can fill an object with a color as well as give the object an outline or stroke. This section will explain how to change these characteristics of an object in Inkscape. 
Fill and Stroke dialog

You can use the Fill and Stroke dialog from the main menu to change the fill colors of an object.

Time for action – using the Fill and Stroke dialog

Let's open the dialog and get started: Open your triangle Inkscape file again and select the triangle. From the main menu, choose Object | Fill and Stroke (or use the Shift + Ctrl + F keyboard shortcut). The Fill and Stroke dialog appears on the right-hand side of your screen. Notice it has three tabs: Fill, Stroke paint, and Stroke style, as shown in the following screenshot. Select the Fill tab (if not already selected). Here are the options for fill: Type of fill: the buttons below the Fill tab allow you to select the type of fill you would like to use: no fill (the button with the X), flat color, or linear or radial gradients. In the previous example screenshot, the flat fill button is selected. Color picker: another set of tabs is presented below the type-of-fill area: RGB, CMYK, HSL, and Wheel. You can use any of these to choose a color. The most intuitive option is Wheel, as it allows you to see all the colors visually and rotate a triangle to the color of your choice, as shown in the following screenshot. Once a color is chosen, its exact values can be seen on the other color picker tabs. Blur: below the color area, you also have an option to blur the object's fill. The further you move the slider to the right, the further the blur of the fill spreads outward. See the following diagram for examples of an object without and with blur. Opacity: lastly, there is the opacity slider. Moving this slider to the right gives the object an alpha (opacity) setting, making it a bit more transparent. The following diagram demonstrates opacity. In the Fill and Stroke dialog, if you select the Stroke paint tab, you will notice it looks very much like the Fill tab.
You can remove the stroke (outline) of the object, set the color, and determine if it is a flat color or gradient: In the last tab, Stroke style is where you can most notably set the width of the stroke: You can also use this tab to determine what types of corners or joins an object has (round or square corners) and how the end caps of the border look like. The Dashes field gives options for the stroke line type, as shown in the following screenshot: Start, Mid, and End Markers allow you to add end points to your strokes, as follows: For our triangle object, use the Fill tab and choose a green color, no stroke, and 100 percent opacity: What just happened? You learned where to open the Fill and Stroke dialog, adjust the fill of an object, use blur and opacity, and how to change the stroke color and weights of the stroke line. Next, let's learn other ways to change the fill and stroke options. Color palette bar You can also use the color palette bar to change fill color: Time for action – using the color palette Let's learn all the tips and tricks for using the color palette bar: From the palette bar, click a color and drag it from the palette onto the object to change its fill, as shown in the following diagram: You can also change an object and the stroke color in a number of other ways: Select an object on the canvas and then click a color box in the palette to immediately set the fill of an object. Select an object on the canvas and then right-click a color box in the palette. A popup menu appears with options to set the fill (and stroke). If you hold the Shift key and drag a color box onto an object, it changes the stroke color. Shift + left-click a color box to immediately set the stroke color. Note, you can use the scroll bar just below the viewable color swatches on the color palette to scroll right to see even more color choices. What just happened? 
You learned how to change the fill and stroke color of an object by using the color swatches on the color palette bar on the main screen of Inkscape.

Dropper

Yet another way to change the fill or stroke of an object is to use the dropper. Let's learn how to use it.

Time for action – using the dropper tool

Open an Inkscape file with objects on the canvas, or create a quick object to try this out: Select an object on the canvas. Select the dropper tool from the toolbar or use the shortcut key F7. Then click anywhere in the drawing on the color you want to pick. The chosen color will be assigned to the selected object's fill. Alternatively, use Shift + click to set the stroke color. Be aware of the tool control bar and the dropper tool controls, shown as follows. The two buttons affect the opacity of the object, especially if it is different from the 100% setting: If Pick is disabled, then the color chosen by the dropper looks exactly as it does on screen. If Pick is enabled and Assign is disabled, then the color picked by the dropper is the one the object would have if its opacity was 100%. If Pick is enabled and Assign is enabled, then the color and opacity are both copied from the picked object.

What just happened?

By using the dropper tool, you learned how to apply the color of another object on the screen to a selected object.
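Whichever of these methods you use, the fill, stroke, and opacity settings ultimately land in the saved file as plain SVG attributes. As a rough illustration (the coordinates and color value below are made up, not taken from the exercise), a green, strokeless triangle like ours might be stored as:

```xml
<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
  <!-- fill, stroke, and opacity correspond to the dialog's tabs and sliders -->
  <path d="M 10,90 L 50,10 L 90,90 Z"
        fill="#2ca02c" stroke="none" opacity="1"/>
</svg>
```

This is one reason the Fill and Stroke dialog, the palette bar, and the dropper all behave consistently: they are different front-ends for editing the same attributes.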