How-To Tutorials - Game Development


Resource Manager

Packt
03 Sep 2013
11 min read
Resource definitions

In order to be able to define resources, we need to create a module that will be in charge of handling this. The main idea is that before a certain asset is requested through ResourceManager, it has to be defined in ResourceDefinitions. That way, ResourceManager will always have access to the metadata it needs to create the asset (filenames, sizes, volumes, and so on).

In order to identify the asset types (sounds, images, tiles, and fonts), we will define some constants (note that the number values of these constants are arbitrary; you could use whatever you want here). Let's call them RESOURCE_TYPE_[type] (feel free to use another convention if you want to). To make things easier, just follow this convention for now, since it's the one we'll use in the rest of the book. You should enter them in main.lua as follows:

RESOURCE_TYPE_IMAGE = 0
RESOURCE_TYPE_TILED_IMAGE = 1
RESOURCE_TYPE_FONT = 2
RESOURCE_TYPE_SOUND = 3

If you want to understand the actual reason behind these resource type constants, take a look at the load function of our ResourceManager entity in the next section.

We need to create a file named resource_definitions.lua and add some simple methods that will handle it. Add the following line to it:

module ( "ResourceDefinitions", package.seeall )

The preceding line indicates that all of the code in the file should be treated as a module, accessed through ResourceDefinitions in the code. This is one of many Lua patterns used to create modules. If you're not used to Lua's module function, you can read about it in the modules tutorial at http://lua-users.org/wiki/ModulesTutorial.

Next, we will create a table that contains these definitions:

local definitions = {}

This will be used internally and is not accessible through the module API, so we create it using the keyword local.

Now, we need to create the setter, getter, and removal methods for the definitions. The setter method (called set) stores the definition parameter (a table) in the definitions table, using the name parameter (a string) as the key, as follows:

function ResourceDefinitions:set ( name, definition )
  definitions[name] = definition
end

The getter method (called get, duh!) retrieves the definition that was previously stored (by a call to ResourceDefinitions:set ()) using the name parameter as the key of the definitions table, as follows:

function ResourceDefinitions:get ( name )
  return definitions[name]
end

The final method that we're creating is remove. We use it to clear the memory space used by a definition. In order to achieve this, we assign nil to the entry of the definitions table indexed by the name parameter, as follows:

function ResourceDefinitions:remove ( name )
  definitions[name] = nil
end

In this way, we remove the reference to the object, allowing the memory to be released by the garbage collector. This may seem unnecessary here, but it's a good example of how you should manage objects you want removed from memory by the garbage collector. Besides, we don't know what information comes in a resource definition; it may be huge, we just don't know.

This is all we need for the resource definitions. We're making use of the dynamism that Lua provides. See how easy it was to create a repository for definitions that is abstracted from the content of each definition. We'll define different fields for each asset type, and we don't need to declare them beforehand, as we probably would have had to in C++.
Resource manager

We will now create our resource manager. This module will be in charge of creating and storing our decks and assets in general. We'll retrieve the assets with one single command, and they'll either come from the cache or get created using the definition.

We need to create a file named resource_manager.lua and add the following line to it:

module ( "ResourceManager", package.seeall )

This is the same as in the resource definitions; we're creating a module that will be accessed using ResourceManager.

ASSETS_PATH = 'assets/'

We now create the ASSETS_PATH constant. This is the path where we will store our assets. You could have many paths for different kinds of assets, but in order to keep things simple, we'll keep all of them in one single directory in this example. Using this constant will allow us to use just the filename instead of having to write the whole path when creating the actual resource definitions, saving us some phalanx injuries!

local cache = {}

Again, we're creating a cache table as a local variable. This is the variable that will store our initialized assets.

Now we should take care of implementing the important functionality. In order to make this more readable, I'll be using methods that we define in the following pages. So, I recommend that you read the whole section before trying to run what we code now. The full source code can be downloaded from the book's website, featuring inline comments. In the book, we removed the comments for brevity's sake.

Getter

The first thing we will implement is our getter method, since it's simple enough:

function ResourceManager:get ( name )
  if (not self:loaded ( name )) then
    self:load ( name )
  end
  return cache[name]
end

This method receives a name parameter that is the identifier of the resource we're working with. On the first line, we call loaded (a method that we will define soon) to see whether the resource identified by name was already loaded. If it was, we just need to return the cached value; if it was not, we need to load it, and that's what we do in the if statement. We use the internal load method (which we will define later as well) to take care of the loading. We will make this load method store the loaded object in the cache table. So after loading it, the only thing we have to do is return the object contained in the cache table indexed by name.

One of the auxiliary functions that we use here is loaded. Let's implement it, since it's really easy to do so:

function ResourceManager:loaded ( name )
  return cache[name] ~= nil
end

What we do here is check whether the cache table indexed by the name parameter is not equal to nil. If cache has an object under that key, this will return true, and that's what we were looking for to decide whether the object represented by the name parameter was already loaded.

Loader

load and its auxiliary functions are the most important methods of this module. They'll be slightly more complex than what we've done so far, since they make the magic happen. Pay special attention to this section. It's not particularly hard, but it might get confusing.

Like the previous methods, this one receives just the name parameter that represents the asset we're loading, as follows:

function ResourceManager:load ( name )

First of all, we retrieve the definition for the resource associated with name.
We make a call to the get method from ResourceDefinitions, which we defined earlier, as follows:

  local resourceDefinition = ResourceDefinitions:get ( name )

If the resource definition does not exist (because we forgot to define it before), we print an error to the screen, as follows:

  if not resourceDefinition then
    print ( "ERROR: Missing resource definition for " .. name )

If the resource definition was retrieved successfully, we create a variable that will hold the resource and (pay attention) call the correct load auxiliary function, depending on the asset type.

  else
    local resource

Remember the RESOURCE_TYPE_[type] constants that we created in the ResourceDefinitions module? This is the reason for their existence. Thanks to them, we now know how to load the resources correctly. When we define a resource, we must include a type key with one of the resource types. We'll insist on this soon. What we do now is call the correct load method for images, tiled images, fonts, and sounds, using the value stored in resourceDefinition.type, as follows:

    if (resourceDefinition.type == RESOURCE_TYPE_IMAGE) then
      resource = self:loadImage ( resourceDefinition )
    elseif (resourceDefinition.type == RESOURCE_TYPE_TILED_IMAGE) then
      resource = self:loadTiledImage ( resourceDefinition )
    elseif (resourceDefinition.type == RESOURCE_TYPE_FONT) then
      resource = self:loadFont ( resourceDefinition )
    elseif (resourceDefinition.type == RESOURCE_TYPE_SOUND) then
      resource = self:loadSound ( resourceDefinition )
    end

After loading the current resource, we store it in the cache table, in the entry specified by the name parameter, as follows:

    -- store the resource under the name on cache
    cache[name] = resource
  end
end

Now, let's take a look at all of the different load methods. The expected definitions are explained before the actual functions, so you have a reference when reading them.

Images

Loading images is something that we've already done, so this is going to look somewhat familiar. In this book, we'll have two ways of defining images. Let's take a look at them:

{
  type = RESOURCE_TYPE_IMAGE,
  fileName = "tile_back.png",
  width = 62,
  height = 62,
}

As you may have guessed, the type key is the one used in the load function. In this case, we need to make it of type RESOURCE_TYPE_IMAGE. Here we are defining an image that has specific width and height values, and that is located at assets/tile_back.png. Remember that we will use ASSETS_PATH in order to avoid writing assets/ a zillion times; that's why we're not writing it in the definition.

Another useful definition is:

{
  type = RESOURCE_TYPE_IMAGE,
  fileName = "tile_back.png",
  coords = { -10, -10, 10, 10 }
}

This is handy when you want a specific rectangle inside a bigger image. You can use the coords attribute to define this rectangle. For example, we get a square with 20-pixel-long sides centered in the image by specifying coords = { -10, -10, 10, 10 }.

Now, let's take a look at the actual loadImage method to see how this all falls into place:

function ResourceManager:loadImage ( definition )
  local image

First of all, we use the same technique of defining an empty variable that will hold our image:

  local filePath = ASSETS_PATH .. definition.fileName

We create the actual full path by appending the value of fileName in the definition to the value of the ASSETS_PATH constant.
Next, an if statement checks whether the coords attribute is defined:

  if definition.coords then
    image = self:loadGfxQuad2D ( filePath, definition.coords )

Here we use another auxiliary function called loadGfxQuad2D. It will be in charge of creating the actual image. The reason why we're using another auxiliary function is that the code used to create the image is the same for both definition styles, but the data in the definition needs to be processed differently. In this case, we just pass the coordinates of the rectangle.

  else
    local halfWidth = definition.width / 2
    local halfHeight = definition.height / 2
    image = self:loadGfxQuad2D ( filePath, { -halfWidth, -halfHeight, halfWidth, halfHeight } )

If there is no coords attribute, we assume the image is defined using width and height. So what we do is define a rectangle that covers the whole width and height of the image. We do this by calculating halfWidth and halfHeight and then passing these values to the loadGfxQuad2D method. Remember the discussion about texture coordinates in Moai SDK; this is the reason why we need to divide the dimensions by 2 and pass them as negative and positive parameters for the rectangle. This allows the image to be centered on (0, 0).

After loading the image, we return it so it can be stored in the cache by the load method:

  end
  return image
end

Now the last method we need to write is loadGfxQuad2D. This method basically creates the image to be displayed, as follows:

function ResourceManager:loadGfxQuad2D ( filePath, coords )
  local image = MOAIGfxQuad2D.new ()
  image:setTexture ( filePath )
  image:setRect ( unpack ( coords ) )
  return image
end

Lua's unpack method is a nice tool that allows you to pass a table as separate parameters. You can use it to split a table into multiple variables as well:

x, y = unpack ( position_table )

What we do here is instantiate the MOAIGfxQuad2D class, set the texture we defined in the previous function, and use the coordinates we constructed to set the rectangle this image will use from the original texture. Then we return it so loadImage can use it.

Well! That was it for images. It may look complicated at first, but it's not that complex. The rest of the assets will be simpler than this, so if you understood this one, the rest will be a piece of cake.


Introduction to AI

Packt
28 Aug 2013
30 min read
Artificial Intelligence (AI)

Living organisms such as animals and humans have some sort of intelligence that helps them make particular decisions to perform something. Computers, on the other hand, are just electronic devices that can accept data, perform logical and mathematical operations at high speed, and output the results. So, Artificial Intelligence (AI) is essentially the subject of making computers able to think and decide like living organisms in order to perform specific operations.

This is, apparently, a huge subject. But it is really important to understand the basics of the AI being used in different domains. AI is just a general term; its implementations and applications are different for different purposes, solving different sets of problems. Before we move on to game-specific techniques, let's take a look at the following research areas in AI applications:

Computer vision: The ability to take visual input from sources such as videos and cameras, and analyze it to do particular operations such as facial recognition, object recognition, and optical character recognition.

Natural language processing (NLP): The ability that allows a machine to read and understand the languages we normally write and speak. The problem is that these languages are difficult for machines to understand. There are many different ways to say the same thing, and the same sentence can have different meanings according to the context. NLP is an important step for machines, since they need to understand the languages and expressions we use before they can process them and respond accordingly. Fortunately, there's an enormous amount of data sets available on the Web that can help researchers do automatic analysis of a language.

Common sense reasoning: A technique our brains can easily use to draw answers even from domains we don't fully understand. Common sense knowledge is a usual and common way for us to attempt certain questions, since our brains can mix and interplay between context, background knowledge, and language proficiency. Making machines apply such knowledge, however, is very complex, and still a major challenge for researchers.

AI in games

Game AI needs to complement the quality of a game. For that, we need to understand the fundamental requirement that every game must satisfy: the fun factor. So, what makes a game fun to play? This is the subject of game design, and a good reference is The Art of Game Design by Jesse Schell. Without going deep into game design topics, we'll find that what makes a game fun to play is challenge. The game should not be so difficult that it's impossible for the player to beat the opponent, nor so easy that winning is trivial. Finding the right level of challenge is the key to making a game fun to play.

And that's where AI kicks in. The role of AI in games is to make the game fun by providing challenging opponents to compete against, and interesting non-player characters (NPCs) that behave realistically inside the game world. The objective is not to replicate the whole thought process of humans or animals, but to make the NPCs seem intelligent by reacting to changing situations inside the game world in a way that makes sense to the player.
The reason we don't want to make the AI system in games too computationally expensive is that the processing power required for AI calculations needs to be shared with other operations, such as graphics rendering and physics simulation. Also, don't forget that they are all happening in real time, and it's really important to achieve a steady frame rate throughout the game. There have even been attempts to create a dedicated processor for AI calculations (AIseek's Intia processor). With ever-increasing processing power, we now have more and more room for AI calculations. However, like all the other disciplines in game development, optimizing AI calculations remains a huge challenge for AI developers.

AI techniques

In this section, we'll walk through some of the AI techniques used in different types of games. Take it as a crash course before actually going into implementation. If you want to learn more about AI for games, there are some really great books out there, such as Programming Game AI by Example by Mat Buckland and Artificial Intelligence for Games by Ian Millington and John Funge. The AI Game Programming Wisdom series also contains a lot of useful resources and articles on the latest AI techniques.

Finite State Machines (FSMs)

Finite State Machines (FSMs) can be considered one of the simplest AI model forms, and they are commonly used in the majority of games. A state machine basically consists of a finite number of states connected in a graph by the transitions between them. A game entity starts with an initial state, and then looks out for the events and rules that will trigger a transition to another state. A game entity can only be in exactly one state at any given time. For example, let's take a look at an AI guard character in a typical shooting game. Its states could be as simple as patrolling, chasing, and shooting.

Simple FSM of an AI guard character

There are basically four components in a simple FSM:

States: This component defines a set of states that a game entity or an NPC can choose from (patrol, chase, and shoot)

Transitions: This component defines the relations between different states

Rules: This component is used to trigger a state transition (player on sight, close enough to attack, and lost/killed player)

Events: This component triggers the rules to be checked (the guard's visible area, the distance to the player, and so on)

A monster in Quake 2, for example, might have the following states: standing, walking, running, dodging, attacking, idle, and searching. FSMs are widely used in game AI especially because they are easy to implement and more than enough for both simple and somewhat complex games. Using simple if/else statements or switch statements, we can easily implement an FSM, although it can get messy as we start to add more states and more transitions.
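As a minimal sketch, the guard's FSM could be written as follows in UnityScript (the scripting style used later on this page). The sensor functions here are illustrative stubs, not Unity APIs; in a real game they would be implemented with raycasts and distance checks.

enum GuardState { Patrol, Chase, Shoot }

var state : GuardState = GuardState.Patrol;

// Stub sensors -- replace these with raycast and distance tests.
function PlayerOnSight () : boolean { return false; }
function CloseEnoughToAttack () : boolean { return false; }
function LostPlayer () : boolean { return true; }

function Update () {
    switch (state) {
    case GuardState.Patrol:
        if (PlayerOnSight ()) state = GuardState.Chase;         // rule: player on sight
        break;
    case GuardState.Chase:
        if (CloseEnoughToAttack ()) state = GuardState.Shoot;   // rule: close enough to attack
        else if (LostPlayer ()) state = GuardState.Patrol;      // rule: lost/killed player
        break;
    case GuardState.Shoot:
        if (LostPlayer ()) state = GuardState.Patrol;
        else if (!CloseEnoughToAttack ()) state = GuardState.Chase;
        break;
    }
}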
Random and probability in AI

Imagine an enemy bot in an FPS game that can always kill the player with a headshot, or an opponent in a racing game that always chooses the best route and overtakes without colliding with any obstacle. Such a level of intelligence will make the game so difficult that it becomes almost impossible to win. On the other hand, imagine an AI enemy that always chooses the same route to follow or to escape from the player. AI-controlled entities that behave the same way every time the player encounters them make the game predictable and easy to win.

Both situations obviously affect the fun aspect of the game, and make the player feel like the game is not challenging or fair enough anymore. One way to fix this sort of perfect AI and stupid AI is to introduce some errors into their intelligence. In games, randomness and probability are applied in the decision-making process of AI calculations. The following are the main situations when we would want to let our AI entities make a random decision:

Non-intentional: Sometimes a game agent, or perhaps an NPC, might need to make a decision randomly, just because it doesn't have enough information to make a perfect decision, and/or it doesn't really matter what decision it makes. Simply making a decision randomly and hoping for the best result is the way to go in such a situation.

Intentional: This situation is for perfect AI and stupid AI. As we discussed in the previous examples, we need to add some randomness purposely, just to make them more realistic, and also to match the difficulty level that the player is comfortable with. Such randomness and probability could be used for things such as hit probabilities, or plus or minus random damage on top of base damage.

Using randomness and probability, we can add a sense of realistic uncertainty to our game and make our AI system somewhat unpredictable. We can also use probability to define different classes of AI characters. Let's look at the hero characters from Defense of the Ancients (DotA), a popular action real-time strategy (RTS) game mode of Warcraft III. There are three categories of heroes, based on the three main attributes: strength, intelligence, and agility. Strength is the measure of the physical power of the hero, while intelligence relates to how well the hero can control spells and magic. Agility defines a hero's ability to avoid attacks and attack quickly. An AI hero from the strength category will have the ability to do more damage during close combat, while an intelligence hero will have a better chance of scoring higher damage using spells and magic. Carefully balancing the randomness and probability between different classes and heroes makes the game a lot more challenging, and makes DotA a lot of fun to play.

The sensor system

Our AI characters need to know about their surroundings and the world they are interacting with in order to make particular decisions. Such information could be as follows:

Position of the player: Used to decide whether to attack or chase, or to keep patrolling

Buildings and objects nearby: Used to hide or take cover

The player's health and its own health: Used to decide whether to retreat or advance

Location of resources on the map in an RTS game: Used to occupy and collect the resources required for constructing and producing other units

As you can see, it can vary a lot depending on the type of game we are trying to build. So, how do we collect that information?

Polling

One method to collect such information is polling. We can simply do if/else or switch checks in the FixedUpdate method of our AI character. The AI character just polls the information it is interested in from the game world, does the checks, and takes action accordingly. Polling works great if there aren't too many things to check. However, some characters might not need to poll the world state every frame, and different characters might require different polling rates.
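A minimal polling sketch in UnityScript could look as follows; the player reference and the two range values are assumptions for this example:

var player : Transform;
var attackRange : float = 2.0;
var sightRange : float = 10.0;

function FixedUpdate () {
    // Poll the world state this character cares about, every physics step.
    var distance : float = Vector3.Distance (transform.position, player.position);
    if (distance < attackRange) {
        // attack the player
    } else if (distance < sightRange) {
        // chase the player
    } else {
        // keep patrolling
    }
}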
So, usually in larger games with more complex AI systems, we need to deploy an event-driven method using a global messaging system.

The messaging system

AI does its decision making in response to events in the world. The events are communicated between the AI entity and the player, the world, or the other AI entities through a messaging system. For example, when the player attacks an enemy unit from a group of patrol guards, the other AI units need to know about this incident as well, so that they can start searching for and attacking the player. If we were using the polling method, our AI entities would need to check the state of all the other AI entities in order to know about this incident. With an event-driven messaging system, we can implement this in a more manageable and scalable way: the AI characters interested in a particular event register as listeners, and if that event happens, our messaging system broadcasts it to all listeners. The AI entities can then proceed to take the appropriate actions, or perform further checks.

The event-driven system does not necessarily provide a faster mechanism than polling. But it provides a convenient, central checking system that senses the world and informs the interested AI agents, rather than each individual agent having to check the same event in every frame. In reality, both polling and messaging systems are used together most of the time. For example, AI might poll for more detailed information when it receives an event from the messaging system.

Flocking, swarming, and herding

Many living beings such as birds, fish, insects, and land animals perform certain operations such as moving, hunting, and foraging in groups. They stay and hunt in groups because it makes them stronger and safer from predators than pursuing goals individually. So, let's say you want a group of birds flocking and swarming around the sky; it would cost too much time and effort for animators to design the movement and animations of each bird. But if we apply some simple rules for each bird to follow, we can achieve emergent intelligence of the whole group with complex, global behavior.

One pioneer of this concept is Craig Reynolds, who presented such a flocking algorithm in his 1987 SIGGRAPH paper, Flocks, Herds and Schools – A Distributed Behavioral Model. He coined the term "boid", which sounds like "bird" but refers to a "bird-like" object. He proposed three simple rules to apply to each unit, which are as follows:

Separation: Maintain a minimum distance from neighboring boids to avoid hitting them

Alignment: Align with the average direction of the neighbors, and then move at the same velocity with them as a flock

Cohesion: Move toward the group's center of mass

These three simple rules are all we need to implement a realistic and fairly complex flocking behavior for birds. They can also be applied to group behaviors of any other entity type with little or no modification.
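A sketch of the three rules for a single boid follows; it assumes the neighbors and their velocities are gathered elsewhere, and applying the resulting steering force is left out for brevity:

var separationDistance : float = 1.0;

function Flock (neighbors : Transform[], neighborVelocities : Vector3[]) : Vector3 {
    if (neighbors.Length == 0) return Vector3.zero;
    var center : Vector3 = Vector3.zero;            // for cohesion
    var avoid : Vector3 = Vector3.zero;             // for separation
    var averageVelocity : Vector3 = Vector3.zero;   // for alignment
    for (var i : int = 0; i < neighbors.Length; i++) {
        center += neighbors[i].position;
        averageVelocity += neighborVelocities[i];
        var offset : Vector3 = transform.position - neighbors[i].position;
        if (offset.magnitude < separationDistance)
            avoid += offset;   // separation: push away from close neighbors
    }
    center /= neighbors.Length;
    averageVelocity /= neighbors.Length;
    // Cohesion steers toward the group's center of mass,
    // alignment matches the neighbors' average velocity.
    return (center - transform.position) + avoid + averageVelocity;
}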
Path following and steering

Sometimes we want our AI characters to roam around in the game world, following a roughly guided or thoroughly defined path. For example, in a racing game, the AI opponents need to navigate the road. Decision-making algorithms, such as the flocking boid algorithm we've already discussed, only do well at making decisions. In the end, it all comes down to dealing with actual movement and steering behaviors.

Steering behaviors for AI characters have been a research topic for a couple of decades now. One notable paper in this field is Steering Behaviors for Autonomous Characters, again by Craig Reynolds, presented in 1999 at the Game Developers Conference (GDC). He categorized steering behaviors into three layers:

Hierarchy of motion behaviors

Let me quote the original example from his paper to explain these three layers:

"Consider, for example, some cowboys tending a herd of cattle out on the range. A cow wanders away from the herd. The trail boss tells a cowboy to fetch the stray. The cowboy says "giddy-up" to his horse, and guides it to the cow, possibly avoiding obstacles along the way. In this example, the trail boss represents action selection, noticing that the state of the world has changed (a cow left the herd), and setting a goal (retrieve the stray). The steering level is represented by the cowboy who decomposes the goal into a series of simple sub goals (approach the cow, avoid obstacles, and retrieve the cow). A sub goal corresponds to a steering behavior for the cowboy-and-horse team. Using various control signals (vocal commands, spurs, and reins), the cowboy steers his horse towards the target. In general terms, these signals express concepts like go faster, go slower, turn right, turn left, and so on. The horse implements the locomotion level. Taking the cowboy's control signals as input, the horse moves in the indicated direction. This motion is the result of a complex interaction of the horse's visual perception, its sense of balance, and its muscles applying torques to the joints of its skeleton."

He then presented how to design and implement some common and simple steering behaviors for individual AI characters and pairs. Such behaviors include seek and flee, pursue and evade, wander, arrival, obstacle avoidance, wall following, and path following.

A* pathfinding

There are many games where you can find monsters or enemies that follow the player, or go to a particular point while avoiding obstacles. For example, let's take a look at a typical RTS game. You can select a group of units and click a location where you want them to move, or click on the enemy units to attack them. Your units then need to find a way to reach the goal without colliding with the obstacles. The enemy units also need to be able to do the same. Obstacles can differ between units: for example, an air force unit might be able to pass over a mountain, while ground or artillery units need to find a way around it.

A* (pronounced "A star") is a pathfinding algorithm that is widely used in games because of its performance and accuracy. Let's take a look at an example to see how it works. Say we want our unit to move from point A to point B, but there's a wall in the way, and it can't go straight towards the target. So, it needs to find a way to point B while avoiding the wall.

Top-down view of our map

We are looking at a simple 2D example, but the same idea can be applied to 3D environments. In order to find a path from point A to point B, we need to know more about the map, such as the position of obstacles. For that, we can split our whole map into small tiles, representing the whole map in a grid format, as shown in the following figure:

Map represented in a 2D grid

The tiles can also be of other shapes, such as hexagons or triangles, but we'll just use square tiles here, as that's quite simple and enough for our scenario.
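Such a grid can be held in a small 2D array; the sketch below is illustrative (the wall layout is an assumption, not the exact map from the figures), with 0 marking a walkable tile and 1 an obstacle:

var map : int[,] = new int[5, 5];

function BuildMap () {
    map[1, 2] = 1;   // wall tiles blocking the straight line from A to B
    map[2, 2] = 1;
    map[3, 2] = 1;
}

function IsWalkable (x : int, y : int) : boolean {
    if (x < 0 || x >= 5 || y < 0 || y >= 5) return false;   // off the map
    return map[x, y] == 0;
}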
Representing the whole map in a grid simplifies the search area, and this is an important step in pathfinding. We can now reference our map in a small 2D array. Our map is represented by a 5 x 5 grid of square tiles, for a total of 25 tiles, and we can start searching for the best path to reach the target. How do we do this? By calculating the movement score of each tile adjacent to the starting tile (that is, each tile on the map not occupied by an obstacle), and then choosing the tile with the lowest cost. There are four possible adjacent tiles to the player if we don't consider diagonal movements.

Now, we need to know two numbers to calculate the movement score for each of those tiles. Let's call them G and H, where G is the cost of movement from the starting tile to the current tile, and H is the cost to reach the target tile from the current tile. By adding G and H, we get the final score of that tile; let's call it F. So we'll be using the formula F = G + H.

Valid adjacent tiles

In this example, we'll be using a simple method called Manhattan length (also known as taxicab geometry), in which we just count the total number of tiles between the starting tile and the target tile to know the distance between them.

Calculating G

The preceding figure shows the calculation of G with two different paths. We just add one (the cost to move one tile) to the previous tile's G score to get the G score of the current tile. We can give different costs to different tiles. For example, we might want to give a higher movement cost to diagonal movements (if we are considering them), or to specific tiles occupied by, say, a pond or a muddy road.

Now we know how to get G. Let's look at the calculation of H. The following figure shows different H values from different starting tiles to the target tile. You can try counting the squares between them to understand how we get those values.

Calculating H

So, now we know how to get G and H.
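In code, the two scores could be computed as follows; this sketch assumes a cost of one per step and no diagonal movement:

// H: Manhattan length -- count the tiles between here and the target.
function ManhattanH (x : int, y : int, targetX : int, targetY : int) : int {
    return Mathf.Abs (targetX - x) + Mathf.Abs (targetY - y);
}

// F = G + H, where G grows by one for each step taken from the start.
function FScore (parentG : int, x : int, y : int, targetX : int, targetY : int) : int {
    var g : int = parentG + 1;
    return g + ManhattanH (x, y, targetX, targetY);
}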
Let's go back to our original example to figure out the shortest path from A to B. We first choose the starting tile and determine its valid adjacent tiles, as shown in the following figure. Then we calculate the G and H scores of each tile, shown in the lower-left and lower-right corners of the tile respectively, and the final score F, which is G + H, shown at the top-left corner. The tile to the immediate right of the start tile has the lowest F score, so we choose it as our next move and store the previous tile as its parent. This parent will be useful later, when we trace back our final path.

Starting position

From the current tile, we repeat the process, determining the valid adjacent tiles. This time there are only two valid adjacent tiles, at the top and bottom: the left tile is the starting tile, which we've already examined, and an obstacle occupies the right tile. We calculate the G, H, and F scores of those new adjacent tiles. Now we have four tiles on our map, all with the same score: six. So, which one do we choose? We can choose any of them. It doesn't really matter in this example, because we'll eventually find the shortest path whichever of the equally scored tiles we choose. Usually, we just choose the tile added most recently to our adjacent list. This is because later we'll be using some sort of data structure, such as a list, to store the tiles being considered for the next move, and accessing the tile most recently added to that list can be faster than searching through the list for a particular tile that was added earlier. In this demo, we'll just randomly choose the tile for our next test, to prove that it can actually find the shortest path.

Second step

So, we choose this tile, which is highlighted with a red border, and again examine the adjacent tiles. In this step, there's only one new adjacent tile, with a calculated F score of 8. So, the lowest score right now is still 6.

Third step

We choose a tile randomly from all the tiles with a score of 6. If we repeat this process until we reach our target tile, we'll end up with a board complete with all the scores for each valid tile.

Reach target

Now all we have to do is trace back from the target tile using its parent tiles. This gives a path that looks something like the following figure:

Path traced back

So this is the concept of A* pathfinding in a nutshell, without going into any code. A* is an important concept in AI pathfinding, but since Unity 3.5 there are a couple of new features, such as automatic navigation mesh generation and the Nav Mesh Agent, which we'll look at roughly in the next section and in more detail later. These features make implementing pathfinding in your games much easier. In fact, you may not even need to know about A* to implement pathfinding for your AI characters. Nonetheless, knowing how the system actually works behind the scenes will help you become a solid AI programmer. Unfortunately, those advanced navigation features in Unity are only available in the Pro version at this moment.

A navigation mesh

Now we have some idea of A* pathfinding techniques. One thing you might notice is that using a simple grid in A* requires quite a number of computations to get a path that is the shortest to the target while avoiding the obstacles. So, to make it cheaper and easier for AI characters to find a path, people came up with the idea of using waypoints as a guide to move AI characters from the start point to the target point. Let's say we want to move our AI character from point A to point B, and we've set up three waypoints, as shown in the following figure:

Waypoints

All we have to do now is pick the nearest waypoint, and then follow its connected nodes leading to the target waypoint. Most games use waypoints for pathfinding because they are simple and quite effective at keeping computation costs low. However, they do have some issues. What if we want to update the obstacles in our map? We'll have to place the waypoints again for the updated map, as shown in the following figure:

New waypoints

Following each node to the target can also mean the AI character moves in zigzag directions. Look at the preceding figures; it's quite likely that the AI character will collide with the wall where the path is close to the wall. If that happens, our AI will keep trying to go through the wall to reach the next target, but it won't be able to, and it will get stuck there. Even though we can smooth out the zigzag path by transforming it into a spline and making some adjustments to avoid such obstacles, the problem is that waypoints don't give any information about the environment other than the spline connecting two nodes. What if our smoothed and adjusted path passes the edge of a cliff or a bridge? The new path might not be a safe path anymore.
So, for our AI entities to be able to effectively traverse the whole level, we're going to need a tremendous number of waypoints, which will be really hard to implement and manage. Let's look at a better solution: the navigation mesh. A navigation mesh is another graph structure that can be used to represent our world, similar to the way we did it with our square tile-based grid or waypoints graph.

Navigation mesh

A navigation mesh uses convex polygons to represent the areas in the map that an AI entity can travel. The most important benefit of using a navigation mesh is that it gives a lot more information about the environment than a waypoint system. Now we can adjust our path safely, because we know the safe region in which our AI entities can travel. Another advantage is that we can use the same mesh for different types of AI entities. Different AI entities can have different properties such as size, speed, and movement abilities. A set of waypoints tailored for a human AI may not work nicely for flying creatures or AI-controlled vehicles; those might need different sets of waypoints. Using a navigation mesh can save a lot of time in such cases.

Generating a navigation mesh programmatically based on a scene, however, is a somewhat complicated process. Fortunately, Unity 3.5 introduced a built-in navigation mesh generator (as a Pro-only feature). Rather than generating navigation meshes ourselves, we'll learn how to use Unity's navigation mesh generation features to easily implement our AI pathfinding.

Behavior trees

Behavior trees are another technique used to represent and control the logic behind AI characters. They have become popular through their application in AAA games such as Halo and Spore. We briefly covered FSMs earlier. FSMs provide a very simple way to define the logic of an AI character based on the different states and the transitions between them. However, FSMs are considered difficult to scale and reuse: in order to support all the scenarios we want our AI character to consider, we need to add many states and hard-wire many transitions. So, we need a more scalable approach when dealing with large problems, and behavior trees are a better way to implement AI game characters that could potentially become more and more complex.

The basic elements of behavior trees are tasks, where states are the main elements of FSMs. There are a few different task types, such as Sequence, Selector, and Parallel Decorator. This can be quite confusing at first; the best way to understand it is to look at an example. Let's try to translate our example from the FSM section into a behavior tree. We can break all the transitions and states into tasks.

Tasks

Let's look at a Selector task for this behavior tree. Selector tasks are represented by a circle with a question mark inside. A Selector first chooses to attack the player. If the Attack task returns success, the Selector task is done and goes back to the parent node, if there is one. If the Attack task fails, it tries the Chase task. If the Chase task fails, it tries the Patrol task.

Selector task

What about the tests? They too are tasks in a behavior tree. The following diagram shows the use of Sequence tasks, denoted by a rectangle with an arrow inside it. The root selector may choose the first Sequence action. This Sequence action's first task is to check whether the player character is close enough to attack. If this task succeeds, it proceeds with the next task, which is to attack the player.
If the Attack task also returns success, the whole sequence returns success, and the selector is done with this behavior and will not continue with the other Sequence tasks. If the Close enough to attack? task fails, the Sequence action will not proceed to the Attack task, and will return a failed status to the parent selector task. The selector will then choose the next task in the sequence, Lost or Killed Player?.

Sequence tasks

The other two common components are Parallel tasks and Decorators. A Parallel task executes all of its child tasks at the same time, while Sequence and Selector tasks only execute their child tasks one by one. A Decorator is another type of task that has only one child. It can change the behavior of its child task, including whether to run the child's task or not, how many times it should run, and so on.

We'll study how to implement a basic behavior tree system later. There's also a free add-on for Unity called Behave in the Unity Asset Store. Behave is a useful, free GUI editor for setting up behavior trees of AI characters, and we'll look at it in more detail later as well.
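As a bare-bones sketch, Selector and Sequence tasks could be modeled as follows; the Task base class here is a hypothetical illustration, not Behave's API:

class Task {
    function Run () : boolean { return false; }   // overridden by concrete tasks
}

class Selector extends Task {
    var children : Task[];
    function Run () : boolean {
        for (var child : Task in children) {
            if (child.Run ()) return true;   // first success wins
        }
        return false;   // every child failed
    }
}

class Sequence extends Task {
    var children : Task[];
    function Run () : boolean {
        for (var child : Task in children) {
            if (!child.Run ()) return false;   // first failure aborts
        }
        return true;   // all children succeeded
    }
}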
Locomotion

Animals (including humans) have a very complex musculoskeletal system (the locomotor system) that gives them the ability to move around using their muscles and skeleton. We know where to put our steps when climbing a ladder or stairs, or walking on uneven terrain, and we know how to balance our body to stabilize all the fancy poses we want to make. We can do all this using our bones, muscles, joints, and other tissues, collectively described as our locomotor system.

Now put that into a game development perspective. Let's say we have a human character who needs to walk on both even and uneven surfaces, or on small slopes, and we have only one animation for a "walk" cycle. With the lack of a locomotor system in our virtual character, this is how it would look:

Climbing stairs without locomotion

First we play the walk animation and advance the player forward. When the character penetrates the surface, the collision detection system pulls the character up above the surface to prevent the penetration. This is how we usually set up movement on an uneven surface. Even though it doesn't give a realistic look and feel, it does the job and is cheap to implement.

Let's take a look at how we really walk up stairs. We put a step firmly on the staircase, and using this force, we pull up the rest of our body for the next step. This is how we do it in real life with our advanced locomotor system. However, it's not so simple to implement this level of realism inside games. We would need a lot of animations for different scenarios, including climbing ladders, walking or running up stairs, and so on. In the past, only large studios with a lot of animators could pull this off, until automated systems came along.

With a locomotion system

Fortunately, Unity 3D has an extension that can do just that: a locomotion system.

Locomotion system Unity extension

This system can automatically blend our animated walk/run cycles, and adjust the movements of the bones in the legs to ensure that the feet step correctly on the ground. It can also adapt animations originally made for a specific speed and direction to any surface, arbitrary steps, and slopes. We'll see how to use this locomotion system to apply realistic movement to our AI characters.

Dijkstra's algorithm

Dijkstra's algorithm, named after Professor Edsger Dijkstra, who devised it, is one of the most famous algorithms for finding the shortest paths in a graph with non-negative edge costs. The algorithm was originally designed to solve the shortest path problem in the context of mathematical graph theory, and it finds all the shortest paths from a starting node to every other node in the graph. Since most games only need the shortest path between one starting point and one target point, all the other paths generated by this algorithm are not really useful. We can stop the algorithm once we find the shortest path from a single starting point to a target point, but it will still have explored many paths from all the points it has visited. So, this algorithm is usually not efficient enough to be used in games, and we won't be doing a Unity demo of Dijkstra's algorithm in this article. However, Dijkstra's algorithm is important for games that require strategic AI needing as much information as possible about the map to make tactical decisions. It also has many applications other than games, such as finding the shortest path in network routing protocols.

Summary

Game AI and academic AI have different objectives. Academic AI research tries to solve real-world problems and prove theories, without tight resource constraints. Game AI focuses on building NPCs, within limited resources, that seem intelligent to the player. The objective of AI in games is to provide a challenging opponent that makes the game more fun to play. We also learned briefly about the different AI techniques that are widely used in games, such as finite state machines (FSMs), randomness and probability, sensor and input systems, flocking and group behaviors, path following and steering behaviors, A* pathfinding, navigation mesh generation, and behavior trees.


Bringing Your Game to Life with AI and Animations

Packt
26 Aug 2013
16 min read
After going through these principles, we will complete the tasks to enhance the maze game and the gameplay. We will apply animations to characters and trigger them in particular situations. We will improve the gameplay by having NPCs follow the player when he/she is nearby (behavior based on distance), and attack the player when he/she is within reach.

All the material required to complete this article is available for free download on the companion website: http://patrickfelicia.wordpress.com/publications/books/unity-outbreak/.

The pack for this article includes some great models and animations, provided by the company Mixamo, to enhance the quality of our final game. The characters were animated using Mixamo's easy online sequences and animation building tools. For more information on Mixamo and its easy-to-use 3D character rigging and animation tools, you can visit http://www.mixamo.com.

Before we start creating our level, we will need to rename our scene and download the necessary assets from the companion website, as follows:

Duplicate the scene we have by saving the current scene (File | Save Scene), and then saving this scene as chapter5 (File | Save Scene As…).

Open the link for the companion website: http://patrickfelicia.wordpress.com/publications/books/unity-outbreak/.

Click on the link for the chapter5 pack to download this file.

In Unity3D, create a new folder, chapter5, inside the Assets folder and select this folder (that is, chapter5).

From Unity, select Assets | Import Package | Custom Package, and import the package you have just downloaded. This should create a folder, chapter5_pack, within the folder labeled chapter5.

Importing and configuring the 3D character

We will start by inserting and configuring the zombie character in the scene, as shown in the following steps:

Open the Unity Asset Store window (Window | Asset Store).

In the Search field located in the top-right corner, type the text zombie.

Click on the search result labeled Zombie Character Pack, and then click on the button labeled Import.

In the new window entitled Importing package, uncheck the last box for the low-resolution zombie character and then click on Import. This will import the high-resolution zombie character into our project and create a corresponding folder labeled ZombieCharacterPack inside the Assets folder.

Locate the prefab zombie_hires by navigating to Assets | ZombieCharacterPack. Select this prefab and open the Inspector window, if it is not open yet.

Click on the Rig tab, set the animation type to Humanoid, and leave the other options as default.

Click on the Apply button and then click on the Configure button; a pop-up window will appear: click on Save.

In the new window, select Mapping | Automap, as shown in the following screenshot. After this step, if we check the Hierarchy window, we should see a hierarchy of bones for this character.

Select Pose | Enforce T-Pose, as shown in the following screenshot.

Click on the Muscles tab and then click on Apply in the new pop-up window. The Muscles tab makes it possible to apply constraints on our character. Check whether the mapping is correct by moving some of the sliders and ensuring that the character is represented properly. After this check, click on Done to go back to the previous window.

Animating the character for the game

Once we have applied these settings to the character, we will now use it for our scene.
Drag-and-drop the prefab labeled zombie_hires (by navigating to Assets | ZombieCharacterPack) into the scene, change its position to (x=0, y=0, z=0), and add a collider to the character: select Component | Physics | Capsule Collider. Set the center position of this collider to (x=0, y=0.7, z=0), the radius to 0.5, the height to 2, and leave the other options as default, as illustrated in the following screenshot.

Select Assets | chapter5 | chapter5_pack; you will see that it includes several animations, including Zombie@idle, Zombie@walkForward, Zombie@attack, Zombie@hit, and Zombie@dead.

We will now create the necessary animation for our character. Click once on the object zombie_hires in the Hierarchy window. We should see that it includes a component called Animator. This component is related to the animation of the character through Mecanim. You will also notice an empty slot for an Animator Controller. This controller will be created so that we can animate the character and control its different states, using a state machine.

Let's create an Animator Controller that will be used for this character:

From the project folder, select the chapter5 folder, then select Create | Animator Controller in the Project window. This should create a new Animator Controller labeled New Animator Controller in the folder chapter5. Rename this controller zombieController.

Select the object labeled zombie_hires in the Hierarchy window. Locate the Animator Controller that we have just created by navigating to Assets | chapter5 (zombieController), drag-and-drop it into the empty slot to the right of the attribute Controller in the Animator component of the zombie character, and check that the options Apply Root Motion and Animate Physics are selected.

Our character is now ready to receive the animations. Open the Animator window (Window | Animator). This window is employed to display and manage the different states of our character. Since no animation is linked to the character yet, the default state is Any State. Select the object labeled zombie_hires in the Hierarchy window. Rearrange the windows in our project so that we can see both the state machine window and the character in the Scene view: we can drag the tab labeled Scene for the Scene view to the bottom of the Animator window, so that both windows can be seen simultaneously.

We will now apply our first animation to the character:

1. Locate the prefab Zombie@idle by navigating to Assets | chapter5 | chapter5_pack.

2. Click once on this prefab, and in the Inspector window, click the Rig tab.

3. In the new window, select the option Humanoid for the attribute Animation Type and click on Apply.

4. Click on the Animations tab, and then click on the label idle; this will provide information on the idle clip.

5. Scroll down the window, check the box for the attribute Loop Pose, and click on Apply to apply this change (you will need to scroll down to locate this button).

6. In the Project view, click on the arrow located to the left (or right, depending on how much we have zoomed in within this window) of the prefab Zombie@idle; it will reveal the items included in this prefab, including an animation called idle, symbolized by a gray box with a white triangle.

7. Make sure that the Animator window is active and drag this animation (idle) to the Animator window.

8. This will create an idle state, colored in orange, which means that it is the default state for our character.

9. Rename this state Idle (upper case I) using the Inspector.
Play the scene and check that the character is in an idle state. Then repeat steps 1-9 for the prefab Zombie@walkForward and create a state called WalkForward. To test the second animation, we can temporarily set the state WalkForward to be the default state by right-clicking on the WalkForward state in the Animator window and selecting Set As Default. Once we have tested this animation, set the state Idle as the default state again.

While the zombie is animated properly, you may notice that the camera on the First Person Controller is too high. You can address this by changing the height of the camera so that it is at eye level: in the Hierarchy view, select the object Main Camera located within the object First Person Controller and change its position to (x=0, y=0.5, z=0).

We now have two animations. At present, the character is in the Idle state, and we need to define triggers or conditions for the character to start or stop walking toward the player. In this game, we will have enemies with different degrees of intelligence. This first type will follow the user when it sees the user, is close to the user, or is being attacked by the user. The Animator window will help us create animations and apply transition conditions and blending between them, so that transitions between each animation are smoother. To move around this window, we can hold the Alt key while dragging with the mouse. We can also select states by clicking on them or by defining a selection area (drag-and-drop the mouse to define the area). If needed, it is also possible to maximize this window using the icon located at its top-right corner.

Creating parameters and transitions

First, let's create transitions. Open the Animator window, right-click on the state labeled Idle, and select the option Make Transition from the contextual menu. This will create an arrow that symbolizes the transition from this state to another state. While this arrow is visible, click on the state labeled WalkForward. This will create a transition from the state Idle to the state WalkForward, as illustrated in the following screenshot.

Repeat the last step to create a transition from the state WalkForward to the state Idle: right-click on the state labeled WalkForward, select the option Make Transition from the contextual menu, and click on the state labeled Idle.

Now that these transitions have been defined, we need to specify how the animations will change from one state to the other. This will be achieved using parameters. In the Animator window, click on the + button located at the bottom-right corner of the window, as indicated in the following screenshot. Doing so will display a contextual menu, from which we can choose the type of the parameter. Select the option Bool to create a Boolean parameter. A new parameter with a default name should now appear, as illustrated in the following screenshot: change the name of the parameter to walking.

Now that the parameter has been defined, we can start defining transitions based on it. Let's start with the first transition, from the Idle state to the WalkForward state: select the transition from the Idle state to the WalkForward state (that is, click on the corresponding arrow in the Animator window). If we look at the Inspector window, we can see that this object has several components, including Transitions and Conditions. Let's focus on the Conditions component for the time being.
We can see that the condition for the transition is based on a parameter called ExitTime and that the value is 0.98. This means that the transition will occur when the current animation has reached 98 percent completion. However, we would like to use the parameter labeled walking instead. Click on the parameter ExitTime; this should display the other parameters that we can use for this transition. Select walking from the contextual menu and make sure that the condition is set to true, as shown in the following screenshot:

The process will be similar for the other transition (that is, from WalkForward to Idle), except that the condition for the parameter walking will be false: select the second transition (WalkForward to Idle) and set the transition condition of walking to false.

To check that the transitions are working, we can do the following:

1. Play the scene and look at the Scene view (not the Game view).
2. In the Animator window, change the parameter walking to true by checking the corresponding box, as highlighted in the following screenshot:
3. Check that the zombie character starts walking; click on this box again to set the variable walking to false, check that the zombie stops walking, and stop the Play mode (Ctrl + P).

Adding basic AI to enemies

We have managed to set transitions for the animations and the state of the zombie from Idle to WalkForward. To add some challenge to the game, we will equip this enemy with some AI and create a script that changes the state of the enemy from Idle to WalkForward whenever it sees the player.

First, let's assign the predefined tag Player to First Person Controller: select First Person Controller from the Hierarchy window, and in the Inspector window, click on the drop-down menu to the right of the label Tag and select the tag Player.

Then, we can start creating a script that will set the direction of the zombie toward the player. Create a folder labeled Scripts inside the folder Assets | chapter5, create a new script, rename it controlZombie, and add the following code to the start of the script:

    public var walking:boolean = false;
    public var anim:Animator;
    public var currentBaseState:AnimatorStateInfo;
    public var walkForwardState:int = Animator.StringToHash("Base Layer.WalkForward");
    public var idleState:int = Animator.StringToHash("Base Layer.Idle");
    private var playerTransform:Transform;
    private var hit:RaycastHit;

In statement 1 of the previous code, a Boolean value is created. It is linked to the parameter used for the animation in the Animator window.

In statement 2, we define an Animator object that will be used to manage the Animator component of the zombie character.

In statement 3, we create an AnimatorStateInfo variable that will be used to determine the current state of the animation (for example, Idle or WalkForward).

In statement 4, we create a variable, walkForwardState, that will represent the state WalkForward previously defined in the Animator window. We use the method Animator.StringToHash to convert this state from a string to an integer that can then be used to monitor the active state.

In statement 5, similar to the previous comment, a variable is created for the state Idle.

In statement 6, we create a variable that will be used to detect the position of the player.

In statement 7, we create a ray that will be employed later on to detect the player.
Next, let's add the following function to the script:

    function Start ()
    {
        anim = GetComponent(Animator);
        playerTransform = GameObject.FindWithTag("Player").transform;
    }

In line 3 of the previous code, we initialize the variable anim with the Animator component linked to this GameObject.

We can then add the following lines of code:

    function Update ()
    {
        currentBaseState = anim.GetCurrentAnimatorStateInfo(0);
        gameObject.transform.LookAt(playerTransform);
    }

In line 3 of the previous code, we determine the current state of our animation. In line 4, the transform component of the current game object is oriented so that it is looking at the First Person Controller. Therefore, when the zombie is walking, it will follow the player.

Save this script, and drag-and-drop it to the character labeled zombie_hires in the Hierarchy window.

As we have seen previously, we will need to manage several states through our script, including the states Idle and WalkForward. Let's add the following code to the Update function:

    switch (currentBaseState.nameHash)
    {
        case idleState:
            break;
        case walkForwardState:
            break;
        default:
            break;
    }

Depending on the current state, we will switch to a different set of instructions: all code related to the state Idle will be included within the case idleState block, and all code related to the state WalkForward will be included within the case walkForwardState block.

If we play the scene, we may notice that the zombie rotates around the x and z axes when near the player; its y position also changes over time. To correct this issue, let's add the following code at the end of the function Update:

    transform.position.y = -0.5;
    transform.rotation.x = 0.0;
    transform.rotation.z = 0.0;

We now need to detect whether the zombie can see the player, or detect its presence within a radius of two meters (that is, the zombie would hear the player if he/she is within two meters). This can be achieved using two techniques: by calculating the distance between the zombie and the player, and by casting a ray from the zombie and detecting whether the player is in front of the zombie. If this is the case, the zombie will start walking toward the player.

We need to calculate the distance between the player and the zombie by adding the following code to the script controlZombie, at the start of the function Update, before the switch statement:

    var distance:float = Vector3.Distance(transform.position, playerTransform.position);

In the previous code, we create a variable labeled distance and initialize it with the distance between the player and the zombie. This is achieved using the built-in function Vector3.Distance.

Now that the distance is calculated (and updated in every frame), we can implement the code that will serve to detect whether the player is near or in front of the zombie. Open the script entitled controlZombie, and add the following lines to the function Update within the block of instructions for the Idle state, so that it looks as follows:

    case idleState:
        if ((Physics.Raycast(Vector3(transform.position.x, transform.position.y + 0.5, transform.position.z), transform.forward, hit, 40)
            && hit.collider.gameObject.tag == "Player") || distance < 2.0f)
        {
            anim.SetBool("walking", true);
        }
        break;

In the previous lines of code, a ray or ray cast is created. It is cast forward from the zombie, 0.5 meters above the ground and over 40 meters. Thanks to the variable hit, we read the tag of the object that is colliding with our ray and check whether this object is the player.
If this is the case, the parameter walking is set to true. Effectively, this should trigger a transition to the state WalkForward, as we have defined previously, so that the zombie starts walking toward the player.

Initially, our code was written so that the zombie rotated around to face the player even in the Idle state (using the built-in function LookAt). However, we need to modify this feature so that the zombie only turns around to face the player while it is following the player; otherwise, the zombie will always have the player in sight, even in the Idle state. We can achieve this by deleting the LookAt call from the start of the function Update and adding it to the code for the state WalkForward:

    case walkForwardState:
        transform.LookAt(playerTransform);
        break;

In the previous lines, we check whether the zombie is walking forward, and if this is the case, the zombie rotates in order to look at and follow the player.

Test our code by playing the scene and either moving within two meters of the zombie or in front of it.
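For reference, assembling the fragments from the previous steps gives a controlZombie script along the following lines. This is a consolidated sketch of what we have built so far, not the book's verbatim final listing, so treat it as a checkpoint if your own version misbehaves:

    // controlZombie.js, attached to zombie_hires
    public var walking:boolean = false;
    public var anim:Animator;
    public var currentBaseState:AnimatorStateInfo;
    public var walkForwardState:int = Animator.StringToHash("Base Layer.WalkForward");
    public var idleState:int = Animator.StringToHash("Base Layer.Idle");
    private var playerTransform:Transform;
    private var hit:RaycastHit;

    function Start ()
    {
        anim = GetComponent(Animator);
        playerTransform = GameObject.FindWithTag("Player").transform;
    }

    function Update ()
    {
        currentBaseState = anim.GetCurrentAnimatorStateInfo(0);
        var distance:float = Vector3.Distance(transform.position, playerTransform.position);

        switch (currentBaseState.nameHash)
        {
            case idleState:
                // Start chasing if the zombie sees the player (ray cast) or hears him/her (within two meters)
                if ((Physics.Raycast(Vector3(transform.position.x, transform.position.y + 0.5, transform.position.z),
                    transform.forward, hit, 40) && hit.collider.gameObject.tag == "Player") || distance < 2.0)
                {
                    anim.SetBool("walking", true);
                }
                break;
            case walkForwardState:
                // Only face the player while actually following him/her
                transform.LookAt(playerTransform);
                break;
            default:
                break;
        }

        // Keep the zombie upright and at ground level
        transform.position.y = -0.5;
        transform.rotation.x = 0.0;
        transform.rotation.z = 0.0;
    }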
The Unreal Engine

Packt
26 Aug 2013
8 min read
(For more resources related to this topic, see here.)

Sound cues versus sound wave data

There are two types of sound entries in UDK: sound cues and sound wave data. The simplest difference between the two is that a Sound Wave Data is what we get when we import a sound file into the editor, while a Sound Cue takes one or more sound wave datas and manipulates or combines them using the fairly robust and powerful toolset that UDK gives us in its Sound Cue Editor.

In terms of uses, sound wave datas are primarily used as parts of sound cues. However, when it comes to placing ambient sounds (that is, sounds that are just sort of always playing in the background), sound wave datas and sound cues each offer different situations where they are used. Regardless, they both get represented in the level as Sound Actors, of which there are several types, as shown in the following screenshot:

Types of sound actors

A key element of any well-designed level is ambient sound effects. This requires placing sound actors into the world. Some of these actors use sound wave data and others use sound cues. There are strengths, weaknesses, and specific use cases for all of them, so we'll touch on those presently.

Using sound cues

There are two distinct types of sound actors that call for the use of sound cues specifically. The strength of using sound cues for ambient sounds is that the different sounds can be manipulated in a wider variety of ways. Generally, this isn't necessary, as most ambient sounds are some looping sound used to add sound to things like torches, rippling streams, a subtle blowing wind, or other such environmental instances. The two types of sound actors that use sound cues are Ambient Sounds and Ambient Sound Movables, as shown in the following screenshot:

Ambient sound

As the name suggests, this is a standard ambient sound. It stays exactly where you place it and cannot be moved. These ambient sound actors are generally used for stationary sounds that need some level of randomization or some other form of specific control over multiple sound wave datas.

Ambient sound movable

Functionally very similar to the regular ambient sound, this variation can, as the name suggests, be moved. That means this sort of ambient sound actor should be used in a situation where an ambient sound would be used but needs to be mobile.

The main weakness of the two ambient sound actors that utilize sound cues is that each one you place in a level is set identically to the exact settings within the sound cue. Conversely, ambient sound actors utilizing sound wave datas can be set up on an instance-by-instance basis. What this means is best explained with the help of an example. Let's say we have two fires in our game: one is a small torch, and the other is a roaring bonfire. If we feel that using the same sound for each is what we want to do, then we can place both ambient sound actors utilizing sound wave datas and adjust some settings within each actor to make sure that the bonfire is louder and/or lower pitched. If we wanted this type of variation using sound cues, we would have to make separate sound cues.

Using sound wave data

There are four types of ambient sound actors that utilize sound wave datas directly, as opposed to housed within sound cues. As previously mentioned, the purpose of using ambient sound actors that use sound wave data is to avoid having to create multiple sound cues with only minimally different contents for simple ambient sounds.
This is most readily displayed by the fact that the most commonly used ambient sound actors that use sound wave data are called AmbientSoundSimple and AmbientSoundSimpleToggleable, as shown in the following screenshot:

Ambient sound simple

Ambient sound simple is, as the name suggests, the simplest of the ambient sound actors. It is only used when we need one or more sound wave datas to just repeat on a loop over and over again. Fortunately, most ambient sounds in a level fit this description. In most cases, if we were to go through a level and do an ambient sound pass, all we would need to use are ambient sound simples.

Ambient sound non loop

Ambient sound non loop actors are pretty much the same, functionally, as ambient sound simples. The only difference is, as the name suggests, they don't loop. They will play whatever sound wave data(s) are set in the actor, then delay by a number of seconds that is also set within the actor, and then go through it again. This is useful when we want a sound to play somewhat intermittently, but not on a regular loop.

Ambient sound non looping toggleable

Ambient sound non looping toggleable actors are, for all intents and purposes, the same as the regular ambient sound non loop actors, but they are toggleable. This means, put simply, that they can be turned on and off at will using Kismet. This is obviously useful if we need one of these intermittent sounds to play only when certain things have happened first.

Ambient sound simple toggleable

Ambient sound simple toggleable actors are basically the same as a plain old, run-of-the-mill ambient sound simple, with the difference being that, like the ambient sound non looping toggleable, they can be turned on and off using Kismet.

Playing sounds in Kismet

There are several different ways to play different kinds of sounds using Kismet. Firstly, if we are using a toggleable ambient sound actor, we can simply use a Toggle sequence, which can be found under New Action | Toggle. There is also a Play Sound sequence located under New Action | Sound | Play Sound. Both of these are relatively straightforward in terms of where to plug in the sound cue.

Playing sounds in Matinee

If we need a sound to play as part of a Matinee sequence, the Matinee tool gives us the ability to trigger the sound in question. If we have a Matinee sequence that contains a Director track, we simply right-click and select Add New Sound Track. From here, we just need to have the sound cue we want to use selected in the Content Browser, and then, with the Sound Track selected in the active Matinee window, we place the time marker where we want the sound to play and press Enter. This places a keyframe that will trigger our sound to play, easy as pie. The Matinee tool dialog will look like the following screenshot:

Matinee will only play one sound in a sound track at a time, so if we place multiple sounds and they overlap, they won't play simultaneously. Fortunately, we can have as many separate sound tracks as we need. So if we find ourselves setting up a Matinee and two or more sounds overlap in our sound track, we can just add a second track and move some of our sounds into it.

Now that we've gone over the different ways to directly play and use sound cues, let's look at how to make and manipulate the same sound cues using UDK's surprisingly robust Sound Cue Editor.
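One aside for scripters before we wrap up: everything above happens in the editor, but sounds can also be triggered from UnrealScript, UDK's native scripting language, since every Actor exposes a PlaySound() function that takes a sound cue. The following minimal sketch is hypothetical (the class name and the NoiseCue variable are made up for this example, not assets from this article):

    class AmbientNoisemaker extends Actor
        placeable;

    // Editable in the actor's properties: assign any sound cue from the Content Browser
    var() SoundCue NoiseCue;

    simulated event PostBeginPlay()
    {
        super.PostBeginPlay();

        // Play the assigned cue once from this actor's location
        if (NoiseCue != None)
        {
            PlaySound(NoiseCue);
        }
    }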
Summary

Now that we have a decent grasp of what kinds of sound control UDK offers us and how to manipulate sounds in the editor, we can set about bringing our game to audible life. A quick tip for placing ambient sounds: if you look at something that visually seems like it should be making a noise, like a waterfall, a fire, a flickering light, or whatever else, then it probably should have an ambient sound of some sort placed right on it.

And as always, what we've covered in this article is an overview of some of the bare-bones basics required to get started exploring sounds and soundscapes in UDK. There are plenty of other actors, settings, and things that can be done. So, again, I recommend playing around with anything you can find. Experiment with everything in UDK and you'll learn all sorts of new and interesting things.
Cameras are Rolling

Packt
23 Aug 2013
32 min read
(For more resources related to this topic, see here.)

Keyframing cameras

If you create a diverse and interesting 3D scene, the odds are you are going to want your camera to navigate through it and not just sit in one spot. We are going to start off this article by learning how to keyframe our cameras and have them change their position and angles over the course of our timeline. This is a basic technique you'll use constantly in Cinema 4D.

Getting ready

Open the Keyframing_Cameras.c4d file in your C4D Content Pack and use it with this recipe.

How to do it…

Our scene is a simple setup with a Figure object in the middle of the scene posing for us. Start by placing a Camera in your scene, found in the Create menu under the Camera tab, or inside the Command Palette indicated by the icon of the movie camera. Click on its name in the Object Manager and label it something descriptive, such as Keyframe Camera.

Select the Cameras menu in the Viewer, mouse over to the Use Camera option, and click on the Keyframe Camera object that you created so that it is checked. The view will not change, but now the Viewer will represent the changes in position and rotation made to your camera, and not the Default Camera. You can also click on the small square box with the plus sign on it in the Object Manager; it's next to the traffic light, and when it turns white it means that the camera is the active camera in the Viewer.

Set the position and rotation properties of the camera to zero in the Coordinates tab of the Attribute Manager. You can just enter zero values manually or use the Reset PSR command we learned earlier, found in the Character menu under Commands. Hence, we are starting with a camera that has no tilt to it, and it's positioned exactly at our origin. You should have noticed by now that the Viewer has shifted with these new values, because we have changed the position of our camera.

Take a look at the Animation toolbar and take note of the icons inside the three red circles. These icons control the keyframing of our cameras (and any objects, for that matter) when we change our views and move along in the timeline.

Move the camera up in the Viewer, so that the camera is framing our subject just above the waist. Now, with the playhead at the very beginning of the timeline, click on the red icon farthest to the left of the three, with the image of the key inside it; this is the Record Active Objects command, and it will set a keyframe for all our important coordinate properties in the Attribute Manager. Look at how they all have a red dot filled in next to their values, meaning that they are keyframed.

Now, scrub the playhead forward to frame 60, move your Viewer back so we can see more of our subject, and then hit the red keyframe icon once again. If you play your animation in the timeline from the beginning, you'll see that you have created a camera move; the camera moves backwards and reveals more of your subject over time.

How it works…

The Record Active Objects button in the Animation toolbar is a quick way to set the keyframe information for your camera at whichever frame your playhead is currently positioned. Setting two keyframes moves our camera from one point to another in our scene over time. You could also set the keyframes manually in the Attribute Manager, and similarly, the set keyframe button works for other objects too, not just cameras.

There's more…

There are cleverer camera setups to be made.
Automatic keyframing…do it at your own risk

The middle button in that red group of three enables automatic keyframing. Some people choose to use this, but it also tends to cause headaches if you forget whether it's active or not. Basically, it will set new keyframe values whenever you change your camera view; you can move to new positions in the timeline and change the view, and the keyframes will automatically be set for you. Try it out and see if you like working with it; just remember to turn it on and off when you want to use it.

Moving a camera along a path

You have the ability to draw a spline that represents the path you want your camera to travel, and then dictate the amount of time it takes to complete the movement. This is what you want to do when you want to travel through your scene and focus on multiple objects or alter the perspective throughout your timeline. This recipe shows how to use the Align to Spline feature to control the movement of your camera.

How to do it...

In a new project, create a new camera via the icon in the Command Palette. In the Viewer, change back to Default Camera and turn off your created camera. Next, switch the view of your scene from Perspective to Top under the Cameras menu in the Viewer. You are now looking directly over your scene, without altering the perspective of your created camera.

Under the Primitives tab in the Command Palette, pick three different primitives and scatter them in three random places in your scene so that your objects create a triangle. You can use the zoom feature if you can't see all your objects. Click-and-drag them to different spots in 3D space using the Move Tool so we can move our camera around each of them in space. The exact position of each is not important, just as long as they are spread out from one another.

Click on the Splines icon in the Command Palette and select the Bezier option. You can now click to add points and drag your mouse to extend the bezier curves; the further you drag the mouse, the smoother the curve will be. This line will represent your camera's movement, so draw the spline such that it passes in front of all three of your objects. Make sure you don't get too close to your three primitives, or else they will not fill the frame tastefully.

Because we switched to work in the Top view in Default Camera, our points will only move in the X and Z dimensions. You can't move the points higher and lower, so you must switch between camera views in order to adjust the height (Y position) of the points on your spline. Using the Left and Right camera views will allow you to adjust the height, but not the length in the X dimension, and the Front and Back views will allow you to adjust the height and the length, but not the depth. As noted before, try out different camera views using Default Camera so you can see your scene in these useful perspectives. Adjust the Y position of your points in the Left or Right view so the spline now has a few peaks and valleys.

Now, switch your view back to your created camera and turn off Default Camera. Highlight it in the Object Manager and slide a little over to the Tags menu. Then, under the Cinema 4D Tags submenu, select Align to Spline. A tag like this places a small icon next to the selected object in the Object Manager, where you can select it and open up the tag-specific options.

Take your spline in the Object Manager and drag it into the Spline Path field in the Attribute Manager.
Once inside, the camera snaps into place at one end of your spline. You have now tied the camera's position to the path you drew for it to travel.

Adjust the values in the percentage slider labeled Position in the Tag properties. You will see that your camera moves to different points on your spline. The tag works by setting the end points of your spline as 0% for the start and 100% for the end, with everything else in between. The camera moves between the start and end points, thereby traveling along the spline when keyframed.

To complete the movement, set a keyframe at the start of your timeline with the Position value at 0%, move to the end of the timeline, set the value to 100%, and then set a keyframe. Play back your animation to see your camera move along the spline you drew.

How it works...

Instead of keyframing the position and rotation of the actual camera, we can control how our camera moves by manipulating just the shape of the spline. You can set as many keyframes as you need to move your camera through the completion of your path, and simply sliding them along the timeline allows you to speed up or slow down your camera move.

There's more...

Tweak the spline to get your camera move just right. You can keep moving the points along the spline, or you can use the Move Tool to select the entire spline and move it around that way. Get your camera to pass in front of all the objects in your scene and have them fill the frame nicely. You can also adjust the rotation properties in the Coordinates tab for the camera in the Attribute Manager.

Not just for cameras

The Align to Spline tag can be used on any object, in case you want to control the path that an object or a light moves along.

The Tangential checkbox

You are going to leave the Tangential option unchecked in the Align to Spline tag. This option aligns the camera's movement tangentially along the spline, depending on which axis you pick in the Axis drop-down menu at the bottom. This is more useful when you are aligning objects to a spline, as you may want them to face a specific direction throughout the movement, but it's not very practical for cameras.

If you would compare the last recipe involving moving cameras with the Align to Spline tag to moving an actual physical camera with your hands or on a Steadicam, then this recipe shows you how to place your camera firmly on a tripod and walk away from it. Once we get a camera positioned where we want, we don't want to have to worry about bumping the scroll wheel on the mouse or accidentally switching camera views and losing our nice scene composition. This can happen by accident and frustrate you in the midst of a deadline, and it can be easily prevented.

How to do it…

Start by adding a Cube primitive to your scene from the Command Palette. Then, click once on the Camera icon in the Command Palette to add a camera to your scene. Switch to it in the Viewer under the Cameras menu and the Use Camera submenu, making sure that we are no longer on Default Camera.

Let's say this is the perfect angle and you don't want to lose this shot. Highlight the camera in the Object Manager and then click on Tags, followed by Cinema 4D Tags and the Protection tag. A small orange and black "no" sign appears to the right of your camera in the Object Manager, showing that the Protection tag is active. Now try to move or rotate the camera in the Viewer. Nothing happens: your camera is frozen in place and your shot is preserved.
Click on the Protection tag in the Object Manager to load it into the Attribute Manager and you'll see plenty of options for the tag, which are all new in release 13. You have the ability to pick which parameters the tag protects. Uncheck the boxes for the X, Y, and Z values under the P group (P for position). Now, try moving your camera. You will be able to move the camera's position, but if you try rotating it, it remains locked.

How it works…

The Protection tag is a useful feature that helps preserve your camera shots by making sure you don't adjust the Viewer accidentally and ruin your composition. When you are working with multiple cameras, sometimes you can lose your place, forget which one is active, inadvertently move it, and regret the move you made.

There's more…

This feature has been highlighted because many of the recipes in this article feature the tag. I set up many projects to have specific camera angles so you can follow along with the same images in the article, so be aware of the tags and don't get frustrated if you can't change the look in the Viewer.

Protection for all

The Protection tag can be used on objects too, in case you need to make sure they don't get accidentally nudged or moved. You can still edit an object's properties, just not the position, rotation, or scale.

Undo view

If you do mess up your camera view, there's always a way to get it back. In the Viewer, under the View menu, there's an Undo View command, which will revert your view to how it was before you moved it. This is a convenient fix, but the Protection tag is the ultimate way to prevent any issues with altering your camera's perspective.

You can always keyframe

If you keyframe the position of your camera, it will automatically jump back to the keyframe values, even if you change the view. Because you have specifically told Cinema 4D where you want your camera with these keyframes, it will always revert to this spot until you set new keyframes and tell it otherwise. You have to keyframe all the position and rotation values for this to work, though, because a change to a value that is not keyframed will not revert to any specific value, so be careful.

Using target cameras

In Cinema 4D, you have the option to have your camera target a specific object in your scene, so that it always remains fixed on it regardless of how the camera moves. Because we are designing objects with three dimensions, it makes sense that we learn and develop camera movements that can display our objects from every angle. This recipe demonstrates how we can easily create a camera that rotates around an object and displays the object from any angle we choose throughout the entire movement.

How to do it...

Be sure to check out the first recipe, Keyframing cameras, because this one builds upon that technique. Then, open the Target_Camera.c4d file from the C4D Content Pack to use with this recipe. We'll be recreating the famous shot from the movie The Matrix, where time slows to a crawl and the camera completely circles around Neo as he dodges some bullets, remaining fixed on him the entire time.

Instead of creating a regular camera, hold down the Camera icon in the Command Palette and select the second option for Target Camera. Select your Target Camera by switching to it from Default Camera in the Viewer. By default, Target Camera behaves just like a regular camera, but that will change in a few steps.
When you create a Target Camera, you actually create three things: a regular camera, a tag that turns it into a target camera, and a Null Object named Camera.Target.1 as a default object for it to focus on. Delete the Camera.Target.1 object from the Object Manager, as we won't be using it.

Now, we need to create the circular path for our camera to travel around. This is a perfect use for our Align to Spline tag, so highlight your camera in the Object Manager, click on the Tags menu, go under Cinema 4D Tags, and select the Align to Spline tag.

The camera is going to take a circular path around the subject, so we obviously need a Circle spline to outline this path. Click and hold down on the Splines icon in the Command Palette and select the Circle spline option. The spline is good, except that it has the wrong orientation by default, and we need to adjust the Plane value. Under the Object tab in the Attribute Manager for the Circle spline, change the Plane value from XY to XZ. This will align the spline with the proper orientation towards our figure. Adjust the Radius to 1200 cm so it is a much larger circle, and our figure will fit into the shot.

Next, select the Align to Spline tag on your camera in the Object Manager, and then drag the Circle spline into the Spline Path field. Your camera is now aligned to your spline, but it's not really locked in on anything in particular.

Our camera is aligned to our spline, but we won't be animating the Align to Spline tag. The Position value in the Align to Spline tag only goes from 0% to 100%. This doesn't do us any good if we want to move in the opposite direction or perhaps make more than one rotation; going from 0% to 100% allows for only one clockwise rotation. We are going to animate the actual Circle spline instead, so that we can control the direction and the number of loops our camera travels.

Select your Circle spline and look under the Coordinates tab in the Attribute Manager. Set a keyframe for the R.H value at the beginning of your timeline. 360 degrees represents one full revolution, so move to the very end of your timeline, change the rotation value to 360, and set a keyframe. Play your animation and you will see that the camera circles around one time. But the problem is that our camera isn't focused on our subject.

Click on the Target tag, which is the little target crosshairs icon on your camera in the Object Manager. In the Attribute Manager, under the Object tab, you'll find the empty Target Object field, where you can simply drag The One from the Object Manager into the field and watch as your camera instantly locks onto the target and follows it throughout the length of the animation. You may now unplug yourself from the Matrix.

How it works...

Target cameras allow you to fix the position and rotation of your camera to an object in your scene. This way, whenever you move the camera, it will remain focused on the object you specified. The Target tag can work separately from the Align to Spline tag, but the combination of the two tags helps us create the exact camera move we were looking for.

There's more...

Experiment with changing the coordinates of your Circle spline to get some more interesting camera rotations. Try animating the Y position value of the Circle spline, and you can move between a bird's-eye and a worm's-eye view of your figure while focusing on the center of it. Try adjusting the R.P value and you can get an interesting tilt to go with your rotation instead of it being flat.
Pans and tilts

Using a target camera and animating the Null Object is a good way to simulate pans and tilts, instead of animating the actual camera. Just specify a Null Object to be your Target Object, and the camera will stay in one spot and follow the target object as you animate the null's position from left to right or up and down. Think of it as a camera on a tripod, following whatever you aim it at.

Adding the target manually

If you create a camera and decide later that you want it to be a target camera, just look for the Target tag under the Cinema 4D Tags menu and add it manually; it will work the same way.

Linear keyframes for loops

To get a proper looping animation, you'll need keyframes with linear interpolation. With the Circle spline selected in the Object Manager, click on the actual keyframe in the Animation toolbar at the first frame, and change the Interpolation value from Spline to Linear in the Attribute Manager.

Adjusting focal lengths

In Cinema 4D release 13, the camera settings were overhauled and we now have new features that give us better control, like we see in actual cameras. The next two recipes deal with these features; this one shows how to adjust the focal length of your cameras in order to compress or exaggerate depth. Just like in actual photography, picking the right focal length and lens is crucial to getting the right look for your image.

Getting ready

Use the Focal_Lengths.c4d project with this recipe so you can follow along.

How to do it...

Start by adding a new camera to the scene and switching to it in the Viewer instead of Default Camera. Adjust the camera coordinates so that the X and Y position values and all the rotation values are at 0, and set the Z position value to -1500.

Now, take this camera and duplicate it by holding the Ctrl or command key and dragging the mouse up or down to make a copy on release. Add a Protection tag to each camera so they don't move, and rename one camera Short Focal Length and the other Long Focal Length.

You'll find Focal Length under the Object tab of the camera in the Attribute Manager. The default focal length is pretty average at 36 mm, and you can often leave it at its default and get a good result. But let's create two different focal lengths for cameras that are in the exact same position and see how drastically it changes the look of your scene in Cinema 4D.

Under the Object tab in the Attribute Manager for the Short Focal Length camera, click on the Focal Length menu and select Wide Angle, which changes the Focal Length value to 25. Then, switch cameras in the Object Manager, so the Long Focal Length camera is active in the Attribute Manager. Switch the Focal Length setting to Tele, which is a very big value of 135.

In order to see both cameras at once, change to 2 Views Stacked inside the Arrangement options, under the Panel menu in the Viewer. Activate the Short Focal Length camera in one window and the Long Focal Length one in the other, using Use Camera under the Cameras menu in each of the two views:

Keep in mind that these cameras are in exactly the same position, but the images look completely different. The camera with the shorter focal length lets us see more of the cubes, with the text appearing small; also note the distortion of the cubes that are closer and towards the edges. The camera with the larger focal length has our shot zoomed in way too tight on our text, and it appears that there are far fewer cubes in the scene.
How it works...

This exercise hopefully showed you that it's important to adjust not only the position of your camera, but the focal length as well. A scene with many objects scattered about may require a camera with a shorter focal length, while a larger focal length will allow you to compress and focus on a particular object.

There's more...

Cinema 4D cameras can mimic real cameras, so check out some photography sites for tips on how to compose your scenes and adjust your cameras. The following site from Envato has tons of helpful tips and tutorials for photography and accompanying software: http://photo.tutsplus.com/

Matching your camera to footage

Let's say you create an object in Cinema 4D, and you want it to appear as if it is part of another piece of footage or a still photo. So, you want to use 4D to make something 3D, and then have it mixed into something that is 2D. That just about covers every dimension you can have. This recipe shows you how to prepare a camera setup to match the look of a still photograph, so the elements you design in Cinema 4D will appear as if they belong in the footage you'll be making a composition of.

Getting ready

Use the Oak_Alley photograph provided in the C4D Content Pack with this recipe. The image is in 16:9 (aspect ratio) for use in a 1920 x 1080 HD format, so you should change your composition to be in this resolution as well. Go to the Render menu and click on Render Settings. Under the Output tab, load the preset for HDTV 1080 29.97 under the Film/Video options. Your frame will now have a 16:9 aspect ratio. Whether you use stills or video, the process laid out in this recipe is the same.

How to do it...

Start by creating a new camera and switching to it instead of the Default Camera mode. Next, you'll need to create a background object that will serve as a back wall for our scene. Look in the Create menu, and under Environment you'll see Background. Click on it and add it to your scene.

The background object requires a material on it; otherwise it's invisible. So double-click on the empty space in the Material Manager and a new material will be created. Double-click on the material, and in the Color channel, load the Oak_Alley.jpg image into the Texture field. Click on the small ellipsis button on the left or the bigger bar in the middle; each one will allow you to browse your computer and load the image into the Color channel from the C4D Content Pack.

Now you have the image loaded into the material, so drag the material from the Material Manager onto the Background object in the Object Manager and release the mouse over it in order to place the material on it. Your Viewer should now be filled with the image, and the background will remain in the same place regardless of where you move and position the camera.

The goal now is to match the perspective of your Cinema 4D camera to the perspective in which I took this photograph. We need to move the camera into a spot that lines up the floor grid in the Viewer with the ground in the photo. This is the process of matching camera shots in Cinema 4D to real-life photos and footage. By creating and aligning floors, planes, and ceilings with the perspective and edges in our photo, we can make the camera project our 3D objects as if they are in front of the camera, filming our image.

In the camera's Coordinates tab, keep the X position at 0, but move the camera back in the Z position to -2000. Now, place a Sphere object in your scene from the Primitives palette.
The sphere will be placed at the origin, and moving the camera closer towards it in the Z dimension will make the object appear closer to the camera. Next, we need to make adjustments in order to line up the floor plane in our Cinema 4D scene with that of our photograph. These adjustments will be made to our camera's Y position value, as well as to our rotation values.

You'll notice a grid that's projected as a floor in the X and Z dimensions. This is ideal for matching our perspective with the ground in our scene. If the grid does not appear in your scene by default, you can activate it in the Viewer window by clicking on the Filter menu and selecting Grid. The grid should be placed even with the ground in the photo, so that the lines on the grid are parallel with those in the photo, such as the rows of bricks on the sidewalk.

Move the camera up higher in the Y position to a value of 92. Next, slightly tweak R.H to a very small value of 0.3, and set R.P to a value of 1. This is a good alignment for our scene; it will vary from photo to photo or video to video, but the process remains the same.

The last step, to get an even more accurate setup, is to adjust your camera settings to match the lens of the actual camera. I took this photo with the focal length set to 42 mm. Simply go to the Object tab in the Attribute Manager of your camera and change the value of Focal Length from the default 36 to 42. It's a slight change, but it will provide more convincing renders, especially if the photo or video was shot with special lenses, such as a wide-angle or telephoto lens.

How it works...

Use the grid as if it were the ground or the floor in your photo or video. It can also represent a ceiling if the footage was taken from a lower angle. Add Plane objects and make them perpendicular to your floor if you need to represent walls. Basically, you are trying to mimic the camera that was used in real life to capture your image, so add the faces of the room or environment where you can match them up with the footage you shot. Getting the coordinates and the camera settings as precise as possible will help you build a more convincing scene.

There's more...

This example used a still photograph, which could also be a video shot on a tripod. The point is that the camera is not moving and the perspective is not changing. Matching a camera in Cinema 4D to a moving camera is much trickier. You'll need to learn more about 3D camera tracking in order to get your objects to match up with a camera that changes position and angles in your footage. SynthEyes is a software application capable of handling 3D camera tracking, and it can be used in conjunction with Cinema 4D and other programs.

Don't render the background

When you render your scene with your objects matched up to your camera, you don't actually want the background to be rendered with it. You'd much rather composite your render on top of the original image in a program such as Photoshop or After Effects. Once you are ready to render, go to the Basic tab of the Background object in the Attribute Manager, or just use the traffic light, to change Visible in Render to Off, and then render your scene with an alpha channel so you can composite it elsewhere.

The Physical tab

Each camera in Cinema 4D release 13 now comes with the new Physical tab in the Attribute Manager. These features streamline the previously clunky process and make your cameras behave like real cameras in 3D projects.
Within the Physical tab, we can control all the features that help create realistic depth of field, motion blur, and more with our cameras. This recipe shows you how to adjust all these settings in order to get more than ever before out of your cameras in Cinema 4D.

Getting ready

Open the Card_Table.c4d file and use it while you work through this recipe.

How to do it...

Check out the setup we have here. It's a camera close-up of a card table, and when you play the animation, a pair of cards slides in front of the camera. The camera is set to the Portrait setting, giving it a focal length of 80 mm for a more shallow focus.

Switch your camera over to the Physical tab in the Attribute Manager and you'll notice the options are mostly grayed out. Click on the checkbox for Movie Camera, and then open the Render Settings window from the Command Palette or via the Render menu under Edit Render Settings. You'll see the drop-down menu in the top-left corner, which sets our renderer, currently set to Standard. We need to switch that to Physical in order to take advantage of all the new features in our camera. Also, check the boxes for Depth of Field and Motion Blur. Lastly, change the Sampling Quality value from Low to Medium.

Let's start by getting our depth of field up and running. The setting we'll want to adjust is F-Stop in our Physical tab. The F-Stop on real cameras adjusts the aperture, or how much light is let into the camera. The smaller the F-Stop is, the smaller the depth of field will be, which results in a selective focus that can draw attention to certain objects in the scene.

Lower the F-Stop value to f/2.8, then switch to the Object tab, and take a look at the ways we can define where our focal plane is. The Focus Distance value is a set distance you define for the focal plane, telling the camera at what distance to focus; the objects in front of it and behind it will be out of focus. Or, you can drag-and-drop an object into the Focus Object field, and the camera will automatically adjust and focus on that object.

Drag the Deck of Cards group from the Object Manager into the Focus Object field. You can then do a render preview by hitting Ctrl or command + R. Depending on how fast your computer is, you'll get a rendered sample of your scene in a few seconds. The deck of cards at the back should be in focus, while the dealt cards and the chips should be blurred out:

I'd rather have the dealt cards in focus, with the other objects slightly out of focus. Remove the Deck of Cards group from the Focus Object field by clicking on the small arrow on the right-hand side of the field and then clicking on Clear. Switch to the Default Camera mode in the Viewer from inside the Cameras menu under Use Camera. Rotate the active camera around so you can see the cone of your other camera. Make sure the playhead in the Animation toolbar is on a frame towards the end, where the cards are dealt and sitting in their final position.

Now, let's adjust the Focus Distance value manually to a value of 200. You'll see the end of the focal plane jump and move over on top of the two dealt cards. If you switch back to your main camera and do a render preview with Ctrl or command + R, you'll see that the two dealt cards and the $1 chips that are with them are in focus, and the deck and $5 chips are out of focus, because they lie further away from the focal plane we assigned:

Now, let's figure out how to apply motion blur to our animation.
Move to frame 55, while the Jack of Spades is in motion. Because we checked the Movie Camera box, our motion blur is controlled via the Shutter Angle setting. Movie cameras have two shutters that rotate and capture images; the shutter angle is the gap between the two shutters, and the larger the angle, the more motion blur gets captured. However, increasing it will also cause the image to become overexposed, because more light will be entering, so make sure the Exposure checkbox is deactivated to eliminate this issue.

The Shutter Angle value is set to 180, and that will give us a solid motion blur; we don't need to increase it to notice the result. A render preview won't show the effect, however, because motion blur is not displayed in the Viewer. We'll need to render to the Picture Viewer instead, which can be activated by pressing Shift + R. The Picture Viewer will pop open, and you'll get a frame with your motion blur on the playing card as it slides across the table:

How it works...

We were able to get realistic camera effects, such as depth of field and motion blur, by using the Physical settings in our camera and in the Render Settings too. By enabling these features, we were able to get a nice, shallow focus by adjusting the F-Stop value and positioning the focal plane on our two dealt cards. We activated motion blur in Render Settings and had control over it via our Shutter Angle value. Our final image contains both these effects and results in a more interesting-looking image.

There's more...

These new features are great, and are head and shoulders better than the methods used in previous versions of Cinema 4D to add depth of field and motion blur. But they will certainly increase your render times, and all these effects can also be added in a finishing program such as After Effects.

More effects

There are a few other effects you can add via the Physical tab, such as vignetting, chromatic aberration, and lens distortion. These can all be added after you render in After Effects as well.

Rack focus

Set a Null Object to be your Focus Object, and animate its position within the depth of your camera shot. This will simulate a rack focus, where your focal plane changes during a shot and brings different objects into focus over time.
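As a closing aside, everything in this article is done through the interface, but Cinema 4D also exposes these camera attributes to its built-in Python scripting, so repetitive tweaks can be automated from the Script Manager. The following rough sketch assumes a scene containing a camera object literally named "Camera"; the two attribute IDs used are the standard c4d constants for the Object tab's Focal Length and Focus Distance:

    import c4d

    def main():
        doc = c4d.documents.GetActiveDocument()
        cam = doc.SearchObject("Camera")  # the camera we want to tweak
        if cam is None:
            return

        # Object tab: Focal Length (for example, the 42 mm used when matching footage)
        cam[c4d.CAMERA_FOCUS] = 42.0

        # Object tab: Focus Distance, which the Physical renderer's depth of field uses
        cam[c4d.CAMERAOBJECT_TARGETDISTANCE] = 200.0

        c4d.EventAdd()  # refresh the editor so the change shows up

    if __name__ == '__main__':
        main()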
Romeo and Juliet

Packt
21 Aug 2013
10 min read
(For more resources related to this topic, see here.)

Mission Briefing

To create the Processing sketches for this project, we will need to install the Processing library ttslib. This library is a wrapper around the FreeTTS Java library that helps us to write a sketch that reads out text. We will learn how to change the voice parameters of the kevin16 voice of the FreeTTS package to make our robots' voices distinguishable.

We will also create a parser that is able to read the Shakespeare script and generate text-line objects that allow our script to know which line is read by which robot. A Drama thread will be used to control the text-to-speech objects, and the draw() method of our sketch will print the script on the screen while our robots perform it, just in case one of them forgets a line. Finally, we will use some cardboard boxes and a pair of cheap speakers to create the robots and their stage. The following figure shows how the robots work:

Why Is It Awesome?

Since the 18th century, inventors have tried to build talking machines (with varying success). Talking toys swamped the market in the 1980s and 90s. In every decent Sci-Fi novel, computers and robots are capable of speaking. So how could building talking robots not be awesome? And what could be more appropriate to put these speaking capabilities to the test than performing a Shakespeare play? So as you see, building actor robots is officially awesome, just in case your non-geek family members should ask.

Your Hotshot Objectives

We will split this project into four tasks that will guide you through the creation of the robots from beginning to end. Here is a short overview of what we are going to do:

- Making Processing talk
- Reading Shakespeare
- Adding more actors
- Building robots

Making Processing talk

Since Processing has no speaking capabilities out of the box, our first task is adding an external library using the new Processing Library Manager. We will use the ttslib package, which is a wrapper library around the FreeTTS library. We will also create a short, speaking Processing sketch to check the installation.

Engage Thrusters

Processing can be extended by contributed libraries. Most of these additional libraries can be installed by navigating to Sketch | Import Library… | Add Library..., as shown in the following screenshot:

In the Library Manager dialog, enter ttslib in the search field to filter the list of libraries. Click on the ttslib entry and then on the Install button, as shown in the following screenshot, to download and install the library.

To use the new library, we need to import it to our sketch. We do this by clicking on the Sketch menu and choosing Import Library... and then ttslib.

We will now add the setup() and draw() methods to our sketch. We will leave the draw() method empty for now and instantiate a TTS object in the setup() method. Your sketch should look like the following code snippet:

    import guru.ttslib.*;

    TTS tts;

    void setup() {
      tts = new TTS();
    }

    void draw() {
    }

Now we will add a mousePressed() method to our sketch, which will get called if someone clicks on our sketch window. In this method, we call the speak() method of the TTS object we created in the setup() method:

    void mousePressed() {
      tts.speak("Hello, I am a Computer");
    }

Click on the Run button to start the Processing sketch. A little gray window should appear. Turn on your speakers or put on your headphones, and click on the gray window. If nothing went wrong, a friendly male computer voice named kevin16 should greet you now.
Objective Complete - Mini Debriefing

In steps 1 to 3, we installed an additional library to Processing. The ttslib is a wrapper library around the FreeTTS text-to-speech engine. Then we created a simple Processing sketch that imports the installed library and creates an instance of the TTS class. The TTS objects match the speakers we need in our sketches. In this case, we created only one speaker and added a mousePressed() method that calls the speak() method of our tts object.

Reading Shakespeare

In this part of the project, we are going to create a Drama thread and teach Processing how to read a Shakespeare script. This thread runs in the background and controls the performance. We focus on reading and executing the play in this task, and add the speakers in the next one.

Prepare for Lift Off

Our sketch needs to know which line of the script is read by which robot. So we need to convert the Shakespeare script into a more machine-readable format. For every line of text, we need to know which speaker should read the line. So we take the script and add the letter J and a separation character that is used nowhere else in the script in front of every line our Juliet-Robot should speak, and we add R and the separation letter for every line our Romeo-Robot should speak. After all these steps, our text file looks something like the following:

    R# Lady, by yonder blessed moon I vow,
    R# That tips with silver all these fruit-tree tops --
    J# O, swear not by the moon, the inconstant moon,
    J# That monthly changes in her circled orb,
    J# Lest that thy love prove likewise variable.
    R# What shall I swear by?
    J# Do not swear at all.
    J# Or if thou wilt, swear by thy gracious self,
    J# Which is the god of my idolatry,
    J# And I'll believe thee.

Engage Thrusters

Let's write our parser:

Start a new sketch by navigating to File | New. Add a setup() and a draw() method. Now add the prepared script to the Processing sketch by navigating to Sketch | Add File and selecting the file you just downloaded. Add the following line to your setup() method:

    void setup() {
      String[] rawLines = loadStrings( "romeo_and_juliet.txt" );
    }

If you renamed your text file, change the filename accordingly.

Create a new tab by clicking on the little arrow icon on the right and choosing New Tab. Name the class Line. This class will hold our text lines and the speaker. Add the following code to the tab we just created:

    public class Line {
      String speaker;
      String text;

      public Line( String speaker, String text ) {
        this.speaker = speaker;
        this.text = text;
      }
    }

Switch back to our main tab and add the following highlighted lines of code to the setup() method:

    void setup() {
      String[] rawLines = loadStrings( "romeo_and_juliet.txt" );
      ArrayList lines = new ArrayList();
      for ( int i=0; i<rawLines.length; i++) {
        if (!"".equals(rawLines[i])) {
          String[] tmp = rawLines[i].split("#");
          lines.add( new Line( tmp[0], tmp[1].trim() ));
        }
      }
    }

We have read our text lines and parsed them into the lines array list, but we still need a class that does something with our text lines. So create another tab by clicking on the arrow icon and choosing New Tab from the menu; name it Drama. Our Drama class will be a thread that runs in the background and tells each of the speaker objects to read one line of text.
Add the following lines of code to your Drama class:

public class Drama extends Thread {
  int current;
  ArrayList lines;
  boolean running;

  public Drama( ArrayList lines ) {
    this.lines = lines;
    current = 0;
    running = false;
  }

  public int getCurrent() {
    return current;
  }

  public Line getLine( int num ) {
    if ( num >= 0 && num < lines.size()) {
      return (Line)lines.get( num );
    } else {
      return null;
    }
  }

  public boolean isRunning() {
    return running;
  }
}

Now we add a run() method that gets executed in the background if we start our thread. Since we have no speaker objects yet, we will print the lines on the console and include a little pause after each line.

  public void run() {
    running = true;
    for ( int i=0; i < lines.size(); i++) {
      current = i;
      Line l = (Line)lines.get(i);
      System.out.println( l.text );
      delay( 1 );
    }
    running = false;
  }

Switch back to the main sketch tab and add the highlighted code to the setup() method to create a drama thread object, and then feed it the parsed text lines.

Drama drama;

void setup() {
  String[] rawLines = loadStrings( "romeo_and_juliet.txt" );
  ArrayList lines = new ArrayList();
  for ( int i=0; i<rawLines.length; i++) {
    if (!"".equals(rawLines[i])) {
      String[] tmp = rawLines[i].split("#");
      lines.add( new Line( tmp[0], tmp[1].trim() ));
    }
  }
  drama = new Drama( lines );
}

So far our sketch parses the text lines and creates a Drama thread object. What we need next is a method to start it. So add a mousePressed() method to start the drama thread.

void mousePressed() {
  if ( !drama.isRunning()) {
    drama.start();
  }
}

Now add a little bit of text to the draw() method to tell the user what to do. Add the following code to the draw() method:

void draw() {
  background(255);
  textAlign(CENTER);
  fill(0);
  text( "Click here for Drama", width/2, height/2 );
}

Currently, our sketch window is way too small to contain the text, and we also want to use a bigger font. To change the window size, we simply add the following line to the setup() method:

void setup() {
  size( 800, 400 );
  String[] rawLines = loadStrings( "romeo_and_juliet.txt" );
  ArrayList lines = new ArrayList();
  for ( int i=0; i<rawLines.length; i++) {
    if (!"".equals(rawLines[i])) {
      String[] tmp = rawLines[i].split("#");
      lines.add( new Line( tmp[0], tmp[1].trim() ));
    }
  }
  drama = new Drama( lines );
}

To change the font that is used, we need to tell Processing which font to use. The easiest way to find out the names of the fonts that are currently installed on the computer is to create a new sketch, type the following line, and run the sketch:

println(PFont.list());

Copy one of the font names you like and add the following line to the Romeo and Juliet sketch:

void setup() {
  size( 800, 400 );
  textFont( createFont( "Georgia", 24 ));
  ...

Replace the font name in the code lines with one of the fonts on your computer.

Objective Complete - Mini Debriefing

In this section, we wrote the code that parses a text file and generates a list of Line objects. These objects are then used by a Drama thread that runs in the background as soon as anyone clicks on the sketch window. Currently, the Drama thread prints out the text lines on the console.

In steps 6 to 8, we created the Line class. This class is a very simple, so-called Plain Old Java Object (POJO) that holds our text lines, but it doesn't add any functionality.

The code that controls the performance of our play was created in steps 10 to 12. We created a thread that is able to run in the background, since in the next step we want to be able to use the draw() method and some TTS objects simultaneously.
The code block in step 12 defines a Boolean variable named running, which we use in the mousePressed() method to check whether the drama thread is already running or should be started.

Classified Intel

In step 17, we used the list() method of the PFont class to get a list of installed fonts. This is a very common pattern in Processing. You would use the same approach to get a list of installed midi-interfaces, web-cams, serial-ports, and so on.
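Looking ahead to the Adding more actors task, the Drama thread already exposes everything draw() needs to print the script on screen while the robots perform it, as the Mission Briefing promised. The following is a minimal sketch of that display logic, building only on the getCurrent(), getLine(), and isRunning() methods defined above; the exact layout is an assumption, not the book's final code:

void draw() {
  background(255);
  textAlign(CENTER);
  fill(0);
  if ( drama.isRunning()) {
    // Show the line currently being performed, plus its speaker
    Line l = drama.getLine( drama.getCurrent());
    if ( l != null ) {
      text( l.speaker + ": " + l.text, width/2, height/2 );
    }
  } else {
    text( "Click here for Drama", width/2, height/2 );
  }
}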
Earning Your First Gold

Packt
16 Aug 2013
15 min read
(For more resources related to this topic, see here.)

You need to spend gold to make gold

The adage "You need to spend money to make money" holds true in World of Warcraft as well. A lot of your income will come from manufacturing processes: taking raw materials (or mats) and turning them into finished goods for the end user. To expedite this process, we will be mostly buying the materials from the Auction House, and so you will need a sizeable supply of gold when you're starting out and building inventory. Unlike in the real world, where you can get investors or loans to start up a business, in World of Warcraft you will need to collect every bit of copper yourself. It's not until you have built up this starting capital that you can truly start making significant amounts of gold. There are a few ways to go about this, and some players may be able to build capital while playing as they usually do, but most players will have to work at building this capital. All the methods in this section are designed to earn you gold with time being the only investment; no gold needs to be invested. Once you've built your starting capital, you may find yourself moving away from these sources of revenue to focus on more lucrative markets.

Reselling vendor pets

There are many items that you can sell on the Auction House, but vendor pets will be your best bet when trying to make gold. While your mileage may vary depending on your server's economy, there are certain vendors and items that are usually profitable. As with any other tip, you will have to confirm for yourself that this is profitable on your server. Even before Mists of Pandaria was announced and launched with its pet battles, pets were in great demand. There are several vendors scattered throughout Azeroth that sell pets; you can then resell the pets on the Auction House for a profit. There are two reasons why pets from these vendors can typically be sold for a profit over the vendor price:

Players are too lazy to venture out into the world themselves to visit these vendors and buy the pets
Players don't do their research and, when they see these pets on the Auction House, are unaware that they can get these pets from the vendors

We'll take a look at some examples of where you can get pets from a vendor to sell on the Auction House. Always make sure you're selling these pets for a profit. As with most methods we will go over, this tactic's effectiveness is largely determined by your server's market. Be sure to check market prices for these pets before you go out and collect them (The Undermine Journal (https://theunderminejournal.com/) will help you with market research).

Neutral vendors

Players from either faction will be able to access these vendors and resell their wares on their faction-specific Auction House. One of these vendors is Dealer Rashaad at the Stormspire in Netherstorm; he can be found west of the flight master at Stormspire.
The following screenshot shows the map of Netherstorm, and the player icon marks the location of Dealer Rashaad:

Dealer Rashaad is marked with the <Exotic Creatures> tag and sells the following pets (as presented by the vendor, from left to right, top to bottom):

Parrot Cage (Senegal): Purchase for 40 silver
Cat Carrier (Siamese): Purchase for 60 silver
Undercity Cockroach: Purchase for 50 silver
Crimson Snake: Purchase for 50 silver
Brown Rabbit Crate: Purchase for 10 gold
Red Moth Egg: Purchase for 10 gold
Blue Dragonhawk Hatchling: Purchase for 10 gold
Mana Wyrmling: Purchase for 40 gold

Dealer Rashaad has associations with the faction The Consortium, and so these pets will appear cheaper depending on your reputation with this faction (up to 20 percent off if you have Exalted reputation). Another pet vendor you should visit is Breanni (tagged as <Pet Supplies>) at the Magical Menagerie in the city of Dalaran in Crystalsong Forest, Northrend. The following screenshot shows the map of Dalaran, and the player icon marks the location of Breanni:

Breanni sells the following pets (listed in order of appearance):

Cat Carrier (Calico Cat): Purchase price 50 gold
Albino Snake: Purchase price 50 gold
Obsidian Hatchling: Purchase price 50 gold

Breanni also sells accessories for pets but, since these are Bind on Pickup items, you will not be able to sell them on the Auction House. There are several vendors that sell only one or two pets; while you can sell many of these on the Auction House as well, you will have to determine for yourself if the profit margins are worth the time spent collecting them. The following is a list of these pets and their vendors. Only those pets that can be purchased for gold (or silver) are included in this list, as there are several that can be purchased for other currencies:

Ancona Chicken: Plucky Johnson in Thousand Needles
Tree Frog Box: Flik at the Darkmoon Faire
Wood Frog Box: Flik at the Darkmoon Faire
Winterspring Cub: Michelle De Rum in Winterspring
Parrot Cage (Cockatiel): Narkk in Booty Bay

To make the most of these pet vendors, buy multiples of each pet (at least three to five of each) and store the spares in a bank while you are selling them. Doing this limits the number of repeat trips you have to make and makes it more worth your while. Be sure to empty your bags as much as possible when going out to fetch these pets. While being at Friendly reputation or better with these vendors can fetch you a discount of five to 20 percent, the amount of time it takes to reach these reputations is not really worth the discount. That being said, if you have a character that you know gets a discount with these vendors, use it to collect the pets and get the five to 20 percent discount on the purchase price. If you have trouble finding any of these pets, you can go to http://www.wowhead.com, which is a database of all the items, vendors, achievements, and more. Wowhead has every vendor listed and a map of where to find the vendors.

Pets from in-game events

During most in-game holidays, there is a selection of pets you can buy with the holiday-specific currencies (typically tokens). While it's likely that it won't be worth your time to gather these currencies specifically to buy these pets, only you can decide what is worth your time; you might want to have these currencies anyway. Note that these pets are related to specific events only and are not available all year round.
These pets are as follows:

Captured Flame: 350 Burning Blossoms (Midsummer Fire Festival)
Purchasable at vendors in every major city for every major faction during the course of the Midsummer Fire Festival

Feline Familiar: 150 Tricky Treats
Purchasable at vendors in Undercity and Elwynn Forest as part of the Hallow's End festivities

Sinister Squashling: 150 Tricky Treats
Purchasable at vendors in Undercity and Elwynn Forest as part of the Hallow's End festivities

Spring Rabbit's Foot: 100 Noblegarden Chocolates
Purchasable at various vendors outside every major Alliance or Horde city as part of the Noblegarden festivities

Pint-Sized Pink Pachyderm: 100 Brewfest Tokens
Purchasable at vendors in Dun Morogh, Durotar, Ironforge, and Orgrimmar as part of Brewfest activities

Lunar Lantern and Festival Lantern: 50 Coins of Ancestry each
Purchasable at vendors in Moonglade during the Lunar Festival event

Truesilver Shafted Arrow: 40 Love Tokens
Purchasable at vendors in all major Alliance and Horde cities during the Love is in the Air event

These pets are available in plenty during and shortly after the events, so to get the best price out of your hard work, hold on to the pets for a month or more after the event ends. Some of these pets do drop (not significantly, though) as part of the incentive for Call to Arms, but the largest supply comes from the events themselves. The currencies for these pets are obtained through quests and achievements related to the event and can thus only be obtained while the event is active. Many players choose to do these events anyway (for achievements or for the sake of completion), so you might find that selling these pets is an easy way to make extra gold. If you have any problems with the quests or achievements, the posts on Wowhead will typically have advice, and the writers at WoW Insider (http://wow.joystiq.com) put up guides every year for the events.

Faction-specific vendors and pets

Each faction, Alliance and Horde, has a selection of pets that are specific to it. While you can sell them on your home Auction House, you can often get much better prices on the neutral Auction House, where you can sell to the opposite faction, because this, and faction transfers, which cost money, are the only ways for them to get these pets. Keep in mind, though, that the neutral Auction House charges a higher fee (a 15 percent cut on sales), so be wary when selling and adjust your profit margins accordingly. As with the faction-neutral pets, these pets are broken into two categories:

Purchasable with standard currency (gold, silver, and copper)
Purchasable with other currencies

Alliance

The list of Alliance vendors with pets available for standard currency is as follows:

Donni Anthania <Crazy Cat Lady>, Elwynn Forest:
Cat Carrier (Bombay): Costs 40 silver
Cat Carrier (Cornish Rex): Costs 40 silver
Cat Carrier (Orange Tabby): Costs 40 silver
Cat Carrier (Silver Tabby): Costs 40 silver

Yarlyn Amberstill, Dun Morogh:
Rabbit Crate (Snowshoe): Costs 40 silver

Shylenai <Owl Trainer>, Darnassus:
Great Horned Owl: Costs 50 silver
Hawk Owl: Costs 50 silver

Sixx <Moth Keeper>, The Exodar:
Blue Moth Egg: Costs 50 silver
White Moth Egg: Costs 50 silver
Yellow Moth Egg: Costs 50 silver

Lil Timmy <Boy with kittens>, Stormwind, rare spawn:
Cat Carrier (White Kitten): Costs 60 silver

The White Kitten especially commands a good price on the Auction House as it's difficult to get even for Alliance players, so always keep an eye out for Lil Timmy when you are in Stormwind.
Finally, those who are champions with the playable Alliance races at the Argent Tournament can buy pets there for 40 Champion's Seals. All the vendors can be found in the Alliance tent on the north-east corner of the Argent Tournament grounds. The following screenshot shows the map of Icecrown; the player icon marks the location of the Alliance tent:

The pets available to Alliance players are as follows:

Teldrassil Sproutling, Darnassus
Mechanopeep, Gnomeregan
Ammen Vale Lashling, Exodar
Elwynn Lamb, Stormwind
Dun Morogh Cub, Ironforge
Shimmering Wyrmling, Silver Covenant

The Argent Tournament pets do well on the Alliance Auction House as well, since hardly anyone plays the Argent Tournament any more and unlocking the pets requires a significant amount of work.

Horde

The list of Horde vendors with pets available for standard currency is as follows:

Xan'tish <Snake Vendor>, Orgrimmar:
Black Kingsnake: 50 silver
Brown Snake: 50 silver
Crimson Snake*: 50 silver

Halpa <Prairie Dog Vendor>, Thunder Bluff:
Prairie Dog Whistle: 50 silver

Jeremiah Payson <Cockroach Vendor>, Undercity:
Undercity Cockroach*: 50 silver

Jilanne, Eversong Woods:
Golden Dragonhawk Hatchling: 50 silver
Red Dragonhawk Hatchling: 50 silver
Silver Dragonhawk Hatchling: 50 silver

Pets with an asterisk (*) next to them can also be purchased from a faction-neutral vendor and so may not get the same price on the neutral Auction House. Champions of the Horde's main races can get pets from the Argent Tournament for 40 Champion's Seals. To be able to buy any of these pets, a player must be a champion of the race that the particular pet is associated with. All the vendors for these pets can be found in the Horde tent at the Argent Tournament on the south-east corner of the grounds. The following screenshot shows the map of Icecrown; the player icon marks the location of the Horde tent:

Fishing

Fishing, one of the original secondary professions, is a convenient way to make gold with no real start-up costs. You don't need to level Fishing to make gold with it; you can train it and start fishing up valuable fish straightaway. Without max Fishing, you can still fish from pools around Pandaria; the fish can then be sold on the Auction House to players looking to make buff foods. Make sure to only fish from pools if you don't have max Fishing, as a lower skill level in Fishing makes it almost impossible to pull any fish (except Golden Carp) from open waters. The following screenshot shows a character fishing from a pool in Pandaria:

The Tillers, farming, and Sunsong Ranch

Every player, over the course of leveling, will come across the Sunsong Ranch. The ranch is an excellent source of income for those who don't have the capital or professions to start the gold-making methods discussed later in this book. The ranch is basically 16 plots of soil (you start out with four and then unlock an additional four at every reputation level with the Tillers) where you can plant seeds for vegetables and other items such as Motes of Harmony. With the vegetables, you can either create buff food (requires 600 Cooking) or sell them on the Auction House to other players (who will more than likely be using them to create buff foods themselves).

A copper saved is a copper earned

A surprising amount of gold can be saved by nickel-and-diming everything in the game; when you're putting so much work into building up your pile of gold, you don't want to waste it!
To keep inflation in check and the economy from getting out of hand, World of Warcraft has several gold sinks to try and keep the gold that is leaving the economy balanced with the gold entering it. One of the biggest gold sinks is armor repair, and an untold amount of gold is lost to this necessity every day. Luckily, there are ways to minimize how much gold is syphoned out of your pockets and into the repair vendors'. Vendors that are associated with factions (typically displayed in the way a player's guild would be, that is, in brackets under the name) give players discounts if they have a reputation with the faction. Discounts start at Friendly with 5 percent and continue all the way through to Exalted, which offers a 20 percent discount; and yes, this discount applies to repairs as well. What this means is that players can save up to 20 percent on their repair bills by repairing only at such vendors. Obviously, you can't always get to a vendor you have Exalted reputation with, but simply paying attention to where you repair can save you some serious gold. Make sure to repair before you go out into the world or get into a raid, and resist the temptation of repairing between every wipe.

Guilds can give many perks that provide you with ways to save gold. Guilds of level 9 and higher have a perk that reduces the durability loss your gear experiences when you die or take damage. This means you need to repair your gear less often, which in turn means you have to spend less gold to keep it in good condition. Also, look into items before you buy them; sometimes, even though you can find them on a vendor, you can find them cheaper on the Auction House, or vice versa. As we discussed in the previous section, there's a whole market for buying items from a vendor and selling them on the Auction House, so be an informed buyer! Similarly, there are some items, such as Dust of Disappearance, that are often sold cheaper on the Auction House because it's cheaper to produce them through Inscription than it is to buy them from the vendor.

Costs associated with the Auction House

There are several places where you can reduce the amount of gold you lose when posting items on the Auction House. There are two ways you lose gold when posting items in the Auction House:

Auction House cuts on sales:
5 percent is cut at Alliance and Horde Auction Houses
15 percent is cut at the neutral Auction House

Lost deposits on expired or cancelled auctions

When using the Auction House, it will benefit you greatly to be diligent in how you post. Don't use the neutral Auction House except for selling faction-specific items (the items that only players from one faction can obtain). Attempting to use the neutral Auction House for any other items will be futile; the few sales you actually make will be cut heavily and you'll waste way too much gold on deposits. On that note, it's best to limit how long you post auctions for, as the longer the auction, the higher the deposit required. When an auction expires or is cancelled, you don't get your deposit back, so if you know you will be cancelling a lot of auctions, don't post your auctions for 48 hours; instead, post for 24 hours if you don't have a lot of time to dedicate to the game, or for 12 hours if you can check the Auction House more frequently.
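To put numbers on the difference between the two cuts, here is a quick worked example (the 10 gold sale price is purely illustrative): a pet sold for 10 gold on your faction's Auction House loses the 5 percent cut, or 50 silver, netting you 9 gold 50 silver. The same sale on the neutral Auction House loses 15 percent, or 1 gold 50 silver, netting only 8 gold 50 silver. To take home the same 9 gold 50 silver from the neutral house, you would have to list at roughly 11 gold 18 silver, about 12 percent higher, before even counting the larger deposits.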
Using Specular in Unity

Packt
16 Aug 2013
14 min read
(For more resources related to this topic, see here.)

The specularity of an object's surface simply describes how shiny it is. These types of effects are often referred to as view-dependent effects in the Shader world. This is because, in order to achieve a realistic Specular effect in your Shaders, you need to include the direction the camera or user is facing relative to the object's surface. Specular requires one more component to achieve its visual believability, which is the light direction. By combining these two directions, or vectors, we end up with a hotspot or highlight on the surface of the object, halfway between the view direction and the light direction. This half-way direction is called the half vector and is something new we are going to explore in this article, along with customizing our Specular effects to simulate metallic and cloth Specular surfaces.

Utilizing Unity3D's built-in Specular type

Unity has already provided us with a Specular function we can use for our Shaders. It is called the BlinnPhong Specular lighting model. It is one of the more basic and efficient forms of Specular, which you can find used in a lot of games even today. Since it is already built into the Unity Surface Shader language, we thought it best to start with that first and build on it. You can also find an example in the Unity reference manual, but we will go into a bit more depth with it and explain where the data is coming from and why it is working the way it is. This will help you get a solid grounding in setting up Specular, so that we can build on that knowledge in the future recipes in this article.

Getting ready

Let's start by carrying out the following:

Create a new Shader and give it a name.
Create a new Material, give it a name, and assign the new Shader to its shader property.
Then create a sphere object and place it roughly at world center.
Finally, let's create a directional light to cast some light onto our object.

When your assets have been set up in Unity, you should have a scene that resembles the following screenshot:

How to do it…

Begin by adding the following properties to the Shader's Properties block. We then need to make sure we add the variables to the CGPROGRAM block, so that we can use the data in our new properties inside our Shader's CGPROGRAM block. Notice that we don't need to declare the _SpecColor property as a variable. This is because Unity has already created this variable for us in the built-in Specular model. All we need to do is declare it in our Properties block and it will pass the data along to the surf() function.

Our Shader now needs to be told which lighting model we want to use to light our model with. You have seen the Lambert lighting model and how to make your own lighting model, but we haven't seen the BlinnPhong lighting model yet. So, let's add BlinnPhong to our #pragma statement, and then modify our surf() function to match. These steps are assembled in the sketch that follows the How it works… section below.

How it works…

This basic Specular is a great starting point when you are prototyping your Shaders, as you can get a lot accomplished in terms of writing the core functionality of the Shader, while not having to worry about the basic lighting functions. Unity has provided us with a lighting model that has already taken on the task of creating your Specular lighting for you. If you look into the UnityCG.cginc file found in your Unity install directory under the Data folder, you will notice that you have Lambert and BlinnPhong lighting models available for you to use.
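Since the original code figures are not reproduced here, the following is an assembled sketch of the steps above, under stated assumptions: the shader path and the _MainTex and _SpecPower property names are our own choices, while _SpecColor must keep that exact name so the built-in BlinnPhong model picks it up:

Shader "CookbookShaders/BasicSpecular" {
    Properties {
        _MainTex ("Base (RGB)", 2D) = "white" {}
        _SpecColor ("Specular Color", Color) = (1,1,1,1)
        _SpecPower ("Specular Power", Range(0.01, 1)) = 0.5
    }
    SubShader {
        Tags { "RenderType"="Opaque" }
        CGPROGRAM
        #pragma surface surf BlinnPhong

        sampler2D _MainTex;
        float _SpecPower;
        // _SpecColor is declared by the built-in Specular model,
        // so we only list it in the Properties block above.

        struct Input {
            float2 uv_MainTex;
        };

        void surf (Input IN, inout SurfaceOutput o) {
            half4 c = tex2D(_MainTex, IN.uv_MainTex);
            o.Albedo = c.rgb;
            o.Specular = _SpecPower;   // controls how tight the highlight is
            o.Gloss = 1.0;             // overall highlight intensity
            o.Alpha = c.a;
        }
        ENDCG
    }
    FallBack "Diffuse"
}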
The moment you compile your Shader with #pragma surface surf BlinnPhong, you are telling the Shader to utilize the BlinnPhong lighting function in the UnityCG.cginc file, so that we don't have to write that code over and over again. With your Shader compiled and no errors present, you should see a result similar to the following screenshot:

Creating a Phong Specular type

The most basic and performance-friendly Specular type is the Phong Specular effect. It is the calculation of the light direction reflecting off the surface compared to the user's view direction. It is a very common Specular model used in many applications, from games to movies. While it isn't the most realistic in terms of accurately modeling the reflected Specular, it gives a great approximation that performs well in most situations. Plus, if your object is further away from the camera and a very accurate Specular isn't needed, this is a great way to provide a Specular effect on your Shaders. In this recipe, we will cover how to implement the per-vertex version of the effect, and also see how to implement the per-pixel version using some new parameters in the surface Shader's Input struct. We will see the difference and discuss when and why to use these two different implementations for different situations.

Getting ready

Create a new Shader, Material, and object, and give them appropriate names so that you can find them later. Finally, attach the Shader to the Material and the Material to the object. To finish off your new scene, create a new directional light so that we can see our Specular effect as we code it.

How to do it…

You might be seeing a pattern at this point, but we always like to start out with the most basic part of the Shader-writing process: the creation of properties. So, let's add the properties for the Specular color and power to the Shader. We then have to make sure to add the corresponding variables to our CGPROGRAM block inside our SubShader block.

Now we have to add our custom lighting model so that we can compute our own Phong Specular. Don't worry if it doesn't make sense at this point; we will cover each line of code in the next section. Finally, we have to tell the CGPROGRAM block that it needs to use our custom lighting function instead of one of the built-in ones, by changing the #pragma statement to reference our custom function. A sketch of the resulting Shader follows the How it works… section below.

The following screenshot demonstrates the result of our custom Phong lighting model using our own custom reflection vector:

How it works…

Let's break down the lighting function by itself, as the rest of the Shader should be pretty familiar to you at this point. We simply start by using a lighting function signature that gives us the view direction. Remember that Unity has given you a set of lighting functions that you can use, but in order to use them correctly you have to have the same arguments they provide. Refer to the following table, or go to http://docs.unity3d.com/Documentation/Components/SL-SurfaceShaderLighting.html:

Not view-dependent: half4 Lighting<name you choose> (SurfaceOutput s, half3 lightDir, half atten);
View-dependent: half4 Lighting<name you choose> (SurfaceOutput s, half3 lightDir, half3 viewDir, half atten);

In our case, we are doing a Specular Shader, so we need the view-dependent lighting function structure, and we declare our function with the view-dependent signature. This tells the Shader that we want to create our own view-dependent Shader.
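Here is a sketch of the complete custom Phong setup described in the preceding steps; the LightingPhong function name (which must match the Phong suffix in the #pragma statement) and the _SpecularColor and _SpecPower property names are assumptions consistent with the discussion that follows:

Shader "CookbookShaders/PhongSpecular" {
    Properties {
        _MainTex ("Base (RGB)", 2D) = "white" {}
        _SpecularColor ("Specular Color", Color) = (1,1,1,1)
        _SpecPower ("Specular Power", Range(0.1, 60)) = 3
    }
    SubShader {
        Tags { "RenderType"="Opaque" }
        CGPROGRAM
        #pragma surface surf Phong

        sampler2D _MainTex;
        float4 _SpecularColor;
        float _SpecPower;

        struct Input {
            float2 uv_MainTex;
        };

        inline half4 LightingPhong (SurfaceOutput s, half3 lightDir, half3 viewDir, half atten)
        {
            // Diffuse term: 1 when the normal faces the light, -1 when facing away
            float diff = dot(s.Normal, lightDir);

            // Bend the normal towards the light to approximate the reflection vector
            float3 reflectionVector = normalize(2.0 * s.Normal * diff - lightDir);

            // Compare the reflection with the view direction, sharpen with _SpecPower
            float spec = pow(max(0, dot(reflectionVector, viewDir)), _SpecPower);
            float3 finalSpec = _SpecularColor.rgb * spec;

            half4 c;
            c.rgb = (s.Albedo * _LightColor0.rgb * max(0, diff)) * atten
                    + (_LightColor0.rgb * finalSpec);
            c.a = 1.0;
            return c;
        }

        void surf (Input IN, inout SurfaceOutput o) {
            half4 c = tex2D(_MainTex, IN.uv_MainTex);
            o.Albedo = c.rgb;
            o.Alpha = c.a;
        }
        ENDCG
    }
    FallBack "Diffuse"
}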
Always make sure that your lighting function name is the same in your lighting function declaration and the #pragma statement, or Unity will not be able to find your lighting model. The lighting function then begins by declaring the usual Diffuse component by dotting the vertex normal with the light direction, or vector. This will give us a value of 1 when a normal on the model is facing towards the light, and a value of -1 when facing away from the light direction. We then calculate the reflection vector by taking the vertex normal, scaling it by 2.0 and by the diff value, then subtracting the light direction from it. This has the effect of bending the normal towards the light; so as a vertex normal points away from the light, it is forced to look at the light. Refer to the following screenshot for a more visual representation. The script that produces this debug effect is included at the book's support page at www.packtpub.com/support.

Then all we have left to do is to create the final spec value and color. To do this, we dot the reflection vector with the view direction and take it to a power of _SpecPower. Finally, we just multiply the _SpecularColor.rgb value over the spec value to get our final Specular highlight. The following screenshot displays the final result of our Phong Specular calculation isolated out in the Shader:

Creating a BlinnPhong Specular type

Blinn is another, more efficient way of calculating and estimating specularity. It is done by getting the half vector from the view direction and the light direction. It was brought into the world of Cg by a man named Jim Blinn. He found that it was much more efficient to just get the half vector instead of calculating our own reflection vectors; it cut down on both code and processing time. If you look at the built-in BlinnPhong lighting model included in the UnityCG.cginc file, you will notice that it is using the half vector as well, hence the reason why it is named BlinnPhong. It is just a simpler version of the full Phong calculation.

Getting ready

This time, instead of creating a whole new scene, let's just use the objects and scene we have, and create a new Shader and Material and name them BlinnPhong. Once you have a new Shader, double-click on it to launch MonoDevelop, so that we can start to edit our Shader.

How to do it…

First, we need to add our own properties to the Properties block, so that we can control the look of the Specular highlight. Then, we need to make sure that we have created the corresponding variables inside our CGPROGRAM block, so that we can access the data from our Properties block inside our subshader. Now it's time to create our custom lighting model that will process our Diffuse and Specular calculations. To complete our Shader, we will need to tell our CGPROGRAM block to use our custom lighting model rather than a built-in one, by modifying the #pragma statement; see the sketch following the How it works… section below. The following screenshot demonstrates the results of our BlinnPhong lighting model:

How it works…

The BlinnPhong Specular is almost exactly like the Phong Specular, except that it is more efficient because it uses less code to achieve almost the same effect. You will find this approach nine times out of ten in today's modern Shaders, as it is easier to code and lighter on Shader performance. Instead of calculating our own reflection vector, we simply get the vector halfway between the view direction and the light direction, basically simulating the reflection vector.
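The following is a sketch of such a half-vector lighting function, named LightingCustomBlinnPhong to match a #pragma surface surf CustomBlinnPhong statement; the _SpecularColor and _SpecPower property names are the same assumptions as in the previous recipe:

inline half4 LightingCustomBlinnPhong (SurfaceOutput s, half3 lightDir, half3 viewDir, half atten)
{
    // Half vector: a cheap stand-in for the true reflection vector
    float3 halfVector = normalize(lightDir + viewDir);

    float diff = max(0, dot(s.Normal, lightDir));

    // Dot the normal with the half vector and sharpen with _SpecPower
    float nh = max(0, dot(s.Normal, halfVector));
    float3 spec = pow(nh, _SpecPower) * _SpecularColor.rgb;

    half4 c;
    c.rgb = (s.Albedo * _LightColor0.rgb * diff + _LightColor0.rgb * spec) * atten;
    c.a = s.Alpha;
    return c;
}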
It has actually been found that this approach is more physically accurate than the last approach, but we thought it necessary to show you all the possibilities. So to get the half vector, we simply need to add the view direction and the light direction together and normalize the result, as shown in the halfVector line of the preceding sketch. Then, we simply need to dot the vertex normal with that new half vector to get our main Specular value. After that, we just take it to a power of _SpecPower and multiply it by the Specular color variable. It's much lighter on the code and much lighter on the math, but still gives us a nice Specular highlight that will work for a lot of real-time situations.

Masking Specular with textures

Now that we have taken a look at how to create a Specular effect for our Shaders, let's start to look into the ways in which we can modify our Specular and gain more artistic control over its final visual quality. In this next recipe, we will look at how we can use textures to drive our Specular and Specular power attributes. The technique of using Specular textures is seen in most modern game development pipelines because it allows the 3D artists to control the final visual effect on a per-pixel basis. This provides us with a way in which we can have a matte surface and a shiny surface all in one Shader; or, we can drive the width of the Specular or the Specular power with another texture, to have one surface with a broad Specular highlight and another surface with a very sharp, tiny highlight. There are many effects one can achieve by mixing Shader calculations with textures, and giving artists the ability to control their Shader's final visual effect is key to an efficient pipeline. Let's see how we can use textures to drive our Specular lighting models. This recipe will introduce you to some new concepts, such as creating your own Input struct, and learning how the data is passed around from the Output struct, to the lighting function, to the Input struct, and to the surf() function. Understanding the flow of data between these core Surface Shader elements is key to a successful Shader pipeline.

Getting ready

We will need a new Shader, Material, and another object to apply our Shader and Material to. With the Shader and Material connected and assigned to your object in your scene, double-click the Shader to bring it up in MonoDevelop. We will also need a Specular texture to use. Any texture will do as long as it has some nice variation in colors and patterns. The following screenshot shows the textures we are using for this recipe:

How to do it…

First, let's populate our Properties block with some new properties, and add the corresponding variables to the subshader just after the #pragma statement, so that we can access the data from the properties in our Properties block.

Now we have to add our own custom Output struct. This will allow us to store more data for use between our surf function and our lighting model. Don't worry if this doesn't make sense just yet; we will cover the finer details of this Output struct in the next section of the article. It is placed just after the variables in the SubShader block.

Just after the Output struct, we need to add our custom lighting model. In this case, we have a custom lighting model called LightingCustomPhong. The pieces of this recipe are assembled in the sketch at the end of this recipe.
In order for our custom lighting model to work, we have to tell the SubShader block which lighting model we want to use, so we modify the #pragma statement so that it loads our custom lighting model. Since we are going to be using a texture to modify the values of our base Specular calculation, we need to store another set of UVs for that texture specifically. This is done inside the Input struct by placing the word uv in front of the name of the variable that is holding the texture.

To finish off the Shader, we just need to modify our surf() function so that it passes the texture information to our lighting model function, letting us use the pixel values of the texture to modify our Specular values in the lighting model function.

The following screenshot shows the result of masking our Specular calculations with a color texture and its channel information. We now have a nice variation in Specular over the entire surface, instead of just a global value for the Specular:
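As promised, here is an assembled sketch of the data flow this recipe describes; the _SpecularTex property name, the mask channels used, and the SurfaceCustomOutput field names are assumptions, not the book's exact listing:

sampler2D _MainTex;
sampler2D _SpecularTex;
float _SpecPower;

// Custom Output struct: an extra SpecularColor slot carries the
// per-pixel mask from surf() to the lighting function
struct SurfaceCustomOutput {
    fixed3 Albedo;
    fixed3 Normal;
    fixed3 Emission;
    fixed3 SpecularColor;
    half Specular;
    fixed Gloss;
    fixed Alpha;
};

struct Input {
    float2 uv_MainTex;
    float2 uv_SpecularTex;   // second UV set, generated by the uv prefix
};

inline half4 LightingCustomPhong (SurfaceCustomOutput s, half3 lightDir, half3 viewDir, half atten)
{
    float diff = max(0, dot(s.Normal, lightDir));
    float3 reflectionVector = normalize(2.0 * s.Normal * diff - lightDir);

    // The per-pixel mask drives both the highlight color and its width
    float spec = pow(max(0, dot(reflectionVector, viewDir)), s.Specular);
    float3 finalSpec = s.SpecularColor * spec;

    half4 c;
    c.rgb = (s.Albedo * _LightColor0.rgb * diff + _LightColor0.rgb * finalSpec) * atten;
    c.a = s.Alpha;
    return c;
}

void surf (Input IN, inout SurfaceCustomOutput o)
{
    half4 c = tex2D(_MainTex, IN.uv_MainTex);
    half4 mask = tex2D(_SpecularTex, IN.uv_SpecularTex);

    o.Albedo = c.rgb;
    o.SpecularColor = mask.rgb;          // tint the highlight per pixel
    o.Specular = mask.r * _SpecPower;    // drive the highlight width per pixel
    o.Alpha = c.a;
}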
Using Cameras

Packt
16 Aug 2013
11 min read
(For more resources related to this topic, see here.)

Creating a picture-in-picture effect

Having more than one viewport displayed can be useful in many situations. For example, you might want to show simultaneous events going on in different locations, or maybe you want to have a separate window for hot-seat multiplayer games. Although you could do it manually by adjusting the Normalized Viewport Rect parameters on your camera, this recipe includes a series of extra preferences to make it more independent from the user's display configuration.

Getting ready

For this recipe, we have prepared a package named basicLevel containing a scene. The package is in the 0423_02_01_02 folder.

How to do it...

To create a picture-in-picture display, just follow these steps:

Import the basicLevel package into your Unity project.

In the Project view, open basicScene, inside the folder 02_01_02. This is a basic scene featuring a directional light, a camera, and some geometry.

Add the Camera option to the scene through the Create drop-down menu on top of the Hierarchy view, as shown in the following screenshot:

Select the camera you have created and, in the Inspector view, set its Depth to 1.

In the Project view, create a new C# script and rename it PictureInPicture.

Open your script and replace everything with the following code:

using UnityEngine;

public class PictureInPicture : MonoBehaviour
{
    public enum HorizontalAlignment { left, center, right };
    public enum VerticalAlignment { top, middle, bottom };

    public HorizontalAlignment horizontalAlignment = HorizontalAlignment.left;
    public VerticalAlignment verticalAlignment = VerticalAlignment.top;

    public enum ScreenDimensions { pixels, screen_percentage };
    public ScreenDimensions dimensionsIn = ScreenDimensions.pixels;

    public int width = 50;
    public int height = 50;
    public float xOffset = 0f;
    public float yOffset = 0f;
    public bool update = true;

    private int hsize, vsize, hloc, vloc;

    void Start()
    {
        AdjustCamera();
    }

    void Update()
    {
        if (update)
            AdjustCamera();
    }

    void AdjustCamera()
    {
        // Viewport size, either in raw pixels or as a percentage of the screen
        if (dimensionsIn == ScreenDimensions.screen_percentage)
        {
            hsize = Mathf.RoundToInt(width * 0.01f * Screen.width);
            vsize = Mathf.RoundToInt(height * 0.01f * Screen.height);
        }
        else
        {
            hsize = width;
            vsize = height;
        }

        // Horizontal position, offset from the chosen alignment
        if (horizontalAlignment == HorizontalAlignment.left)
        {
            hloc = Mathf.RoundToInt(xOffset * 0.01f * Screen.width);
        }
        else if (horizontalAlignment == HorizontalAlignment.right)
        {
            hloc = Mathf.RoundToInt((Screen.width - hsize) - (xOffset * 0.01f * Screen.width));
        }
        else
        {
            hloc = Mathf.RoundToInt(((Screen.width * 0.5f) - (hsize * 0.5f)) - (xOffset * 0.01f * Screen.width));
        }

        // Vertical position, offset from the chosen alignment
        if (verticalAlignment == VerticalAlignment.top)
        {
            vloc = Mathf.RoundToInt((Screen.height - vsize) - (yOffset * 0.01f * Screen.height));
        }
        else if (verticalAlignment == VerticalAlignment.bottom)
        {
            vloc = Mathf.RoundToInt(yOffset * 0.01f * Screen.height);
        }
        else
        {
            vloc = Mathf.RoundToInt(((Screen.height * 0.5f) - (vsize * 0.5f)) - (yOffset * 0.01f * Screen.height));
        }

        camera.pixelRect = new Rect(hloc, vloc, hsize, vsize);
    }
}

In case you haven't noticed, we are not achieving percentages by dividing numbers by 100, but rather by multiplying them by 0.01. The reason behind that is performance: computer processors are faster at multiplying than dividing.

Save your script and attach it to the new camera that you created previously.

Uncheck the new camera's Audio Listener component and change some of the PictureInPicture parameters: change Horizontal Alignment to Right, Vertical Alignment to Top, and Dimensions In to pixels.
Leave XOffset and YOffset as 0, change Width to 400 and Height to 200, as shown below:

Play your scene. The new camera's viewport should be visible on the top right of the screen:

How it works...

Our script calculates and applies the camera's viewport rectangle (through its pixelRect property), thus resizing and positioning the viewport according to the user's preferences.

There's more...

The following are some aspects of your picture-in-picture you could change.

Making the picture-in-picture proportional to the screen's size

If you change the Dimensions In option to screen_percentage, the viewport size will be based on the actual screen's dimensions instead of pixels.

Changing the position of the picture-in-picture

Vertical Alignment and Horizontal Alignment can be used to change the viewport's origin. Use them to place it where you wish.

Preventing the picture-in-picture from updating on every frame

Leave the Update option unchecked if you don't plan to change the viewport position in running mode. Also, it's a good idea to leave it checked when testing and then uncheck it once the position has been decided and set up.

See also

The Displaying a mini-map recipe.

Switching between multiple cameras

Choosing from a variety of cameras is a common feature in many genres: race sims, sports sims, tycoon/strategy, and many others. In this recipe, we will learn how to give players the ability to choose from many cameras using their keyboard.

Getting ready

In order to follow this recipe, we have prepared a package containing a basic level named basicScene. The package is in the folder 0423_02_01_02.

How to do it...

To implement switchable cameras, follow these steps:

Import the basicLevel package into your Unity project.

In the Project view, open basicScene from the 02_01_02 folder. This is a basic scene featuring a directional light, a camera, and some geometry.

Add two more cameras to the scene. You can do it through the Create drop-down menu on top of the Hierarchy view. Rename them cam1 and cam2.

Change the cam2 camera's position and rotation so it won't be identical to cam1.

Create an Empty game object by navigating to Game Object | Create Empty. Then, rename it Switchboard.

In the Inspector view, disable the Camera and Audio Listener components of both cam1 and cam2.

In the Project view, create a new C# script. Rename it CameraSwitch and open it in your editor.

Open your script and replace everything with the following code:

using UnityEngine;

public class CameraSwitch : MonoBehaviour
{
    public GameObject[] cameras;
    public string[] shortcuts;
    public bool changeAudioListener = true;

    void Update()
    {
        // Check every shortcut key and switch to its camera when released
        for (int i = 0; i < cameras.Length; i++)
        {
            if (Input.GetKeyUp(shortcuts[i]))
                SwitchCamera(i);
        }
    }

    public void SwitchCamera(int index)
    {
        // Enable the selected camera and disable all others
        for (int i = 0; i < cameras.Length; i++)
        {
            if (i != index)
            {
                if (changeAudioListener)
                {
                    cameras[i].GetComponent<AudioListener>().enabled = false;
                }
                cameras[i].camera.enabled = false;
            }
            else
            {
                if (changeAudioListener)
                {
                    cameras[i].GetComponent<AudioListener>().enabled = true;
                }
                cameras[i].camera.enabled = true;
            }
        }
    }
}

Attach CameraSwitch to the Switchboard game object.

In the Inspector view, set both Cameras and Shortcuts size to 3. Then, drag the scene cameras into the Cameras slots, and type 1, 2, and 3 into the Shortcuts text fields, as shown in the next screenshot.

Play your scene and test your cameras.

How it works...

The script is very straightforward. All it does is capture the key pressed and enable its respective camera (and its Audio Listener, in case the Change Audio Listener option is checked).

There's more...
Here are some ideas on how you could try twisting this recipe a bit.

Using a single enabled camera

A different approach to the problem would be keeping all the secondary cameras disabled and assigning their position and rotation to the main camera via a script (you would need to make a copy of the main camera and add it to the list, in case you wanted to save its transform settings).

Triggering the switch from other events

Also, you could change your camera from another game object's script by using a line of code such as the one shown here:

GameObject.Find("Switchboard").GetComponent<CameraSwitch>().SwitchCamera(1);

See also

The Making an inspect camera recipe.

Customizing the lens flare effect

As anyone who has played a game set in an outdoor environment in the last 15 years can tell you, the lens flare effect is used to simulate the incidence of bright lights over the player's field of view. Although it has become a bit overused, it is still very much present in all kinds of games. In this recipe, we will create and test our own lens flare texture.

Getting ready

In order to continue with this recipe, it's strongly recommended that you have access to image editor software such as Adobe Photoshop or GIMP. The source for the lens texture created in this recipe can be found in the 0423_02_03 folder.

How to do it...

To create a new lens flare texture and apply it to the scene, follow these steps:

Import Unity's Character Controller package by navigating to Assets | Import Package | Character Controller. Do the same for the Light Flares package.

In the Hierarchy view, use the Create button to add a Directional Light effect to your scene.

Select your camera and add a Mouse Look component by accessing the Component | Camera Control | Mouse Look menu option.

In the Project view, locate the Sun flare (inside Standard Assets | Light Flares), duplicate it and rename it to MySun, as shown in the following screenshot:

In the Inspector view, click Flare Texture to reveal the base texture's location in the Project view. It should be a texture named 50mmflare. Duplicate the texture and rename it My50mmflare.

Right-click My50mmflare and choose Open. This should open the file (actually a .psd) in your image editor. If you're using Adobe Photoshop, you might see the guidelines for the texture, as shown here:

To create the light rings, create new Circle shapes and add different Layer Effects such as Gradient Overlay, Stroke, Inner Glow, and Outer Glow.

Recreate the star-shaped flares by editing the originals or by drawing lines and blurring them.

Save the file and go back to the Unity Editor.

In the Inspector view, select MySun, and set Flare Texture to My50mmflare.

Select Directional Light and, in the Inspector view, set Flare to MySun.

Play the scene and move your mouse around. You will be able to see the lens flare as the camera faces the light.

How it works...

We have used Unity's built-in lens flare texture as a blueprint for our own. Once applied, the lens flare texture will be displayed when the player looks in the approximate direction of the light.

There's more...

Flare textures can use different layouts and parameters for each element. In case you want to learn more about the Lens Flare effect, check out Unity's documentation at http://docs.unity3d.com/Documentation/Components/class-LensFlare.html.

Making textures from screen content

If you want your game or player to take in-game snapshots and apply them as a texture, this recipe will show you how.
This can be very useful if you plan to implement an in-game photo gallery or display a snapshot of a past key moment at the end of a level (Race Games and Stunt Sims use this feature a lot).

Getting ready

In order to follow this recipe, please import the basicTerrain package, available in the 0423_02_04_05 folder, into your project. The package includes a basic terrain and a camera that can be rotated via a mouse.

How to do it...

To create textures from screen content, follow these steps:

Import the Unity package and open the 02_04_05 scene.

We need to create a script. In the Project view, click on the Create drop-down menu and choose C# Script. Rename it ScreenTexture and open it in your editor.

Open your script and replace everything with the following code:

using UnityEngine;
using System.Collections;

public class ScreenTexture : MonoBehaviour
{
    public int photoWidth = 50;
    public int photoHeight = 50;
    public int thumbProportion = 25;
    public Color borderColor = Color.white;
    public int borderWidth = 2;

    private Texture2D texture;
    private Texture2D border;
    private int screenWidth;
    private int screenHeight;
    private int frameWidth;
    private int frameHeight;
    private bool shoot = false;

    void Start()
    {
        screenWidth = Screen.width;
        screenHeight = Screen.height;
        frameWidth = Mathf.RoundToInt(screenWidth * photoWidth * 0.01f);
        frameHeight = Mathf.RoundToInt(screenHeight * photoHeight * 0.01f);
        texture = new Texture2D(frameWidth, frameHeight, TextureFormat.RGB24, false);
        // A 1x1 texture used to draw the frame borders
        border = new Texture2D(1, 1, TextureFormat.ARGB32, false);
        border.SetPixel(0, 0, borderColor);
        border.Apply();
    }

    void Update()
    {
        if (Input.GetKeyUp(KeyCode.Mouse0))
            StartCoroutine(CaptureScreen());
    }

    void OnGUI()
    {
        // Draw the four borders of the capture frame: top, bottom, left, right
        GUI.DrawTexture(new Rect((screenWidth * 0.5f) - (frameWidth * 0.5f) - borderWidth * 2, ((screenHeight * 0.5f) - (frameHeight * 0.5f)) - borderWidth, frameWidth + borderWidth * 2, borderWidth), border, ScaleMode.StretchToFill);
        GUI.DrawTexture(new Rect((screenWidth * 0.5f) - (frameWidth * 0.5f) - borderWidth * 2, (screenHeight * 0.5f) + (frameHeight * 0.5f), frameWidth + borderWidth * 2, borderWidth), border, ScaleMode.StretchToFill);
        GUI.DrawTexture(new Rect((screenWidth * 0.5f) - (frameWidth * 0.5f) - borderWidth * 2, (screenHeight * 0.5f) - (frameHeight * 0.5f), borderWidth, frameHeight), border, ScaleMode.StretchToFill);
        GUI.DrawTexture(new Rect((screenWidth * 0.5f) + (frameWidth * 0.5f), (screenHeight * 0.5f) - (frameHeight * 0.5f), borderWidth, frameHeight), border, ScaleMode.StretchToFill);

        // Draw the last captured snapshot as a thumbnail on the top-left corner
        if (shoot)
        {
            GUI.DrawTexture(new Rect(10, 10, frameWidth * thumbProportion * 0.01f, frameHeight * thumbProportion * 0.01f), texture, ScaleMode.StretchToFill);
        }
    }

    IEnumerator CaptureScreen()
    {
        yield return new WaitForEndOfFrame();
        texture.ReadPixels(new Rect((screenWidth * 0.5f) - (frameWidth * 0.5f), (screenHeight * 0.5f) - (frameHeight * 0.5f), frameWidth, frameHeight), 0, 0);
        texture.Apply();
        shoot = true;
    }
}

Save your script and apply it to the Main Camera game object.

In the Inspector view, change the values for the Screen Texture component, setting Photo Width and Photo Height to 25 and Thumb Proportion to 75, as shown here:

Play the scene. You will be able to take a snapshot of the screen (and have it displayed on the top-left corner) by clicking the mouse button.

How it works...

Clicking the mouse triggers a function that reads pixels within the specified rectangle and applies them to a texture that is drawn by the GUI.

There's more...

Apart from displaying the texture as a GUI element, you could use it in other ways.
Applying your texture to a material

You can apply your texture to an existing object's material by adding a line similar to GameObject.Find("MyObject").renderer.material.mainTexture = texture; at the end of the CaptureScreen function.

Using your texture as a screenshot

You can encode your texture as a PNG image file and save it. Check out Unity's documentation on this feature at http://docs.unity3d.com/Documentation/ScriptReference/Texture2D.EncodeToPNG.html.
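For example, a minimal sketch of the screenshot idea, added at the end of the CaptureScreen function, could look like the following; the snapshot.png filename is just an illustration:

// Encode the captured texture as PNG bytes and write them
// next to the project's Assets folder (the filename is illustrative).
byte[] bytes = texture.EncodeToPNG();
System.IO.File.WriteAllBytes(Application.dataPath + "/snapshot.png", bytes);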
Using GLEW

Packt
30 Jul 2013
9 min read
(For more resources related to this topic, see here.)

Quick start – using GLEW

You have now installed GLEW successfully and configured your OpenGL project in Visual Studio to use it. In this article, you will learn how to use GLEW by playing with a simple OpenGL program that displays a teapot. We will extend this program to render the teapot with toon lighting by using shader programs. To do this, we will use GLEW to set up the OpenGL extensions necessary to use shader programs. This example gives you a chance to experience GLEW by utilizing a popular OpenGL extension.

Step 1 – using an OpenGL program to display a teapot

Consider the following OpenGL program that displays a teapot with a light shining on it:

#include <GL/glut.h>

void initGraphics()
{
    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    const float lightPos[4] = {1, .5, 1, 0};
    glLightfv(GL_LIGHT0, GL_POSITION, lightPos);
    glEnable(GL_DEPTH_TEST);
    glClearColor(1.0, 1.0, 1.0, 1.0);
}

void onResize(int w, int h)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glViewport(0, 0, w, h);
    gluPerspective(40, (float) w / h, 1, 100);
    glMatrixMode(GL_MODELVIEW);
}

void onDisplay()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    gluLookAt(0.0, 0.0, 5.0,
              0.0, 0.0, 1.0,
              0.0, 1.0, 0.0);
    glutSolidTeapot(1);
    glutSwapBuffers();
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE);
    glutInitWindowSize(500, 500);
    glutCreateWindow("Teapot");
    initGraphics();
    glutDisplayFunc(onDisplay);
    glutReshapeFunc(onResize);
    glutMainLoop();
    return 0;
}

Create a new C++ console project in Visual Studio and copy the above code into the source file. On compiling and running this code in Visual Studio, you will see a window with a grey colored teapot displayed inside it, as shown in the screenshot below:

Let us briefly examine this OpenGL program and try to understand it. The main function shown below uses the GLUT API to create an OpenGL context, to create a window to render in, and to set up the display function that is invoked on every frame. Instead of GLUT, you could also use other cross-platform alternatives such as the OpenGL Framework (GLFW) library or the windowing API of your platform.

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE);
    glutInitWindowSize(500, 500);
    glutCreateWindow("Teapot");
    initGraphics();
    glutDisplayFunc(onDisplay);
    glutReshapeFunc(onResize);
    glutMainLoop();
    return 0;
}

Here, the call to glutInit creates an OpenGL context and the calls to glutInitDisplayMode, glutInitWindowSize, and glutCreateWindow help create a window in which to render the teapot. If you examine the initGraphics function, you can see that it enables lighting, creates a light at a given position in 3D space, and sets the background color to white. Similarly, the onResize function sets the size of the viewport based on the size of the rendering window. Passing a pointer to the onResize function as input to glutReshapeFunc ensures that GLUT calls onResize every time the window is resized. And finally, the onDisplay function does the main job of setting the camera and drawing a teapot. Passing a pointer to the onDisplay function as input to glutDisplayFunc ensures that GLUT calls onDisplay every time a frame is rendered.

Step 2 – using OpenGL extensions to apply vertex and fragment shaders

One of the most common uses of GLEW is to use vertex and fragment shader programs in an OpenGL program. These programs can be written using the OpenGL Shading Language (GLSL).
This was standardized in OpenGL 2.0. But most versions of Windows support only OpenGL 1.0 or 1.1. On these operating systems, shader programs can be used only if they are supported by the graphics hardware through OpenGL extensions. Using GLEW is an excellent way to write portable OpenGL programs that use shader programs. The program can be written such that shaders are used when they are supported by the system, and the program falls back on simpler rendering methods when they are not supported.

In this section, we extend our OpenGL program to render the teapot using toon lighting. This is a simple trick to render the teapot using cartoonish colors. We first create two new text files: one for the vertex shader named teapot.vert and another for the fragment shader named teapot.frag. You can create these files in the directory that has your OpenGL source program.

Copy the following code to the teapot.vert shader file:

varying vec3 normal, lightDir;

void main()
{
    lightDir = normalize(vec3(gl_LightSource[0].position));
    normal = normalize(gl_NormalMatrix * gl_Normal);
    gl_Position = ftransform();
}

Do not worry if you do not know GLSL or cannot understand this code. We are using this shader program only as an example to demonstrate the use of OpenGL extensions. This shader code applies the standard transformations on vertices. In addition, it also notes down the direction of the light and the normal of the vertex. These variables are passed to the fragment shader program, which is described next.

Copy the following code to the teapot.frag shader file:

varying vec3 normal, lightDir;

void main()
{
    float intensity;
    vec3 n;
    vec4 color;

    n = normalize(normal);
    intensity = max(dot(lightDir, n), 0.0);

    if (intensity > 0.97)
        color = vec4(1, .8, .8, 1.0);
    else if (intensity > 0.25)
        color = vec4(.8, 0, .8, 1.0);
    else
        color = vec4(.4, 0, .4, 1.0);

    gl_FragColor = color;
}

Again, do not worry if you do not understand this code. This fragment shader program is executed at every pixel that is generated for display. The result of this program is a color, which is used to draw that pixel. This program uses the light direction and the normal passed from the vertex shader program to determine the light intensity at a pixel. Based on the intensity value, it picks one of three possible shades of purple to color the pixel. By employing these shader programs, the teapot is rendered in toon lighting like this:

However, to get this output our OpenGL program needs to be modified to compile and load these shader programs.

Step 3 – including the GLEW header file

To be able to call the GLEW API, you need to include the glew.h header file in your OpenGL code. Make sure it is placed above the include files of gl.h, glext.h, glut.h, or any other OpenGL header files. Also, if you include glew.h, you don't really need to include gl.h or glext.h. This is because GLEW redefines the types and function declarations that are in these OpenGL header files.

#include <GL/glew.h>
#include <GL/glut.h>

Step 4 – initializing GLEW

GLEW should be initialized before calling any of its other API functions. This can be performed by calling the glewInit function. Ensure that this is called after an OpenGL context has been created. For example, if you are using GLUT in your OpenGL program, call glewInit only after a GLUT window has been created.
The code shown below initializes GLEW:

GLenum err = glewInit();
if (GLEW_OK != err)
{
    printf("GLEW init failed: %s!\n", glewGetErrorString(err));
    exit(1);
}
else
{
    printf("GLEW init success!\n");
}

The call to glewInit does the hard work of determining all the OpenGL extensions that are supported on your system. It returns a value of GLEW_OK or GLEW_NO_ERROR if the initialization was successful; otherwise, it returns a different value. For example, if glewInit is called before an OpenGL context has been created, it returns a value of GLEW_ERROR_NO_GL_VERSION. You can find out the cause of a GLEW error by passing the return value of glewInit to the glewGetErrorString function, as shown above. This returns a human-readable string that explains the error.

Step 5 – checking if an OpenGL extension is supported

New or enhanced functionality in the OpenGL API is provided by means of an extension. This typically means that new data types and API functions are added to the OpenGL specification. Details of the name and functionality of any extension can be found in the OpenGL extension registry.

In our example, we want our OpenGL program to be able to use GLSL vertex and fragment shaders. This functionality is provided by the extensions named GL_ARB_vertex_shader and GL_ARB_fragment_shader. These extensions provide functions to create shader objects, set the shader source code, compile it, link it, and use the shaders with an OpenGL program. Some of the functions provided by these extensions are listed below:

glCreateShaderObjectARB();
glShaderSourceARB();
glCompileShaderARB();
glCreateProgramObjectARB();
glAttachObjectARB();
glLinkProgramARB();
glUseProgramObjectARB();

To be able to use these functions in our OpenGL program, we first check whether the extension is enabled on our system. Depending on the graphics hardware and drivers on your system, not every OpenGL extension might be available and usable. For example, most versions of Windows support only OpenGL 1.0 or 1.1, while the drivers supplied by graphics hardware vendors, such as NVIDIA or AMD, might support more recent versions of OpenGL and many OpenGL extensions.

Every OpenGL extension has a name of the form GL_VENDOR_extension_name. The VENDOR may be NV, ATI, APPLE, EXT, ARB, or any such supported vendor name. An extension created by a single vendor is called a vendor-specific extension; if it is created by many vendors, it is called a multivendor extension. If many users find an extension to be a good enhancement, it is promoted to an ARB-approved extension. Such extensions might be integrated into future versions of OpenGL as a core feature.

To check for an extension using GLEW, you check whether a global boolean variable named GLEW_VENDOR_extension_name is set to true. These variables are defined, and their values set, when you initialize GLEW using glewInit. So, to test whether vertex and fragment shaders are supported, we add the following code:

if (!GLEW_ARB_vertex_shader || !GLEW_ARB_fragment_shader)
{
    printf("No GLSL support\n");
    exit(1);
}

In this example, we exit the program if these extensions are not supported. Alternatively, you could write the program so that it switches to a simpler or alternate rendering method if the extension you want is not supported.

Summary

This article provided the details needed to use GLEW with OpenGL code, using a simple example of teapot rendering.
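As a closing illustration of how the five steps fit together, the shader files from Step 2 can be compiled and linked using the ARB entry points listed in Step 5. The following is only a sketch of one possible approach, not code from the original program: the readFile helper is an assumption introduced here, and production code should also check the compile and link status (for example, via glGetObjectParameterivARB) instead of assuming success.

#include <GL/glew.h>
#include <fstream>
#include <sstream>
#include <string>

// Assumed helper: reads a whole text file into a string.
std::string readFile(const char* path)
{
    std::ifstream in(path);
    std::stringstream ss;
    ss << in.rdbuf();
    return ss.str();
}

GLhandleARB compileShader(GLenum type, const char* path)
{
    std::string source = readFile(path);
    const char* text = source.c_str();

    // Create a shader object, hand it the source, and compile it.
    GLhandleARB shader = glCreateShaderObjectARB(type);
    glShaderSourceARB(shader, 1, &text, NULL);
    glCompileShaderARB(shader);
    return shader;
}

// Call once after glewInit() and the extension checks have succeeded.
void useToonShaders()
{
    GLhandleARB vs = compileShader(GL_VERTEX_SHADER_ARB, "teapot.vert");
    GLhandleARB fs = compileShader(GL_FRAGMENT_SHADER_ARB, "teapot.frag");

    // Link both shaders into one program object and activate it.
    GLhandleARB program = glCreateProgramObjectARB();
    glAttachObjectARB(program, vs);
    glAttachObjectARB(program, fs);
    glLinkProgramARB(program);
    glUseProgramObjectARB(program);
}

Calling useToonShaders() once from initGraphics(), after the GLEW initialization and extension checks, would route all subsequent teapot rendering through the toon shaders.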
Material nodes in Cycles
Getting ready

In the description of the following steps, I'll assume that you are starting with a brand new Blender session using the default factory settings; if not, start Blender and just click on the File menu item in the top main header bar to select Load Factory Settings from the pop-up menu.

1. In the upper menu bar, switch from Blender Render to Cycles Render (hovering with the mouse on this button shows the Engine to use for rendering label).
2. Now split the 3D view into two horizontal rows, and change the upper one into the Node Editor window by selecting that menu item from the Editor type button at the left-hand corner of the bottom bar of the window itself. The Node Editor window is, in fact, the window we will use to build our shader by mixing the nodes (it's not the only way, actually, but we'll see this later).
3. Put the mouse cursor in the 3D view and add a plane under the cube (press Shift + A and navigate to Mesh | Plane). Enter edit mode (press Tab), scale the plane 3.5 times bigger (press S, type 3.5, and hit Enter), and leave edit mode (press Tab again). Now move the plane one Blender unit down (press G, then Z, type -1, and then hit Enter).
4. Go to the little icon (Viewport Shading) showing a sphere in the bottom bar of the 3D view and click on it. A menu appears showing different options (Bounding Box, Wireframe, Solid, Texture, Material, and Rendered). Select Rendered from the top of the list and watch your cube being rendered in real time in the 3D viewport.
5. Now you can rotate and/or translate the view or the cube itself, and the view gets updated in real time (the speed of the update is only restricted by the complexity of the scene and by the computing power of your CPU or graphics card).

Let's learn more about this:

1. Select Lamp in the Outliner (by default, a Point lamp).
2. Go to the Object Data window under the Properties panel on the right-hand side of the interface.
3. Under the Nodes tab, click on Use Nodes to activate a node system for the selected light in the scene; this node system is made of an Emission closure connected to a Lamp Output node.
4. Go to the Strength item, which is set to 100.000 by default, and start to increase the value; as the intensity of the lamp increases, you can see the cube and the plane rendered in the viewport getting brighter and brighter, as shown in the following screenshot:

How to do it...

We just prepared the scene and had a first look at one of the most appreciated features of Cycles: the real-time rendered preview. Now let's start with the object's materials:

1. Select the cube to assign the shader to, by left-clicking on the item in the Outliner, or by right-clicking directly on the object in the Rendered viewport (but be aware that in Rendered mode, the object selection outline usually visible around the mesh is not shown because, obviously, it's not renderable).
2. Go to the Material window under the Properties panel: even with the default factory settings, the cube already has a default material assigned (as you can see by navigating to Properties | Material | Surface). In any case, you need to click on the Use Nodes button under the Surface tab to activate the node system, or else check the Use Nodes box in the header of the Node Editor window.
3. As you check the Use Nodes box, the content of the Surface tab changes to show that a Diffuse BSDF shader has been assigned to the cube, and, accordingly, two linked nodes appear inside the Node Editor window: the Diffuse BSDF shader itself, already connected to the Surface input socket of a Material Output node (the Volume input socket does nothing at the moment, as it's there in anticipation of a volumetric feature on the to-do list, and we'll see the Displacement socket later).
4. Put the mouse cursor in the Node Editor window and, by scrolling the mouse wheel, zoom in to the Diffuse BSDF node. Left-click on the Color rectangle: a color wheel appears, where you can select a new color for the shader by clicking on the wheel or by inserting the RGB values (note that there are also a color sampler and an Alpha channel value, although the latter, in this case, doesn't have any visible effect on the object material's color):
5. The cube rendered in the 3D preview changes its material's color in real time. You can even move the cursor around the color wheel and watch the rendered object switch colors accordingly. Set the object's color to a greenish tone by setting its RGB values to 0.430, 0.800, and 0.499, respectively.
6. Go to the Material window and, under the Surface tab, click on the Surface button, which at the moment shows the Diffuse BSDF item. From the pop-up menu, select the Glossy BSDF shader item. The node changes accordingly in the Node Editor window, and so does the cube's material in the Rendered preview, as shown here:

Note that although we just switched one shader node for a different one, the color we set in the former has been kept in the new one; actually, this happens for all values that can be carried over from one node to another.

Now, because a material with a 100 percent matte or reflective surface could hardly exist in the real world, a more correct basic Cycles material should be made by mixing the Diffuse BSDF and Glossy BSDF shaders, blended together by a Mix Shader node, in turn connected to the Material Output node.

7. In the Material window, under the Surface tab, click again on the Surface button (which now shows the Glossy BSDF item) and replace it back with a Diffuse BSDF shader.
8. Put the mouse pointer in the Node Editor window and press Shift + A on the keyboard to make a pop-up menu appear with several items. Move the mouse pointer over the Shader item; it shows one more pop-up where all the shader options are collected. Select one of these shader menu items, for example, the Glossy BSDF item. The shader node is now added to the Node Editor window, although not connected to anything yet (in fact, it's not visible in the Material window but only in the Node Editor window); newly added nodes appear already selected.
9. Again press Shift + A in the Node Editor window and this time add a Mix Shader node. Press G to move it onto the link connecting the Diffuse BSDF node to the Material Output node (you'll probably need to first adjust the position of the two nodes to make room between them). The Mix Shader node gets automatically pasted in between, with the Diffuse node output connected to the first Shader input socket, as shown in the following screenshot:
10. Left-click with the mouse on the green dot output of the Glossy BSDF shader node and drag the link to the second Shader input socket of the Mix Shader node. Release the mouse button now and see the nodes being connected.
Because the blending Fac (factor) value of the Mix Shader node is set by default to 0.500, the two shader components, Diffuse and Glossy, now show on the cube's surface in equal parts, that is, each one at 50 percent.

11. Left-click on the Fac slider with the mouse and slide it to 0.000. The cube's surface now shows only the Diffuse component, because the Diffuse BSDF shader is connected to the first Shader input socket, which corresponds to a Fac value of 0.000.
12. Slide the Fac value to 1.000, and the surface now shows only the Glossy BSDF shader component, which is, in fact, connected to the second Shader input socket, corresponding to a Fac value of 1.000.
13. Set the Fac value to 0.800. The cube now reflects the white plane on its sides, even if blurred, because we have a material that is reflective at 80 percent and matte at 20 percent:
14. Lastly, select the plane, go to the Material window, and click on the New button to assign a default whitish material.

How it works...

So, in its minimal form, a Cycles material is made of a closure (a shader node) connected to the Material Output node; by default, for a new material, the shader node is a Diffuse BSDF with its RGB color set to 0.800, and the result is a matte whitish material (with the Roughness value at 0.000, actually corresponding to a Lambert shader). The Diffuse BSDF node can be replaced by any other one from the available shader list, for example, by a Glossy BSDF shader as in the former cube scene, which produces a totally mirrored surface material.

As we have seen, the Node Editor window is not the only way to build materials; in the Properties panel on the right-hand side of the UI, we have access to the Material window, which is usually divided as follows:

- The material name, user, and datablock tab
- The Surface tab, including, in a vertically ordered column, only the shader nodes added in the Node Editor window and already connected to each other
- The Displacement tab, which we'll see later
- The Settings tab, where we can set the object color as seen in the viewport in non-rendered mode (Viewport Color), the material Pass Index, and a Multiple Importance Sample option

The Material window not only reflects what we do in the Node Editor window and changes accordingly (and vice versa), but it can also be used to change values, to easily switch the closures themselves, and, to some extent, to connect them to other nodes. The Material and Node Editor windows are so closely coupled that neither is the preferred place to build a material; both can be used individually or combined, depending on preference or practical utility. In some cases, it can be very handy to switch a shader from the Surface tab of the Material window (or a texture from the Texture window as well, but we'll see textures later), leaving untouched all the settings and links in the node network. There is no question, by the way, that the Material window can become pretty complex and confusing as a material network grows in complexity, while the graphic appearance of the Node Editor window shows the same network in a much clearer and more comprehensible way.

There's more...

Looking at the Rendered viewport, you'll notice that the image is now quite noisy and that there are white dots in certain areas of the image; these are the infamous fireflies, caused mainly by transparent, luminescent, or glossy surfaces. In our case, they have been introduced into the render by the glossy component.
Follow these steps to avoid them:

1. Go to the Render window under the Properties panel.
2. In the Sampling tab, set Samples to 100 both for Preview and Render (they are set to 10 by default).
3. Set the Clamp value to 1.00 (it's set to 0.00 by default).
4. Go to the Light Paths tab and set the Filter Glossy value to 1.00 as well.

The resulting rendered image is now a lot smoother and noise free. Save the blend file in an appropriate location on your hard drive with a name such as start_01.blend.

The 10 samples set by default are obviously not enough to give a noiseless image, but they are good for a fast preview. We could also leave the Preview samples at their default and increase only the Render value, accepting longer rendering times in exchange for a clean image only in the final render (which can be started, as in Blender Internal, by pressing the F12 key).

By using the Clamp value, we can cut the energy of the light. Internally, Blender converts the image color space to linear, and then re-converts it to RGB, that is, from 0 to 255, for the output. A Clamp value of 1.00 in linear space means that all the image values are limited to a range from 0 to a maximum of 1; values bigger than 1 are not possible, which usually avoids the fireflies problem. Note that Clamp values higher than 1.00 can start to lower the general lighting intensity of the scene.

The Filter Glossy value is exactly what the name says: a filter that blurs the glossy reflections on the surface to reduce noise. Be aware that, even with the same number of samples, the Rendered preview does not always correspond exactly to the final render, both with regard to the noise and to the fireflies. This is mainly due to the fact that the preview-rendered 3D window and the final rendered image usually have very different sizes, and artifacts visible in the final rendered image may not show in a smaller preview-rendered window.

Summary

In this article, we learned how to build a basic Cycles material, add textures, and use lamps or light-emitting objects. Also, we learned how to successfully create a simple scene.
Detailing Environments
Applying materials

As it stands, our current level looks rather... well, bland. I'd say it's missing something to really make it realistic... the walls are all the same! Thankfully, we can use textures to make the walls come to life in a very simple way, bringing us one step closer to that AAA quality that we're going for! Applying materials to our walls in the Unreal Development Kit (UDK) is actually very simple once we know how to do it, which is what we're going to look at now:

1. First, go to the menu bar at the top and access the Content Browser window by navigating to View | Browser Windows | Content Browser.
2. Once in the Content Browser window, make sure that packages are sorted by folder by clicking on the left-hand side button. Once this is done, click on the UDKGame folder in the Packages window. Then type floor master into the top search bar.
3. Click on the M_LT_Floors_BSP_Master material.
4. Close the Content Browser window and then left-click on the floor of our level; if you look closely, you should see that it is now selected. With the floor selected, right-click and select Apply Material : M_LT_Floors_BSP_Master.
5. Now that we have given the floor a material, let's give the platform one as well. Select each of its faces by holding down Ctrl and left-clicking on them individually. Once selected, right-click and select Apply Material : M_LT_Floors_BSP_Master. Another way to select all of the faces would be to right-click on the floor and navigate to Select Surfaces | Adjacent Floors.
6. Now our floor is placed; but if you play the game, you may notice the texture being repeated over and over again, and the texture on the platform being stretched strangely. One of the ways we can rectify this problem is by scaling the texture to fit our needs. With all of the floor and the pieces of the platform selected, navigate to View | Surface Properties. From there, change the Simple field under Scaling to 2.0 and click on the Apply button to its right, which will double the size of our textures.
7. After that, go to Alignment and select Box; click on the Apply button placed below it to align our textures as if the faces that we selected formed a box. This works very well for objects consisting of box-like shapes (our brushes, for instance).
8. Close the Surface Properties window and open up the Content Browser window. Now search for floors organic. Select M_LT_Floors_BSP_Organic15b and close the Content Browser window.
9. Now select one of the floors on the edges with the default texture on them. Then right-click and go to Select Surfaces | Matching Texture. After that, right-click and select Apply Material : M_LT_Floors_BSP_Organic15b.
10. Build the project by navigating to Build | Build All, save the game by going to the Save option within the File menu, and run it by navigating to Play | In Editor.

And with that, we now have a nicely textured world; it is quite a good start towards getting our levels looking as refined as possible.

Summary

This article discussed the role of an environment artist doing a texture pass on the environment. After that, we place meshes to make our level pop with added details. Finally, we add a few more things to make the experience as nice looking as possible.
Data-driven Design
Loading XML files

I have chosen to use XML files because they are so easy to parse. We are not going to write our own XML parser; rather, we will use an open source library called TinyXML. TinyXML was written by Lee Thomason and is available under the zlib license from http://sourceforge.net/projects/tinyxml/.

Once downloaded, the only setup we need to do is to include a few of the files in our project:

tinyxmlerror.cpp
tinyxmlparser.cpp
tinystr.cpp
tinystr.h
tinyxml.cpp
tinyxml.h

Also, at the top of tinyxml.h, add this line of code:

#define TIXML_USE_STL

By doing this, we ensure that we are using the STL versions of the TinyXML functions. We can now go through a little of how an XML file is structured.

Basic XML structure

Here is a basic XML file:

<?xml version="1.0" ?>
<ROOT>
    <ELEMENT>
    </ELEMENT>
</ROOT>

The first line of the file defines the format of the XML file. The second line is our root element; everything else is a child of this element. The third line is the first child of the root element. Now let's look at a slightly more complicated XML file:

<?xml version="1.0" ?>
<ROOT>
    <ELEMENTS>
        <ELEMENT>Hello,</ELEMENT>
        <ELEMENT> World!</ELEMENT>
    </ELEMENTS>
</ROOT>

As you can see, we have now added children to the first child element. You can nest as many children as you like, but without a good structure, your XML file may become very hard to read. If we were to parse the above file, here are the steps we would take:

1. Load the XML file.
2. Get the root element, <ROOT>.
3. Get the first child of the root element, <ELEMENTS>.
4. For each child, <ELEMENT>, of <ELEMENTS>, get the content.
5. Close the file.

Another useful XML feature is the use of attributes. Here is an example:

<ROOT>
    <ELEMENTS>
        <ELEMENT text="Hello,"/>
        <ELEMENT text=" World!"/>
    </ELEMENTS>
</ROOT>

We have now stored the text we want in an attribute named text. When this file is parsed, we would grab the text attribute for each element and store that instead of the content between the <ELEMENT></ELEMENT> tags. This is especially useful for us, as we can use attributes to store lots of different values for our objects. So let's look at something closer to what we will use in our game:

<?xml version="1.0" ?>
<STATES>
    <!--The Menu State-->
    <MENU>
        <TEXTURES>
            <texture filename="button.png" ID="playbutton"/>
            <texture filename="exit.png" ID="exitbutton"/>
        </TEXTURES>
        <OBJECTS>
            <object type="MenuButton" x="100" y="100" width="400" height="100" textureID="playbutton"/>
            <object type="MenuButton" x="100" y="300" width="400" height="100" textureID="exitbutton"/>
        </OBJECTS>
    </MENU>
    <!--The Play State-->
    <PLAY>
    </PLAY>
    <!-- The Game Over State -->
    <GAMEOVER>
    </GAMEOVER>
</STATES>

This is slightly more complex. We define each state in its own element, and within this element we have objects and textures with various attributes. These attributes can be loaded in to create the state. With this knowledge of XML, you can easily create your own file structures if what we cover within this book does not fit your needs.

Implementing Object Factories

We are now armed with a little XML knowledge, but before we move forward, we are going to take a look at object factories. An object factory is a class that is tasked with the creation of our objects.
Essentially, we tell the factory which object we would like it to create, and it goes ahead and creates a new instance of that object and then returns it. We can start by looking at a rudimentary implementation:

GameObject* GameObjectFactory::createGameObject(ID id)
{
    // note: this is pseudocode for illustration; C++ cannot switch on
    // strings, so a real version would need an if/else chain or a
    // lookup container
    switch(id)
    {
        case "PLAYER":
            return new Player();
            break;

        case "ENEMY":
            return new Enemy();
            break;

        // lots more object types
    }
}

This function is very simple. We pass in an ID for the object, and the factory uses a big switch statement to look it up and return the correct object. Not a terrible solution, but also not a particularly good one, as the factory needs to know about each type it creates, and maintaining the switch statement for many different objects would be extremely tedious. We want this factory not to care which type we ask for; it shouldn't need to know all of the specific types we want it to create. Luckily, this is something that we can definitely achieve.

Using Distributed Factories

Through the use of distributed factories, we can make a generic object factory that will create any of our types. Distributed factories allow us to dynamically maintain the types of objects we want our factory to create, rather than hard code them into a function (like in the preceding simple example).

The approach we will take is to have the factory contain a std::map that maps a string (the type of our object) to a small class called Creator, whose only purpose is the creation of a specific object. We will register a new type with the factory using a function that takes a string (the ID) and a Creator class and adds them to the factory's map. We are going to start with the base class for all the Creator types. Create GameObjectFactory.h and declare this class at the top of the file.

#include <iostream>
#include <string>
#include <map>
#include "GameObject.h"

class BaseCreator
{
public:
    virtual GameObject* createGameObject() const = 0;
    virtual ~BaseCreator() {}
};

We can now go ahead and create the rest of our factory and then go through it piece by piece.

class GameObjectFactory
{
public:
    bool registerType(std::string typeID, BaseCreator* pCreator)
    {
        std::map<std::string, BaseCreator*>::iterator it = m_creators.find(typeID);

        // if the type is already registered, do nothing
        if(it != m_creators.end())
        {
            delete pCreator;
            return false;
        }

        m_creators[typeID] = pCreator;
        return true;
    }

    GameObject* create(std::string typeID)
    {
        std::map<std::string, BaseCreator*>::iterator it = m_creators.find(typeID);

        if(it == m_creators.end())
        {
            std::cout << "could not find type: " << typeID << "\n";
            return NULL;
        }

        BaseCreator* pCreator = (*it).second;
        return pCreator->createGameObject();
    }

private:
    std::map<std::string, BaseCreator*> m_creators;
};

This is quite a small class, but it is actually very powerful. We will cover each part separately, starting with the member variable:

std::map<std::string, BaseCreator*> m_creators;

This map holds the important elements of our factory; the functions of the class essentially either add to or read from this map. This becomes apparent when we look at the registerType function:

bool registerType(std::string typeID, BaseCreator* pCreator)

This function takes the ID we want to associate the object type with (as a string), and the creator object for that class. The function then attempts to find the type using the std::map::find function:

std::map<std::string, BaseCreator*>::iterator it = m_creators.find(typeID);

If the type is found, then it is already registered.
The function then deletes the passed-in pointer and returns false:

if(it != m_creators.end())
{
    delete pCreator;
    return false;
}

If the type is not already registered, then it can be assigned to the map and true is returned:

m_creators[typeID] = pCreator;
return true;

As you can see, the registerType function is actually very simple; it is just a way to add types to the map. The create function is very similar:

GameObject* create(std::string typeID)
{
    std::map<std::string, BaseCreator*>::iterator it = m_creators.find(typeID);

    if(it == m_creators.end())
    {
        std::cout << "could not find type: " << typeID << "\n";
        return 0;
    }

    BaseCreator* pCreator = (*it).second;
    return pCreator->createGameObject();
}

The function looks for the type in the same way as registerType does, but this time it checks whether the type was not found (as opposed to found). If the type is not found, we return 0; if the type is found, we use the Creator object for that type to return a new instance of it as a pointer to GameObject.

It is worth noting that the GameObjectFactory class should probably be a singleton. We won't cover how to make it a singleton in this article. Try implementing it yourself, or see how it is implemented in the source code download.

Fitting the factory into the framework

With our factory now in place, we can start altering our GameObject classes to use it. Our first step is to ensure that we have a Creator class for each of our objects. Here is one for Player:

class PlayerCreator : public BaseCreator
{
    GameObject* createGameObject() const
    {
        return new Player();
    }
};

This can be added to the bottom of the Player.h file. Any object we want the factory to create must have its own Creator implementation.

Another addition we must make is to move LoaderParams from the constructor to its own function called load. This removes the need for us to pass the LoaderParams object to the factory itself. We will put the load function into the GameObject base class, as we want every object to have one.

class GameObject
{
public:
    virtual void draw()=0;
    virtual void update()=0;
    virtual void clean()=0;

    // new load function
    virtual void load(const LoaderParams* pParams)=0;

protected:
    GameObject() {}
    virtual ~GameObject() {}
};

Each of our derived classes will now need to implement this load function. The SDLGameObject class will now look like this:

SDLGameObject::SDLGameObject() : GameObject()
{
}

void SDLGameObject::load(const LoaderParams *pParams)
{
    m_position = Vector2D(pParams->getX(), pParams->getY());
    m_velocity = Vector2D(0, 0);
    m_acceleration = Vector2D(0, 0);
    m_width = pParams->getWidth();
    m_height = pParams->getHeight();
    m_textureID = pParams->getTextureID();

    m_currentRow = 1;
    m_currentFrame = 1;
    m_numFrames = pParams->getNumFrames();
}

Our objects that derive from SDLGameObject can use this load function as well; for example, here is the Player::load function:

Player::Player() : SDLGameObject()
{
}

void Player::load(const LoaderParams *pParams)
{
    SDLGameObject::load(pParams);
}

This may seem a bit pointless, but it actually saves us having to pass LoaderParams around everywhere. Without it, we would need to pass LoaderParams through the factory's create function, which would then in turn pass it through to the Creator object. We have eliminated the need for this by having a specific function that handles parsing our loading values. This will make more sense once we start parsing our states from a file.
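Earlier, making GameObjectFactory a singleton was left as an exercise. Here is one minimal sketch of how that could look; this is only an assumption of one possible approach, and the book's own source code download may implement it differently. The TheGameObjectFactory alias matches the name used in the parsing code later in this article.

class GameObjectFactory
{
public:
    static GameObjectFactory* Instance()
    {
        // created on first use, destroyed at program exit
        static GameObjectFactory s_instance;
        return &s_instance;
    }

    // registerType() and create() as shown above
    ...

private:
    GameObjectFactory() {} // prevent direct construction
};

typedef GameObjectFactory TheGameObjectFactory;

With the factory and the load function in place, typical usage could look like the following sketch; the LoaderParams values here are purely illustrative:

// register the type once, for example during game initialization
TheGameObjectFactory::Instance()->registerType("Player", new PlayerCreator());

// later, create an instance by its string ID and load its values
GameObject* pPlayer = TheGameObjectFactory::Instance()->create("Player");
pPlayer->load(new LoaderParams(100, 100, 128, 55, "player", 5));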
We have another issue that needs rectifying: two of our classes, MenuButton and AnimatedGraphic, take an extra parameter in their constructors as well as LoaderParams. To combat this, we will add these values to LoaderParams and give them default values.

LoaderParams(int x, int y, int width, int height, std::string textureID,
             int numFrames, int callbackID = 0, int animSpeed = 0)
    : m_x(x), m_y(y), m_width(width), m_height(height),
      m_textureID(textureID), m_numFrames(numFrames),
      m_callbackID(callbackID), m_animSpeed(animSpeed)
{
}

In other words, if a parameter is not passed in, then the default value will be used (0 in both cases). Rather than passing in a function pointer as MenuButton did, we are using callbackID to decide which callback function to use within a state. We can now start using our factory and parsing our states from an XML file.

Parsing states from an XML file

The file we will be parsing is the following (test.xml in the source code downloads):

<?xml version="1.0" ?>
<STATES>
    <MENU>
        <TEXTURES>
            <texture filename="assets/button.png" ID="playbutton"/>
            <texture filename="assets/exit.png" ID="exitbutton"/>
        </TEXTURES>
        <OBJECTS>
            <object type="MenuButton" x="100" y="100" width="400" height="100" textureID="playbutton" numFrames="0" callbackID="1"/>
            <object type="MenuButton" x="100" y="300" width="400" height="100" textureID="exitbutton" numFrames="0" callbackID="2"/>
        </OBJECTS>
    </MENU>
    <PLAY>
    </PLAY>
    <GAMEOVER>
    </GAMEOVER>
</STATES>

We are going to create a new class that parses our states for us, called StateParser. The StateParser class has no data members; it is to be used once in the onEnter function of a state and then discarded when it goes out of scope. Create a StateParser.h file and add the following code:

#include <iostream>
#include <string>
#include <vector>
#include "tinyxml.h"

class GameObject;

class StateParser
{
public:
    bool parseState(const char* stateFile, std::string stateID,
                    std::vector<GameObject*>* pObjects,
                    std::vector<std::string>* pTextureIDs);

private:
    void parseObjects(TiXmlElement* pStateRoot,
                      std::vector<GameObject*>* pObjects);
    void parseTextures(TiXmlElement* pStateRoot,
                       std::vector<std::string>* pTextureIDs);
};

We have three functions here, one public and two private. The parseState function takes the filename of an XML file as a parameter, along with the current stateID value, a pointer to the std::vector of GameObject* for that state, and a pointer to a vector of the texture IDs loaded for that state.
The StateParser.cpp file will define this function:

bool StateParser::parseState(const char *stateFile, string stateID,
    vector<GameObject *> *pObjects, std::vector<std::string> *pTextureIDs)
{
    // create the XML document
    TiXmlDocument xmlDoc;

    // load the state file
    if(!xmlDoc.LoadFile(stateFile))
    {
        cerr << xmlDoc.ErrorDesc() << "\n";
        return false;
    }

    // get the root element
    TiXmlElement* pRoot = xmlDoc.RootElement();

    // pre declare the states root node
    TiXmlElement* pStateRoot = 0;

    // get this states root node and assign it to pStateRoot
    for(TiXmlElement* e = pRoot->FirstChildElement(); e != NULL; e = e->NextSiblingElement())
    {
        if(e->Value() == stateID)
        {
            pStateRoot = e;
        }
    }

    // pre declare the texture root
    TiXmlElement* pTextureRoot = 0;

    // get the root of the texture elements
    for(TiXmlElement* e = pStateRoot->FirstChildElement(); e != NULL; e = e->NextSiblingElement())
    {
        if(e->Value() == string("TEXTURES"))
        {
            pTextureRoot = e;
        }
    }

    // now parse the textures
    parseTextures(pTextureRoot, pTextureIDs);

    // pre declare the object root node
    TiXmlElement* pObjectRoot = 0;

    // get the root node and assign it to pObjectRoot
    for(TiXmlElement* e = pStateRoot->FirstChildElement(); e != NULL; e = e->NextSiblingElement())
    {
        if(e->Value() == string("OBJECTS"))
        {
            pObjectRoot = e;
        }
    }

    // now parse the objects
    parseObjects(pObjectRoot, pObjects);
    return true;
}

There is a lot of code in this function, so it is worth covering in some depth. We will note the corresponding part of the XML file along with the code we use to obtain it. The first part of the function attempts to load the XML file that is passed into the function:

// create the XML document
TiXmlDocument xmlDoc;

// load the state file
if(!xmlDoc.LoadFile(stateFile))
{
    cerr << xmlDoc.ErrorDesc() << "\n";
    return false;
}

It displays an error to let you know what happened if the XML loading fails. Next we must grab the root node of the XML file:

// get the root element
TiXmlElement* pRoot = xmlDoc.RootElement(); // <STATES>

The rest of the nodes in the file are all children of this root node. We must now get the root node of the state we are currently parsing; let's say we are looking for MENU:

// pre declare the states root node
TiXmlElement* pStateRoot = 0;

// get this states root node and assign it to pStateRoot
for(TiXmlElement* e = pRoot->FirstChildElement(); e != NULL; e = e->NextSiblingElement())
{
    if(e->Value() == stateID)
    {
        pStateRoot = e;
    }
}

This piece of code goes through each direct child of the root node and checks whether its name is the same as stateID. Once it finds the correct node, it assigns it to pStateRoot. We now have the root node of the state we want to parse:

<MENU> // the state's root node

Now that we have a pointer to the root node of our state, we can start to grab values from it. First we want to load the textures from the file, so we look for the <TEXTURES> node among the children of the pStateRoot object we found before:

// pre declare the texture root
TiXmlElement* pTextureRoot = 0;

// get the root of the texture elements
for(TiXmlElement* e = pStateRoot->FirstChildElement(); e != NULL; e = e->NextSiblingElement())
{
    if(e->Value() == string("TEXTURES"))
    {
        pTextureRoot = e;
    }
}

Once the <TEXTURES> node is found, we can pass it into the private parseTextures function (which we will cover a little later):

parseTextures(pTextureRoot, pTextureIDs);

The function then moves on to searching for the <OBJECTS> node and, once found, passes it into the private parseObjects function.
We also pass in the pObjects parameter:

// pre declare the object root node
TiXmlElement* pObjectRoot = 0;

// get the root node and assign it to pObjectRoot
for(TiXmlElement* e = pStateRoot->FirstChildElement(); e != NULL; e = e->NextSiblingElement())
{
    if(e->Value() == string("OBJECTS"))
    {
        pObjectRoot = e;
    }
}

parseObjects(pObjectRoot, pObjects);
return true;

At this point, our state has been parsed. We can now cover the two private functions, starting with parseTextures:

void StateParser::parseTextures(TiXmlElement* pStateRoot, std::vector<std::string> *pTextureIDs)
{
    for(TiXmlElement* e = pStateRoot->FirstChildElement(); e != NULL; e = e->NextSiblingElement())
    {
        string filenameAttribute = e->Attribute("filename");
        string idAttribute = e->Attribute("ID");

        pTextureIDs->push_back(idAttribute); // push into list

        TheTextureManager::Instance()->load(filenameAttribute, idAttribute, TheGame::Instance()->getRenderer());
    }
}

This function gets the filename and ID attributes from each of the texture values in this part of the XML:

<TEXTURES>
    <texture filename="button.png" ID="playbutton"/>
    <texture filename="exit.png" ID="exitbutton"/>
</TEXTURES>

It then adds them to TextureManager:

TheTextureManager::Instance()->load(filenameAttribute, idAttribute, TheGame::Instance()->getRenderer());

The parseObjects function is quite a bit more complicated. It creates objects using our GameObjectFactory and reads from this part of the XML file:

<OBJECTS>
    <object type="MenuButton" x="100" y="100" width="400" height="100" textureID="playbutton" numFrames="0" callbackID="1"/>
    <object type="MenuButton" x="100" y="300" width="400" height="100" textureID="exitbutton" numFrames="0" callbackID="2"/>
</OBJECTS>

The parseObjects function is defined like so:

void StateParser::parseObjects(TiXmlElement *pStateRoot, std::vector<GameObject *> *pObjects)
{
    for(TiXmlElement* e = pStateRoot->FirstChildElement(); e != NULL; e = e->NextSiblingElement())
    {
        int x, y, width, height, numFrames, callbackID, animSpeed;
        string textureID;

        e->Attribute("x", &x);
        e->Attribute("y", &y);
        e->Attribute("width", &width);
        e->Attribute("height", &height);
        e->Attribute("numFrames", &numFrames);
        e->Attribute("callbackID", &callbackID);
        e->Attribute("animSpeed", &animSpeed);

        textureID = e->Attribute("textureID");

        GameObject* pGameObject = TheGameObjectFactory::Instance()->create(e->Attribute("type"));
        pGameObject->load(new LoaderParams(x, y, width, height, textureID, numFrames, callbackID, animSpeed));
        pObjects->push_back(pGameObject);
    }
}

First we get any values we need from the current node. Since XML files are pure text, we cannot simply grab ints or floats from the file. TinyXML has functions with which you can pass in the attribute name and the variable you want to be set. For example:

e->Attribute("x", &x);

This sets the variable x to the value contained within attribute "x". Next comes the creation of a GameObject* using the factory:

GameObject* pGameObject = TheGameObjectFactory::Instance()->create(e->Attribute("type"));

We pass in the value of the type attribute and use that to create the correct object from the factory. After this, we must use the load function of GameObject to set our desired values using the values loaded from the XML file:

pGameObject->load(new LoaderParams(x, y, width, height, textureID, numFrames, callbackID, animSpeed));

And finally we push pGameObject into the pObjects array, which is actually a pointer to the current state's object vector:

pObjects->push_back(pGameObject);
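One caveat worth noting: the two-argument Attribute function leaves the output variable untouched if the attribute is missing from the XML, so a misspelled or absent attribute would leave the local variable uninitialized. TinyXML also provides QueryIntAttribute, which reports whether the read succeeded; the defensive sketch below is an illustration added here, not part of the book's parser:

int x = 0; // sensible default in case the attribute is absent
if(e->QueryIntAttribute("x", &x) != TIXML_SUCCESS)
{
    std::cout << "object is missing attribute 'x', using default\n";
}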
Loading the menu state from an XML file

We now have most of our state loading code in place and can make use of it in the MenuState class. First we must do a little legwork and set up a new way of assigning the callbacks to our MenuButton objects, since this is not something we could pass in from an XML file. The approach we will take is to give any object that wants to make use of a callback an attribute named callbackID in the XML file. Other objects do not need this value, and LoaderParams will use the default value of 0. The MenuButton class will make use of this value and pull it from its LoaderParams, like so:

void MenuButton::load(const LoaderParams *pParams)
{
    SDLGameObject::load(pParams);
    m_callbackID = pParams->getCallbackID();
    m_currentFrame = MOUSE_OUT;
}

The MenuButton class will also need two other functions, one to set the callback function and another to return its callback ID:

void setCallback(void(*callback)()) { m_callback = callback; }
int getCallbackID() { return m_callbackID; }

Next we must create a function to set callbacks. Any state that uses objects with callbacks will need an implementation of this function. The most likely states to have callbacks are menu states, so we will rename our MenuState class to MainMenuState and make MenuState an abstract class that extends from GameState. The class declares a function that sets the callbacks for any items that need it, and it also has a vector of Callback objects as a member; this will be used within the setCallbacks function of each state.

class MenuState : public GameState
{
protected:
    typedef void(*Callback)();

    virtual void setCallbacks(const std::vector<Callback>& callbacks) = 0;

    std::vector<Callback> m_callbacks;
};

The MainMenuState class (previously MenuState) will now derive from this MenuState class:

#include "MenuState.h"
#include "GameObject.h"

class MainMenuState : public MenuState
{
public:
    virtual void update();
    virtual void render();

    virtual bool onEnter();
    virtual bool onExit();

    virtual std::string getStateID() const { return s_menuID; }

private:
    virtual void setCallbacks(const std::vector<Callback>& callbacks);

    // callback functions for menu items
    static void s_menuToPlay();
    static void s_exitFromMenu();

    static const std::string s_menuID;

    std::vector<GameObject*> m_gameObjects;
};

Because MainMenuState now derives from MenuState, it must of course declare and define the setCallbacks function. We are now ready to use our state parsing to load the MainMenuState class. Our onEnter function will now look like this:

bool MainMenuState::onEnter()
{
    // parse the state
    StateParser stateParser;
    stateParser.parseState("test.xml", s_menuID, &m_gameObjects, &m_textureIDList);

    m_callbacks.push_back(0); // callback IDs start from 1, so index 0 stays empty
    m_callbacks.push_back(s_menuToPlay);
    m_callbacks.push_back(s_exitFromMenu);

    // set the callbacks for menu items
    setCallbacks(m_callbacks);

    std::cout << "entering MenuState\n";
    return true;
}

We create a state parser and then use it to parse the current state. We push any callbacks into the m_callbacks array inherited from MenuState.
Now we need to define the setCallbacks function:

void MainMenuState::setCallbacks(const std::vector<Callback>& callbacks)
{
    // go through the game objects
    for(int i = 0; i < m_gameObjects.size(); i++)
    {
        // if they are of type MenuButton then assign a callback
        // based on the id passed in from the file
        if(dynamic_cast<MenuButton*>(m_gameObjects[i]))
        {
            MenuButton* pButton = dynamic_cast<MenuButton*>(m_gameObjects[i]);
            pButton->setCallback(callbacks[pButton->getCallbackID()]);
        }
    }
}

We use dynamic_cast to check whether the object is a MenuButton type; if it is, we do the actual cast and then use the object's callbackID as the index into the callbacks vector to assign the correct function. While this method of assigning callbacks could be seen as not very extendable and could possibly be better implemented, it does have a redeeming feature: it allows us to keep our callbacks inside the state they will be called from. This means that we won't need a huge header file containing all of the callbacks.

One last alteration we need is to add a list of texture IDs to each state so that we can clear all of the textures that were loaded for that state. Open up GameState.h and we will add a protected variable:

protected:
    std::vector<std::string> m_textureIDList;

We will pass this into the state parser in onEnter, and then we can clear any used textures in the onExit function of each state, like so:

// clear the texture manager
for(int i = 0; i < m_textureIDList.size(); i++)
{
    TheTextureManager::Instance()->clearFromTextureMap(m_textureIDList[i]);
}

Before we start running the game, we need to register our MenuButton type with the GameObjectFactory. Open up Game.cpp, and in the Game::init function we can register the type:

TheGameObjectFactory::Instance()->registerType("MenuButton", new MenuButtonCreator());

We can now run the game and see our fully data-driven MainMenuState.
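The MenuButtonCreator class registered above is not shown in this excerpt, but it follows exactly the same pattern as the PlayerCreator class from earlier; a sketch, assuming MenuButton has a default constructor:

class MenuButtonCreator : public BaseCreator
{
    GameObject* createGameObject() const
    {
        // the factory calls this whenever "MenuButton" is requested
        return new MenuButton();
    }
};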
Warfare Unleashed: Implementing Gameplay
Equipping the entities

The SceneNode base class was inherited by the Entity class. Entities are the central part of this chapter; it's all about the interaction between entities of different kinds. Before starting to implement all those interactions, it is reasonable to think about the crucial properties our entities need to have.

Introducing hitpoints

Since we are preparing our airplanes for the battlefield, we need to provide them with new specific attributes. To our class definition of Entity, we add a new member variable that memorizes the current hitpoints. Hitpoints (HP) are a measure of the hull integrity of an entity; the entity is destroyed as soon as the hitpoints reach or fall below zero. In addition to the member variable, we provide member functions that allow the modification of the hitpoints. We do not provide direct write access; however, the hitpoints can be decreased (the plane is damaged) or increased (the plane is repaired). Also, a destroy() function instantly destroys the entity.

class Entity : public SceneNode
{
public:
    explicit Entity(int hitpoints);

    void repair(int points);
    void damage(int points);
    void destroy();

    int getHitpoints() const;
    bool isDestroyed() const;
    ...

private:
    int mHitpoints;
    ...
};

The implementation is as expected: repair() adds the specified hitpoints, damage() subtracts them, and destroy() sets them to zero.

Storing entity attributes in data tables

In our game, there are already two different airplanes with different attributes. For this chapter, we introduce a third one to make the game more interesting. With an increasing number of new aircraft types, attributes such as speed, hitpoints, used texture, or fire rate may vary strongly among them. We need to think of a way to store those properties in a central place, allowing easy access to them. What we clearly want to avoid are case differentiations in every Aircraft method, since this makes the local logic code less readable and spreads the attributes across different functions. Instead of if/else cascades or switch statements, we can store the attributes in a central table, and just access the table every time we need an attribute.

Let's define the type of such a table entry in the case of an airplane. We choose the simplest way and have a structure, AircraftData, with all members public. This type is defined in the file DataTables.hpp.

struct AircraftData
{
    int hitpoints;
    float speed;
    Textures::ID texture;
};

While AircraftData is a single table entry, the whole table is represented as a sequence of entries, namely std::vector<AircraftData>. Next, we write a function that initializes the table for different aircraft types. We begin by defining a vector of the correct size (Aircraft::TypeCount is the last enumerator of the enum Aircraft::Type; it contains the number of different aircraft types). Since the enumerators are consecutive and begin at zero, we can use them as indices in our STL container. We then initialize all the attributes for the different airplanes, and eventually return the filled table.
std::vector<AircraftData> initializeAircraftData()
{
    std::vector<AircraftData> data(Aircraft::TypeCount);

    data[Aircraft::Eagle].hitpoints = 100;
    data[Aircraft::Eagle].speed = 200.f;
    data[Aircraft::Eagle].texture = Textures::Eagle;

    data[Aircraft::Raptor].hitpoints = 20;
    data[Aircraft::Raptor].speed = 80.f;
    data[Aircraft::Raptor].texture = Textures::Raptor;
    ...

    return data;
}

The global function initializeAircraftData() is declared in DataTables.hpp and defined in DataTables.cpp. It is used inside Aircraft.cpp to initialize a global constant named Table. This constant is declared locally in the .cpp file, so only the Aircraft internals can access it. In order to avoid name collisions in other files, we use an anonymous namespace.

namespace
{
    const std::vector<AircraftData> Table = initializeAircraftData();
}

Inside the Aircraft methods, we can access a constant attribute of the plane's own type using the member variable mType as an index. For example, Table[mType].hitpoints denotes the maximal hitpoints of the current aircraft.

Data tables are only the first step of storing gameplay constants. For more flexibility, and to avoid recompiling the application, you can also store these constants externally, for example, in a simple text file or using a specific file format. The application initially loads these files, parses the values, and fills the data tables accordingly. Nowadays, it is very common to load gameplay information from external resources. There are text-based formats such as YAML or XML, as well as many application-specific text and binary formats. There are also well-known C++ libraries such as Boost.Serialization (www.boost.org) that help with loading and saving data structures from C++. One possibility that has recently gained popularity consists of using script languages, most notably Lua (www.lua.org), in addition to C++. This has the advantage that not only constant data but also dynamic functionality can be outsourced and loaded during runtime.

Displaying text

We would like to add some text on the display, for example, to show the hitpoints or ammunition of different entities. Since this text information is supposed to be shown next to the entity, it stands to reason to attach it to the corresponding scene node. We therefore create a TextNode class which inherits from SceneNode, as shown in the following code:

class TextNode : public SceneNode
{
public:
    explicit TextNode(const FontHolder& fonts, const std::string& text);

    void setString(const std::string& text);

private:
    virtual void drawCurrent(sf::RenderTarget& target, sf::RenderStates states) const;

private:
    sf::Text mText;
};

The implementation of the functions is not complicated; the SFML class sf::Text provides most of what we need. In the TextNode constructor, we retrieve the font from the resource holder and assign it to the text.

TextNode::TextNode(const FontHolder& fonts, const std::string& text)
{
    mText.setFont(fonts.get(Fonts::Main));
    mText.setCharacterSize(20);
    setString(text);
}

The function to draw the text nodes just forwards the call to the SFML render target, as you know it from sprites.

void TextNode::drawCurrent(sf::RenderTarget& target, sf::RenderStates states) const
{
    target.draw(mText, states);
}

For the interface, mainly the following method is interesting. It assigns a new string to the text node and automatically adapts to its size. centerOrigin() is a utility function we wrote; it sets the object's origin to its center, which simplifies positioning a lot.
void TextNode::setString(const std::string& text)
{
    mText.setString(text);
    centerOrigin(mText);
}

In the Aircraft constructor, we create a text node and attach it to the aircraft itself. We keep a pointer mHealthDisplay as a member variable and let it point to the attached node.

std::unique_ptr<TextNode> healthDisplay(new TextNode(fonts, ""));
mHealthDisplay = healthDisplay.get();
attachChild(std::move(healthDisplay));

In the method Aircraft::update(), we check the current hitpoints and convert them to a string, using our custom toString() function. The text node's string and relative position are set. Additionally, we set the text node's rotation to the negative aircraft rotation, which compensates the rotation in total. We do this in order to have the text always upright, independent of the aircraft's orientation.

mHealthDisplay->setString(toString(getHitpoints()) + " HP");
mHealthDisplay->setPosition(0.f, 50.f);
mHealthDisplay->setRotation(-getRotation());

Creating enemies

Enemies are other instances of the Aircraft class. They appear at the top of the screen and move downwards, until they fly past the bottom of the screen. Most properties are the same for the player and enemies, so we only explain the new aircraft functionality.

Movement patterns

By default, enemies fly downwards in a straight line. But it would be nice if different enemies moved differently, giving the feeling of a very basic artificial intelligence (AI). Thus, we introduce specific movement patterns. Such a pattern can be described as a sequence of directions to which the enemy airplane heads. A direction consists of an angle and a distance.

struct Direction
{
    Direction(float angle, float distance);

    float angle;
    float distance;
};

Our data table for aircraft gets a new entry for the sequence of directions, as shown in the following code:

struct AircraftData
{
    int hitpoints;
    float speed;
    Textures::ID texture;
    std::vector<Direction> directions;
};

Let's implement a zigzag movement pattern for the Raptor plane. First, it steers for 80 units in the 45 degrees direction. Then, the angle changes to -45 degrees, and the plane traverses 160 units back. Last, it moves again 80 units in the +45 degrees direction, until it arrives at its original x position.

data[Aircraft::Raptor].directions.push_back(Direction( 45, 80));
data[Aircraft::Raptor].directions.push_back(Direction(-45, 160));
data[Aircraft::Raptor].directions.push_back(Direction( 45, 80));

For the Avenger plane, we use a slightly more complex pattern: it is essentially a zigzag, but between the two diagonal movements, the plane moves straight for 50 units.

data[Aircraft::Avenger].directions.push_back(Direction(+45, 50));
data[Aircraft::Avenger].directions.push_back(Direction(  0, 50));
data[Aircraft::Avenger].directions.push_back(Direction(-45, 100));
data[Aircraft::Avenger].directions.push_back(Direction(  0, 50));
data[Aircraft::Avenger].directions.push_back(Direction(+45, 50));

The following figure shows the sequence of directions for both planes; the Raptor plane is located on the left, the Avenger on the right.

This way of defining movement is very simple, yet it enables a lot of possibilities. You can let the planes fly in any direction (also sideways or backwards); you can even approximate curves when using small intervals. Now, we look at the logic we have to implement to follow these movement patterns.
To the Aircraft class, we add two member variables: mTravelledDistance, which denotes the distance already travelled in the current direction, and mDirectionIndex, to know which direction the plane is currently taking. First, we retrieve the aircraft's movement pattern and store it as a reference to const, named directions. We only proceed if there are movement patterns for the current type (otherwise, the plane flies straight down).

void Aircraft::updateMovementPattern(sf::Time dt)
{
    const std::vector<Direction>& directions = Table[mType].directions;

    if (!directions.empty())
    {

Second, we check whether the current direction has already been fully travelled by the plane (that is, the travelled distance is higher than the direction's distance). If so, the index is advanced to the next direction. The modulo operator allows a cycle: after finishing the last direction, the plane begins again with the first one.

        float distanceToTravel = directions[mDirectionIndex].distance;
        if (mTravelledDistance > distanceToTravel)
        {
            mDirectionIndex = (mDirectionIndex + 1) % directions.size();
            mTravelledDistance = 0.f;
        }

Now, we have to get a velocity vector out of the angle. First, we turn the angle by 90 degrees (by default, 0 degrees points to the right); but since our planes fly downwards, we work in a rotated coordinate system, such that we can use a minus sign to toggle between left and right. We also have to convert degrees to radians, using our function toRadian(). The velocity's x component is computed using the cosine of the angle multiplied by the maximal speed; analogously, the y component uses the sine. Eventually, the travelled distance is updated.

        float radians = toRadian(directions[mDirectionIndex].angle + 90.f);
        float vx = getMaxSpeed() * std::cos(radians);
        float vy = getMaxSpeed() * std::sin(radians);

        setVelocity(vx, vy);
        mTravelledDistance += getMaxSpeed() * dt.asSeconds();
    }
}

Note that if the distance to travel is not a multiple of the aircraft speed, the plane will fly slightly further than intended. This error is usually small, because there are many logic frames per second, and it is hardly noticeable, since each enemy will only be in the view for a short time.

Spawning enemies

It would be good if enemies were initially inactive, and the world created them as soon as they come closer to the player. By doing so, we do not need to process enemies that are only relevant in the distant future; the scene graph can concentrate on updating and drawing the active enemies. We create a structure nested inside the World class that represents a spawn point for an enemy.

struct SpawnPoint
{
    SpawnPoint(Aircraft::Type type, float x, float y);

    Aircraft::Type type;
    float x;
    float y;
};

A member variable World::mEnemySpawnPoints of type std::vector<SpawnPoint> holds all future spawn points. As soon as an enemy position enters the battlefield, the corresponding enemy is created and inserted into the scene graph, and the spawn point is removed. The World class member function getBattlefieldBounds() returns an sf::FloatRect describing the battlefield area, similar to getViewBounds(). The battlefield area extends the view area by a small rectangle at the top, inside which new enemies spawn before they enter the view. If an enemy's y coordinate lies below the battlefield's top member, the enemy will be created at its spawn point. Since enemies face downwards, they are rotated by 180 degrees.
void World::spawnEnemies()
{
    while (!mEnemySpawnPoints.empty()
        && mEnemySpawnPoints.back().y > getBattlefieldBounds().top)
    {
        SpawnPoint spawn = mEnemySpawnPoints.back();

        std::unique_ptr<Aircraft> enemy(
            new Aircraft(spawn.type, mTextures, mFonts));
        enemy->setPosition(spawn.x, spawn.y);
        enemy->setRotation(180.f);

        mSceneLayers[Air]->attachChild(std::move(enemy));
        mEnemySpawnPoints.pop_back();
    }
}

Now, let's insert the spawn points. addEnemy() effectively calls mEnemySpawnPoints.push_back(), and interprets the passed coordinates relative to the player's spawn position. After inserting all spawn points, we sort them by their y coordinates. By doing so, spawnEnemies() only needs to check the elements at the end of the sequence instead of iterating through it every time.

void World::addEnemies()
{
    addEnemy(Aircraft::Raptor,    0.f,  500.f);
    addEnemy(Aircraft::Avenger, -70.f, 1400.f);
    ...

    std::sort(mEnemySpawnPoints.begin(), mEnemySpawnPoints.end(),
        [] (SpawnPoint lhs, SpawnPoint rhs)
        {
            return lhs.y < rhs.y;
        });
}

Here is an example of the player facing four Avenger enemies. Above each, you see how many hitpoints it has left.

Adding projectiles

Finally, it's time to add what makes a game fun: shooting down stuff is essential for our game. The code to interact with the World class is already defined, thanks to the actions in Player and to the existing Entity base class. All that's left is to define the projectiles themselves. We start with the Projectile class. We have normal machine gun bullets and homing missiles represented by the same class. This class inherits from the Entity class and is quite small, since it doesn't have anything special that differentiates it from other entities, apart from collision tests, which we will talk about later.

class Projectile : public Entity
{
    public:
        enum Type
        {
            AlliedBullet,
            EnemyBullet,
            Missile,
            TypeCount
        };

    public:
        Projectile(Type type, const TextureHolder& textures);

        void guideTowards(sf::Vector2f position);
        bool isGuided() const;

        virtual unsigned int getCategory() const;
        virtual sf::FloatRect getBoundingRect() const;
        float getMaxSpeed() const;
        int getDamage() const;

    private:
        virtual void updateCurrent(sf::Time dt, CommandQueue& commands);
        virtual void drawCurrent(sf::RenderTarget& target,
            sf::RenderStates states) const;

    private:
        Type mType;
        sf::Sprite mSprite;
        sf::Vector2f mTargetDirection;
};

Nothing fun or exciting here; we add a few new helper functions, such as the one to guide the missile towards a target. So let's have a quick look at the implementation. You might notice that we use the same data table mechanism we used in the Aircraft class to store the projectile data (a sketch of such a table follows at the end of this section).

Projectile::Projectile(Type type, const TextureHolder& textures)
: Entity(1)
, mType(type)
, mSprite(textures.get(Table[type].texture))
{
    centerOrigin(mSprite);
}

The constructor simply creates a sprite with the texture we want for the projectile. We will check out the guide function when we actually implement the behavior of missiles. The rest of the functions don't hold anything particularly interesting: draw the sprite, return a category for the commands, and provide other needed data. To get an overview of the class hierarchy in the scene graph, here is an inheritance diagram of the current scene node types. The data table structures, which are directly related to their corresponding entities, are shown at the bottom of the following diagram:

Firing bullets and missiles

So let's try to shoot some bullets in the game. We start by adding two new actions in the Player class: Fire and LaunchMissile. We define the default key bindings for these to be the Space bar and M keys.
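First, though, here is the promised sketch of the projectile data table; the field names mirror AircraftData, but the concrete damage/speed values and texture IDs are assumptions for illustration:

struct ProjectileData
{
    int          damage;
    float        speed;
    Textures::ID texture;
};

std::vector<ProjectileData> initializeProjectileData()
{
    std::vector<ProjectileData> data(Projectile::TypeCount);

    // All concrete numbers here are illustrative assumptions.
    data[Projectile::AlliedBullet].damage  = 10;
    data[Projectile::AlliedBullet].speed   = 300.f;
    data[Projectile::AlliedBullet].texture = Textures::Bullet;

    data[Projectile::EnemyBullet] = data[Projectile::AlliedBullet];

    data[Projectile::Missile].damage  = 200;
    data[Projectile::Missile].speed   = 150.f;
    data[Projectile::Missile].texture = Textures::Missile;

    return data;
}

Now, the new key bindings and actions in the Player class: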
Player::Player()
{
    // Set initial key bindings
    mKeyBinding[sf::Keyboard::Left]  = MoveLeft;
    mKeyBinding[sf::Keyboard::Right] = MoveRight;
    mKeyBinding[sf::Keyboard::Up]    = MoveUp;
    mKeyBinding[sf::Keyboard::Down]  = MoveDown;
    mKeyBinding[sf::Keyboard::Space] = Fire;
    mKeyBinding[sf::Keyboard::M]     = LaunchMissile;
    // ...
}

void Player::initializeActions()
{
    // ...
    mActionBinding[Fire].action = derivedAction<Aircraft>(
        std::bind(&Aircraft::fire, _1));
    mActionBinding[LaunchMissile].action = derivedAction<Aircraft>(
        std::bind(&Aircraft::launchMissile, _1));
}

So when we press the keys bound to those two actions, a command is fired which calls the aircraft's fire() and launchMissile() functions. However, we cannot put the actual code that fires the bullet or missile into those two functions. The reason is that they have no concept of how much time has elapsed, and we do not want to fire a projectile every frame: there has to be some cooldown until the next shot. To accomplish that, we need the delta time passed to the aircraft's update() function. Instead, we only mark what we want to fire, by setting the Boolean flags mIsFiring or mIsLaunchingMissile to true in the Aircraft::fire() and Aircraft::launchMissile() functions, respectively (both are sketched a little further below). The actual logic is then performed in the update() function, using commands. To keep the code readable, we have extracted this logic into its own function:

void Aircraft::checkProjectileLaunch(sf::Time dt, CommandQueue& commands)
{
    if (mIsFiring && mFireCountdown <= sf::Time::Zero)
    {
        commands.push(mFireCommand);
        mFireCountdown += sf::seconds(1.f / (mFireRateLevel + 1));
        mIsFiring = false;
    }
    else if (mFireCountdown > sf::Time::Zero)
    {
        mFireCountdown -= dt;
    }

    if (mIsLaunchingMissile)
    {
        commands.push(mMissileCommand);
        mIsLaunchingMissile = false;
    }
}

We have a cooldown for the bullets. When enough time has elapsed since the last bullet was fired, we can fire another one. The actual creation of the bullet is done using a command, which we will look at later. After we spawn the bullet, we reset the countdown. Here, we use += instead of =; with a simple assignment, we would discard a small time remainder in each frame, accumulating a bigger error as time goes by. The duration of the countdown is computed from the member variable mFireRateLevel in Aircraft, which is stored in mFireCountdown. Like that, we can improve the aircraft's fire rate easily: if the fire rate level is one, we can fire a bullet every half second; increase it to level two, and we get a bullet every third of a second. We also have to remember to keep ticking down the countdown member, even if the user is not trying to fire. Otherwise, the countdown would get stuck when the user released the Space bar. Next is the missile launch. We don't need a countdown here, because in the Player class, we made this input event-based (not real-time based).

bool Player::isRealtimeAction(Action action)
{
    switch (action)
    {
        case MoveLeft:
        case MoveRight:
        case MoveDown:
        case MoveUp:
        case Fire:
            return true;

        default:
            return false;
    }
}

Since the switch statement does not identify LaunchMissile as a real-time input, the user has to release the M key before he can shoot another missile. The user wants to save his missiles for the moment he needs them. So, let's look at the commands that we push, in order to actually shoot the projectiles. We define them in the constructor in order to have access to the texture holder. This shows one of the strengths of lambda expressions in C++11.
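As promised, here are plausible sketches of the flag-setting functions; the ammunition check for missiles is an assumption, since the excerpt does not show that bookkeeping:

void Aircraft::fire()
{
    // Only note the intent; checkProjectileLaunch() spawns the bullet.
    mIsFiring = true;
}

void Aircraft::launchMissile()
{
    // Assumed ammunition bookkeeping for illustration.
    if (mMissileAmmo > 0)
    {
        mIsLaunchingMissile = true;
        --mMissileAmmo;
    }
}

And now the commands themselves, defined in the aircraft's constructor: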
Aircraft::Aircraft(Type type, const TextureHolder& textures)
{
    mFireCommand.category = Category::SceneAirLayer;
    mFireCommand.action =
        [this, &textures] (SceneNode& node, sf::Time)
        {
            createBullets(node, textures);
        };

    mMissileCommand.category = Category::SceneAirLayer;
    mMissileCommand.action =
        [this, &textures] (SceneNode& node, sf::Time)
        {
            createProjectile(node, Projectile::Missile, 0.f, 0.5f, textures);
        };
}

Now, we can pass the texture holder to the projectiles without any extra difficulty, and we don't even have to keep an explicit reference to the resources. This makes the Aircraft class and our code a lot simpler, since the reference does not need to exist in the update() function. The commands are sent to the air layer in the scene graph; this is the node where we want to create our projectiles. The missile is a bit simpler to create than the bullets, which is why we directly call Aircraft::createProjectile(). So how do we create bullets, then?

void Aircraft::createBullets(SceneNode& node,
    const TextureHolder& textures) const
{
    Projectile::Type type = isAllied()
        ? Projectile::AlliedBullet : Projectile::EnemyBullet;

    switch (mSpreadLevel)
    {
        case 1:
            createProjectile(node, type,  0.0f,  0.5f,  textures);
            break;

        case 2:
            createProjectile(node, type, -0.33f, 0.33f, textures);
            createProjectile(node, type, +0.33f, 0.33f, textures);
            break;

        case 3:
            createProjectile(node, type, -0.5f,  0.33f, textures);
            createProjectile(node, type,  0.0f,  0.5f,  textures);
            createProjectile(node, type, +0.5f,  0.33f, textures);
            break;
    }
}

For projectiles, we provide different levels of fire spread in order to make the game more interesting. The player can feel that progress is made, and that his aircraft becomes more powerful as he is playing. The function calls createProjectile(), just as it was done for the missile. So how do we actually create the projectile and attach it to the scene graph?

void Aircraft::createProjectile(SceneNode& node,
    Projectile::Type type, float xOffset, float yOffset,
    const TextureHolder& textures) const
{
    std::unique_ptr<Projectile> projectile(new Projectile(type, textures));

    sf::Vector2f offset(xOffset * mSprite.getGlobalBounds().width,
                        yOffset * mSprite.getGlobalBounds().height);
    sf::Vector2f velocity(0, projectile->getMaxSpeed());

    float sign = isAllied() ? -1.f : +1.f;
    projectile->setPosition(getWorldPosition() + offset * sign);
    projectile->setVelocity(velocity * sign);
    node.attachChild(std::move(projectile));
}

We create the projectile with an offset from the player and a velocity required by the projectile type. The direction also depends on whether this projectile is shot by an enemy or by the player; we do not want the enemy bullets to go upwards like the player's bullets, or the other way around. Implementing gunfire for enemies is now a tiny step: instead of calling fire() only when keys are pressed, we just call it every frame. We do this by adding the following code to the beginning of the checkProjectileLaunch() function:

if (!isAllied())
    fire();

Now we have bullets that fly and split the sky.

Homing missiles

What would a modern aircraft be if it hadn't got an arsenal of homing missiles? This is where we start to add intelligence to our missiles; they should be capable of seeking enemies autonomously. Let's first look at what we need to implement on the projectile side. For homing missiles, the functions guideTowards() and isGuided(), as well as the variable mTargetDirection, are important.
Their implementation looks as follows:

bool Projectile::isGuided() const
{
    return mType == Missile;
}

void Projectile::guideTowards(sf::Vector2f position)
{
    assert(isGuided());
    mTargetDirection = unitVector(position - getWorldPosition());
}

The function unitVector() is a helper we have written. It divides a vector by its length, thus always returning a vector of length one. The target direction is therefore a unit vector headed towards the target. In the function updateCurrent(), we steer our missile. We change the current missile's velocity by adding small contributions of the target direction vector to it. By doing so, the velocity vector continuously approaches the target direction, with the effect that the missile flies along a curve towards the target. approachRate is a constant that determines to what extent the target direction contributes to the velocity. newVelocity, which is the weighted sum of the two vectors, is scaled to the maximum speed of the missile. It is assigned to the missile's velocity, and its angle is assigned to the missile's rotation. We add 90 degrees here, because the missile texture points upwards (instead of to the right).

void Projectile::updateCurrent(sf::Time dt, CommandQueue& commands)
{
    if (isGuided())
    {
        const float approachRate = 200.f;

        sf::Vector2f newVelocity = unitVector(approachRate
            * dt.asSeconds() * mTargetDirection + getVelocity());
        newVelocity *= getMaxSpeed();

        float angle = std::atan2(newVelocity.y, newVelocity.x);
        setRotation(toDegree(angle) + 90.f);
        setVelocity(newVelocity);
    }

    Entity::updateCurrent(dt, commands);
}

Note that there are many possibilities to guide a missile. Steering behaviors define a whole field of AI; they incorporate advanced mechanisms such as evasion, interception, and group behavior. Don't hesitate to search on the internet if you're interested. Now, we have guided the missile to a certain position, but how do we retrieve that position? We want our missile to pursue the closest enemy. For this, we switch from Projectile to the World class, where we write a new function. First, we store all currently active (that is, already spawned and not yet destroyed) enemies in the member variable mActiveEnemies. With the command facility, this task is almost trivial:

void World::guideMissiles()
{
    Command enemyCollector;
    enemyCollector.category = Category::EnemyAircraft;
    enemyCollector.action = derivedAction<Aircraft>(
        [this] (Aircraft& enemy, sf::Time)
        {
            if (!enemy.isDestroyed())
                mActiveEnemies.push_back(&enemy);
        });

Next, we have to find the nearest enemy for each missile. We set up another command, now for projectiles, that iterates through the active enemies to find the closest one. Here, distance() is a helper function that returns the distance between the centers of two scene nodes.

    Command missileGuider;
    missileGuider.category = Category::AlliedProjectile;
    missileGuider.action = derivedAction<Projectile>(
        [this] (Projectile& missile, sf::Time)
        {
            // Ignore unguided bullets
            if (!missile.isGuided())
                return;

            float minDistance = std::numeric_limits<float>::max();
            Aircraft* closestEnemy = nullptr;

            FOREACH(Aircraft* enemy, mActiveEnemies)
            {
                float enemyDistance = distance(missile, *enemy);
                if (enemyDistance < minDistance)
                {
                    closestEnemy = enemy;
                    minDistance = enemyDistance;
                }
            }

In case we found a closest enemy, we let the missile chase it.

            if (closestEnemy)
                missile.guideTowards(closestEnemy->getWorldPosition());
        });

After defining the second command, we push both to our queue, and reset the container of active enemies.
Remember that the commands are not yet executed; they wait in the queue until they are invoked on the scene graph in World::update().

    mCommandQueue.push(enemyCollector);
    mCommandQueue.push(missileGuider);
    mActiveEnemies.clear();
}

That's it, now we are able to fire and forget! The result looks as follows:

Picking up some goodies

Now we have implemented enemies and projectiles. But even if the player shot enemy airplanes down and had exciting battles, he wouldn't notice that his success changes anything. You want to give the player the feeling that he is progressing in the game. Usual for this genre are power-ups that the enemies drop when they are killed. So let's go ahead and implement that in our game. This is the same story as with the projectiles: most of the things we need have already been implemented, so this will be quite easy to add. What we want is simply an entity that, when the player touches it, applies an effect to the player and disappears. Not much work with our current framework.

class Pickup : public Entity
{
    public:
        enum Type
        {
            HealthRefill,
            MissileRefill,
            FireSpread,
            FireRate,
            TypeCount
        };

    public:
        Pickup(Type type, const TextureHolder& textures);

        virtual unsigned int getCategory() const;
        virtual sf::FloatRect getBoundingRect() const;

        void apply(Aircraft& player) const;

    protected:
        virtual void drawCurrent(sf::RenderTarget& target,
            sf::RenderStates states) const;

    private:
        Type mType;
        sf::Sprite mSprite;
};

So, let's look at a few interesting parts. As usual, we have a data table, create a sprite, and center it, so the constructor looks just as you would expect. Let's investigate the apply() function, and how the data table is created. In apply(), a function object stored in the table is invoked with player as the argument. The initializePickupData() function initializes the function objects, using std::bind() to redirect to the Aircraft member functions.

void Pickup::apply(Aircraft& player) const
{
    Table[mType].action(player);
}

std::vector<PickupData> initializePickupData()
{
    std::vector<PickupData> data(Pickup::TypeCount);

    data[Pickup::HealthRefill].texture = Textures::HealthRefill;
    data[Pickup::HealthRefill].action = std::bind(&Aircraft::repair, _1, 25);

    data[Pickup::MissileRefill].texture = Textures::MissileRefill;
    data[Pickup::MissileRefill].action = std::bind(&Aircraft::collectMissiles, _1, 3);

    data[Pickup::FireSpread].texture = Textures::FireSpread;
    data[Pickup::FireSpread].action = std::bind(&Aircraft::increaseSpread, _1);

    data[Pickup::FireRate].texture = Textures::FireRate;
    data[Pickup::FireRate].action = std::bind(&Aircraft::increaseFireRate, _1);

    return data;
}

The pickups call already defined functions on the player aircraft that let us modify its state. These functions may repair it, refill it with missiles, or improve its firepower. It's nice when things just work out of the box. That's how the scene looks when two pickups (health and fire rate) are floating in the air. You may notice that the player's Eagle plane shoots two bullets at once, which is the result of a previously collected fire spread pickup.
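To round the excerpt off, here is a plausible sketch of the small math helpers used throughout this article (unitVector(), toRadian(), toDegree(), and distance()); the bodies follow directly from how the text describes them, but the exact signatures in the book's utility headers are assumptions.

#include <SFML/System/Vector2.hpp>
#include <cassert>
#include <cmath>

// Length of a 2D vector, via the Pythagorean theorem.
float length(sf::Vector2f vector)
{
    return std::sqrt(vector.x * vector.x + vector.y * vector.y);
}

// Divides a vector by its length, always returning a vector of length one.
sf::Vector2f unitVector(sf::Vector2f vector)
{
    assert(vector != sf::Vector2f(0.f, 0.f)); // normalizing zero is undefined
    return vector / length(vector);
}

// Degree/radian conversions used by the movement and steering code.
float toRadian(float degree) { return degree * 3.14159265f / 180.f; }
float toDegree(float radian) { return radian * 180.f / 3.14159265f; }

// Distance between the centers of two scene nodes, as used in guideMissiles().
// Requires the project's SceneNode header for getWorldPosition().
float distance(const SceneNode& lhs, const SceneNode& rhs)
{
    return length(lhs.getWorldPosition() - rhs.getWorldPosition());
}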
Introduction to Modern OpenGL

Packt
02 Jul 2013
16 min read
(For more resources related to this topic, see here.)

Setting up the OpenGL v3.3 core profile on Visual Studio 2010 using the GLEW and freeglut libraries

We will start with a very basic example in which we set up the modern OpenGL v3.3 core profile. This example simply creates a blank window and clears it with red color. OpenGL, or any other graphics API for that matter, requires a window to display graphics in. This is carried out through platform-specific code. Originally, the GLUT library was invented to provide windowing functionality in a platform-independent manner. However, this library was not maintained with each new OpenGL release. Fortunately, another independent project, freeglut, followed in GLUT's footsteps by providing similar (and in some cases better) windowing support in a platform-independent way. In addition, it also helps with the creation of OpenGL core/compatibility profile contexts. The latest version of freeglut may be downloaded from http://freeglut.sourceforge.net. The version used in the source code accompanying this book is v2.8.0. After downloading the freeglut library, you will have to compile it to generate the libs/dlls.

The extension mechanism provided by OpenGL still exists. To aid with getting the appropriate function pointers, the GLEW library is used. The latest version can be downloaded from http://glew.sourceforge.net. The version of GLEW used in the source code accompanying this book is v1.9.0. If the source release is downloaded, you will have to build GLEW first to generate the libs and dlls on your platform. You may also download the pre-built binaries.

Prior to OpenGL v3.0, the OpenGL API provided matrix support through specific matrix stacks, such as the modelview, projection, and texture matrix stacks. In addition, transformation functions such as translate, rotate, and scale, as well as projection functions, were provided. Moreover, immediate mode rendering was supported, allowing application programmers to directly push vertex information to the hardware. In OpenGL v3.0 and above, all of these functionalities are removed from the core profile, whereas for backward compatibility they are retained in the compatibility profile. If we use the core profile (which is the recommended approach), it is our responsibility to implement all of this functionality, including matrix handling and transformations. Fortunately, a library called glm exists that provides math-related classes such as vectors and matrices. It also provides additional convenience functions and classes. For all of the demos in this book, we will use the glm library. Since this is a header-only library, there are no linker libraries for glm. The latest version of glm can be downloaded from http://glm.g-truc.net. The version used for the source code in this book is v0.9.4.0.

There are several image formats available, and it is not a trivial task to write an image loader for such a large number of them. Fortunately, there are several image loading libraries that make image loading a trivial task; in addition, they provide support for both loading and saving of images in various formats. One such library is the SOIL image loading library. The latest version of SOIL can be downloaded from http://www.lonesock.net/soil.html. Once we have downloaded the SOIL library, we extract the file to a location on the hard disk. Next, we set up the include and library paths in the Visual Studio environment.
The include path on my development machine is D:\Libraries\soil\Simple OpenGL Image Library\src, whereas the library path is set to D:\Libraries\soil\Simple OpenGL Image Library\lib\VC10_Debug. Of course, the paths on your system will differ from mine, but these are the folders that the directories should point to. These steps help us set up our development environment. For all of the recipes in this book, the Visual Studio 2010 Professional version is used. Readers may also use the free Express edition or any other version of Visual Studio (for example, Ultimate/Enterprise). Since there is a myriad of development environments, to make it easier for users on other platforms, we have provided premake script files as well. The code for this recipe is in the Chapter1/GettingStarted directory.

How to do it...

Let us set up the development environment using the following steps:

1. After downloading the required libraries, we set up the Visual Studio 2010 environment settings. We first create a new Win32 Console Application project as shown in the preceding screenshot.
2. We set up an empty Win32 project as shown in the following screenshot:
3. Next, we set up the include and library paths for the project by going into the Project menu and selecting project Properties. This opens a new dialog box. In the left pane, click on the Configuration Properties option and then on VC++ Directories. In the right pane, in the Include Directories field, add the GLEW and freeglut subfolder paths. Similarly, in the Library Directories field, add the path to the lib subfolder of the GLEW and freeglut libraries as shown in the following screenshot:
4. Next, we add a new .cpp file to the project and name it main.cpp. This is the main source file of our project. You may also browse through Chapter1/GettingStarted/GettingStarted/main.cpp, which does all of this setup already.

Let us skim through the Chapter1/GettingStarted/GettingStarted/main.cpp file piece by piece.

#include <GL/glew.h>
#include <GL/freeglut.h>
#include <iostream>

These lines are the include files that we will add to all of our projects. The first is the GLEW header, the second is the freeglut header, and the final include is the standard input/output header.

In Visual Studio, we can add the required linker libraries in two ways. The first way is through the Visual Studio environment (by going to the Properties menu item in the Project menu). This opens the project's property pages. In the configuration properties tree, we collapse the Linker subtree and click on the Input item. The first field in the right pane is Additional Dependencies. We can add the linker library in this field as shown in the following screenshot: The second way is to add the glew32.lib file to the linker settings programmatically. This can be achieved by adding the following pragma:

#pragma comment(lib, "glew32.lib")

The next line is the using directive to enable access to the functions in the std namespace. This is not mandatory, but we include it here so that we do not have to prefix std:: to every standard library function from the iostream header file.

using namespace std;

The next lines define the width and height constants which will be the screen resolution for the window. After these declarations, there are five function definitions.
The OnInit() function is used for initializing any OpenGL state or object, OnShutdown() is used to delete an OpenGL object, OnResize() handles the resize event, OnRender() handles the paint event, and main() is the entry point of the application. We start with the definition of the main() function.

const int WIDTH  = 1280;
const int HEIGHT = 960;

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA);
    glutInitContextVersion(3, 3);
    glutInitContextFlags(GLUT_FORWARD_COMPATIBLE | GLUT_DEBUG);
    glutInitContextProfile(GLUT_CORE_PROFILE);
    glutInitWindowSize(WIDTH, HEIGHT);

The first line, glutInit, initializes the GLUT environment. We pass the command line arguments to this function from our entry point. Next, we set up the display mode for our application. In this case, we request the GLUT framework to provide support for a depth buffer, double buffering (that is, a front and a back buffer for smooth, flicker-free rendering), and an RGBA frame buffer format (that is, with red, green, blue, and alpha channels). Next, we set the required OpenGL context version using glutInitContextVersion. The first parameter is the major version of OpenGL and the second parameter is the minor version. For example, if we want to create an OpenGL v4.3 context, we call glutInitContextVersion(4, 3). Next, the context flags and the context profile are specified. Note that, in freeglut, GLUT_DEBUG and GLUT_FORWARD_COMPATIBLE are context flags passed to glutInitContextFlags, whereas GLUT_CORE_PROFILE is passed to glutInitContextProfile:

    glutInitContextFlags(GLUT_FORWARD_COMPATIBLE | GLUT_DEBUG);
    glutInitContextProfile(GLUT_CORE_PROFILE);

In OpenGL v4.3, we can register a callback when any OpenGL-related error occurs. Passing GLUT_DEBUG to the glutInitContextFlags function creates the OpenGL context in debug mode, which is needed for the debug message callback. For any version of OpenGL, including OpenGL v3.3 and above, there are two profiles available: the core profile (which is a pure shader-based profile without support for the OpenGL fixed functionality) and the compatibility profile (which supports the OpenGL fixed functionality). All of the matrix stack functionality (glMatrixMode(*), glTranslate*, glRotate*, glScale*, and so on) and immediate mode calls such as glVertex*, glTexCoord*, and glNormal* of legacy OpenGL are retained in the compatibility profile, but removed from the core profile. In our case, we request a forward-compatible core profile, which means that we will not have any fixed-function OpenGL functionality available.

Next, we set the screen size and create the window:

    glutInitWindowSize(WIDTH, HEIGHT);
    glutCreateWindow("Getting started with OpenGL 3.3");

Next, we initialize the GLEW library. It is important to initialize GLEW after the OpenGL context has been created. If glewInit returns GLEW_OK, initialization succeeded; otherwise, the GLEW initialization failed.

    glewExperimental = GL_TRUE;
    GLenum err = glewInit();
    if (GLEW_OK != err)
    {
        cerr << "Error: " << glewGetErrorString(err) << endl;
    }
    else
    {
        if (GLEW_VERSION_3_3)
        {
            cout << "Driver supports OpenGL 3.3\nDetails:" << endl;
        }
    }
    cout << "\tUsing glew " << glewGetString(GLEW_VERSION) << endl;
    cout << "\tVendor: "    << glGetString(GL_VENDOR)      << endl;
    cout << "\tRenderer: "  << glGetString(GL_RENDERER)    << endl;
    cout << "\tVersion: "   << glGetString(GL_VERSION)     << endl;
    cout << "\tGLSL: "      << glGetString(GL_SHADING_LANGUAGE_VERSION) << endl;

The glewExperimental global switch allows the GLEW library to report an extension if it is supported by the hardware but is unsupported by the experimental or pre-release drivers.
After GLEW is initialized, the diagnostic information, such as the GLEW version, the graphics vendor, the OpenGL renderer, and the shading language version, is printed to the standard output. Finally, we call our initialization function OnInit() and then register our uninitialization function OnShutdown() via glutCloseFunc, the close callback that is called when the window is about to close. Next, we attach our display and reshape functions to their corresponding callbacks. The main function is terminated with a call to glutMainLoop(), which starts the application's main loop.

    OnInit();
    glutCloseFunc(OnShutdown);
    glutDisplayFunc(OnRender);
    glutReshapeFunc(OnResize);
    glutMainLoop();
    return 0;
}

There's more…

The remaining functions are defined as follows:

void OnInit()
{
    glClearColor(1, 0, 0, 0);
    cout << "Initialization successful" << endl;
}

void OnShutdown()
{
    cout << "Shutdown successful" << endl;
}

void OnResize(int nw, int nh)
{
}

void OnRender()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glutSwapBuffers();
}

For this simple example, we set the clear color to red (R:1, G:0, B:0, A:0). The first three values are the red, green, and blue channels, and the last is the alpha channel, which is used in alpha blending. The only other interesting function defined in this simple example is OnRender(), our display callback, which is called on the paint event. It first clears the color and depth buffers to the clear color and clear depth values, respectively. Similar to the color buffer, there is another buffer called the depth buffer, whose clear value can be set using the glClearDepth function. It is used for hardware-based hidden surface removal: it simply stores the depth of the nearest fragment encountered so far. The incoming fragment's depth value overwrites the stored depth value based on the depth comparison function specified for the depth test using the glDepthFunc function. By default, the depth value is overwritten if the incoming fragment's depth is lower than the existing depth in the depth buffer. The glutSwapBuffers function is then called to make the current back buffer the front buffer that is shown on screen. This call is required in a double-buffered OpenGL application. Running the code gives us the output shown in the following screenshot.

Designing a GLSL shader class

We will now have a look at how to set up shaders. Shaders are special programs that run on the GPU. There are different shaders for controlling different stages of the programmable graphics pipeline. In a modern GPU, these include the vertex shader (which is responsible for calculating the clip-space position of a vertex), the tessellation control shader (which is responsible for determining the amount of tessellation of a given patch), the tessellation evaluation shader (which computes the interpolated positions and other attributes on the tessellation result), the geometry shader (which processes primitives and can add additional primitives and vertices if needed), and the fragment shader (which converts a rasterized fragment into a colored pixel and a depth). The modern GPU pipeline, highlighting the different shader stages, is shown in the following figure. Note that the tessellation control/evaluation shaders are only available on hardware supporting OpenGL v4.0 and above.
Since the steps involved in shader handling, as well as compiling and attaching shaders for use in OpenGL applications, are similar, we wrap these steps in a simple class we call GLSLShader.

Getting ready

The GLSLShader class is defined in the GLSLShader.[h/cpp] files. We first declare the constructor and destructor, which initialize the member variables. The next three functions, LoadFromString, LoadFromFile, and CreateAndLinkProgram, handle shader compilation, linking, and program creation. The next two functions, Use and UnUse, bind and unbind the program. Two std::map data structures are used. They store the attribute's/uniform's name as the key and its location as the value. This is done to remove the redundant calls to get an attribute's/uniform's location each frame, or whenever the location is required to access the attribute/uniform. The next two functions, AddAttribute and AddUniform, add the locations of attributes and uniforms into their respective std::map objects (_attributeList and _uniformLocationList).

class GLSLShader
{
public:
    GLSLShader(void);
    ~GLSLShader(void);
    void LoadFromString(GLenum whichShader, const string& source);
    void LoadFromFile(GLenum whichShader, const string& filename);
    void CreateAndLinkProgram();
    void Use();
    void UnUse();
    void AddAttribute(const string& attribute);
    void AddUniform(const string& uniform);
    GLuint operator[](const string& attribute);
    GLuint operator()(const string& uniform);
    void DeleteShaderProgram();

private:
    enum ShaderType {VERTEX_SHADER, FRAGMENT_SHADER, GEOMETRY_SHADER};
    GLuint _program;
    int _totalShaders;
    GLuint _shaders[3];
    map<string,GLuint> _attributeList;
    map<string,GLuint> _uniformLocationList;
};

To make it convenient to access the attribute and uniform locations from their maps, we declare two indexers. For attributes, we overload the square brackets operator ([]), whereas for uniforms, we overload the parentheses operator (()). Finally, we define a function, DeleteShaderProgram, for deletion of the shader program object. Following the function declarations are the member fields.

How to do it...

In a typical shader application, the usage of the GLSLShader object is as follows:

1. Create the GLSLShader object either on the stack (for example, GLSLShader shader;) or on the heap (for example, GLSLShader* shader = new GLSLShader();)
2. Call LoadFromFile on the GLSLShader object reference
3. Call CreateAndLinkProgram on the GLSLShader object reference
4. Call Use on the GLSLShader object reference to bind the shader object
5. Call AddAttribute/AddUniform to store the locations of all of the shader's attributes and uniforms, respectively
6. Call UnUse on the GLSLShader object reference to unbind the shader object

Note that the above steps are required at initialization only. Inside the Use/UnUse block given above, we can also set the values of uniforms that remain constant throughout the execution of the application. At the rendering step, we access the uniforms that change each frame (for example, the modelview matrices). We first bind the shader by calling the GLSLShader::Use function. We then set the uniform by calling the glUniform{*} function, invoke the rendering by calling the glDraw{*} function, and then unbind the shader (GLSLShader::UnUse). Note that the glDraw{*} call passes the attributes to the GPU.
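Put together, initialization might look like the following sketch; the shader file names and the vVertex/MVP names are illustrative assumptions:

GLSLShader shader;

// Load, compile, and link the two shader stages.
shader.LoadFromFile(GL_VERTEX_SHADER,   "shaders/shader.vert");
shader.LoadFromFile(GL_FRAGMENT_SHADER, "shaders/shader.frag");
shader.CreateAndLinkProgram();

// Cache the attribute/uniform locations once, at initialization.
shader.Use();
    shader.AddAttribute("vVertex");
    shader.AddUniform("MVP");
shader.UnUse();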
How it works...

In a typical OpenGL shader application, the shader-specific functions and their sequence of execution are as follows:

glCreateShader
glShaderSource
glCompileShader
glGetShaderInfoLog

Execution of the above four functions creates a shader object. After the shader object is created, a shader program object is created using the following set of functions, in the following sequence:

glCreateProgram
glAttachShader
glLinkProgram
glGetProgramInfoLog

Note that after the shader program has been linked, we can safely delete the shader object.

There's more…

In the GLSLShader class, the first four steps are handled in the LoadFromString function and the latter four steps are handled by the CreateAndLinkProgram member function. After the shader program object has been created, we can set the program up for execution on the GPU. This process is called shader binding, and it is carried out by the glUseProgram function, which is called through the Use/UnUse functions in the GLSLShader class. To enable communication between the application and the shader, there are two different kinds of fields available in the shader. The first are the attributes, which may change during shader execution across different shader stages. All per-vertex attributes fall into this category. The second are the uniforms, which remain constant throughout the shader execution. Typical examples include the modelview matrix and the texture samplers. In order to communicate with the shader program, the application must obtain the location of an attribute/uniform after the shader program is bound. The location identifies the attribute/uniform. In the GLSLShader class, for convenience, we store the locations of attributes and uniforms in two separate std::map objects. For accessing any attribute/uniform location, we provide an indexer in the GLSLShader class. In cases where there is an error in the compilation or linking stage, the shader log is printed to the console. Say, for example, our GLSLShader object is called shader and our shader contains a uniform called MVP. We can first add it to the map of GLSLShader by calling shader.AddUniform("MVP"). This function adds the uniform's location to the map. Then, when we want to access the uniform, we directly call shader("MVP") and it returns the location of our uniform.
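At render time, this lookup is typically combined with a glUniform call, roughly like the following sketch; glm::value_ptr, the vaoID vertex array object, and the index count are assumptions for illustration:

shader.Use();
    // Upload the per-frame modelview-projection matrix to the "MVP" uniform.
    glUniformMatrix4fv(shader("MVP"), 1, GL_FALSE, glm::value_ptr(MVP));

    // Issue the draw call; the bound VAO supplies the vertex attributes.
    glBindVertexArray(vaoID);
    glDrawElements(GL_TRIANGLES, totalIndices, GL_UNSIGNED_INT, 0);
    glBindVertexArray(0);
shader.UnUse();

Keeping the location lookup inside the class keeps the render loop free of repeated glGetUniformLocation calls, which is precisely the design motivation stated above.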