
How-To Tutorials - Game Development

368 Articles

Introducing variables

Packt
24 Jun 2014
6 min read
(For more resources related to this topic, see here.)

To store data, you have to put it in the right kind of variable. We can think of variables as boxes, and what you put in these boxes depends on what type of box it is. In most native programming languages, you have to declare a variable and its type.

Number variables

Let's go over some of the major types of variables. The first type is the number variable. These variables store numbers, not letters. That means that if you tried to put a name in, let's say "John Bura", the app simply wouldn't work.

Integer variables

There are numerous types of number variables. Integer variables, called Int variables, can be positive or negative whole numbers; they cannot have a decimal at all. So, you could put -1 into an integer variable, but not 1.2.

Real variables

Real variables can be positive or negative, and they can be decimal numbers. A real variable can be 1.0, -40.4, or 100.1, for instance. There are other kinds of number variables as well, used in more specific situations. For the most part, integer and real variables are the ones you need to know; just make sure you don't get them mixed up. If you were to run an app with this kind of mismatch, chances are it wouldn't work.

String variables

There is another kind of variable that is really important: the string variable. String variables hold letters or words. This means that if you want to record a character's name, you will have to use a string variable. In most programming languages, string variables have to be in quotes, for example, "John Bura". The quote marks tell the computer that the characters within are a string. When you put the number 1 into a string, is it a real number 1 or just a fake one? It's a fake one, because strings are not numbers; they are strings. Even though the string shows the number 1, it isn't actually the number 1.
Strings are meant to hold and display characters, and numbers are meant to do math. If you tried to do math with a string, it wouldn't work (except in JavaScript, which we will talk about shortly). If we have the string "1", it will be recorded as a character rather than as a number that can be used for calculations.

Boolean variables

The last main type of variable we need to talk about is the Boolean variable. Boolean variables are either true or false, and they are very important when it comes to games. They are used wherever there can only be two options. The following are some examples of Boolean variables:

- isAlive
- isShooting
- isInAppPurchaseCompleted
- isConnectedToInternet

Most of these variables start with the word is. This is usually done to signify that the variable is a Boolean. When you make games, you tend to use a lot of Boolean variables because there are so many states that game objects can be in. Often, these states have only two options, and the best thing to do is use a Boolean. Sometimes, you need to use an integer instead of a Boolean; usually, 0 equals false and 1 equals true.

Other variables

When it comes to game production, there are a lot of specific variables that differ from environment to environment. Sometimes, there are GameObject variables, and there can also be a whole bunch of more specific variables.

Declaring variables

If you want to store any kind of data in variables, you have to declare them first. In the backend of Construct 2, a lot of variables are already declared for you, which means that Construct 2 takes the work of declaring variables out of your hands.
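The difference between numbers, strings, and Booleans can be seen directly in JavaScript, the language the article itself calls out as the exception. This is an illustrative sketch, not Construct 2 code; the variable names simply echo the examples above:

```javascript
// A number stores a value you can do math with.
const score = 1;

// A string stores characters, even when those characters look like a number.
const fakeScore = "1";

// Math with numbers behaves as expected...
console.log(score + 1); // 2

// ..."math" with a string concatenates characters instead.
console.log(fakeScore + 1); // "11"

// Booleans are only ever true or false: ideal for two-state game flags.
const isAlive = true;
const isShooting = false;

// An integer standing in for a Boolean: 0 means false and 1 means true.
const isConnectedToInternet = 1;
console.log(isConnectedToInternet === 1); // true
```

JavaScript is the exception mentioned above: instead of refusing to run, it quietly treats the number 1 as characters and produces the string "11".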
The variables that are taken care of for you include the following:

- Keyboard
- Mouse position
- Mouse angle
- Type of web browser

Writing variables in code

When we use Construct 2, a lot of the backend busywork has already been done for us. So, how do we declare variables in code? Usually, variables are declared at the top of the coding document, as shown in the following code:

    Int score;
    Real timescale = 1.2;
    Bool isDead;
    Bool isShooting = false;
    String name = "John Bura";

Let's take a look at all of them. The type of the variable is listed first. In this case, we have the Int, Real, Bool (Boolean), and String variables. Next, we have the name of the variable. If you look carefully, you can see that certain variables have an = (equals sign) and some do not. When we declare a variable with an equals sign, we initialize it, which means we set the information in the variable right away. Sometimes you need to do this, and at other times you do not. For example, a score does not need to be initialized because we are going to change it as the game progresses. As you already know, you can initialize a Boolean variable to either true or false; these are the only two states a Boolean variable can be in. You will also notice that there are quotes around the string value. Let's take a look at some examples that won't work:

    Int score = -1.2;
    Bool isDead = "false";
    String name = John Bura;

There is something wrong with each of these examples. First, an Int variable cannot hold a decimal. Second, the Bool variable is being assigned a quoted string rather than true or false. Lastly, the String value has no quotes. In most environments, this will cause the program to not work. In an HTML5 or JavaScript environment, however, the value is converted to fit the situation.

Summary

In this article, we learned about the different types of variables and even looked at a few correct and incorrect variable declarations. If you are making a game, get used to making and setting lots of variables.
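The declarations above are written in generic pseudocode rather than in any one language. As a hedged sketch of the same idea, here is how similar declarations behave in JavaScript, where variables are not strictly typed and "broken" values are coerced rather than rejected:

```javascript
// Declared but not initialized; the score will change as the game runs.
let score;

// Declared and initialized in one step.
let timescale = 1.2;
let isDead = false;
let isShooting = false;
let name = "John Bura";

// JavaScript does not reject the broken examples outright; it coerces them.
// "false" in quotes is a string, not a Boolean:
let looksFalse = "false";
console.log(typeof looksFalse); // "string"

// Worse, a non-empty string is truthy, so it acts like true in a condition.
console.log(Boolean(looksFalse)); // true
```

This is exactly the "value changed to fit the situation" behavior described above for HTML5/JavaScript, and it is why assigning a quoted "false" is such an easy bug to write.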
The best part is that Construct 2 makes handling variables really easy.

Resources for Article:

Further resources on this subject:

- 2D game development with Monkey [article]
- Microsoft XNA 4.0 Game Development: Receiving Player Input [article]
- Flash Game Development: Making of Astro-PANIC! [article]


Saving Data to Create Longer Games

Packt
15 May 2014
6 min read
(For more resources related to this topic, see here.)

Creating collectibles to save

The Unity engine features an incredibly simple saving and loading system that can load your data between sessions in just a few lines of code. The downside of using Unity's built-in data management is that save data will be erased if the game is ever uninstalled from the OUYA console. Later, we'll talk about how to make your data persistent between installations, but for now, we'll set up some basic data-driven values in your marble prototype. Before we can load saved data, however, we have to create something to save.

Time for action – creating a basic collectible

Some games use save data to track the total number of times the player has obtained a collectible item. Players may not feel like it's worth gathering collectibles if they disappear when the game session is closed, but making the game track their long-term progress can give players the motivation to explore a game world and discover everything it has to offer. We're going to add collectibles to the marble game prototype you created and save them so that the player can see the total number of collectibles gathered across every play session. Perform the following steps to create a collectible:

Open your RollingMarble Unity project and double-click on the scene that has your level in it.

Create a new cylinder from the Create menu in your Hierarchy menu. Move the cylinder so that it rests on the level's platform. It should appear as shown in the following screenshot:

We don't want our collectible to look like a plain old cylinder, so manipulate it with the rotate and scale tools until it looks a little more like a coin. Obviously, you'll have a coin model in the final game that you can load, but we can customize and differentiate primitive objects for the purpose of our prototype.

Our primitive is starting to look like a coin, but it's still a bland gray color.
To make it look a little bit nicer, we'll use Unity to apply a material. A material tells the engine how an object should appear when it is rendered, including which textures and colors to use. Right now, we'll only apply a basic color, but later on we'll see how a material can store different kinds of textures and other data. Materials can be created and customized in a matter of minutes in Unity, and they're a great way to color simple objects or distinguish primitive shapes from one another.

Create a new folder named Materials in your Project window and right-click on it to create a new material named CoinMaterial, as shown in the following screenshot:

Click on the material that you just created and its properties will appear in the Inspector window. Click on the color box next to the Main Color property and change it to a yellow color. The colored sphere in the Material window will change to reflect how the material will look in real time, as shown in the following screenshot:

Our collectible coin now has a color, but as we can see from the sphere preview, it's still kind of dull. We want our coin to be shiny so that it catches the player's eye, so we'll change the Shader type, which dictates how light hits the object. The current Shader type on our coin material is Diffuse, which is a softer, nonreflective material. To make the coin shiny, change the Shader type to Specular. You'll see a reflective flare appear on the sphere preview; adjust the Shininess slider to see how different levels of specularity affect the material.

You may have noticed that another color value was added when you changed the material's shader from Diffuse to Specular; this value affects only the shiny parts of the object. You can make the material shine brighter by changing it from gray to white, or give its shininess a tint by using a completely new color.
Attach your material to the collectible object by clicking and dragging the material from the Project window and releasing it over the object in your Scene view. The object will look like the one shown in the following screenshot:

Our collectible coin object now has a unique shape and appearance, so it's a good idea to save it as a prefab. Create a Prefabs folder in your Project window if you haven't already, and use the folder's right-click menu to create a new blank prefab named Coin. Click and drag the coin object from the hierarchy to the prefab to complete the link. We'll add code to the coin later, but since we can change the prefab after we initially create it, don't worry about saving an incomplete collectible. Verify that the prefab link worked by clicking and dragging multiple instances of the prefab from the Project window onto the Scene view.

What just happened?

Until you start adding 3D models to your game, primitives are a great way to create placeholder objects, and materials are useful for making them look more complex and unique. Materials add color to objects, but they also contain a shader that affects the way light hits the object. The two most basic shaders are Diffuse (dull) and Specular (shiny), but there are several other shaders in Unity that can help make your object appear exactly as you want it. You can even code your own shaders using the ShaderLab language, which you can learn on your own using the documentation at http://docs.unity3d.com/Documentation/Components/SL-Reference.html. Next, we'll add some functionality to your coin to save the collection data.

Have a go hero – make your prototype stand out with materials

As materials are easy to set up with Unity's color picker and built-in shaders, you have a lot of options at your fingertips to quickly make your prototype stand out and look better than a basic grayscale mock-up.
Take any of your existing projects and see how far you can push the aesthetic with different combinations of colors and materials. Keep the following points in mind:

- Some shaders, such as Specular, have multiple colors that you can assign. Play around with different combinations to create a unique appearance.
- There are more shaders available to you than just the ones loaded into a new project; move your mouse over the Import Package option in Unity's Assets menu and import the Toon Shading package to add even more options to your shader collection.
- Complex object prefabs made of more than one primitive can have a different material on each primitive. Add multiple materials to a single object to help your user differentiate between its various parts and give your scene more detail.

Try changing the materials used in your scene until you come up with something unique and clean, as shown in the following screenshot of our cannon prototype with custom materials:


Guidelines for Setting Up the OUYA ODK

Packt
07 May 2014
5 min read
(For more resources related to this topic, see here.)

Starting with the OUYA Development Kit

The OUYA Development Kit (OUYA ODK) is a tool for creating games and applications for the OUYA console; its extensions and libraries are in the .jar format. It is released under Apache License Version 2.0. The OUYA ODK contains the following folders:

- Licenses: The SDK's games and applications depend on various open source libraries. This folder contains all the necessary authorizations for the successful compilation and publication of a project on the OUYA console, or for testing in the emulator.
- Samples: This folder has some scene examples, which show users how to use the Standard Development Kit.
- Javadoc: This folder contains the documentation of the Java classes, methods, and libraries.
- Libs: This folder contains the .jar files for the OUYA Java classes and their dependencies for the development of applications for the OUYA console.
- OUYA Framework APK file: This file contains the core of the OUYA software environment, which allows visualization of a project based on the OUYA environment.
- OUYA Launcher APK file: This file contains the OUYA launcher that displays the generated .apk file.

The ODK plugin within Unity3D

Download the Unity3D plugin for OUYA. In the developer portal, you will find these resources at https://github.com/ouya/ouya-unity-plugin. After downloading the ODK plugin, unzip the file on the desktop and import the ODK plugin for Unity3D into the engine's Assets folder; you will find several folders in it, including the following ones:

- Ouya: This folder contains the Examples, SDK, and StarterKit folders
- LitJson: This folder contains libraries that are important for compilation
- Plugins: This folder contains the Android folder, which is required for mobile projects, and "The Last Maya", created with the .ngui extension

Importing the ODK plugin within Unity3D

The OUYA Unity plugin can be imported into the Unity IDE.
Navigate to Assets | Import Package | Custom Package…. Find the Ouya Unity Plugin folder on the desktop and import all the files. The package is divided into Core and Examples. The Core folder contains the OUYA panel and all the code for building a project for the console. The Core and Examples folders can be used as individual packages and exported from the menu, as shown in the following screenshot:

Installing and configuring the ODK plugin

First, run the Unity3D application, navigate to File | Open Project, and select the folder where you put the OUYA Unity plugin. You can check whether Ouya Unity Plugin has been successfully imported by having a look at the window menu at the top of Unity3D, where the toolbars are located. In this manner, you can review the various components of the OUYA panel. When the OUYA panel loads, a window will be displayed with the following sections and buttons:

- Build Application: This is the first button and is used to compile, build, and create an Android Application Package (APK) file
- Build and Run Application: This button allows you to compile the application, generate an APK, and then run it on the emulator or publish directly to a device connected to the computer
- Compile: This button compiles the entire solution

The lower section displays the paths of the different libraries. Before using the OUYA plugin, you should edit the fields in the PlayerSettings window (specifically the Bundle Identifier field), set the Minimum API Level field to API level 16, and set the Default Orientation field to Landscape Left. Another mandatory button is the Bundle Identifier synchronizer, which synchronizes the Android manifest file (XML) and the identifiers of the Java packages. Remember that the package ID must be unique for each game and has to be edited to avoid synchronization problems.
Also, the OuyaGameObject (shown in the following screenshot) is very important for in-app purchases:

The OUYA panel

The Unity tab in the OUYA panel shows the path of the Unity JAR file, which houses the file's JAR class. This file is important because it is the one that communicates with the Unity Web Player. The Unity tab is shown in the following screenshot:

The Java JDK tab shows the paths of the Java Runtime installation, with all the components needed to properly compile a project for Android and OUYA, as shown in the following screenshot:

The Android SDK tab displays the current version of the SDK and contains the paths of the different components of the SDK: Android Jar Path, ADB Path, APT Path, and SDK Path, as shown in the following screenshot. These paths must correspond to the PATH environment variable of the operating system.

Finally, the last tab of the OUYA panel, Android NDK, shows the installation path of the C++ scripts for native builds, as shown in the following screenshot:

Installing and configuring the Java class

If at this point you want to perform native development using the NDK, or you have problems opening or compiling the OUYA project, you need to configure the Java files. To install and configure the Java class, perform the following steps:

- Download and install JDK 1.6 and configure the Java Runtime path in the PATH environment variable.
- Next, set a unique bundle identifier, com.yourcompany.gametitle, and hit the Sync button so that your packages and manifest match.
- Create a game in the developer portal that uses the bundle ID.
- Download the signing key (key.der) and save it in Unity.
- Compile the Java plugin and the Java application.
- Input your developer UUID from the developer portal into the OuyaGameObject in the scene.

Getting Started – An Introduction to GML

Packt
21 Apr 2014
7 min read
(For more resources related to this topic, see here.)

Creating GML scripts

Before diving into any actual code, we should address the various places where scripts can appear in GameMaker, as well as the reasoning behind placing scripts in one area versus another.

Creating GML scripts within an event

Within an object, each event added can either contain a script or call one. This will be the only instance when dragging and dropping is required, as the goal of scripting is to eliminate the need for it. To add a script to an event within an object, go to the control tab of the Object Properties menu of the object being edited. Under the Code label, the first two icons deal with scripts. As displayed in the following screenshot, the leftmost icon, which looks like a piece of paper, will create a script that is unique to that object type; the middle icon, which looks like a piece of paper with a green arrow, will allow a script resource to be selected and then called during the respective event. Creating scripts within events is most useful when the scripts within those events perform actions that are very specific to the object instance triggering the event. The following screenshot shows these object instances:

Creating scripts as resources

Navigating to Resources | Create Script or using the keyboard shortcut Shift + Ctrl + C will create a script resource. Once created, a new script should appear under the Scripts folder on the left side of the project, where resources are located. Creating a script as a resource is most useful in the following conditions:

- When many different objects utilize this functionality
- When a function requires multiple input values or arguments
- When global actions such as saving and loading are utilized
- When implementing complex logic and algorithms

Scripting a room's creation code

Room resources are specific resources where objects are placed and gameplay occurs.
Room resources can be created by navigating to Resources | Create room or using Shift + Ctrl + R. Rooms can also contain scripts. When editing a room, navigate to the settings tab within the Room Properties panel and you should see a button labeled Creation code, as seen in the following screenshot. When clicked, this will open a blank GML script. This script will be executed as soon as the player loads the specified room, before any objects trigger their own events. Using Creation code is essentially the same as having a script in the Create event of an object.

Understanding parts of GML scripts

GML scripts are made up of many different parts. The following section will go over these different parts and their syntax, formatting, and usage.

Programs

A program is a set of instructions that are followed in a specific order. One way to think of it is that every script written in GML is essentially a program. Programs in GML are usually enclosed within braces, { }, as shown in the following example:

    {
        // Defines an instanced string variable.
        str_text = "Hello World";

        // Every frame, 10 units are added to x, a built-in variable.
        x += 10;

        // If x is greater than 200 units, the string changes.
        if (x > 200)
        {
            str_text = "Hello Mars";
        }
    }

The previous code example contains two assignment expressions followed by a conditional statement, which in turn encloses another program containing an assignment expression. If the preceding script were an actual GML script, the initial set of braces enclosing the program would not be required. Each instruction or line of code ends with a semicolon (;). This is not strictly required, as a line break or return is sufficient, but the semicolon is a common symbol used in many other programming languages to indicate the end of an instruction, and using it is a good habit that improves the overall readability of your code.

snake_case

Before continuing with this overview of GML, it's very important to observe that the formatting used in GML programs is snake case.
Though it is not necessary to use this formatting, the built-in methods and constants of GML use it; so, for the sake of readability and consistency, it is recommended that you use snake casing, which has the following requirements:

- No capital letters are used
- All words are separated by underscores

Variables

Variables are the main working units within GML scripts and are used to represent values. Variables are unique in GML in that, unlike in some programming languages, they are not strictly typed, which means that a variable does not have to represent a specific data structure. Instead, variables can represent either of the following types:

- A number, also known as a real, such as 100 or 2.0312. Integers can also correspond to a particular instance of an object, room, script, or another type of resource.
- A string, which represents a collection of alphanumeric characters commonly used to display text, encased in either single or double quotation marks, for example, "Hello World".

Variable prefixes

As previously mentioned, the same variable can be assigned any of the mentioned types, which can cause a variety of problems. To combat this, the prefix of a variable name usually identifies the type of data stored within the variable, such as str_player_name (which represents a string). The following are the common prefixes that will be used:

- str: String
- spr: Sprites
- snd: Sounds
- bg: Backgrounds
- pth: Paths
- scr: Scripts
- fnt: Fonts
- tml: Timeline
- obj: Object
- rm: Room
- ps: Particle System
- pe: Particle Emitter
- pt: Particle Type
- ev: Event

Variable names cannot start with numbers or most other non-alphanumeric characters, so it is best to stick with basic letters.

Variable scope

Within GML scripts, variables have different scopes. This means that the way in which the values of variables are accessed and set varies. The following are the different scopes:

- Instance: These variables are unique to the instances, or copies, of each object.
They can be accessed and set by themselves or by other game objects, and they are the most common variables in GML.
- Local: Local variables are those that exist only within a function or script. They are declared using the var keyword and can be accessed only within the scripts in which they've been created.
- Global: A global variable can be accessed by any object through scripting. It belongs to the game and not to an individual object instance. There cannot be multiple global variables of the same name.
- Constants: Constants are variables whose values can only be read, not altered. They can be instanced or global variables. Instanced constants include, for example, object_index and sprite_width; the true and false variables are examples of global constants. Additionally, any created resource can be thought of as a global constant, representing its ID and unable to be assigned a new value.

The following example demonstrates the assignment of different variable types:

    // Local variable assignment.
    var a = 1;

    // Global variable declaration and assignment.
    globalvar b;
    b = 2;

    // Alternate global variable declaration and assignment.
    global.c = 10;

    // Instanced variable assignment through the use of "self".
    self.x = 10;

    /* Instanced variable assignment without the use of "self".
       Works identically to using "self". */
    y = 10;

Built-in variables

Some global and instanced variables are already provided by GameMaker: Studio for each game and object. Variables such as x, sprite_index, and image_speed are examples of built-in instanced variables, while some built-in variables are global, such as health, score, and lives. The use of these in a game is really up to personal preference, but their appropriate names do make them easier to remember. When any type of built-in variable is used in scripting, it will appear in a different color, the default being a light, pinkish red.
All built-in variables are documented in GameMaker: Studio's help contents, which can be accessed by navigating to Help | Contents... | Reference or by pressing F1.

Creating custom constants

Custom constants can be defined by going to Resources | Define Constants... or by pressing Shift + Ctrl + N. In this dialog, first a variable name and then a correlating value are set. By default, constants will appear in the same color as built-in variables when written in GML code. The following screenshot shows this interface with some custom constants created:


Skinning a character

Packt
21 Apr 2014
6 min read
(For more resources related to this topic, see here.)

Our world in 5000 AD is incomplete without our mutated human being, Mr. Green. Our Mr. Green is a rigged model, exported from Blender. All famous 3D games, from Counter Strike to World of Warcraft, use skinned models to produce the most impressive real-world model animations and kinematics. Hence, our learning now has to evolve to load Mr. Green and add the same quality of animation to our game. We will start our study of character animation by discussing the skeleton, the base of character animation upon which a body and its motion are built. Then, we will learn about skinning and how the bones of the skeleton are attached to the vertices, and then understand its animations. In this article, we will cover the basics of a character's skeleton, the basics of skinning, and some aspects of loading a rigged JSON model.

Understanding the basics of a character's skeleton

A character's skeleton is a posable framework of bones. These bones are connected by articulated joints, arranged in a hierarchical data structure. The skeleton is generally not rendered itself; it is used as an invisible armature to position and orient a character's skin. The joints are used for relative movement within the skeleton. They are represented by 4 x 4 linear transformation matrices (combinations of rotation, translation, and scale). The character skeleton is set up using only simple rotational joints, as they are sufficient to model the joints of real animals. Every joint has limited degrees of freedom (DOFs), the possible ranges of motion of an object. For instance, an elbow joint has one rotational DOF, while a shoulder joint has three DOFs, as the shoulder can rotate along three perpendicular axes. Individual joints usually have one to six DOFs. Refer to http://en.wikipedia.org/wiki/Six_degrees_of_freedom to understand the different degrees of freedom. A joint local matrix is constructed for each joint.
This matrix defines the position and orientation of each joint, relative to the joint above it in the hierarchy. The local matrices are used to compute the world space matrices of the joints, using the process of forward kinematics. The world space matrix is used to render the attached geometry and is also used for collision detection.

The digital character skeleton is analogous to the real-world skeleton of vertebrates. However, the bones of our digital human character do not have to correspond to the actual bones; it depends on the level of detail you require of the character. For example, you may or may not require cheek bones to animate facial expressions. Skeletons are not just used to animate vertebrates but also mechanical parts such as doors or wheels.

Comprehending the joint hierarchy

The topology of a skeleton is a tree, or an open directed graph. The joints are connected up in a hierarchical fashion to the selected root joint. The root joint has no parent and is represented in the model JSON file with a parent value of -1. All skeletons are kept as open trees without any closed loops, though this restriction does not prevent kinematic loops. Each node of the tree represents a joint, also called a bone; we use both terms interchangeably. For example, the shoulder is a joint and the upper arm is a bone, but the transformation matrix of both objects is the same, so mathematically we represent them as a single component with three DOFs. The amount of rotation of the shoulder joint is reflected by the upper arm's bone. The following figure shows a simple robotic bone hierarchy:

Understanding forward kinematics

Kinematics is a mathematical description of a motion without the underlying physical forces. Kinematics describes the position, velocity, and acceleration of an object. We use kinematics to calculate the position of an individual bone of the skeleton structure (the skeleton pose). Hence, we will limit our study to position and orientation.
The skeleton is purely a kinematic structure. Forward kinematics is used to compute the world space matrix of each bone from its DOF values. Inverse kinematics is used to calculate the DOF values from the position of the bone in the world.

Let's dive a little deeper into forward kinematics and study a simple case of a bone hierarchy that starts from the shoulder, moves to the elbow, and finally ends at the wrist. Each bone/joint has a local transformation matrix, this.modelMatrix. This local matrix is calculated from the bone's position and rotation. Let's say the model matrices of the wrist, elbow, and shoulder are this.modelMatrixwrist, this.modelMatrixelbow, and this.modelMatrixshoulder respectively. The world matrix is the transformation matrix that will be used by shaders as the model matrix, as it denotes the position and rotation in world space.

The world matrix for the wrist will be:

    this.worldMatrixwrist = this.worldMatrixelbow * this.modelMatrixwrist

The world matrix for the elbow will be:

    this.worldMatrixelbow = this.worldMatrixshoulder * this.modelMatrixelbow

If you look at the preceding equations, you will realize that to calculate the exact location of the wrist in world space, we first need to calculate the position of the elbow in world space. To calculate the position of the elbow, we first need to calculate the position of the shoulder. In other words, we need to calculate the world space coordinates of the parent before we can calculate those of its children. Hence, we use a depth-first tree traversal to traverse the complete skeleton tree, starting from its root node. A depth-first traversal begins by calculating the modelMatrix of the root node and traverses down through each of its children; a child node is visited and subsequently all of its children are traversed. After all of a node's children have been visited, control returns to that node's parent. We calculate each world matrix by concatenating the joint's parent's world matrix with its local matrix.
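The traversal just described can be sketched as follows. This is illustrative JavaScript, not the book's actual code: multiply() is a minimal stand-in for a matrix-library call (for example, glMatrix's mat4.multiply), and matrices are flat 16-element column-major arrays.

```javascript
// Multiply two flat column-major 4x4 matrices: returns a * b.
function multiply(a, b) {
  const out = new Array(16).fill(0);
  for (let c = 0; c < 4; c++) {
    for (let r = 0; r < 4; r++) {
      for (let k = 0; k < 4; k++) {
        out[c * 4 + r] += a[k * 4 + r] * b[c * 4 + k];
      }
    }
  }
  return out;
}

function translation(x, y, z) { // column-major translation matrix
  return [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, x, y, z, 1];
}

function updateWorldMatrix(joint, parentWorldMatrix) {
  // worldMatrix = parent's worldMatrix * local modelMatrix
  joint.worldMatrix = parentWorldMatrix
    ? multiply(parentWorldMatrix, joint.modelMatrix)
    : joint.modelMatrix.slice();
  joint.children.forEach(function (child) {
    updateWorldMatrix(child, joint.worldMatrix); // depth-first recursion
  });
}

// The shoulder -> elbow -> wrist chain from the text:
const wrist = { modelMatrix: translation(3, 0, 0), children: [] };
const elbow = { modelMatrix: translation(2, 0, 0), children: [wrist] };
const shoulder = { modelMatrix: translation(1, 0, 0), children: [elbow] };
updateWorldMatrix(shoulder, null);
// wrist.worldMatrix now places the wrist 6 units along x in world space
```

Note how the wrist's world matrix is never computed directly from its own DOFs; it falls out of the recursion once the shoulder and elbow have been processed.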
The computation of a local matrix from the DOF values, and then of the world matrix from the parent's world matrix, is what we call forward kinematics. Let's now define some important terms that we will often use:

Joint DOFs: A movable joint's motion can generally be described by six DOFs: three for position and three for rotation. DOF is a general term:

    this.position = vec3.fromValues(x, y, z);
    this.quaternion = quat.fromValues(x, y, z, w);
    this.scale = vec3.fromValues(1, 1, 1);

We use quaternions to store rotational transformations to avoid issues such as gimbal lock. The quaternion holds the DOF values for rotation around the x, y, and z axes.

Joint offset: Joints have a fixed offset position in the parent node's space. When we skin a joint, we change the position of each joint to match the mesh. This new fixed position acts as a pivot point for the joint's movement. The pivot point of the elbow is at a fixed location relative to the shoulder joint. This position is denoted by a position vector in the joint's local matrix and is stored in the m31, m32, and m33 indices of the matrix. The offset matrix also holds the initial rotational values.
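To make the DOF terms concrete, here is a hedged sketch of building a local matrix from position, rotation, and scale. A real implementation would build the rotation from the quaternion (for example, with glMatrix's mat4.fromRotationTranslationScale); to stay dependency-free, this sketch substitutes a single z-axis angle for the quaternion.

```javascript
// Hypothetical local-matrix builder for illustration only; a full version
// would consume the quaternion DOFs rather than one planar angle.
function localMatrix(position, angleZ, scale) {
  const c = Math.cos(angleZ), s = Math.sin(angleZ);
  // Column-major 4x4: rotation * scale in the upper 3x3; the joint offset
  // (the pivot position in the parent's space) sits in the last column.
  return [
     c * scale[0], s * scale[0], 0, 0,
    -s * scale[1], c * scale[1], 0, 0,
     0, 0, scale[2], 0,
     position[0], position[1], position[2], 1
  ];
}

const m = localMatrix([1, 2, 3], 0, [1, 1, 1]);
// With no rotation and unit scale, only the joint offset survives
// (flat indices 12-14 in this column-major layout)
```

Whatever the storage convention, the key point is the same as in the text: the joint offset lives inside the local matrix and acts as the pivot for the joint's rotation.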

Making an entity multiplayer-ready

Packt
07 Apr 2014
8 min read
(For more resources related to this topic, see here.)

Understanding the dataflow of Lua entities in a multiplayer environment

When using your own Lua entities in a multiplayer environment, you need to make sure that everything your entity does on one of the clients is also triggered on all other clients. Let's take a light switch as an example. If one of the players turns on the light switch, the switch should also be flipped on all other clients.

Each client connected to the game has an instance of that light switch in their level. The CryENGINE network implementation already handles all the work involved in linking these individual instances together using network entity IDs. Each light switch can contact its own instances on all connected clients and call its functions over the network. All you need to do is use the functionality that is already there.

One way of implementing the light switch functionality is to turn on the switch in the entity as soon as the OnUsed() event is triggered and then send a message to all other clients in the network to also turn on their lights. This might work for something as simple as a switch, but it can soon get messy when the entity becomes more complex. Ping times and message ordering can lead to inconsistencies if two players try to flip the light switch at the same time. The representation of the process would look like the following diagram:

Not so good – the light switch entity could trigger its own switch on all network instances of itself

Doing it this way, with the clients notifying each other, can cause many problems. In a more stable solution, these kinds of events are usually run through the server. The server entity (let's call it the master entity) determines the state of the entities across the network at all times and distributes the events throughout the network.
This could be visualized as shown in the following diagram:

Better – the light switch entity calls the server, which will distribute the event to all clients

In the light switch scenario mentioned earlier, the light switch entity would send an event to the server's light switch entity first. Then, the server entity would call each light switch entity, including the original sender, to turn on their lights. It is important to understand that the entity that originally received the event does nothing else but inform the server about it. The actual light is not turned on until the server calls back to all entities with the request to do so.

The aforementioned dataflow works in single player as well, as CryENGINE will just pretend that the local machine is both the client and the server. This way, you will not have to make adjustments or add extra code to your entity to check whether the game is running in single player or multiplayer.

In a multiplayer environment with a server and multiple clients, it is important to set the script up so that it acts properly and the correct functions are called on either the client or the server. The first step to achieve this is to add a Client and a Server table to the entity script using the following code:

    Client = {},
    Server = {},

With this addition, our script table looks like the following code snippet:

    Testy = {
      Properties = {
        fileModel = "",
        Physics = {
          bRigidBody = 1,
          bRigidBodyActive = 1,
          Density = -1,
          Mass = -1,
        },
      },
      Client = {},
      Server = {},
      Editor = {
        Icon = "User.bmp",
      },
    }

Now, we can go ahead and modify the functions so that they work properly in multiplayer. We do this by adding the Client and Server subtables to our script. This way, the network system will be able to identify the client/server functions on the entity.

The Client/Server functions

The Client/Server functions are defined within your entity script by using the respective subtables that we previously defined in the entity table.
Let's update our script and add a simple function that outputs a debug text into the console on each client. In order for everything to work properly, we first need to update our OnInit() function and make sure it gets called on the server properly. Simply add a Server subtable to the function so that it looks like the following code snippet:

    function Testy.Server:OnInit()
      self:OnReset();
    end;

This way, our OnReset() function will still be called properly. Now, we can add a new function that outputs a debug text for us. Let's keep it simple and just make it output a console log using the CryENGINE Log function, as shown in the following code snippet:

    function Testy.Client:PrintLogOutput(text)
      Log(text);
    end;

This function will simply print some text into the CryENGINE console. Of course, you can add more sophisticated code at this point to be executed on the client. Please also note the Client subtable in the function definition that tells the engine that this is a client function.

In the next step, we have to add a way to trigger this function so that we can test the behavior properly. There are many ways of doing this, but to keep things simple, we will use the OnHit() callback function that is triggered automatically when the entity is hit by something; for example, a bullet. This way, we can test our script easily by just shooting at our entity.

The OnHit() callback function is quite simple. All it needs to do in our case is call our PrintLogOutput() function, or rather request that the server call it. For this purpose, we add another function to be called on the server that calls our PrintLogOutput() function. Again, please note that we are using the Client subtable of the entity to catch the hit that happens on the client.
Our two new functions should look as shown in the following code snippet:

    function Testy.Client:OnHit(user)
      self.server:SvRequestLogOutput("My Text!");
    end

    function Testy.Server:SvRequestLogOutput(text)
      self.allClients:PrintLogOutput(text);
    end

We now have two new functions: one is a client function calling a server function, and the other is a server function calling the actual function on all the clients.

The Remote Method Invocation definitions

As a last step before we are finished, we need to expose our entity and its functions to the network. We can do this by adding a table within the root of our entity script that defines the necessary Remote Method Invocations (RMIs). The Net.Expose table will expose our entity and its functions to the network so that they can be called remotely, as shown in the following code snippet:

    Net.Expose {
      Class = Testy,
      ClientMethods = {
        PrintLogOutput = { RELIABLE_UNORDERED, POST_ATTACH, STRING },
      },
      ServerMethods = {
        SvRequestLogOutput = { RELIABLE_UNORDERED, POST_ATTACH, STRING },
      },
      ServerProperties = {
      },
    };

Each RMI is defined by providing a function name, a set of RMI flags, and additional parameters. The first RMI flag is an order flag and defines the ordering of the network packets. You can choose between the following options:

- UNRELIABLE_ORDERED
- RELIABLE_ORDERED
- RELIABLE_UNORDERED

These flags tell the engine whether the order of the packets is important or not. The attachment flag defines at what time the RMI is attached during the serialization process of the network. This parameter can be either of the following flags:

- PREATTACH: This flag attaches the RMI before game data serialization.
- POSTATTACH: This flag attaches the RMI after game data serialization.
- NOATTACH: This flag is used when it is not important whether the RMI is attached before or after the game data serialization.
- FAST: This flag performs an immediate transfer of the RMI without waiting for a frame update.
This flag is very CPU intensive and should be avoided if possible.

The Net.Expose table we just added defines which functions will be exposed on the client and the server, and it gives us access to the following three subtables:

- allClients
- otherClients
- server

With these, we can now call functions either on the server or on the clients. You can use the allClients subtable to call a function on all clients, or the otherClients subtable to call it on all clients except your own.

At this point, the entity table of our script should look as follows:

    Testy = {
      Properties = {
        fileModel = "",
        Physics = {
          bRigidBody = 1,
          bRigidBodyActive = 1,
          Density = -1,
          Mass = -1,
        },
      },
      Client = {},
      Server = {},
      Editor = {
        Icon = "User.bmp",
        ShowBounds = 1,
      },
    }

    Net.Expose {
      Class = Testy,
      ClientMethods = {
        PrintLogOutput = { RELIABLE_UNORDERED, POST_ATTACH, STRING },
      },
      ServerMethods = {
        SvRequestLogOutput = { RELIABLE_UNORDERED, POST_ATTACH, STRING },
      },
      ServerProperties = {
      },
    };

This defines our entity and its network exposure. With our latest updates, the rest of our script with all its functions should look as follows:

    function Testy.Server:OnInit()
      self:OnReset();
    end;

    function Testy:OnReset()
      local props = self.Properties;
      if (not EmptyString(props.fileModel)) then
        self:LoadObject(0, props.fileModel);
      end;
      EntityCommon.PhysicalizeRigid(self, 0, props.Physics, 0);
      self:DrawSlot(0, 1);
    end;

    function Testy:OnPropertyChange()
      self:OnReset();
    end;

    function Testy.Client:PrintLogOutput(text)
      Log(text);
    end;

    function Testy.Client:OnHit(user)
      self.server:SvRequestLogOutput("My Text!");
    end

    function Testy.Server:SvRequestLogOutput(text)
      self.allClients:PrintLogOutput(text);
    end

With these functions added to our entity, everything should be ready to go, and you can test the behavior in game mode. When the entity is shot at, the OnHit() function will request the log output from the server, and the server calls the actual function on all clients.
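The client -> server -> all-clients relay at the heart of this script can be modeled outside the engine. The following is plain JavaScript that illustrates only the dataflow pattern; it is not the CryENGINE API, and the function names merely mirror the Lua script above.

```javascript
// Model of the RMI relay: a hit client only informs the server, and the
// server fans the call out to every client, including the original sender.
function makeNetwork() {
  const clients = [];
  const server = {
    svRequestLogOutput: function (text) {
      clients.forEach(function (client) { client.printLogOutput(text); });
    }
  };
  function addClient() {
    const client = {
      log: [],
      printLogOutput: function (text) { this.log.push(text); },
      onHit: function () {
        // the hit client does nothing locally; it just asks the server
        server.svRequestLogOutput('My Text!');
      }
    };
    clients.push(client);
    return client;
  }
  return { addClient: addClient };
}

const net = makeNetwork();
const clientA = net.addClient();
const clientB = net.addClient();
clientA.onHit(); // both clients, including the sender, receive the call
```

Because the state change always originates from one authority, two simultaneous "hits" can never leave the clients disagreeing, which is exactly the problem the server-routed design solves.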
Summary

In this article, we learned how to make an entity multiplayer-ready by understanding the dataflow of Lua entities, understanding the Client/Server functions, and exposing our entities to the network using the Remote Method Invocation definitions.

Resources for Article:

Further resources on this subject:

- CryENGINE 3: Terrain Sculpting [Article]
- CryENGINE 3: Breaking Ground with Sandbox [Article]
- CryENGINE 3: Fun Physics [Article]

Preparing Your Scene

Packt
21 Mar 2014
10 min read
(For more resources related to this topic, see here.)

Element scenes in After Effects

Element 3D scenes, unfortunately, are currently limited to five object groups. An object group is a collection of geometrical objects that is animated together. Under some circumstances, you may have multiple objects in a group, but not in this example. As a result, much of this article is devoted to tricks to overcome the limitations in Element 3D. We're going to cover the following topics:

- Saving your objects to your library so that they may be used in other instances of the Element plugin
- Setting up your objects in the After Effects interface and generating null objects to animate them easily
- Setting up your camera and lights in the scene and creating a new null object to create a focal point for the depth of field

Let's get started!

Saving your objects

First things first: let's create a folder in your Model Browser window. Navigate to your default objects directory for Element 3D (Documents/VideoCopilot/Models). Create a folder called BookModels inside that directory. Now, open your AEX project and go into the Scene Setup window in Element 3D. Your folder won't show up in the Model Browser (because there are no objects in that folder just yet). Let's save the lampshade glass to the presets now. The following screenshot shows the interface for saving your objects:

Right-click on the lampshade glass object and select Save Model Preset. Now, you will see the directory we created. Unfortunately, there is no way to create a new folder from within this window yet (hence the need to navigate to the directory using Windows Explorer, or Finder on Mac, to create the directory first). Save all of your objects to this directory, as shown in the following list:

- lampshade(glass)
- Lamp(base)
- Table
- WineBottle
- PepperShaker
- SaltShaker

Note that we created a pepper shaker (in addition to our salt shaker).
This was done by saving the salt shaker, then changing the texture to something darker, and saving it again as a pepper shaker. This can be a great help when creating slight variants of your objects. The following screenshot shows how your Model Browser window should look:

Preparing our scene

Now that we have all our objects saved as presets, we can start populating our scene. Start off by deleting all your objects from the scene (click on the small X on the right side of the object bar in the Scene window). Don't worry. This is why we set them up as presets. Now we can bring them back in the right order.

Setting up the lamp

Let's start with the lamp. We'll be putting the lamp back in its lampshade and adding bulbs to the sockets. To do this, let's execute the following steps:

1. In your Model Browser window, click on your lamp, then your lampshade, navigate to your starter pack, and click on bulb_on.
2. Then, in your Scene window, put the lamp in group 1, the shade in group 2, and the bulbs in group 3, as shown in the following screenshot:

But wait. There's more! We have one bulb for four light sockets (it's a pretty huge bulb). That's okay! Element 3D wasn't really designed as a full 3D animation package. This is probably why it still has some of the limitations it does. Instead, it was designed as a way to use 3D objects as particles and replicate them in interesting ways. We'll get more into these features in the advanced sections, but it's time to get your feet wet by replicating the bulbs.

Replicating the bulbs

The shade and lamp should appear perfectly together (as if they were modeled together). The bulb is a stock asset of Element 3D, so it must be adjusted using the following steps:

1. First, let's create a camera (so that we can accurately look around). Create a camera layer called Camera 1 using the stock 50 mm preset.
Remember, you can move your camera around using the camera tool (press C on your keyboard, and then the mouse buttons will let you move your view around). Position your camera over the lamp so that we can see through the lampshades.

Now, open the effect controls again for the ElementWineAndLamp layer. We don't need to touch anything in group 1 or 2 (that is, our lampshade and lamp). Since we created a symmetrical lamp, all we need to do is position one bulb and replicate it (like an array).

Positioning and replicating an object

Open Group 3 and then the Particle Replicator option. If we set the replicator shape to a plane, we will be able to replicate the bulbs symmetrically on a plane. Pretty self-explanatory, right? Let's set that first, as setting it later will change the coordinate system for our object and throw off its placement.

A little down the list from the Replicator Shape parameter are our Position XY, Position Z, and Scale Shape parameters. Mess with these to get your bulb directly into one of the bulb sockets. Then, you can turn the Particle Count parameter up to 4, and you'll end up with four bulbs positioned directly in the sockets.

If they're not lined up, you're not out of luck! Directly under the Scale Shape parameter is Scale XYZ. It's important to note that Scale XYZ does not scale the object. Instead, it scales the replicator shape. There is no way in Element 3D to scale objects on their individual axes as of yet. Just remember, Scale Shape scales the geometry and Scale XYZ scales the replicator. You can also use the Particle Size parameter inside Particle Look to adjust object sizes.

If you used your own lamp for this exercise, play around; you'll get it. If you downloaded the sample lamp, the following screenshot shows the correct settings to get the bulbs into place. Don't worry if they're not exact. They aren't the hero objects in this animation.
They're just there in case we see them through the lampshades or catch a glimpse of them. The result and settings should look similar to what is shown in the following screenshot:

Lighting the lamp

Luckily, lighting our scene is relatively easy: there's a lamp! Let's create an ambient light first. If our scene is going to be believable, we're going to need a blue light for ambient lighting. Create a new light with the type set to Ambient, a light blue color, and an intensity of 10%. Why so low? We want dramatic lighting in this scene. We'll keep the details without losing contrast with a very low ambient light. Call this light Ambient Light. The settings should look similar to what is shown in the following screenshot:

So, why did we make it blue? Well, for a lamp to be on, it's probably night. If you look at any film footage, night is really denoted by shades of blue. Therefore, the light of night is blue. For the same reason, our lamp bulbs will have a bit of orangey-yellow to them: they're incandescent bulbs, which have a very warm look (also called color temperature). If you're going to be an animation god, you have to pay attention to everything around you: timings, lighting, and colors (actual colors and not just perceived colors).

With all this in mind, let's create our first (of four) bulb lights by performing the following steps:

1. Create a point light that is slightly orangey-yellow (be subtle) with an intensity of 100 percent. Don't worry just yet about any shadows or falloffs. These will actually be controlled within Element 3D.
2. Name this light Light 1. The settings should look similar to what is shown in the following screenshot:
3. Now, position this light directly on your first light bulb. Make sure you rotate your camera to ensure that it's placed properly.
Once that's done, you can simply duplicate the light (Ctrl + D) and position the duplicate on another bulb, either on the x or the z axis, until you eventually have four lights (one on each bulb) in addition to your ambient light. The result should look like the following screenshot. Remember, lighting is the key to stunning looks. Always take the time to light your scenes well. The following screenshot shows the location of the lights directly on the bulbs:

We'll finalize our lighting's falloff and the illusion of shadows in a bit. For now, let's move on.

Adding the table and wine bottle

Obviously, we're not going to be able to add the wine bottle, salt and pepper shakers, and the table to one layer of Element in AEX; hence the naming convention calling out the lamp and the bottle. We're going to break up our other objects across other layers within AEX. We'll get to that, but don't worry, there's a trick to do this. First, let's set up this layer.

Place the table and move it (and scale it) using the techniques you already learned for positioning the first lamp. Position this table under the lamp. Do not move the lamp. Then, do the same with the wine bottle, placing it on top of the table. Use group 4 for the table and group 5 for the bottle. So far, your scene should look similar to what is shown in the following screenshot:

Finishing the initial setup

Now we have all of the five groups used up, but before we can add more objects (in other layers), we want to finalize our render settings (so that we don't have to remember them and keep adjusting more Element layers). This will make all our layers look the same when rendered and give the illusion that they were all on the same layer.

Faking shadows

Again, there is no ray tracing in Element 3D, and because the lights are native to AEX and can only project shadows onto objects that AEX can see for itself, using AEX's shadow engine isn't an option. So we have to fake them.
This can be done with something called ambient occlusion (AO). When light goes into a corner, it has a hard time bouncing out again. This means that, by nature, corners are darker than non-corners. This principle is called ambient occlusion. Element 3D does have this feature, so by turning the AO way up, we can fake shadows. Some surfaces receive shadows better than others, so we'll have to play around a bit.

Start by going into your E3D effect controls. Under Render Settings, you'll see Ambient Occlusion. Tick the Enable AO checkbox. Also, turn the intensity up to 10 and crank up the samples to 50. The sample amount is simple: the larger the number, the smoother the AO. However, the smoother the AO, the longer the render time. Crank up the AO Radius option to 3.2 and the AO Gamma option to 3.2. Watch what happens in your Preview window while you do this. Your settings should look similar to what is shown in the following screenshot:

Metals don't receive shadows very well, so let's turn the AO down on the gold. Go back into your scene settings, and on the gold shader (down in the advanced section), turn down your AO Amount option to 0.42. The final result should look similar to what is shown in the following screenshot:

Light falloff

A candle is very bright at its flame, but a far wall is lit only dimly by it. This effect is true of light bulbs as well. This principle is called light falloff. To add some realism and dramatic lighting to our scene, let's adjust this setting. Inside Render Settings in the Element effect controls, you'll see Lighting. Turn the Light Falloff setting up to 2.49. The larger this number, the more severe the falloff. The result should look similar to what is shown in the following screenshot:
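The principle can be sketched numerically. Note that Element 3D's exact falloff curve is internal to the plugin, so the inverse-power formula below is purely an assumption used to illustrate why a larger falloff exponent darkens distant surfaces faster; the function and parameter names are hypothetical.

```javascript
// Illustration only: intensity falling off with distance raised to the
// falloff exponent (NOT Element 3D's documented formula).
function falloffIntensity(baseIntensity, distance, falloff) {
  return baseIntensity / Math.pow(distance, falloff);
}

// With the exponent from the text (2.49), a surface four times farther
// from the bulb receives far less light than a nearby one:
const nearSurface = falloffIntensity(100, 1.5, 2.49);
const farSurface = falloffIntensity(100, 6.0, 2.49);
```

The same comparison with a small exponent would leave the two surfaces much closer in brightness, which is why raising the setting reads as more dramatic lighting.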

Adding Animations

Packt
19 Mar 2014
10 min read
(For more resources related to this topic, see here.)

Exploring 3D hierarchies

The ability to parent objects to one another is a versatile feature in Unity3D. So far in this article, we have seen how hierarchies can be used as a means of associating objects with one another and organizing them into hierarchies. One example of this is the character Prefabs with their child hats that we developed for the player Prefab and the racers. In this example, by dragging-and-dropping the hat object onto the player's body, we associated the hat with the body object by putting the child into the parent's coordinate system. After doing this, we saw that the rotations, translations, and scales of the body's transform component were propagated to the hat. In practice, this is how we attach objects to one another in 3D games: by parenting them to different parents and thereby changing their coordinate systems or frames of reference.

Another example of using hierarchies as a data structure was for collections. Examples of this are the splineData collections we developed for the NPCs. These objects had a single game object at the head and a collection of data as child objects. We could then perform operations on the head, which let us process all of the child objects in the collection (in our case, installing the waypoints).

A third use of hierarchies was for animation. Since rotations, translations, and scales that are applied to a parent transform are propagated to all of the children, we have a way of developing fine-tuned motion for a hierarchy of objects. It turns out that characters in games and animated objects alike use this hierarchy-of-transforms technique to implement their motion.

Skinned meshes in Unity3D

A skinned mesh is a set of polygons whose positions and rotations are computed based on the positions of the various transforms in a hierarchy.
Instead of each GameObject in the hierarchy having its own mesh, a single set of polygons is shared across a number of transforms. This results in a single mesh that we call a skin because it envelops the transforms in a skin-like structure. It turns out that this type of mesh is great for in-game characters because it moves like skin. We will now use www.mixamo.com to download some free models and animations for our game.

Acquiring and importing models

Let's download a character model for the hero of our game:

1. Open your favorite Internet browser and go to www.mixamo.com.
2. Click on Characters and scroll through the list of skinned models. From the models that are listed free, choose the one that you want to use. At the time of writing this article, we chose Justin from the free models available, as shown in the following screenshot:
3. Click on Justin and you will be presented with a preview window of the character on the next page. Click on Download to prepare the model for download. Note that you may have to sign up for an account on the website to continue.
4. Once you have clicked on Download, the small downloads pop-up window at the bottom-right corner of the screen will contain your model. Open the tab and select the model. Make sure to set the download format to FBX for Unity (.fbx) and then click on Download again.

FBX (Filmbox), originally created by a company of the same name and now owned by AutoDesk, is an industry-standard format for models and animations. Congratulations! You have downloaded the model we will use for the main player character. While this model is downloaded and saved to the Downloads folder of your browser, go back to the character select page and choose two more models to use for other NPCs in the game.
At the time of writing this article, we chose Alexis and Zombie from the selection of free models, as shown in the following screenshot:

1. Go to Unity, create a folder in your Project tab named Chapter8, and inside this folder, create a folder named models.
2. Right-click on this folder and select Open In Explorer. Once you have the folder opened, drag-and-drop the two character models from your Downloads folder into the models folder. This will trigger the process of importing a model into Unity, and you should then see your models in your Project view, as shown in the following screenshot:
3. Click on each model in the models folder, and note that the Preview tab shows a t-pose of the model as well as some import options.
4. In the Inspector pane, under the Rig tab, ensure that Animation Type is set to Humanoid instead of Generic. The various rig options tell Unity how to animate the skinned mesh on the model. While Generic would work, choosing Humanoid will give us more options when animating under Mechanim.
5. Let Unity create the avatar definition file for you; simply click on Apply to change the Rig settings, as shown in the following screenshot:

We can now drag-and-drop the characters from the Project tab into the Scene view of the level, as shown in the following screenshot:

Congratulations! You have successfully downloaded, imported, and instantiated skinned mesh models. Now, let's learn how to animate them, starting with the main player's character.

Exploring the Mechanim animation system

The Mechanim animation system allows the Unity3D user to integrate animations into complex state machines in an intuitive and visual way! It also has advanced features that allow the user to fine-tune the motion of their characters, from blend trees that mix, combine, and switch between animations, to Inverse Kinematics (IK) that adjusts the hands and feet of the character after animating.
To develop an FSM for our main player, we need to consider the gameplay, moves, and features that the player represents in the game.

Choosing appropriate animations

In level one, our character needs to be able to walk around the world, collecting flags and returning them to the flag monument. Let's go back to www.mixamo.com, click on Animations, and download the free Idle, Gangnam Style, and Zombie Running animations, as shown in the following screenshot:

Building a simple character animation FSM

Let's build an FSM that the main character will use. To start, let's develop the locomotion system:

1. Import the downloaded animations into a new folder named anims, in Chapter8. If you downloaded the animations attached to a skin, don't worry. We can remove the skin from the model when it is imported and apply the animations to the FSM that you will build.
2. Open the scene file from the first gameplay level, TESTBED1.
3. Drag-and-drop the Justin model from the Projects tab into the Scene view and rename it Justin.
4. Click on Justin and add an Animator component. This is the component that drives a character with the Mechanim system. Once you have added this component, you will be able to animate the character with the Mechanim system.
5. Create a new animator controller and call it JustinController. Drag-and-drop JustinController into the Controller reference on the Animator component of the Justin instance. The animator controller is the file that will store the specific Mechanim FSM that the Animator component will use to drive the character. Think of it as a container for the FSM.
6. Click on the Justin@t-pose model from the Project tab, and drag-and-drop the avatar definition file from the model onto the Avatar reference on the Animator component of the Justin instance.
7. Go to the Window drop-down menu and select Animator. You will see a new tab open up beside the Scene view.
With your Justin model selected, you should see an empty Animator panel inside the tab, as shown in the following screenshot: Right now our Justin model has no animations in his FSM. Let's add the idle animation (named Idle_1) from the Adam model we downloaded. You can drag-and-drop it from the Project view to any location inside this panel. That's all there is to it! Now, we have a single anim FSM attached to our character. When you play the game now, it should show Justin playing the Idle animation. You may notice that the loop doesn't repeat or cycle repeatedly. To fix this, you need to duplicate the animation and then select the Loop Pose checkbox. Highlight the animation child object idle_1 and press the Ctrl + D shortcut to duplicate it. The duplicate will appear outside of the hierarchy. You can then rename it to a name of your choice. Let's choose Idle as shown in the following screenshot: Now, click on Idle, and in the Inspector window, make sure that Loop Pose is selected. Congratulations! Using this Idle animation now results in a character who idles in a loop. Let's take a look at adding the walk animation. Click on the Zombie Running animation, which is a child asset of the Zombie model, and duplicate it such that a new copy appears in the Project window. Rename this copy Run. Click on this animation and make sure to check the Loop Pose checkbox so that the animation runs in cycles. Drag-and-drop the Run animation into the Animator tab. You should now have two animations in your FSM, with the default animation still as Idle; if you run the game, Justin should still just be idle. To make him switch animations, we need to do the following: Add some transitions to the Run animation from the Idle animation and vice versa. Trigger the transitions from a script. You will want to switch from the Idle to the Run animation when the player's speed (as determined from the script) is greater than a small number (let's say 0.1 f). 
Since the speed variable only lives in the script, we need a way for the script to communicate with the animation, and we will do this with parameters. In your Animator tab, note that the FSM we are developing lives in the Base Layer screen. It is possible to add multiple animation layers by clicking on the + sign under Base Layer; this lets the programmer design multiple concurrent animation FSMs, which can be used to build up more complex animation. For now, the Base Layer is all we need:

1. Add a new parameter by clicking on the + sign beside the Parameters panel. Select float from the list of datatypes. You should now see a new parameter; name it speed as shown in the following screenshot:
2. Right-click on the Idle animation and select Make Transition. A white arrow now extends from Idle and tracks the position of your mouse pointer. If you now click on the Run animation, the transition from Idle to Run will lock into place.
3. Left-click on the little white triangle of the transition and observe the Inspector for the transition itself. Scroll down to the very bottom of the Inspector window and set the condition for the transition to speed greater than 0.1, as shown in the following screenshot:
4. Make the transition from the Run animation cycle back to the Idle cycle by following the same procedure: right-click on the Run animation to start the transition, then left-click on Idle. Once the transition is locked into place, left-click on it and, under Conditions, set the condition to speed less than 0.09.

Congratulations! Our character will now transition from Idle to Run when the speed crosses the 0.1 threshold. The transition is a nice blend from the first animation to the second over a brief period of time, and this is indicated in the transition graph.
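To make the script-driven trigger described above concrete, a movement script can feed the character's current speed into the speed parameter each frame; the Animator then evaluates the transition conditions against it. This is a hedged sketch only: the CharacterController as the velocity source is our assumption about how this player is moved, not something established by the article.

```csharp
using UnityEngine;

// Sketch: drive the Mecanim "speed" parameter from gameplay code so that
// the Idle <-> Run transitions can fire. Assumes the player is moved by a
// CharacterController; substitute whatever your controller actually computes.
public class JustinLocomotion : MonoBehaviour
{
    private Animator animator;
    private CharacterController controller;

    void Start()
    {
        animator = GetComponent<Animator>();
        controller = GetComponent<CharacterController>();
    }

    void Update()
    {
        // Horizontal speed only; the string must match the parameter name exactly.
        Vector3 v = controller.velocity;
        float speed = new Vector3(v.x, 0.0f, v.z).magnitude;
        animator.SetFloat("speed", speed);
    }
}
```

With this in place, walking pushes the parameter past 0.1 and the FSM blends into Run; stopping lets it fall below 0.09 and the FSM blends back to Idle.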

Introducing a new game type – top-down

Packt
18 Mar 2014
10 min read
(For more resources related to this topic, see here.) We want to start a new little game prototype. Let's create a little top-down shooter. For that reason, create a new application. For now, let's choose one of my favorite resolutions for retro games: 480 x 320. Here are the steps to create a basic top-down character: Click on Application in the workspace toolbar, choose Window in the properties, and select 480 x 320. You'll be asked to apply these settings. Your application's resolution has been set. Now set the size of your frame to 800 x 600 to create more space for your game objects. Create an active object, center hotspot, and action point. Set the movement of your object to Eight Directions, and change the properties to your desired values. You have just created a basic top-down character that you can steer with the arrow keys (or later with the touch joystick if you want to create a mobile game). Of course, you can also visit the Clickteam forum and search for a 360 degree movement example to get a more advanced movement system. In our case, the built-in eight-directions movement will do a great job though. Create one more Active object. This will be your bullet. Change the graphics if you want to. Let's create a simple shooting event: Repeat while "button 1" is pressed – Launch bullet in the direction "player" with a speed of 100 Too many bullets are created while pressing button 1 now. You can trigger the bullets every 10 seconds with conditions from the timer. Just add the following condition to the existing button condition: Every 10 seconds This event will create one bullet every 10 seconds while button 1 is pressed. One more thing you could do is center the display at your player's position to activate scrolling just like in the platformer example: Enemy movements As a next step, we want to create some basic enemy movements. Remember, this is just a prototype, so we don't really care about graphics. 
We just want to test different movements and events to get basic knowledge about Fusion.

Path movement

The simplest method to make your characters move might be on a path. Create an active object (an enemy), and set its movement to Path. Now hit the Edit button and place at least one node for a path. Also activate Reverse at End, and loop the movement in the Path Movement Setup. No matter what game type you are creating, the path movement can be used for a simple platformer enemy as well as for top-down spaceships.

Bouncing ball movement

The bouncing ball movement can be used for a million situations. Despite its name, it is just another simple motion. Create an active object and set the movement to Bouncing Ball. Change the movement properties to whatever fits your game dynamic.

We want our object to bounce whenever it leaves the game area, and you will only need one event to trigger this situation. Start a new condition: navigate to Set Position | Test position of "enemy". Hit all the arrows pointing outside the frame area in the popup that appears. This will create the condition Enemy leaves the play area. Now select your enemy object and create the action: navigate to Movement | Bounce.

Just before your enemy object leaves the game area, it will bounce in a random direction within the frame. It will move forever within your game, until you blow it to pieces of course.

Direction detection – a basic AI

You can easily modify your bouncing ball enemy to follow your player object wherever it might go and create your first little Artificial Intelligence (AI). That actually sounds pretty cool, doesn't it? Use the bouncing ball movement for a new active object again. Set Speed to a low value such as 8. Go to the event editor and create the condition Always. Select your enemy object to create the action: navigate to Direction | Look in the direction of… and select your player object.
You should get this event:

Always – Look at (0,0) from "player"

The following screenshot shows the creation of the preceding event:

Your enemy now moves towards the player with a constant speed of 8! This might be the simplest AI you can create with Fusion. The great thing is that it totally works for simple enemies! You can always dig deeper and create a more powerful movement system, but sometimes less is more. Especially when things need to be done quickly, you will be very happy about the built-in movements in Fusion! There are many, many different ways to work with the built-in movements in Fusion, and just as many ways to create AIs for enemies and characters. You will get behind it step by step with every new game project!

Alterable values

At the moment, you have your frantic shooting player and a screen full of swirling enemy squares. Now we want to let them interact with each other. In the next steps, you will learn how to use alterable values, which are internal counters you can change and make calculations with. In your case, you will use these values for the health of your enemies, but they can be used for pretty much any situation where you need to store values. Some examples are as follows:

- Reloading the shield of a spaceship
- Money, gold, or credits of a character
- Number of bullets for a weapon
- Health, energy, or life bars

The easiest way to describe alterable values is with a simple example. We will give one of your enemies five health points, which is pretty nice of us. Select the enemy with path movement and go to the Values tab in the object's properties. Hit the New button to create Alterable Value A. Double-click on Alterable Value A, name it HealthBar, and set the value to 5:

Naming alterable values is not necessary but is highly recommended. Each object has up to 26 values (from A to Z), by the way.
The following screenshot shows the naming of alterable values:

Now open the event editor to create the interaction of your bullet and your enemy. You will need a simple event that reduces the alterable value HealthBar by 1 whenever the enemy gets hit by one of your bullets:

Collision between "enemy" and "bullet" - Sub 1 from "HealthBar"

Additionally, let this condition destroy your bullet. The plan is to destroy the enemy object after it gets hit five times. To do so, test whether the alterable value HealthBar is lower than or equal to 0 to destroy the enemy:

HealthBar <= 0 – Destroy "enemy"

This event will destroy your enemy when the alterable value drops to 0 or below. This is just one of countless possibilities for alterable values. As you can already see, alterable values will be your best friends from this very day!

Interface, counters, and health bars

We could talk a million years about good or bad interface design. The way your interface might look is just the beginning. Movement, speed, transparency, position, and size are just a few values you really have to think of with every new game. Some games completely go without an interface, which can create an even more immersive gaming experience. In other words, try to plan your interface precisely before you start to create an energy bar.

We will work with the Counter game object to create a basic energy bar. The Counter object stores numbers in your application and is used for objects such as fuel displays, clocks, speedometers, and health bars. The Counter object can be displayed as an animation, a bar, or a simple number. You can also hide the counter if you want to use it for calculations only. The following is the screenshot of the Counter object:

Create a new counter object within your frame. The counter Type in the Properties settings is set to Numbers by default. Change the type to Horizontal bar. This will turn the counter into a small black rectangle.
Change the color and size of this bar if you want. Now set the Initial Value to 10, the Minimum Value to 0, and the Maximum Value to 10. This means that the counter will start with a value of 10 when the frame (game) starts. Assuming that the bar represents your player's health, values will be subtracted or added. The value of this counter cannot go lower than 0 or higher than 10, as we have set the minimum and the maximum values! Now take a look at the RunTime Options tab. The option Follow the frame is important for every game. It is deactivated by default. That means that the counter will always stay in the same position no matter where your player character moves to. You can also put the counter (and the whole interface) in to a new layer of course. Open the event editor and create an event that will subtract 1 from the Counter object HealthBar (very similar to your enemy): Collision between "player" and "enemy" - Sub 1 from "HealthBar" Also add a limiting condition to deactivate multiple collisions at one time. You'll find this condition under Special | Limit conditions | Only one action when event loops. The only thing that is left is an event to destroy your object. Just test whether the health counter is lower or equal to 1 to destroy the player object: HealthBar <= 1 – Destroy "player" The following screenshot shows the event created: So, this is basically how you can use the counter object. As you can imagine, there are a lot of other situations where this object comes in very handy. You could, for example, create one more counter and leave it as a number. This could be your ammo counter! Set both Maximum and Initial Value to 20 and the Minimum Value to 0. In the event editor, subtract 1 from the counter whenever your player character fires a bullet. Add the following condition to your existing shooting condition: Counter > 0 Now your player will only shoot bullets when the counter is greater than 0. Of course, you have to add ammo packs to your game now. 
This is something you can find out on your own. Just use what you have learned so far. Going further Now think of the options you already have! You could add a destroy animation for your player. Let some simple particles bounce when your bullet hits the enemy or an obstacle. Go for some more advanced methods and change the animation of your player to a hurt state when he gets hit by an enemy. Maybe add some new events to your player. The player might be invincible for a while after he gets hurt, for example! Also think of your previous platformer prototype. Create a counter and add 1 every time you destroy one of the red crates! Talking about the red ones: why not set a path movement to the red crates? This would turn them from static boxes to patrolling, evil, explosive crates! Summary Resolutions are something you will think about before you start your next prototype. You created a new game setup to test some more advanced properties. Within this prototype, you also created and placed your first interface and turned it into a basic energy bar for your player. Alterable values will also be very useful from now on. Resources for Article: Further resources on this subject: Getting Started with Fusion Applications [Article] Dynamic Flash Charts - FusionCharts style [Article] Fine Tune the View layer of your Fusion Web Application [Article]

Parallax scrolling

Packt
13 Mar 2014
16 min read
(For more resources related to this topic, see here.) As a developer, we have been asked numerous times about how to implement parallax scrolling in a 2D game. Cerulean Games, my game studio, has even had the elements of parallax scrolling as the "do or die" requirement to close a project deal with a client. In reality, this is incredibly easy to accomplish, and there are a number of ways to do this. In Power Rangers Samurai SMASH! (developed by Cerulean Games for Curious Brain; you can find it in the iOS App Store), we implemented a simple check that would see what the linear velocity of the player is and then move the background objects in the opposite direction. The sprites were layered on the Z plane, and each was given a speed multiplier based on its distance from the camera. So, as the player moved to the right, all parallax scrolling objects would move to the left based on their multiplier value. That technique worked, and it fared us and our client quite well for the production of the game. This is also a common and quick way to manage parallax scrolling, and it's also pretty much how we're going to manage it in this game as well. OK, enough talk! Have at you! 
Well, have at the code:

1. Create a new script called ParallaxController and make it look like the following code:

    using UnityEngine;
    using System.Collections;

    public class ParallaxController : MonoBehaviour
    {
        public GameObject[] clouds;
        public GameObject[] nearHills;
        public GameObject[] farHills;
        public float cloudLayerSpeedModifier;
        public float nearHillLayerSpeedModifier;
        public float farHillLayerSpeedModifier;
        public Camera myCamera;

        private Vector3 lastCamPos;

        void Start()
        {
            lastCamPos = myCamera.transform.position;
        }

        void Update()
        {
            Vector3 currCamPos = myCamera.transform.position;
            float xPosDiff = lastCamPos.x - currCamPos.x;
            adjustParallaxPositionsForArray(clouds, cloudLayerSpeedModifier, xPosDiff);
            adjustParallaxPositionsForArray(nearHills, nearHillLayerSpeedModifier, xPosDiff);
            adjustParallaxPositionsForArray(farHills, farHillLayerSpeedModifier, xPosDiff);
            lastCamPos = myCamera.transform.position;
        }

        void adjustParallaxPositionsForArray(GameObject[] layerArray, float layerSpeedModifier, float xPosDiff)
        {
            for(int i = 0; i < layerArray.Length; i++)
            {
                Vector3 objPos = layerArray[i].transform.position;
                objPos.x += xPosDiff * layerSpeedModifier;
                layerArray[i].transform.position = objPos;
            }
        }
    }

2. Create a new GameObject in your scene and call it _ParallaxLayers. This will act as the container for all the parallax layers.
3. Create three more GameObjects and call them _CloudLayer, _NearHillsLayer, and _FarHillsLayer, respectively. Place these three objects inside the _ParallaxLayers object, and place the ParallaxController component onto the _ParallaxLayers object.

Done? Good. Now we can move some sprites:

4. Import the sprites from SpritesParallaxScenery.
5. Start placing sprites in the three layer containers you created earlier.
For the hills you want to be closer, place the sprites in the _NearHillsLayer container; for the hills you want to be further away, place the sprites in _FarHillsLayer; and place the clouds in the _CloudLayer container. The following screenshot shows an example of what the layers will now look like in the scene: Pro tip Is this the absolute, most efficient way of doing parallax? Somewhat; however, it's a bit hardcoded to only really fit the needs of this game. Challenge yourself to extend it to be flexible and work for any scenario! Parallax layer ordering Wait, you say that the objects are layered in the wrong order? Your hills are all mixed up with your platforms and your platforms are all mixed up with your hills? OK, don't panic, we've got this. What you need to do here is change the Order in Layer option for each of the parallax sprites. You can find this property in the Sprite Renderer component. Click on one of the sprites in your scene, such as one of the clouds, and you can see it in the Inspector panel. Here's a screenshot to show you where to look: Rather than changing each sprite individually, we can easily adjust the sprites in bulk by performing the following steps: Select all of your cloud layer sprites, and under their Sprite Renderer components, set their Order in Layer to 0. Set the Order in Layer property of the _NearHillsLayer sprites to 1 and that of the _FarHillsLayer sprites to 0. Select the Prefab named Platform and set its Order in Layer to 2; you should see all of your Platform sprites instantly update in the scene. Set the Order in Layer values of the Prefabs for Enemy and Player Bullet to 2. Set the sprite on the Player object in the scene to 2 as well. Finally, set the Wall objects to 3 and you're good to go. With the layers all set up, let's finish setting up the parallax layers. First, finish placing any additional parallax sprites; I'll wait. Brilliant! 
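If you ever need to apply these Order in Layer values from code instead of the Inspector (for example, for sprites spawned at runtime), the same property is exposed on SpriteRenderer as sortingOrder. This helper is our own sketch, not part of the article's project files:

```csharp
using UnityEngine;

// Sketch: apply one Order in Layer value to every sprite under a parent,
// mirroring the bulk edit performed in the Inspector above.
public class SortingOrderSetter : MonoBehaviour
{
    public int orderInLayer = 0;

    void Start()
    {
        // Higher sortingOrder values render on top within the same sorting layer.
        foreach (SpriteRenderer sr in GetComponentsInChildren<SpriteRenderer>())
        {
            sr.sortingOrder = orderInLayer;
        }
    }
}
```

Dropping a component like this on _CloudLayer, _NearHillsLayer, and so on would reproduce the per-layer values chosen in the steps above.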
Now, go to the _ParallaxLayers object and let's play around with that Parallax Controller component. We're going to want to add all of those sprites to Parallax Controller. To make this easy, look at the top-right corner of the Inspector panel. See the little lock icon? Click on it. Now, regardless of what you do, the Parallax Controller component will not be deselected. Since it can't be deselected, you can now easily drag-and-drop all of the Cloud sprites into the Clouds array in the ParallaxController component, and all of the _FarHillsLayer child objects into the Far Hills array—you see where this is going. Set the My Camera field to use the Main Camera object. Finally, let's set some values in those Layer Speed Modifier fields. The higher the number, the faster the object will move as the camera moves. As an example, we set the Cloud layer to 0.05, the Near layer to 0.2, and the Far layer to 0.1. Feel free though to play with the values and see what you like! Go ahead and play the game. Click on the play button and watch those layers move! But, what's this? The particles that burst when an enemy is defeated render behind the sprites in the background—actually, they render behind all the sprites! To fix this, we need to tell Unity to render the particles on a layer in front of the sprites. By default, the sprites render after the particles. Let's change that. First, we need to create a new sorting layer. These are special types of layers that tell Unity the order to render things in. Go to the Tags & Layers window and look out for the drop-down menu called Sorting Layers. Add a new layer called ParticleLayer on Layer 1, as shown in the following screenshot: With this in place, it means anything with the Sorting Layers menu of ParticleLayer will render after the Default layer. Now, we need a way to assign this Sorting Layer to the particle system used when enemies are defeated. 
Create a new script called ParticleLayering and make it look like the following code:

    using UnityEngine;
    using System.Collections;

    public class ParticleLayering : MonoBehaviour
    {
        public string sortLayerString = "";

        void Start ()
        {
            // Note: the particleSystem and renderer shortcut properties shown
            // here date from Unity 4; in Unity 5 and later, use
            // GetComponent<ParticleSystem>().GetComponent<Renderer>() instead.
            particleSystem.renderer.sortingLayerName = sortLayerString;
        }
    }

Add this script to the EnemyDeathFX Prefab and set the Sort Layer String field to ParticleLayer. Go ahead and play the game again now to watch those particles fly in front of the other objects.

Finally, if you want a solid color background to your scene, you don't need to worry about adding a colored plane or anything. Simply select the Main Camera object, and in the Inspector panel, look for the Background field in the Camera component. Adjust the color there as per your need. For the example game, we made this color a nice sky blue with the following values: R: 128, G: 197, B: 232, and A: 0.

The one thing you may notice we're missing is something at the bottom of the scene. Here's a nice little challenge for you. We've given you a Lava sprite. Now, add in a lava layer of parallax sprites in the foreground using all the info you've read in this article. You can do this!

Let's score!

One of the most important elements of a game is being able to track progress. A quick and simple way to do this is to implement a score system. In our case, we will have a score that increases whenever you defeat an enemy. Now, Unity does have a built-in GUI system. However, it has some drawbacks. With this in mind, we won't be relying on Unity's built-in system. Instead, we are going to create objects and attach them to the camera, which in turn will allow us to have a 3D GUI.

Pro tip
If you want to use what this author believes is the best UI system for Unity, purchase a license for NGUI from the Unity Asset Store. I'm not the only one to think it's the best; Unity hired the NGUI developer to build the new official UI system for Unity itself.
Let's build out some GUI elements:

1. Create a 3D Text object by navigating to the menu item GameObject | Create Other; name it Score.
2. Make it a child of the Main Camera GameObject and align it such that it sits in the top-left corner of the screen. Set its position to X: -6.91, Y: 4.99, Z: 10 to get this effect.
3. Make the text color solid black and adjust the scaling so that it looks the way you want it to. Set the Anchor field to Upper Left and Alignment to Left. Adjust the scene to your taste, but it should look a little something like the following screenshot:

Pro tip
Unity's default 3D text font looks rather low quality in most situations. Try importing your own font and then set the font size to something much higher than you would usually need; often around 25 to 40. Then, when you place it in the world, it will look crisp and clean.

Let's make it so that the Score visual element can actually track the player's score. Create a new script called ScoreWatcher and write the following code in it:

    using UnityEngine;
    using System.Collections;

    public class ScoreWatcher : MonoBehaviour
    {
        public int currScore = 0;
        private TextMesh scoreMesh = null;

        void Start()
        {
            scoreMesh = gameObject.GetComponent<TextMesh>();
            scoreMesh.text = "0";
        }

        void OnEnable()
        {
            EnemyControllerScript.enemyDied += addScore;
        }

        void OnDisable()
        {
            EnemyControllerScript.enemyDied -= addScore;
        }

        void addScore(int scoreToAdd)
        {
            currScore += scoreToAdd;
            scoreMesh.text = currScore.ToString();
        }
    }

You may notice that in the preceding script, we are listening to the enemyDied event on the EnemyControllerScript. What we did here was allow other objects to easily create scoring events that the Score object can optionally listen to. There is lots of power to this! Let's add that event and delegate to the enemy.
Open up EnemyControllerScript, and in the beginning, add the following code:

    // States to allow objects to know when an enemy dies
    public delegate void enemyEventHandler(int scoreMod);

    public static event enemyEventHandler enemyDied;

Then, down in the hitByPlayerBullet function, add the following code just above Destroy(gameObject,0.1f);, right around line 95:

    // Call the enemyDied event and give it a score of 25.
    if(enemyDied != null)
        enemyDied(25);

Add the ScoreWatcher component to the Score object. Now, when you play the game and defeat the enemies, you can watch the score increase by 25 points each time! Yeeee-haw! Sorry 'bout that... shootin' things for points always makes me feel a bit Texan.

Enemies – forever!

So, you defeated all your enemies and now find yourself without enemies to defeat. This gets boring fast; so, let's find a way to get more enemies to whack. To do this, we are going to create enemy spawn points in the form of nifty rotating vortexes and have them spit out enemies whenever we kill other enemies. It shall be glorious, and we'll never be without friends to give gifts—and by gifts, we mean bullets.

First things first. We need to make a cool-looking vortex. This vortex will be a stacked, animated visual FX object that is built for a 2D world. Don't worry, we've got you covered on textures, so please go through the following steps:

1. Import the ones in the assets folder under SpritesVortex.
2. Create a new GameObject called Vortex and add all three of the Vortex sprites in it, with each of their Position values set to X:0, Y:0, and Z:0.
3. Adjust their Order in Layer values so that the Vortex_Back child is set to 10, Vortex_Center is set to 11, and Vortex_Front is set to 12.

You should now have an object that looks something like the following screenshot:

Go ahead and give it a nice spinning animation by rotating the Z axis from 0 to 356.
Once you're happy with it, create a new script called EnemyRespawner and code it up as shown in the following code snippet:

    using UnityEngine;
    using System.Collections;

    public class EnemyRespawner : MonoBehaviour
    {
        public GameObject spawnEnemy = null;
        float respawnTime = 0.0f;

        void OnEnable()
        {
            EnemyControllerScript.enemyDied += scheduleRespawn;
        }

        void OnDisable()
        {
            EnemyControllerScript.enemyDied -= scheduleRespawn;
        }

        // Note: Even though we don't need the enemyScore, we still need to
        // accept it because the event passes it
        void scheduleRespawn(int enemyScore)
        {
            // Randomly decide if we will respawn or not
            if(Random.Range(0,10) < 5)
                return;

            respawnTime = Time.time + 4.0f;
        }

        void Update()
        {
            if(respawnTime > 0.0f)
            {
                if(respawnTime < Time.time)
                {
                    respawnTime = 0.0f;
                    GameObject newEnemy = Instantiate(spawnEnemy) as GameObject;
                    newEnemy.transform.position = transform.position;
                }
            }
        }
    }

Now attach the preceding script to your Vortex object, populate the Spawn Enemy field with the Enemy Prefab, and save the Vortex object as a Prefab. Scatter a bunch of Vortex Prefabs around the level and you can get the hydra effect, where killing one enemy will create two or even more enemies! Also, if you haven't already done so, you may want to go to the Physics Manager option and adjust the settings so that enemies won't collide with other enemies.

One more thing—those enemies sort of glide out of their portals very awkwardly. Let's boost the gravity so they fall faster. Click on the main Enemy Prefab and change the Gravity Scale value of the RigidBody 2D component to 30. Now, they'll fall properly!

Pro tip
There are so many things you can do with enemy spawners that go far, far outside the context of this article.
Take a shot at adding some features yourself! Here are a few ideas: Make the spawn vortexes play a special visual effect when an enemy is spawned Give vortexes a range so that they only spawn an enemy if another enemy was killed in their range Make vortexes move around the level Make vortexes have multiple purposes so that enemies can walk into one and come out another Have a special gold enemy worth bonus points spawn after every 100 kills Make an enemy that, when defeated, spawns other enemies or even collectable objects that earn the player bonus points! Summary So, what have we learned here today aside from the fact that shooting enemies with bullets earns you points? Well, check this out. You now know how to use parallax scrolling, 2D layers, and generate objects; and how to use a scoring system. Enemies dying, enemies spawning, freakin' vortexes? I know, you're sitting there going, "Dude, OK, I'm ready to get started on my first 2D game... the next side scrolling MMO Halo meets Candy Crush with bits of Mass Effect and a little Super Mario Bros!" Resources for Article: Further resources on this subject: Unity Game Development: Interactions (Part 1) [Article] Unity Game Development: Interactions (Part 2) [Article] Unity Game Development: Welcome to the 3D world [Article]

Beating Back the Horde

Packt
18 Feb 2014
9 min read
(For more resources related to this topic, see here.)

What kind of game will we be making?

We are going to make one of the classics, a Tower Defense game (http://old.casualcollective.com/#games/FETD2). Our game won't be as polished as the example, but it gives you a solid base to work with and develop further.

Mission briefing

We will use the cloning tools again to create hordes of enemies to fight. We will also use these tools to create cannons and cannonballs. It's easy to re-use assets from other projects in Scratch 2.0. The new Backpack feature allows you to easily exchange assets between projects. How this works will be demonstrated in this article.

Why is it awesome?

This example is a lot more involved than the previous one. The final result will be a much more finished game which still leaves plenty of room to adapt and continue building on. While making this game, you will learn how to draw a background and how to make and use different costumes for a single sprite. We will make full use of the cloning technique to create many copies of similar objects. We will also use more variables and another type of variable called List to keep track of all the things going on in the game. You will also learn about a simple way to create movement patterns for computer-controlled objects.

Your Hotshot objectives

We will divide the article into the following tasks, based primarily on the game sprites and their behavior:

- Creating a background
- Creating enemies
- Creating cannons
- Fighting back

Mission checklist

Click on the Create button to start a new project. Remove the Scratch cat by right-clicking on it and selecting delete.

Creating a background

Because the placement and the route to walk are important in this kind of game, we will start with the creation of the background. To the left of the Sprites window, you will see a separate picture. Underneath is the word Stage and another word, the name of the picture that's being shown.
This picture is white when you start a new project because nothing is drawn on it yet. The following is an example with our background image already drawn in: Engage thrusters We will draw a grassy field, seen from the top, with a winding road running through it, by going through the following steps: Click on the white image. Next click on the Backdrops tab to get to the drawing tool. This is similar to the Costumes tab for sprites, but the size of the drawing canvas is clearly limited to the size of the stage. Choose a green color and draw a rectangle from the top-left to the bottom-right of the canvas. Then click on the Fill tool and fill the rectangle with the same color to create a grassy background. On top of the field, we will draw a path for the enemies to walk on. Switch the Fill tool to a brown color. Draw rectangles to form a curving path as shown in the following screenshot. The background is now done. Let's save our work before moving on. Objective complete – mini debriefing The background is just a pretty picture with no direct functionality in the game. It tells the player what to expect in the game. It stands to reason that enemies will follow the road that was drawn. We will also use this road as a guideline when scripting the movement path of the enemies. The open spaces next to the path make it obvious where the player can place the cannons.
Let's keep this simple for now. We can always add to the visual design later. The steps to draw it are as follows: Click on the paintbrush icon to create a new sprite. Choose a red color and draw a circle. Make sure the circle is of proper size compared to the path in the background. Fill the circle with the same color. We name the new sprite enemy1. That's all for now! We will add more to the appearance of the enemy sprite later. The enemy sprite appears as a red circle large enough to fit the path. Engage thrusters Let's make it functional first with a script. We will place the base enemy sprite at the start of the path and have it create clones. Then we will program the clones to follow the path as shown in the following steps: The script starts with the when <green flag> clicked block. Place the sprite at the start of the path with a go to x: -240 y: 0 block. Wait for three seconds by using the wait ... secs block to allow the player to get ready for the game. Add a repeat … block. Fill in 5 to create five clones per wave. Insert a create clone of <myself> block. Then wait for two seconds by using the wait ... secs block so the enemy clones won't be spawned too quickly. Before we start moving the clones, we have to determine what path they will follow. The key pieces of information here are the points where the path bends in a new direction. We can move the enemies from one bend to another in an orderly manner. Be warned that it may take some time to complete this step. You will probably need to test and change the numbers you are going to use to move the sprites correctly. If you don't have the time to figure it all out, you can check and copy the image with the script blocks at the end of this step to get a quick result. Do you remember how the xy-coordinate system of the stage worked from the last project? Get a piece of paper (or you can use the text editor on your computer) and get ready to take some notes.
Examine the background you drew on the stage, and write down all the xy-coordinates that the path follows in order. These points will serve as waypoints. Look at the screenshot to see the coordinates that I came up with. But remember that the numbers for your game could be different if you drew the path differently. To move the enemy sprites, we will use the glide … secs to x: … y: ... blocks. With this block, a sprite will move fluidly to the given point in the given amount of time as shown in the following steps: Start the clone script with a when I start as a clone block. Beyond the starting point, there will be seven points to move to. So stack together seven glide … blocks. In the coordinate slots, fill in the coordinates you just wrote down in the correct order. Double-check this since filling in a wrong number will cause the enemies to leave the path. Deciding how long a sprite should take to complete a segment depends on the length of that segment. This requires a bit of guesswork since we didn't use an exact drawing method. Your most accurate information is the differences between the coordinates you used from point to point. Between the starting point (-240,0) and the first waypoint (-190,0), the enemy sprite will have moved 50 pixels. Let's say we want to move 10 pixels per second. That means the sprite should move to its new position in 5 seconds. The difference between the first (-190,0) and the second (-190,125) waypoint is 125. So according to the same formula, the sprite should move along this segment of the path in 12.5 seconds. Continue calculating the glide times like this for the other blocks. These are the numbers I came up with: 5, 12.5, 17, 26.5, 15.5, 14, and 10.5, but remember that yours may be different. You can use the formula (new position – old position) / 10 = glide time in seconds to figure out the numbers you need to use (take the absolute value of the difference). To finish off, delete the clone when it reaches the end of the path. Test your script and see the enemies moving along the path.
You might notice they are very slow and bunched together because they don't travel enough distance between spawns. Let's fix that by adding a variable speed multiplier. Not only can we easily tweak the speed of the sprites but we can also use this later to have other enemy sprites move at different speeds, as shown in the following steps: Create a variable and make sure it is for this sprite only. Name it multiplier_R. The R stands for red, the color of this enemy. Place set <multiplier_R> to … at the start of the <green flag> script. Fill in 0.3 as a number for the basic enemy. Take the speed numbers you filled in previously and multiply them with the multiplier. Use a ...*... operator block. Place the multiplier_R variable in one slot. Type the correct number in the other slot. Place the calculation in the glide block instead of the fixed number. The completed scripts for enemy movement will look as follows: Objective complete – mini debriefing Test the game again and see how the enemies move much faster, about three times as fast if you have used 0.3 for the multiplier. You can play with the variable number a bit to see the effect. If you decrease the multiplier, the enemies will move even faster. If you increase the number, the enemies will become slower.
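If you want to check your glide-time arithmetic outside of Scratch, the calculation can be sketched in a few lines of Python. This is only an illustration: the first three waypoints come from the text, but the later coordinates are reconstructed from the glide times listed above and are assumptions; your numbers will differ if you drew the path differently.

```python
# Waypoints of the example path. The first three come from the text; the rest
# are reconstructed from the glide times (5, 12.5, 17, ...) and are assumptions.
WAYPOINTS = [(-240, 0), (-190, 0), (-190, 125), (-20, 125),
             (-20, -140), (135, -140), (135, 0), (240, 0)]

def glide_times(waypoints, pixels_per_second=10, multiplier=1.0):
    """Duration of each glide block: |new - old| / speed, scaled by the multiplier."""
    times = []
    for (x1, y1), (x2, y2) in zip(waypoints, waypoints[1:]):
        distance = abs(x2 - x1) + abs(y2 - y1)  # segments are axis-aligned
        times.append(distance / pixels_per_second * multiplier)
    return times

print(glide_times(WAYPOINTS))
# [5.0, 12.5, 17.0, 26.5, 15.5, 14.0, 10.5]
print(glide_times(WAYPOINTS, multiplier=0.3))  # the multiplier_R variant: ~3x faster
```

Because each glide duration is multiplied by the same factor, changing one variable (multiplier_R) rescales the speed of the whole path at once, which is exactly why the multiplier is worth the extra blocks.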

Grasping Hammer

Packt
17 Feb 2014
14 min read
(For more resources related to this topic, see here.) Terminology In this article, I will be guiding you through many examples using Hammer. There are a handful of terms that will recur many times, and you will need to know them. It might be a good idea to bookmark this page, so you can flip back and refresh your memory. Brush A brush is a piece of world geometry created with the block tool. Brushes make up the basic building blocks of a map and must be convex. A convex object's faces cannot see each other, while a concave object's faces can. Imagine if you're lying on the pitched roof of a generic house. You wouldn't be able to see the other side of the roof because the profile of the house is a convex pentagon. If you moved the point of the roof down inside the house, you would be able to see the other side of the roof because the profile would then be concave. This can be seen in the following screenshot: Got it? Good. Don't think you're limited by this; you can always create concave shapes out of more than one brush. Since we're talking about houses, brushes are used to create the walls, floor, ceiling, and roof. Brushes are usually geometrically very simple; upon creation, common brushes have six faces and eight vertices like a cube. The brushes are outlined in the following screenshot: Not all brushes have six sides; however, the more advanced brushwork techniques can create brushes that have (almost) as many sides as you want. You need to be careful while making complex brushes with the more advanced tools though. This is because it can be quite easy to make an invalid solid if you're not careful. An invalid solid will cause errors during compilation and will make your map unplayable. A concave brush is an example of an invalid solid. World brushes are completely static or unmoving. If you want your brushes to have a special function, they need to be turned into entities. Entity An entity is anything in the game that has a special function.
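The convexity rule can be stated mechanically. The following is an illustrative Python sketch (not something Hammer provides): a 2D profile is convex when every turn along its edges bends the same way, which the sign of the cross product of consecutive edge vectors detects. The house coordinates below are made up for the example.

```python
def is_convex(polygon):
    """Check whether a 2D polygon (list of (x, y) vertices in order) is convex.

    Walking the edges, every turn must bend the same way: all cross products
    of consecutive edge vectors share one sign (zeros, i.e. straight runs,
    are ignored).
    """
    n = len(polygon)
    signs = set()
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        x3, y3 = polygon[(i + 2) % n]
        cross = (x2 - x1) * (y3 - y2) - (y2 - y1) * (x3 - x2)
        if cross != 0:
            signs.add(cross > 0)
    return len(signs) <= 1

# The house profile from the example: a convex pentagon...
house = [(0, 0), (4, 0), (4, 3), (2, 5), (0, 3)]
# ...and the same shape with the roof point pushed down inside it (concave).
dented = [(0, 0), (4, 0), (4, 3), (2, 2), (0, 3)]
print(is_convex(house), is_convex(dented))  # True False
```

The compiler's complaint about an "invalid solid" is essentially this test failing in 3D: a brush whose faces can see each other cannot be processed.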
Entities come in two flavors: brush-based and point-based. A brush-based entity, as you can probably guess, is created from a brush. A sliding door or a platform lift are examples of brush-based entities. Point-based entities, on the other hand, are created at a point in space with the entity tool. Lights, models, sounds, and script events are all point-based entities. In the following figure, the models or props are highlighted: World The world is everything inside of a map that you create. Brushes, lights, triggers, sounds, models, and so on are all part of the world. They must all be contained within a sealed map made out of world brushes. Void The void is nothing or everything that isn't the world. The world must be sealed off from the void in order to function correctly when compiled and played in game. Imagine the void as outer space or a vacuum. World brushes seal off the world from the void. If there are any gaps in world brushes (or if there are any entities floating in the void), this will create a leak, and the engine will not be able to discern what the world is and what the void is. If a leak exists in your map, the engine will not know what is supposed to be seen during the compile! The map will compile, but performance-reducing side effects such as bland lighting and excess rendered polygons will plague your map. Settings If at any point in your mapping experience, Hammer doesn't seem to be operating the way you want it to be, go to Tools | Options, and see if there are any preferences you would like to change. You can customize general settings or options related to the 2D and 3D views. If you're coming from another editor, perhaps there's a setting that will make your Hammer experience similar to what you're used to. Loading Hammer for the first time You'll be opening Steam\steamapps\common\Half-Life 2\bin often, so you may want to create a desktop shortcut for easier access.
Run Hammer.bat from the bin folder to launch Valve Hammer Editor, Valve's map (level) creator, so you can start creating a map. Hammer will prompt you to choose a game configuration, so choose which game you want to map for. I will be using Half-Life 2: Episode Two in the following examples: When you first open up Hammer, you will see a blank gray screen surrounded by a handful of menus, as shown in the following screenshot: Like every other Microsoft Windows application, there is a main menu in the top-left corner of the screen that lets you open, save, or create a document. As far as Hammer is concerned, our documents are maps, and they are saved with the .vmf file extension. So let's open the File menu and load an example map so we can poke around a bit. Load the map titled Chapter2_Example.vmf. The Hammer overview In this section, we will be learning to recognize the different areas of Hammer and what they do. Being familiar with your environment is important! Viewports There are four main windows or viewports in Hammer, as shown in the following screenshot: By default, the top-left window is the 3D view or camera view. The top-right window is the top (x/y) view, the bottom-right window is the side (x/z) view, and the bottom-left window is the front (y/z) view. If you would like to change the layout of the windows, simply click on the top-left corner of any window to change what is displayed. In this article, I will be keeping the default layout but that does not mean you have to! Set up Hammer any way you'd like. For instance, if you would prefer your 3D view to be larger, grab the cross at the middle of the four screens and drag to extend the areas. The 3D window has a few special 3D display types such as Ray-Traced Preview and Lightmap Grid. We will be learning more about these later, but for now, just know that 3D Ray-traced Preview simulates the way light is cast. 
It does not mimic what you would actually see in-game, but it can be a good first step before compile to see what your lighting may look like. The 3D Lighting Preview will open in a new window and will update every time a camera is moved or a light entity is changed. You cannot navigate directly in the lighting preview window, so you will need to use the 2D cameras to change the viewing perspective. The Map toolbar Located to the left of the screen, the Map toolbar holds all the mapping tools. You will probably use this toolbar the most. The tools will each be covered in depth later on, but here's a basic overview, as shown in the following screenshot, starting from the first tool: The Selection Tool The Selection Tool is pretty self-explanatory; use this tool to select objects. The hot key for this is Shift + S. This is the tool that you will probably use the most. This tool selects objects in the 3D and 2D views and also lets you drag selection boxes in the 2D views. The Magnify Tool The Magnify Tool will zoom in and out in any view. You could also just use the mouse wheel if you have one or the + and – keys on the numerical keypad for the 2D views. The hot key for the magnify tool is Shift + G. The Camera Tool The Camera Tool enables 3D view navigation and lets you place multiple different cameras into the 2D views. Its hot key is Shift + C. The Entity Tool The Entity Tool places entities into the map. If clicked on the 3D view, an entity is placed on the closest brush to the mouse cursor. If used in the 2D view, a crosshair will appear noting the origin of the entity, and the Enter key will add it to the map at the origin. The entity placed in the map is specified by the object bar. The hot key is Shift + E. The Block Tool The Block Tool creates brushes. Drag a box in any 2D view and hit the Enter key to create a brush within the bounds of the box. The object bar specifies which type of brush will be created. 
The default is box and the hot key for this is Shift + B. The Texture Tool The Texture Tool allows complete control over how you paint your brushes. For now, just know where it is and what it does; the hot key is Shift + A. The Apply Current Texture Tool Clicking on the Apply Current Texture icon will apply the selected texture to the selected brush or brushes. The Decal Tool The Decal Tool applies decals and little detail textures to brushes and the hot key is Shift + D. The Overlay Tool The Overlay Tool is similar to the decal tool. However, overlays are a bit more powerful than decals. Shift + O will be the hot key. The Clipping Tool The Clipping Tool lets you slice brushes into two or more pieces, and the hot key is Shift + X. The Vertex manipulation Tool The Vertex manipulation Tool, or VM tool, allows you to move the individual vertices and edges of brushes any way you like. This is one of the most powerful tools you have in your toolkit! Using this tool improperly, however, is the easiest way to corrupt your map and ruin your day. Not to worry though, we'll learn about this in great detail later on. The hot key is Shift + V. The selection mode bar The selection mode bar (located at the top-right corner by default) lets you choose what you want to select. If Groups is selected, you will select an entire group of objects (if they were previously grouped) when you click on something. The Objects selection mode will only select individual objects within groups. Solids will only select solid objects. The texture bar Located just beneath the selection mode toolbar, the texture bar, as shown, in the following screenshot, shows a thumbnail preview of your currently selected (active) texture and has two buttons that let you select or replace a texture. What a nifty tool, eh? The filter control bar The filter control bar controls your VisGroups (short for visual groups). 
VisGroups separate your map objects into different categories, similar to layers in the image editing software. To make your mapping life a bit easier, you can toggle visibility of object groups. If, for example, you're trying to sculpt some terrain but keep getting your view blocked by tree models, you can just uncheck the Props box, as shown in the following screenshot, to hide all the trees! There are multiple automatically generated VisGroups such as entities, displacements, and nodraws that you can easily filter through. Don't think you're limited to this though; you can create your own VisGroup with any selection at any time. The object bar The object bar lets you control what type of brush you are creating with the brush tool. This is also where you turn brushes into brush-based entities and create and place prefabs, as shown in the following screenshot: Navigating in 3D You will be spending most of your time in the main four windows, so now let's get comfortable navigating in them, starting with the 3D viewport. Looking around Select the camera tool on the map tools bar. It's the third one down on the list and looks like a red 35 mm camera. Holding the left mouse button while in the 3D view will allow you to look around from a stationary point. Holding the right mouse button will allow you to pan left, right, up, and down in the 3D space. Holding both left and right mouse buttons down together will allow you to move forward and backwards as well as pan left and right. Scrolling the mouse wheel will move the camera forward and backwards. Practice flying down the example map hallway. While looking down the hallway, hold the right mouse button to rise up through the grate and see the top of the map. If you would prefer another method of navigating in 3D, you can use the W, S, A, and D keys to move around while the left mouse button is pressed. 
Just like your normal FPS game, W moves forward in the direction of the camera, S moves backwards, and A and D move left and right, respectively. You can also move the mouse to look around while moving. Being comfortable with the 3D view is necessary in order to become proficient in creating and scripting 3D environments. As with everything, practice makes perfect, so don't be discouraged if you find yourself hitting the wrong buttons. Having the camera tool selected is not necessary to navigate in 3D. With any tool selected, hold Space bar while the cursor is in the 3D window to activate the 3D navigation mode. While holding Space bar, the navigation functions exactly as it does as if the camera tool was selected. Releasing Space bar will restore normal functionality to the currently selected tool. This can be a huge time saver down the line when you're working on very technical object placements. Multiple cameras If you find yourself bouncing around between different areas of the map, or even changing angles near the same object, you can create multiple cameras and juggle between them. With the camera tool selected, hold Shift and drag a line with the left mouse button in any 2D viewport. The start of the line will be the camera origin, and the end of the line will be the camera's target. Whenever you create a new camera, the newly created camera becomes active and displays its view in the 3D viewport. To cycle between cameras, press the Page Up and Page Down buttons, or click on a camera in any 2D view. Camera locations are stored in the map file when you save, so you don't have to worry about losing them when you exit. Pressing the Delete key with a camera selected will remove the active camera. If you delete the only camera, your view will snap to the origin (0, 0, 0) but you will still be able to look around and create other cameras. In essence, you will always have at least one camera in your map. 
Selecting objects in the 3D viewport To select an object in the 3D viewport, you must have the selection tool active, as shown in the following screenshot. A quick way to activate the selection tool is to hit the Esc key while any tool is selected, or use the Shift + S hot key. A selected brush or a group of brushes will be highlighted in red with yellow edges as shown in the following screenshot: To deselect anything within the 3D window, click on any other brush, or on the background (void), or simply hit the Esc key. If you want to select an object behind another object, press and hold the left mouse button on the front object. This will cycle through all the objects that are located behind the cursor. You will be able to see the selected objects changing in the 2D and 3D windows about once per second. Simply release the mouse button to complete your selection. To select multiple brushes or objects in the 3D window, hold Ctrl and left-click on multiple brushes. Clicking on a selected object while Ctrl is held will deselect the object. If you've made a mistake choosing objects, you can undo your selections with Ctrl + Z or navigate to Edit | Undo.

Beyond Grading

Packt
16 Jan 2014
5 min read
(For more resources related to this topic, see here.) Kudos As the final look of the frame is achieved during the compositing stage, there will always be numerous occasions where more render passes are required to finalize the image. This results in extra 3D renders, along with more time and money. Also, a few unavoidable effects that give life to an image, such as lens effects (defocus, glare, and motion blur), are render intensive. Blender Compositor provides alternate procedures for these effects, without having to go back to 3D renders. A well-planned CG pipeline can always provide sufficient data to be able to use these techniques during the compositing stage. Relighting Relighting is a compositing technique that is used to add extra light information that does not exist in the received 3D render. This process facilitates additional creative tweaks in compositing. Though this technique can only provide light without considering shadowing information, additional procedures can provide a convincing approach to this limitation. The Normal node Relighting in Blender can be performed using the Normal node. The following screenshot shows the relighting workflow to add a cool light from the right of the screen. The following illustration uses a Hue Saturation Value node to attain the fake light color. Alternatively, any grading nodes can be used for a similar effect. The technique is to use the Dot output of the Normal node as the factor input for any grade node. The following screenshot shows relighting with a cyan color light from the top using the Normal node: The light direction can be modified by left-clicking and dragging on the diffused sphere thumbnail image provided on the node. This fake lighting works great when used as secondary light highlights. However, as seen on the vertical brick in the preceding screenshot, light leaks can be encountered as shadowing is not considered. This can often spoil the fun.
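The per-pixel arithmetic behind this Normal-node setup can be sketched in plain Python (this is not Blender's node code; the normal, light direction, and colors below are made-up example values):

```python
# Illustrative sketch of Normal-node relighting: the Dot output drives how
# strongly each pixel is pushed toward the fake light color.

def dot(a, b):
    """The Normal node's Dot output: surface normal . light direction."""
    return sum(x * y for x, y in zip(a, b))

def mix(factor, original, graded):
    """A grade (Mix) node: blend toward the relit color by the factor."""
    return tuple(o + factor * (g - o) for o, g in zip(original, graded))

normal = (0.6, 0.0, 0.8)     # surface normal at one pixel (unit length)
light_dir = (1.0, 0.0, 0.0)  # fake light coming from the right of the screen
factor = max(dot(normal, light_dir), 0.0)  # surfaces facing away receive 0

pixel = (0.2, 0.2, 0.2)       # original rendered color
cool_color = (0.2, 0.4, 0.9)  # fake light color (e.g., from Hue Saturation Value)
relit = mix(factor, pixel, cool_color)
print(relit)
```

Note that this simple dot product knows nothing about occluders, which is exactly why the shadowless light leaks described above appear.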
A quick fix for this is to use the Ambient Occlusion information to occlude the unwanted areas. The following screenshot illustrates the workflow of using the Ambient Occlusion pass along with the normal pass to resolve the light leak issue. The technique is to multiply the dot output of the Normal node with the Ambient Occlusion info from the rendered image using Mix or Math nodes. As can be observed in the following screenshot, the blue light leaks on the inside parts of the vertical brick are minimized by the Ambient Occlusion information. This solution works as long as relighting is not the primary lighting for the scene. Another issue that can be encountered while using the Normal node is negative values. These values will affect the nonlight areas, leading to an unwanted effect. The procedure to curb these unwanted values is to clamp them from the Dot output of the Normal node to zero, before using it as a mask input to grade nodes. The following screenshot illustrates the issue with negative values. All pixels that have an over-saturated orange color are a result of negative values. The following screenshot shows the workflow to clamp the negative values from the dot information of a normal pass. A Map Value node is connected between the grade node and the Normal node, with the Use Minimum option on. This makes sure that only negative values are clamped to zero and all other values are unchanged. The Fresnel effect The Fresnel option available in shader parameters is used to modify the reflection intensity, based on the viewing angle, to simulate metallic behavior. After 3D rendering, altering this property requires rerendering. Blender provides an alternate method to build and modify the Fresnel effect in compositing, using the Normal node. The following screenshot illustrates the Fresnel workflow.
In this procedure, the dot output of a Normal node is connected to the Map Range node and the To Min/To Max values are tweaked to obtain a black-and-white mask map, as shown in the screenshot. A Math node is connected to this mask input to clamp the information to the 0-1 range. The 3D-combined render output is rebuilt using the diffuse, specular, and reflection passes from the 3D render. While rebuilding, the mask created using the Normal node should be applied as a mask to the factor input of the reflection Add node. This results in applying reflection only to the white areas of the mask, thereby exhibiting the Fresnel effect. A similar technique can be used to add edge highlights, using the mask as a factor input to the grade nodes. Summary This article dealt with advanced compositing techniques beyond grading. These techniques emphasize alternate methods in Blender Compositing for some specific 3D render requirements that can save lots of render time, thereby also saving budget when making a CG film. Resources for Article: Further resources on this subject: Introduction to Blender 2.5: Color Grading [article] Blender Engine: Characters [article] Managing Blender Materials [article]

Integrating Direct3D with XAML and Windows 8.1

Packt
16 Jan 2014
9 min read
(For more resources related to this topic, see here.) Preparing the swap chain for a Windows Store app Getting ready To target Windows 8.1, we need to use Visual Studio 2013. For the remainder of the article, we will assume that the SharpDX assemblies can be located by navigating to .\External\Bin\DirectX11_2-Signed-winrt under the solution location. How to do it… We'll begin by creating a new class library and reusing a majority of the Common project used throughout the book so far, then we will create a new class D3DApplicationWinRT inheriting from D3DApplicationBase to be used as a starting point for our Windows Store app's render targets. Within Visual Studio, create a new Class Library (Windows Store apps) called Common.WinRT. New Project dialog to create a class library project for Windows Store apps Add references to the following SharpDX assemblies: SharpDX.dll, SharpDX.D3DCompiler.dll, SharpDX.Direct2D1.dll, SharpDX.Direct3D11.dll, and SharpDX.DXGI.dll within .\External\Bin\DirectX11_2-Signed-winrt. Right-click on the new project; navigate to Add | Existing item...; and select the following files from the existing Common project: D3DApplicationBase.cs, DeviceManager.cs, Mesh.cs, RendererBase.cs, and HLSLFileIncludeHandlers.hlsl, and optionally, FpsRenderer.cs and TextRenderer.cs. Instead of duplicating the files, we can choose to Add As Link within the file selection dialog, as shown in the following screenshot: Files can be added as a link instead of a copy Any platform-specific code can be wrapped with a check for the NETFX_CORE definition, as shown in the following snippet: #if NETFX_CORE ...Windows Store app code #else ...Windows Desktop code #endif Add a new C# abstract class called D3DApplicationWinRT. // Implements support for swap chain description for // Windows Store apps public abstract class D3DApplicationWinRT : D3DApplicationBase { ...
} In order to reduce the chances of our app being terminated to reclaim system resources, we will use the new SharpDX.DXGI.Device3.Trim function whenever our app is suspended (the native equivalent is IDXGIDevice3::Trim). The following code shows how this is done: public D3DApplicationWinRT() : base() { // Register application suspending event Windows.ApplicationModel.Core .CoreApplication.Suspending += OnSuspending; } // When suspending hint that resources may be reclaimed private void OnSuspending(Object sender, Windows.ApplicationModel.SuspendingEventArgs e) { // Retrieve the DXGI Device3 interface from our // existing Direct3D device. using (SharpDX.DXGI.Device3 dxgiDevice = DeviceManager .Direct3DDevice.QueryInterface<SharpDX.DXGI.Device3>()) { dxgiDevice.Trim(); } } The existing D3DApplicationBase.CreateSwapChainDescription function is not compatible with Windows Store apps. Therefore, we will override this and create a SwapChainDescription1 instance that is compatible with Windows Store apps. The following code shows the changes necessary: protected override SharpDX.DXGI.SwapChainDescription1 CreateSwapChainDescription() { var desc = new SharpDX.DXGI.SwapChainDescription1() { Width = Width, Height = Height, Format = SharpDX.DXGI.Format.B8G8R8A8_UNorm, Stereo = false, SampleDescription = new SharpDX.DXGI.SampleDescription(1, 0), Usage = SharpDX.DXGI.Usage.BackBuffer | SharpDX.DXGI.Usage.RenderTargetOutput, Scaling = SharpDX.DXGI.Scaling.Stretch, BufferCount = 2, SwapEffect = SharpDX.DXGI.SwapEffect.FlipSequential, Flags = SharpDX.DXGI.SwapChainFlags.None }; return desc; } We will not be implementing the Direct3D render loop within a Run method for our Windows Store apps; this is because we will use the existing composition events where appropriate. Therefore, we will create a new abstract method Render and provide a default empty implementation of Run.
public abstract void Render(); [Obsolete("Use the Render method for WinRT", true)] public override void Run() { } How it works… As of Windows 8.1 and DirectX Graphics Infrastructure (DXGI) 1.3, all Direct3D devices created by our Windows Store apps should call SharpDX.DXGI.Device3.Trim when suspending to reduce the memory consumed by the app and graphics driver. This reduces the chance that our app will be terminated to reclaim resources while it is suspended—although our application should consider destroying other resources as well. When resuming, drivers that support trimming will recreate the resources as required. We have used Windows.ApplicationModel.Core.CoreApplication rather than Windows.UI.Xaml.Application for the Suspending event, so that we can use the class for both an XAML-based Direct3D app as well as one that implements its own Windows.ApplicationModel.Core.IFrameworkView in order to render to CoreWindow directly. Windows Store apps only support a flip presentation model and therefore require that the swap chain is created using a SharpDX.DXGI.SwapEffect.FlipSequential swap effect; this in turn requires between two and 16 buffers specified in the SwapChainDescription1.BufferCount property. When using a flip model, it is also necessary to specify the SwapChainDescription1.SampleDescription property with Count=1 and Quality=0, as multisample anti-aliasing (MSAA) is not supported on the swap chain buffers themselves. A flip presentation model avoids unnecessarily copying the swap-chain buffer and increases the performance. By removing Windows 8.1 specific calls (such as the SharpDX.DXGI.Device3.Trim method), it is also possible to implement this recipe using Direct3D 11.1 for Windows Store apps that target Windows 8. 
See also The Rendering to a CoreWindow and Rendering to a SwapChainPanel recipes show how to create swap chains for non-XAML and XAML apps, respectively NuGet Package Manager can be downloaded from http://visualstudiogallery.msdn.microsoft.com/4ec1526c-4a8c-4a84-b702-b21a8f5293ca You can find the flip presentation model on MSDN at http://msdn.microsoft.com/en-us/library/windows/desktop/hh706346(v=vs.85).aspx Rendering to a CoreWindow The XAML view provider found in the Windows Store app graphics framework cannot be modified. Therefore, when we want to implement the application's graphics completely within DirectX/Direct3D without XAML interoperation, it is necessary to create a basic view provider that allows us to connect our DirectX graphics device resources to the windowing infrastructure of our Windows Store app. In this recipe, we will implement a CoreWindow swap-chain target and look at how to hook Direct3D directly to the windowing infrastructure of a Windows Store app, which is exposed by the CoreApplication, IFrameworkViewSource, IFrameworkView, and CoreWindow .NET types. This recipe continues from where we left off with the Preparing the swap chain for Windows Store apps recipe. How to do it… We will first update the Common.WinRT project to support the creation of a swap chain for a Windows Store app's CoreWindow instance and then implement a simple Hello World application. Let's begin by creating a new abstract class within the Common.WinRT project, called D3DAppCoreWindowTarget and descending from the D3DApplicationWinRT class from our previous recipe. The constructor accepts the CoreWindow instance and attaches a handler to its SizeChanged event. using Windows.UI.Core; using SharpDX; using SharpDX.DXGI; ...
public abstract class D3DAppCoreWindowTarget : D3DApplicationWinRT { // The CoreWindow this instance renders to private CoreWindow _window; public CoreWindow Window { get { return _window; } } public D3DAppCoreWindowTarget(CoreWindow window) { _window = window; Window.SizeChanged += (sender, args) => { SizeChanged(); }; } ... } Within our new class, we will now override the CurrentBounds property and the CreateSwapChain function in order to return the correct size and create the swap chain for the associated CoreWindow. // Retrieve current bounds of CoreWindow public override SharpDX.Rectangle CurrentBounds { get { return new SharpDX.Rectangle( (int)_window.Bounds.X, (int)_window.Bounds.Y, (int)_window.Bounds.Width, (int)_window.Bounds.Height); } } // Create the swap chain protected override SharpDX.DXGI.SwapChain1 CreateSwapChain( SharpDX.DXGI.Factory2 factory, SharpDX.Direct3D11.Device1 device, SharpDX.DXGI.SwapChainDescription1 desc) { // Create the swap chain for the CoreWindow using (var coreWindow = new ComObject(_window)) return new SwapChain1(factory, device, coreWindow, ref desc); } This completes the changes to our Common.WinRT project. Next, we will create a Hello World Direct3D Windows Store app rendering directly to the application's CoreWindow instance. Visual Studio 2013 does not provide us with a suitable C# project template to create a non-XAML Windows Store app, so we will begin by creating a new C# Windows Store Blank App (XAML) project. Add references to the SharpDX assemblies: SharpDX.dll, SharpDX.Direct3D11.dll, SharpDX.D3DCompiler.dll, and SharpDX.DXGI.dll. Also, add a reference to the Common.WinRT project. Next, we remove the two XAML files from the project: App.xaml and MainPage.xaml. We will replace the previous application entry point, App.xaml, with a new static class called App. 
This will house the main entry point for our application where we start our Windows Store app using a custom view provider, as shown in the following snippet: using Windows.ApplicationModel.Core; using Windows.Graphics.Display; using Windows.UI.Core; ... internal static class App { [MTAThread] private static void Main() { var viewFactory = new D3DAppViewProviderFactory(); CoreApplication.Run(viewFactory); } // The custom view provider factory class D3DAppViewProviderFactory : IFrameworkViewSource { public IFrameworkView CreateView() { return new D3DAppViewProvider(); } } class D3DAppViewProvider : SharpDX.Component, IFrameworkView { ... } } The implementation of the IFrameworkView members of D3DAppViewProvider allows us to initialize an instance of a concrete descendant of the D3DAppCoreWindowTarget class within SetWindow and to implement the main application loop in the Run method. Windows.UI.Core.CoreWindow window; D3DApp d3dApp; // descends from D3DAppCoreWindowTarget public void Initialize(CoreApplicationView applicationView) { } public void Load(string entryPoint) { } public void SetWindow(Windows.UI.Core.CoreWindow window) { RemoveAndDispose(ref d3dApp); this.window = window; d3dApp = ToDispose(new D3DApp(window)); d3dApp.Initialize(); } public void Uninitialize() { } public void Run() { // Specify the cursor type as the standard arrow. window.PointerCursor = new CoreCursor( CoreCursorType.Arrow, 0); // Activate the application window, making it visible // and enabling it to receive events. window.Activate(); // Set the DPI and handle changes d3dApp.DeviceManager.Dpi = Windows.Graphics.Display .DisplayInformation.GetForCurrentView().LogicalDpi; Windows.Graphics.Display.DisplayInformation .GetForCurrentView().DpiChanged += (sender, args) => { d3dApp.DeviceManager.Dpi = Windows.Graphics.Display .DisplayInformation.GetForCurrentView().LogicalDpi; }; // Enter the render loop. Note that Windows Store apps // should never exit here.
while (true) { // Process events incoming to the window. window.Dispatcher.ProcessEvents( CoreProcessEventsOption.ProcessAllIfPresent); // Render frame d3dApp.Render(); } The D3DApp class follows the same structure as our previous recipes throughout the book. There are only a few minor differences, as highlighted in the following code snippet: class D3DApp : Common.D3DAppCoreWindowTarget { public D3DApp(Windows.UI.Core.CoreWindow window) : base(window) { this.VSync = true; } // Private member fields ... protected override void CreateDeviceDependentResources( Common.DeviceManager deviceManager) { ... create all device resources ... and create renderer instances here } // Render frame public override void Render() { var context = this.DeviceManager.Direct3DContext; // OutputMerger targets must be set every frame context.OutputMerger.SetTargets( this.DepthStencilView, this.RenderTargetView); // Clear depth-stencil and render target context.ClearDepthStencilView( this.DepthStencilView, SharpDX.Direct3D11.DepthStencilClearFlags.Depth | SharpDX.Direct3D11.DepthStencilClearFlags.Stencil, 1.0f, 0); context.ClearRenderTargetView( this.RenderTargetView, SharpDX.Color.LightBlue); ... setup context pipeline state ... perform rendering commands // Present the render target Present(); } } The following screenshot shows an example of the output using CubeRenderer and overlaying the 2D text with the TextRenderer class: Output from the simple Hello World sample using the CoreWindow render target
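As a closing note on the Run method above: it always drains pending window events before drawing each frame. That pump-then-draw shape can be sketched abstractly (plain Python with hypothetical stand-in objects; the real Windows Store loop runs forever on CoreWindow.Dispatcher):

```python
class FakeDispatcher:
    """Stand-in for CoreWindow.Dispatcher: counts event-pump calls."""
    def __init__(self):
        self.pumps = 0

    def process_events(self):  # ~ ProcessEvents(ProcessAllIfPresent)
        self.pumps += 1

class FakeRenderer:
    """Stand-in for D3DApp: counts rendered frames."""
    def __init__(self):
        self.frames = 0

    def render(self):  # ~ clear targets, draw, Present()
        self.frames += 1

def run(dispatcher, renderer, max_frames):
    # A real Store app loops forever; max_frames keeps the sketch finite.
    for _ in range(max_frames):
        dispatcher.process_events()  # handle input/resize first...
        renderer.render()            # ...then draw the frame
    return renderer.frames
```

Each frame pumps events exactly once before rendering, so input is never processed mid-draw.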

Anatomy of a Sprite Kit project

Packt
16 Jan 2014
12 min read
(For more resources related to this topic, see here.) A Sprite Kit project contains the elements usual to any iOS project. It has the AppDelegate, Storyboard, and ViewController classes and follows the usual structure of an iOS application. However, there is a difference in ViewController.view, which has the SKView class in Storyboard. You will handle everything that is related to Sprite Kit in SKView. This class will render your gameplay elements such as sprites, nodes, backgrounds, and everything else. You can't draw Sprite Kit elements on other views. It's important to understand that Sprite Kit introduces its own coordinate system. In UIKit, the origin (0,0) is located at the top-left corner, whereas Sprite Kit locates the origin at the bottom-left corner. This matters because all elements will be positioned relative to this coordinate system. The system originates from OpenGL, which Sprite Kit uses in its implementation. Scenes The SKScene object is where you place all of your other objects. It represents a single collection of objects, such as a level in your game. It is like a canvas where you position your Sprite Kit elements. Only one scene at a time is present on an SKView. A view knows how to transition between scenes, and you can have nice animated transitions. You may have one scene for menus, one for gameplay, and another for the screen that follows after the game ends. If you open your ViewController.m file, you will see how the SKScene object is created in the viewDidLoad method. Each SKView should have a scene, as all other objects are added to it. The scene and its objects form the node tree, which is a hierarchy of all nodes present in the scene. Open the ERGMyScene.m file. Here, you can find the method where scene initialization and setup take place: - (id)initWithSize:(CGSize)size The scene holds the view hierarchy of its nodes and draws all of them.
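Because Sprite Kit's origin sits at the bottom-left while UIKit's sits at the top-left, converting a point between the two systems means flipping the y coordinate against the view's height. Here is a minimal sketch of that flip (plain Python for illustration; in a real app, SKView's convertPoint:toScene: does this for you):

```python
def uikit_to_spritekit(x, y, view_height):
    """Flip a top-left-origin (UIKit) point to bottom-left-origin (Sprite Kit)."""
    return (x, view_height - y)
```

A touch at the top-left corner (0, 0) of a 480-point-tall view maps to (0, 480) in scene coordinates; applying the flip twice returns the original point, so the same function converts in both directions.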
Nodes are very much like UIViews; you can add nodes as children to other nodes, and the parent node will apply its effects (such as rotation or scaling) to all of its children, which makes working with complex nodes much easier. Each node has a position property, represented by a CGPoint in scene coordinates, which is used to set the node's coordinates. Changing this position property also changes the node's position on the screen. After you have created a node and set its position, you need to add it to your scene node hierarchy. You can add it either to the scene or to any existing node by calling the addChild: method. You can see this in your test project with the following line: [self addChild:myLabel]; After the node has been added to a visible node or scene, it will be drawn by the scene in the next iteration of the run loop. Nodes The main building block of every scene is SKNode. Most things you can see on the screen of any given Sprite Kit game are probably the result of an SKNode. The method that creates the SKLabelNode is self-explanatory; this node type is used to represent text in a scene. Node types There are different node types that serve different purposes: SKNode: This is a parent class for different types of nodes SKSpriteNode: This is a node that draws textured sprites SKLabelNode: This is a node that displays text SKShapeNode: This is a node that renders a shape based on a Core Graphics path SKEmitterNode: This is a node that creates and renders particles SKEffectNode: This is a node that applies a Core Image filter to its children Each of them has its own initializer methods—you create one, add it to your scene, and it does the job it was assigned to do. Some node properties and methods that you might find useful are: position: This is a CGPoint representing the position of a node in its parent coordinate system. zPosition: This is a CGFloat that represents the position of a node on an imaginary Z axis.
Nodes with higher zPosition will be over the nodes that have lower zPosition. If nodes have the same zPosition, the ones that were created later will appear on top of those that were created before. xScale and yScale: These are CGFloat values that allow you to change the size of any node. The default is 1.0, and setting any other value changes the sprite's size. This is not recommended: if you have an image of a certain resolution, scaling it up will make it look distorted, and making nodes smaller can lead to other visual artifacts. But if you need a quick and easy way to change the size of your nodes, this property is there. name: This is the name of the node that is used to locate it in the node hierarchy. This allows you to be flexible in your scenes, as you no longer need to store pointers to your nodes, and also saves you a lot of headache. childNodeWithName:(NSString *)name: This finds a child node with the specified name. If there are many nodes with the same name, the first one is returned. enumerateChildNodesWithName:usingBlock:: This allows you to run custom code on your nodes. This method is heavily used throughout any Sprite Kit game and can be used for movement, state changing, and other tasks. Actions Actions are one of the ways to add life to your game and make things interactive. Actions allow you to perform different things such as moving, rotating, or scaling nodes, playing sounds, and even running your custom code. When the scene processes its nodes, actions that are linked to these nodes are executed. To create an action, you call a class method for the action you need, set its properties, and call the runAction: method on your node with the action as a parameter. You may find some actions in the touchesBegan: method in ERGMyScene.m. In this method, you can see that a new node (of the type SKSpriteNode) is created, and then a new action is created and attached to it.
This action is embedded into another action that makes it repeat forever, and then the sprite runs the action and you see a rotating sprite on the screen. The whole process takes only five lines, and it is very intuitive. This is one of Sprite Kit's strengths—simplicity and self-documenting code. As you might have noticed, Apple names methods so that you can understand what they do just by reading them. Try to adhere to the same practice and name your variables and methods so that their function can be understood immediately. Avoid naming objects a or b; use characterSprite or enemyEmitter instead. There are different action types; here we will list some that you may need in your first project: Move actions (moveTo:duration:, moveByX:y:duration:, followPath:duration:) are actions that move the node around the scene Rotate actions are actions that rotate your nodes by a certain angle (rotateByAngle:duration:) Actions that change node scale over time (scaleBy:duration:) Actions that combine other actions (sequence: to play actions one after another, repeatAction:count: to play an action a certain number of times, and repeatActionForever: to repeat it indefinitely) There are many other actions, and you might look up the SKAction class reference if you want to learn more about them. Game loop Unlike UIKit, which is based on events and waits for user input before performing any drawing or interactions, Sprite Kit evaluates all nodes, their interactions, and physics as fast as it can (capped at 60 times per second) and produces results on screen. In the following figure, you can see the way a game loop operates: First, the scene calls the update:(CFTimeInterval)currentTime method and sends it the time at which this method was invoked.
The usual practice is to save the time of the last update and calculate the time elapsed since then (the delta). Multiplying a sprite's velocity by the delta gives the distance to move it this frame, so you get the same movement regardless of FPS. For example, if you want a sprite to move 100 pixels every second, regardless of your game's performance, you multiply the delta by 100. This way, if the scene took long to process, your sprite will move slightly further for this frame; if it was processed fast, it will move just a short distance. Either way, you get the expected results without complex calculations. After the update is done, the scene evaluates actions, simulates physics, and renders itself on screen. This process repeats itself as soon as it's finished. This allows for smooth movement and interactions. You will write the most essential code in the update: method, since it is called many times per second and everything that happens on screen is driven by the code we write in this method. You will usually iterate over all objects in your scene and dispatch some job for each to do, such as moving a character or removing bullets that have left the screen. The update: method is not used in a template project, but it is there if you want to customize it. Let's see how we can use it to move the Hello, World! label off the screen.
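Before wiring this up, the frame-rate-independence argument above can be checked with simple arithmetic. This is a plain Python sketch of the delta technique, not Sprite Kit code:

```python
def advance(x, velocity, delta):
    """Move a coordinate by velocity (points per second) scaled by elapsed time."""
    return x + velocity * delta

# Two uneven frames totalling 0.05 s at 100 points/second...
x_uneven = advance(advance(0.0, 100.0, 0.016), 100.0, 0.034)
# ...cover the same distance as a single smooth 0.05 s frame.
x_smooth = advance(0.0, 100.0, 0.05)
```

Both paths end up 5 points along, which is exactly why multiplying by the delta makes movement independent of how long each frame took.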
First, find where the label is created in the scene init method, and find this line: myLabel.text = @"Hello, World!"; Add this code right after it: myLabel.name = @"theLabel"; Find the update: method; it looks like this: - (void)update:(CFTimeInterval)currentTime Insert the following code into it: [self enumerateChildNodesWithName:@"theLabel" usingBlock:^(SKNode *node, BOOL *stop) { node.position = CGPointMake(node.position.x - 2, node.position.y); if (node.position.x < - node.frame.size.width) { node.position = CGPointMake(self.frame.size.width, node.position.y); } }]; This method first finds the child node with the name "theLabel"; as we named our label the same, it finds it and passes control to the block, with the found child passed in as node. If it finds other nodes with the name "theLabel", it will call the same block on all of them in the order they were found. Inside the block, we move the label 2 pixels to the left, keeping the vertical position the same. Then we check whether the label has moved past the left border of the screen by more than its own width; if so, we move it to the right-hand side of the screen. This way, we create a seamless movement in which the label should appear to be coming out of the right-hand side as soon as it moves off screen. But if you run the project, you will notice that the label does not behave as expected: it takes longer than it should to reappear and blinks on screen instead of moving gracefully. There are two problems with our code. The first issue is that the scene's frame does not change when you rotate the device; it stays the same. This happens because the scene size is incorrectly calculated at startup. We will fix that using the following steps: Locate the Endless Runner project root label in the left pane with our files. It says Endless Runner, 2 targets, iOS SDK 7.0. Select it and select the General pane on the main screen.
There, find the device orientation and the checkboxes near it. Remove everything but Landscape Left and Landscape Right. We will be making our game in landscape and we don't need the Portrait mode. Next, locate your ERGViewController.m file. Find the viewDidLoad method. Move everything after the [super viewDidLoad] call into a new method, as shown in the following code: - (void) viewWillLayoutSubviews { // Configure the view. [super viewWillLayoutSubviews]; SKView * skView = (SKView *)self.view; skView.showsFPS = YES; skView.showsNodeCount = YES; // Create and configure the scene. SKScene * scene = [ERGMyScene sceneWithSize:skView.bounds.size]; scene.scaleMode = SKSceneScaleModeAspectFill; // Present the scene. [skView presentScene:scene]; } Let's see why the frame size is calculated incorrectly by default. When the view has finished loading, the viewDidLoad method is called, but the view does not yet have its correct frame. It is only set to the correct dimensions sometime later, and it reports a portrait frame before that time. We fix this issue by setting up the scene after we get the correct frame. The second problem is the anchoring of the nodes. Unlike UIViews, which are placed on screen using their top-left corner coordinates, SKNodes are placed on the screen based on their anchorPoint property. The following figure explains what anchor points are. By default, the anchor point is set at (0.5, 0.5), which means that the sprite's position is its center point. This comes in handy when you need to rotate the sprite, as this way it rotates around its center. Imagine that the square in the preceding figure is your sprite. Different anchor points mean that you use these points as the position of the sprite. An anchor point of (0, 0) means that the bottom-left corner of the sprite will be at the sprite's position. If it is at (0.5, 0.5), the center of the sprite will be on the position point.
Anchor point coordinates go from 0 to 1 and are expressed as a fraction of the sprite's size. So, if you set your anchor point to (0.5, 0.5), it will be exactly at the sprite's center. We might want to use the (0, 0) anchor point for our text label. Summary In this article, we saw the different parts of a Sprite Kit project and got an idea about various objects such as scenes and nodes. Resources for Article: Further resources on this subject: Interface Designing for Games in iOS [Article] Linking OpenCV to an iOS project [Article] Creating a New iOS Social Project [Article]