
How-To Tutorials - Game Development

368 Articles

Using the Leap Motion Controller with Arduino

Packt | 19 Nov 2014 | 18 min read
This article by Brandon Sanders, the author of the book Mastering Leap Motion, focuses on what he specializes in: hardware. While normal applications are all fine and good, he finds it much more gratifying if a program he writes has an impact in the physical world.

One of the most popular hobbyist hardware solutions, as I'm sure you know, is the Arduino. This cute little blue board from Italy brought the power of microcontrollers to the masses. Throughout this article, we're going to work on integrating a Leap Motion Controller with an Arduino board via a simple program; the end goal is to make the built-in LED on an Arduino board blink either slower or faster depending on how far a user's hand is from the Leap. While this is a relatively simple task, it's a great way to demonstrate how you can connect something like the Leap to an external piece of hardware. From there, it's only a hop, skip, and jump to controlling robots and other cool things with the Leap!

This project follows the client-server model of programming: we'll write a simple Java server which runs on a computer, and a C++ client which runs on an Arduino board connected to that computer. The server is responsible for retrieving Leap Motion input and sending it to the client, while the client is responsible for making an LED blink based on the data received from the server.

Before we begin, I'd like to note that you can download the completed (and working) project from GitHub at https://github.com/Mizumi/Mastering-Leap-Motion-Chapter-9-Project-Leapduino.

A few things you'll need

Before you begin working on this tutorial, there are a few things you're going to need:

- A computer (for obvious reasons).
- A Leap Motion Controller.
- An Arduino of some kind. This tutorial is based around the Uno model, but other similar models like the Mega should work just as well.
- A USB cable to connect your Arduino to your computer.
- Optionally, the Eclipse IDE (this tutorial will assume you're using Eclipse for the sake of readability and instruction).

Setting up the environment

First off, you're going to need a copy of the Leap Motion SDK so that you can add the requisite library JAR files and DLLs to the project. If you don't already have it, you can get a copy of the SDK from https://www.developer.leapmotion.com/downloads/.

Next, you're going to need the Java Simple Serial Connector (JSSC) library and the Arduino IDE. You can download the library JAR file for JSSC from GitHub at https://github.com/scream3r/java-simple-serial-connector/releases. Once the download completes, extract the JAR file from the downloaded ZIP folder and store it somewhere safe; you'll need it later on in this tutorial.

You can then proceed to download the Arduino IDE from the official website at http://arduino.cc/en/Main/Software. If you're on Windows, you can download a Windows installer file which will automagically install the entire IDE on your computer. Mac and Linux users will instead need to download .zip or .tgz files, extract them manually, and run the executable binary from the extracted folder contents.

Setting up the project

To set up our project, perform the following steps:

1. The first thing we're going to do is create a new Java project. This can be easily achieved by opening up Eclipse (to reiterate, this tutorial assumes you're using Eclipse) and heading over to File | New | Java Project.
2. You will then be greeted by a project creation wizard, where you'll be prompted to choose a name for the project (I used Leapduino). Click on the Finish button when you're done.

My current development environment is based around the Eclipse IDE for Java Developers, which can be found at http://www.eclipse.org/downloads. The instructions that follow use Eclipse nomenclature and jargon, but they will still be usable if you're using something else (like NetBeans).

Once the project is created, navigate to it in the Package Explorer window and perform the following actions:

3. Create a new package for the project by right-clicking on the src folder for your project in the Package Explorer and then navigating to New | Package in the resulting tooltip. You can name it whatever you like; I personally called mine com.mechakana.tutorials.
4. Add three files to the newly-created package: Leapduino.java, LeapduinoListener.java, and RS232Protocol.java. To create a new file, simply right-click on the package and then navigate to New | Class.
5. Create a new folder in your project by right-clicking on the project name in the Package Explorer and then navigating to New | Folder in the resulting tooltip. For the purposes of this tutorial, please name it Leapduino.
6. Add one file to your newly created folder: Leapduino.ino. This file will contain all of the code that we're going to upload to the Arduino.
7. With all of our files created, we need to add the libraries to the project. Create a new folder at the root directory of your project, called lib. Within the lib folder, place the jssc.jar file that you downloaded earlier, along with the LeapJava.jar file from the Leap Motion SDK. Then, add the appropriate Leap.dll and LeapJava.dll files for your platform to the root of your project.
8. Finally, you'll need to modify your Java build path to link the LeapJava.jar and jssc.jar files to your project. This can be achieved by right-clicking on your project in the Package Explorer (within Eclipse) and navigating to Build Path… | Configure Build Path…. From there, go to the Libraries tab, click on Add JARs…, and select the two aforementioned JAR files (LeapJava.jar and jssc.jar).

When you're done, your project should look similar to the following screenshot. And you're done; now to write some code!

Writing the Java side of things

With everything set up and ready to go, we can start writing some code. First off, we're going to write the RS232Protocol class, which will allow our application to communicate with any Arduino board connected to the computer via a serial (RS-232) connection. This is where the JSSC library comes into play, allowing us to quickly and easily write code that would otherwise be quite lengthy (and not fun).

Fun fact: RS-232 is a standard for serial communication and transmission of data. There was a time when it was a common feature on personal computers, used for modems, printers, mice, hard drives, and so on. With time, though, the Universal Serial Bus (USB) technology replaced RS-232 in many of those roles. Despite this, today's industrial machines, scientific equipment, and (of course) robots still make heavy use of this protocol due to its light weight and ease of use; the Arduino is no exception!
Go ahead and open up the RS232Protocol.java file which we created earlier, and enter the following:

```java
package com.mechakana.tutorials;

import jssc.SerialPort;
import jssc.SerialPortEvent;
import jssc.SerialPortEventListener;
import jssc.SerialPortException;

public class RS232Protocol
{
   //Serial port we're manipulating.
   private SerialPort port;

   //Class: RS232Listener
   public class RS232Listener implements SerialPortEventListener
   {
      public void serialEvent(SerialPortEvent event)
      {
         //Check if data is available.
         if (event.isRXCHAR() && event.getEventValue() > 0)
         {
            try
            {
               int bytesCount = event.getEventValue();
               System.out.print(port.readString(bytesCount));
            }
            catch (SerialPortException e) { e.printStackTrace(); }
         }
      }
   }

   //Member Function: connect
   public void connect(String newAddress)
   {
      try
      {
         //Set up a connection.
         port = new SerialPort(newAddress);

         //Open the new port and set its parameters.
         port.openPort();
         port.setParams(38400, 8, 1, 0);

         //Attach our event listener.
         port.addEventListener(new RS232Listener());
      }
      catch (SerialPortException e) { e.printStackTrace(); }
   }

   //Member Function: disconnect
   public void disconnect()
   {
      try { port.closePort(); }
      catch (SerialPortException e) { e.printStackTrace(); }
   }

   //Member Function: write
   public void write(String text)
   {
      try { port.writeBytes(text.getBytes()); }
      catch (SerialPortException e) { e.printStackTrace(); }
   }
}
```

All in all, RS232Protocol is a simple class—there really isn't a whole lot to talk about here! However, I'd like to point your attention to one interesting part of the class:

```java
public class RS232Listener implements SerialPortEventListener
{
   public void serialEvent(SerialPortEvent event) { /*code*/ }
}
```

You might have found it rather odd that we didn't create a function for reading from the serial port—we only created a function for writing to it. This is because we've opted to utilize an event listener, the nested RS232Listener class. Under normal operating conditions, this class's serialEvent function will be called and executed every single time new information is received from the port. When this happens, the function will print all of the incoming data out to the user's screen. Isn't that nifty?

Moving on, our next class is a familiar one—LeapduinoListener, a simple Listener implementation. This class represents the meat of our program, receiving Leap Motion tracking data and then sending it over our serial port to the connected Arduino. Go ahead and open up LeapduinoListener.java and enter the following code:

```java
package com.mechakana.tutorials;

import com.leapmotion.leap.*;

public class LeapduinoListener extends Listener
{
   //Serial port that we'll be using to communicate with the Arduino.
   private RS232Protocol serial;

   //Constructor
   public LeapduinoListener(RS232Protocol serial)
   {
      this.serial = serial;
   }

   //Member Function: onInit
   public void onInit(Controller controller)
   {
      System.out.println("Initialized");
   }

   //Member Function: onConnect
   public void onConnect(Controller controller)
   {
      System.out.println("Connected");
   }

   //Member Function: onDisconnect
   public void onDisconnect(Controller controller)
   {
      System.out.println("Disconnected");
   }

   //Member Function: onExit
   public void onExit(Controller controller)
   {
      System.out.println("Exited");
   }

   //Member Function: onFrame
   public void onFrame(Controller controller)
   {
      //Get the most recent frame.
      Frame frame = controller.frame();

      //Verify a hand is in view.
      if (frame.hands().count() > 0)
      {
         //Get some hand tracking data.
         int hand = (int) (frame.hands().frontmost().palmPosition().getY());

         //Send the hand height to the Arduino.
         serial.write(String.valueOf(hand));

         //Give the Arduino some time to process our data.
         try { Thread.sleep(30); }
         catch (InterruptedException e) { e.printStackTrace(); }
      }
   }
}
```

In this class, we've got the basic Leap Motion API onInit, onConnect, onDisconnect, onExit, and onFrame functions. Our onFrame function is fairly straightforward: we get the most recent frame, verify a hand is within view, retrieve its y axis coordinate (height above the Leap Motion Controller), and then send it off to the Arduino via our instance of the RS232Protocol class (which gets assigned during initialization). The remaining functions simply print text out to the console telling us when the Leap has initialized, connected, disconnected, and exited (respectively).

And now, for our final class on the Java side of things: Leapduino! This is a super basic main class that simply initializes the RS232Protocol class and the LeapduinoListener—that's it! Without further ado, go ahead and open up Leapduino.java and enter the following code:

```java
package com.mechakana.tutorials;

import com.leapmotion.leap.Controller;

public class Leapduino
{
   //Main
   public static final void main(String args[])
   {
      //Initialize serial communications.
      RS232Protocol serial = new RS232Protocol();
      serial.connect("COM4");

      //Initialize the Leapduino listener.
      LeapduinoListener leap = new LeapduinoListener(serial);
      Controller controller = new Controller();
      controller.addListener(leap);
   }
}
```

Like all of the classes so far, there isn't a whole lot to say here. That said, there is one line that you must absolutely be aware of, since it can change depending on how your Arduino is connected:

```java
serial.connect("COM4");
```

Depending on which port Windows chose for your Arduino when it connected to your computer (more on that next), you will need to modify the COM4 value in the above line to match the port your Arduino is on. Examples of values you'll probably use are COM3, COM4, and COM5.
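If you're not sure which port your Arduino ended up on, JSSC can also enumerate the serial ports it sees. The following is a minimal sketch using the library's SerialPortList class; the class name ListPorts is just for illustration:

```java
import jssc.SerialPortList;

public class ListPorts
{
   public static void main(String[] args)
   {
      //Print the name of every serial port JSSC can find
      //(for example, COM3/COM4 on Windows or /dev/ttyUSB0 on Linux).
      for (String portName : SerialPortList.getPortNames())
      {
         System.out.println(portName);
      }
   }
}
```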
And with that, the Java side of things is complete. If you run this project right now, most likely all you'll see will be two lines of output: Initialized and Connected. If you want to see anything else happen, you'll need to move on to the next section and get the Arduino side of things working.

Writing the Arduino side of things

With our Java coding done, it's time to write some good old C++ for the Arduino. If you were able to use the Windows installer for Arduino, simply navigate to the Leapduino.ino file in your Eclipse project explorer and double-click on it. If you had to extract the entire Arduino IDE and store it somewhere instead, navigate to it and launch the Arduino.exe file. From there, select File | Open, navigate to the Leapduino.ino file on your computer, and double-click on it. You will now be presented with a screen similar to the one here:

This is the wonderful Arduino IDE—a minimalistic and straightforward text editor and compiler for the Arduino microcontrollers. On the top left of the IDE, you'll find two circular buttons: the check mark verifies (compiles) your code to make sure it works, and the arrow deploys your code to the Arduino board connected to your computer. On the bottom of the IDE, you'll find the compiler output console (the black box), and on the very bottom right you'll see a line of text telling you which Arduino model is connected to your computer and on what port (I have an Arduino Uno on COM4 in the preceding screenshot). As is typical for many IDEs and text editors, the big white area in the middle is where your code will go.

So, without further ado, let's get started with writing some code! Input all of the text shown here into the Arduino IDE:

```cpp
//Most Arduino boards have an LED pre-wired to pin 13.
int led = 13;

//Current LED state. LOW is off and HIGH is on.
int ledState = LOW;

//Blink rate in milliseconds.
long blinkRate = 500;

//Last time the LED was updated.
long previousTime = 0;

//Function: setup
void setup()
{
  //Initialize the built-in LED (assuming the Arduino board has one).
  pinMode(led, OUTPUT);

  //Start a serial connection at a baud rate of 38,400.
  Serial.begin(38400);
}

//Function: loop
void loop()
{
  //Get the current system time in milliseconds.
  unsigned long currentTime = millis();

  //Check if it's time to toggle the LED on or off.
  if (currentTime - previousTime >= blinkRate)
  {
    previousTime = currentTime;

    if (ledState == LOW) ledState = HIGH;
    else ledState = LOW;

    digitalWrite(led, ledState);
  }

  //Check if there is serial data available.
  if (Serial.available())
  {
    //Wait for all data to arrive.
    delay(20);

    //Our data.
    String data = "";

    //Iterate over all of the available data and compound it into a string.
    while (Serial.available())
      data += (char) (Serial.read());

    //Set the blink rate based on our newly-read data.
    blinkRate = abs(data.toInt() * 2);

    //A blink rate lower than 30 milliseconds won't really be
    //perceptible by a human.
    if (blinkRate < 30) blinkRate = 30;

    //Echo the data.
    Serial.println("Leapduino Client Received:");
    Serial.println("Raw Leap Data: " + data + " | Blink Rate (MS): " + blinkRate);
  }
}
```

Now, let's go over the contents. The first few lines are basic global variables, which we'll be using throughout the program (the comments do a good job of describing them, so we won't go into much detail here).

The first function, setup, is an Arduino's equivalent of a constructor; it's called only once, when the Arduino is first turned on. Within the setup function, we initialize the built-in LED (most Arduino boards have an LED pre-wired to pin 13) on the board. We then initialize serial communications at a baud rate of 38,400 bits per second—this will allow our board to communicate with the computer later on.

Fun fact: The baud rate (abbreviated as Bd in some diagrams) is the unit for symbol rate or modulation rate in symbols or pulses per second. Simply put, on serial ports, the baud rate controls how many bits a serial port can send per second—the higher the number, the faster a serial port can communicate. The question is, why don't we set a ridiculously high rate? Well, the higher you go with the baud rate, the more likely it is for there to be data loss—and we all know data loss just isn't good. For many applications, though, a baud rate of 9,600 to 38,400 bits per second is sufficient.

Moving on to the second function: loop is the main function in any Arduino program, and it is repeatedly called while the Arduino is turned on. Due to this, many programs treat any code within this function as if it were inside a while (true) loop.
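To make that mental model concrete, here is a rough sketch of what the Arduino core effectively does behind the scenes (simplified; the real main() also performs hardware initialization and services serial events between iterations):

```cpp
//Simplified view of the Arduino runtime; not code you need to write yourself.
int main()
{
  setup();      //Called exactly once at power-on or reset.

  while (true)
  {
    loop();     //Called over and over until the board loses power.
  }
}
```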
In loop, we start off by getting the current system time (in milliseconds) and comparing it to our ideal blink rate for the LED. If the time elapsed since our last blink exceeds the ideal blink rate, we toggle the LED on or off accordingly. We then check whether any data has been received over the serial port. If it has, we wait for a brief period of time, 20 milliseconds, to make sure all of the data has arrived. At that point, our code reads in all of the data, parses it for an integer (which becomes our new blink rate), and then echoes the data back out to the serial port for diagnostic purposes.

As you can see, an Arduino program (or sketch, as they are formally known) is quite simple. Why don't we test it out?

Deploying and testing the application

With all of the code written, it's time to deploy the Arduino side of things to the, well, Arduino. The first step is to simply open up your Leapduino.ino file in the Arduino IDE. Once that's done, navigate to Tools | Board and select the appropriate option for your Arduino board. In my case, it's an Arduino Uno. At this point, you'll want to verify that you have an Arduino connected to your computer via a USB cable—after all, we can't deploy to thin air!

Once everything is ready, simply hit the Deploy button in the top left of the IDE, as seen here. If all goes well, you'll see the corresponding output in the console after 15 or so seconds. And with that, your Arduino is ready to go! How about we test it out?

Keeping your Arduino plugged into your computer, go on over to Eclipse and run the project we just made. Once it's running, try moving your hand up and down over your Leap Motion Controller; if all goes well, you'll see the corresponding output from within the console in Eclipse. All of that data is coming directly from the Arduino, not your Java program; isn't that cool?

Now, take a look at your Arduino while you're doing this; you should notice that the built-in LED (labelled L on the board itself) will begin to blink slower or faster depending on how close your hand gets to the Leap.

Circled in red: the built-in L LED on an Arduino Uno, wired to pin 13 by default.

With this, you've created a simple Leap Motion application for use with an Arduino. From here, you could go on to make an Arduino-controlled robotic arm driven by coordinates from the Leap, or maybe an interactive light show. The possibilities are endless, and this is just the (albeit extremely, extremely simple) tip of the iceberg.

Summary

In this article, you had a lengthy look at some things you can do with the Leap Motion Controller and hardware such as the Arduino. If you have any questions, I encourage you to contact me directly at [email protected]. You can also visit my website, http://www.mechakana.com, for more technological goodies and tutorials.


2D Twin-stick Shooter

Packt | 11 Nov 2014 | 21 min read
This article, written by John P. Doran, the author of Unity Game Development Blueprints, teaches us how to use Unity to build a well-formed game. It also gives people experienced in this field a chance to build some great stuff.

The shoot 'em up genre is one of the earliest kinds of games. In shoot 'em ups, the player character is a single entity fighting a large number of enemies. They are typically played with a top-down perspective, which is perfect for 2D games. Shoot 'em up games also exist in many categories, based upon their design elements.

Elements of a shoot 'em up were first seen in the 1961 Spacewar! game. However, the concept wasn't popularized until 1978 with Space Invaders. The genre was quite popular throughout the 1980s and 1990s and went in many different directions, including bullet hell games, such as the titles of the Touhou Project. The genre has recently gone through a resurgence with games such as Bizarre Creations' Geometry Wars: Retro Evolved, which is more famously known as a twin-stick shooter.

Project overview

Over the course of this article, we will be creating a 2D multidirectional shooter game similar to Geometry Wars. In this game, the player controls a ship. This ship can move around the screen using the keyboard and shoot projectiles in the direction that the mouse points. Enemies and obstacles will spawn toward the player, and the player will avoid/shoot them. This article will also serve as a refresher on a lot of the concepts of working in Unity and give an overview of the recent addition of native 2D tools into Unity.

Your objectives

This project will be split into a number of tasks. It will be a simple step-by-step process from beginning to end. Here is the outline of our tasks:

- Setting up the project
- Creating our scene
- Adding in player movement
- Adding in shooting functionality
- Creating enemies
- Adding GameController to spawn enemy waves
- Particle systems
- Adding in audio
- Adding in points, score, and wave numbers
- Publishing the game

Prerequisites

Before we start, we will need to get the latest Unity version, which you can always get by going to http://unity3d.com/unity/download/ and downloading it there. At the time of writing this article, the version is 4.5.3, but this project should work in future versions with minimal changes.

Navigate to the preceding URL, download the Chapter1.zip package, and unzip it. Inside the Chapter1 folder, there are a number of things, including an Assets folder, which has the art, sound, and font files you'll need for the project, as well as Chapter_1_Completed.unitypackage (this is the complete article package that includes the entire project for you to work with). I've also added in the complete game exported (TwinstickShooter Exported) as well as the entire project zipped up in the TwinstickShooter Project.zip file.

Setting up the project

At this point, I have assumed that you have Unity freshly installed and have started it up. With Unity started, go to File | New Project. Select a Project Location of your choice somewhere on your hard drive, and ensure you have Set up defaults for set to 2D. Once completed, select Create. At this point, we will not need to import any packages, as we'll be making everything from scratch. It should look like the following screenshot:

From there, if you see the Welcome to Unity pop-up, feel free to close it, as we won't be using it.
At this point, you will be brought to the general Unity layout. Again, I'm assuming you have some familiarity with Unity before reading this article; if you would like more information on the interface, please visit http://docs.unity3d.com/Documentation/Manual/LearningtheInterface.html.

Keeping your Unity project organized is incredibly important. As your project moves from a small prototype to a full game, more and more files will be introduced. If you don't start organizing from the beginning, you'll keep planning to tidy it up later on, but as deadlines keep coming, things may get quite out of hand. This organization becomes even more vital when you're working as part of a team, especially if your team is telecommuting. Differing project structures across different coders/artists/designers is an awful mess to find yourself in. Setting up a project structure at the start and sticking to it will save you countless minutes in the long run and only takes a few seconds, which is what we'll do now. Perform the following steps:

1. Click on the Create drop-down menu below the Project tab in the bottom-left side of the screen.
2. From there, click on Folder, and you'll notice that a new folder has been created inside your Assets folder.
3. After the folder is created, you can type in a name for it. Once done, press Enter for the folder to be created. We need to create folders for the following directories:
   - Animations
   - Prefabs
   - Scenes
   - Scripts
   - Sprites

If you happen to create a folder inside another folder, you can simply drag-and-drop it from the left-hand side toolbar. If you need to rename a folder, simply click on it once and wait, and you'll be able to edit it again. You can also use Ctrl + D to duplicate a folder if it is selected.

Once you're done with the aforementioned steps, your project should look something like this:

Creating our scene

Now that we have our project set up, let's get started with creating our player:

1. Double-click on the Sprites folder. Once inside, go to your operating system's browser window, open up the Chapter 1/Assets folder that we provided, and drag the playerShip.png file into the folder to move it into our project.
2. Once added, confirm that the image is a Sprite by clicking on it and checking in the Inspector tab that Texture Type is set to Sprite. If it isn't, simply change it, and then click on the Apply button.

If you do not want to drag-and-drop the files, you can also right-click within the folder in the Project Browser (bottom-left corner) and select Import New Asset to select a file from a folder to bring it in. The art assets used for this tutorial were provided by Kenney. To see more of their work, please check out www.kenney.nl.

3. Next, drag-and-drop the ship into the scene (the center part that's currently dark gray). Once completed, set the position of the sprite to the center of the screen (0, 0) by right-clicking on the Transform component and then selecting Reset Position.
4. Now, with the player in the world, let's add in a background. Drag-and-drop the background.png file into your Sprites folder. After that, drag-and-drop a copy into the scene.
If you put the background on top of the ship, you'll notice that the background is currently drawn in front of the player (Unity puts newly added objects on top of previously created ones if their position on the Z axis is the same; this is commonly referred to as the z-order), so let's fix that.

Objects on the same Z axis without a sorting layer are considered equal in terms of draw order; just because a scene looks a certain way this time doesn't mean it will look the same when you reload the level. The only way to guarantee that one object is drawn in front of another in 2D space is to give them different Z values or to use sorting layers.

5. Select your background object, and go to the Sprite Renderer component in the Inspector tab. Under Sorting Layer, select Add Sorting Layer. After that, click on the + icon for Sorting Layers, and then give Layer 1 the name Background. Now, create sorting layers for Foreground and GUI as well.
6. Now, place the player ship on the Foreground layer and the background sprite on the Background layer by selecting each object and setting its Sorting Layer property via the drop-down menu. If you play the game now, you'll see that the ship is drawn in front of the background.
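The same draw-order settings can also be changed from code at runtime rather than through the Inspector. The following is a minimal sketch, assuming the layer names match the ones we just created (the script and class names here are only for illustration):

```csharp
using UnityEngine;

public class SortingSetup : MonoBehaviour
{
    void Start()
    {
        // Grab the SpriteRenderer attached to this GameObject.
        SpriteRenderer sprite = GetComponent<SpriteRenderer>();

        // Draw this sprite on the Foreground sorting layer.
        sprite.sortingLayerName = "Foreground";

        // Within a single layer, higher values are drawn on top.
        sprite.sortingOrder = 1;
    }
}
```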
At this point, we could just duplicate our background a number of times to create the full background by selecting the object in the Hierarchy, but that is tedious and time-consuming. Instead, we can create all of the duplicates either by using code or by creating a tileable texture. For our purposes, we'll create a texture:

7. Delete the background sprite by left-clicking on the background object in the Hierarchy tab on the left-hand side and then pressing the Delete key. Then, select the background sprite in the Project tab, change Texture Type in the Inspector tab to Texture, and click on Apply.
8. Now, let's create a 3D cube by selecting Game Object | Create Other | Cube from the top toolbar. Change the object's name from Cube to Background. In the Transform component, change the Position to (0, 0, 1) and the Scale to (100, 100, 1). If you are using Unity 4.6, you will need to go to Game Object | 3D Object | Cube to create the cube.

Since our camera is at (0, 0, -10) and the player is at (0, 0, 0), putting the object at position (0, 0, 1) places it behind all of our sprites. By creating a 3D object and scaling it, we are making it really large, much larger than the player's monitor. If we scaled a sprite instead, it would be one really large image with pixelation, which would look bad. With a 3D object, the texture applied to the faces of the object is repeated, and since the image is tileable, it looks like one big continuous image.

9. Remove the Box Collider by right-clicking on it and selecting Remove Component.
10. Next, we need to create a material for our background to use. To do so, under the Project tab, select Create | Material, and name the material BackgroundMaterial. Under the Shader property, click on the drop-down menu and select Unlit | Texture. Click on the Texture box on the right-hand side, and select the background texture. Once completed, set the Tiling property's x and y to 25.

In addition to selecting from the menu, you can also drag-and-drop the background texture directly onto the Texture box, and it will set the property. Tiling tells Unity how many times the image should repeat in the x and y directions, respectively.

11. Finally, go back to the Background object in the Hierarchy. Under the Mesh Renderer component, open up Materials by left-clicking on the arrow, and change Element 0 to our BackgroundMaterial material.

Now, when we play the game, you'll see that we have a complete background that tiles properly.

Scripting 101

In Unity, the behavior of game objects is controlled by the different components that are attached to them, in a form of association called composition. These components are things that we can add and remove at any time to create much more complex objects. If you want to do anything that isn't already provided by Unity, you'll have to write it on your own through a process we call scripting. Scripting is an essential element in all but the simplest of video games.

Unity allows you to code in C#, Boo, or UnityScript, a language designed specifically for use with Unity and modelled after JavaScript. For this article, we will use C#. C# is an object-oriented programming language—an industry-standard language similar to Java or C++. The majority of plugins from the Asset Store are written in C#, and code written in C# can be ported to other platforms, such as mobile, with very minimal code changes. C# is also a strongly-typed language, which means that if there is any issue with the code, it will be identified within Unity, which will stop you from running the game until it's fixed. This may seem like a hindrance, but when working with code, I much prefer to write correct code and solve problems before they escalate into something much worse.

Implementing player movement

Now, at this point, we have a great-looking game, but nothing at all happens. Let's change that using our player. Perform the following steps:

1. Right-click on the Scripts folder you created earlier, click on Create, and select the C# Script label. Once you click on it, a script will appear in the Scripts folder; it should already have focus and be asking you to type a name for the script—call it PlayerBehaviour.
2. Double-click on the script in Unity, and it will open MonoDevelop, an open source integrated development environment (IDE) that is included with your Unity installation.

After MonoDevelop has loaded, you will be presented with the C# stub code that Unity created for you automatically when you created the C# script. Let's break down what's currently there before we replace some of it with new code. At the top, you will see two lines:

```csharp
using UnityEngine;
using System.Collections;
```

The engine knows that if we refer to a class that isn't located inside this file, then it has to reference the files within these namespaces for the referenced class before giving an error. We are currently using two namespaces. The UnityEngine namespace contains interfaces and class definitions that let MonoDevelop know about all the addressable objects inside Unity. The System.Collections namespace contains interfaces and classes that define various collections of objects, such as lists, queues, bit arrays, hash tables, and dictionaries. We will be using a list, so change the second line to the following:

```csharp
using System.Collections.Generic;
```

The next line you'll see is:

```csharp
public class PlayerBehaviour : MonoBehaviour {
```

You can think of a class as a kind of blueprint for creating a new component type that can be attached to GameObjects, the objects inside our scenes that start out with just a Transform and then have components added to them.
When Unity created our C# stub code, it took care of that; we can see the result, as our file is called PlayerBehaviour and the class is also called PlayerBehaviour. Make sure that your .cs file and the name of the class match, as they must be the same for the script component to be attachable to a game object.

Next up is the MonoBehaviour part of the line. The : symbol signifies that we inherit from a particular class; in this case, MonoBehaviour. All behavior scripts must inherit from MonoBehaviour, directly or indirectly. Inheritance is the idea of having an object be based on another object or class, using the same implementation. With this in mind, all the functions and variables that exist inside the MonoBehaviour class also exist in the PlayerBehaviour class, because PlayerBehaviour is a MonoBehaviour. For more information on the MonoBehaviour class and all of its functions and properties, check out http://docs.unity3d.com/ScriptReference/MonoBehaviour.html.

Directly after this line, we will want to add some variables to help us with the project. Variables are pieces of data that we wish to hold on to for one reason or another, typically because they will change over the course of a program and we will do different things based on their values. Add the following code under the class definition:

```csharp
// Movement modifier applied to directional movement.
public float playerSpeed = 2.0f;

// What the current speed of our player is
private float currentSpeed = 0.0f;

/*
 * Allows us to have multiple inputs and supports keyboard,
 * joystick, etc.
 */
public List<KeyCode> upButton;
public List<KeyCode> downButton;
public List<KeyCode> leftButton;
public List<KeyCode> rightButton;

// The last movement that we've made
private Vector3 lastMovement = new Vector3();
```

Between the variable definitions, you will notice comments that explain what each variable is and how we'll use it. To write a comment, you can simply add // to the beginning of a line, and everything after it is commented out so that the compiler/interpreter won't see it. If you want to write something longer than one line, you can use /* to start a comment; everything that follows is commented until you write */ to close it. It's always a good idea to do this in your own coding endeavors for anything that doesn't make sense at first glance.

For those of you working on your own projects in teams, there is an additional form of commenting that Unity supports, which may make your life much nicer: XML comments. They take up more space than the comments we are using, but they also document your code for you. For a nice tutorial about that, check out http://unitypatterns.com/xml-comments/.

In our game, the player may want to move up using either the arrow keys or the W key. You may even want to use something else. Rather than restricting the player to just one button, we will store all the possible ways to go up, down, left, or right in their own containers. To do this, we are going to use a list, which is a holder for multiple objects that we can add to or remove from while the game is being played. For more information on lists, check out http://msdn.microsoft.com/en-us/library/6sh2ey19(v=vs.110).aspx.

One of the things you'll notice is the public and private keywords before the variable types. These are access modifiers that dictate who can and cannot use these variables.
The public keyword means that any other class can access that property, while private means that only this class can access the variable. Here, currentSpeed is private because we don't want our current speed to be modified or set anywhere else. But you'll notice something interesting about the public variables that we've created:

3. Go back into the Unity project and drag-and-drop the PlayerBehaviour script onto the playerShip object. Before going back to the Unity project, though, make sure that you save your PlayerBehaviour script. Not saving is a very common mistake made by people working with MonoDevelop.

You'll notice that the public variables we created are now visible inside the Inspector for the component. This means that we can actually set those variables inside the Inspector without having to modify the code, allowing us to tweak values very easily, which is a godsend for many game designers. You may also notice that the names have changed to be more readable. This is because of the naming convention that we are using, with each word starting with a capital letter. This convention is called CamelCase (more specifically, headlessCamelCase).

4. Now change the Size of each of the Button variables to 2, and fill in Element 0 with the appropriate arrow key and Element 1 with W for up, A for left, S for down, and D for right. When this is done, it should look something like the following screenshot:

Now that we have our variables set, go back to MonoDevelop to work on the script some more.

The line after that is a function definition for a method called Start; it isn't a user method but one that belongs to MonoBehaviour. Where variables are data, functions are the things that modify and/or use that data. Functions are self-contained modules of code (enclosed within braces, { and }) that accomplish a certain task. The nice thing about a function is that once it is written, it can be used over and over again. Functions can be called from inside other functions:

```csharp
void Start () {

}
```

Start is called only once in the lifetime of the behavior, when the game starts, and is typically used to initialize data. If you're used to other programming languages, you may be surprised that initialization of an object is not done using a constructor function. This is because the construction of objects is handled by the editor and does not take place at the start of gameplay as you might expect. If you attempt to define a constructor for a script component, it will interfere with the normal operation of Unity and can cause major problems with the project. However, for this behavior, we will not need the Start function:

5. Delete the Start function and its contents.

The next function included in the stub is the Update function. Also inherited from MonoBehaviour, this function is called for every frame that the component exists in, for each object it's attached to. We want to update our player ship's rotation and movement every frame.

6. Inside the Update function (between { and }), put the following lines of code:

```csharp
// Rotate player to face mouse
Rotation();

// Move the player's body
Movement();
```

Here, I called two functions that don't exist yet, because we haven't created them. Let's do that now!
7. Below the Update function, and before the } that closes the class, put the following function:

```csharp
// Will rotate the ship to face the mouse.
void Rotation()
{
    // We need to tell where the mouse is relative to the player.
    Vector3 worldPos = Input.mousePosition;
    worldPos = Camera.main.ScreenToWorldPoint(worldPos);

    /*
     * Get the differences from each axis (stands for
     * deltaX and deltaY)
     */
    float dx = this.transform.position.x - worldPos.x;
    float dy = this.transform.position.y - worldPos.y;

    // Get the angle between the two objects
    float angle = Mathf.Atan2(dy, dx) * Mathf.Rad2Deg;

    /*
     * The transform's rotation property uses a Quaternion,
     * so we need to convert the angle in a Vector
     * (The Z axis is for rotation for 2D).
     */
    Quaternion rot = Quaternion.Euler(new Vector3(0, 0, angle + 90));

    // Assign the ship's rotation
    this.transform.rotation = rot;
}
```

Now, if you comment out the Movement line and run the game, you'll notice that the ship rotates in the direction of the mouse. Have a look at the following screenshot:

8. Below the Rotation function, we now need to add our Movement function:

```csharp
// Will move the player based off of keys pressed
void Movement()
{
    // The movement that needs to occur this frame
    Vector3 movement = new Vector3();

    // Check for input
    movement += MoveIfPressed(upButton, Vector3.up);
    movement += MoveIfPressed(downButton, Vector3.down);
    movement += MoveIfPressed(leftButton, Vector3.left);
    movement += MoveIfPressed(rightButton, Vector3.right);

    /*
     * If we pressed multiple buttons, make sure we're only
     * moving the same length.
     */
    movement.Normalize();

    // Check if we pressed anything
    if (movement.magnitude > 0)
    {
        // If we did, move in that direction
        currentSpeed = playerSpeed;
        this.transform.Translate(movement * Time.deltaTime * playerSpeed, Space.World);
        lastMovement = movement;
    }
    else
    {
        // Otherwise, move in the direction we were going
        this.transform.Translate(lastMovement * Time.deltaTime * currentSpeed, Space.World);

        // Slow down over time
        currentSpeed *= .9f;
    }
}
```

Inside this function, I've used another function called MoveIfPressed, so we'll need to add that in as well.

9. Below the Movement function, add in the following function:

```csharp
/*
 * Will return the movement if any of the keys are pressed,
 * otherwise it will return (0,0,0)
 */
Vector3 MoveIfPressed(List<KeyCode> keyList, Vector3 Movement)
{
    // Check each key in our list
    foreach (KeyCode element in keyList)
    {
        if (Input.GetKey(element))
        {
            /*
             * It was pressed, so we leave the function
             * with the movement applied.
             */
            return Movement;
        }
    }

    // None of the keys were pressed, so we don't need to move
    return Vector3.zero;
}
```

10. Now, save your file and move back into Unity. Save your current scene as Chapter_1.unity by going to File | Save Scene. Make sure to save the scene to the Scenes folder we created earlier.
11. Run the game by pressing the play button.

Now you'll see that we can move using the arrow keys or the W A S D keys, and our ship rotates to face the mouse. Great!

Summary

This article talked about the 2D twin-stick shooter game. It helps bring you to familiarity with the game development features in Unity.


Scaling friendly font rendering with distance fields

Packt | 28 Oct 2014 | 8 min read
This article by David Saltares Márquez and Alberto Cejas Sánchez, the authors of Libgdx Cross-platform Game Development Cookbook, describes how we can generate a distance field font and render it in Libgdx.

As a bitmap font is scaled up, it becomes blurry due to linear interpolation. It is possible to tell the underlying texture to use the nearest filter, but the result will be pixelated. Additionally, until now, if you wanted big and small pieces of text using the same font, you would have had to export it twice at different sizes. The output texture gets bigger rather quickly, and this is a memory problem.

Distance field fonts are a technique that enables us to scale monochromatic textures without losing quality, which is pretty amazing. It was first published by Valve (Half-Life, Team Fortress…) in 2007. It involves an offline preprocessing step and a very simple fragment shader when rendering, but the results are great and there is very little performance penalty. You also get to use smaller textures! In this article, we will cover the entire process of how to generate a distance field font and how to render it in Libgdx.

Getting ready

For this, we will load the data/fonts/oswald-distance.fnt and data/fonts/oswald.fnt files. To generate the fonts, Hiero is needed, so download the latest Libgdx package from http://libgdx.badlogicgames.com/releases and unzip it. Make sure the sample projects are in your workspace. Please visit https://github.com/siondream/libgdx-cookbook to download the sample projects, which you will need.

How to do it…

First, we need to generate a distance field font with Hiero. Then, a special fragment shader is required to finally render scaling-friendly text in Libgdx.

Generating distance field fonts with Hiero

1. Open up Hiero from the command line. Linux and Mac users only need to replace semicolons with colons and backslashes with forward slashes:

```
java -cp gdx.jar;gdx-natives.jar;gdx-backend-lwjgl.jar;gdx-backend-lwjgl-natives.jar;extensions\gdx-tools\gdx-tools.jar com.badlogic.gdx.tools.hiero.Hiero
```

2. Select the font using either the System or File options. This time, you don't need a really big size; the point is to generate a small texture and still be able to render text at high resolutions while maintaining quality. We have chosen 32 this time.
3. Remove the Color effect, and add a white Distance field effect.
4. Set the Spread; the thicker the font, the bigger this value should be. For Oswald, 4.0 seems to be a sweet spot.
5. To cater to the spread, you need to set a matching padding. Since this will make the characters render further apart, counterbalance it by setting the X and Y values to twice the negative padding.
6. Finally, set the Scale to be the same as the font size. Hiero will struggle to render the charset, which is why we wait until the end to set this property.
7. Generate the font by going to File | Save BMFont files (text)….

The following is the Hiero UI showing a font texture with a Distance field effect applied to it:

Distance field fonts shader

We cannot use the distance field texture to render text directly, for obvious reasons—it is blurry! A special shader is needed to take the information from the distance field and transform it into the final, smoothed result. The vertex shader, found in data/fonts/font.vert, is simple. The magic takes place in the fragment shader, found in data/fonts/font.frag and explained next.
First, we sample the alpha value from the texture for the current fragment and call it distance. Then, we use the smoothstep() function to obtain the actual fragment alpha. If distance is between 0.5-smoothing and 0.5+smoothing, Hermite interpolation is used. If the distance is greater than 0.5+smoothing, the function returns 1.0, and if the distance is smaller than 0.5-smoothing, it returns 0.0. The code is as follows:

```glsl
#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

uniform sampler2D u_texture;

varying vec4 v_color;
varying vec2 v_texCoord;

const float smoothing = 1.0/128.0;

void main()
{
    float distance = texture2D(u_texture, v_texCoord).a;
    float alpha = smoothstep(0.5 - smoothing, 0.5 + smoothing, distance);
    gl_FragColor = vec4(v_color.rgb, alpha * v_color.a);
}
```

The smoothing constant determines how hard or soft the edges of the font will be. Feel free to play around with the value and render fonts at different sizes to see the results. You could also make it a uniform and configure it from the code.
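For reference, smoothstep() is defined in the GLSL specification as a clamped Hermite curve; this short sketch of its standard definition shows why distances below the lower edge become fully transparent and those above the upper edge fully opaque:

```glsl
// Standard GLSL definition of smoothstep(edge0, edge1, x):
float t = clamp((x - edge0) / (edge1 - edge0), 0.0, 1.0);
float result = t * t * (3.0 - 2.0 * t);  // 0.0 at edge0, 1.0 at edge1
```

With edge0 = 0.5 - smoothing and edge1 = 0.5 + smoothing, only the thin band of distances around 0.5 (the glyph's edge) receives intermediate alpha values; everything else snaps to fully in or fully out.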
Rendering distance field fonts in Libgdx

Let's move on to DistanceFieldFontSample.java, where we have two BitmapFont instances: normalFont (pointing to data/fonts/oswald.fnt) and distanceFont (pointing to data/fonts/oswald-distance.fnt). This will help us illustrate the difference between the two approaches. Additionally, we have a ShaderProgram instance for our previously defined shader.

In the create() method, we instantiate both the fonts and the shader normally:

```java
normalFont = new BitmapFont(Gdx.files.internal("data/fonts/oswald.fnt"));
normalFont.setColor(0.0f, 0.56f, 1.0f, 1.0f);
normalFont.setScale(4.5f);

distanceFont = new BitmapFont(Gdx.files.internal("data/fonts/oswald-distance.fnt"));
distanceFont.setColor(0.0f, 0.56f, 1.0f, 1.0f);
distanceFont.setScale(4.5f);

fontShader = new ShaderProgram(Gdx.files.internal("data/fonts/font.vert"),
                               Gdx.files.internal("data/fonts/font.frag"));

if (!fontShader.isCompiled()) {
   Gdx.app.error(DistanceFieldFontSample.class.getSimpleName(),
                 "Shader compilation failed:\n" + fontShader.getLog());
}
```

We need to make sure that the texture our distanceFont just loaded is using linear filtering:

```java
distanceFont.getRegion().getTexture().setFilter(TextureFilter.Linear, TextureFilter.Linear);
```

Remember to free up resources in the dispose() method, and let's get on with render(). First, we render some text with the regular font using the default shader, and right after this, we do the same with the distance field font using our awesome shader:

```java
batch.begin();
batch.setShader(null);
normalFont.draw(batch, "Distance field fonts!", 20.0f, VIRTUAL_HEIGHT - 50.0f);

batch.setShader(fontShader);
distanceFont.draw(batch, "Distance field fonts!", 20.0f, VIRTUAL_HEIGHT - 250.0f);
batch.end();
```

The results are pretty obvious; it is a huge win of memory and quality over a very small price in GPU time. Try increasing the font size even more and be amazed at the results! You might have to slightly tweak the smoothing constant in the shader code, though.

How it works…

Let's explain the fundamentals behind this technique. For a thorough explanation, we recommend that you read the original paper by Chris Green from Valve (http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf).

A distance field is a derived representation of a monochromatic texture. For each pixel in the output, the generator determines whether the corresponding one in the original is colored or not. Then, it examines its neighborhood to determine the 2D distance, in pixels, to a pixel with the opposite state. Once the distance is calculated, it is mapped to a [0, 1] range, with 0 being the maximum negative distance and 1 being the maximum positive distance. A value of 0.5 indicates the exact edge of the shape. The following figure illustrates this process:

Within Libgdx, the BitmapFont class uses SpriteBatch to render text normally, only this time it uses a texture with a Distance field effect applied to it. The fragment shader is responsible for performing a smoothing pass: if the alpha value for a fragment is higher than 0.5, it can be considered in; it will be out in any other case. This produces a clean result.

There's more…

We have applied distance fields to text, but we have also mentioned that the technique works with any monochromatic image. It is simple; you need to generate a low-resolution distance field transform. Luckily enough, Libgdx comes with a tool that does just this. Open a command-line window, access your Libgdx package folder, and enter the following command:

```
java -cp gdx.jar;gdx-natives.jar;gdx-backend-lwjgl.jar;gdx-backend-lwjgl-natives.jar;extensions\gdx-tools\gdx-tools.jar com.badlogic.gdx.tools.distancefield.DistanceFieldGenerator
```

The distance field generator takes the following parameters:

- --color: the color in hexadecimal RGB format; the default is ffffff
- --downscale: the factor by which the original texture will be downscaled
- --spread: the edge scan distance, expressed in terms of the input

Take a look at this example:

```
java […] DistanceFieldGenerator --color ff0000 --downscale 32 --spread 128 texture.png texture-distance.png
```

Alternatively, you can use the gdx-smart-font library to handle scaling. It is a simpler but somewhat more limited solution (https://github.com/jrenner/gdx-smart-font).

Summary

In this article, we covered the entire process of how to generate a distance field font and how to render it in Libgdx.


Animation and Unity3D Physics

Packt | 27 Oct 2014 | 5 min read
In this article, written by K. Aava Rani, author of the book Learning Unity Physics, you will learn to use physics in animation creation. We will see that several animations can be easily handled by Unity3D's physics, and during development you will find that working with animations and physics in Unity3D is easy; the combination of the two is very interesting. We are going to cover the following topics:

- Interpolate and Extrapolate
- The Cloth component and its uses in animation

Developing simple and complex animations

As mentioned earlier, you will learn how to handle and create simple and complex animations using physics, for example, creating a rope animation and a hanging ball. Let's start with the physics properties of the Rigidbody component, which help in syncing animation.

Interpolate and Extrapolate

Unity3D offers a way for its Rigidbody component to help in the syncing of animation: we sync animation using the interpolation and extrapolation properties. Interpolation is not only for animation; it also works with a Rigidbody. Let's see in detail how interpolation and extrapolation work:

1. Create a new scene and save it.
2. Create a Cube game object and apply a Rigidbody to it. Look at the Inspector panel shown in the following screenshot.
3. On clicking Interpolate, a drop-down list with three options will appear: None, Interpolate, and Extrapolate. By selecting any one of them, we can apply the feature.

In interpolation, the position of an object is calculated from the current update time, moving it backwards one physics update delta time. Delta time, or delta timing, is a concept used among programmers in relation to frame rate and time. For more details, check out http://docs.unity3d.com/ScriptReference/Time-deltaTime.html.

If you look closely, you will observe that there are at least two physics updates involved:

- One ahead of the chosen time
- One behind the chosen time

Unity interpolates between these two updates to get a position for the update. So, we can say that interpolation actually lags behind by one physics update.

The second option is Extrapolate. In this case, Unity predicts the future position of the object. Although this does not show any lag, an incorrect prediction sometimes causes a visual jitter.
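These modes can also be set from a script instead of the Inspector. The following is a minimal sketch, assuming the script is attached to the Cube with the Rigidbody we just created (the class name is only for illustration):

```csharp
using UnityEngine;

public class InterpolationSetup : MonoBehaviour
{
    void Start()
    {
        Rigidbody body = GetComponent<Rigidbody>();

        // Smooth rendering by interpolating between the two most
        // recent physics updates (lags one physics step behind).
        body.interpolation = RigidbodyInterpolation.Interpolate;

        // Alternatively, predict the next physics update instead:
        // body.interpolation = RigidbodyInterpolation.Extrapolate;
    }
}
```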
One more important component that is widely used to animate cloth is the Cloth component. Here, you will learn about its properties and how to use it.

The Cloth component

To make cloth animation easy, Unity provides an interactive component called Cloth. In the GameObject menu, you can directly create a Cloth game object. Have a look at the following screenshot:

Unity also provides Cloth components in its Physics section. To apply one, let's look at an example:

1. Create a new scene and save it.
2. Create a Plane game object. (We can also create a Cloth game object.)
3. Navigate to Component | Physics and choose InteractiveCloth.

As shown in the following screenshot, you will see the following properties in the Inspector panel.

Let's have a look at the properties one by one. Bending Stiffness and Stretching Stiffness define the bending and stretching stiffness of the cloth, while Damping damps the motion of the cloth. Using the Thickness property, we decide the thickness of the cloth, which ranges from 0.001 to 10,000. If we enable the Use Gravity property, gravity will affect the cloth simulation. Similarly, enabling Self Collision allows the cloth to collide with itself. For a constant or random acceleration, we apply the External Acceleration and Random Acceleration properties, respectively. World Velocity Scale decides how strongly the character's movement through the world affects the cloth vertices: the higher the value, the more the character's movement affects the cloth. World Acceleration works similarly for acceleration.

The Interactive Cloth component depends on the Cloth Renderer component, and having many Cloth components in a game reduces its performance. To simulate clothing on characters, we use the Skinned Cloth component instead.

Important points while using the Cloth component

The following are the important points to remember while using the Cloth component:

- Cloth simulation will not generate tangents. So, if you are using a tangent-dependent shader, the lighting will look wrong for a Cloth component that has been moved from its initial position.
- We cannot directly change the transform of a moving Cloth game object; this is not supported. Disabling the Cloth before changing the transform is supported.

The SkinnedCloth component works together with the SkinnedMeshRenderer to simulate clothing on a character. As shown in the following screenshot, we can apply Skinned Cloth. As you can see in the next screenshot, there are different properties that we can use to get the desired effect. We can disable or enable the Skinned Cloth component at any time to turn it on or off.

Summary

In this article, you learned how interpolation and extrapolation work. We also covered the Cloth component and its uses in animation with an example.

Resources for Article:

Further resources on this subject:

- Animations in Cocos2d-x [article]
- Unity Networking – The Pong Game [article]
- The physics engine [article]

Animations in Cocos2d-x

Packt
23 Sep 2014
24 min read
In this article, created by Siddharth Shekhar, the author of Learning Cocos2d-x Game Development, we will learn about the different tools that can be used to animate a character. Then, using these animations, we will create a simple state machine that automatically checks whether the hero is falling or being boosted up into the air and animates the character according to its state. We will cover the following in this article:

- Animation basics
- TexturePacker
- Creating a spritesheet for the player
- Creating and coding the enemy animation
- Creating the skeletal animation
- Coding the player walk cycle

(For more resources related to this topic, see here.)

Animation basics

First of all, let's understand what animation is. An animation is made up of different images played in a certain order and at a certain speed; movies, for example, run images at 30 fps or 24 fps, depending on whether the format is NTSC or PAL. When you pause a movie, you are actually seeing an individual image of that movie, and if you play the movie in slow motion, you can see the individual frames that make up the full movie. While making animations in games, we do the same thing: we add frames and run them at a certain speed, controlling the sequence and interval of the images in code.

For an animation to be "smooth", you should have at least 24 images, or frames, played per second, which is known as frames per second (FPS). Each of the images in the animation is called a frame.

Let's take the example of a simple walk cycle, which should be 24 frames long. You might say that it is a lot of work, and it certainly is, but the good news is that these 24 frames can be broken down into keyframes, the important images that create the illusion of the character walking. The more frames you add between these keyframes, the smoother the animation will be. The keyframes for a walk cycle are the Contact, Down, Pass, and Up positions. For mobile games, where we want to get away with as little work as possible, some games use just the four keyframes to create a walk animation and then speed the animation up so that the player cannot see the missing frames. So overall, if you are making a walk cycle for your character, you will create eight images: four frames for each side. For a stylized walk cycle, you can get away with even fewer frames.

For the animation in our game, we will create images to cycle through for two sets of animation: an idle animation, played while the player is moving down, and a boost animation, played when the player is boosted up into the air.

Creating animation in games is done using two methods: the most popular form is called spritesheet animation, and the other is called skeletal animation.

Spritesheet animation

In spritesheet animation, you keep all the frames of the animation in a single file, accompanied by a data file that holds the name and location of each frame. This is very similar to the BitmapFont. The following is the spritesheet we will be using in the game. For the boost and idle animations, the frames of the corresponding animation are stored in an array and made to loop at a predefined speed. The top four images are the frames for the boost animation. Whenever the player taps on the screen, the animation cycles through these four images, making it appear as if the player is boosted up by the jetpack. The bottom four images are for the idle animation, played when the player is dropping down due to gravity. In this animation, the character looks as if she is blinking, and the flames from the jetpack are reduced and appear to sway in the wind.
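Before we build these animations properly, here is a rough Cocos2d-x sketch of how frame count and playback speed relate (animFrames and sprite are placeholders here; we will populate a real frame array later in this article):

// At 24 FPS, each frame is shown for 1/24 of a second.
float delay = 1.0f / 24.0f;   // roughly 0.042 seconds per frame

// With only four keyframes, a larger delay (0.25f, say) plays the whole
// cycle in about a second; a smaller delay speeds the animation up.
CCAnimation* animation = CCAnimation::createWithSpriteFrames(animFrames, delay);
sprite->runAction(CCRepeatForever::create(CCAnimate::create(animation)));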
Skeletal animation

Skeletal animation is relatively new and is used in games such as Rayman Origins that have loads and loads of animations. It is a more powerful way of making animations for 2D games, as it gives the developer a lot of flexibility to create animations that are fast to produce and test. With spritesheet animations, if you had to change a single frame of the animation, the whole spritesheet would have to be recreated, causing delay; imagine having to rework 3,000 frames of animation in your game. If each frame was hand painted, producing the individual images would take a lot of time, not to mention the effort and time spent redrawing them. The other problem is device memory. If you are making a game for the PC, this would be fine, but on mobiles, where memory is limited, spritesheet animation is not a viable option unless cuts are made to the design of the game.

So, how does skeletal animation work? In skeletal animation, each item to be animated is stored in a separate spritesheet, along with a data file for the locations of the individual images of each body part and object to be animated, and another data file is generated that positions and rotates the individual items for each frame of the animation. To make this clearer, look at the spritesheet for the same character created with skeletal animation. Here, each part of the body and each object to be animated is a separate image, unlike the method used in spritesheet animation, where the whole character is redrawn for each frame.

TexturePacker

To create a spritesheet animation, you first have to create the individual frames in Photoshop, Illustrator, GIMP, or any other image editing software. I have already done this and have each of the images for the individual frames ready. Next, you need software to create spritesheets from the images. TexturePacker is very popular software used by industry professionals to create spritesheets. You can download it from https://www.codeandweb.com/. These are the same people who made PhysicsEditor, which we used to make shapes for Box2D. You can use the trial version of this software; while downloading, choose the version that is compatible with your operating system. Fortunately, TexturePacker is available for all the major operating systems, including Linux. Refer to the following screenshot to check out the steps to use TexturePacker:

Once you have downloaded TexturePacker, you have three options: you can click to try the full version for a week, you can purchase the license, or you can click on the essential version to use the trial version. In the trial version, some of the professional features are disabled, so I recommend trying the professional features for a week. Once you click an option, you should see the following interface:

TexturePacker has three panels; let's start from the right. The right-hand side panel displays the names of all the images that you select to create the spritesheet.
The center panel is a preview window that shows how the images are packed. The left-hand side panel gives you options to set where the packed texture and data file will be published to and to decide the maximum size of the packed image. The Layout section gives a lot of flexibility in setting up the individual images in TexturePacker, and then there is the advanced section. Let's look at some of the key items on the panel on the left.

The display section

The display section consists of the following options:

- Data Format: As we saw earlier, each export creates a spritesheet that holds a collection of images, plus a data file that keeps track of the positions on the spritesheet. The data format usually changes depending upon the framework or engine. In TexturePacker, you can select the framework that you are using to develop the game, and TexturePacker will create a data file format that is compatible with it. If you look at the drop-down menu, you can see a lot of popular frameworks and engines in the list, such as 2DToolkit, OGRE, Cocos2d, Corona SDK, LibGDX, Moai, Sparrow/Starling, SpriteKit, and Unity. You can also create a regular JSON file if you wish. JavaScript Object Notation (JSON) is similar to an XML file that is used to store and retrieve data; it is a collection of name-value pairs used for data interchange.
- Data file: This is the location where you want the exported file to be placed.
- Texture format: Usually, this is set to .png, but you can select whichever is most convenient. Apart from PNG, you also have PVR, which is used so that people cannot readily view the image, and which also provides image compression.
- Png OPT file: This is used to set the quality of PNG images.
- Image format: This sets the RGB format to be used; usually, you would want to leave this at the default value.
- AutoSD: If you are going to create images for different resolutions, this option allows you to create resources for each of the resolutions you are developing the game for, without the need to go into the graphics software, shrink the images, and pack them again for every resolution.
- Content protection: This protects the image and data file with an encryption key so that people can't steal spritesheets from the game file.

The Geometry section

The Geometry section consists of the following options:

- Max size: You can specify the maximum width and height of the spritesheet depending upon the framework. Usually, all frameworks allow up to 4096 x 4096, but it mostly depends on the device.
- Fixed size: If you want a fixed size, you will go with this option.
- Size constraint: Some frameworks prefer the spritesheets to be in powers of two (POT), for example, 32 x 32, 64 x 64, 256 x 256, and so on. If this is the case, you need to select the size accordingly. For Cocos2d, you can choose any size.
- Scale: This is used to scale the image up or down.

The Layout section

The Layout section consists of the following options:

- Algorithm: This is the algorithm that will be used to make sure that the images you select to create the spritesheet are packed in the most efficient way. If you are using the pro version, choose MaxRects, but if you are using the essential version, you will have to choose Basic.
- Border Padding / Shape Padding: Border padding is the gap between the border of the spritesheet and the images it surrounds. Shape padding is the padding between the individual images of the spritesheet.
If you find that the images overlap when the animation plays in the game, you may want to increase these values to avoid the overlap.

- Trim: This removes the extra transparent pixels surrounding the image, which would unnecessarily increase the size of the spritesheet.

Advanced features

The following are some miscellaneous options in TexturePacker:

- Texture path: This appends the path of the texture file to the beginning of the texture name.
- Clean transparent pixels: This sets the color of transparent pixels to #000.
- Trim sprite names: This removes the extension from the names of the sprites (.png and .jpg), so when calling for the name of a frame, you will not have to use the extension.

Creating a spritesheet for the player

Now that we understand the different items in the TextureSettings panel of TexturePacker, let's create our spritesheet for the player animation from the individual frames provided in the Resources folder. Open up the folder on your system and select all the images for the player, which contain the idle and boost frames; there are four images for each animation. Select all eight images and click-and-drag them to the Sprites panel, the right-most panel of TexturePacker. Once you have all the images on the Sprites panel, the preview panel at the center will show a preview of the spritesheet that will be created:

Now, on the TextureSettings panel, for the Data format option, select cocos2d. Then, in the Data file option, click on the folder icon on the right, select the location where you would like to place the data file, and give it the name player_anim. Once selected, you will see that the Texture file location auto-populates with the same location. The data file will have the .plist format and the texture file will have the .png extension.

The .plist format stores data in a markup language similar to XML. Although it is more common on Mac, you can use this data type independently of the platform you use while developing the game with Cocos2d-x.

Keep the rest of the settings as they are. Save the file by clicking on the save icon at the top, choosing a location where the data and spritesheet files will be saved; this way, you can access them easily the next time you want to make the same modifications to the spritesheet. Now, click on the Publish button and you will see two files, player_anim.plist and player_anim.png, in the location you specified in the Data file and Texture file options. Copy and paste these two files into the Resources folder of the project so that we can use them to create the player states.

Creating and coding the enemy animation

Now, let's create a similar spritesheet and data file for the enemy. All the required files for the enemy frames are provided in the Resources folder. Once you create the spritesheet for the enemy, it should look something like the following screenshot. Don't worry if the images are shown in the wrong sequence; just make sure that the files are numbered correctly from 1 to 4 and that they are in the sequence the animation needs to be played in.
Now, place the enemy_anim.png spritesheet and data file in the Resources folder in the directory and add the following lines of code in the Enemy.cpp file to animate the enemy:

// enemy animation
CCSpriteBatchNode* spritebatch = CCSpriteBatchNode::create("enemy_anim.png");
CCSpriteFrameCache* cache = CCSpriteFrameCache::sharedSpriteFrameCache();
cache->addSpriteFramesWithFile("enemy_anim.plist");

this->createWithSpriteFrameName("enemy_idle_1.png");
this->addChild(spritebatch);

// idle animation
CCArray* animFrames = CCArray::createWithCapacity(4);
char str1[100] = {0};
for (int i = 1; i <= 4; i++)
{
    sprintf(str1, "enemy_idle_%d.png", i);
    CCSpriteFrame* frame = cache->spriteFrameByName(str1);
    animFrames->addObject(frame);
}

CCAnimation* idleanimation = CCAnimation::createWithSpriteFrames(animFrames, 0.25f);
this->runAction(CCRepeatForever::create(CCAnimate::create(idleanimation)));

This is very similar to the code for the player. The only difference is that for the enemy, instead of calling the functions on the hero, we call them on the same class. So, if you build and run the game now, you should see the enemy being animated. The following is a screenshot from the updated code; you can now see the flames from the booster engine of the enemy. Sadly, he doesn't have a boost animation, but his feet swing in the air.
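For comparison, the corresponding player code would follow the same pattern. The following is a rough sketch only: the frame names (player_boost_%d.png) and the hero pointer are assumptions, so substitute the names actually stored in your player_anim.plist:

CCSpriteFrameCache* cache = CCSpriteFrameCache::sharedSpriteFrameCache();
cache->addSpriteFramesWithFile("player_anim.plist");

// boost animation, cycled while the player is being pushed up
CCArray* boostFrames = CCArray::createWithCapacity(4);
char str[100] = {0};
for (int i = 1; i <= 4; i++)
{
    sprintf(str, "player_boost_%d.png", i);
    boostFrames->addObject(cache->spriteFrameByName(str));
}

CCAnimation* boostAnimation = CCAnimation::createWithSpriteFrames(boostFrames, 0.25f);
hero->runAction(CCRepeatForever::create(CCAnimate::create(boostAnimation)));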
Now that we have mastered the spritesheet animation technique, let's see how to create a simple animation using the skeletal animation technique.

Creating the skeletal animation

Using this technique, we will create a very simple player walk cycle. For this, there is a software package called Spine by Esoteric Software, a very widely used professional tool for creating skeletal animations for 2D games. It can be downloaded from http://esotericsoftware.com/spine-purchase:

There are three versions of the software available: the trial, essential, and professional versions. Although the majority of the features of the professional version are available in the essential version, it doesn't have ghosting, meshes, free-form deformation, skinning, and IK pinning, which is in beta. The inclusion of these features does speed up the animation process and takes a lot of manual work off the animator or illustrator. To learn more about these features, visit the website and hover the mouse over them for a better understanding of what they do.

You can follow along by downloading the trial version, available through the Download trial link on the website. Spine is available for all platforms, including Windows, Mac, and Linux, so download it for the OS of your choice. On Mac, after downloading and running the software, you will be asked to install X11; alternatively, you can download and install it from http://xquartz.macosforge.org/landing/. After downloading and installing the plugin, you can open Spine.

Once the software is up and running, you should see the following window:

Now, create a new project by clicking on the spine icon on the top left. As we can see in the screenshot, we are now in the SETUP mode, where we set up the character. On the Tree panel on the right-hand side, in the Hierarchy pane, select the Images folder. After selecting the folder, you will be able to select the path where the individual image files for the player are located. Navigate to the player_skeletal_anim folder, where all the images are present.

Once selected, you will see the panel populate with the images present in the folder, namely the following:

- bookGame_player_Lleg
- bookGame_player_Rleg
- bookGame_player_bazooka
- bookGame_player_body
- bookGame_player_hand
- bookGame_player_head

Now drag-and-drop all the files from the Images folder onto the scene. Don't worry if the images are not in the right order. In the Draw Order dropdown in the Hierarchy panel, you can drag-and-drop the different items to make them draw in the order you want them displayed. Once they are reordered, move the individual images on the screen to their appropriate positions:

You can move the images around by clicking on the translate button at the bottom of the screen; if you hover over the buttons, you can see their names.

We will now start creating the bones that we will use to animate the character. At the bottom of the panel, in the Tools section, click on the Create button. You should now see the cursor change to the bone creation icon. Before you create a bone, you always have to select the bone that will be its parent. In this case, we select the root bone, which is in the center of the character. Click on it and drag downwards while holding the Shift key. Drag down to the end of the blue dress of the character, making sure that the blue dress is highlighted, and release the mouse button. The end point of this bone will be used as the hip joint, from which the leg bones will be created.

Now select the end of the newly created bone and, again holding Shift, click-and-drag downwards to make a bone that goes all the way to the end of the leg. With the leg still highlighted, release the mouse button. To create the bone for the other leg, create a new bone again starting from the end of the first bone at the hip joint, and release the mouse button while the other leg is selected.

Next, we will create a bone for the hand. Select the root node (the node in the middle of the character), hold Shift again, and draw a bone to the hand while the hand is highlighted. Create a bone for the head by again selecting the root node: draw a bone from the root node to the head while holding Shift, and release the mouse button once you are near the ear of the character and the head is highlighted.

You will notice that we never created a bone for the bazooka. For the bazooka, we will make the hand its parent bone, so that when the hand rotates, the bazooka rotates along with it. Click on the bazooka node in the Hierarchy panel (not the image) and drag it onto the hand node in the skeleton list.

You can rotate each of the bones to check whether it is rotating properly. If not, you can move either the bones or the images around by locking one of them in place, so that you can move or rotate the other freely, by clicking either the bones or the images button under the compensate button at the bottom of the screen. The following screenshot shows my setup; you can use it as a guide when creating the bones to get a more satisfying animation.

To animate the character, click on the SETUP button at the top and the layout will change to ANIMATE. You will see that a new timeline has appeared at the bottom. Click on the Animations tab in Hierarchy and rename the animation from "animation" to runCycle by double-clicking on it. We will use the timeline to animate the character.
Click on the Dopesheet icon at the bottom. This will show all the keyframes that we have made for the animation; as we have not created any yet, the dopesheet is empty. To create our first keyframe, we will rotate both leg bones so that they reflect the contact pose of the walk cycle. Now, to set a keyframe, click on the orange-colored key icon next to Rotate in the Transform panel at the bottom of the screen. Click on the translate key as well, as we will also be changing the translation later. Once you click on it, the dopesheet will show the bones you just rotated and the changes you made to them. Here, we rotated the bones, so you will see Rotation under the bones, and as we clicked on the translate key, Translate will show as well.

Now, frame 24 is the same as frame 0. So, to create the keyframe at frame 24, drag the timeline scrubber to frame 24 and click on the rotate and translate keys again. To set the keyframe in the middle, where the contact pose happens with the opposite legs, rotate the legs to where the opposite leg was and click the keys to create a keyframe.

For frames 6 and 18, we will keep the walk cycle very simple. Just raise the character by selecting the root node, move it up in the y direction, and click the orange key next to the translate button in the Transform panel at the bottom. Remember that you have to click it once at frame 6, then move the timeline scrubber to frame 18, move the character up again, and click on the key again to create keyframes for both frames 6 and 18. The dopesheet should now look as follows:

To play the animation in a loop, click on the Repeat Animation button to the right of the Play button, and then on the Play button. You will see the simple walk animation we created for the character.

Next, we will export the data required to create the animation in Cocos2d-x. First, we will export the data for the animation. Click on the Spine button at the top and select Export. The following window should pop up. Select JSON, choose the directory you want to save the file to, and click on Export:

That is not all; we have to create a spritesheet and data file, just as we created them in TexturePacker. There is an inbuilt tool in Spine to create a packed spritesheet. Again, click on the Spine icon, and this time select Texture Packer. For the input directory, select the Images folder from which we imported all the images initially. For the output directory, select the location where the .png and data files should be saved. If you click on the settings button, you will see that it looks very similar to what we saw in TexturePacker; keep the default values as they are. Click on Pack and give the name player. This will create the .png and .atlas files, which are the spritesheet and data file, respectively:

You now have three files instead of the two from TexturePacker: two data files and an image file. If you didn't give the JSON file a name while exporting it, you can rename the file manually to player.json for consistency.

Drag the player.atlas, player.json, and player.png files into the project folder. Finally, we come to the fun part, where we actually use the data files to animate the character. For testing, we will add the animations to the HelloWorldScene.cpp file and check the result. Later, when we add the main menu, we will move the code there so that the animation shows as soon as the game is launched.
Coding the player walk cycle

If you want to test the animations in the current project itself, add the following to the HelloWorldScene.h file first:

#include <spine/spine-cocos2dx.h>

Having included the Spine header file, create a variable named skeletonNode of the CCSkeletonAnimation type:

extension::CCSkeletonAnimation* skeletonNode;

Next, we initialize the skeletonNode variable in the HelloWorldScene.cpp file:

skeletonNode = extension::CCSkeletonAnimation::createWithFile("player.json", "player.atlas", 1.0f);
skeletonNode->addAnimation("runCycle", true, 0, 0);
skeletonNode->setPosition(ccp(visibleSize.width/2, skeletonNode->getContentSize().height/2));
addChild(skeletonNode);

Here, we pass the two data files into the createWithFile() function of CCSkeletonAnimation. Then, we start it with addAnimation, giving it the animation name we used when creating the animation in Spine, which is runCycle. We next set the position of skeletonNode, placing it right above the bottom of the screen, and finally add it to the display list.

Now, if you build and run the project, you will see the player animated forever in a loop at the bottom of the screen:

On the left, we have the animation we created using TexturePacker from CodeAndWeb, and in the middle, we have the animation created using Spine from Esoteric Software. Both techniques have their own advantages, and the choice also depends on the type and scale of the game that you are making; pick the tool that is better tuned to your needs. If you have a small number of animations in your game and good artists, you could use regular spritesheet animations. If you have a lot of animations, or don't have good animators on your team, Spine makes the animation process a lot less cumbersome. Either way, both tools in professional hands can create very good animations that will give life to the characters in the game, and therefore give a lot of character to the game itself.

Summary

This article took a very brief look at animations and how to create an animated character in the game using two of the most popular animation techniques used in games. We also looked at FSMs and at how to create a simple state machine between two states, making the animation change according to the state of the player at that moment.

Resources for Article:

Further resources on this subject:

- Moving the Space Pod Using Touch [Article]
- Sprites [Article]
- Cocos2d-x: Installation [Article]

The physics engine

Packt
04 Sep 2014
9 min read
In this article by Martin Varga, the author of Learning AndEngine, we will look at the physics in AndEngine. (For more resources related to this topic, see here.)

AndEngine uses the Android port of the Box2D physics engine. Box2D is very popular in games, including best-sellers such as Angry Birds, and many game engines and frameworks use it to simulate physics. It is free, open source, and written in C++, and it is available on multiple platforms. AndEngine offers a Java wrapper API for the C++ Box2D backend, so no prior C++ knowledge is required to use it.

Box2D can simulate 2D rigid bodies. A rigid body is a simplification of a solid body with no deformations. Such objects do not exist in reality, but if we limit ourselves to bodies moving much slower than the speed of light, we can treat solid bodies as rigid.

Box2D uses real-world units and works with physics terms. A position in a scene in AndEngine is defined in pixel coordinates, whereas in Box2D it is defined in meters. AndEngine uses a pixel-to-meter conversion ratio; the default value is 32 pixels per meter.

Basic terms

Box2D works with something we call a physics world. There are bodies and forces in the physics world. Every body in the simulation has the following few basic properties:

- Position
- Orientation
- Mass (in kilograms)
- Velocity (in meters per second)
- Angular velocity (in radians per second); the rotational force that changes it is called torque

Forces are applied to bodies, and the following Newton's laws of motion apply:

- The first law, "an object that is not moving or is moving with constant velocity will stay that way until a force is applied to it", can be tweaked a bit
- The second law, "force is equal to mass multiplied by acceleration", is especially important for understanding what will happen when we apply force to different objects
- The third law, "for every action, there is an equal and opposite reaction", is a bit flexible when using different types of bodies

Body types

There are three different body types in Box2D, and each one is used for a different purpose:

- Static body: This doesn't have velocity, and forces do not apply to it. If another body collides with a static body, the static body will not move. Static bodies do not collide with other static and kinematic bodies. They usually represent walls, floors, and other immobile things; in our case, they will represent platforms that don't move.
- Kinematic body: This has velocity, but forces don't apply to it. If a kinematic body is moving and a dynamic body collides with it, the kinematic body will continue in its original direction. Kinematic bodies also do not collide with other static and kinematic bodies. They are useful for creating moving platforms, which is exactly how we are going to use them.
- Dynamic body: A dynamic body has velocity, and forces apply to it. Dynamic bodies are the closest to real-world bodies, and they collide with all types of bodies. We are going to use a dynamic body for our main character.

It is important to understand the consequences of choosing each body type. When we define gravity in Box2D, it will pull all dynamic bodies in the direction of the gravitational acceleration, but static bodies will remain still, and kinematic bodies will either remain still or keep moving in their set direction as if there were no gravity.
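The distinction is easy to see in code. The following is a minimal Java sketch using AndEngine's Box2D extension (the sprite variables are placeholders for your own entities; the classes come from org.andengine.extension.physics.box2d and com.badlogic.gdx):

// The world, with standard Earth gravity pulling dynamic bodies down:
final PhysicsWorld physicsWorld =
        new PhysicsWorld(new Vector2(0, SensorManager.GRAVITY_EARTH), false);

final FixtureDef fixtureDef = PhysicsFactory.createFixtureDef(1.0f, 0.2f, 0.5f);

// A platform that never moves:
PhysicsFactory.createBoxBody(physicsWorld, platformSprite,
        BodyType.StaticBody, fixtureDef);

// A moving platform: it has velocity but ignores gravity and forces:
final Body movingPlatform = PhysicsFactory.createBoxBody(physicsWorld,
        movingPlatformSprite, BodyType.KinematicBody, fixtureDef);
movingPlatform.setLinearVelocity(2.0f, 0);

// The main character, fully simulated and colliding with everything:
PhysicsFactory.createBoxBody(physicsWorld, playerSprite,
        BodyType.DynamicBody, fixtureDef);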
Fixtures

Every body is composed of one or more fixtures. Each fixture has the following four basic properties:

- Shape: In Box2D, fixtures can be circles, rectangles, and polygons
- Density: This determines the mass of the fixture
- Friction: This plays a major role in body interactions
- Elasticity: This is sometimes called restitution and determines how bouncy the object is

There are also special properties of fixtures, such as filters and filter categories, and a single Boolean property called sensor.

Shapes

The positions of fixtures and their shapes in the body determine the overall shape, mass, and center of gravity of the body. The upcoming figure is an example of a body that consists of three fixtures. The fixtures do not need to connect; they are part of one body, which means their positions relative to each other will not change. The red dot represents the body's center of gravity. The green rectangle is a static body and the other three shapes are part of a dynamic body. Gravity pulls the whole body down, but the square will not fall.

Density

Density determines how heavy the fixtures are. Because Box2D is a two-dimensional engine, we can imagine all objects to be one meter deep; in fact, it doesn't matter, as long as we are consistent. There are two bodies, each with a single circle fixture, in the following figure. The left circle is exactly twice as big as the right one, but the right one has double the density of the first one. The triangle is a static body, and the rectangle and the circles are dynamic, creating a simple scale. When the simulation is run, the scales are balanced.

Friction

Friction defines how slippery a surface is. A body can consist of multiple fixtures with different friction values. When two bodies collide, the final friction is calculated at the point of collision based on the colliding fixtures. Friction can be given a value between 0 and 1, where 0 means completely frictionless and 1 means very strong friction.

Let's say we have a slope made of a body with a single fixture that has a friction value of 0.5, as shown in the following figure. The other body consists of a single square fixture. If its friction is 0, the body slides very fast all the way down. If the friction is more than 0, it still slides, but slows down gradually. If the value is more than 0.25, it still slides but does not reach the end. Finally, with friction close to 1, the body will not move at all.

Elasticity

The coefficient of restitution is the ratio between the speeds before and after a collision; for simplicity, we can call this material property elasticity. In the following figure, there are three circles and a rectangle representing a floor with restitution 0, which means not bouncy at all. The circles have restitutions of (from left to right) 1, 0.5, and 0. When this simulation is started, the three balls fall at the same speed and touch the floor at the same time. However, after the first bounce, the first one moves upwards and climbs all the way back to its initial position. The middle one bounces a little and keeps bouncing less and less until it stops. The right one does not bounce at all. The following figure shows the situation after the first bounce:

Sensor

When we need a fixture that detects collisions but is otherwise not affected by them, and doesn't affect other fixtures and bodies, we use a sensor. A goal line in a top-down 2D air hockey game is a good example of a sensor: we want it to detect the disc passing through, but we don't want it to prevent the disc from entering the goal.
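In AndEngine, these fixture properties are bundled in a FixtureDef. The following is a small sketch; the values themselves are arbitrary examples:

// Arguments are density, elasticity (restitution), and friction:
final FixtureDef heavySlippery = PhysicsFactory.createFixtureDef(10.0f, 0.0f, 0.1f);
final FixtureDef lightBouncy = PhysicsFactory.createFixtureDef(0.5f, 0.9f, 0.5f);

// The four-argument overload flags the fixture as a sensor, like the
// goal line described above: it reports contacts but never blocks them.
final FixtureDef goalLine = PhysicsFactory.createFixtureDef(0f, 0f, 0f, true);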
The physics world

The physics world is the whole simulation, including all bodies with their fixtures, gravity, and the other settings that influence the performance and quality of the simulation. Tweaking the physics world settings is important for large simulations with many objects. These settings include the number of steps performed per second and the number of velocity and position iterations per step.

The most important setting is gravity, which is determined by a vector of gravitational acceleration. Gravity in Box2D is simplified, but for the purposes of games, it is usually enough. Box2D works best when simulating a relatively small scene where objects are, at most, a few tens of meters big. To simulate, for example, a planet's (radial) gravity, we would have to implement our own gravitational force and turn the built-in Box2D gravity off.

Forces and impulses

Both forces and impulses are used to make a body move. Gravity is nothing but the constant application of a force. While it is possible to set the position and velocity of a body in Box2D directly, it is not the right way to do it, because it makes the simulation unrealistic. To move a body properly, we need to apply a force or an impulse to it. These two things are almost the same: while forces are added to all the other forces and change the body's velocity over time, impulses change the body's velocity immediately. In fact, an impulse is defined as a force applied over time.

We can imagine a foam ball falling from the sky. When wind starts blowing from the left, the ball slowly changes its trajectory. An impulse is more like a tennis racket that hits the ball in flight and changes its trajectory immediately.

There are two types of forces and impulses: linear and angular. Linear makes the body move left, right, up, and down, and angular makes the body spin around its center. Angular force is called torque.

Linear forces and impulses are applied at a given point, which has different effects depending on the position. The following figure shows a simple body with two fixtures and quite high friction, something like a cardboard box on a carpet. First, we apply force to the center of the large square fixture. When the force is applied, the body simply moves along the ground to the right a little. This is shown in the following figure:

Second, we try to apply force to the upper-right corner of the large box. This is shown in the following figure:

Using the same force at a different point, the body is toppled over to the right side. This is shown in the following figure:
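In code, forces and impulses map to calls on the Body object (Box2D's Java API as exposed by AndEngine; body here is assumed to be a dynamic body created earlier):

// A steady push to the right, like the wind on the foam ball:
body.applyForce(new Vector2(5.0f, 0), body.getWorldCenter());

// An instantaneous hit, like the tennis racket:
body.applyLinearImpulse(new Vector2(0, 8.0f), body.getWorldCenter());

// Torque spins the body around its center:
body.applyTorque(3.0f);

Applying either of the linear calls at a point other than getWorldCenter() produces the toppling effect shown in the figures above.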

Components in Unity

Packt
26 Aug 2014
13 min read
In this article by Simon Jackson, author of Mastering Unity 2D Game Development, we will walk through the new 2D system and other new features, take a deeper look at some of the Unity components, and then dig into animation and its components. (For more resources related to this topic, see here.)

Unity 4.3 improvements

Unity 4.3 was not just about the new 2D system; there are also a host of other improvements and features in this release. The major highlights of Unity 4.3 are covered in the following sections.

Improved Mecanim performance

Mecanim is a powerful tool for both 2D and 3D animations. In Unity 4.3, there have been many improvements and enhancements, including a new game object optimizer that ensures objects are more tightly bound to their skeletal systems and removes unnecessary transform holders, thus making Mecanim animations lighter and smoother. Refer to the following screenshot:

In Unity 4.3, Mecanim also adds greater control over blending animations together, allowing the addition of curves for smooth transitions, and it now includes events that can be hooked into at every step.

The Windows Phone API improvements and Windows 8.1 support

Unity 4.2 introduced Windows Phone and Windows 8 support. Since then, things have been going wild, especially since Microsoft has thrown its support behind the movement and offered free licensing to existing Pro owners. Refer to the following screenshot:

Unity 4.3 builds solidly on the v4 foundations by bringing additional platform support and closing some more gaps between the existing platforms. Some of the advantages are as follows:

- The emulator is now fully supported with Windows Phone (new x86 phone build)
- It has more orientation support, which allows even the splash screens to rotate properly, enabling pixel-perfect display
- It has trial application APIs for both Phone and Windows 8
- It has improved sensor and location support

On top of this, with the recent release of Windows 8.1, Unity 4.3 now also supports Windows 8.1 fully; additionally, Unity 4.5.3 will introduce support for Windows Phone 8.1 and universal projects.

Dynamic Nav Mesh (Pro version only)

If you have only been using the free version of Unity until now, you will not be aware of what a Nav Mesh agent is. Nav Meshes are invisible meshes that are created for your 3D environment at build time to simplify pathfinding and navigation for movable entities. Refer to the following screenshot:

You can, of course, create simplified models for your environment and use them in your scenes; however, every time you change your scene, you need to update your navigation model. Nav Meshes simply remove this overhead. Nav Meshes are crucial, especially in larger environments, where collision and navigation calculations can make the difference between your game running well or not.

Unity 4.3 has improved this by allowing more runtime changes to the dynamic Nav Mesh, allowing you to destroy parts of your scene that alter the walkable parts of your terrain. Nav Mesh calculations are also now multithreaded to give an even better speed boost to your game. There have also been many other under-the-hood fixes and tweaks.

Editor updates

The Unity editor received a host of updates in Unity 4.3 to improve its performance and usability, as you can see in the following demo screenshot. Granted, most of the improvements are behind the scenes.
The improved Unity Editor GUI with huge improvements

The editor refactored a lot of the scripting features on the platform, primarily to reduce the code complexity required for many scripting components, for example by unifying parts of the API into single components. The LookLikeControls and LookLikeInspector options, for instance, have been unified into a single LookLike function, which allows easier creation of editor GUI components. Further simplification of the programmable editor interface is an ongoing task, and a lot of headway is being made in each release. Additionally, the keyboard controls have been tweaked to ensure that navigation works in a uniform way and the sliders/fields work more consistently.

MonoDevelop 4.01

Besides the editor features, one of the biggest enhancements has to be the upgrade of the MonoDevelop editor (http://monodevelop.com/), which Unity supports and ships with. This has been a long-running complaint for most developers, simply because of the brand-new features in the later editions. Refer to the following screenshot:

MonoDevelop isn't made by Unity; it's an open source initiative run by Xamarin, hosted on GitHub (https://github.com/mono/monodevelop), to which all willing developers can contribute and submit fixes. Although the current stable release is 4.2.1, Unity is not fully up to date. Hopefully, this recent upgrade will mean that Unity can keep more in line with future versions of this free tool. Sadly, this doesn't mean that Unity has yet been upgraded from the modified V2 version of the Mono compiler (http://www.mono-project.com/Main_Page) it uses to the current V3 branch, most likely due to the reduced platform support in the later versions of Mono.

Movie textures

Movie textures are not exactly a new feature in Unity, as they have been available for some time on platforms such as Android and iOS. However, in Unity 4.3, they were made available for both the new Windows 8 and Windows Phone platforms. This adds even more functionality to these platforms, which was missing in the initial Unity 4.2 release where they were introduced. Refer to the following screenshot:

With movie textures now added to the platform, other streaming features are also available; for example, webcam (or a built-in camera, in this case) and microphone support were also added.

Understanding components

Components in Unity are the building blocks of any game; almost everything you will use or apply will end up as a component in a GameObject inspector in a scene. Until you build your project, Unity doesn't know which components will be in the final game when your code actually runs (there is some magic applied in the editor). So, these components are not actually attached to your GameObject inspector but rather linked to it.

Accessing components using a shortcut

Now, in the previous Unity example, we added some behind-the-scenes trickery to enable you to reference a component without first discovering it. We did this by adding shortcuts to the MonoBehaviour class that the game object inherits from.
You can access the components with the help of the following code:

this.renderer.collider.attachedRigidbody.angularDrag = 0.2f;

What Unity then does behind the scenes for you is convert the preceding code to the following code:

var renderer = this.GetComponent<Renderer>();
var collider = renderer.GetComponent<Collider>();
var rigidBody = collider.GetComponent<Rigidbody>();
rigidBody.angularDrag = 0.2f;

The preceding code is also the same as executing the following code:

GetComponent<Renderer>().GetComponent<Collider>().GetComponent<Rigidbody>().angularDrag = 0.2f;

Now, while this is functional and working, it isn't very performant or even a best practice, as it creates variables and destroys them each time you use them; it also calls GetComponent for each component every time you access them. Using GetComponent in the Start or Awake methods isn't too bad, as they are only called once when the script is loaded; however, if you do this on every frame in the Update method, or even worse, in FixedUpdate methods, the problem multiplies. That is not to say you can't do it; you just need to be aware of the potential cost of doing so.

A better way to use components – referencing

Now, every programmer knows that they have to worry about garbage and exactly how much memory they should allocate to objects for the entire lifetime of the game. To improve on the preceding shortcut code, we simply need to manually maintain references to the components we want to change or affect on a particular object. So, instead of the preceding code, we could simply use the following code:

Rigidbody myScriptRigidBody;

void Awake()
{
    var renderer = this.GetComponent<Renderer>();
    var collider = renderer.GetComponent<Collider>();
    myScriptRigidBody = collider.GetComponent<Rigidbody>();
}

void Update()
{
    myScriptRigidBody.angularDrag = 0.2f * Time.deltaTime;
}

This way, the Rigidbody object that we want to affect is discovered only once (when the script awakes); then, we can just use the stored reference each time a value needs to be changed, instead of discovering it every time.

An even better way

Now, it has been pointed out (by those who like to test such things) that even the GetComponent call isn't as fast as it should be, because it uses C# generics to determine what type of component you are asking for (it's a two-step process: first, you determine the type, and then get the component). However, there is another overload of the GetComponent function in which, instead of using generics, you just supply the type (therefore removing the need to discover it). To do this, we simply use the following code instead of the preceding GetComponent<>:

myScriptRigidBody = (Rigidbody)GetComponent(typeof(Rigidbody));

The code is slightly longer and arguably only gives you a marginal increase, but if you need to use every bit of processing power, it is worth keeping in mind.

If you are using the "." shortcut to access components, I recommend that you change that practice now. In Unity 5, these shortcuts are being removed. There will, however, be a tool built into the project importer to upgrade any scripts you have that use them. This is not a huge task, just something to be aware of; act now if you can!
Animation components

All of the animation in the new 2D system in Unity uses the new Mecanim system (introduced in Version 4) for design and control, which, once you get used to it, is very simple and easy to use. It is broken up into three main parts: animation controllers, animation clips, and Animator components.

Animation controllers

Animation controllers are simply state machines that are used to control when an animation should be played and how often, including the conditions that control the transitions between each state. In the new 2D system, there must be at least one controller per animation for it to play, and controllers can contain many animations, as you can see here with three states and transition lines between them:

Animation clips

Animation clips are the heart of the animation system and have come very far from their previous implementation in Unity. Previously, clips were used just to hold the crafted animations of 3D models, with a limited ability to tweak them for use on a complete 3D model:

The new animation dope sheet system (shown in the preceding screenshot) is very advanced; in fact, it now tracks almost every change in the inspector for sprites, allowing you to animate just about everything. You can even control which sprite from a spritesheet is used for each frame of the animation.

The preceding screenshot shows a three-frame sprite animation and a modified x position modifier for the middle image, giving a hopping effect to the sprite as it runs. This ability of the dope sheet system means there is less burden on the shoulders of art designers to craft complex animations, as the animation system itself can be used to produce great effects. Sprites don't have to be picked from the same spritesheet to be animated; they can come from individual textures or from any spritesheet you have imported.

The Animator component

To use the new animation prepared in a controller, you need to apply it to a game object in the scene. This is done through the Animator component, as shown here:

The only property we actually care about in 2D is the Controller property. This is where we attach the controller we just created. The other properties apply only to 3D humanoid models, so we can ignore them for 2D. For more information about the complete 3D Mecanim system, refer to the Unity Learn guide at http://unity3d.com/learn/tutorials/modules/beginner/animation.

Animation is just one of the uses of the Mecanim system.
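As a short aside, driving the Animator from a script follows the same referencing pattern we discussed earlier. The following is a minimal C# sketch; the "Run" state name is a hypothetical example and must match a state in your own controller:

using UnityEngine;

public class PlayerAnimation : MonoBehaviour
{
    Animator animator;

    void Awake()
    {
        // Cache the reference once, as recommended earlier.
        animator = GetComponent<Animator>();
    }

    void Update()
    {
        // Jump straight to a named state in the attached controller.
        if (Input.GetKeyDown(KeyCode.Space))
        {
            animator.Play("Run");
        }
    }
}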
Setting up animation controllers

To start creating animations, you first need an animation controller in order to define your animation clips. As stated before, this is just a state machine that controls the execution of animations, even if there is only one animation; in that case, the controller runs the selected animation for as long as it's told to.

If you browse the components that can be added to a game object, you will come across the Animation component, which takes a single animation clip as a parameter. This is the legacy animation system, kept for backward compatibility only. Any new animation clip created and set on this component will not work; it will simply generate a console log item stating The AnimationClip used by the Animation component must be marked as Legacy. So, from Unity 4.3 onwards, just avoid it.

Creating an animation controller is just as easy as creating any other game object. In the Project view, simply right-click on the view and select Create | Animator Controller. Opening the new controller will show you the blank animator controller in the Mecanim state manager window, as shown in the following screenshot:

There is a lot of functionality in the Mecanim state engine, which is largely outside the scope of this article. Check out more dedicated books on this, such as Unity 4 Character Animation with Mecanim, Jamie Dean, Packt Publishing.

If you have any existing clips, you can just drag them to the Mecanim controller's Edit window; alternatively, you can select them in the Project view, right-click on them, and select From selected clip under Create. However, we will cover more of this later in practice.

Once you have a controller, you can add it to any game object in your project by clicking on Add Component in the inspector or by navigating to Component | Create and Miscellaneous | Animator and selecting it. Then, you can select your new controller as the Controller property of the Animator. Alternatively, you can just drag your new controller onto the game object you wish to add it to.

Clips in a controller are bound to the spritesheet texture of the object the controller is attached to. Changing or removing this texture will prevent the animation from being displayed correctly, although it will appear as if it is still running. So, with a controller in place, let's add some animation to it.

Summary

In this article, we did a detailed analysis of the new 2D features added in Unity 4.3. Then we gave an overview of all the main Unity components.

Resources for Article:

Further resources on this subject:

- Parallax scrolling [article]
- What's Your Input? [article]
- Unity 3-0 Enter the Third Dimension [article]

Building a Simple Boat

Packt
25 Aug 2014
15 min read
It's time to get out your hammers, saws, and tape measures, and start building something. In this article by Gordon Fisher, the author of Blender 3D Basics Beginner's Guide Second Edition, you're going to put your knowledge of building objects to practical use, as well as your knowledge of using the 3D View, to build a boat. It's a simple but good-looking and watertight craft with three seats, as shown in the next screenshot. You will learn about the following topics:

- Using box modeling to convert a cube into a boat
- Employing box modeling's power methods, extrusion and subdividing edges
- Joining objects together into a single object
- Adding materials to an object
- Using a texture for greater detail

(For more resources related to this topic, see here.)

Turning a cube into a boat with box modeling

You are going to turn the default Blender cube into an attractive boat, similar to the one shown in the following screenshot. First, you should know a little bit about boats. The front is called the bow, pronounced the same as bowing to the Queen. The rear is called the stern or the aft. The main body of the boat is the hull, and the top of the hull is the gunwale, pronounced "gunnel".

You will be using a technique called box modeling to make the boat. Box modeling is a very standard method of modeling: as you might expect from the name, you start out with a box and sculpt it like a piece of clay to make whatever you want. There are three methods that you will use in most instances of box modeling: extrusion, subdividing edges, and moving, or translating, vertices, edges, and faces.

Using extrusion, the most powerful tool for box modeling

Extrusion is similar to turning dough into noodles by pushing it through a die. While extruding an edge, Blender pushes out the edge and connects it to the old edge with a face. While extruding a face, the face gets pushed out and gets connected to the old edges by new faces.

Time for action – extruding to make the inside of the hull

The first step here is to create an inside for the hull. You will extrude the face without moving it, and shrink it a bit. This will create the basis for the gunwale:

1. Create a new file and zoom into the default cube. Select Wireframe from the Viewport Shading menu on the header. Press the Tab key to go to Edit Mode.
2. Choose Face Selection mode from the header; it is the orange parallelogram. Select the top face with the RMB.
3. Press the E key to extrude the face, then immediately press Enter. Move the mouse away from the cube.
4. Press the S key to scale the face with the mouse. While you are scaling it, press Shift + Ctrl, and scale it to 0.9. Watch the scaling readout in the 3D View header.
5. Press the NumPad 1 key to change to the Front view and press the 5 key on the NumPad to change to the Ortho view.
6. Move the cursor to a place a little above the top of the cube. Press E, and Blender will create a new face and let you move it up or down. Move it down. When you are close to the bottom, press the Ctrl + Shift buttons and move it down until the readout on the 3D View header is 1.9. Click the LMB to release the face. It will look like the following screenshot:

What just happened?

You just created a simple hull for your boat. It's going to look better, but at least you got the thickness of the hull established. Pressing the E key extrudes the face, making a new face and sides that connect the new face with the edges used by the old face. You pressed Enter immediately after the E key the first time so that the new face wouldn't get moved.
Then, you scaled it down a little to establish the thickness of the hull. Next, you extruded the face again. As you watched the readout, did you notice that it said D: -1.900 (1.900) normal? When you extrude a face, Blender is automatically set up to move the face along its normal, so that you can move it in or out, and keep it parallel with the original location. For your reference, the 4909_05_making the hull1.blend file, which has been included in the download pack, has the first extrusion. The 4909_05_making the hull2.blend file has the extrusion moved down. The 4909_05_making the hull3.blend file has the bottom and sides evened out. Using normals in 3D modeling What is a normal? The normal is an unseen ray that is perpendicular to a face. This is illustrated in the following image by the red line: Blender has many uses for the normal: It lets Blender extrude a face and keep the extruded face in the same orientation as the face it was extruded from. It also keeps the sides straight and tells Blender in which direction a face is pointing. Blender can also use the normal to calculate how much light a particular face receives from a given lamp, and in which direction lights are pointed. Modeling tip If you create a 3D model and it seems perfect except that there is an unexplained hole where a face should have been, you may have a normal that faces backwards. To help you, Blender can display the normals for you. Time for action – displaying normals Displaying the normal does not affect the model, but sometimes it can help you in your modeling to see which way your faces are pointing: Press Ctrl + MMB and use the mouse to zoom out so that you can see the whole cube. In the 3D View, press N to get the Properties Panel. Scroll down in the Properties Panel until you get to the Mesh Display subpanel. Go down to where it says Normals. There are two buttons like the edge select and face select buttons in the 3D View header. Click on the button with a cube and an orange rhomboid, outlined in the next screenshot, the Face Select button, to select the normals of the faces. Beside the Face Select button, there is a place where you can adjust the displayed size of the normal, as shown in the following screenshot. The displayed normals are the blue lines. Set Normals Size to 0.8. In the following image, I used the cube as it was just before you made the last extrusion so that it displays the normals a little better. Press the MMB, use the mouse to rotate your view of the cube, and look at the normals. Click on the Face Select button in the Mesh Display subpanel again to turn off the normals display. What just happened? To see the normals, you opened up the Properties Panel and instructed Blender to display them. They are displayed as little blue lines, and you can display them in whatever size works best for you. Normals themselves have no length, just a direction. So, changing this setting does not affect the model. It's there for your use when you need to analyze problems with the appearance of your model. Once you saw them, you turned them off. For your reference, the 4909_05_displaying normals.blend file has been included in the download pack. It has the cube with the first extrusion, and the normal display turned on. Planning what you are going to make It always helps to have an idea in mind of what you want to build. 
You don't have to get out caliper micrometers and measure every last little detail of something you want to model, but you should at least have some pictures as reference, or an idea of the actual dimensions of the object that you are trying to model. There are many ways to get these dimensions, and we are going to use several of these as we build our boats. Choosing which units to model in I went on the Internet and found the dimensions of a small jon boat for fishing. You are not going to copy it exactly, but knowing what size it should be will make the proportions that you choose more convincing. As it happened, it was an American boat, and the size was given in feet and inches. Blender supports three kinds of units for measuring distance: Blender units, Metric units, and Imperial units. Blender units are not tied to any specific measurement in the real world as Metric and Imperial units are. To change the units of measurement, go to the Properties window, to the right of the 3D View window, as shown in the following image, and choose the Scene button. It shows a light, a sphere, and a cylinder. In the following image, it's highlighted in blue. In the second subpanel, the Units subpanel lets you select which units you prefer. However, rather than choosing between Metric or Imperial, I decided to leave the default settings as they were. As the measurements that I found were Imperial measurements, I decided to interpret the Imperial measurements as Blender measurements, equating 1 foot to 1 Blender unit, and each inch as 0.083 Blender units. If I have an Imperial measurement that is expressed in inches, I just divide it by 12 to get the correct number in Blender units. The boat I found on the Internet is 9 feet and 10 inches long, 56 inches wide at the top, 44 inches wide at the bottom, and 18 inches high. I converted them to decimal Blender units or 9.830 long, 4.666 wide at the top, 3.666 wide at the bottom, and 1.500 high. Time for action – making reference objects One of the simplest ways to see what size your boat should be is to have boxes of the proper size to use as guides. So now, you will make some of these boxes: In the 3D View window, press the Tab key to get into Object Mode. Press A to deselect the boat. Press the NumPad 3 key to get the side view. Make sure you are in Ortho view. Press the 5 key on the NumPad if needed. Press Shift + A and choose Mesh and then Cube from the menu. You will use this as a reference block for the size of the boat. In the 3D View window Properties Panel, in the Transform subpanel, at the top, click on the Dimensions button, and change the dimensions for the reference block to 4.666 in the X direction, 9.83 in the Y direction, and 1.5 in the Z direction. You can use the Tab key to go from X to Y to Z, and press Enter when you are done. Move the mouse over the 3D View window, and press Shift + D to duplicate the block. Then press Enter. Press the NumPad 1 key to get the front view. Press G and then Z to move this block down, so its top is in the lower half of the first one. Press S, then X, then the number 0.79, and then Enter. This will scale it to 79 percent along the X axis. Look at the readout. It will show you what is happening. This will represent the width of the boat at the bottom of the hull. Press the MMB and rotate the view to see what it looks like. What just happened? To make accurate models, it helps to have references. 
For this boat that you are building, you don't need to copy another boat exactly, and the basic dimensions are enough. You got out of Edit Mode, and deselected the boat so that you could work on something else, without affecting the boat. Then, you made a cube, and scaled it to the dimensions of the boat, at the top of the hull, to use as a reference block. You then copied the reference block, and scaled the copy down in X for the width of the boat at the bottom of the hull as shown in the following image: Reference objects, like reference blocks and reference spheres, are handy tools. They are easy to make and have a lot of uses. For your reference, the 4909_05_making reference objects.blend file has been included in the download pack. It has the cube and the two reference blocks. Sizing the boat to the reference blocks Now that the reference blocks have been made, you can use them to guide you when making the boat. Time for action – making the boat the proper length Now that you've made the reference blocks the right size, it's time to make the boat the same dimensions as the blocks: Change to the side view by pressing the NumPad 3 key. Press Ctrl + MMB and the mouse to zoom in, until the reference blocks fill almost all of the 3D View. Press Shift + MMB and the mouse to re-center the reference blocks. Select the boat with the RMB. Press the Tab key to go into Edit Mode, and then choose the Vertex Select mode button from the 3D View header. Press A to deselect all vertices. Then, select the boat's vertices on the right-hand side of the 3D View. Press B to use the border select, or press C to use the circle select mode, or press Ctrl + LMB for the lasso select. When the vertices are selected, press G and then Y, and move the vertices to the right with the mouse until they are lined up with the right-hand side of the reference blocks. Press the LMB to drop the vertices in place. Press A to deselect all the vertices, select the boat's vertices on the left-hand side of the 3D View, and move them to the left until they are lined up with the left-hand side of the reference blocks, as shown in the following image: What just happened? You made sure that the screen was properly set up for working by getting into the side view in the Ortho mode. Next, you selected the boat, got into Edit Mode, and got ready to move the vertices. Then, you made the boat the proper length, by moving the vertices so that they lined up with the reference blocks. For your reference, the 4909_05_proper length.blend file has been included in the download pack. It has the bow and stern properly sized. Time for action – making the boat the proper width and height Making the boat the right length was pretty easy. Setting the width and height requires a few more steps, but the method is very similar: Press the NumPad 1 key to change to the front view. Use Ctrl + MMB to zoom into the reference blocks. Use Shift + MMB to re-center the boat so that you can see all of it. Press A to deselect all the vertices, and using any method select all of the vertices on the left of the 3D View. Press G and then X to move the left-side vertices in X, until they line up with the wider reference block, as shown in the following image. Press the LMB to release the vertices. Press A to deselect all the vertices. Select only the right-hand vertices with a method different from the one you used to select the left-hand vertices. Then, press G and then X to move them in X, until they line up with the right side of the wider reference block. 
Press the LMB when they are in place. Deselect all the vertices. Select only the top vertices, and press G and then Z to move them in the Z direction, until they line up with the top of the wider reference block. Deselect all the vertices. Now, select only the bottom vertices, and press G and then Z to move them in the Z direction, until they line up with the bottom of the wider reference block, as shown in the following image: Deselect all the vertices. Next, select only the bottom vertices on the left. Press G and then X to move them in X, until they line up with the narrower reference block. Then, press the LMB. Finally, deselect all the vertices, and select only the bottom vertices on the right. Press G and then X to move them in the X axis, until they line up with the narrower reference block, as shown in the following image. Press the LMB to release them: Press the NumPad 3 key to switch to the Side view again. Use Ctrl + MMB to zoom out if you need to. Press A to deselect all the vertices. Select only the bottom vertices on the right, as in the following illustration. You are going to make this the stern end of the boat. Press G and then Y to move them left in the Y axis just a little bit, so that the stern is not completely straight up and down. Press the LMB to release them. Now, select only the bottom vertices on the left, as highlighted in the following illustration. Make this the bow end of the boat. Move them right in the Y axis just a little bit. Go a bit further than the stern, so that the angle is similar to the right side, as shown here, maybe about 1.3 or 1.4. It's your call. What just happened? You used the reference blocks to guide yourself in moving the vertices into the shape of a boat. You adjusted the width and the height, and angled the hull. Finally, you angled the stern and the bow. It floats, but it's still a bit boxy. For your reference, the 4909_05_proper width and height1.blend file has been included in the download pack. It has both sides aligned with the wider reference block. The 4909_05_proper width and height2.blend file has the bottom vertices aligned to the narrower reference block. The 4909_05_proper width and height3.blend file has the bow and stern finished.
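If you would rather drive these box-modeling steps from a script, the same extrusions can be reproduced in Blender's Python console. The following is a rough sketch of mine, not part of the original exercise; it assumes the default cube and a Blender 2.6x-era bpy API, and it picks the top face by its upward-pointing normal rather than by index, so treat the details as assumptions to verify in your own Blender version:

import bpy

# Work on the default cube.
cube = bpy.data.objects['Cube']
bpy.context.scene.objects.active = cube  # 2.6x/2.7x-style API, assumed

# Clear any existing selection, then flag only the upward-facing polygon.
# Face selection flags are written reliably in Object Mode.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='DESELECT')
bpy.ops.object.mode_set(mode='OBJECT')
for poly in cube.data.polygons:
    poly.select = (poly.normal.z > 0.9)

# Extrude the top face in place and shrink it to 0.9 to start the gunwale.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.extrude_region_move(TRANSFORM_OT_translate={"value": (0, 0, 0)})
bpy.ops.transform.resize(value=(0.9, 0.9, 0.9))

# Extrude again, moving the new face down 1.9 units to form the inside of the hull.
bpy.ops.mesh.extrude_region_move(TRANSFORM_OT_translate={"value": (0, 0, -1.9)})
bpy.ops.object.mode_set(mode='OBJECT')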
What's Your Input?

Packt
20 Aug 2014
7 min read
This article by Venita Pereira, the author of the book Learning Unity 2D Game Development by Example, teaches us all about the various input types and states of a game. We will then go on to learn how to create buttons and the game controls by using code snippets for input detection. "Computers are finite machines; when given the same input, they always produce the same output." – Greg M. Perry, Sams Teach Yourself Beginning Programming in 24 Hours (For more resources related to this topic, see here.) Overview The list of topics that will be covered in this article is as follows: Input versus output Input types Output types Input Manager Input detection Buttons Game controls Input versus output We will be looking at exactly what both input and output in games entail. We will look at their functions, importance, and differentiations. Input in games Input may not seem a very important part of a game at first glance, but in fact it is very important, as input in games involves how the player will interact with the game. All the controls in our game, such as moving, special abilities, and so forth, depend on what controls and game mechanics we would like in our game and the way we would like them to function. Most games have the standard control setup of moving your character. This is to help usability, because if players are already familiar with the controls, then the game is more accessible to a much wider audience. This is particularly noticeable with games of the same genre and platform. For instance, endless runner games usually make use of the tilt mechanic which is made possible by the features of the mobile device. However, there are variations and additions to the pre-existing control mechanics; for example, many other endless runners make use of the simple swipe mechanic, and there are those that make use of both. When designing our games, we can be creative and unique with our controls, thereby innovating a game, but the controls still need to be intuitive for our target players. When first designing our game, we need to know who our target audience of players includes. If we would like our game to be played by young children, for instance, then we need to ensure that they are able to understand, learn, and remember the controls. Otherwise, instead of enjoying the game, they will get frustrated and stop playing it entirely. As an example, a young player may hold a touchscreen device with their fingers over the screen, thereby preventing the input from working correctly depending on whether the game was first designed to take this into account and support this. Different audiences of players interact with a game differently. Likewise, if a player is more familiar with the controls on a specific device, then they may struggle with different controls. It is important to create prototypes to test the input controls of a game thoroughly. Developing a well-designed input system that supports usability and accessibility will make our game more immersive. Output in games Output is the direct opposite of input; it provides the necessary information to the player. However, output is just as essential to a game as input. It provides feedback to the player, letting them know how they are doing. Output lets the player know whether they have done an action correctly or they have done something wrong, how they have performed, and their progression in the form of goals/missions/objectives. Without feedback, a player would feel lost. 
The player would potentially see the game as being unclear, buggy, or even broken. For certain types of games, output forms the heart of the game. The input in a game gets processed by the game to provide some form of output, which then provides feedback to the player, helping them learn from their actions. This is the cycle of the game's input-output system. The following diagram represents the cycle of input and output: Input types There are many different input types that we can utilize in our games. These various input types can form part of the exciting features that our games have to offer. The following image displays the different input types: The most widely used input types in games include the following: Keyboard: Key presses from a keyboard are supported by Unity and can be used as input controls in PC games as well as games on any other device that supports a keyboard. Mouse: Mouse clicks, motion (of the mouse), and coordinates are all inputs that are supported by Unity. Game controller: This is an input device that generally includes buttons (including shoulder and trigger buttons), a directional pad, and analog sticks. The game controller input is supported by Unity. Joystick: A joystick has a stick that pivots on a base that provides movement input in the form of direction and angle. It also has a trigger, throttle, and extra buttons. It is commonly used in flight simulation games to simulate the control device in an aircraft's cockpit and other simulation games that simulate controlling machines, such as trucks and cranes. Modern game controllers make use of a variation of joysticks known as analog sticks and are therefore treated as the same class of input device as joysticks by Unity. Joystick input is supported by Unity. Microphone: This provides audio input commands for a game. Unity supports basic microphone input. For greater fidelity, a third-party audio recognition tool would be required. Camera: This provides visual input for a game using image recognition. Unity has webcam support to access RGB data, and for more advanced features, third-party tools would be required. Touchscreen: This provides multiple touch inputs from the player's finger presses on the device's screen. This is supported by Unity. Accelerometer: This provides the proper acceleration force at which the device is moved and is supported by Unity. Gyroscope: This provides the orientation of the device as input and is supported by Unity. GPS: This provides the geographical location of the device as input and is supported by Unity. Stylus: Stylus input is similar to touchscreen input in that you use a stylus to interact with the screen; however, it provides greater precision. The latest version of Unity supports the Android stylus. Motion controller: This provides the player's motions as input. Unity does not support this, and therefore, third-party tools would be required. Output types The main output types in games are as follows: Visual output Audio output Controller vibration Unity supports all three. Visual output The Head-Up Display (HUD) is the gaming term for the game's Graphical User Interface (GUI) that provides all the essential information as visual output to the player as well as feedback and progress to the player as shown in the following image: HUD, viewed June 22, 2014, http://opengameart.org/content/golden-ui Other visual output includes images, animations, particle effects, and transitions. 
Audio Audio is what can be heard through an audio output, such as a speaker, to provide feedback that supports and emphasizes the visual output and, therefore, increases immersion. The following image displays a speaker: Speaker, viewed June 22, 2014, http://pixabay.com/en/loudspeaker-speakers-sound-music-146583/ Controller vibration Controller vibration provides feedback when, for example, the player collides with an object, or environmental feedback such as an earthquake, adding even more immersion, as in the following image: Having a game that is designed to provide output meaningfully not only makes it clearer and more enjoyable, but can truly bring the world to life, making it engaging for the player.
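To connect the input types above to actual code, here is a small illustrative C# sketch of mine; the class name and log messages are placeholders rather than anything from this article, but the Input calls are the standard Unity ones for the keyboard, mouse, touchscreen, and accelerometer inputs listed earlier:

using UnityEngine;

// Illustrative input polling; attach to any GameObject in the scene.
public class InputProbe : MonoBehaviour
{
    void Update()
    {
        // Keyboard: a single key press.
        if (Input.GetKeyDown(KeyCode.Space))
            Debug.Log("Space pressed");

        // Mouse: button state plus screen coordinates.
        if (Input.GetMouseButtonDown(0))
            Debug.Log("Click at " + Input.mousePosition);

        // Touchscreen: multiple simultaneous touches.
        foreach (Touch touch in Input.touches)
            Debug.Log("Touch " + touch.fingerId + " at " + touch.position);

        // Accelerometer: device movement as a force vector.
        Vector3 tilt = Input.acceleration;
        if (tilt.sqrMagnitude > 0f)
            Debug.Log("Tilt: " + tilt);
    }
}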
Moving the Space Pod Using Touch

Packt
18 Aug 2014
10 min read
This article, written by Frahaan Hussain, Arutosh Gurung, and Gareth Jones, authors of the book Cocos2d-x Game Development Essentials, will cover how to set up touch events within our game. So far, the game has had no user interaction from a gameplay perspective. This article will rectify this by adding touch controls in order to move the space pod and avoid the asteroids. (For more resources related to this topic, see here.) The topics that will be covered in this article are as follows: Implementing touch Single-touch Multi-touch Using touch locations Moving the spaceship when touching the screen There are two main routes to detect touch provided by Cocos2d-x: Single-touch: This method detects a single-touch event at any given time, which is what will be implemented in the game as it is sufficient for most gaming circumstances Multi-touch: This method provides the functionality that detects multiple touches simultaneously; this is great for pinching and zooming; for example, the Angry Birds game uses this technique Though a single-touch will be the approach that the game will incorporate, multi-touch will also be covered in this article so that you are aware of how to use this in future games. The general process for setting up touches The general process of setting up touch events, be it single or multi-touch, is as follows: Declare the touch functions. Declare a listener to listen for touch events. Assign touch functions to appropriate touch events as follows: When the touch has begun When the touch has moved When the touch has ended Implement touch functions. Add appropriate game logic/code for when touch events have occurred. Single-touch events Single-touch events can be detected at any given time, and for many games this is sufficient, as it is for this game. Follow these steps to implement single-touch events into a scene: Declare touch functions in the GameScene.h file as follows:
bool onTouchBegan(cocos2d::Touch *touch, cocos2d::Event *event);
void onTouchMoved(cocos2d::Touch *touch, cocos2d::Event *event);
void onTouchEnded(cocos2d::Touch *touch, cocos2d::Event *event);
void onTouchCancelled(cocos2d::Touch *touch, cocos2d::Event *event);
This is what the GameScene.h file will look like: The previous functions do the following: The onTouchBegan function detects when a single-touch has occurred, and it returns a Boolean value. This should be true if the event is swallowed by the node, and false if it should keep on propagating. The onTouchMoved function detects when the touch moves. The onTouchEnded function detects when the touch event has ended, essentially when the user has lifted up their finger. The onTouchCancelled function detects when a touch event has ended but not by the user; for example, a system alert. The general practice is to call the onTouchEnded method to run the same code, as it can be considered the same event for most games. 
Declare a Boolean variable in the GameScene.h file, which will be true if the screen is being touched and false if it isn't, and also declare a float variable to keep track of the position being touched:
bool isTouching;
float touchPosition;
This is how it will look in the GameScene.h file: Add the following code in the init() method of GameScene.cpp:
auto listener = EventListenerTouchOneByOne::create();
listener->setSwallowTouches(true);
listener->onTouchBegan = CC_CALLBACK_2(GameScreen::onTouchBegan, this);
listener->onTouchMoved = CC_CALLBACK_2(GameScreen::onTouchMoved, this);
listener->onTouchEnded = CC_CALLBACK_2(GameScreen::onTouchEnded, this);
listener->onTouchCancelled = CC_CALLBACK_2(GameScreen::onTouchCancelled, this);
this->getEventDispatcher()->addEventListenerWithSceneGraphPriority(listener, this);
isTouching = false;
touchPosition = 0;
This is how it will look in the GameScene.cpp file: There is quite a lot of new code in the previous code snippet, so let's run through it line by line: The first statement declares and initializes a listener for a single-touch. The second statement swallows the touches, preventing layers underneath the one where the touch occurred from detecting them. The third statement assigns our onTouchBegan method to the onTouchBegan listener. The fourth statement assigns our onTouchMoved method to the onTouchMoved listener. The fifth statement assigns our onTouchEnded method to the onTouchEnded listener. The sixth statement assigns our onTouchCancelled method to the onTouchCancelled listener. The seventh statement adds the touch listener to the event dispatcher so the events can be detected. The eighth statement sets the isTouching variable to false, as the player won't be touching the screen initially when the game starts. The final statement initializes the touchPosition variable to 0. Implement the touch functions inside the GameScene.cpp file:
bool GameScreen::onTouchBegan(cocos2d::Touch *touch, cocos2d::Event *event)
{
    isTouching = true;
    touchPosition = touch->getLocation().x;
    return true;
}

void GameScreen::onTouchMoved(cocos2d::Touch *touch, cocos2d::Event *event)
{
    // not used for this game
}

void GameScreen::onTouchEnded(cocos2d::Touch *touch, cocos2d::Event *event)
{
    isTouching = false;
}

void GameScreen::onTouchCancelled(cocos2d::Touch *touch, cocos2d::Event *event)
{
    onTouchEnded(touch, event);
}
The following is what the GameScene.cpp file will look like: Let's go over the touch functions that have been implemented previously: The onTouchBegan method will set the isTouching variable to true, as the user is now touching the screen, and will store the starting touch position. The onTouchMoved function isn't used in this game but it has been implemented so that you are aware of the steps for implementing it (as an extra task, you can implement touch movement so that the space pod changes direction when the user drags a finger from one side to the other). The onTouchEnded method will set the isTouching variable to false, as the user is no longer touching the screen. The onTouchCancelled method will call the onTouchEnded method, as a touch event has essentially ended. If the game were to be run, the space pod wouldn't move as the movement code hasn't been implemented yet. It will be implemented within the update() method to move left when the user touches in the left half of the screen and move right when the user touches in the right half of the screen. 
Add the following code at the end of the update() method:
// check if the screen is being touched
if (true == isTouching)
{
    // check which half of the screen is being touched
    if (touchPosition < visibleSize.width / 2)
    {
        // move the space pod left
        playerSprite->setPositionX(playerSprite->getPosition().x - (0.50 * visibleSize.width * dt));

        // check to prevent the space pod from going off the screen (left side)
        if (playerSprite->getPosition().x <= 0 + (playerSprite->getContentSize().width / 2))
        {
            playerSprite->setPositionX(playerSprite->getContentSize().width / 2);
        }
    }
    else
    {
        // move the space pod right
        playerSprite->setPositionX(playerSprite->getPosition().x + (0.50 * visibleSize.width * dt));

        // check to prevent the space pod from going off the screen (right side)
        if (playerSprite->getPosition().x >= visibleSize.width - (playerSprite->getContentSize().width / 2))
        {
            playerSprite->setPositionX(visibleSize.width - (playerSprite->getContentSize().width / 2));
        }
    }
}
The following is how this will look after adding the code: The preceding code performs the following steps: Checks whether the screen is being touched. Checks which side of the screen is being touched. Moves the player left or right. Checks whether the player is going off the screen and if so, stops him/her from moving. Repeats the process until the screen is no longer being touched. This section covered how to set up single-touch events and implement them within the game to be able to move the space pod left and right. Multi-touch events Multi-touch is set up in a similar manner, declaring the functions and creating a listener to actively listen out for touch events. Follow these steps to implement multi-touch into a scene: Firstly, the multi-touch feature needs to be enabled in the AppController.mm file, which is located within the ios folder. 
To do so, add the following code line below the viewController.view = eaglView; line:
[eaglView setMultipleTouchEnabled: YES];
The following is what the AppController.mm file will look like: Declare the touch functions within the game scene header file (the functions do the same thing as the single-touch equivalents but enable multiple touches that can be detected simultaneously):
void onTouchesBegan(const std::vector<cocos2d::Touch *> &touches, cocos2d::Event *event);
void onTouchesMoved(const std::vector<cocos2d::Touch *> &touches, cocos2d::Event *event);
void onTouchesEnded(const std::vector<cocos2d::Touch *> &touches, cocos2d::Event *event);
void onTouchesCancelled(const std::vector<cocos2d::Touch *> &touches, cocos2d::Event *event);
The following is what the header file will look like: Add the following code in the init() method of the scene.cpp file to listen for the multi-touch events. It uses the EventListenerTouchAllAtOnce class, which allows multiple touches to be detected at once:
auto listener = EventListenerTouchAllAtOnce::create();
listener->onTouchesBegan = CC_CALLBACK_2(GameScreen::onTouchesBegan, this);
listener->onTouchesMoved = CC_CALLBACK_2(GameScreen::onTouchesMoved, this);
listener->onTouchesEnded = CC_CALLBACK_2(GameScreen::onTouchesEnded, this);
listener->onTouchesCancelled = CC_CALLBACK_2(GameScreen::onTouchesCancelled, this);
this->getEventDispatcher()->addEventListenerWithSceneGraphPriority(listener, this);
The following is how this will look: Implement the following multi-touch functions inside the scene.cpp file:
void GameScreen::onTouchesBegan(const std::vector<cocos2d::Touch *> &touches, cocos2d::Event *event)
{
    CCLOG("Multi-touch BEGAN");
}

void GameScreen::onTouchesMoved(const std::vector<cocos2d::Touch *> &touches, cocos2d::Event *event)
{
    for (int i = 0; i < touches.size(); i++)
    {
        CCLOG("Touch %i: %f", i, touches[i]->getLocation().x);
    }
}

void GameScreen::onTouchesEnded(const std::vector<cocos2d::Touch *> &touches, cocos2d::Event *event)
{
    CCLOG("MULTI TOUCHES HAVE ENDED");
}

void GameScreen::onTouchesCancelled(const std::vector<cocos2d::Touch *> &touches, cocos2d::Event *event)
{
    CCLOG("MULTI TOUCHES HAVE BEEN CANCELLED");
}
The following is how this will look: The multi-touch functions just print out a log, stating that they have occurred, but when touches are moved, their respective x positions are logged. This section covered how to implement the core foundations for multi-touch events so that they can be used for features such as zooming (for example, zooming into a scene in the Clash of Clans game) and panning. Multi-touch wasn't incorporated within the game as it wasn't needed, but this section is a good starting point to implement it in future games. Summary This article covered how to set up touch listeners to detect touch events for single-touch and multi-touch. We incorporated single-touch within the game to be able to move the space pod left or right, depending on which half of the screen was being touched. Multi-touch wasn't used as the game didn't require it, but its implementation was shown so that it can be used for future projects. Resources for Article: Further resources on this subject: Cocos2d: Uses of Box2D Physics Engine [article] Cocos2d-x: Installation [article] Thumping Moles for Fun [article]
Physics with Bullet

Packt
13 Aug 2014
7 min read
In this article by Rickard Eden, author of jMonkeyEngine 3.0 Cookbook, we will learn how to use physics in games using different physics engines. This article contains the following recipes: Creating a pushable door Building a rocket engine Ballistic projectiles and arrows Handling multiple gravity sources Self-balancing using RotationalLimitMotors (For more resources related to this topic, see here.) Using physics in games has become very common and accessible, thanks to open source physics engines, such as Bullet. jMonkeyEngine supports both the Java-based jBullet and native Bullet in a seamless manner. jBullet is a Java-based port of the original C++ Bullet library, while the native version uses JNI bindings to it. jMonkeyEngine is supplied with both of these, and they can be used interchangeably by replacing the libraries in the classpath. No coding change is required. Use jme3-libraries-physics for the implementation of jBullet and jme3-libraries-physics-native for Bullet. In general, Bullet is considered to be faster and more full featured. Physics can be used for almost anything in games, from tin cans that can be kicked around to character animation systems. In this article, we'll try to reflect the diversity of these implementations. Creating a pushable door Doors are useful in games. Visually, it is more appealing to not have holes in the walls but doors for the players to pass through. Doors can be used to obscure the view and hide what's behind them for a surprise later. By extension, they can also be used to dynamically hide geometries and increase performance. There is also a gameplay aspect where doors are used to open new areas to the player and give a sense of progression. In this recipe, we will create a door that can be opened by pushing it, using a HingeJoint class. This door consists of the following three elements: Door object: This is a visible object Attachment: This is the fixed end of the joint around which the hinge swings Hinge: This defines how the door should move Getting ready Simply following the steps in this recipe won't give us anything testable. Since the camera has no physics, the door will just sit there and we will have no way to push it. If you have made any of the recipes that use the BetterCharacterControl class, we will already have a suitable test bed for the door. If not, jMonkeyEngine's TestBetterCharacter example can also be used. How to do it... This recipe consists of two sections. The first will deal with the actual creation of the door and the functionality to open it. This will be made in the following six steps: Create a new RigidBodyControl object called attachment with a small BoxCollisionShape. The CollisionShape should normally be placed inside the wall where the player can't run into it. It should have a mass of 0, to prevent it from being affected by gravity. We move it some distance away and add it to the physicsSpace instance, as shown in the following code snippet: attachment.setPhysicsLocation(new Vector3f(-5f, 1.52f, 0f)); bulletAppState.getPhysicsSpace().add(attachment); Now, create a Geometry class called doorGeometry with a Box shape with dimensions that are suitable for a door, as follows: Geometry doorGeometry = new Geometry("Door", new Box(0.6f, 1.5f, 0.1f)); Similarly, create a RigidBodyControl instance with the same dimensions and a mass of 1; add it as a control to the doorGeometry class first and then add it to the physicsSpace of bulletAppState. 
The following code snippet shows you how to do this: RigidBodyControl doorPhysicsBody = new RigidBodyControl(new BoxCollisionShape(new Vector3f(.6f, 1.5f, .1f)), 1); bulletAppState.getPhysicsSpace().add(doorPhysicsBody); doorGeometry.addControl(doorPhysicsBody); Now, we're going to connect the two with HingeJoint. Create a new HingeJoint instance called joint, as follows: HingeJoint joint = new HingeJoint(attachment, doorPhysicsBody, new Vector3f(0f, 0f, 0f), new Vector3f(-1f, 0f, 0f), Vector3f.UNIT_Y, Vector3f.UNIT_Y); Then, we set the limit for the rotation of the door and add it to physicsSpace as follows: joint.setLimit(-FastMath.HALF_PI - 0.1f, FastMath.HALF_PI + 0.1f); bulletAppState.getPhysicsSpace().add(joint); Now, we have a door that can be opened by walking into it. It is primitive but effective. Normally, you want doors in games to close after a while. However, here, once it is opened, it remains open. In order to implement an automatic closing mechanism, perform the following steps: Create a new class called DoorCloseControl extending AbstractControl. Add a HingeJoint field called joint along with a setter for it and a float variable called timeOpen. In the controlUpdate method, we get hingeAngle from HingeJoint and store it in a float variable called angle, as follows: float angle = joint.getHingeAngle(); If the angle deviates more than a little from zero, we should increase timeOpen using tpf. Otherwise, timeOpen should be reset to 0, as shown in the following code snippet:
if (angle > 0.1f || angle < -0.1f) timeOpen += tpf;
else timeOpen = 0f;
If timeOpen is more than 5, we begin by checking whether the door is still open. If it is, we define a speed that points in the opposite direction of the angle and enable the door's motor with it, making the door swing back toward its closed position, as follows:
if (timeOpen > 5) {
    float speed = angle > 0 ? -0.9f : 0.9f;
    joint.enableMotor(true, speed, 0.1f);
    spatial.getControl(RigidBodyControl.class).activate();
}
If timeOpen is less than 5, we should set the speed of the motor to 0: joint.enableMotor(true, 0, 1); Now, we can create a new DoorCloseControl instance in the main class, attach it to the doorGeometry class, and give it the same joint we used previously in the recipe, as follows: DoorCloseControl doorControl = new DoorCloseControl(); doorControl.setHingeJoint(joint); doorGeometry.addControl(doorControl); How it works... The attachment RigidBodyControl has no mass and will thus not be affected by external forces such as gravity. This means it will stick to its place in the world. The door, however, has mass and would fall to the ground if the attachment didn't hold it up. The HingeJoint class connects the two and defines how they should move in relation to each other. Using Vector3f.UNIT_Y means the rotation will be around the y axis. We set the limit of the joint to be a little more than half PI in each direction. This means it will open almost 100 degrees to either side, allowing the player to step through. When we try this out, there may be some flickering as the camera passes through the door. To get around this, there are some tweaks that can be applied. We can change the collision shape of the player. Making the collision shape bigger will result in the player hitting the wall before the camera gets close enough to clip through. This has to be done considering other constraints in the physics world. You can consider changing the near clip distance of the camera. Decreasing it will allow things to get closer to the camera before they are clipped through. 
This might have implications on the camera's projection. One thing that will not work is making the door thicker, since the triangles on the side closest to the player are the ones that are clipped through. Making the door thicker will move them even closer to the player. In DoorCloseControl, we consider the door to be open if hingeAngle deviates more than a little from 0. We don't use exactly 0 because we can't control the exact rotation of the joint; instead, we use a rotational force to move it. This is what we do with joint.enableMotor. Once the door is open for more than five seconds, we tell it to move in the opposite direction. When it's close to 0, we set the desired movement speed to 0. Simply turning off the motor, in this case, will cause the door to keep moving until it is stopped by an external force. Once we enable the motor, we also need to call activate() on RigidBodyControl or it will not move.
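Putting the fragments from the recipe together, the whole control looks roughly like the following sketch. The logic is exactly what the steps above describe; the imports and the empty controlRender override, which AbstractControl requires, are my additions:

import com.jme3.bullet.control.RigidBodyControl;
import com.jme3.bullet.joints.HingeJoint;
import com.jme3.renderer.RenderManager;
import com.jme3.renderer.ViewPort;
import com.jme3.scene.control.AbstractControl;

public class DoorCloseControl extends AbstractControl {

    private HingeJoint joint;
    private float timeOpen;

    public void setHingeJoint(HingeJoint joint) {
        this.joint = joint;
    }

    @Override
    protected void controlUpdate(float tpf) {
        float angle = joint.getHingeAngle();
        // Track how long the door has deviated from its closed position.
        if (angle > 0.1f || angle < -0.1f) {
            timeOpen += tpf;
        } else {
            timeOpen = 0f;
        }
        if (timeOpen > 5) {
            // Drive the hinge motor back toward 0 and wake the physics body.
            float speed = angle > 0 ? -0.9f : 0.9f;
            joint.enableMotor(true, speed, 0.1f);
            spatial.getControl(RigidBodyControl.class).activate();
        } else {
            joint.enableMotor(true, 0, 1);
        }
    }

    @Override
    protected void controlRender(RenderManager rm, ViewPort vp) {
        // Nothing to render for this control.
    }
}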
Introspecting Maya, Python, and PyMEL

Packt
23 Jul 2014
8 min read
(For more resources related to this topic, see here.) Maya and Python are both excellent and elegant tools that can together achieve amazing results. And while it may be tempting to dive in and start wielding this power, it is prudent to understand some basic things first. In this article, we will look at Python as a language, Maya as a program, and PyMEL as a framework. We will begin by briefly going over how to use the standard Python interpreter, the Maya Python interpreter, the Script Editor in Maya, and your Integrated Development Environment (IDE) or text editor in which you will do the majority of your development. Our goal for the article is to build a small library that can easily link us to documentation about Python and PyMEL objects. Building this library will illuminate how Maya, Python and PyMEL are designed, and demonstrate why PyMEL is superior to maya.cmds. We will use the powerful technique of type introspection to teach us more about Maya's node-based design than any Hypergraph or static documentation can. Creating your library There are generally three different modes you will be developing in while programming Python in Maya: using the mayapy interpreter to evaluate short bits of code and explore ideas, using your Integrated Development Environment to work on the bulk of the code, and using Maya's Script Editor to help iterate and test your work. In this section, we'll start learning how to use all three tools to create a very simple library. Using the interpreter The first thing we must do is find your mayapy interpreter. It should be next to your Maya executable, named mayapy or mayapy.exe. It is a Python interpreter that can run Python code as if it were being run in a normal Maya session. When you launch it, it will start up the interpreter in interactive mode, which means you enter commands and it gives you results, interactively. The >>> and ... characters in code blocks indicate something you should enter at the interactive prompt; the code listing in the article and your prompt should look basically the same. In later listings, long output lines will be elided with ... to save on space. Start a mayapy process by double clicking or calling it from the command line, and enter the following code: >>> print 'Hello, Maya!' Hello, Maya! >>> def hello(): ... return 'Hello, Maya!' ... >>> hello() 'Hello, Maya!' The first statement prints a string, which shows up under the prompting line. The second statement is a multiline function definition. The ... indicates the line is part of the preceding line. The blank line following the ... indicates the end of the function. For brevity, we will leave out empty ... lines in other code listings. After we define our hello function, we invoke it. It returns the string "Hello, Maya!", which is printed out beneath the invocation. Finding a place for our library Now, we need to find a place to put our library file. In order for Python to load the file as a module, it needs to be on some path where Python can find it. We can see all available paths by looking at the path list on the sys module. >>> import sys >>> for p in sys.path: ... 
print p
C:\Program Files\Autodesk\Maya2013\bin\python26.zip
C:\Program Files\Autodesk\Maya2013\Python\DLLs
C:\Program Files\Autodesk\Maya2013\Python\lib
C:\Program Files\Autodesk\Maya2013\Python\lib\plat-win
C:\Program Files\Autodesk\Maya2013\Python\lib\lib-tk
C:\Program Files\Autodesk\Maya2013\bin
C:\Program Files\Autodesk\Maya2013\Python
C:\Program Files\Autodesk\Maya2013\Python\lib\site-packages
A number of paths will print out; I've replicated what's on my Windows system, but yours will almost definitely be different. Unfortunately, the default paths don't give us a place to put custom code. They are application installation directories, which we should not modify. Instead, we should be doing our coding outside of all the application installation directories. In fact, it's a good practice to avoid editing anything in the application installation directories entirely. Choosing a development root Let's decide where we will do our coding. To be concise, I'll choose C:\mayapybook\pylib to house all of our Python code, but it can be anywhere. You'll need to choose something appropriate if you are on OSX or Linux; we will use ~/mayapybook/pylib as our path on these systems, but I'll refer only to the Windows path except where more clarity is needed. Create the development root folder, and inside of it create an empty file named minspect.py. Now, we need to get C:\mayapybook\pylib onto Python's sys.path so it can be imported. The easiest way to do this is to use the PYTHONPATH environment variable. From a Windows command line you can run the following to add the path, and ensure it worked:
> set PYTHONPATH=%PYTHONPATH%;C:\mayapybook\pylib
> mayapy.exe
>>> import sys
>>> 'C:\mayapybook\pylib' in sys.path
True
>>> import minspect
>>> minspect
<module 'minspect' from '...minspect.py'>
The following are the equivalent commands on OSX or Linux:
$ export PYTHONPATH=$PYTHONPATH:~/mayapybook/pylib
$ mayapy
>>> import sys
>>> '~/mayapybook/pylib' in sys.path
True
>>> import minspect
>>> minspect
<module 'minspect' from '.../minspect.py'>
There are actually a number of ways to get your development root onto Maya's path. The option presented here (using environment variables before starting Maya or mayapy) is just one of the more straightforward choices, and it works for mayapy as well as normal Maya. Calling sys.path.append('C:\mayapybook\pylib') inside your userSetup.py file, for example, would work for Maya but not mayapy (you would need to use maya.standalone.initialize to register user paths, as we will do later). Using set or export to set environment variables only works for the current process and any new children. If you want it to work for unrelated processes, you may need to modify your global or user environment. Each OS is different, so you should refer to your operating system's documentation or a Google search. Some possibilities are setx from the Windows command line, editing /etc/environment in Linux, or editing /etc/launchd.conf on OS X. If you are in a studio environment and don't want to make changes to people's machines, you should consider an alternative such as using a script to launch Maya which will set up the PYTHONPATH, instead of launching the maya executable directly. Creating a function in your IDE Now it is time to use our IDE to do some programming. We'll start by turning the path printing code we wrote at the interactive prompt into a function in our file. 
Open C:\mayapybook\pylib\minspect.py in your IDE and type the following code:
import sys

def syspath():
    print 'sys.path:'
    for p in sys.path:
        print ' ' + p
Save the file, and bring up your mayapy interpreter. If you've closed down the one from the last session, make sure C:\mayapybook\pylib (or whatever you are using as your development root) is present on your sys.path or the following code will not work! See the preceding section for making sure your development root is on your sys.path.
>>> import minspect
>>> reload(minspect)
<module 'minspect' from '...minspect.py'>
>>> minspect.syspath()
sys.path:
 C:\Program Files\Autodesk\Maya2013\bin\python26.zip
 C:\Program Files\Autodesk\Maya2013\Python\DLLs
 C:\Program Files\Autodesk\Maya2013\Python\lib
 C:\Program Files\Autodesk\Maya2013\Python\lib\plat-win
 C:\Program Files\Autodesk\Maya2013\Python\lib\lib-tk
 C:\Program Files\Autodesk\Maya2013\bin
 C:\Program Files\Autodesk\Maya2013\Python
 C:\Program Files\Autodesk\Maya2013\Python\lib\site-packages
First, we import the minspect module. It may already be imported if this was an old mayapy session. That is fine, as importing an already-imported module is fast in Python and causes no side effects. We then use the reload function, which we will explore in the next section, to make sure the most up-to-date code is loaded. Finally, we call the syspath function, and its output is printed. Your actual paths will likely vary. Reloading code changes It is very common as you develop that you'll make changes to some code and want to immediately try out the changed code without restarting Maya or mayapy. You can do that with Python's built-in reload function. The reload function takes a module object and reloads it from disk so that the new code will be used. When we jump between our IDE and the interactive interpreter (or the Maya application) as we did earlier, we will usually reload the code to see the effect of our changes. I will usually write out the import and reload lines, but occasionally will only mention them in text preceding the code. Keep in mind that reload is not a magic bullet. When you are dealing with simple data and functions as we are here, it is usually fine. But as you start building class hierarchies, decorators, and other things that have dependencies or state, the situation can quickly get out of control. Always test your code in a fresh version of Maya before declaring it done to be sure it does not have some lingering defect hidden by reloading. Though once you are a master Pythonista you can ignore these warnings and figure out how to reload just about anything!
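As a small taste of the type introspection this library is heading toward, here is a hedged sketch of mine, not the book's final code, of a helper you could add to minspect.py; the function name is my own choice, and it relies only on Python's standard inspect module:

import inspect

def info(obj):
    """Print basic introspection data about any object."""
    typ = type(obj)
    print 'Type:   ' + typ.__name__
    module = inspect.getmodule(typ)
    if module is not None:
        print 'Module: ' + module.__name__
    try:
        print 'File:   ' + inspect.getfile(typ)
    except TypeError:
        # Built-in types are not backed by a source file.
        print 'File:   <built-in>'

Calling minspect.info(1) at the mayapy prompt, for instance, would report the int type from the __builtin__ module, while calling it on a PyMEL node would point you at the PyMEL source file that defines it.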
Customizing skin with GUISkin

Packt
22 Jul 2014
8 min read
(For more resources related to this topic, see here.) Prepare for lift off We will begin by creating a new project in Unity. Let's start our project by performing the following steps: First, create a new project and name it MenuInRPG. Click on the Create new Project button, as shown in the following screenshot: Next, import the assets package by going to Assets | Import Package | Custom Package...; choose Chapter2Package.unityPackage, which we just downloaded, and then click on the Import button in the pop-up window, as shown in the following screenshot: Wait until it's done, and you will see the MenuInRPGGame and SimplePlatform folders in the Window view. Next, click on the arrow in front of the SimplePlatform folder to bring up the drop-down options and you will see the Scenes folder and the SimplePlatform_C# and SimplePlatform_JS scenes, as shown in the following screenshot: Next, double-click on the SimplePlatform_C# (for a C# user) and SimplePlatform_JS (for a Unity JavaScript user) scenes, as shown in the preceding screenshot, to open the scene that we will work on in this project. When you double-click on either of the SimplePlatform scenes, Unity will display a pop-up window asking whether you want to save the current scene or not. As we want to use the SimplePlatform scene, just click on the Don't Save button to open up the SimplePlatform scene, as shown in the following screenshot: Then, go to the MenuInRPGGame/Resources/UI folder and click on the first file to make sure that the Texture Type and Format fields are selected correctly, as shown in the following screenshot: Why do we set it up in this way? This is because we want the UI graphics to look as close to the source images as possible. However, we set the Format field to Truecolor, which will make the size of the image larger than with Compressed, but will show the right colors of the UI graphics. Next, we need to set up the Layers and Tags configurations; for this, go to Edit | Project Settings | Tags and set them as follows:
Tags
Element 0: UI
Element 1: Key
Element 2: RestartButton
Element 3: Floor
Element 4: Wall
Element 5: Background
Element 6: Door
Layers
User Layer: Background
User Layer: Level
User Layer: UI
Lastly, we will save this scene in the MenuInRPGGame/Scenes folder, and name it MenuInRPG by going to File | Save Scene as... and then save it. Engage thrusters Now we are ready to create a GUI skin; for this, perform the following steps: Let's create a new GUISkin object by going to Assets | Create | GUISkin, and we will see New GUISkin in our Project window. Name the GUISkin object MenuSkin. Then, click on MenuSkin and go to its Inspector window. We will see something similar to the following screenshot: You will see many properties here, but don't be afraid, because this is the main key to creating custom graphics for our UI. Font is the base font for the GUI skin. From Box to Scroll View, each property is a GUIStyle, which is used for creating our custom UI. The Custom Styles property is an array of GUIStyles that we can set up to apply extra styles. Settings holds the setup for the entire GUI. Next, we will set up the new font style for our menu UI; go to the Font line in the Inspector view, click the circle icon, and select the Federation Kalin font. Now, you have set up the base font for GUISkin. Next, click on the arrow in front of the Box line to bring up a drop-down list. 
We will see all the properties, as shown in the following screenshot: For more information and to learn more about these properties, visit http://unity3d.com/support/documentation/Components/class-GUISkin.html. Name is basically the name of this style, which by default is box (the default style of GUI.Box). Next, we will assign our custom UI graphics to this GUISkin; click on the arrow in front of Normal to bring up the drop-down list, and you will see two parameters, Background and Text Color. Click on the circle icon on the right-hand side of the Background line to bring up the Select Texture2D window and choose the boxNormal texture, or you can drag the boxNormal texture from the MenuInRPGGame/Resources/UI folder and drop it into the Background space. We can also use the search bar in the Select Texture2D window or the Project view to find our texture by typing boxNormal in the search bar, as shown in the following screenshot: Then, under the Text Color line, we leave the color as the default color, because we don't need any text to be shown in this style, and repeat the previous step with On Normal by using the boxNormal texture. Next, click on the arrow in front of Active; under Background, choose the boxActive texture, and repeat this step for On Active. Then, go to each property in the Box style and set the following parameters:
Border: Left: 14, Right: 14, Top: 14, Bottom: 14
Padding: Left: 6, Right: 6, Top: 6, Bottom: 6
For other properties of this style, we will leave the defaults. Next, we go to the following properties in the MenuSkin inspector and set them as follows:
Label
Normal | Text Color: R: 27, G: 95, B: 104, A: 255
Window
Normal | Background: myWindow
On Normal | Background: myWindow
Border: Left: 27, Right: 27, Top: 55, Bottom: 96
Padding: Left: 30, Right: 30, Top: 60, Bottom: 30
Horizontal Scrollbar
Normal | Background: horScrollBar
Border: Left: 4, Right: 4, Top: 4, Bottom: 4
Horizontal Scrollbar Thumb
Normal | Background: horScrollBarThumbNormal
Hover | Background: horScrollBarThumbHover
Border: Left: 4, Right: 4, Top: 4, Bottom: 4
Horizontal Scrollbar Left Button
Normal | Background: arrowLNormal
Hover | Background: arrowLHover
Fixed Width: 14
Fixed Height: 15
Horizontal Scrollbar Right Button
Normal | Background: arrowRNormal
Hover | Background: arrowRHover
Fixed Width: 14
Fixed Height: 15
Vertical Scrollbar
Normal | Background: verScrollBar
Border: Left: 4, Right: 4, Top: 4, Bottom: 4
Padding: Left: 0, Right: 0, Top: 0, Bottom: 0
Vertical Scrollbar Thumb
Normal | Background: verScrollBarThumbNormal
Hover | Background: verScrollBarThumbHover
Border: Left: 4, Right: 4, Top: 4, Bottom: 4
Vertical Scrollbar Up Button
Normal | Background: arrowUNormal
Hover | Background: arrowUHover
Fixed Width: 16
Fixed Height: 14
Vertical Scrollbar Down Button
Normal | Background: arrowDNormal
Hover | Background: arrowDHover
Fixed Width: 16
Fixed Height: 14
We have finished setting up the default styles. Now we will go to the Custom Styles property and create our custom GUIStyles to use for this menu; go to Custom Styles and under Size, change the value to 6. Then, we will see Element 0 to Element 5. Next, we go to the first element, or Element 0; under Name, type Tab Button, and we will see Element 0 change to Tab Button. 
Set it as follows:
Tab Button (or Element 0)
Name: Tab Button
Normal: Background: tabButtonNormal; Text Color: R: 27, G: 62, B: 67, A: 255
Hover: Background: tabButtonHover; Text Color: R: 211, G: 166, B: 9, A: 255
Active: Background: tabButtonActive; Text Color: R: 27, G: 62, B: 67, A: 255
On Normal: Background: tabButtonActive; Text Color: R: 27, G: 62, B: 67, A: 255
Border: Left: 12, Right: 12, Top: 12, Bottom: 4
Padding: Left: 6, Right: 6, Top: 6, Bottom: 4
Font Size: 14
Alignment: Middle Center
Fixed Height: 31
The settings are shown in the following screenshot: For the Text Color value, we can also use the eyedropper tool next to the color box to copy the same color, as we can see in the following screenshot: We have finished our first style, but we still have five styles left, so let's carry on with Element 1 with the following settings:
Exit Button (or Element 1)
Name: Exit Button
Normal | Background: buttonCloseNormal
Hover | Background: buttonCloseHover
Fixed Width: 26
Fixed Height: 22
The settings for Exit Button are shown in the following screenshot: The following settings will create the style for Element 2:
Text Item (or Element 2)
Name: Text Item
Normal | Text Color: R: 27, G: 95, B: 104, A: 255
Alignment: Middle Left
Word Wrap: Check
The settings for Text Item are shown in the following screenshot: To set up the style for Element 3, apply the following settings:
Text Amount (or Element 3)
Name: Text Amount
Normal | Text Color: R: 27, G: 95, B: 104, A: 255
Alignment: Middle Right
Word Wrap: Check
The settings for Text Amount are shown in the following screenshot: The following settings create Selected Item:
Selected Item (or Element 4)
Name: Selected Item
Normal | Text Color: R: 27, G: 95, B: 104, A: 255
Hover: Background: itemSelectHover; Text Color: R: 27, G: 95, B: 104, A: 255
Active: Background: itemSelectHover; Text Color: R: 27, G: 95, B: 104, A: 255
On Normal: Background: itemSelectActive; Text Color: R: 27, G: 95, B: 104, A: 255
Border: Left: 6, Right: 6, Top: 6, Bottom: 6
Margin: Left: 2, Right: 2, Top: 2, Bottom: 2
Padding: Left: 4, Right: 4, Top: 4, Bottom: 4
Alignment: Middle Center
Word Wrap: Check
The settings are shown in the following screenshot: To create the Disabled Click style, apply the following settings:
Disabled Click (or Element 5)
Name: Disabled Click
Normal: Background: itemSelectNormal; Text Color: R: 27, G: 95, B: 104, A: 255
Border: Left: 6, Right: 6, Top: 6, Bottom: 6
Margin: Left: 2, Right: 2, Top: 2, Bottom: 2
Padding: Left: 4, Right: 4, Top: 4, Bottom: 4
Alignment: Middle Center
Word Wrap: Check
The settings for Disabled Click are shown in the following screenshot: And now, we have finished this step.
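With the skin finished, a quick way to verify it in Play mode is to assign it inside an OnGUI call. The following C# sketch is an illustration of mine rather than a step from this project; the class name and rectangle sizes are placeholders, but GUI.skin and GetStyle are the standard Unity calls for applying a GUISkin and fetching a custom style by the name we typed:

using UnityEngine;

// Drag the MenuSkin asset onto the menuSkin field in the Inspector.
public class MenuSkinTest : MonoBehaviour
{
    public GUISkin menuSkin;

    void OnGUI()
    {
        // Every default control drawn after this line uses MenuSkin,
        // so this box is rendered with our boxNormal/boxActive textures.
        GUI.skin = menuSkin;
        GUI.Box(new Rect(10, 10, 200, 100), "");

        // Custom styles are fetched by name, such as the "Tab Button" style.
        GUIStyle tabStyle = menuSkin.GetStyle("Tab Button");
        GUI.Button(new Rect(20, 40, 120, 31), "Items", tabStyle);
    }
}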
HTML5 Game Development – A Ball-shooting Machine with Physics Engine

Packt
07 Jul 2014
12 min read
(For more resources related to this topic, see here.)

Mission briefing

In this article, we focus on the physics engine. We will build a basketball court where the player needs to shoot the ball into the hoop. The player shoots the ball by pressing and holding the mouse button and then releasing it; the direction is visualized by an arrow, and the power is proportional to the duration of the press-and-hold. There are obstacles present between the ball and the hoop. The player either avoids the obstacles or makes use of them to put the ball into the hoop. Finally, we use CreateJS to visualize the physics world on the canvas.

You may visit http://makzan.net/html5-games/ball-shooting-machine/ to play a dummy game in order to have a better understanding of what we will be building throughout this article. The following screenshot shows a player shooting the ball towards the hoop, with a power indicator:

Why is it awesome?

When we build games without a physics engine, we create our own game loop and reposition each game object in every frame. For instance, if we move a character to the right, we manage the position and movement speed ourselves. Imagine that we are coding the ball-throwing logic now. We need to keep track of several variables: we have to calculate the x and y velocity based on the time and force applied, and we need to take gravity into account, not to mention the different angles and materials we need to consider while calculating the bounce between two objects.

Now, let's think in terms of a physics world: we just define how the objects interact, and all the collisions happen automatically. It is similar to a real-world game, where we focus on defining the rules and the world handles everything else. Take basketball as an example. We define the height of the hoop, the size of the ball, and the distance of the three-point line; then, the players just need to throw the ball. We never worry about the flying parabola or the bouncing on the board, because physical space takes care of them by using the laws of physics.

This is exactly what happens in the simulated physics world; it allows us to apply physics properties to game objects. The objects are affected by gravity, and we can apply forces to them, making them collide with each other. With the help of the physics engine, we can focus on defining the gameplay rules and the relationships between the objects. Without the need to worry about collision and movement, we save time that we can spend exploring different kinds of gameplay, and we can then elaborate on and develop the most promising prototypes further.

In our game, we define the position of the hoop and the ball. Then, we apply an impulse force to the ball in the x and y dimensions; the engine handles everything in between. Finally, we get an event trigger if the ball passes through the hoop. It is worth noting that some blockbuster games are also made with a physics engine, including games such as Angry Birds, Cut the Rope, and Where's My Water.

Your Hotshot objectives

We will divide the article into the following eight tasks:

Creating the simulated physics world
Shooting a ball
Handling collision detection
Defining levels
Launching a bar with power
Adding a cross obstacle
Visualizing graphics
Choosing a level

Mission checklist

We create a project folder that contains the index.html file and the scripts and styles folders. Inside the scripts folder, we create three files: physics.js, view.js, and game.js. The physics.js file is the most important file in this article.
It contains all the logic related to the physics world, including creating level objects, spawning dynamic balls, applying force to the objects, and handling collisions. The view.js file is a helper for the view logic, including the scoreboard and the ball-shooting indicator. The game.js file, as usual, is the entry point of the game; it also manages the levels and coordinates between the physics world and the view.

Preparing the vendor files

We also need a vendors folder that holds the third-party libraries. This includes the CreateJS suite—EaselJS, MovieClip, TweenJS, PreloadJS—and Box2D. Box2D is the physics engine that we are going to use in this article. We need to download the engine code from https://code.google.com/p/box2dweb/; it is a version ported from ActionScript to JavaScript. We need the Box2dWeb-2.1.a.3.min.js file, or its nonminified version for debugging, and we put this file in the vendors folder.

Box2D is an open source physics-simulation engine that was created by Erin Catto. It was originally written in C++. Later, it was ported to ActionScript because of the popularity of Flash games, and then it was ported to JavaScript. There are different versions of these ports; the one we are using is called Box2DWeb, which was ported from the ActionScript version of Box2D 2.1. Using an old version may cause issues, and it will also be difficult to find help online because most developers have switched to 2.1.

Creating a simulated physics world

Our first task is to create a simulated physics world and put two objects inside it.

Prepare for lift off

In the index.html file, the core part is the game section. We have two canvas elements in this game: the debug-canvas element is for the Box2D engine and canvas is for the CreateJS library:

    <section id="game" class="row">
      <canvas id="debug-canvas" width="480" height="360"></canvas>
      <canvas id="canvas" width="480" height="360"></canvas>
    </section>

We prepare a dedicated file for all the physics-related logic. We prepare the physics.js file with the following code:

    ;(function(game, cjs, b2d){
      // code here later
    }).call(this, game, createjs, Box2D);

Engage thrusters

The following steps create the physics world as the foundation of the game.

The Box2D classes are put in different modules, and we will need to reference some common classes as we go along. We use the following code to create aliases for these Box2D classes:

    // alias
    var b2Vec2 = Box2D.Common.Math.b2Vec2
      , b2AABB = Box2D.Collision.b2AABB
      , b2BodyDef = Box2D.Dynamics.b2BodyDef
      , b2Body = Box2D.Dynamics.b2Body
      , b2FixtureDef = Box2D.Dynamics.b2FixtureDef
      , b2Fixture = Box2D.Dynamics.b2Fixture
      , b2World = Box2D.Dynamics.b2World
      , b2MassData = Box2D.Collision.Shapes.b2MassData
      , b2PolygonShape = Box2D.Collision.Shapes.b2PolygonShape
      , b2CircleShape = Box2D.Collision.Shapes.b2CircleShape
      , b2DebugDraw = Box2D.Dynamics.b2DebugDraw
      , b2MouseJointDef = Box2D.Dynamics.Joints.b2MouseJointDef
      , b2RevoluteJointDef = Box2D.Dynamics.Joints.b2RevoluteJointDef
      ;

We prepare a variable that states how many pixels define 1 meter in the physics world. We also define a Boolean to determine if we need to draw the debug draw:

    var pxPerMeter = 30; // 30 pixels = 1 meter. Box2D uses meters and we use pixels.
    var shouldDrawDebug = false;

All the physics methods will be put into the game.physics object.
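Since every pixel coordinate has to be divided by pxPerMeter on the way into Box2D, and multiplied by it on the way out, a pair of tiny conversion helpers can keep the later code readable. This is our own optional addition, not part of the article's source files:

    // Hypothetical helpers built around the pxPerMeter constant above:
    function toMeters(px) { return px / pxPerMeter; }
    function toPixels(m)  { return m * pxPerMeter; }

    // For example, placing a body at the screen position (200, 100):
    // bodyDef.position.x = toMeters(200);
    // bodyDef.position.y = toMeters(100);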
We create this literal object before we code our logic:

    var physics = game.physics = {};

The first method in the physics object creates the world:

    physics.createWorld = function() {
      var gravity = new b2Vec2(0, 9.8);
      this.world = new b2World(gravity, /*allow sleep= */ true);

      // create two temporary bodies
      var bodyDef = new b2BodyDef;
      var fixDef = new b2FixtureDef;

      bodyDef.type = b2Body.b2_staticBody;
      bodyDef.position.x = 100/pxPerMeter;
      bodyDef.position.y = 100/pxPerMeter;
      fixDef.shape = new b2PolygonShape();
      fixDef.shape.SetAsBox(20/pxPerMeter, 20/pxPerMeter);
      this.world.CreateBody(bodyDef).CreateFixture(fixDef);

      bodyDef.type = b2Body.b2_dynamicBody;
      bodyDef.position.x = 200/pxPerMeter;
      bodyDef.position.y = 100/pxPerMeter;
      this.world.CreateBody(bodyDef).CreateFixture(fixDef);
      // end of temporary code
    }

The update method is the game loop's tick event for the physics engine. It calculates the world step and refreshes the debug draw. The world step advances the physics world; we'll discuss it later:

    physics.update = function() {
      this.world.Step(1/60, 10, 10);
      if (shouldDrawDebug) {
        this.world.DrawDebugData();
      }
      this.world.ClearForces();
    };

Before we can refresh the debug draw, we need to set it up. We pass a canvas reference to the Box2D debug draw instance and configure the drawing settings:

    physics.showDebugDraw = function() {
      shouldDrawDebug = true;

      // set up debug draw
      var debugDraw = new b2DebugDraw();
      debugDraw.SetSprite(document.getElementById("debug-canvas").getContext("2d"));
      debugDraw.SetDrawScale(pxPerMeter);
      debugDraw.SetFillAlpha(0.3);
      debugDraw.SetLineThickness(1.0);
      debugDraw.SetFlags(b2DebugDraw.e_shapeBit | b2DebugDraw.e_jointBit);
      this.world.SetDebugDraw(debugDraw);
    };

Let's move to the game.js file. We define the game-starting logic that sets up the EaselJS stage and Ticker, creates the world, and sets up the debug draw. The tick method calls the physics.update method:

    ;(function(game, cjs){
      game.start = function() {
        // allow the game object to listen for and dispatch custom events
        cjs.EventDispatcher.initialize(game);

        game.canvas = document.getElementById('canvas');
        game.stage = new cjs.Stage(game.canvas);

        cjs.Ticker.setFPS(60);
        // adding game.stage to the ticker makes stage.update get called automatically
        cjs.Ticker.addEventListener('tick', game.stage);
        cjs.Ticker.addEventListener('tick', game.tick); // game loop

        game.physics.createWorld();
        game.physics.showDebugDraw();
      };

      game.tick = function(){
        if (cjs.Ticker.getPaused()) {
          return; // run only when not paused
        }
        game.physics.update();
      };

      game.start();
    }).call(this, game, createjs);

After these steps, we should have a result as shown in the following screenshot: a physics world with two bodies, where one body stays in position and the other falls to the bottom.

Objective complete – mini debriefing

We have defined our first physics world with one static object and one dynamic object that falls to the bottom. A static object is an object that is not affected by gravity or any other force. A dynamic object, on the other hand, is affected by all the forces.
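Later tasks will spawn the game's ball as just such a dynamic body. As a preview, here is a minimal sketch of what a reusable spawner could look like, using the aliases and the pxPerMeter convention from earlier; the method name spawnBall and the density and restitution values are our assumptions, not the book's code:

    // A sketch of a reusable ball spawner:
    physics.spawnBall = function(x, y, radius) {
      var bodyDef = new b2BodyDef;
      bodyDef.type = b2Body.b2_dynamicBody;   // dynamic: affected by gravity and forces
      bodyDef.position.x = x / pxPerMeter;
      bodyDef.position.y = y / pxPerMeter;

      var fixDef = new b2FixtureDef;
      fixDef.shape = new b2CircleShape(radius / pxPerMeter);
      fixDef.density = 1.0;                   // mass comes from density times area
      fixDef.restitution = 0.6;               // how much the ball bounces on impact

      var body = this.world.CreateBody(bodyDef);
      body.CreateFixture(fixDef);
      return body;
    };

A call such as physics.spawnBall(240, 50, 12) would then drop a ball with a 12-pixel radius from near the top of the canvas.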
Defining gravity

In reality, we have gravity on every planet, and it's the same in the Box2D world: we need to define gravity for the world. As this is a ball-shooting game, we follow the rules of gravity on Earth and use 0 for the x-axis and 9.8 for the y-axis. It is worth noting that we do not have to use the 9.8 value. For instance, we can set a smaller gravity value to simulate other planets in space—maybe even the moon—or we can set the gravity to zero to create a top-down-view ice hockey game, where we apply force to the puck and benefit from the collisions.

Debug draw

The physics engine focuses purely on the mathematical calculation. It doesn't care about how the world will finally be presented, but it does provide a visual method to make debugging easier. This debug draw is very useful before we use our own graphics to represent the world; we won't use the debug draw in production. It is up to us how we want to visualize the physics world. We have learned two ways to visualize the game: by using DOM objects and by using the canvas drawing method. We will visualize the world with our own graphics in later tasks.

Understanding body definition and fixture definition

In order to define objects in the physics world, we need two definitions: a body definition and a fixture definition. The body is in charge of the physical properties, such as its position in the world, taking and applying force, moving speed, and the angular speed when rotating. We use fixtures to handle the shape of the object. The fixture definition also defines how the object interacts with others while colliding, through properties such as friction and restitution.

Defining shapes

Shapes are defined in a fixture. The two most common shapes in Box2D are the rectangle and the circle. We define a rectangle with the SetAsBox function by providing half of its width and height, and a circle shape is defined by its radius. It is worth noting that the position of a body is at the center of its shape; this is different from EaselJS, where the default origin point is the top-left corner.

Pixels per meter

When we define the dimension and location of a body, we use the meter as the unit. That's because Box2D uses the metric system in its calculations to make the physics behavior realistic, while we usually calculate in pixels on the screen. We therefore need to convert between pixels on the screen and meters in the physics world, and that's why we need the pxPerMeter variable. The value of this variable may change from project to project.

The update method

In the game tick, we update the physics world. The first thing we need to do is take the world to the next step. Box2D calculates objects based on steps. It is the same as in the physical world when time passes: if a ball is falling, then at any fixed point in time the ball is static, carrying its falling velocity as a property; a millisecond, or nanosecond, later, the ball has fallen to a new position. This is exactly how steps work in the Box2D world. In every single step, the objects are static with their physics properties; when we go a step further, Box2D takes those properties into consideration and applies them to the objects.

The Step function takes three arguments. The first argument is the time passed since the last step; normally, it follows the frames-per-second value that we set for the game. The second and third arguments are the iterations for velocity and position. This is the maximum number of iterations Box2D tries when resolving a collision; usually, we set them to a low value.

The reason we clear the forces is that a force would otherwise be applied indefinitely: the object would keep receiving the force on each frame until we cleared it. Normally, clearing forces on every frame makes the objects more manageable.
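The mission briefing mentioned that shooting applies an impulse to the ball in the x and y dimensions. Unlike a force, an impulse changes the body's velocity immediately, so there is nothing to clear afterwards. The following is a sketch of how that could look with Box2DWeb's b2Body API; the method name shootBall and its power and angle parameters are our own illustration:

    // A sketch of shooting the ball with an impulse:
    physics.shootBall = function(ballBody, power, angle) {
      var impulse = new b2Vec2(Math.cos(angle) * power,
                               Math.sin(angle) * power);
      // ApplyImpulse takes the impulse vector and the world point
      // at which to apply it; using the body's center avoids spin:
      ballBody.ApplyImpulse(impulse, ballBody.GetWorldCenter());
    };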
Classified intel

We often need to represent a 2D vector in the physics world; Box2D uses b2Vec2 for this purpose. As with b2Vec2, we use quite a lot of Box2D functions and classes, which are modularized into namespaces, so we alias the most common classes to keep our code short.
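As a quick illustration of the most common of these classes, here is how a b2Vec2 behaves in Box2DWeb (the values are arbitrary):

    var v = new b2Vec2(5, -2); // 5 m/s to the right, 2 m/s up (the y-axis points down)
    v.Multiply(2);             // scales the vector in place; v is now (10, -4)
    var speed = v.Length();    // magnitude of the vector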


Sparrow iOS Game Framework - The Basics of Our Game

Packt
25 Jun 2014
10 min read
(For more resources related to this topic, see here.)

Taking care of cross-device compatibility

When developing an iOS game, we need to know which devices to target. Besides the obvious technical differences between all of the iOS devices, there are two factors we need to actively take care of: screen size and the texture size limit. Let's take a closer look at how to deal with each.

Understanding the texture size limit

Every graphics card has a limit on the maximum texture size it can display. If a texture is bigger than the texture size limit, it can't be loaded and will appear black on the screen. A texture size limit is a square with power-of-two dimensions, such as 1024 x 1024 pixels or 2048 x 2048 pixels. Textures we load don't need to have power-of-two dimensions—in fact, a texture does not have to be a square—but it is a best practice for a texture to have power-of-two dimensions. This limit holds for big images as well as for a bunch of small images packed into one big image; the latter is commonly referred to as a sprite sheet. Take a look at the following sample sprite sheet to see how it's structured:

How to deal with different screen sizes

While the screen size is always measured in pixels, the iOS coordinate system is measured in points. The screen size of an iPhone 3GS is 320 x 480 pixels and also 320 x 480 points. On an iPhone 4, the screen size is 640 x 960 pixels, but it is still 320 x 480 points, so in this case each point represents four pixels: two in width and two in height. A 100-point-wide rectangle will be 200 pixels wide on an iPhone 4 and 100 pixels wide on an iPhone 3GS. It works similarly for devices with large display screens, such as the iPhone 5, where the height is 568 points instead of 480.

Scaling the viewport

Let's explain the term viewport first: the viewport is the visible portion of the complete screen area. We need to be clear about which devices we want our game to run on. We take the biggest resolution that we want to support and scale it down to smaller resolutions. This is the easiest option, but it might not lead to the best results, as touch areas and the user interface scale down as well. Apple recommends that touch areas be at least a 40-point square, so depending on the user interface, some elements might get scaled down so much that they become harder to touch. Take a look at the following screenshot, where we choose the iPad Retina resolution (2048 x 1536 pixels) as our biggest resolution and scale down all display objects on the screen for the iPad resolution (1024 x 768 pixels):

Scaling is a popular option for non-iOS environments, especially for PC and Mac games that support resolutions from 1024 x 600 pixels to full HD. Sparrow and the iOS SDK provide some mechanisms that facilitate handling Retina and non-Retina iPad devices without the need to scale the whole viewport.

Black borders

Some games in the past were designed for a 4:3 resolution display and then made to run on a widescreen device that had more screen space. The options were either to scale the 4:3 content to widescreen, which distorts the whole screen, or to put black borders on either side of the screen to maintain the original scale factor. Showing black borders is now considered bad practice, especially when there are so many games out there that scale quite well across different screen sizes and platforms.
Showing non-interactive screen space

If our pirate game were multiplayer, we might have one player on an iPad and another on an iPhone 5. The player with the iPad has a bigger screen and more screen space to maneuver their ship. The worst case would be if the iPad player could move their ship outside the visual range of the iPhone player, which would give the iPad player a serious advantage. Luckily for us, we don't require competitive multiplayer functionality. Still, we need to keep a consistent screen space for players to move their ship in for game-balance purposes; we wouldn't want to tie the difficulty level to the device someone is playing on.

Let's compare the previous screenshot to the black border example. Instead of the ugly black borders, we just show more of the background. In some cases, it's also possible to move some user interface elements to the areas which are not visible on other devices. However, we need to consider whether we want to keep the same user experience across devices and whether moving these elements will result in a disadvantage for users who don't have this extra screen space on their devices.

Rearranging screen elements

Rearranging screen elements is probably the most time-intensive and sophisticated way of solving this issue. In this example, we have a big user interface at the top of the screen in portrait mode. If we were to leave it like this in landscape mode, the top of the screen would be just the user interface, leaving very little room for the game itself. In this case, we have to be deliberate about what kind of elements we need to see on the screen and which elements are using up too much screen estate. Screen real estate (or screen estate) is the amount of space available on a display for an application or a game to provide output. We then have to reposition them, cut them up into smaller pieces, or both. The most prominent example of this technique is Candy Crush (a popular trending game) by King. While this concept applies particularly to device rotation, that does not mean it can't be used for universal applications.

Choosing the best option

None of these options are mutually exclusive. For our purposes, we are going to show non-interactive screen space, and if things get complicated, we might also resort to rearranging screen elements depending on our needs.

Differences between various devices

Let's take a look at the differences in screen size and texture size limit between the different iOS devices:

    iPhone 3GS: screen 480 x 320 pixels; texture size limit 2048 x 2048 pixels
    iPhone 4 (including iPhone 4S) and iPod Touch 4th generation: screen 960 x 640 pixels; texture size limit 2048 x 2048 pixels
    iPhone 5 (including iPhone 5C and iPhone 5S) and iPod Touch 5th generation: screen 1136 x 640 pixels; texture size limit 2048 x 2048 pixels
    iPad 2: screen 1024 x 768 pixels; texture size limit 2048 x 2048 pixels
    iPad (3rd and 4th generations) and iPad Air: screen 2048 x 1536 pixels; texture size limit 4096 x 4096 pixels
    iPad Mini: screen 1024 x 768 pixels; texture size limit 4096 x 4096 pixels

Utilizing the iOS SDK

Both the iOS SDK and Sparrow can aid us in creating a universal application. Universal application is the term for apps that target more than one device, especially for an app that targets the iPhone and iPad device family. The iOS SDK provides a handy mechanism for loading files for specific devices. Let's say we are developing an iPhone application and we have an image called my_amazing_image.png. If we load this image on our devices, it will get loaded—no questions asked.
However, if it's not a universal application, we can only scale the application using the regular scale button, which appears on the bottom-right of the screen on iPad and iPhone Retina devices. If we want to target iPad, we have two options:

The first option is to load the image as is. The device will scale the image, and depending on the image quality, the scaled image may look bad. In this case, we also need to consider that the device's CPU will do all the scaling work, which might result in some slowdown depending on the app's complexity.

The second option is to add an extra image for iPad devices. This one uses the ~ipad suffix, for example, my_amazing_image~ipad.png. When loading the required image, we still use the filename my_amazing_image.png; the iOS SDK will automatically detect the different sizes of the image supplied and use the correct one for the device.

Beginning with Xcode 5 and iOS 7, it is possible to use asset catalogs. Asset catalogs can contain a variety of images grouped into image sets, and image sets contain all the images for the targeted devices. Asset catalogs don't require files with suffixes any more, but they can only be used for splash images and application icons; we can't use asset catalogs for textures we load with Sparrow.

The following list shows which suffix is needed for which device:

    iPhone 3GS: non-Retina; no suffix
    iPhone 4 (including iPhone 4S) and iPod Touch (4th generation): Retina; @2x or @2x~iphone
    iPhone 5 (including iPhone 5C and iPhone 5S) and iPod Touch (5th generation): Retina; -568h@2x
    iPad 2: non-Retina; ~ipad
    iPad (3rd and 4th generations) and iPad Air: Retina; @2x~ipad
    iPad Mini: non-Retina; ~ipad

How does this affect the graphics we wish to display? Take a 128 x 128 pixel non-Retina image as an example: the Retina image, the one with the @2x suffix, will be exactly double the size of the non-Retina image, that is, 256 pixels in width and 256 pixels in height.

Retina and iPad support in Sparrow

Sparrow supports all the filename suffixes shown in the previous list, and there is a special case for iPad devices, which we will take a closer look at now. When we take a look at AppDelegate.m in our game's source, note the following line:

    [_viewController startWithRoot:[Game class] supportHighResolutions:YES doubleOnPad:YES];

The first parameter, supportHighResolutions, tells the application to load Retina images (with the @2x suffix) if they are available. The doubleOnPad parameter is the interesting one: if this is set to true, Sparrow will use the @2x images for iPad devices, so we don't need to create a separate set of images for iPad but can use the Retina iPhone images for the iPad application. In this case, the width and height are 512 and 384 points respectively. If we are targeting iPad Retina devices, Sparrow introduces the @4x suffix, which requires larger images and leaves the coordinate system at 512 x 384 points.

App icons and splash images

If we are talking about images of different sizes for the actual game content, app icons and splash images are also required in different sizes. Splash images (also referred to as launch images) are the images that show up while the application loads. The iOS naming scheme applies to these images as well, so for Retina iPhone devices such as iPhone 4, we will name the image Default@2x.png, and for iPhone 5 devices, we will name the image Default-568h@2x.png.
For the correct sizes of app icons, take a look at the following list:

    iPhone 3GS: non-Retina; 57 x 57 pixels
    iPhone 4 (including iPhone 4S) and iPod Touch 4th generation: Retina; 120 x 120 pixels
    iPhone 5 (including iPhone 5C and iPhone 5S) and iPod Touch 5th generation: Retina; 120 x 120 pixels
    iPad 2: non-Retina; 76 x 76 pixels
    iPad (3rd and 4th generations) and iPad Air: Retina; 152 x 152 pixels
    iPad Mini: non-Retina; 76 x 76 pixels

The bottom line

The more devices we want to support, the more graphics we need, which directly increases the application's file size. Adding iPad support to our application is not a simple task, but Sparrow does some of the groundwork. One thing we should keep in mind, though: if we are only targeting iOS 7.0 and higher, we don't need to include non-Retina iPhone images any more. Using @2x and @4x will be enough in this case, as support for non-Retina devices will soon end.

Summary

This article deals with setting up our game to work on iPhone, iPod Touch, and iPad in the same manner.

Resources for Article:

Further resources on this subject:
Mobile Game Design [article]
Bootstrap 3.0 is Mobile First [article]
New iPad Features in iOS 6 [article]