
How-To Tutorials - Game Development

368 Articles

Illuminating a Scene

Packt
24 Oct 2013
3 min read
Working with lights

Before explaining lights themselves, we need to learn how to create and manipulate them. Once you know how to handle them, you will be ready to learn the who-is-who of the lighting stage. Let's start with the basics.

Adding a light

Lights are handled in modo just like regular items. You can move, rotate, and scale them, and of course tweak their properties. By default, a newly created scene already contains a default light. You can use it, change its type, or add as many as you need. To add a new light, go to the Item List tab and click on the Add Item button. In the drop-down menu, go to Lights and choose the type of light you want. Alternatively, use the top menu: navigate to Item | Create Light and choose the type you want.

Setting the type of a light

You can always change the type of a light you have just created (or of an existing light). In the Item List tab, right-click the light you want to change, click on Change Type in the menu, and choose the type you want for your light.

Placing lights

As said previously, lights are like all other regular items, so you can move, rotate, and scale them as needed. You have two ways of placing a light:

Direct manipulation: Working in item mode, click on the light in any of the viewports (or directly in the Item List tab) and use the corresponding tools (W to move, R to scale, or Y to rotate).

Subjective manipulation: A more interesting and practical way to move a light is to switch the viewport to light view mode. Once you change it, your view is literally from inside the light, so the direction you are facing is the direction of the light. In this view, use your standard viewport controls to orient the light.

Enabling/disabling lights

There are occasions when you need to turn off a light, or a number of them. The first thought would be to set the light's intensity to zero, but there is a more practical way to temporarily disable it. If you look at the Item List, you will see a column on the left of the panel showing a little eye icon. That column shows the visibility state of each item: the eye means the item is visible. Click on the icon to disable the light (or any item, in fact), and click on it again to re-enable it. Of course, you can perform all the other basic operations on lights, as with other kinds of items, such as grouping them in a single folder, and so on.


Setting Up Slick2D

Packt
23 Oct 2013
4 min read
What is Slick2D?

Slick2D is a multi-platform library for two-dimensional game development that sits on top of LWJGL (the Light-Weight Java Game Library). Slick2D simplifies the processes of game development, such as the game loop, rendering, updating, frame setup, and state-based game creation. It also offers some features that LWJGL does not, such as particle emitters and integration with Tiled (a map editor). Developers of all skill levels can enjoy Slick2D, as it offers a degree of simplicity that you can't find in most libraries. This simplicity makes it a great library not only for programmers but also for artists, who may not have the technical knowledge to create games with other libraries.

Downloading the Slick2D and LWJGL files

The Slick2D and LWJGL jar files, plus the LWJGL native files, are needed to create a Slick2D game project. The only system requirement for Slick2D is a Java JDK. To get the files, we perform the following steps:

Obtaining the LWJGL files: Navigate to http://www.lwjgl.org/download.php and download the most recent stable build. The .zip file will include both the LWJGL jar file and the native files. (This .zip file will be referred to as lwjgl.zip.)

Obtaining the Slick2D files: Due to hosting issues, the Slick2D files are being hosted by a community member at http://slick.ninjacave.com. Click on Download. If this site is not available, follow the alternative instructions below.

Alternative method of obtaining the Slick2D files: Navigate to https://bitbucket.org/kevglass/slick and download the source. Then build the Ant script located at slick/trunk/Slick/build.xml, either in Eclipse or from the command line using $ ant.

Setting up an Eclipse project

In this article we will use the Eclipse IDE, which can be found at http://www.eclipse.org/; you may, however, use other options. Perform the following steps to set up a Slick2D project:

1. Navigate to File | New | Java Project.
2. Name your project and click on Finish.
3. Create a new folder in your project and name it lib. Add two subfolders named jars and native.
4. Place both lwjgl.jar and slick.jar in the jars subfolder inside the Eclipse project.
5. Take all the native files from lwjgl.zip and place them in the native subfolder. Copy the contents of the subfolders inside native from lwjgl.zip, not the subfolders themselves.
6. Right-click on the project, then click on Properties.
7. Click on Java Build Path and navigate to the Libraries tab. Add both jars from the project.
8. Select and expand lwjgl.jar in the Libraries tab, click on Native library location: (None), then click on Edit and search the workspace for the native folder.

Native files

The native files included in lwjgl.zip are platform-specific libraries that allow developers to make one game that works on all of the different platforms.

What if I want my game to be platform-specific?

There is no real benefit to being platform-specific with Slick2D, and in this tutorial we set the game up as a multi-platform game. In step 5 we took the contents of each operating system's folder and put them into our native folder. If you instead want your game to be platform-specific, copy the entire folder for your operating system rather than its contents, and when defining the natives for LWJGL (step 8), simply point to the folder for the operating system of your choice.

Summary

In this article we covered the essentials of creating a project with Slick2D: downloading the necessary library files, setting up a project (platform-specific or multi-platform), and the native files.

Resources for Article: Further resources on this subject: HTML5 Games Development: Using Local Storage to Store Game Data [Article] Adding Sound, Music, and Video in 3D Game Development with Microsoft Silverlight 3: Part 2 [Article] Adding Finesse to Your Game [Article]


Basic Concepts

Packt
23 Oct 2013
12 min read
Scene and actors

You must have heard the quote from William Shakespeare: "All the world's a stage, and all the men and women merely players: they have their exits and their entrances; and one man in his time plays many parts, his acts being seven ages." As per my interpretation, he wanted to say that this world is like a stage, and human beings are like players or actors who perform their roles on it. Every actor may have his own discrete personality and influence, but there is only one stage, with a finite area, predefined props, and lighting conditions. In the same way, a world in PhysX is known as a scene, and the players performing their roles are known as actors. A scene defines the properties of the world in which a simulation takes place, and its characteristics are shared by all of the actors created in it. A good example of a scene property is gravity, which affects all of the actors being simulated in a scene, although different actors can have different properties independent of the scene. An instance of a scene can be created using the PxScene class.

An actor is an object that can be simulated in a PhysX scene. It can have properties such as shape, material, transform, and so on. An actor can be further classified as static or dynamic. If it is static, think of it as a prop or stationary object on a stage that is always in a fixed position, immovable by the simulation; if it is dynamic, think of it as a human or any other movable object on the stage whose position can be updated by the simulation. Dynamic actors can have properties such as mass, momentum, velocity, or any other rigid-body-related property. An instance of a static actor can be created by calling the PxPhysics::createRigidStatic() function; similarly, an instance of a dynamic actor can be created by calling the PxPhysics::createRigidDynamic() function. Both functions require a single parameter of type PxTransform, which defines the position and orientation of the created actor.

Materials

In PhysX, a material is the property of a physical object that defines the friction and restitution of an actor, and is used to resolve collisions with other objects. To create a material, call PxPhysics::createMaterial(), which requires three arguments of type PxReal; these represent static friction, dynamic friction, and restitution, respectively. A typical example of creating a PhysX material is as follows:

PxMaterial* mMaterial = gPhysicsSDK->createMaterial(0.5, 0.5, 0.5);

Static friction represents the friction exerted on a rigid body when it is at rest, and its value can vary from 0 to infinity. Dynamic friction, on the other hand, applies to a rigid body only when it is moving, and its value should always be between 0 and 1. Restitution defines the bounciness of a rigid body; its value should always be between 0 and 1, and the closer it is to 1, the bouncier the body. All of these values can be tweaked to make an object behave as bouncy as a ping-pong ball or as slippery as ice when it interacts with other objects.

Shapes

When we create an actor in PhysX, some other properties, such as its shape and material, need to be defined and passed as function parameters. A shape in PhysX is a collision geometry that defines the collision boundaries of an actor; an actor can have more than one shape to define its collision boundary. Shapes can be created by calling PxRigidActor::createShape(), which needs at least one parameter each of type PxGeometry and PxMaterial, respectively. A typical example of creating a shape for an actor is as follows:

PxMaterial* mMaterial = gPhysicsSDK->createMaterial(0.5, 0.5, 0.5);
PxRigidDynamic* sphere = gPhysicsSDK->createRigidDynamic(spherePos);
sphere->createShape(PxSphereGeometry(0.5f), *mMaterial);

An actor of type PxRigidStatic, which represents static actors, can have shapes such as a sphere, capsule, box, convex mesh, triangle mesh, plane, or height field. The shapes permitted for an actor of type PxRigidDynamic, which represents dynamic actors, depend on whether the actor is flagged as kinematic. If it is flagged as kinematic, it can have all of the shapes of a PxRigidStatic actor; otherwise it can have a sphere, capsule, box, or convex mesh, but not a triangle mesh, plane, or height field.

Creating the first PhysX 3 program

Now we have enough understanding to create our first PhysX program. In this program we initialize the PhysX SDK, create a scene, and then add two actors. The first actor is a static plane that acts as the ground; the second is a dynamic cube positioned a few units above the plane. Once the simulation starts, the cube should fall onto the plane under the effect of gravity. Because this is our first PhysX code, to keep it simple we will not draw any actor on the screen; we will just print the position of the falling cube to the console until it comes to rest.

We start our code by including the required header files. PxPhysicsAPI.h is the main header file for PhysX and includes the entire PhysX API; later on, you may want to selectively include only the header files you need, which helps reduce the application size. We also load the three most frequently used precompiled PhysX libraries for both the Debug and Release configurations of the VC++ 2010 Express compiler. In addition to the std namespace, which is part of standard C++, we also need the physx namespace:

#include <iostream>
#include <PxPhysicsAPI.h> //PhysX main header file

//-------Loading PhysX libraries----------
#ifdef _DEBUG
#pragma comment(lib, "PhysX3DEBUG_x86.lib")
#pragma comment(lib, "PhysX3CommonDEBUG_x86.lib")
#pragma comment(lib, "PhysX3ExtensionsDEBUG.lib")
#else
#pragma comment(lib, "PhysX3_x86.lib")
#pragma comment(lib, "PhysX3Common_x86.lib")
#pragma comment(lib, "PhysX3Extensions.lib")
#endif

using namespace std;
using namespace physx;

Initializing PhysX

To initialize the PhysX SDK, we first need to create an object of type PxFoundation by calling the PxCreateFoundation() function. This requires three parameters: the version ID, an allocator callback, and an error callback. The first parameter prevents a mismatch between the headers and the corresponding SDK DLL(s). The allocator and error callbacks are application-specific, but the SDK provides default implementations, which we use in our program. The foundation object is needed to initialize the higher-level SDKs. The code snippet for creating the foundation is as follows:

static PxDefaultErrorCallback gDefaultErrorCallback;
static PxDefaultAllocator gDefaultAllocatorCallback;
static PxFoundation* gFoundation = NULL;

//Creating foundation for PhysX
gFoundation = PxCreateFoundation(PX_PHYSICS_VERSION, gDefaultAllocatorCallback, gDefaultErrorCallback);

After creating the foundation, we create an instance of the PhysX SDK by calling the PxCreatePhysics() function. This requires three parameters: the version ID, a reference to the PxFoundation object we created earlier, and a PxTolerancesScale. The PxTolerancesScale parameter makes it easier to author content at different scales and still have PhysX work as expected; to get started, we simply pass a default object of this type. We make sure that the PhysX device was created correctly by comparing the result with NULL; if the object is not NULL, the device was created successfully:

static PxPhysics* gPhysicsSDK = NULL;

//Creating instance of PhysX SDK
gPhysicsSDK = PxCreatePhysics(PX_PHYSICS_VERSION, *gFoundation, PxTolerancesScale());
if(gPhysicsSDK == NULL) {
    cerr<<"Error creating PhysX3 device, Exiting..."<<endl;
    exit(1);
}

Creating a scene

Once the PhysX device is created, it's time to create a PhysX scene and add actors to it. You create a scene by calling PxPhysics::createScene(), which requires an instance of the PxSceneDesc class as a parameter. The PxSceneDesc object describes the properties required to create a scene, such as gravity:

PxScene* gScene = NULL;

//Creating scene
PxSceneDesc sceneDesc(gPhysicsSDK->getTolerancesScale());
sceneDesc.gravity = PxVec3(0.0f, -9.8f, 0.0f);
sceneDesc.cpuDispatcher = PxDefaultCpuDispatcherCreate(1);
sceneDesc.filterShader = PxDefaultSimulationFilterShader;
gScene = gPhysicsSDK->createScene(sceneDesc);

Then we create one instance of PxMaterial, which will be used as a parameter when creating the actors:

//Creating material
PxMaterial* mMaterial =
    //static friction, dynamic friction, restitution
    gPhysicsSDK->createMaterial(0.5, 0.5, 0.5);

Creating actors

Now it's time to create the actors. Our first actor is a plane that acts as the ground. When we create a plane in PhysX, its default orientation is vertical, like a wall, but we want it to act as the ground, so we rotate it by 90 degrees so that its normal faces upwards. This is done with the PxTransform class, which positions and rotates the actor in 3D world space. Because we want the plane at the origin, the first parameter of PxTransform is PxVec3(0.0f, 0.0f, 0.0f). We also want to rotate the plane 90 degrees around the z axis, so we use PxQuat(PxHalfPi, PxVec3(0.0f, 0.0f, 1.0f)) as the second parameter. We then give the rigid static actor a shape by calling createShape() with PxPlaneGeometry() as the first parameter, which defines the plane shape, and a reference to the mMaterial we created before as the second. Finally, we add the actor to the scene by calling PxScene::addActor() with a reference to the plane:

//1-Creating static plane
PxTransform planePos = PxTransform(PxVec3(0.0f, 0.0f, 0.0f), PxQuat(PxHalfPi, PxVec3(0.0f, 0.0f, 1.0f)));
PxRigidStatic* plane = gPhysicsSDK->createRigidStatic(planePos);
plane->createShape(PxPlaneGeometry(), *mMaterial);
gScene->addActor(*plane);

The next actor is a dynamic actor with box geometry, positioned 10 units above the static plane. A rigid dynamic actor can be created by calling the PxCreateDynamic() function, which requires five parameters of type PxPhysics, PxTransform, PxGeometry, PxMaterial, and PxReal, respectively. Because we want to place it 10 units above the origin, the first parameter of PxTransform is PxVec3(0.0f, 10.0f, 0.0f); notice that the y component of the vector is 10. We want the default identity rotation, so we skip the second parameter of the PxTransform class. An instance of PxBoxGeometry also needs to be created; it takes a PxVec3 parameter describing the dimensions of the cube as half extents. We finally add the created actor to the scene by calling PxScene::addActor() with a reference to gBox:

PxRigidDynamic* gBox;

//2-Creating dynamic cube
PxTransform boxPos(PxVec3(0.0f, 10.0f, 0.0f));
PxBoxGeometry boxGeometry(PxVec3(0.5f, 0.5f, 0.5f));
gBox = PxCreateDynamic(*gPhysicsSDK, boxPos, boxGeometry, *mMaterial, 1.0f);
gScene->addActor(*gBox);

Simulating PhysX

Simulating a PhysX program means calculating, for the next time frame, the new positions of all of the actors that are under the effect of Newton's laws. Simulation requires a time value, known as the time step, which advances time in the PhysX world. We use the PxScene::simulate() method for this. Its simplest form takes one parameter of type PxReal, representing the time in seconds; this must always be greater than 0, or else the resulting behavior is undefined. After this, you call PxScene::fetchResults(), which allows the simulation to finish and returns the results. The method takes an optional Boolean parameter; setting it to true makes the call wait until the simulation is complete, so that on return the results are guaranteed to be available:

//Stepping PhysX
PxReal myTimestep = 1.0f/60.0f;
void StepPhysX() {
    gScene->simulate(myTimestep);
    gScene->fetchResults(true);
}

We simulate our PhysX program in a loop until the dynamic actor (the box) we created 10 units above the ground falls and comes to rest. The position of the box is printed to the console at each time step. Watching the console, you can see that the box starts at (0, 10, 0) and that the y component, which represents its vertical position, decreases under the effect of gravity during the simulation. Towards the end of the loop, the position stays the same from step to step; this means the box has hit the ground and is now at rest:

//Simulate PhysX 300 times
for(int i=0; i<=300; i++) {
    //Step PhysX simulation
    if(gScene)
        StepPhysX();

    //Get current position of actor (box) and print it
    PxVec3 boxPos = gBox->getGlobalPose().p;
    cout<<"Box current Position ("<<boxPos.x<<" "<<boxPos.y<<" "<<boxPos.z<<")\n";
}

Shutting down PhysX

Now that our simulation is done, we need to destroy the PhysX-related objects and release the memory. Calling PxScene::release() removes all actors, particle systems, and constraint shaders from the scene. Calling PxPhysics::release() shuts down the physics SDK. After that, call PxFoundation::release() to release the foundation object:

void ShutdownPhysX() {
    gScene->release();
    gPhysicsSDK->release();
    gFoundation->release();
}

Summary

We finally created our first PhysX program and walked through its steps from start to finish. To keep it short and simple, we used only a console to display the actor's position during simulation, which is not very exciting, but it is the simplest way to start with PhysX.

Resources for Article: Further resources on this subject: Building Events [Article] AJAX Form Validation: Part 1 [Article] Working with Zend Framework 2.0 [Article]


Building Events

Packt
22 Oct 2013
8 min read
Building a collision event system

In a game such as Angry Birds, we want to know when a breakable object, such as a pig or a piece of wood, has collided with something, so that we can determine the amount of damage dealt and whether the object should be destroyed, which in turn spawns particle effects and increments the player's score. It is the game logic's job to distinguish between the objects, but it is the physics engine's responsibility to send these events in the first place, and we can extract this information from Bullet through its persistent manifolds. Continue from here using the Chapter6.1_CollisionEvents project files.

Explaining the persistent manifolds

Persistent manifolds are the objects that store information between pairs of objects that pass the broad phase. If we remember our physics engine theory, the broad phase returns a shortlist of object pairs that might be touching, but are not necessarily touching; they could still be a short distance apart, so the existence of a manifold does not imply a collision. Once you have the manifolds, there is still a little more work to do to verify whether there is a collision between the object pair.

One of the most common mistakes made with the Bullet physics engine is to assume that the existence of a manifold is enough to signal a collision. This results in detecting collision events a couple of frames too early (while the objects are still approaching one another) and detecting separation events too late (once they have separated far enough that they no longer pass the broad phase). This often leads to blaming Bullet for being sluggish, when the fault lies with the user's original assumptions. Be warned!

Manifolds reside within the collision dispatcher, and Bullet keeps the same manifolds in memory for as long as the same object pairs keep passing the broad phase. This is useful if you want to keep querying the same contact information between pairs of objects over time. This is where the persistent part comes in: it optimizes memory allocation by minimizing how often manifolds are created and destroyed. Bullet is riddled with subtle optimizations, and this is just one of them; all the more reason to use a known-good physics solution like Bullet instead of trying to take on the world and build your own!

The manifold class in question is btPersistentManifold, and we can access the manifold list through the collision dispatcher's getNumManifolds() and getManifoldByIndexInternal() functions. Each manifold contains a handful of functions and member variables to make use of, but the ones we are most interested in for now are getBody0(), getBody1(), and getNumContacts(). These return the two bodies in the object pair that passed the broad phase and the number of contacts detected between them. We will use them to verify whether a collision has actually taken place and to send the involved objects through an event.

Managing the collision event

There are essentially two ways to handle collision events: either send an event every update while two objects are touching (continuously, for as long as they touch), or send events only when the objects collide and when they separate. In almost all cases it is wiser to pick the latter, since it is simply an optimized version of the former. If we know when the objects start and stop touching, we can assume that they are still touching between those two moments in time. So long as the system also informs us of peculiar cases of separation (such as one object being destroyed, or teleporting away, while they are still touching), we have everything we need for a collision event system.
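The start/stop bookkeeping described above can be sketched with plain STL containers. In this sketch, integer IDs stand in for Bullet's rigid-body pointers; it illustrates the diffing logic only and is not Bullet's API:

```cpp
#include <algorithm>
#include <iterator>
#include <set>
#include <utility>

// Integer IDs stand in for btRigidBody pointers. Pairs are stored in
// sorted order so that (a, b) and (b, a) compare equal.
using CollisionPair  = std::pair<int, int>;
using CollisionPairs = std::set<CollisionPair>;

CollisionPair MakePair(int a, int b) {
    return a < b ? CollisionPair(a, b) : CollisionPair(b, a);
}

// Pairs present in 'current' but not in 'previous' are new collisions;
// pairs present in 'previous' but not in 'current' are separations.
void DiffPairs(const CollisionPairs& previous,
               const CollisionPairs& current,
               CollisionPairs& newCollisions,
               CollisionPairs& separations) {
    std::set_difference(current.begin(), current.end(),
                        previous.begin(), previous.end(),
                        std::inserter(newCollisions, newCollisions.begin()));
    std::set_difference(previous.begin(), previous.end(),
                        current.begin(), current.end(),
                        std::inserter(separations, separations.begin()));
}
```

In the real system, the 'current' set would be filled each step by walking the dispatcher's manifolds and keeping only the pairs whose getNumContacts() is greater than zero; 'previous' is simply last step's set.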
Bullet strives to be feature-rich but also flexible, allowing us to build custom solutions to problems such as this one; so this feature is not built into Bullet by default. In other words, we need to build this logic ourselves. Our goal is simple: determine whether a pair of objects has collided or separated during the step, and if so, broadcast the corresponding event. The basic process is as follows:

1. For each manifold, check if the two objects are touching (the number of contact points is greater than zero). If so, add the pair to a list of pairs found in this step.
2. If the same pair was not detected during the previous step, broadcast a collision event.
3. Once we have finished checking the manifolds, create another list containing only the collision pairs that were present in the previous step but are missing from this step.
4. For each missing pair, broadcast a separation event.
5. Overwrite the previous step's list of collision pairs with the list created for this step.

There are several STL (Standard Template Library) objects and functions we can use to make these steps easier. An std::pair can be used to store the objects in pairs, and the pairs can be stored within an std::set. Sets let us perform rapid comparisons using the helpful function std::set_difference(), which tells us the elements present in the first set but not in the second. Note that it does not return new object pairs from the second set.

The most important function introduced in this article's source code is CheckForCollisionEvents(). The code may look a little intimidating at first, but it simply implements the steps listed previously; the comments should help identify each step. When we detect a collision or separation, we want some way to inform the game logic of it. These two functions will do the job nicely:

virtual void CollisionEvent(btRigidBody* pBody0, btRigidBody* pBody1);
virtual void SeparationEvent(btRigidBody* pBody0, btRigidBody* pBody1);

In order to test this feature, we introduce the following code to turn colliding objects white (and similar code to turn separating objects black):

void BulletOpenGLApplication::CollisionEvent(const btCollisionObject* pBody0, const btCollisionObject* pBody1) {
    GameObject* pObj0 = FindGameObject((btRigidBody*)pBody0);
    pObj0->SetColor(btVector3(1.0,1.0,1.0));
    GameObject* pObj1 = FindGameObject((btRigidBody*)pBody1);
    pObj1->SetColor(btVector3(1.0,1.0,1.0));
}

Note that these color-changing commands are commented out in future project code. When we launch the application, we expect colliding and separating objects to change to the colors given in CollisionEvent(): colliding objects should turn white, and separated objects should turn black. But when the objects have finished moving, we observe something that might seem a little counterintuitive. The following screenshot shows the two objects colored differently once they come to rest:

If we think about the order of events for a moment, it begins to make sense:

1. When the first box collides with the ground plane, both objects (the box and the ground plane) turn white.
2. The second box then collides with the first, turning the second box white while the first stays white.
3. Next, the second box separates from the first, so both boxes turn black.
4. Finally, the second box collides with the ground plane, turning it white once again.

What was the last color the first box turned? Black, because the last event it was involved in was a separation from the second box. But how can the box be black if it is touching something? This is an intentional design consequence of this particular style of collision event management, one where we only recognize collision and separation events.

If we wanted objects to remember that they are still touching something, we would have to introduce some internal method of counting how many objects they are in contact with, incrementing and decrementing the count each time a collision or separation event comes along. This naturally consumes a little memory and processing time, but it is far more optimized than the alternative of spamming a new collision event every step while two objects are still touching; we want to avoid wasting CPU cycles telling ourselves information we already know.

The CollisionEvent() and SeparationEvent() functions can be used by the game logic to determine if, when, and how two objects have collided. Since they hand over the rigid bodies involved in the collision, we can determine all kinds of important physics information, such as the points of contact (where they hit) and the difference in velocity/impulse of the two bodies (how hard they hit). From there we can construct pretty much whatever physics collision-related game logic we desire. Try picking up, or introducing, more objects with the left/right mouse buttons, causing further separations and collisions, until you get a feel for how this system works.

Summary

Very little game logic can be built around a physics engine without a collision event system, so we made Bullet broadcast collision and separation events to our application so that they can be used by our game logic. This works by checking the list of manifolds and creating logic that keeps track of important changes in these data structures.

Resources for Article: Further resources on this subject: Flash Game Development: Making of Astro-PANIC! [Article] 2D game development with Monkey [Article] Developing Flood Control using XNA Game Development [Article]
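The "remember that I'm still touching something" counting scheme discussed above takes only a few lines per object. A hypothetical TouchCounter (not part of Bullet; object IDs stand in for body pointers) could look like this:

```cpp
#include <map>

// Hypothetical per-object contact counter (not part of Bullet): bump the
// count on each collision event, drop it on each separation event, and an
// object counts as "touching" while its count stays above zero.
class TouchCounter {
public:
    void OnCollision(int objectId)  { ++counts_[objectId]; }
    void OnSeparation(int objectId) {
        auto it = counts_.find(objectId);
        if (it != counts_.end() && --it->second == 0) counts_.erase(it);
    }
    bool IsTouching(int objectId) const {
        return counts_.count(objectId) > 0;
    }
private:
    std::map<int, int> counts_;  // objectId -> live contact count
};
```

Wired into the event callbacks, this would let the first box report that it is still touching the ground even after its last event was a separation from the second box.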
article-image-adding-finesse-your-game
Packt
21 Oct 2013
7 min read

Adding Finesse to Your Game

(For more resources related to this topic, see here.)

Adding a background

There is still a lot of black in the background, and as the game has a space theme, let's add some stars in there. The way we'll do this is to add a sphere that we can map the stars texture to, so click on Game Object | Create Other | Sphere, and position it at X: 0, Y: 0, Z: 0. We also need to set the size to X: 100, Y: 100, Z: 100. Drag the stars texture, located at Textures/stars, onto the new sphere that we created in our scene.

That was simple, wasn't it? Unity has added the texture to a material that appears on the outside of our sphere, while we need it to show on the inside. To fix this, we are going to reverse the triangle order, flip the normal map, and flip the UV map with C# code.

Right-click on the Scripts folder, then click on Create and select C# Script. A script will appear in the Scripts folder; it should already have focus and be asking you to type a name for the script. Call it SkyDome. Double-click on the script in Unity and it will open in MonoDevelop.
Edit the Start method, as shown in the following code:

```csharp
void Start () {
    // Get a reference to the mesh
    MeshFilter BaseMeshFilter = transform.GetComponent("MeshFilter") as MeshFilter;
    Mesh mesh = BaseMeshFilter.mesh;

    // Reverse triangle winding
    int[] triangles = mesh.triangles;
    int numpolies = triangles.Length / 3;
    for(int t = 0; t < numpolies; t++)
    {
        int tribuffer = triangles[t * 3];
        triangles[t * 3] = triangles[(t * 3) + 2];
        triangles[(t * 3) + 2] = tribuffer;
    }

    // Readjust uv map for inner sphere projection
    Vector2[] uvs = mesh.uv;
    for(int uvnum = 0; uvnum < uvs.Length; uvnum++)
    {
        uvs[uvnum] = new Vector2(1 - uvs[uvnum].x, uvs[uvnum].y);
    }

    // Readjust normals for inner sphere projection
    Vector3[] norms = mesh.normals;
    for(int normalsnum = 0; normalsnum < norms.Length; normalsnum++)
    {
        norms[normalsnum] = -norms[normalsnum];
    }

    // Copy local built-in arrays back to the mesh
    mesh.uv = uvs;
    mesh.triangles = triangles;
    mesh.normals = norms;
}
```

The breakdown of the code is as follows:

- Get the mesh of the sphere.
- Reverse the way the triangles are drawn. Each triangle has three indexes in the array; this script just swaps the first and last index of each triangle in the array.
- Adjust the X position of the UV map coordinates.
- Flip the normals of the sphere.
- Apply the new values of the reversed triangles, adjusted UV coordinates, and flipped normals to the sphere.

Click and drag this script onto your sphere GameObject and test your scene. You should now see something like the following screenshot:

Adding extra levels

Now that the game is looking better, we can add some more content to it. Luckily, the jagged array we created earlier easily supports adding more levels. Levels can be any size, even with variable column heights per row. Double-click on the Sokoban script in the Project panel and switch over to MonoDevelop.
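The winding-reversal step is just an index swap per triangle; the following standalone Python sketch (illustrative only, not part of the Unity project) shows the same operation on a flat triangle index array:

```python
def reverse_winding(triangles):
    """Swap the first and last index of every triangle in a flat index
    array, reversing the winding order (and thus which side of each
    face is treated as the front)."""
    flipped = list(triangles)
    for t in range(len(flipped) // 3):
        flipped[t * 3], flipped[t * 3 + 2] = flipped[t * 3 + 2], flipped[t * 3]
    return flipped

# Two triangles: (0, 1, 2) and (2, 1, 3)
print(reverse_winding([0, 1, 2, 2, 1, 3]))  # [2, 1, 0, 3, 1, 2]
```

Applying the swap twice restores the original order, which is why re-running the script on an already-flipped mesh would turn it inside out again.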
Find the levels array and modify it to be as follows:

```csharp
// Create the top array, this will store the level arrays
int[][][] levels =
{
    // Create the level array, this will store the row array
    new int [][] {
        // Create all row arrays, these will store column data
        new int[] {1,1,1,1,1,1,1,1},
        new int[] {1,0,0,1,0,0,0,1},
        new int[] {1,0,3,3,0,3,0,1},
        new int[] {1,0,0,1,0,1,0,1},
        new int[] {1,0,0,1,3,1,0,1},
        new int[] {1,0,0,2,2,2,2,1},
        new int[] {1,0,0,1,0,4,1,1},
        new int[] {1,1,1,1,1,1,1,1}
    },
    // Create a new level
    new int [][] {
        new int[] {1,1,1,1,0,0,0,0},
        new int[] {1,0,0,1,1,1,1,1},
        new int[] {1,0,2,0,0,3,0,1},
        new int[] {1,0,3,0,0,2,4,1},
        new int[] {1,1,1,0,0,1,1,1},
        new int[] {0,0,1,1,1,1,0,0}
    },
    // Create a new level
    new int [][] {
        new int[] {1,1,1,1,1,1,1,1},
        new int[] {1,4,0,1,2,2,2,1},
        new int[] {1,0,0,3,3,0,0,1},
        new int[] {1,0,3,0,0,0,1,1},
        new int[] {1,0,0,1,1,1,1},
        new int[] {1,0,0,1},
        new int[] {1,1,1,1}
    }
};
```

The preceding code has given us two extra levels, bringing the total to three. The layout of the arrays is still very visual, and you can easily see the level layout just by looking at the arrays.

Our BuildLevel, CheckIfPlayerIsAttempingToMove, and MovePlayer methods only work on the first level at the moment; let's update them to always use the user's current level. We'll have to store which level the player is currently on and use that level at all times, incrementing the value when a level is finished. As we'll want this value to persist between plays, we'll be using the PlayerPrefs object that Unity provides for saving player data. Before we get the value, we need to check that it is actually set and exists; otherwise we could see some odd results.

Start by declaring our variable for use at the top of the Sokoban script as follows:

```csharp
int currentLevel;
```

Next, we'll need to get the value of the current level from the PlayerPrefs object and store it in the Awake method.
Add the following code to the top of your Awake method:

```csharp
if (PlayerPrefs.HasKey("currentLevel")) {
    currentLevel = PlayerPrefs.GetInt("currentLevel");
} else {
    currentLevel = 0;
    PlayerPrefs.SetInt("currentLevel", currentLevel);
}
```

Here we are checking if we have a value already stored in the PlayerPrefs object. If we do, then we use it; if we don't, then we set currentLevel to 0 and save it to the PlayerPrefs object.

To fix the methods mentioned earlier, click on Search | Replace. A new window will appear. Type levels[0] in the top box and levels[currentLevel] in the bottom one, and then click on All.

Level complete detection

It's all well and good having three levels, but without a mechanism to move between them they are useless. We are going to add a check to see if the player has finished a level; if they have, then we increment the level counter and load the next level in the array. We only need to do the check at the end of every move; to do so every frame would be redundant. We'll write the following method first and then explain it:

```csharp
// If this method returns true then we have finished the level
bool haveFinishedLevel () {
    // Initialise the counter for how many crates are on goal tiles
    int cratesOnGoalTiles = 0;
    // Loop through all the rows in the current level
    for (int i = 0; i < levels[currentLevel].Length; i++) {
        // Get the tile ID for the column and pass it to the switch statement
        for (int j = 0; j < levels[currentLevel][i].Length; j++) {
            switch (levels[currentLevel][i][j]) {
                case 5:
                    // Do we have a match for a crate on goal tile ID?
                    // If so, increment the counter
                    cratesOnGoalTiles++;
                    break;
                default:
                    break;
            }
        }
    }
    // Check if the cratesOnGoalTiles variable is the same as the
    // amountOfCrates we set when building the level
    if (amountOfCrates == cratesOnGoalTiles) {
        return true;
    } else {
        return false;
    }
}
```

In the BuildLevel method, whenever we instantiate a crate, we increment the amountOfCrates variable.
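The counting logic of haveFinishedLevel() boils down to tallying one tile ID in a jagged array; here is the same idea as a small Python sketch (illustrative only; it follows the tile convention above, where 5 marks a crate on a goal tile):

```python
CRATE_ON_GOAL = 5

def have_finished_level(level, amount_of_crates):
    """Return True when every crate sits on a goal tile.
    `level` is a jagged list of rows; rows may have different lengths."""
    crates_on_goal_tiles = sum(
        1 for row in level for tile in row if tile == CRATE_ON_GOAL
    )
    return crates_on_goal_tiles == amount_of_crates

# A tiny level with two crates, both pushed onto goal tiles (5):
level = [
    [1, 1, 1, 1],
    [1, 5, 5, 1],
    [1, 1, 1, 1],
]
print(have_finished_level(level, 2))  # True
```

Because the rows are iterated independently, the check works unchanged for levels with variable row lengths, like the third level above.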
We can use this variable to check if the number of crates on goal tiles is the same as the amountOfCrates variable; if it is, then we know we have finished the current level. The for loops iterate through the current level's rows and columns, and we know that 5 in the array is a crate on a goal tile. The method returns a Boolean based on whether we have finished the level or not.

Now let's add the call to the method. The logical place would be inside the MovePlayer method, so go ahead and add a call to the method just after the pCol += tCol; statement. As the method returns true or false, we're going to use it in an if statement, as shown in the following code:

```csharp
// Check if we have finished the level
if (haveFinishedLevel()) {
    Debug.Log("Finished");
}
```

The Debug.Log method will do for now; let's check if it's working. The solution for level one is on YouTube at http://www.youtube.com/watch?v=K5SMwAJrQM8&hd=1. Click on the play icon at the top-middle of the Unity screen and copy the sequence of moves in the video (or solve it yourself); when all the crates are on the goal tiles you'll see Finished in the Console panel.

Summary

The game now has some structure in the form of levels that you can complete, and it is easily expandable. If you wanted to take a break from the article, now would be a great time to create and add some levels to the game, and maybe add some extra sound effects. All this hard work is for nothing if you can't make any money though, isn't it?

Resources for Article:
Further resources on this subject:
Introduction to Game Development Using Unity 3D [Article]
Flash Game Development: Making of Astro-PANIC! [Article]
Unity Game Development: Interactions (Part 1) [Article]

Packt
18 Oct 2013
5 min read

Mesh animation

(For more resources related to this topic, see here.)

Using animated models is not very different from using normal models. There are essentially two types of animation to consider (in addition to manually changing the position of a mesh's geometry in Three.js). If all you need is to smoothly transition properties between different values—for example, changing the rotation of a door in order to animate it opening—you can use the Tween.js library at https://github.com/sole/tween.js to do so instead of animating the mesh itself. Jerome Etienne has a nice tutorial on doing this at http://learningthreejs.com/blog/2011/08/17/tweenjs-for-smooth-animation/.

Morph animation

Morph animation stores animation data as a sequence of positions. For example, if you had a cube with a shrink animation, your model could hold the positions of the vertices of the cube at full size and at the shrunk size. The animation would then consist of interpolating between those states during each rendering or keyframe. The data representing each state can hold either vertex targets or face normals.

To use morph animation, the easiest approach is to use a THREE.MorphAnimMesh class, which is a subclass of the normal mesh.
In the following example, the highlighted lines should only be included if the model uses normals:

```javascript
var loader = new THREE.JSONLoader();
loader.load('model.js', function(geometry) {
  var material = new THREE.MeshLambertMaterial({
    color: 0x000000,
    morphTargets: true,
    morphNormals: true,
  });
  if (geometry.morphColors && geometry.morphColors.length) {
    var colorMap = geometry.morphColors[0];
    for (var i = 0; i < colorMap.colors.length; i++) {
      geometry.faces[i].color = colorMap.colors[i];
    }
    material.vertexColors = THREE.FaceColors;
  }
  geometry.computeMorphNormals();
  var mesh = new THREE.MorphAnimMesh(geometry, material);
  mesh.duration = 5000; // in milliseconds
  scene.add(mesh);
  morphs.push(mesh);
});
```

The first thing we do is make our material aware that the mesh will be animated, with the morphTargets property and optionally the morphNormals property. Next, we check whether colors will change during the animation, and set the mesh faces to their initial color if so (if you know your model doesn't have morphColors, you can leave out that block). Then the normals are computed (if we have them) and our MorphAnimMesh animation is created. We set the duration value of the full animation, and finally store the mesh in the global morphs array so that we can update it during our physics loop:

```javascript
for (var i = 0; i < morphs.length; i++) {
  morphs[i].updateAnimation(delta);
}
```

Under the hood, the updateAnimation method just changes which set of positions in the animation the mesh should be interpolating between. By default, the animation will start immediately and loop indefinitely. To stop animating, just stop calling updateAnimation.

Skeletal animation

Skeletal animation moves a group of vertices in a mesh together by making them follow the movement of a bone. This is generally easier to design because artists only have to move a few bones instead of potentially thousands of vertices. It's also typically less memory-intensive for the same reason.
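Returning to morph animation for a moment: the interpolation that updateAnimation performs between two stored states is just a per-vertex linear blend, which can be sketched in Python (illustrative only, not Three.js code):

```python
def blend_morph_targets(state_a, state_b, t):
    """Linearly interpolate each vertex position between two morph
    states; t=0 gives state_a, t=1 gives state_b."""
    return [
        tuple(a + (b - a) * t for a, b in zip(va, vb))
        for va, vb in zip(state_a, state_b)
    ]

full_size = [(1.0, 1.0, 1.0), (-1.0, 1.0, 1.0)]  # two vertices of a cube
shrunk    = [(0.5, 0.5, 0.5), (-0.5, 0.5, 0.5)]  # same vertices, shrunk state
print(blend_morph_targets(full_size, shrunk, 0.5))
# [(0.75, 0.75, 0.75), (-0.75, 0.75, 0.75)]
```

Stepping t from 0 to 1 over the animation's duration produces the smooth shrink; looping resets t back to 0.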
To use skeletal animation, use a THREE.SkinnedMesh class, which is a subclass of the normal mesh:

```javascript
var loader = new THREE.JSONLoader();
loader.load('model.js', function(geometry, materials) {
  for (var i = 0; i < materials.length; i++) {
    materials[i].skinning = true;
  }
  var material = new THREE.MeshFaceMaterial(materials);
  THREE.AnimationHandler.add(geometry.animation);
  var mesh = new THREE.SkinnedMesh(geometry, material, false);
  scene.add(mesh);
  var animation = new THREE.Animation(mesh, geometry.animation.name);
  animation.interpolationType = THREE.AnimationHandler.LINEAR;
  // or CATMULLROM for cubic splines (ease-in-out)
  animation.play();
});
```

The model we're using in this example already has materials, so unlike in the morph animation example, we have to change the existing materials instead of creating a new one. For skeletal animation we have to enable skinning, which refers to how the materials are wrapped around the mesh as it moves. We use the THREE.AnimationHandler utility to track where we are in the current animation and a THREE.SkinnedMesh to properly handle our model's bones. Then we use the mesh to create a new THREE.Animation and play it. The animation's interpolationType determines how the mesh transitions between states. If you want cubic spline easing (slow, then fast, then slow), use THREE.AnimationHandler.CATMULLROM instead of the LINEAR easing.

We also need to update the animation in our physics loop:

```javascript
THREE.AnimationHandler.update(delta);
```

It is possible to use both skeletal and morph animations at the same time. In this case, the best approach is to treat the animation as skeletal and manually update the mesh's morphTargetInfluences array as demonstrated in examples/webgl_animation_skinning_morph.html in the Three.js project.

Summary

This article explained how to manage external assets such as 3D models, how to add details to your worlds, and how to animate meshes.
Resources for Article:
Further resources on this subject:
Introduction to Game Development Using Unity 3D [Article]
Basics of Exception Handling Mechanism in JavaScript Testing [Article]
2D game development with Monkey [Article]
Packt
11 Oct 2013
15 min read

Introducing the Building Blocks for Unity Scripts

(For more resources related to this topic, see here.)

Using the term method instead of function

You are constantly going to see the words function and method used everywhere as you learn Unity. The words function and method truly mean the same thing in Unity; they do the same thing. Since you are studying C#, and C# is an Object-Oriented Programming (OOP) language, I will use the word "method" throughout this article, just to be consistent with C# guidelines. It makes sense to learn the correct terminology for C#. Also, UnityScript and Boo are OOP languages. The authors of the Scripting Reference probably should have used the word method instead of function in all documentation. From now on I'm going to use the words method or methods in this article. When I refer to the functions shown in the Scripting Reference, I'm going to use the word method instead, just to be consistent throughout this article.

Understanding what a variable does in a script

What is a variable? Technically, it's a tiny section of your computer's memory that will hold any information you put there. While a game runs, it keeps track of where the information is stored, the value kept there, and the type of the value. However, for this article, all you need to know is how a variable works in a script. It's very simple.

What's usually in a mailbox, besides air? Well, usually there's nothing, but occasionally there is something in it. Sometimes there's money (a paycheck), bills, a picture from aunt Mabel, a spider, and so on. The point is that what's in a mailbox can vary. Therefore, let's call each mailbox a variable instead.

Naming a variable

Using the picture of the country mailboxes, if I asked you to see what is in the mailbox, the first thing you'd ask is: which one? If I said the Smith mailbox, or the brown mailbox, or the round mailbox, you'd know exactly which mailbox to open to retrieve what is inside. Similarly, in scripts, you have to give each of your variables a unique name.
Then I can ask you what's in the variable named myNumber, or whatever cool name you might use.

A variable name is just a substitute for a value

As you write a script and make a variable, you are simply creating a placeholder or a substitute for the actual information you want to use. Look at the following simple math equation:

2 + 9 = 11

Simple enough. Now try the following equation:

11 + myNumber = ???

There is no answer to this yet. You can't add a number and a word. Going back to the mailbox analogy, write the number 9 on a piece of paper. Put it in the mailbox named myNumber. Now you can solve the equation. What's the value in myNumber? The value is 9. So now the equation looks normal:

11 + 9 = 20

The myNumber variable is nothing more than a named placeholder to store some data (information). So anywhere you would like the number 9 to appear in your script, just write myNumber, and the number 9 will be substituted. Although this example might seem silly at first, variables can store all kinds of data that is much more complex than a simple number. This is just a simple example to show you how a variable works.

Time for action – creating a variable and seeing how it works

Let's see how this actually works in our script. Don't be concerned about the details of how to write this; just make sure your script is the same as the script shown in the next screenshot.

1. In the Unity Project panel, double-click on LearningScript.
2. In MonoDevelop, write lines 6, 11, and 13 from the next screenshot.
3. Save the file.

To make this script work, it has to be attached to a GameObject. Currently, in our State Machine project we only have one GameObject, the Main Camera. This will do nicely since this script doesn't affect the Main Camera in any way. The script simply runs by virtue of it being attached to a GameObject.

4. Drag LearningScript onto the Main Camera.
5. Select Main Camera so that it appears in the Inspector panel.
6. Verify that LearningScript is attached.
Open the Unity Console panel to view the output of the script, and click on Play. The preceding steps are shown in the following screenshot:

What just happened?

In the following Console panel is the result of our equations. As you can see, the equation on line 13 worked by substituting the number 9 for the myNumber variable:

Time for action – changing the number 9 to a different number

Since myNumber is a variable, the value it stores can vary. If we change what is stored in it, the answer to the equation will change too. Follow the ensuing steps: Stop the game and change 9 to 19. Notice that when you restart the game, the answer will be 30.

What just happened?

You learned that a variable works by a simple process of substitution. There's nothing more to it than that. We didn't get into the details of the wording used to create myNumber, or the types of variables you can create, but that wasn't the intent. This was just to show you how a variable works. It just holds data so you can use that data elsewhere in your script.

Have a go hero – changing the value of myNumber

In the Inspector panel, try changing the value of myNumber to some other value, even a negative value. Notice the change in the answer in the Console.

Using a method in a script

Methods are where the action is and where the tasks are performed. Great, that's really nice to know, but what is a method?

What is a method?

When we write a script, we are making lines of code that the computer is going to execute, one line at a time. As we write our code, there will be things we want our game to execute more than once. For example, we can write code that adds two numbers. Suppose our game needs to add those two numbers together a hundred different times during the game. So you say, "Wow, I have to write the same code a hundred times that adds two numbers together. There has to be a better way." Let a method take away your typing pain.
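The idea can be shown in miniature before we do it in C# (a throwaway Python sketch, not the Unity script itself): write the addition once as a named block of code, then run it by calling the name.

```python
number1 = 2
number2 = 9

def add_two_numbers():
    # The method body: add the two variables and show the answer
    result = number1 + number2
    print(result)
    return result

# Every place we call the name, the body is executed:
add_two_numbers()  # prints 11
```

Call add_two_numbers() a hundred times if you like; the addition code exists in exactly one place.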
You just have to write the code to add two numbers once, and then give this chunk of code a name, such as AddTwoNumbers(). Now, every time our game needs to add two numbers, don't write the code over and over; just call the AddTwoNumbers() method.

Time for action – learning how a method works

We're going to edit LearningScript again. In the following screenshot, there are a few lines of code that look strange. We are not going to get into the details of what they mean in this article; that's for Getting into the Details of Methods. Right now, I am just showing you a method's basic structure and how it works:

In MonoDevelop, select LearningScript for editing. Edit the file so that it looks exactly like the following screenshot, and save the file.

What's in this script file?

In the previous screenshot, lines 6 and 7 will look familiar to you; they are variables just as you learned in the previous section. There are two of them this time. These variables store the numbers that are going to be added.

Line 16 may look very strange to you. Don't concern yourself right now with how this works. Just know that it's a line of code that lets the script know when the Return/Enter key is pressed. Press the Return/Enter key when you want to add the two numbers together.

Line 17 is where the AddTwoNumbers() method gets called into action. In fact, that's exactly how to describe it: this line of code calls the method.

Lines 20, 21, 22, and 23 make up the AddTwoNumbers() method. Don't be concerned about the code details yet. I just want you to understand how calling a method works.

Method names are substitutes too

You learned that a variable is a substitute for the value it actually contains. Well, a method is no different. Take a look at line 20 from the previous screenshot:

void AddTwoNumbers ()

AddTwoNumbers() is the name of the method. Like a variable, AddTwoNumbers() is nothing more than a named placeholder in the memory, but this time it stores some lines of code instead.
So anywhere we would like to use the code of this method in our script, just write AddTwoNumbers(), and the code will be substituted.

Line 21 has an opening curly-brace and line 23 has a closing curly-brace. Everything between the two curly-braces is the code that is executed when this method is called in our script. Look at line 17 from the previous screenshot:

AddTwoNumbers();

The method name AddTwoNumbers() is called. This means that the code between the curly-braces is executed. It's like having all of the code of the method right there on line 17. Of course, this AddTwoNumbers() method only has one line of code to execute, but a method could have many lines of code.

Line 22 is the action part of this method, the part between the curly-braces. This line of code adds the two variables together and displays the answer in the Unity Console. Then, follow the ensuing steps: Go back to Unity and have the Console panel showing. Now click on Play.

What just happened?

Oh no! Nothing happened! Actually, as you sit there looking at the blank Console panel, the script is running perfectly, just as we programmed it. Line 16 in the script is waiting for you to press the Return/Enter key. Press it now.

And there you go! The following screenshot shows you the result of adding two variables together that contain the numbers 2 and 9:

Line 16 waited for you to press the Return/Enter key. When you did, line 17 executed, which called the AddTwoNumbers() method. This allowed the code block of the method, line 22, to add the values stored in the variables number1 and number2.

Have a go hero – changing the output of the method

While Unity is in the Play mode, select the Main Camera so its Components show in the Inspector. In the Inspector panel, locate Learning Script and its two variables. Change the values, currently 2 and 9, to different values. Make sure to click your mouse in the Game panel so it has focus, then press the Return/Enter key again.
You will see the result of the new addition in the Console. You just learned how a method works to allow a specific block of code to be called to perform a task. We didn't get into any of the wording details of methods here; this was just to show you fundamentally how they work.

Introducing the class

The class plays a major role in Unity. In fact, what Unity does with a class is a little piece of magic when Unity creates Components. You just learned about variables and methods. These two items are the building blocks used to build Unity scripts. The term script is used everywhere in discussions and documents. Look it up in the dictionary and it can be generally described as written text. Sure enough, that's what we have. However, since we aren't just writing a screenplay or passing a note to someone, we need to learn the actual terms used in programming.

Unity calls the code it creates a C# script. However, people like me have to teach you some basic programming skills and tell you that a script is really a class. In the previous section about methods, we created a class (script) called LearningScript. It contained a couple of variables and a method. The main concept or idea of a class is that it's a container of data, stored in variables, and methods that process that data in some fashion. Because I don't want to constantly write class (script), I will be using the word script most of the time. However, I will also be using class when getting more specific with C#. Just remember that a script is a class that is attached to a GameObject. The State Machine classes will not be attached to any GameObjects, so I won't be calling them scripts.

By using a little Unity magic, a script becomes a Component

While working in Unity, we wear the following two hats:

- A Game-Creator hat
- A Scripting (programmer) hat

When we wear our Game-Creator hat, we will be developing our Scene, selecting GameObjects, and viewing Components; just about anything except writing our scripts.
When we put our Scripting hat on, our terminology changes as follows:

- We're writing code in scripts using MonoDevelop
- We're working with variables and methods

The magic happens when you put your Game-Creator hat back on and attach your script to a GameObject. Wave the magic wand — ZAP — the script file is now called a Component, and the public variables of the script are now the properties you can see and change in the Inspector panel.

A more technical look at the magic

A script is like a blueprint or a written description. In other words, it's just a single file in a folder on our hard drive. We can see it right there in the Projects panel. It can't do anything just sitting there. When we tell Unity to attach it to a GameObject, we haven't created another copy of the file; all we've done is tell Unity we want the behaviors described in our script to be a Component of the GameObject.

When we click on the Play button, Unity loads the GameObject into the computer's memory. Since the script is attached to a GameObject, Unity also has to make a place in the computer's memory to store a Component as part of the GameObject. The Component has the capabilities specified in the script (blueprint) we created.

Even more Unity magic

There's some more magic you need to be aware of: the scripts inherit from MonoBehaviour. For beginners to Unity, studying C# inheritance isn't a subject you need to learn in any great detail, but you do need to know that each Unity script uses inheritance. We see the code in every script that will be attached to a GameObject. In LearningScript, the code is on line 4:

public class LearningScript : MonoBehaviour

The colon and the last word of that code mean that the LearningScript class is inheriting behaviors from the MonoBehaviour class. This simply means that the MonoBehaviour class is making a few of its variables and methods available to the LearningScript class.
It's no coincidence that the inherited variables and methods look just like some of the code we saw in the Unity Scripting Reference. The following are the two inherited behaviors in the LearningScript:

Line 9: void Start ()
Line 14: void Update ()

The magic is that you don't have to call these methods; Unity calls them automatically. So the code you place in these methods gets executed automatically.

Have a go hero – finding Start and Update in the Scripting Reference

Try a search on the Scripting Reference for Start and Update to learn when each method is called by Unity and how often. Also search for MonoBehaviour. This will show you that since our script inherits from MonoBehaviour, we are able to use the Start() and Update() methods.

Components communicating using the Dot Syntax

Our script has variables to hold data, and our script has methods to allow tasks to be performed. I now want to introduce the concept of communicating with other GameObjects and the Components they contain. Communication between one GameObject's Components and another GameObject's Components using Dot Syntax is a vital part of scripting. It's what makes interaction possible. We need to communicate with other Components or GameObjects to be able to use the variables and methods in other Components.

What's with the dots?

When you look at code written by others, you'll see words with periods separating them. What the heck is that? It looks complicated, doesn't it? The following is an example from the Unity documentation:

transform.position.x

Don't concern yourself with what the preceding code means, as that comes later; I just want you to see the dots. That's called the Dot Syntax. The following is another example. It's the fictitious address of my house:

USA.Vermont.Essex.22MyStreet

Looks funny, doesn't it? That's because I used the syntax (grammar) of C# instead of the post office's. However, I'll bet if you look closely, you can easily figure out how to find my house.
Summary

This article introduced you to the basic concepts of variables, methods, and the Dot Syntax. These building blocks are used to create scripts and classes. Understanding how these building blocks work is critical so that you don't feel you're not getting it. We discovered that a variable name is a substitute for the value it stores; a method name is a substitute for a block of code; and when a script or class is attached to a GameObject, it becomes a Component. The Dot Syntax is just like an address to locate GameObjects and Components. With these concepts under your belt, we can proceed to learn the details of the sentence structure, the grammar, and the syntax used to work with variables, methods, and the Dot Syntax.

Resources for Article:
Further resources on this subject:
Debugging Multithreaded Applications as Singlethreaded in C# [Article]
Simplifying Parallelism Complexity in C# [Article]
Unity Game Development: Welcome to the 3D world [Article]

Packt
04 Oct 2013
8 min read

Let There be Light!

(For more resources related to this topic, see here.)

Basic light sources

You use lights to give a scene brightness, ambience, and depth. Without light, everything looks flat and dull. Use additional light sources to even out lighting and to set up interior scenes. In Unity, lights are components of GameObjects. The different kinds of light sources are as follows:

Directional lights: These lights are commonly used to mimic the sun. Their position is irrelevant, as only orientation matters. Every architectural scene should have at least one main Directional light. When you only need to lighten up an interior room, they are trickier to use, as they tend to brighten up the whole scene, but they help get some light through the windows and inside the project. We'll see a few use cases in the next few sections.

Point lights: These lights are easy to use, as they emit light in every direction. Try to minimize their Range so they don't spill light into other places. In most scenes, you'll need several of them to balance out dark spots and corners and to even out the overall lighting.

Spot lights: These lights only emit light into a cone and are good for simulating interior light fixtures. They cast a distinct, bright circular light spot, so use them to highlight something.

Area lights: These are the most advanced lights, as they allow a light source to be given an actual rectangular size. This results in smoother lights and shadows, but their effect is only visible when baking, and they require a pro-license. They are good for simulating light panels or the effect of light coming in through a window. In the free version, you can simulate them using multiple Spot or Directional lights.

Shadows

Most current games support some form of shadows. They can be pre-calculated or rendered in real time. Pre-calculation implies that the effect of shadows and lighting is calculated in advance and rendered onto an additional material layer.
It only makes sense for objects that don't move in the scene. Real-time shadows are rendered using the GPU, but can be computationally expensive and should only be used for dynamic lighting. You might be familiar with real-time shadows from applications such as SketchUp and recent versions of ArchiCAD or Revit. Ideally, both techniques are combined: the overall lighting of the scene (for example, buildings, streets, interiors, and so on) is pre-calculated and baked into texture maps, while additional real-time shadows are used on the moving characters. Unity can blend both types of shadows to simulate dynamic lighting in large scenes. Some of these techniques, however, are only supported in the pro-version.

Real-time shadows

Imagine we want to create a sun or shadow study of a building. This is best appreciated in real time and by looking from the outside. We will use the same model as we did in the previous article, but load it in a separate scene. We want a light object acting as a sun, a spherical object acting as a visual clue to where the sun is positioned, and a way to link them together to control the rotations easily. The steps to achieve this are as follows:

Add a Directional light, name it SunLight, and choose the Shadow Type. Hard shadows are more sharply defined and are the best choice in this example, whereas Soft shadows look smoother and are better suited for subtle mood lighting.

Add an empty GameObject by navigating to GameObject | Create Empty, position it in the center of the scene, and name it ORIGIN.

Create a Sphere GameObject by navigating to GameObject | Create Other | Sphere and name it VisualSun. Make it a child of ORIGIN by dragging the VisualSun name in the Hierarchy tab onto the ORIGIN name. Give it a bright, yellow material, using a Self-Illumin/Diffuse Shader. Deactivate Cast Shadows and Receive Shadows on the Mesh Renderer component. 
After you have placed the VisualSun as a child of the origin object, reset the position of the Sphere to 0 for X, Y, and Z. It now sits in the same place as its parent. Even if you move the parent, its local position stays at X=0, Y=0, and Z=0, which makes it convenient for placement relative to its parent object. Alter the Z-position to define an offset from the origin, for example, 20 units. The negative Z will facilitate the SunLight orientation in the next step.

The SunLight can be dragged onto the VisualSun and its local position reset to zero as well. When all rotations are also zero, it emits light down the Z-axis and thus straight at the origin. If you want a nice glow effect, you can add a Halo by navigating to Components | Effects | Halo and then to SunLight, and setting a suitable Size.

We now have a hierarchic structure of the origin, the visible sphere, and the Directional light, accentuated by the halo. We can adjust this assembly by rotating the origin. Rotating around the Y-axis defines the orientation of the sun, whereas a rotation around the X-axis defines the azimuth. With these two rotations, we can position the sun wherever we want.

Lightmapping

Real-time lighting is computationally very expensive. If you don't have the latest hardware, it might not even be supported. Or you might avoid it for a mobile app, where hardware resources are limited. It is possible to pre-calculate the lighting of a scene and bake it onto the geometry as textures. This process is called Lightmapping; for more information on it, visit http://docs.unity3d.com/Documentation/Manual/Lightmapping.html.

While the actual calculations are rather complex, the process in Unity is made easy thanks to the integrated Beast Lightmapping. There are a few things you need to set up properly. These are given as follows:

First, ensure that any object that needs to be baked is set to Static. Each GameObject has a static-toggle, right next to the Name property. 
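The effect of these two rotations can be checked with a little trigonometry. The sketch below is plain Java, not Unity code; the class and method names are my own, and it assumes the light initially travels down its local Z-axis toward the origin, with the Y-rotation (orientation) applied around the world axis after the X-rotation (azimuth):

```java
// Illustrative math only: where does the light point after rotating ORIGIN?
public class SunDirection {
    // orientationDeg: rotation around Y; azimuthDeg: rotation around X.
    // Returns the unit direction the light travels, starting from (0, 0, 1).
    public static double[] direction(double orientationDeg, double azimuthDeg) {
        double y = Math.toRadians(orientationDeg);
        double p = Math.toRadians(azimuthDeg);
        double dx = Math.sin(y) * Math.cos(p);
        double dy = -Math.sin(p);
        double dz = Math.cos(y) * Math.cos(p);
        return new double[] { dx, dy, dz };
    }

    public static void main(String[] args) {
        // Azimuth of 90 degrees puts the sun at the zenith:
        // the light points straight down, (0, -1, 0).
        double[] noon = direction(0, 90);
        System.out.printf("%.2f %.2f %.2f%n", noon[0], noon[1], noon[2]);
    }
}
```

With orientation 0 and azimuth 90 the light shines straight down, which is exactly the noon position you would expect from the assembly described above.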
Activate this for all models and light objects that will not move in the Scene.

Secondly, ensure that all geometry has a second set of texture coordinates, called UV2 coordinates in Unity. Default GameObjects have those set up, but for imported models, they usually need to be added. Luckily for us, this is automated when Generate Lightmap UVs is activated on the model import settings given earlier in Quick Walk Around Your Design, in the section entitled Controlling the import settings.

If all lights and meshes are static and UV2 coordinates are calculated, you are ready to go. Open the Lightmapping dialog by navigating to Window | Lightmapping and dock it somewhere conveniently. There are several settings, but we start with a basic setup that consists of the following steps:

Usually a Single Lightmap suffices. Dual Lightmaps can look better, but require the deferred rendering method that is only supported in Unity Pro.

Choose the Quality High modus. Quality Low gives jagged edges and is only used for quick testing.

Activate Ambient Occlusion as a quick additional rendering step that darkens corners and occluded areas, such as where objects touch. This adds a real sense of depth and is highly recommended. Set both sliders somewhere in the middle and leave the distance at 0.1, to control how far the system will look to detect occlusions.

Start with a fairly low Resolution, such as 5 or 10 texels per world unit. This defines how detailed the calculated Lightmap texture is, when compared to the geometry. Look at the Scene view to get a checkered overlay visible when adjusting Lightmapping settings. For final results, increase this to 40 or 50, to give more detail to the shadows, at the cost of longer rendering times.

There are additional settings for which Unity Pro is required, such as Sky Light and Bounced Lighting. They both add to the realism of the lighting, so they are highly recommended for architectural visualization, if you have the pro-version. 
On the Object sub-tab, you can also tweak the shadow calculation settings for individual lights. By increasing the radius, you get a smoother shadow edge, at the cost of longer rendering times. If you increase the radius, you should also increase the number of samples, which helps reduce the noise that gets added with sampling. This is shown in the following screenshot:

Now you can go on and click Bake Scene. It can take quite some time for large models and fine resolutions. Check the blue time indicator on the right side of the status bar (you can continue working in Unity in the meantime). After the calculations are finished, the model is reloaded with the new texture, and baked shadows are visible in the Scene and Game views, as shown in the following screenshot:

Beware that to bake a Scene, it needs to be saved and given a name, as Unity places the calculated Lightmap textures in a subfolder with the same name as the Scene.

Summary

In this article we learned about the use of different light sources and shadows. To avoid the heavy burden of real-time shadows, we discussed the use of the Lightmapping technique to bake lights and shadows onto the model from within Unity.

Resources for Article:

Further resources on this subject:
Unity Game Development: Welcome to the 3D world [Article]
Introduction to Game Development Using Unity 3D [Article]
Unity 3-0 Enter the Third Dimension [Article]
Read more
  • 0
  • 0
  • 2290

article-image-cross-platform-development-build-once-deploy-anywhere
Packt
01 Oct 2013
19 min read
Save for later

Cross-platform Development - Build Once, Deploy Anywhere

(For more resources related to this topic, see here.)

The demo application – how the projects work together

Take a look at the following diagram to understand and familiarize yourself with the configuration pattern that all of your Libgdx applications will have in common:

What you see here is a compact view of four projects. The demo project to the very left contains the shared code that is referenced (that is, added to the build path) by all the other platform-specific projects. The main class of the demo application is MyDemo.java. However, from a more technical point of view, each platform-specific project also contains a main class, where the application actually gets started by the operating system; these will be referred to as Starter Classes from now on. Notice that Libgdx uses the term "Starter Class" to distinguish between these two types of main classes in order to avoid confusion. We will cover everything related to the topic of Starter Classes in a moment.

While taking a closer look at all these directories in the preceding screenshot, you may have spotted that there are two assets folders: one in the demo-desktop project and another one in demo-android. This brings us to the question, where should you put all the application's assets? The demo-android project plays a special role in this case. In the preceding screenshot, you see a subfolder called data, which contains an image named libgdx.png, and it also appears in the demo-desktop project in the same place. Just remember to always put all of your assets into the assets folder under the demo-android project. The reason behind this is that the Android build process requires direct access to the application's assets folder. During its build process, a Java source file, R.java, will automatically be generated under the gen folder. It contains special information for Android about the available assets. This would be the usual way to access assets through Java code if you were explicitly writing an Android application. 
However, in Libgdx, you will want to stay platform-independent as much as possible and access any resource, such as assets, only through methods provided by Libgdx. You may wonder how the other platform-specific projects will be able to access the very same assets without having to maintain several copies per project. Needless to say, this would require you to keep all copies manually synchronized each time the assets change. Luckily, this problem has already been taken care of by the generator as follows:

The demo-desktop project uses a linked resource, a feature of Eclipse, to add existing files or folders to other places in a workspace. You can check this out by right-clicking on the demo-desktop project, then navigating to Properties | Resource | Linked Resources and clicking on the Linked Resources tab.

The demo-html project requires another approach, since Google Web Toolkit (GWT) has a different build process compared to the other projects. There is a special file, GwtDefinition.gwt.xml, that allows you to set the asset path by setting the configuration property gdx.assetpath to the assets folder of the Android project. Notice that it is good practice to use relative paths, such as ../demo-android/assets, so that the reference does not get broken in case the workspace is moved from its original location. Take this advice as a precaution to protect you, and maybe your fellow developers too, from wasting precious time on something that can be easily avoided by using the right setup right from the beginning. 
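Why a relative reference survives a workspace move can be illustrated with java.nio from the standard library. This sketch is mine and is unrelated to GWT itself; the folder names mirror the demo projects:

```java
import java.nio.file.Paths;

public class AssetPath {
    // Resolve the shared Android assets folder relative to a sibling project
    // directory, the way ../demo-android/assets is resolved at build time.
    public static String resolveAssets(String projectDir) {
        return Paths.get(projectDir)
                .resolve("../demo-android/assets")
                .normalize()
                .toString()
                .replace('\\', '/'); // normalize separators for display
    }

    public static void main(String[] args) {
        // The same relative reference works wherever the workspace root lives:
        System.out.println(resolveAssets("/workspace/demo-html"));
        System.out.println(resolveAssets("/elsewhere/demo-html"));
    }
}
```

Both calls land on a demo-android/assets folder next to the demo-html project, which is exactly why the relative form is preferable to an absolute path.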
The following is the code listing for GwtDefinition.gwt.xml from demo-html:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE module PUBLIC "-//Google Inc.//DTD Google Web Toolkit trunk//EN"
  "http://google-web-toolkit.googlecode.com/svn/trunk/distro-source/core/src/gwt-module.dtd">
<module>
  <inherits name='com.badlogic.gdx.backends.gdx_backends_gwt' />
  <inherits name='MyDemo' />
  <entry-point class='com.packtpub.libgdx.demo.client.GwtLauncher' />
  <set-configuration-property name="gdx.assetpath" value="../demo-android/assets" />
</module>

Backends

Libgdx makes use of several other libraries to interface with the specifics of each platform in order to provide cross-platform support for your applications. Generally, a backend is what enables Libgdx to access the corresponding platform functionalities when one of the abstracted (platform-independent) Libgdx methods is called; for example, drawing an image in the upper-left corner of the screen, playing a sound file at a volume of 80 percent, or reading and writing from/to a file. Libgdx currently provides the following three backends:

LWJGL (Lightweight Java Game Library)
Android
JavaScript/WebGL

As already mentioned in Introduction to Libgdx and Project Setup, there will also be an iOS backend in the near future.

LWJGL (Lightweight Java Game Library)

LWJGL (Lightweight Java Game Library) is an open source Java library originally started by Caspian Rychlik-Prince to ease game development in terms of accessing the hardware resources on desktop systems. In Libgdx, it is used for the desktop backend to support all the major desktop operating systems, such as Windows, Linux, and Mac OS X. For more details, check out the official LWJGL website at http://www.lwjgl.org/.

Android

Google frequently releases and updates their official Android SDK. This represents the foundation for Libgdx to support Android in the form of a backend. 
There is an API Guide available which explains everything the Android SDK has to offer for Android developers. You can find it at http://developer.android.com/guide/components/index.html.

WebGL

WebGL support is one of the latest additions to the Libgdx framework. This backend uses GWT to translate Java code into JavaScript, and SoundManager2 (SM2), among others, to add combined support for HTML5, WebGL, and audio playback. Note that this backend requires a WebGL-capable web browser to run the application.

You might want to check out the official website of GWT: https://developers.google.com/web-toolkit/.
You might want to check out the official website of SM2: http://www.schillmania.com/projects/soundmanager2/.
You might want to check out the official website of WebGL: http://www.khronos.org/webgl/.
There is also a list of unresolved issues you might want to check out at https://github.com/libgdx/libgdx/blob/master/backends/gdx-backends-gwt/issues.txt.

Modules

Libgdx provides six core modules that allow you to access the various parts of the system your application will run on. What makes these modules so great for you as a developer is that they provide you with a single Application Programming Interface (API) to achieve the same effect on more than just one platform. This is extremely powerful because you can now focus on your own application and you do not have to bother with the specialties that each platform inevitably brings, including the nasty little bugs that may require tricky workarounds. This is all transparently handled in a straightforward API which is categorized into logic modules and is globally available anywhere in your code, since every module is accessible as a static field in the Gdx class. Naturally, Libgdx does always allow you to create multiple code paths for per-platform decisions. 
For example, you could conditionally increase the level of detail in a game when run on the desktop platform, since desktops usually have a lot more computing power than mobile devices.

The application module

The application module can be accessed through Gdx.app. It gives you access to the logging facility, a method to shut down gracefully, persist data, query the Android API version, query the platform type, and query the memory usage.

Logging

Libgdx employs its own logging facility. You can choose a log level to filter what should be printed to the platform's console. The default log level is LOG_INFO. You can use a settings file and/or change the log level dynamically at runtime using the following code line:

Gdx.app.setLogLevel(Application.LOG_DEBUG);

The available log levels are:

LOG_NONE: This prints no logs. The logging is completely disabled.
LOG_ERROR: This prints error logs only.
LOG_INFO: This prints error and info logs.
LOG_DEBUG: This prints error, info, and debug logs.

To write an info, debug, or an error log to the console, use the following listings:

Gdx.app.log("MyDemoTag", "This is an info log.");
Gdx.app.debug("MyDemoTag", "This is a debug log.");
Gdx.app.error("MyDemoTag", "This is an error log.");

Shutting down gracefully

You can tell Libgdx to shut down the running application. The framework will then stop the execution in the correct order as soon as possible and completely de-allocate any memory that is still in use, freeing both the Java and the native heap. Use the following listing to initiate a graceful shutdown of your application:

Gdx.app.exit();

You should always do a graceful shutdown when you want to terminate your application. Otherwise, you will risk creating memory leaks, which is a really bad thing. On mobile devices, memory leaks will probably have the biggest negative impact due to their limited resources.

Persisting data

If you want to persist your data, you should use the Preferences class. 
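The way the four log levels gate output can be sketched in a few lines of plain Java. Note that MiniLogger and its fields are illustrative stand-ins of my own, not the actual Libgdx API; the real framework routes messages to each platform's console instead of a list:

```java
import java.util.ArrayList;
import java.util.List;

// Each level includes everything below it: NONE < ERROR < INFO < DEBUG.
public class MiniLogger {
    public static final int LOG_NONE = 0, LOG_ERROR = 1, LOG_INFO = 2, LOG_DEBUG = 3;

    private int level = LOG_INFO;          // Libgdx's default level
    final List<String> out = new ArrayList<>(); // stands in for the console

    public void setLogLevel(int level) { this.level = level; }

    public void error(String tag, String msg) { if (level >= LOG_ERROR) out.add(tag + ": " + msg); }
    public void log(String tag, String msg)   { if (level >= LOG_INFO)  out.add(tag + ": " + msg); }
    public void debug(String tag, String msg) { if (level >= LOG_DEBUG) out.add(tag + ": " + msg); }
}
```

At the default LOG_INFO level, a debug message is filtered out while info and error messages pass through, which mirrors the behavior described above.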
It is merely a dictionary or hash map data type which stores multiple key-value pairs in a file. Libgdx will create a new preferences file on the fly if it does not exist yet. You can have several preference files using unique names in order to split up data into categories. To get access to a preference file, you need to request a Preferences instance by its filename as follows:

Preferences prefs = Gdx.app.getPreferences("settings.prefs");

To write a (new) value, you have to choose a key under which the value should be stored. If this key already exists in the preferences file, it will be overwritten. Do not forget to call flush() afterwards to persist the data, or else all the changes will be lost.

prefs.putInteger("sound_volume", 100); // volume @ 100%
prefs.flush();

Persisting data needs a lot more time than just modifying values in memory (without flushing). Therefore, it is always better to modify as many values as possible before a final call to flush() is executed.

To read back a certain value from a preferences file, you need to know the corresponding key. If this key does not exist, the value will be set to the default. You can optionally pass your own default value as the second argument (for example, in the following listing, 50 is the default sound volume):

int soundVolume = prefs.getInteger("sound_volume", 50);

Querying the Android API Level

On Android, you can query the Android API Level, which allows you to handle things differently for certain versions of the Android OS. Use the following listing to find out the version:

Gdx.app.getVersion();

On platforms other than Android, the version returned is always 0.

Querying the platform type

You may want to write platform-specific code where it is necessary to know the current platform type. 
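The batch-then-flush pattern can be imitated with java.util.Properties from the standard library, standing in for Libgdx's Preferences class (the real class is not used here; PrefsSketch and its method names are my own). Several values are modified in memory, one store persists them all, and a missing key falls back to a default on read:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.Properties;

public class PrefsSketch {
    // Write several values, persist once, reload, and read with defaults.
    public static String[] roundTrip() {
        try {
            File file = File.createTempFile("settings", ".prefs");
            file.deleteOnExit();

            Properties prefs = new Properties();
            prefs.setProperty("sound_volume", "100"); // modify in memory...
            prefs.setProperty("music_volume", "80");  // ...as many values as needed
            try (OutputStream os = new FileOutputStream(file)) {
                prefs.store(os, null);                // one "flush" persists everything
            }

            Properties reloaded = new Properties();
            try (InputStream is = new FileInputStream(file)) {
                reloaded.load(is);
            }
            // A missing key falls back to the default, like getInteger(key, 50):
            return new String[] {
                reloaded.getProperty("sound_volume", "50"),
                reloaded.getProperty("sfx_volume", "50")
            };
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String[] r = roundTrip();
        System.out.println(r[0] + " " + r[1]); // prints "100 50"
    }
}
```

The stored key comes back as 100, while the key that was never written falls back to 50, exactly as getInteger("sound_volume", 50) would behave.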
The following example shows how it can be done:

switch (Gdx.app.getType()) {
case Desktop:
    // Code for Desktop application
    break;
case Android:
    // Code for Android application
    break;
case WebGL:
    // Code for WebGL application
    break;
default:
    // Unhandled (new?) platform application
    break;
}

Querying memory usage

You can query the system to find out the current memory footprint of your application. This may help you find excessive memory allocations that could lead to application crashes. The following functions return the amount of memory (in bytes) that is in use by the corresponding heap:

long memUsageJavaHeap = Gdx.app.getJavaHeap();
long memUsageNativeHeap = Gdx.app.getNativeHeap();

Graphics module

The graphics module can be accessed either through Gdx.getGraphics() or by using the shortcut variable Gdx.graphics.

Querying delta time

Query Libgdx for the time span between the current and the last frame in seconds by calling Gdx.graphics.getDeltaTime().

Querying display size

Query the device's display size, returned in pixels, by calling Gdx.graphics.getWidth() and Gdx.graphics.getHeight().

Querying the FPS (frames per second) counter

Query a built-in frame counter provided by Libgdx to find the average number of frames per second by calling Gdx.graphics.getFramesPerSecond().

Audio module

The audio module can be accessed either through Gdx.getAudio() or by using the shortcut variable Gdx.audio.

Sound playback

To load sounds for playback, call Gdx.audio.newSound(). The supported file formats are WAV, MP3, and OGG. There is an upper limit of 1 MB for decoded audio data. Consider the sounds to be short effects like bullets or explosions, so that the size limitation is not really an issue.

Music streaming

To stream music for playback, call Gdx.audio.newMusic(). The supported file formats are WAV, MP3, and OGG.

Input module

The input module can be accessed either through Gdx.getInput() or by using the shortcut variable Gdx.input. 
In order to receive and handle input properly, you should always implement the InputProcessor interface and set it as the global handler for input in Libgdx by calling Gdx.input.setInputProcessor().

Reading the keyboard/touch/mouse input

Query the system for the last x or y coordinate in screen coordinates, where the screen origin is at the top-left corner, by calling either Gdx.input.getX() or Gdx.input.getY().

To find out if the screen is touched either by a finger or by mouse, call Gdx.input.isTouched().
To find out if a mouse button is pressed, call Gdx.input.isButtonPressed().
To find out if a key is pressed, call Gdx.input.isKeyPressed().

Reading the accelerometer

Query the accelerometer for its value on the x axis by calling Gdx.input.getAccelerometerX(). Replace the X in the method's name with Y or Z to query the other two axes. Be aware that there will be no accelerometer present on a desktop, so Libgdx always returns 0.

Starting and canceling vibrator

On Android, you can let the device vibrate by calling Gdx.input.vibrate(). A running vibration can be cancelled by calling Gdx.input.cancelVibrate().

Catching Android soft keys

You might want to catch Android's soft keys to add extra handling code for them. If you want to catch the back button, call Gdx.input.setCatchBackKey(true). If you want to catch the menu button, call Gdx.input.setCatchMenuKey(true).

On a desktop, where you have a mouse pointer, you can tell Libgdx to catch it so that you get permanent mouse input without having the mouse ever leave the application window. To catch the mouse cursor, call Gdx.input.setCursorCatched(true).

The files module

The files module can be accessed either through Gdx.getFiles() or by using the shortcut variable Gdx.files.

Getting an internal file handle

You can get a file handle for an internal file by calling Gdx.files.internal(). An internal file is relative to the assets folder on the Android and WebGL platforms. 
On a desktop, it is relative to the root folder of the application.

Getting an external file handle

You can get a file handle for an external file by calling Gdx.files.external(). An external file is relative to the SD card on the Android platform. On a desktop, it is relative to the user's home folder. Note that this is not available for WebGL applications.

The network module

The network module can be accessed either through Gdx.getNet() or by using the shortcut variable Gdx.net.

HTTP GET and HTTP POST

You can make HTTP GET and POST requests by calling either Gdx.net.httpGet() or Gdx.net.httpPost().

Client/server sockets

You can create client/server sockets by calling either Gdx.net.newClientSocket() or Gdx.net.newServerSocket().

Opening a URI in a web browser

To open a Uniform Resource Identifier (URI) in the default web browser, call Gdx.net.openURI(URI).

Libgdx's Application Life-Cycle and Interface

The Application Life-Cycle in Libgdx is a well-defined set of distinct system states. The list of these states is pretty short: create, resize, render, pause, resume, and dispose. Libgdx defines an ApplicationListener interface that contains six methods, one for each system state. The following code listing is a copy that is directly taken from Libgdx's sources. For the sake of readability, all comments have been stripped.

public interface ApplicationListener {
    public void create ();
    public void resize (int width, int height);
    public void render ();
    public void pause ();
    public void resume ();
    public void dispose ();
}

All you need to do is implement these methods in the main class of your shared game code project. Libgdx will then call each of these methods at the right time. The following diagram visualizes Libgdx's Application Life-Cycle:

Note that a full and a dotted line basically have the same meaning in the preceding figure. They both connect two consecutive states and have a direction of flow indicated by a little arrowhead on one end of the line. 
A dotted line additionally denotes a system event.

When an application starts, it will always begin with create(). This is where the initialization of the application should happen, such as loading assets into memory and creating the initial state of the game world. Subsequently, the next state that follows is resize(). This is the first opportunity for an application to adjust itself to the available display size (width and height) given in pixels.

Next, Libgdx will handle system events. If no event has occurred in the meanwhile, it is assumed that the application is (still) running. The next state would be render(). This is where a game application will mainly do two things:

Update the game world model
Draw the scene on the screen using the updated game world model

Afterwards, a decision is made based upon the platform type detected by Libgdx. On a desktop or in a web browser, the displaying application window can be resized virtually at any time. Libgdx compares the last and current sizes on every cycle so that resize() is only called if the display size has changed. This makes sure that the running application is able to accommodate a changed display size. Now the cycle starts over by handling (new) system events once again.

Another system event that can occur during runtime is the exit event. When it occurs, Libgdx will first change to the pause() state, which is a very good place to save any data that would otherwise be lost after the application has terminated. Subsequently, Libgdx changes to the dispose() state, where an application should do its final clean-up to free all the resources that it is still using.

This is also almost true for Android, except that pause() is an intermediate state that is not directly followed by a dispose() state at first. Be aware that this event may occur anytime during application runtime, for example while the user has pressed the Home button or if there is an incoming phone call in the meanwhile. 
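The call order described above can be sketched as a small plain-Java driver. The ApplicationListener interface is copied from the listing earlier; the LifecycleDriver class and its fixed call sequence are illustrative only — in a real application, Libgdx decides when each method fires, and resume() only occurs after an Android pause:

```java
import java.util.ArrayList;
import java.util.List;

// Interface as defined by Libgdx (comments stripped, as in the article).
interface ApplicationListener {
    void create();
    void resize(int width, int height);
    void render();
    void pause();
    void resume();
    void dispose();
}

public class LifecycleDriver {
    // Simulate one run of the life-cycle and record the order of calls.
    public static List<String> run(int frames) {
        final List<String> calls = new ArrayList<>();
        ApplicationListener app = new ApplicationListener() {
            public void create()             { calls.add("create"); }
            public void resize(int w, int h) { calls.add("resize"); }
            public void render()             { calls.add("render"); }
            public void pause()              { calls.add("pause"); }
            public void resume()             { calls.add("resume"); }
            public void dispose()            { calls.add("dispose"); }
        };
        app.create();                                   // always the first state
        app.resize(480, 320);                           // adapt to the display size
        for (int i = 0; i < frames; i++) app.render();  // main loop: update + draw
        app.pause();                                    // exit event: save data here
        app.dispose();                                  // final clean-up
        return calls;
    }
}
```

Running it for two frames yields create, resize, render, render, pause, dispose — the desktop path through the diagram, where pause() is immediately followed by dispose().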
In fact, as long as the Android operating system does not need the occupied memory of the paused application, its state will not be changed to dispose(). Moreover, it is possible that a paused application might receive a resume system event, which in this case would change its state to resume(), and it would eventually arrive at the system event handler again.

Starter Classes

A Starter Class defines the entry point (starting point) of a Libgdx application. Starter Classes are specifically written for a certain platform. Usually, these kinds of classes are very simple and mostly consist of no more than a few lines of code to set certain parameters that apply to the corresponding platform. Think of them as a kind of boot-up sequence for each platform. Once booting has finished, the Libgdx framework hands over control from the Starter Class (for example, the demo-desktop project) to your shared application code (for example, the demo project) by calling the different methods from the ApplicationListener interface that the MyDemo class implements. Remember that the MyDemo class is where the shared application code begins. We will now take a look at each of the Starter Classes that were generated during the project setup.

Running the demo application on a desktop

The Starter Class for the desktop application is called Main.java. The following listing is Main.java from demo-desktop:

package com.packtpub.libgdx.demo;

import com.badlogic.gdx.backends.lwjgl.LwjglApplication;
import com.badlogic.gdx.backends.lwjgl.LwjglApplicationConfiguration;

public class Main {
    public static void main(String[] args) {
        LwjglApplicationConfiguration cfg = new LwjglApplicationConfiguration();
        cfg.title = "demo";
        cfg.useGL20 = false;
        cfg.width = 480;
        cfg.height = 320;
        new LwjglApplication(new MyDemo(), cfg);
    }
}

In the preceding code listing, you see the Main class, a plain Java class without the need to implement an interface or inherit from another class. 
Instead, a new instance of the LwjglApplication class is created. This class provides a couple of overloaded constructors to choose from. Here, we pass a new instance of the MyDemo class as the first argument to the constructor. Optionally, an instance of the LwjglApplicationConfiguration class can be passed as the second argument. The configuration class allows you to set every parameter that is configurable for a Libgdx desktop application. In this case, the window title is set to demo and the window's width and height are set to 480 by 320 pixels. This is all you need to write and configure a Starter Class for a desktop.

Let us try to run the application now. To do this, right-click on the demo-desktop project in Project Explorer in Eclipse and then navigate to Run As | Java Application. Eclipse may ask you to select the Main class when you do this for the first time. Simply select the Main class and also check that the correct package name (com.packtpub.libgdx.demo) is displayed next to it. The desktop application should now be up and running on your computer. If you are working on Windows, you should see a window that looks as follows:

Summary

In this article, we learned about Libgdx and how all the projects of an application work together. We covered Libgdx's backends, modules, and Starter Classes. Additionally, we covered what the Application Life-Cycle and corresponding interface are, and how they are meant to work.

Resources for Article:

Further resources on this subject:
Panda3D Game Development: Scene Effects and Shaders [Article]
Microsoft XNA 4.0 Game Development: Receiving Player Input [Article]
Introduction to Game Development Using Unity 3D [Article]
Read more
  • 0
  • 0
  • 2022

article-image-game-publishing
Packt
23 Sep 2013
14 min read
Save for later

Game Publishing

(For more resources related to this topic, see here.)

Manifest file

Many application details are specified in the manifest file. Thus, we will modify it to set the correct application name and description, as well as choose suitable tile images. We can adjust these settings on the Application UI page in the Manifest Designer.

Basic configuration

As presented in the following screenshot, set the Display Name field to Space Aim 3D, and adjust the Description field as well. What is more, we should choose a suitable App Icon, which is an image with a size of 100 x 100 pixels. It represents our application, thus we should provide the game with a proper icon that the user can easily recognize. In the exemplary game, the icon shows the planet and a few asteroids, which are the main elements in the game. What is more, the image contains the game title. This is important, because we clear the content of the Tile Title setting; thus, the user will not see additional small text with the name of the application after pinning the tile to the Start screen.

Tiles

Apart from some basic settings, we can also choose tile images. They will be shown on the default tile when the player pins the game to the Start screen. We can also create secondary tiles, which could navigate to particular locations inside the application. However, such a solution is not shown in this article.

The Windows Phone 8 platform supports three kinds of tile templates: flip, iconic, and cycle. They differ in the way they present content. Of course, we can select a suitable one in the Manifest Designer by choosing an option from the Tile Template list.

The flip tile template is the default option, which allows us to present two sides of the tile and flip between them. Thus, we can present the game logo on the front side and some more information on the other side. The tile flips automatically. 
We can specify settings regarding the background images, the titles shown on the front and back sides, the content presented on the other side, as well as the number displayed on the tile. The iconic tile template shows the content in a slightly different way. Here, a small icon is used to represent the application, together with an optional number, for example, the count of received messages in our project. Of course, we can set some properties, including the tile title, the image, and the background color. The cycle tile template is the last available type, which makes it possible to present up to nine images that are changed automatically. Thus, it can be a suitable way of presenting a tile for an application that works with many images. In the case of the cycle tile template, we can also adjust a few settings, such as the title or the shown images.

Tiles are available in three sizes: small (159 x 159 pixels), medium (336 x 336), and large (691 x 336). The small and medium ones are mandatory, while the large one is optional. With the Manifest Designer, it is very easy to prepare the basic version of the default tile for the application, just by selecting a suitable template type and choosing the images that should be shown on the tile, depending on its size. We can also enable or disable support for the large tile. After adjusting the tiles, we may receive a result as shown in the following screenshot: As we can see, the tile is presented in three sizes: small, medium, and large. Importantly, we do not just rescale one image, but provide three separate images in the particular sizes, which look good regardless of the tile size. Here, we can also find the Space Aim 3D shortcut in the application list. However, we can expect it to appear in the Games group instead. Fortunately, we do not need to worry about it, because after downloading the game from the store, its shortcut will be shown in the proper place.
As we may remember, the flip tile template, which we chose, can present two sides of the tile, but we do not see this effect yet. Thus, in the following part of this section we will learn how to configure the default tile to support flipping, and how to set the background image and the content on the back. To do so, we should open the WMAppManifest.xml file, but not in the Manifest Designer: we need to choose the View Code option from the context menu of this file. Interestingly, the XML code contains the information that we had set in a graphical way. Thus, it can be an additional way of adjusting some settings and adding more complex features that are not supported directly by the Manifest Designer. The TemplateFlip node, specified in the WMAppManifest.xml file, is as follows:

<TemplateFlip>
  <SmallImageURI (...)> (...) </SmallImageURI>
  <Count>0</Count>
  <BackgroundImageURI (...)> (...) </BackgroundImageURI>
  <Title></Title>
  <BackContent>Let's avoid asteroids and reach the target planet!</BackContent>
  <BackBackgroundImageURI></BackBackgroundImageURI>
  <BackTitle>Space Aim 3D</BackTitle>
  <LargeBackgroundImageURI (...)> (...) </LargeBackgroundImageURI>
  <LargeBackContent>Let's avoid asteroids, reach the target planet, and see players in the vicinity!</LargeBackContent>
  <LargeBackBackgroundImageURI (...)></LargeBackBackgroundImageURI>
  <DeviceLockImageURI></DeviceLockImageURI>
  <HasLarge>True</HasLarge>
</TemplateFlip>

Here, we specify the BackContent, BackTitle, and LargeBackContent elements to adjust the way the back side of the tile is presented. The first setting is the string that will be displayed on the other side of the medium-sized tile. The second is the title shown on the back side (regardless of the tile size), while the third is the string shown on the back of the large tile.
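Although not covered in this article, the same flip-tile fields can also be changed from code while the application runs, using the WP8 ShellTile and FlipTileData APIs. The following C# sketch is an assumption-laden illustration (the class and method names TileUpdater and UpdateDefaultTile are hypothetical; the FlipTileData property names mirror the TemplateFlip elements above):

```csharp
// Sketch only: updating the application's default flip tile at run time (WP8).
// TileUpdater/UpdateDefaultTile are hypothetical names for illustration.
using System.Linq;
using Microsoft.Phone.Shell;

public static class TileUpdater
{
    public static void UpdateDefaultTile(int levelsCompleted)
    {
        var data = new FlipTileData
        {
            BackTitle = "Space Aim 3D",
            BackContent = "Levels completed: " + levelsCompleted,
            WideBackContent = "Let's avoid asteroids and reach the target planet!"
        };

        // The first entry in ActiveTiles is always the application's default tile.
        ShellTile defaultTile = ShellTile.ActiveTiles.First();
        defaultTile.Update(data);
    }
}
```

This is the "local notifications" route mentioned later in the article; push notifications would update the same fields from a server instead.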
When we deploy the application to the emulator or the phone and pin it to the Start screen, we should see that the tile flips automatically, and that it presents some additional content on the back side. What is more, the content differs depending on the tile size, as shown in the following screenshots: The Windows Phone 8 platform also supports live tiles, which present content received from the Internet and are updated automatically. Such tiles can use either push notifications or local notifications.

Remaining settings
We have completed the basic configuration of the application, as well as learned how to set up the default tile. However, a few other modifications are necessary on the Packaging tab. Here, we specify the author and publisher data, as well as the version of the application. The other tabs (Capabilities and Requirements) remain unchanged, because we made the required modifications earlier.

Rating by the users
While designing the web screen, we created a button that should allow the user to rate the game. We can easily implement this functionality with the MarketplaceReviewTask class, which is another launcher used in the exemplary game. We need to modify the Rate method of the WebViewModel class, as shown in the following code snippet:

private void Rate()
{
    MarketplaceReviewTask task = new MarketplaceReviewTask();
    task.Show();
}

Here, we create a new instance of the MarketplaceReviewTask class (from the Microsoft.Phone.Tasks namespace) and call the Show method. When this part of the code is executed, the review page is opened, where the user can rate and review the current application.

Release version
The development environment for programming Windows Phone 8 applications is equipped with many advanced features regarding debugging. Such functionalities may require a slightly different form of the code, which can execute more slowly, but provides developers with additional possibilities during development.
For this reason, it is important to prepare the release (retail) version of the game before publishing. We can easily generate the release version of the .xap file (with the data of our application) using the IDE. To do so, we should change the two options located next to the green triangle and the emulator/device selection, as shown in the following screenshot. Here, we should indicate that we want to use the Release mode, as well as the ARM platform. Then, we should select the Build and Rebuild Solution options to generate the suitable version of the .xap file. Now, we can proceed to the process of testing our game and preparing for submission to the store!

Store Test Kit
Some testing operations are simplified by the Store Test Kit, which allows us to perform a set of automatic and manual tests. They can be used to verify many requirements that should be met for the application to be accepted in the store. We can open the tool by choosing the Open Store Test Kit option from the context menu of the SpaceAim3D project (not the solution), or by choosing the Open Store Test Kit entry from the Project menu. The tool contains three pages: Application Details, Automated Tests, and Manual Tests. We will learn how to use all of them in this section.

Application details
On the first page we should specify some details regarding the application, including the store tile image, with size 300 x 300 pixels. Apart from that, we need to choose a set of screenshots for each supported screen resolution. It is important to add at least one screenshot and not more than eight. Each of them should be provided in the proper resolution, that is, 480 x 800 (WVGA), 768 x 1280 (WXGA), and 720 x 1280 (720P). The Application Details page is shown in the following screenshot: In the case of applications running in landscape mode (as in the case of our game), we should provide images that are not rotated, that is, as captured in the emulator.
We can take screenshots either in the emulator (using the Screenshot tab in the Additional Tools window) or on the phone. In the latter case, we should press the Start and power buttons. When a screenshot is taken correctly, a shutter sound is played and the image is saved in a suitable album. The Application Package field is a read-only textbox, where the path to the .xap file is shown. It is worth mentioning that it indicates the Release version for ARM, thus it is exactly the same version as we generated earlier.

Automated tests
The second page is named Automated Tests. By clicking on the Run Tests button, we run a basic verification of our project against the submission requirements, for example, whether the .xap file size is correct, and whether we have provided the suitable icons and screenshots. If all tests pass, we will receive a result as shown in the following screenshot. Otherwise, some additional notes are shown. In such a situation, we need to fix the errors and run the tests once again. It is important to note that passing all automated tests does not mean that the application does not contain any errors that could prevent the project from being accepted in the store.

Manual tests
Additional verification can be performed manually, by using the Manual Tests tab in the Store Test Kit. Here, we have a list of test cases and their descriptions. We should follow them and manually indicate whether each test is passed or failed, as shown in the following screenshot:

Simulation Dashboard
The exemplary game may be used in various conditions, often significantly different from our testing environment. For instance, the player may have limited access to the Internet or may have switched off the location services. For this reason, we should try to test our project in many situations. Such a process can be simplified by the Simulation Dashboard tool. We can open it by choosing the Simulation Dashboard option from the Tools menu.
Its window is shown in the following screenshot: This tool makes it possible to simulate specific network conditions, by choosing a network speed and signal strength. We can choose the 2G, 3G, 4G, Wi-Fi, and No Network options as the speed, as well as Good, Average, and Poor as the Signal Strength. Thus, we can check how our application behaves if we need to download some data from the Internet over a slower network connection, or even whether the game responds correctly in the case of no network access. Apart from the network simulation, the Simulation Dashboard allows us to lock and unlock the screen, just by choosing a suitable option from the Lock Screen group. The last supported testing feature is named Reminders. Just after pressing the Trigger Reminder button, a reminder is shown in the emulator. Thus, we can easily check how our application reacts in such a situation.

Windows Phone Application Analysis
Apart from testing the project in real-world conditions, it is important to check its performance and try to eliminate problems in this area. Fortunately, the Windows Phone Application Analysis tool is integrated with the IDE, and we can use it to measure the performance of various parts of our game. The tool can be started by selecting the Start Windows Phone Application Analysis option from the Debug menu. After launching, we should have the Execution option selected, thus we can click on the Start Session (App will start) element. It automatically starts our game in the emulator, if it is selected as the target. Then, a process of collecting data is started, and we can use the application in various ways to check the performance of its several areas. For instance, we can start by spending some time on the Menu screen, then launch the game and play ten levels, return to the menu, and open additional screens (such as Ranks, Map, or World).
When we want to finish collecting performance data, we should click on the End Session (App will exit) option. Then, the process of collecting data stops and the report is created, which can take some time. A .sap file is generated for each analysis session. It is saved in the directory of the managed project, and can later be opened and analyzed using the IDE. The Windows Phone Application Analysis tool presents a lot of important data related to the game's performance, including both the managed and the native parts. The graph, presented in the main part of the window, shows the CPU usage at a particular time, as shown in the following screenshot. We can easily analyze which parts of our application cause performance issues, and we may try to improve them. As we can see in the preceding screenshot, the performance results for the Space Aim 3D game are expected and reasonable. In the case of the Menu screen, the CPU usage is very low. It grows significantly for the Game page, but here we need to perform a lot of operations, for example, related to rendering many objects in the 3D game world. Interestingly, the CPU usage grows slightly during consecutive levels, but even on the tenth level, the game works smoothly both in the emulator and on the phone. As soon as we exit to the main menu, the CPU usage decreases almost immediately. The remaining part is related to opening the following game screens: Ranks, Map, World, and others. The possibilities of Windows Phone Application Analysis are not limited to drawing the graph of the CPU usage. We can also see the Hot Path information, which lets us know which part of the code uses the most processing power. In our case, it is the region that renders 3D objects on the screen, using Direct3D. What is more, we can click on the name of a particular function to open another view, which shows more details.
By using the tool, we can even analyze the performance of particular lines of code, as shown in the following screenshot: The available features make it significantly easier to find a bottleneck causing performance problems. By running the Windows Phone Application Analysis tool multiple times while making modifications in the code, we can see how our changes are reflected in the performance.

DirectX graphics diagnostic

Packt
19 Sep 2013
6 min read
(For more resources related to this topic, see here.) Debugging a captured frame is usually a real challenge in comparison with debugging C++ code. We are dealing with hundreds of thousands of pixels, or more, that are produced, and in addition, there might be several functions being processed by the GPU. Typically, in modern games, a frame is constructed in several different passes; also, many post-process renderings are applied to the final result to increase the quality of the frame. All these processes make it quite difficult to find out why a specific pixel is drawn with an unexpected color during debugging! Visual Studio 2012 comes with a series of tools that intend to assist game developers. The new DirectX graphics diagnostics tools are a set of development tools integrated with Microsoft Visual Studio 2012, which can help us to analyze and debug the frames captured from a Direct3D application. Some of the functionalities of these tools come from the PIX for Windows tool, which is a part of the DirectX SDK. Please note that the DirectX graphics diagnostics tools are not supported by Visual Studio 2012 Express at the time of writing this article. In this article, we are going to walk through a complete example that shows how to use graphics diagnostics to capture and analyze the graphics information of a frame. Open the final project of this article, DirectX Graphics Diagnostics, and let's see what is going on with the GPU. Intel Graphics Performance Analyzer (Intel GPA) is another suite of graphics analysis and optimization tools that supports Windows Store applications. At the time of writing this article, the final release of this suite (Intel GPA 2013 R2) is able to analyze Windows 8 Store applications, but tracing the captured frames is not supported yet.
Also, Nvidia Nsight™ Visual Studio Edition 3.1 is another option, which supports Visual Studio 2012 and Direct3D 11.1 for debugging, profiling, and tracing heterogeneous compute and graphics applications.

Capturing the frame
To start debugging the application, press Alt + F5 or select the Start Diagnostics command from Debug | Graphics | Start Diagnostics, as shown in the following screenshot: You can capture graphics information from the application in two ways. The first way is to use Visual Studio to manually capture the frame while it is running, and the second way is to use the programmatic capture API. The latter is useful when the application is about to run on a computer that does not have Visual Studio installed, or when you would like to capture the graphics information from Windows RT devices. For the first way, when the application starts, press the Print Screen key (Prt Scr). For the second way, to prepare the application to use programmatic capture, you need to use the CaptureCurrentFrame API. So, make sure to add the following header to the pch.h file:

#include <vsgcapture.h>

For Windows Store applications, the location of the temp directory is specific to each user and application, and can be found at C:\Users\username\AppData\Local\Packages\package family name\TempState. Now you can capture your frame by calling the g_pVsgDbg->CaptureCurrentFrame() function. By default, the name of the captured file is default.vsglog. Remember, do not start the graphics diagnostics when using the programmatic capture API; just run or debug your application.

The Graphics Experiment window
After a frame is captured, it is displayed in Visual Studio as Graphics Experiment.vsglog. Each captured frame will be added to the Frame List and is presented as a thumbnail image at the bottom of the Graphics Experiment tab. This log file contains all the information needed for debugging and tracing.
As you can see in the following screenshot, there are three subwindows: the left one displays the captured frames; the right one, named Graphics Events List, shows the list of all DirectX events; and finally, the Graphics Pixel History subwindow in the middle is responsible for displaying the activities of the selected pixel in the running frame: Let's start with the Graphics Pixel History subwindow. As you can see in the preceding screenshot, we selected one of the pixels on the spaceship model. Now let us take a look at the Graphics Pixel History subwindow of that pixel, as shown in the following screenshot: The preceding screenshot shows how this pixel has been modified by each DirectX event: first it is initialized with a specific color, then it is changed to blue by the ClearRenderTargetView function, and after this, it is changed to the color of our model by drawing indexed primitives. Open the collapsed DrawIndexed function to see what really happens in the Vertex Shader and Pixel Shader pipelines. The following screenshot shows the information about each of the vertices: The input layout of the vertex buffer is VertexPositionNormalTangentColorTexture. Here, you can see the values of each vertex of the model's triangle. Now, we would like to debug the Pixel Shader of this pixel, so just press the green triangular icon to start debugging. As you can see in the following screenshot, when the debug process is started, Visual Studio will navigate to the source code of the Pixel Shader: Now you can easily debug the Pixel Shader code of the specific pixel in the DrawIndexed stage. You can also right-click on each pixel of the captured frame and select Graphics Object Table to check the Direct3D object's data. The following screenshot shows the Graphics Events List subwindow.
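The Graphics Events List can also display events that the application itself emits through the ID3DUserDefinedAnnotation interface (Direct3D 11.1). The following Windows-only C++ sketch shows one way this could be done; the function name RenderModelWithMarkers is hypothetical, and an existing ID3D11DeviceContext is assumed:

```cpp
// Sketch only: emitting user-defined event markers for the graphics
// diagnostics tools. Requires the Direct3D 11.1 runtime (d3d11_1.h).
#include <d3d11_1.h>

void RenderModelWithMarkers(ID3D11DeviceContext* context)
{
    ID3DUserDefinedAnnotation* annotation = nullptr;
    if (SUCCEEDED(context->QueryInterface(IID_PPV_ARGS(&annotation))))
    {
        annotation->BeginEvent(L"Start rendering the model");
        // ... issue the DrawIndexed calls for the model here ...
        annotation->EndEvent();
        annotation->SetMarker(L"The model has been rendered");
        annotation->Release();
    }
}
```

BeginEvent/EndEvent bracket a named group of events, while SetMarker drops a single point-in-time marker; both show up alongside the draw calls in the captured frame.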
Draw calls in this list are typically the more important events. Each type of event is displayed with its own icon: one marks a draw event, another marks an event that occurs before the captured frame, and user-defined event markers or groups can be defined inside the application code. In this example, we mark an event (Start rendering the model) before rendering the model and mark another event (The model has been rendered) after the model is rendered. You can create these events by using the ID3DUserDefinedAnnotation::BeginEvent, ID3DUserDefinedAnnotation::EndEvent, and ID3DUserDefinedAnnotation::SetMarker methods.

Summary
In this article, you have learned about the DirectX graphics diagnostics tools, how to capture a frame, and how to use the Graphics Experiment window.

Resources for Article:
Further resources on this subject:
Getting Started with GameSalad [Article]
2D game development with Monkey [Article]
Making Money with Your Game [Article]


Managing and Displaying Information

Packt
17 Sep 2013
37 min read
(For more resources related to this topic, see here.) In order to realize these goals, in this article, we'll be doing the following:

Displaying a countdown timer on the screen
Configuring fonts
Creating a game attribute to count lives
Using graphics to display information
Counting collected actors
Keeping track of the levels

Prior to continuing with the development of our game, let's take a little time out to review what we have achieved so far, and also to consider some of the features that our game will need before it can be published.

A review of our progress
The gameplay mechanics are now complete; we have a controllable character in the form of a monkey, and we have some platforms for the monkey to jump on and traverse the scene. We have also introduced some enemy actors, the croc and the snake, and we have Aztec statues falling from the sky to create obstacles for the monkey. Finally, we have the fruit, all of which must be collected by the monkey in order to successfully complete the level. With regards to the scoring elements of the game, we're currently keeping track of a countdown timer (displayed in the debug console), which causes the scene to completely restart when the monkey runs out of time. When the monkey collides with an enemy actor, the scene is not reloaded, but the monkey is sent back to its starting point in the scene, and the timer continues to count down.

Planning ahead – what else does our game need?
With the gameplay mechanics working, we need to consider what our players will expect to happen when they have completed the task of collecting all the fruits. As mentioned in the introduction to this article, our plan is to create additional, more difficult levels for the player to complete! We also need to consider what will happen when the game is over; either when the player has succeeded in collecting all the fruits, or when the player has failed to collect the fruits in the allocated time.
The solution that we'll be implementing in this game is to display a message to advise the player of their success or failure, and to provide options for the player to either return to the main menu, or, if the task was completed successfully, continue to the next level within the game. We also need to implement a structure so that the game can keep track of information, such as how many lives the player has left and which level of the game is currently being played. Let's put some information on the screen so that our players can keep track of the countdown timer.

Displaying a countdown timer on the screen
We created a new scene behavior called Score Management, which contains the Decrement Countdown event, shown as follows: Currently, as we can see in the previous screenshot, this event decrements the Countdown attribute by a value of 1 every second. We also have a debug print instruction that displays the current value of Countdown in the debug console to help us, as game developers, keep track of the countdown. However, players of the game cannot see the debug console, so we need to provide an alternative means of displaying the amount of time that the player has to complete the level. Let's see how we can display that information on the screen for players of our game.

Time for action – displaying the countdown timer on the screen
Ensure that the Score Management scene behavior is visible: click on the Dashboard tab, select Scene Behaviors, and double-click on the Score Management icon in the main panel.
Click + Add Event | Basics | When Drawing.
Double-click on the title of the new Drawing event, and rename it to Display Countdown.
Click on the Drawing section button in the instruction block palette.
Drag a draw text anything at (x: 0 y: 0) block into the orange when drawing event block in the main panel.
Enter the number 10 into the x: textbox and also enter 10 into the y: textbox.
Click on the drop-down arrow in the textbox after draw text and select Text | Basics. Then click on the text & text block.
In the first textbox in the green … & … block, enter the text COUNTDOWN: (all uppercase, followed by a colon).
In the second textbox, after the & symbol, click on the drop-down arrow and select Basics, then click on the anything as text block.
Click on the drop-down arrow in the … as text block, and select Number | Attributes | Countdown.
Ensure that the new Display Countdown event looks like the following screenshot:
Test the game.

What just happened?
When the game is played, we can now see, in the upper-left corner of the screen, a countdown timer that represents the value of the Countdown attribute as it is decremented each second. First, we created a new Drawing event, which we renamed to Display Countdown, and then we added a draw text anything at (x: 0 y: 0) block, which is used to display the specified text in the required location on the screen. We set both the x: and y: coordinates for displaying the drawn text to 10 pixels, that is, 10 pixels from the left-hand side of the screen, and 10 pixels from the top of the screen. The next task was to add some text blocks that enabled us to display an appropriate message along with the value of the Countdown attribute. The text & text block enables us to concatenate, or join together, two separate pieces of text. The Countdown attribute is a number, so we used the anything as text block to convert the value of the Countdown attribute to text, to ensure that it will be displayed correctly when the game is being played.
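Expressed as pseudocode (Stencyl itself uses visual blocks rather than typed code, so this is only a reading aid for the block arrangement described above), the Display Countdown event amounts to:

```
// Pseudocode for the Display Countdown when-drawing event
when drawing:
    draw text ("COUNTDOWN: " & (Countdown as text)) at (x: 10, y: 10)
```

The & operator here stands for the text & text concatenation block, and "as text" for the anything as text conversion block.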
In practice, we could have just located the Countdown attribute block in the Attributes section of the palette, and then dragged it straight into the text & text block. However, it is best practice to correctly convert attributes to the appropriate type, as required by the instruction block. In our case, the number attribute is being converted to text because it is being used in the text concatenation instruction block. If we needed to use a text value in a calculation, we would convert it to a number using an anything as number block.

Configuring fonts
We can see, when testing the game, that the font we have used is not very interesting; it's a basic font that doesn't really suit the style of the game! Stencyl allows us to specify our own fonts, so our next step is to import a font to use in our game.

Time for action – specifying a font for use in our game
Before proceeding with the following steps, we need to locate the Afritubu.TTF file in the fonts-of-afrika folder. Place the file in a location where it can easily be located, and continue with the following steps:
In the Dashboard tab, click on Fonts.
In the main panel, click on the box containing the words This game contains no Fonts. Click here to create one.
In the Name textbox of the Create New… dialog box, type HUD Font and click on the Create button.
In the left-hand panel, click on the Choose… button next to the Font selector.
Locate the file Afritubu.TTF and double-click on it to open it.
Note that the main panel shows a sample of the new font.
In the left-hand panel, change the Size option to 25.
Important: save the game!
Return to the Display Countdown event in the Score Management scene behavior.
In the instruction block palette, click on the Drawing section button and then the Styles category button.
Drag the set current font to Font block above the draw text block in the when drawing event.
Click on the Font option in the set current font to Font block, and select Choose Font from the pop-up menu.
Double-click on the HUD Font icon in the Choose a Font… dialog box.
Test the game. Observe the countdown timer at the upper-left corner of the game.

What just happened?
We can see that the countdown timer is now being displayed using the new font that we have imported into Stencyl, as shown in the following screenshot: The first step was to create a new blank font in the Stencyl dashboard and give it a name (we chose HUD Font), and then we imported the font file from a folder on our hard disk. Once we had imported the font file, we could see a sample of the font in the main panel. We then increased the size of the font using the Size option in the left-hand panel. That's all we needed to do in order to import and configure a new font in Stencyl! However, before progressing, we saved the game to ensure that the newly imported font will be available for the next steps. With our new font ready to use, we needed to apply it to our countdown text in the Display Countdown behavior, so we opened up the behavior and inserted the set current font to Font style block. The final step was to specify which font we wanted to use, by clicking on the Font option in the font style block and choosing the new font, HUD Font, which we configured in the earlier steps.

Heads-Up Display (HUD) is a term often used in games to describe text or graphics overlaid on the main game graphics to provide the player with useful information.

Using font files in Stencyl
Stencyl can use any TrueType font that we have available on our hard disk (files with the .TTF extension); many thousands of fonts are available to download from the Internet free of charge, so it's usually possible to find a font that suits the style of any game that we might be developing. Fonts are often subject to copyright, so be careful to read any licensing agreements that are provided with the font file, and only download font files from reliable sources.

Have a go hero
When we imported the font into Stencyl, we specified a new font size of 25, but it is a straightforward process to modify further aspects of the font style, such as the color and other effects.
Click on the HUD Font tab to view the font settings (or reopen the Font Editor from the Dashboard tab) and experiment with the font size, color, and other effects to find an appropriate style for the game. Take this opportunity to learn more about the different effects that are available, referring to Stencyl's online help if required. Remember to test the game to ensure that any changes are effective and the text is not difficult to read!

Creating a game attribute to count lives
Currently, our game never ends. As soon as the countdown reaches zero, the scene is restarted, and when the monkey collides with an enemy actor, the monkey is repositioned at the starting point in the scene. There is no way for our players to lose the game! In some genres of game, the player will never be completely eliminated; effectively, the same game is played forever. But in a platform game such as ours, the player typically will have a limited number of chances, or lives, to complete the required task. In order to resolve our problem of having a never-ending game, we need to keep track of the number of lives available to our player. So let's start to implement that feature right now by creating a game attribute called Lives!

Time for action – creating a Lives game attribute
Click on the Settings icon on the Stencyl toolbar at the top of the screen.
In the left-hand panel of the Game Settings dialog box, click on the Attributes option.
Click on the green Create New button.
In the Name textbox, type Lives.
In the Category textbox, change the word Default to Scoring.
In the Type section, ensure that the currently selected option is Number.
Change Initial Value to 3.
Click on OK to confirm the configuration.
We'll leave the Game Settings dialog box open, so that we can take a closer look.

What just happened?
We have created a new game attribute called Lives.
If we look at the rightmost panel of the Game Settings dialog box that we left open on the screen, we can see that we have created a new heading entitled SCORING, and underneath the heading, there is a label icon entitled Lives, as shown in the following screenshot:

The Lives item is a new game attribute that can store a number. The category name of SCORING that we created is not used within the game. We can't access it with the instruction blocks; it is there purely as a memory aid for the game developer when working with game attributes. When many game attributes are used in a game, it can become difficult to remember exactly what they are for, so being able to place them under specific headings can be helpful.

Using game attributes
The attributes we have used so far, such as the Countdown attribute that we created in the Score Management behavior, lose their values as soon as a different scene is loaded, or when the current scene is reloaded. Some game developers may refer to these attributes as local attributes, because they belong to the behavior in which they were created. Losing its value is fine when the attribute is just being used within the current scene; for example, we don't need to keep track of the countdown timer outside of the Jungle scene, because the countdown is reset each time the scene is loaded. However, sometimes we need to keep track of values across several scenes within a game, and this is when game attributes become very useful.

Game attributes work in a very similar manner to local attributes. They store values that can be accessed and modified, but the main difference is that game attributes keep their values even when a different scene is loaded. Currently, the issue of losing attribute values when a scene is reloaded is not important to us, because our game only has one scene.
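Stencyl's blocks are visual rather than textual, but the difference between local and game attributes can be sketched in plain JavaScript. The names and structure here are illustrative only, not Stencyl's actual API:

```javascript
// Game attributes live at the game level and survive scene changes
// (illustrative sketch only -- not Stencyl's real API).
const gameAttributes = { Lives: 3 };

// A behavior's local attributes are recreated every time the scene loads.
function loadJungleScene() {
  return { Countdown: 60 }; // reset on every load
}

let behavior = loadJungleScene();
behavior.Countdown -= 10;  // the countdown ticks down...
gameAttributes.Lives -= 1; // ...and the monkey loses a life

behavior = loadJungleScene(); // reloading the scene resets Countdown,
                              // but Lives keeps its value
```

After the reload, `behavior.Countdown` is back to 60 while `gameAttributes.Lives` stays at 2, which is exactly the distinction described above.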
However, when our players succeed in collecting all the fruits, we want the next level to be started without resetting the number of lives. So we need the number of lives to be remembered when the next scene is loaded. We've created a game attribute called Lives, so let's put it to good use.

Time for action – decrementing the number of lives
If the Game Settings dialog box is still open, click on OK to close it.
Open the Manage Player Collisions actor behavior.
Click on the Collides with Enemies event in the left-hand panel.
Click on the Attributes section button in the palette.
Click on the Game Attributes category button.
Locate the purple set Lives to 0 block under the Number Setters subcategory and drag it into the orange when event so that it appears above the red trigger event RestartLevel in behavior Health for Self block.
Click on the drop-down arrow in the set Lives to … block and select 0 - 0 in the Math section.
In the left textbox of the … - … block, click on the drop-down arrow and select Game Attributes | Lives.
In the right-hand textbox, enter the digit 1.
Locate the print anything block in the Flow section of the palette, under the Debug category, and drag it below the set Lives to Lives – 1 block.
In the print … block, click on the drop-down arrow and select Text | Basics | text & text.
In the first empty textbox, type Lives remaining: (including the colon).
Click on the drop-down arrow in the second textbox and select Basics | anything as text.
In the … as text block, click on the drop-down arrow and select Number | Game Attributes | Lives.
Ensure that the Collides with Enemies event looks like the following screenshot:
Test the game; make the monkey collide with an enemy actor, such as the croc, and watch the debug console!

What just happened?
We have modified the Collides with Enemies event in the Manage Player Collisions behavior so that it decrements the number of lives by one when the monkey collides with an enemy actor, and the new value of Lives is shown in the debug console. This was achieved by using the purple game attribute setter and getter blocks to set the value of the Lives game attribute to its current value minus one. For example, if the value of Lives is 3 when the event occurs, Lives will be set to 3 minus 1, which is 2!

The print … block was then used to display a message in the console, advising how many lives the player has remaining. We used the text & text block to join the text Lives remaining: together with the current value of the Lives game attribute. The anything as text block converts the numeric value of Lives to text to ensure that it will display correctly.

Currently, the value of the Lives attribute will continue to decrease below 0, and the monkey will always be repositioned at its starting point. So our next task is to make something happen when the value of the Lives game attribute reaches 0!

No more click-by-click steps!
From this point onwards, click-by-click steps to modify behaviors and to locate and place each instruction block will not be specified! Instead, an overview of the steps will be provided, and a screenshot of the completed event will be shown towards the end of each Time for action section. The search facility, at the top of the instruction block palette, can be used to locate the required instruction block; simply click on the search box and type any part of the text that appears in the required block, then press the Enter key on the keyboard to display all the matching blocks in the block palette.

Time for action – detecting when Lives reaches zero
Create a new scene called Game Over: select the Dashboard tab, select Scenes, and then select Click here to create a new Scene. Leave all the settings at their default configuration and click on OK.
Close the tab for the newly created scene.
Open the Manage Player Collisions behavior and click on the Collides with Enemies event to display the event's instruction blocks.
Insert a new if block under the existing print block.
Modify the if block to if Lives > 0.
Move the existing block, trigger event RestartLevel in behavior Health for Self, into the if Lives > 0 block.
Insert an otherwise block below the if Lives > 0 block.
Insert a switch to Scene and Crossfade for 0 secs block inside the otherwise block.
Click on the Scene option in the new block, then click on Choose Scene and select the Game Over scene.
Change the secs textbox to 0 (zero).
Ensure that our modified Collides with Enemies event now looks like the following screenshot:
Test the game; make the monkey run into an enemy actor, such as the croc, three times.

What just happened?
We have modified the Collides with Enemies event so that the value of the Lives game attribute is tested after it has been decremented, and the game will switch to the Game Over scene if the number of lives remaining is not greater than zero. If the value of Lives is greater than zero, the RestartLevel event in the monkey's Health behavior is triggered. However, if the value of Lives is not greater than zero, the instruction in the otherwise block will be executed, and this switches to the (currently blank) Game Over scene that we have created.

If we review all the instructions in the completed Collides with Enemies event, and write them in English, the statement will be: When the monkey collides with an enemy, reduce the value of Lives by one and print the new value to the debug console. Then, if the value of Lives is more than zero, trigger the RestartLevel event in the monkey's Health behavior, otherwise switch to the Game Over scene.
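The completed event's logic can be sketched in plain JavaScript. The function names restartLevel and switchToScene are illustrative stand-ins for the Stencyl blocks, not real API calls:

```javascript
// Sketch of the Collides with Enemies event (illustrative, not Stencyl API).
let lives = 3;
const log = [];

function restartLevel() { log.push("restart"); }
function switchToScene(name) { log.push("switch:" + name); }

function onCollideWithEnemy() {
  lives -= 1; // set Lives to Lives - 1
  console.log("Lives remaining: " + lives);
  if (lives > 0) {
    restartLevel();             // still alive: reposition the monkey
  } else {
    switchToScene("Game Over"); // out of lives
  }
}

// Three collisions, as in the test above:
onCollideWithEnemy(); // 2 lives left -> restart
onCollideWithEnemy(); // 1 life left  -> restart
onCollideWithEnemy(); // 0 lives left -> Game Over
```

After the third collision the player is out of lives, and the Game Over branch runs instead of the restart.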
Before continuing, we should note that the Game Over scene has been created as a temporary measure to ensure that, while we are in the process of developing the game, it's immediately clear to us (the developer) that the monkey has run out of lives.

Have a go hero
Change the Countdown attribute value to 30: open the Jungle scene, click on the Behaviors button, then select the Score Management behavior in the left panel to see the attributes for this behavior.

The following tasks in this Have a go hero session are optional; failure to attempt them will not affect future tutorials, but it is a great opportunity to put some of our newly learned skills to practice! In the section, Time for action – displaying the countdown timer on the screen, we learned how to display the value of the countdown timer on the screen during gameplay. Using the skills that we have acquired in this article, try to complete the following tasks:

Update the Score Management behavior to display the number of lives at the upper-right corner of the screen, by adding some new instruction blocks to the Display Counter event.
Rename the Display Counter event to Display HUD.
Remove the print Countdown block from the Decrement Countdown event, also found in the Score Management behavior. Right-click on the instruction block and review the options available in the pop-up menu!
Remove the print Lives remaining: & Lives as text instruction block from the Collides with Enemies event in the Manage Player Collisions behavior.

Removing debug instructions
Why did we remove the debug print … blocks in the previous Have a go hero session? Originally, we added the debug blocks to assist us in monitoring the values of the Countdown attribute and Lives game attribute during the development process. Now that we have updated the game to display the required information on the screen, the debug blocks are redundant!
While it would not necessarily cause a problem to leave the debug blocks where they are, it is best practice to remove any instruction blocks that are no longer in use. Also, during development, excessive use of debug print blocks can have an impact on the performance of the game, so it's a good idea to remove them as soon as is practical.

Using graphics to display information
We are currently displaying two on-screen pieces of information for players of our game: the countdown timer and the number of lives available. However, providing too much textual information for players can be distracting for them, so we need to find an alternative method of displaying some of the information that the player needs during gameplay. Rather than using text to advise the player how much time they have remaining to complete the level, we're going to display a timer bar on the screen.

Time for action – displaying a timer bar
Open the Score Management scene behavior and click on the Display HUD event.
In the when drawing event, right-click on the blue block that draws the text for the countdown timer and select Activate / Deactivate from the pop-up menu. Note that the block becomes faded.
Locate the draw rect at (x: 0 y: 0) with (w: 0 h: 0) instruction block in the palette, and insert it at the bottom of the when drawing event.
Click on the draw option in the newly inserted block and change it to fill.
Set both the x: and y: textboxes to 10.
Set the width (w:) to Countdown x 10.
Set the height (h:) to 10.
Ensure that the draw text … block and the fill rect at … block in the Display HUD event appear as shown in the following screenshot (the draw text LIVES: … block may look different if the earlier Have a go hero section was attempted):
Test the game!

What just happened?
We have created a timer bar that displays the amount of time remaining for the player to collect the fruit, and the timer bar reduces in size with the countdown!
First, in the Display HUD event we deactivated, or disabled, the block that was drawing the textual countdown message, because we no longer want the text message to be displayed on the screen. The next step was to insert a draw rect … block that was configured to create a filled rectangle at the upper-left corner of the screen, with a width equal to the value of the Countdown timer multiplied by 10. If we had not multiplied the value of the countdown by 10, the timer bar would be very small and difficult to see (try it)! We'll be making some improvements to the timer bar later in this article.

Activating and deactivating instruction blocks
When we deactivate an instruction block, as we did in the Display HUD event, it no longer functions; it's completely ignored! However, the block remains in place, but is shown slightly faded, and if required, it can easily be reenabled by right-clicking on it and selecting the Activate / Deactivate option. Being able to activate and deactivate instruction blocks without deleting them is a useful feature; it enables us to try out new instructions, such as our timer bar, without having to completely remove blocks that we might want to use in the future. If, for example, we decided that we didn't want to use the timer bar, we could deactivate it and reactivate the draw text … block! Deactivated instruction blocks have no impact on the performance of a game; they are completely ignored during the game compilation process.

Have a go hero
The tasks in this Have a go hero session are optional; failure to attempt them will not affect future tutorials.
Referring to Stencyl's online help if required at www.stencyl.com/help/, try to make the following improvements to the timer bar:

Specify a more visually appealing color for the rectangle.
Make it thicker (height) so that it is easier to see when playing the game.
Consider drawing a black border (known as a stroke) around the rectangle.
Try to make the timer bar reduce in size smoothly, rather than in big steps.
Ask an independent tester for feedback about the changes and then modify the bar based on the feedback.

To view suggested modifications together with comments, review the Display HUD event in the downloadable files that accompany this article.

Counting collected actors
With the number of lives being monitored and displayed for the player, and the timer bar in place, we now need to create some instructions that will enable our game to keep track of how many of the fruit actors have been collected, and to carry out the appropriate action when there is no fruit left to collect.

Time for action – counting the fruit
Open the Score Management scene behavior and create a new number attribute (not a game attribute) with the configuration shown in the following screenshot (in the block palette, click on Attributes, and then click on Create an Attribute…).
Add a new when created event and rename it to Initialize Fruit Required.
Add the required instruction blocks to the new when created event, so the Initialize Fruit Required event appears as shown in the following screenshot, carefully checking the numbers and text in each of the blocks' textboxes. Note that the red of group block in the set actor value … block cannot be found in the palette; it has been dragged into place from the orange for each … of group Collectibles block.
Test the game and look for the Fruit required message in the debug console.

What just happened?
Before we can create the instructions to determine if all the fruit have been collected, we need to know how many fruit actors there are to collect.
So we have created a new event that stores that information for us in a number attribute called Fruit Required and displays it in the debug console.

We have created a for each … of group Collectibles block. This very useful looping block will repeat the instructions placed inside it for each member of the specified group that can be found in the current scene. We have specified the Collectibles group, and the instruction that we have placed inside the new loop is increment Fruit Required by 1. When the loop has completed, the value of the Fruit Required attribute is displayed in the debug console using a print … block. When constructing new events, it's good practice to insert print … blocks so we can be confident that the instructions achieve the results that we are expecting. When we are happy that the results are as expected, perhaps after carrying out further testing, we can remove the debug printing from our event.

We have also introduced a new type of block that can set a value for an actor; in this case, we have set actor value Collected for … of group to false. This block ensures that each of the fruit actors has a value of Collected that is set to false each time the scene is loaded; remember that this instruction is inside the for each … loop, so it is being carried out for every Collectible group member in the current scene.

Where did the actor's Collected value come from? Well, we just invented it! The set actor value … block allows us to create an arbitrary value for an actor at any time. We can also retrieve that value at any time with a corresponding get actor value … block, and we'll be doing just that when we check to see if a fruit actor has been collected in the next section, Time for action – detecting when all the fruit are collected.
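The loop just described can be sketched in plain JavaScript. The actor objects here are invented for illustration, since Stencyl supplies the group members itself:

```javascript
// Sketch of the Initialize Fruit Required event (illustrative only).
const collectiblesInScene = [
  { name: "banana" },
  { name: "cherry" },
  { name: "pineapple" },
];

let fruitRequired = 0;
for (const actor of collectiblesInScene) { // "for each ... of group Collectibles"
  fruitRequired += 1;                      // "increment Fruit Required by 1"
  actor.Collected = false;                 // "set actor value Collected to false"
}

// The print block sits after the loop, so this appears exactly once:
console.log("Fruit required: " + fruitRequired);
```

Because the count is derived from whatever actors are in the scene, adding or removing fruit in the scene editor never requires changing these instructions.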
Translating our instructions into English results in the following statement: For each actor in the Collectibles group that can be found in this scene, add the value 1 to the Fruit Required attribute and also set the actor's Collected value to false. Finally, print the result in the debug console.

Note that the print … block has been placed after the for each … loop, so the message will not be printed for each fruit actor; it will appear just once, after the loop has completed! If we wish to prove to ourselves that the loop is counting correctly, we can edit the Jungle scene and add as many fruit actors as we wish. When we test the game, we can see that the number of fruit actors in the scene is correctly displayed in the debug console. We have designed a flexible set of instructions that can be used in any scene with any number of fruit actors, and which does not require us (as the game designer) to manually configure the number of fruit actors to be collected in that scene! Once again, we have made life easier for our future selves!

Now that we have the attribute containing the number of fruit to be collected at the start of the scene, we can create the instructions that will respond when the player has successfully collected them all.

Time for action – detecting when all fruits have been collected
Create a new scene called Level Completed, with a Background Color of yellow. Leave all the other settings at their default configuration.
Close the tab for the newly created scene.
Return to the Score Management scene behavior, and create a new custom event by clicking on + Add Event | Advanced | Custom Event.
In the left-hand panel, rename the custom event to Fruit Collected.
Add the required instruction blocks to the new Fruit Collected event, so it appears as shown in the following screenshot, again carefully checking the parameters in each of the text boxes. Note that there is no space in the when FruitCollected happens custom event name.
Save the game and open the Manage Player Collisions actor behavior.
Modify the Collides with Collectibles event so it appears as shown in the following screenshot. The changes are listed in the subsequent steps:
A new if get actor value Collected for … of group = false block has been inserted.
The existing blocks have been moved into the new if … block.
A set actor value Collected for … of group to true block has been inserted above the grow … block.
A trigger event FruitCollected in behavior Score Management for this scene block has been inserted above the do after 0.5 seconds block.
An if … of group is alive block has been inserted into the do after 0.5 seconds block, and the existing kill … of group block has been moved inside the newly added if … block.
Test the game; collect several pieces of fruit, but not all of them! Examine the contents of the debug console; it may be necessary to scroll the console horizontally to read the messages.
Continue to test the game, but this time collect all the fruit actors.

What just happened?
We have created a new Fruit Collected event in the Score Management scene behavior, which switches to a new scene when all the fruit actors have been collected, and we have also modified the Collides with Collectibles event in the Manage Player Collisions actor behavior in order to count how many pieces of fruit remain to be collected. When testing the game we can see that, each time a piece of fruit is collected, the new value of the Fruit Required attribute is displayed in the debug console, and when all the fruit actors have been collected, the yellow Level Completed scene is displayed.

The first step was to create a blank Level Completed scene, which will be switched to when all the fruit actors have been collected. As with the Game Over scene that we created earlier in this article, it is a temporary scene that enables us to easily determine, for testing purposes, when the task of collecting the fruit has been completed successfully.
We then created a new custom event called Fruit Collected in the Score Management scene behavior. This custom event waits for the FruitCollected event trigger to occur, and when that trigger is received, the Fruit Required attribute is decremented by 1 and its new value is displayed in the debug console. A test is then carried out to determine if the value of the Fruit Required attribute is equal to zero, and if it is equal to zero, the bright yellow, temporary Level Completed scene will be displayed!

Our final task was to modify the Collides with Collectibles event in the Manage Player Collisions actor behavior. We inserted an if … block to test the collectible actor's Collected value; remember that we initialized this value to false in the previous section, Time for action – counting the fruit. If the Collected value for the fruit actor is still false, then it hasn't been collected yet, and the instructions contained within the if … block will be carried out.

Firstly, the fruit actor's Collected value is set to true, which ensures that this event cannot occur again for the same piece of fruit. Next, the FruitCollected custom event in the Score Management scene behavior is triggered. Following that, the do after 0.5 seconds block is executed, and the fruit actor will be killed.

We have also added an if … of group is alive check that is carried out before the collectible actor is killed. Because we are killing the actor after a delay of 0.5 seconds, it's good practice to ensure that the actor still exists before we try to kill it! In some games, it may be possible for the actor to be killed by other means during that very short 0.5 second delay, and if we try to kill an actor that does not exist, a runtime error may occur, that is, an error that happens while the game is being played.
This may result in a technical error message being displayed to the player, and the game cannot continue; this is extremely frustrating for players, and they are unlikely to try to play our game again!

Preventing multiple collisions from being detected
A very common problem experienced by game designers who are new to Stencyl occurs when a collision between two actors is repeatedly detected. When two actors collide, all collision events that have been created with the purpose of responding to that collision will be triggered repeatedly until the collision stops occurring, that is, when the two actors are no longer touching. If, for example, we need to update the value of an attribute when a collision occurs, the attribute might be updated dozens or even hundreds of times in only a few seconds!

In our game, we want collisions between the monkey actor and any single fruit actor to cause only a single update to the Fruit Required attribute. This is why we created the actor value Collected for each fruit actor, and this value is initialized to be false (not collected) by the Initialize Fruit Required event in the Score Management scene behavior. When the Collides with Collectibles event in the Manage Player Collisions actor behavior is triggered, a test is carried out to determine if the fruit actor has already been collected, and if it has been collected, no further instructions are carried out. If we did not have this test, then the FruitCollected custom event would be triggered numerous times, and therefore the Fruit Required attribute would be decremented numerous times, causing the value of the Fruit Required attribute to reach zero almost instantly; all because the monkey collided with a single fruit actor!

Using a Boolean value of True or False to carry out a test in this manner is often referred to by developers as using a flag or Boolean flag.
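The effect of the flag can be demonstrated with a short JavaScript sketch (names invented for illustration): even if the collision event fires a hundred times while the two actors overlap, the fruit is counted exactly once.

```javascript
// Sketch of the guarded Collides with Collectibles event (illustrative only).
let fruitRequired = 5;
const fruit = { Collected: false };

function onCollideWithCollectible(actor) {
  if (actor.Collected === false) { // the Boolean flag test
    actor.Collected = true;        // never count this actor again
    fruitRequired -= 1;            // trigger FruitCollected exactly once
  }
}

// The physics engine reports the same overlap on many consecutive frames:
for (let frame = 0; frame < 100; frame++) {
  onCollideWithCollectible(fruit);
}
// fruitRequired has dropped by exactly 1, not by 100
```

Without the flag test, the decrement would run on every frame of contact, and the level would appear to be completed after touching a single fruit.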
Note that, rather than utilizing an actor value to record whether or not a fruit actor has been collected, we could have created a new attribute and initialized and updated the attribute in the same way that we initialized and updated the actor value. However, this would have required more effort to configure, and there is no perceptible impact on performance when using actor values in this manner. Some Stencyl users never use actor values (preferring to always use attributes instead); however, this is purely a matter of preference, and it is at the discretion of the game designer which method to use.

In order to demonstrate what happens when the actor value Collected is not used to determine whether or not a fruit actor has been collected, we can simply deactivate the set actor value Collected for … of group to true instruction block in the Collides with Collectibles event. After deactivating the block, run the game with the debug console open, and allow the monkey to collide with a single fruit actor. The Fruit Required attribute will instantly be decremented multiple times, causing the level to be completed after colliding with only one fruit actor! Remember to reactivate the set actor value … block before continuing!

Keeping track of the levels
As discussed in the introduction to this article, we're going to be adding an additional level to our game, so we'll need a method for keeping track of a player's progress through the game's levels.

Time for action – adding a game attribute to record the level
Create a new number game attribute called Level, with the configuration shown in the following screenshot.
Save the game.

What just happened?
We have created a new game attribute called Level, which will be used to record the current level of the game.
A game attribute is being used here because we need to access this value in other scenes within our game; local attributes have their values reset whenever a scene is loaded, whereas game attributes' values are retained regardless of whether or not a different scene has been loaded.

Fixing the never-ending game!
We've finished designing and creating the gameplay for the Monkey Run game, and the scoring behaviors are almost complete. However, there is an anomaly with the management of the Lives game attribute. The monkey correctly loses a life when it collides with an enemy actor, but currently, when the countdown expires, the monkey is simply repositioned at the start of the level, and the countdown starts again from the beginning! If we leave the game as it is, the player will have an unlimited number of attempts to complete the level; that's not much of a challenge!

Have a go hero
(Recommended) In the Countdown expired event, which is found in the Health actor behavior, modify the test for the countdown so that it checks for the countdown timer being exactly equal to zero, rather than the current test, which is for the Countdown attribute being less than 1. We only want the ShowAngel event to be triggered once, when the countdown equals exactly zero!
(Recommended) Update the game so that the Show Angel event manages the complete process of losing a life, that is, either when a collision occurs between the monkey and an enemy, or when the countdown timer expires. A single event should deduct a life and restart the level.
(Optional) If we look carefully, we can see that the countdown timer bar starts to grow backwards when the player runs out of time! Update the Display HUD event in the Score Management scene behavior, so that the timer bar is only drawn when the countdown is greater than zero.

There are many different ways to implement the above modifications, so take some time and plan the recommended modifications!
Test the game thoroughly to ensure that the lives are reduced correctly and the level restarts as expected, both when the monkey collides with the enemy and when the countdown expires. It would certainly be a good idea to review the download file for this session, compare it with your own solutions, and review each event carefully, along with the accompanying comment blocks. There are some useful tips in the example file, so do take the time to have a look!

Summary
Although our game doesn't look vastly different from when we started this article, we have made some very important changes.

First, we implemented a text display to show the countdown timer, so that players of our game can see how much time they have remaining to complete the level. We also imported and configured a font and used the new font to make the countdown display more visually appealing. We then implemented a system of tracking the number of lives that the player has left, and this was our first introduction to learning how game attributes can store information that can be carried across scenes.

The most visible change that we implemented in this article was to introduce a timer bar that reduces in size as the countdown decreases. Although very few instruction blocks were required to create the timer bar, the results are very effective, and are less distracting for the player than having to repeatedly look to the top of the screen to read a text display.

The main challenge for players of our game is to collect all the fruit actors in the allocated time, so we created an initialization event to count the number of fruit actors in the scene. Again, this event has been designed to be reusable, as it will always correctly count the fruit actors in any scene. We also implemented the instructions to test when there are no more fruit actors to be collected, so the player can be taken to the next level in the game when they have completed the challenge.
A very important skill that we learned while implementing these instructions was to use actor values as Boolean flags to ensure that collisions are counted only once. Finally, we created a new game attribute to keep track of our players' progress through the different levels in the game.

Resources for Article:
Further resources on this subject:
Introduction to Game Development Using Unity 3D [Article]
2D game development with Monkey [Article]
Getting Started with GameSalad [Article]
Paths and curves in Raphael JS Vector Graphics

Packt
10 Sep 2013
4 min read
(For more resources related to this topic, see here.)

Path drawing concepts
The process of drawing with a pen on paper can be broken down into the following steps:

You place your pen at a particular point on a piece of paper.
You press and move the pen freely from this point to another point.
You keep your pen at this point, or lift up the pen and place it at another point on the paper.
The process is repeated until you have finished drawing.

Path drawing works in much the same way. The point at which you place your pen, known as the current point, defines the start of a path, while the free movement of the pen describes the path itself.

Consider drawing an arbelos shape. We first place our pen at a point (100, 180) on our canvas and draw an arc to the point (380, 180) as shown: We then create an arc from the point (380, 180) to the point (200, 180): Finally, we create an arc back to the point (100, 180) to complete the path:

The example of the arbelos demonstrates the drawing of a single path, where we did not lift up the pen at any point during the drawing process. Were we to lift up the pen, the subsequent drawing would technically create subpaths on the main path element. In the example, creating a triangle as a subpath has the effect of adding to our single path element. Notice also that the fill attribute is applied to the overall path rather than individual subpaths:

Drawing Curves
There are three types of curve path: quadratic Bézier curves, cubic Bézier curves, and arcs. Bézier curves are curves defined between a start and end point, but whose direction we can determine by using control points.

Drawing quadratic Bézier curves
A quadratic Bézier curve is a curve between two points with a single control point. To appreciate how quadratic Bézier curves are drawn, you should experiment with the animated demo at http://www.jasondavies.com/animated-bezier/. As shown here, the curve is drawn from a point P0 to a point P2 with a control point P1.
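The relationship between P0, P1, and P2 is captured by the standard quadratic Bézier formula, B(t) = (1-t)²·P0 + 2(1-t)t·P1 + t²·P2, for t between 0 and 1. The following standalone function (not part of Raphael) evaluates it:

```javascript
// Evaluate a quadratic Bézier curve at parameter t (0 <= t <= 1):
// B(t) = (1-t)^2 * P0 + 2(1-t)t * P1 + t^2 * P2
function quadBezier(p0, p1, p2, t) {
  const u = 1 - t;
  return {
    x: u * u * p0.x + 2 * u * t * p1.x + t * t * p2.x,
    y: u * u * p0.y + 2 * u * t * p1.y + t * t * p2.y,
  };
}

const p0 = { x: 50, y: 150 };  // start point
const p1 = { x: 225, y: 20 };  // control point
const p2 = { x: 400, y: 150 }; // end point

quadBezier(p0, p1, p2, 0);   // { x: 50, y: 150 }  -- the curve starts at P0
quadBezier(p0, p1, p2, 1);   // { x: 400, y: 150 } -- and ends at P2
quadBezier(p0, p1, p2, 0.5); // { x: 225, y: 85 }  -- pulled toward P1
```

Note that the curve passes through P0 and P2 but not through P1 itself; the control point only pulls the curve toward it.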
The control point P1, relative to P0 and P2, determines the extent to which the curve bends: from start to finish, the curve starts off heading in the direction of the control point P1 and then bends towards the end point P2 from the direction of P1. There are two quadratic Bézier curve commands, the syntax for which is given here:

Command   Parameters         Example
Q or q    (x1, y1, x, y)+    Q 100 50 200 250
T or t    (x, y)+            T 400 250

The Q command (or q for relative points) describes a curve drawn from the current point on a path to the point (x, y) using (x1, y1) as a control point. For example, consider the following code:

paper.path(['M', 50, 150, 'Q', 225, 20, 400, 150]);

This draws the quadratic Bézier curve shown. The equivalent path using the lowercase variant of the command would be "M 50,150 q 175,-130 350,0", where the (x, y) and (x1, y1) parameters are the relative distances from the start point (50, 150):

Moving the control point affects the way that the path is drawn. For example, the path "M 50,150 Q 100,40 400,150" is shown as:

As with the other commands we have encountered so far, parameters can be repeated, which allows us to draw multiple connected quadratic Bézier curves. Consider the following code:

paper.path([
    'M', 50, 150,
    'Q', 225, 20, 400, 150, 575, 20, 750, 150
]);

This has the effect of drawing a second curve from (400, 150) to the point (750, 150), with a control point at (575, 20):

The T or t command is shorthand whereby the control point coordinates are not specified. Instead, the control point is determined automatically as a reflection of the previous control point. Consider the path drawn by the following code:

paper.path([
    'M', 50, 150,
    'Q', 225, 20, 400, 150,
    'T', 750, 150
]);

This creates two curves as shown in the following screenshot. The current point at the start of the path drawn by T is (400, 150).
Relative to this point, a reflection of the previous control point (225, 20) is (575, 280):

Summary

In this article, we covered quadratic Bézier curves.

Resources for Article: Further resources on this subject: Getting Started with Impressive Presentations [Article] So, what is EaselJS? [Article] Creating a Simple Application in Sencha Touch [Article]
Packt
05 Sep 2013
10 min read

Cocos2d-x: Installation

(For more resources related to this topic, see here.)

Download and installation

All the examples in this article were developed on a Mac using Xcode. Although you can use Cocos2d-x to develop your games for other platforms, using different systems, the examples will focus on iOS and Mac.

Xcode is free and can be downloaded from the Mac App Store (https://developer.apple.com/xcode/index.php), but in order to test your code on an iOS device and publish your games, you will need a developer account with Apple, which will cost you USD 99 a year. You can find more information on their website: https://developer.apple.com/

So, assuming you have an internet connection, and that Xcode is ready to rock, let's begin!

Time for action – downloading and installing Cocos2d-x

We start by downloading the framework:

1. Go to http://download.cocos2d-x.org/ and download the latest stable version of Cocos2d-x. For this article I'll be using version Cocos2d-2.0-x-2.0.4, which means the 2.0.4 C++ port of version 2.0 of Cocos2d.
2. Uncompress the files somewhere on your machine.
3. Open Terminal and type cd (that is, cd and a space).
4. Drag the uncompressed folder you just downloaded to the Terminal window. You should see the path to the folder added to the command line.
5. Hit return to go to that folder in Terminal.
6. Now type: sudo ./install-templates-xcode.sh -u
7. Hit return again and you're done.

What just happened?

You have successfully installed the Cocos2d-x templates on your machine. With these in place, you can select the type of Cocos2d-x application you wish to build inside Xcode, and the templates will take care of copying all the necessary files into your application.

Next, open Xcode and select Create a new Xcode Project. You should see something like this:

So let's build our first application.

Hello-x World-x

Let's create that old chestnut in computer programming: the hello world example.

Time for action – creating an application

Open Xcode and select File | New | Project...
and follow these steps:

1. In the dialogue box, select cocos2d-x under the iOS menu and choose the cocos2dx template.
2. Hit Next. Give the application a name, but not HelloWorld. I'll show you why in a second.
3. You will then be asked to select a place to save the project, and you are done.

Once your application is ready, click Run to build it. After that, this is what you should see in the simulator:

When you run a Cocos2d-x application in Xcode it is quite common for the program to post some warnings regarding your code, or most likely the frameworks. These will mostly reference deprecated methods, or statements that do not precisely follow more recent, and stricter, rules of the current SDK. But that's okay. These warnings, though certainly annoying, can be ignored.

What just happened?

You created your first Cocos2d-x application using the cocos2dx template, sometimes referred to as the basic template. The other template options include one with Box2D, one with Chipmunk (both related to physics simulation), one with JavaScript, and one with Lua. The last two options allow you to code some or all of your game using those scripting languages instead of the native C++; and they work just as you would expect a scripting language to work, meaning the commands written in either JavaScript or Lua are actually replaced and interpreted as C++ commands by the compiler.

Now if you look at the files created by the basic template you will see a HelloWorldScene class file. That's the reason I didn't want you to call your application HelloWorld, because I didn't want you to have the impression that the file name was based on your project name. It isn't. You will always get a HelloWorldScene file unless you change the template itself.

Now let's go over the sample application and its files.

The folder structure

First you have the Resources folder, where you find the images used by the application. The ios folder has the necessary underlying connections between your app and iOS.
For other platforms, you will have their necessary linkage files in separate folders targeting their respective platforms (like an android folder for the Android platform, for instance). In the libs folder you have all the cocos2dx files, plus the CocosDenshion files (for sound support) and a bunch of other extensions. Using a different template for your projects will result in a different folder structure here, based on what needs to be added to your project. So you will see a Box2D folder, for example, if you choose the Box2D template.

In the Classes folder you have your application. In here, everything is written in C++ and this is the home for the part of your code that will hopefully not need to change, however many platforms you target with your application.

Now let us go over the main classes of the basic application.

The iOS linkage classes

AppController and RootViewController are responsible for setting up OpenGL in iOS, as well as telling the underlying operating system that your application is about to say Hello... To the World. These classes are written with a mix of Objective-C and C++, as all the nice brackets and the .mm extensions show. You will change very little, if anything, in these classes; and again, that will reflect in changes to the way iOS handles your application. So other targets would require the same instructions, or none at all, depending on the target. In AppController, for instance, I could add support for multitouch. And in RootViewController, I could limit the screen orientations supported by my application.

The AppDelegate class

This class marks the first time your C++ app will talk to the underlying OS. It attempts to map the main events that mobile devices want to dispatch and listen to. From here on, all your application will be written in C++ (unless you need something else). In AppDelegate you should set up CCDirector (the all-powerful Cocos2d-x singleton manager object) to run your application just the way you want.
You can:

- Get rid of the application status information
- Change the frame rate of your application
- Tell CCDirector where your high definition images are, and where your standard definition images are, as well as which to use
- Change the overall scale of your application to suit different screens

The AppDelegate class is also the best place to start any preloading process. And, most importantly, it is here you tell the CCDirector object what CCScene to begin your application with.

Here too you will handle what happens to your application if the OS decides to kill it, push it aside, or hang it upside down to dry. All you need to do is place your logic inside the correct event handler: applicationDidEnterBackground or applicationWillEnterForeground.

The HelloWorldScene class

When you run the application you get a screen with the words Hello World and a bunch of numbers in one corner. These are the display stats you decided you wanted around in the AppDelegate class. The actual screen is created by the oddly named HelloWorldScene class. It is a Layer class that creates its own scene (don't worry if you don't know what a Layer class is, or a Scene class; you will soon enough).

When it initializes, HelloWorldScene puts a button on screen that you can press to exit the application. The button is actually a Menu item, part of a Menu group consisting of one button, two image states for that button, and one callback event, triggered when the said button is pressed. The Menu group automatically handles touch events targeting its members, so you don't get to see any of that code floating about. There is also the necessary Label object to show the Hello World message and the background image.

Who begets whom

If you never worked with either Cocos2d or Cocos2d-x before, the way the initial scene() method is instantiated may lead to dizziness.
To recap, in AppDelegate you have:

CCScene *pScene = HelloWorld::scene();
pDirector->runWithScene(pScene);

CCDirector needs a CCScene object to run, which you can think of as being your application, basically. CCScene needs something to show, which in this case is a CCLayer class. CCScene is then said to contain a CCLayer class.

Here a CCScene object is created through a static method scene inside a CCLayer derived class. So the layer creates the scene, and the scene immediately adds the layer to itself. Huh? Relax. This incestuous-like instantiation will most likely only happen once, and you have nothing to do with it when it happens. So you can easily ignore all these funny goings-on and look the other way. I promise instantiations will be much easier after this first one.

Further information

Follow these steps to access one of the best sources for reference material on Cocos2d-x: its Test project.

Time for action – running the test samples

You open the test project just like you would do for any other Xcode project:

1. Go inside the folder you downloaded for the framework, and navigate to samples/TestCpp/proj.ios/TestCpp.xcodeproj.
2. Open that project in Xcode.
3. When you run the project, you will see inside the simulator a long list of tests, all nicely organized by topic. Pick any one to review.
4. Better yet, navigate to samples/TestCpp/Classes and if you have a program like TextWrangler or some equivalent, you can open that entire directory inside a Disk Browser window and have all that information ready for referencing right at your desktop.

What just happened?

With the test samples you can visualize most features in Cocos2d-x and see what they do, as well as some of the ways you can initialize and customize them. I will refer to the code found in the tests quite often.
As usual with programming, there is always a different way to accomplish a given task, so sometimes after showing you one way, I'll refer to another one that you can find (and by then easily understand) inside the Test classes.

The other tools

Now comes the part where you may need to spend a bit more money to get some extremely helpful tools. In this article's examples I use four of them:

- A tool to help build sprite sheets: I'll use Texture Packer (http://www.codeandweb.com/texturepacker). There are other alternatives, like Zwoptex (http://zwopple.com/zwoptex/), and they usually offer some features for free.
- A tool to help build particle effects: I'll use Particle Designer (http://www.71squared.com/en/particledesigner). Depending on your operating system you may find free tools online for this. Cocos2d-x comes bundled with some common particle effects that you can customize, but to do it blindly is a process I do not recommend.
- A tool to help build bitmap fonts: I'll use Glyph Designer (http://www.71squared.com/en/glyphdesigner). But there are others: bmGlyph (which is not as expensive) and FontBuilder (which is free). It is not extremely hard to build a bitmap font by hand, not nearly as hard as building a particle effect from scratch, but doing it once is enough to convince you to get one of these tools fast.
- A tool to produce sound effects: No contest. cfxr for Mac or the original sfxr for Windows. Both are free (http://thirdcog.eu/apps/cfxr and http://www.drpetter.se/project_sfxr.html, respectively).

Summary

You just learned how to install Cocos2d-x templates and create a basic application. You also learned enough of the structure of a basic Cocos2d-x application to get started building your first game.

Resources for Article: Further resources on this subject: Getting Started With Cocos2d [Article] Cocos2d: Working with Sprites [Article] Cocos2d for iPhone: Surfing Through Scenes [Article]
Packt
04 Sep 2013
17 min read

Audio Playback

(For more resources related to this topic, see here.)

Understanding FMOD

One of the main reasons why I chose FMOD for this book is that it contains two separate APIs—the FMOD Ex Programmer's API, for low-level audio playback, and FMOD Designer, for high-level data-driven audio. This will allow us to cover game audio programming at different levels of abstraction without having to use entirely different technologies.

Besides that reason, FMOD is also an excellent piece of software, with several advantages for game developers:

- License: It is free for non-commercial use, and has reasonable licenses for commercial projects.
- Cross-platform: It works across an impressive number of platforms. You can run it on Windows, Mac, Linux, Android, iOS, and on most of the modern video game consoles by Sony, Microsoft, and Nintendo.
- Supported formats: It has native support for a huge range of audio file formats, which saves you the trouble of having to include other external libraries and decoders.
- Programming languages: Not only can you use FMOD with C and C++, there are also bindings available for other programming languages, such as C# and Python.
- Popularity: It is extremely popular, being widely considered the industry standard nowadays. It was used in games such as BioShock, Crysis, Diablo 3, Guitar Hero, StarCraft II, and World of Warcraft. It is also used to power several popular game engines, such as Unity3D and CryEngine.
- Features: It is packed with features, covering everything from simple audio playback, streaming, and 3D sound, to interactive music, DSP effects, and low-level audio programming.

Installing FMOD Ex Programmer's API

Installing a C++ library can be a bit daunting at first. The good side is that once you have done it for the first time, the process is usually the same for every other library.
Here are the steps that you should follow if you are using Microsoft Visual Studio:

1. Download the FMOD Ex Programmer's API from http://www.fmod.org and install it to a folder that you can remember, such as C:\FMOD.
2. Create a new empty project, and add at least one .cpp file to it. Then, right-click on the project node in the Solution Explorer, and select Properties from the list. For all the steps that follow, make sure that the Configuration option is set to All Configurations.
3. Navigate to C/C++ | General, and add C:\FMOD\api\inc to the list of Additional Include Directories (entries are separated by semicolons).
4. Navigate to Linker | General, and add C:\FMOD\api\lib to the list of Additional Library Directories.
5. Navigate to Linker | Input, and add fmodex_vc.lib to the list of Additional Dependencies.
6. Navigate to Build Events | Post-Build Event, and add xcopy /y "C:\FMOD\api\fmodex.dll" "$(OutDir)" to the Command Line list.
7. Include the <fmod.hpp> header file from your code.

Creating and managing the audio system

Everything that happens inside FMOD is managed by a class named FMOD::System, which we must start by instantiating with the FMOD::System_Create() function:

FMOD::System* system;
FMOD::System_Create(&system);

Notice that the function returns the system object through a parameter. You will see this pattern every time one of the FMOD functions needs to return a value, because they all reserve the regular return value for an error code. We will discuss error checking in a bit, but for now let us get the audio engine up and running.

Now that we have a system object instantiated, we also need to initialize it by calling the init() method:

system->init(100, FMOD_INIT_NORMAL, 0);

The first parameter specifies the maximum number of channels to allocate. This controls how many sounds you are able to play simultaneously.
You can choose any number for this parameter because the system performs some clever priority management behind the scenes and distributes the channels using the available resources. The second and third parameters customize the initialization process, and you can usually leave them as shown in the example.

Many features that we will use work properly only if we update the system object every frame. This is done by calling the update() method from inside your game loop:

system->update();

You should also remember to shut down the system object before your game ends, so that it can dispose of all resources. This is done by calling the release() method:

system->release();

Loading and streaming audio files

One of the greatest things about FMOD is that you can load virtually any audio file format with a single method call. To load an audio file into memory, use the createSound() method:

FMOD::Sound* sound;
system->createSound("sfx.wav", FMOD_DEFAULT, 0, &sound);

To stream an audio file from disk without having to store it in memory, use the createStream() method:

FMOD::Sound* stream;
system->createStream("song.ogg", FMOD_DEFAULT, 0, &stream);

Both methods take the path of the audio file as the first parameter, and return a pointer to an FMOD::Sound object through the fourth parameter, which you can use to play the sound. The paths in the previous examples are relative to the application path. If you are running these examples in Visual Studio, make sure that you copy the audio files into the output folder (for example, using a post-build event such as xcopy /y "$(ProjectDir)*.ogg" "$(OutDir)").

The choice between loading and streaming is mostly a tradeoff between memory and processing power. When you load an audio file, all of its data is uncompressed and stored in memory, which can take up a lot of space, but the computer can play it without much effort.
Streaming, on the other hand, barely uses any memory, but the computer has to access the disk constantly, and decode the audio data on the fly. Another difference (in FMOD at least) is that when you stream a sound, you can only have one instance of it playing at any time. This limitation exists because there is only one decode buffer per stream. Therefore, for sound effects that have to be played multiple times simultaneously, you have to either load them into memory, or open multiple concurrent streams. As a rule of thumb, streaming is great for music tracks, voice cues, and ambient tracks, while most sound effects should be loaded into memory.

The second and third parameters allow us to customize the behavior of the sound. There are many different options available, but the following list summarizes the ones we will be using the most. Using FMOD_DEFAULT is equivalent to combining the first option of each of these categories:

- FMOD_LOOP_OFF and FMOD_LOOP_NORMAL: These modes control whether the sound should only play once, or loop once it reaches the end
- FMOD_HARDWARE and FMOD_SOFTWARE: These modes control whether the sound should be mixed in hardware (better performance) or software (more features)
- FMOD_2D and FMOD_3D: These modes control whether to use 3D sound

We can combine multiple modes using the bitwise OR operator (for instance, FMOD_DEFAULT | FMOD_LOOP_NORMAL | FMOD_SOFTWARE). We can also tell the system to stream a sound even when we are using the createSound() method, by setting the FMOD_CREATESTREAM flag. In fact, the createStream() method is simply a shortcut for this.

When we do not need a sound anymore (or at the end of the game) we should dispose of it by calling the release() method of the sound object. We should always release the sounds we create, regardless of the audio system also being released.
sound->release();

Playing sounds

With the sounds loaded into memory or prepared for streaming, all that is left is telling the system to play them using the playSound() method:

FMOD::Channel* channel;
system->playSound(FMOD_CHANNEL_FREE, sound, false, &channel);

The first parameter selects in which channel the sound will play. You should usually let FMOD handle it automatically, by passing FMOD_CHANNEL_FREE as the parameter.

The second parameter is a pointer to the FMOD::Sound object that you want to play.

The third parameter controls whether the sound should start in a paused state, giving you a chance to modify some of its properties without the changes being audible. If you set this to true, you will also need to use the next parameter so that you can unpause it later.

The fourth parameter is an output parameter that returns a pointer to the FMOD::Channel object in which the sound will play. You can use this handle to control the sound in multiple ways, which will be the main topic of the next chapter. You can ignore this last parameter if you do not need any control over the sound, and simply pass in 0 in its place. This can be useful for non-looping one-shot sounds:

system->playSound(FMOD_CHANNEL_FREE, sound, false, 0);

Checking for errors

So far, we have assumed that every operation will always work without errors. However, in a real scenario, there is room for a lot to go wrong. For example, we could try to load an audio file that does not exist.

In order to report errors, every function and method in FMOD has a return value of type FMOD_RESULT, which will only be equal to FMOD_OK if everything went right. It is up to the user to check this value and react accordingly:

FMOD_RESULT result = system->init(100, FMOD_INIT_NORMAL, 0);
if (result != FMOD_OK) {
    // There was an error, do something about it
}

For starters, it would be useful to know what the error was. However, since FMOD_RESULT is an enumeration, you will only see a number if you try to print it.
Fortunately, there is a function called FMOD_ErrorString() inside the fmod_errors.h header file which will give you a complete description of the error.

You might also want to create a helper function to simplify the error checking process. For instance, the following function will check for errors, print a description of the error to the standard output, and exit the application:

#include <iostream>
#include <fmod_errors.h>

void ExitOnError(FMOD_RESULT result)
{
    if (result != FMOD_OK)
    {
        std::cout << FMOD_ErrorString(result) << std::endl;
        exit(-1);
    }
}

You could then use that function to check for any critical errors that should cause the application to abort:

ExitOnError(system->init(100, FMOD_INIT_NORMAL, 0));

The initialization process described earlier also assumes that everything will go as planned, but a real game should be prepared to deal with any errors. Fortunately, there is a template provided in the FMOD documentation which shows you how to write a robust initialization sequence. It is a bit long to cover here, so I urge you to refer to the file named Getting started with FMOD for Windows.pdf inside the documentation folder for more information.

For clarity, all of the code examples will continue to be presented without error checking, but you should always check for errors in a real project.

Project 1 – building a simple audio manager

In this project, we will be creating a SimpleAudioManager class that combines everything that was covered in this chapter. Creating a wrapper for an underlying system that only exposes the operations that we need is known as the façade design pattern, and is very useful in order to keep things nice and simple.

Since we have not seen how to manipulate sound yet, do not expect this class to be powerful enough to be used in a complex game. Its main purpose will be to let you load and play one-shot sound effects with very little code (which could in fact be enough for very simple games).
It will also free you from the responsibility of dealing with sound objects directly (and having to release them) by allowing you to refer to any loaded sound by its filename. The following is an example of how to use the class:

SimpleAudioManager audio;
audio.Load("explosion.wav");
audio.Play("explosion.wav");

From an educational point of view, what is perhaps even more important is that you use this exercise as a way to get some ideas on how to adapt the technology to your needs. It will also form the basis of the next chapters in the book, where we will build systems that are more complex.

Class definition

Let us start by examining the class definition:

#include <string>
#include <map>
#include <fmod.hpp>

typedef std::map<std::string, FMOD::Sound*> SoundMap;

class SimpleAudioManager
{
public:
    SimpleAudioManager();
    ~SimpleAudioManager();
    void Update(float elapsed);
    void Load(const std::string& path);
    void Stream(const std::string& path);
    void Play(const std::string& path);
private:
    void LoadOrStream(const std::string& path, bool stream);
    FMOD::System* system;
    SoundMap sounds;
};

From browsing through the list of public class members, it should be easy to deduce what it is capable of doing:

- The class can load audio files (given a path) using the Load() method
- The class can stream audio files (given a path) using the Stream() method
- The class can play audio files (given a path) using the Play() method (granted that they have been previously loaded or streamed)
- There is also an Update() method and a constructor/destructor pair to manage the sound system

The private class members, on the other hand, can tell us a lot about the inner workings of the class. At the core of the class is an instance of FMOD::System responsible for driving the entire sound engine. The class initializes the sound system in the constructor, and releases it in the destructor. Sounds are stored inside an associative container, which allows us to search for a sound given its file path.
For this purpose, we will be relying on one of the C++ Standard Template Library (STL) associative containers, the std::map class, as well as the std::string class for storing the keys. Looking up a string key is a bit inefficient (compared to an integer, for example), but it should be fast enough for our needs. An advantage of having all the sounds stored in a single container is that we can easily iterate over them and release them from the class destructor.

Since the code for loading and streaming audio files is almost the same, the common functionality has been extracted into a private method called LoadOrStream(), to which Load() and Stream() delegate all of the work. This prevents us from repeating the code needlessly.

Initialization and destruction

Now, let us walk through the implementation of each of these methods. First we have the class constructor, which is extremely simple, as the only thing that it needs to do is initialize the system object:

SimpleAudioManager::SimpleAudioManager()
{
    FMOD::System_Create(&system);
    system->init(100, FMOD_INIT_NORMAL, 0);
}

Updating is even simpler, consisting of a single method call:

void SimpleAudioManager::Update(float elapsed)
{
    system->update();
}

The destructor, on the other hand, needs to take care of releasing the system object, as well as all the sound objects that were created. This process is not that complicated though. First, we iterate over the map of sounds, releasing each one in turn, and clearing the map at the end. The syntax might seem a bit strange if you have never used an STL iterator before, but all that it means is to start at the beginning of the container, and keep advancing until we reach its end. Then we finish off by releasing the system object as usual.
SimpleAudioManager::~SimpleAudioManager()
{
    // Release every sound object and clear the map
    SoundMap::iterator iter;
    for (iter = sounds.begin(); iter != sounds.end(); ++iter)
        iter->second->release();
    sounds.clear();

    // Release the system object
    system->release();
    system = 0;
}

Loading or streaming sounds

Next in line are the Load() and Stream() methods, but let us examine the private LoadOrStream() method first. This method takes the path of the audio file as a parameter, and checks if it has already been loaded (by querying the sound map). If the sound has already been loaded there is no need to do it again, so the method returns. Otherwise, the file is loaded (or streamed, depending on the value of the second parameter) and stored in the sound map under the appropriate key:

void SimpleAudioManager::LoadOrStream(const std::string& path, bool stream)
{
    // Ignore call if sound is already loaded
    if (sounds.find(path) != sounds.end()) return;

    // Load (or stream) file into a sound object
    FMOD::Sound* sound;
    if (stream)
        system->createStream(path.c_str(), FMOD_DEFAULT, 0, &sound);
    else
        system->createSound(path.c_str(), FMOD_DEFAULT, 0, &sound);

    // Store the sound object in the map using the path as key
    sounds.insert(std::make_pair(path, sound));
}

With the previous method in place, both the Load() and the Stream() methods can be trivially implemented as follows:

void SimpleAudioManager::Load(const std::string& path)
{
    LoadOrStream(path, false);
}

void SimpleAudioManager::Stream(const std::string& path)
{
    LoadOrStream(path, true);
}

Playing sounds

Finally, there is the Play() method, which works the other way around. It starts by checking if the sound has already been loaded, and does nothing if the sound is not found in the map. Otherwise, the sound is played using the default parameters.
void SimpleAudioManager::Play(const std::string& path)
{
    // Search for a matching sound in the map
    SoundMap::iterator sound = sounds.find(path);

    // Ignore call if no sound was found
    if (sound == sounds.end()) return;

    // Otherwise play the sound
    system->playSound(FMOD_CHANNEL_FREE, sound->second, false, 0);
}

We could have tried to automatically load the sound in the case when it was not found. In general, this is not a good idea, because loading a sound is a costly operation, and we do not want that happening during a critical gameplay section where it could slow the game down. Instead, we should stick to having separate load and play operations.

A note about the code samples

Although this is a book about audio, all the samples need an environment to run on. In order to keep the audio portion of the samples as clear as possible, we will also be using the Simple and Fast Multimedia Library 2.0 (SFML) (http://www.sfml-dev.org). This library can very easily take care of all the miscellaneous tasks, such as window creation, timing, graphics, and user input, which you will find in any game.

For example, here is a complete sample using SFML and the SimpleAudioManager class. It creates a new window, loads a sound, runs a game loop at 60 frames per second, and plays the sound whenever the user presses the space key.
#include <SFML/Window.hpp>
#include "SimpleAudioManager.h"

int main()
{
    sf::Window window(sf::VideoMode(320, 240), "AudioPlayback");
    sf::Clock clock;

    // Place your initialization logic here
    SimpleAudioManager audio;
    audio.Load("explosion.wav");

    // Start the game loop
    while (window.isOpen())
    {
        // Only run approx 60 times per second
        float elapsed = clock.getElapsedTime().asSeconds();
        if (elapsed < 1.0f / 60.0f) continue;
        clock.restart();

        sf::Event event;
        while (window.pollEvent(event))
        {
            // Handle window events
            if (event.type == sf::Event::Closed)
                window.close();

            // Handle user input
            if (event.type == sf::Event::KeyPressed &&
                event.key.code == sf::Keyboard::Space)
                audio.Play("explosion.wav");
        }

        // Place your update and draw logic here
        audio.Update(elapsed);
    }

    // Place your shutdown logic here
    return 0;
}

Summary

In this article, we have seen some of the advantages of using the FMOD audio engine. We saw how to install the FMOD Ex Programmer's API in Visual Studio; how to initialize, manage, and release the FMOD sound system; how to load or stream an audio file of any type from disk; how to play a sound that has been previously loaded by FMOD; how to check for errors in every FMOD function; and how to create a simple audio manager that encapsulates the act of loading and playing audio files behind a simple interface.

Resources for Article: Further resources on this subject: Using SpriteFonts in a Board-based Game with XNA 4.0 [Article] HTML5 Games Development: Using Local Storage to Store Game Data [Article] Making Money with Your Game [Article]