AI for Unity game developers: How to emulate real-world senses in your NPC agent behavior

Kunal Chaudhari
06 Jun 2018
19 min read
An AI character system needs to be aware of its environment: where the obstacles are, where the enemy is, whether the enemy is visible in the player's sight, and so on. The quality of our non-player character's (NPC's) AI depends entirely on the information it can get from the environment. Nothing breaks the level of immersion in a game like an NPC getting stuck behind a wall. Based on the information the NPC can collect, the AI system can decide which logic to execute in response to that data. If the sensory systems do not provide enough data, or the AI system is unable to properly act on that data, the agent can begin to glitch, or behave in a way contrary to what the developer, or more importantly the player, would expect. Some games have become infamous for their comically bad AI glitches, and it's worth a quick internet search to find some videos of AI glitches for a good laugh.

In this article, we'll learn to implement AI behavior using the concept of a sensory system similar to what living entities have. We will learn the basics of sensory systems, along with some of the different sensory systems that exist.

You are reading an extract from Unity 2017 Game AI Programming - Third Edition, written by Ray Barrera, Aung Sithu Kyaw, and Thet Naing Swe.

Basic sensory systems

Our agent's sensory systems should believably emulate real-world senses such as vision, sound, and so on, to build a model of its environment, much like we do as humans. Have you ever tried to navigate a room in the dark after shutting off the lights? It gets more and more difficult as you move from your initial position, because your perspective shifts and you have to rely more and more on your fuzzy memory of the room's layout. While we rely on a constant stream of sensory data to navigate our environment, our agent's AI is a lot more forgiving, giving us the freedom to examine the environment at predetermined intervals. This allows us to build a more efficient system in which we can focus only on the parts of the environment that are relevant to the agent.

The concept of a basic sensory system is that there are two components, Aspect and Sense. Our AI characters will have senses, such as perception, smell, and touch, and these senses will look out for specific aspects, such as enemies and bandits. For example, you could have a patrol guard AI with a perception sense that's looking for other game objects with an enemy aspect, or it could be a zombie entity with a smell sense looking for other entities with an aspect defined as a brain.

For our demo, this is basically what we are going to implement: a base interface called Sense that will be implemented by other custom senses. In this article, we'll implement perspective and touch senses. Perspective is what animals use to see the world around them. If our AI character sees an enemy, we want to be notified so that we can take some action. Likewise with touch: when an enemy gets too close, we want to be able to sense that, almost as if our AI character can hear that the enemy is nearby. Then we'll write a minimal Aspect class that our senses will be looking for.

Cone of sight

A raycast is a feature in Unity that allows you to determine which objects are intersected by a line cast from a point in a given direction. While this is a fairly efficient way to handle visual detection in a simple way, it doesn't accurately model the way vision works for most entities.
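For illustration only, a minimal line-of-sight check using a raycast might look like the following sketch; the script name and fields (target, sightRange) are placeholders rather than part of the book's demo:

    using UnityEngine;

    public class LineOfSightSketch : MonoBehaviour
    {
        public Transform target;         // placeholder: whatever this agent is trying to see
        public float sightRange = 100f;  // placeholder maximum view distance

        bool CanSeeTarget()
        {
            Vector3 direction = target.position - transform.position;
            RaycastHit hit;

            // The ray stops at the first collider it hits, so anything standing
            // between this agent and the target blocks the "view".
            if (Physics.Raycast(transform.position, direction, out hit, sightRange))
            {
                return hit.transform == target;
            }
            return false;
        }
    }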
An alternative to using the line of sight is using a cone-shaped field of vision. As the following figure illustrates, the field of vision is literally modeled using a cone shape. This can be in 2D or 3D, as appropriate for your type of game.

The preceding figure illustrates the concept of a cone of sight. In this case, beginning at the source, that is, the agent's eyes, the cone grows, but becomes less accurate with distance, as represented by the fading color of the cone. The actual implementation of the cone can vary from a basic overlap test to a more complex, realistic model that mimics eyesight. In a simple implementation, it is only necessary to test whether an object overlaps with the cone of sight, ignoring distance or periphery. A complex implementation mimics eyesight more closely: as the cone widens away from the source, the field of vision grows, but the chance of seeing things toward the edges of the cone diminishes compared to those near the center.

Hearing, feeling, and smelling using spheres

One very simple yet effective way of modeling sounds, touch, and smell is via the use of spheres. For sounds, for example, we can imagine the center as being the source, with the loudness dissipating the farther the listener is from the center. Inversely, the listener can be modeled instead of, or in addition to, the source of the sound. The listener's hearing is represented by a sphere, and the sounds closest to the listener are more likely to be "heard." We can modify the size and position of the sphere relative to our agent to accommodate feeling and smelling. The following figure represents our sphere and how our agent fits into the setup.

As with sight, the probability of an agent registering the sensory event can be modified based on the distance from the sensor, or it can be a simple overlap event, where the sensory event is always detected as long as the source overlaps the sphere.
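As a rough illustration (not part of the article's demo project), an overlap-style hearing check built on a sphere could look like the sketch below; hearingRadius and the debug message are assumptions:

    using UnityEngine;

    public class HearingSketch : MonoBehaviour
    {
        public float hearingRadius = 10f;  // assumed size of the listener's sphere

        void CheckHearing()
        {
            // Everything overlapping the sphere counts as "heard" in this simple version;
            // a fancier version could scale the probability by distance from the center.
            Collider[] heard = Physics.OverlapSphere(transform.position, hearingRadius);
            foreach (Collider other in heard)
            {
                if (other.transform != transform)
                {
                    print("Heard something: " + other.name);
                }
            }
        }
    }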
Expanding AI through omniscience

In a nutshell, omniscience is really just a way to make your AI cheat. Your agent doesn't necessarily know everything; it simply means that it can know anything. In some ways, this can seem like the antithesis of realism, but often the simplest solution is the best solution. Allowing our agent access to seemingly hidden information about its surroundings or other entities in the game world can be a powerful tool for providing an extra layer of complexity.

In games, we tend to model abstract concepts using concrete values. For example, we may represent a player's health with a numeric value ranging from 0 to 100. Giving our agent access to this type of information allows it to make realistic decisions, even though having access to that information is not realistic. You can also think of omniscience as your agent being able to use the force, or to sense events in your game world without having to physically experience them. While omniscience is not necessarily a specific pattern or technique, it's another tool in your toolbox as a game developer: you cheat a bit and make your game more interesting by, in essence, bending the rules of AI and giving your agent data that it may not otherwise have had access to through physical means.

Getting creative with sensing

While cones, spheres, and lines are among the most basic ways an agent can see, hear, and perceive its environment, they are by no means the only ways to implement these senses. If your game calls for other types of sensing, feel free to combine these patterns. Want to use a cylinder or a sphere to represent a field of vision? Go for it. Want to use boxes to represent the sense of smell? Sniff away!

Using the tools at your disposal, come up with creative ways to model sensing in terms relative to your player. Combine different approaches to create unique gameplay mechanics for your games by mixing and matching these concepts. For example, a magic-sensitive but blind creature could completely ignore a character right in front of it until the character casts or receives the effect of a magic spell. Maybe certain NPCs can track the player using smell, and walking through a collider marked "water" clears the scent from the player so that the NPC can no longer track him. As you progress through the book, you'll be given all the tools to pull these and many other mechanics off: sensing, decision-making, pathfinding, and so on. As we cover some of these techniques, start thinking about creative twists for your game.

Setting up the scene

In order to get started with implementing the sensing system, you can jump right into the example provided for this article, or set up the scene yourself by following these steps:

1. Create a few barriers to block the line of sight from our AI character to the tank. These will be short but wide cubes grouped under an empty game object called Obstacles.
2. Add a plane to be used as a floor.
3. Add a directional light so that we can see what is going on in our scene.

As you can see in the example, there is a tank 3D model, which we use for our player, and we represent our AI agent using a simple cube. We will also have a Target object to show us where the tank will move to in our scene. For simplicity, our example provides a point light as a child of the Target so that we can easily see our target destination in the game view. Our scene hierarchy will look similar to the following screenshot after you've set everything up correctly.

Now we will position the tank, the AI character, and the walls randomly in our scene. Increase the size of the plane to something that looks good. Fortunately, in this demo, our objects float, so nothing will fall off the plane. Also, be sure to adjust the camera so that we have a clear view of the scene. With the essential setup out of the way, we can begin tackling the code that drives the various systems.

Setting up the player tank and aspect

Our Target object is a simple sphere game object with the mesh renderer removed, so that we end up with only the Sphere Collider. Look at the following code in the Target.cs file:

    using UnityEngine;

    public class Target : MonoBehaviour
    {
        public Transform targetMarker;

        void Start() { }

        void Update()
        {
            int button = 0;
            // Get the point of the hit position when the mouse is clicked
            if (Input.GetMouseButtonDown(button))
            {
                Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
                RaycastHit hitInfo;
                if (Physics.Raycast(ray.origin, ray.direction, out hitInfo))
                {
                    Vector3 targetPosition = hitInfo.point;
                    targetMarker.position = targetPosition;
                }
            }
        }
    }

You'll notice we left an empty Start method in the code. While there is a cost to having empty Start, Update, and other MonoBehaviour events that don't do anything, we can sometimes choose to leave the Start method in during development so that the component shows an enable/disable toggle in the inspector. Attach this script to our Target object, which is what we assigned in the inspector to the targetMarker variable.
The script detects the mouse click event and then, using a raycast, it detects the mouse click point on the plane in 3D space. After that, it updates the Target object to that position in world space in the scene. A raycast is a feature of the Unity Physics API that shoots a virtual ray from a given origin towards a given direction, and returns data on any colliders hit along the way.

Implementing the player tank

Our player tank is a simple tank model with a kinematic rigid body component attached. The rigid body component is needed in order to generate trigger events whenever we do collision detection with any AI characters. The first thing we need to do is assign the tag Player to our tank.

The isKinematic flag in Unity's Rigidbody component makes it so that external forces are ignored, so that you can control the Rigidbody entirely from code or from an animation, while still having access to the Rigidbody API.

The tank is controlled by the PlayerTank script, which we will create in a moment. This script retrieves the target position on the map and updates the tank's destination point and direction accordingly. The code in the PlayerTank.cs file is as follows:

    using UnityEngine;

    public class PlayerTank : MonoBehaviour
    {
        public Transform targetTransform;
        public float targetDistanceTolerance = 3.0f;

        private float movementSpeed;
        private float rotationSpeed;

        // Use this for initialization
        void Start()
        {
            movementSpeed = 10.0f;
            rotationSpeed = 2.0f;
        }

        // Update is called once per frame
        void Update()
        {
            if (Vector3.Distance(transform.position, targetTransform.position) < targetDistanceTolerance)
            {
                return;
            }

            Vector3 targetPosition = targetTransform.position;
            targetPosition.y = transform.position.y;
            Vector3 direction = targetPosition - transform.position;

            Quaternion tarRot = Quaternion.LookRotation(direction);
            transform.rotation = Quaternion.Slerp(transform.rotation, tarRot, rotationSpeed * Time.deltaTime);

            transform.Translate(new Vector3(0, 0, movementSpeed * Time.deltaTime));
        }
    }

The preceding screenshot shows a snapshot of our script in the inspector once applied to our tank. This script queries the position of the Target object on the map and updates the tank's destination point and direction accordingly. After we assign this script to our tank, be sure to assign our Target object to the targetTransform variable.

Implementing the Aspect class

Next, let's take a look at the Aspect.cs class. Aspect is a very simple class with just one public enum of type AspectTypes called aspectType. That's the only variable we need in this component. Whenever our AI character senses something, we'll check the aspectType to see whether it's the aspect that the AI has been looking for. The code in the Aspect.cs file looks like this:

    using UnityEngine;

    public class Aspect : MonoBehaviour
    {
        public enum AspectTypes
        {
            PLAYER,
            ENEMY,
        }

        public AspectTypes aspectType;
    }

Attach this aspect script to our player tank and set the aspectType to PLAYER, as shown in the following screenshot.

Creating an AI character

Our NPC will be roaming around the scene in a random direction. It'll have the following two senses:

- The perspective sense will check whether the tank aspect is within a set visible range and distance
- The touch sense will detect whether the enemy aspect has collided with its box collider, which we'll be adding in a later step

Because our player tank will have the PLAYER aspect type, the NPC will be looking for any aspectType not equal to its own.
The code in the Wander.cs file is as follows:

    using UnityEngine;

    public class Wander : MonoBehaviour
    {
        private Vector3 targetPosition;
        private float movementSpeed = 5.0f;
        private float rotationSpeed = 2.0f;
        private float targetPositionTolerance = 3.0f;
        private float minX;
        private float maxX;
        private float minZ;
        private float maxZ;

        void Start()
        {
            minX = -45.0f;
            maxX = 45.0f;
            minZ = -45.0f;
            maxZ = 45.0f;

            // Get Wander Position
            GetNextPosition();
        }

        void Update()
        {
            if (Vector3.Distance(targetPosition, transform.position) <= targetPositionTolerance)
            {
                GetNextPosition();
            }

            Quaternion targetRotation = Quaternion.LookRotation(targetPosition - transform.position);
            transform.rotation = Quaternion.Slerp(transform.rotation, targetRotation, rotationSpeed * Time.deltaTime);

            transform.Translate(new Vector3(0, 0, movementSpeed * Time.deltaTime));
        }

        void GetNextPosition()
        {
            targetPosition = new Vector3(Random.Range(minX, maxX), 0.5f, Random.Range(minZ, maxZ));
        }
    }

The Wander script is rather simplistic: it generates a new random position in a specified range whenever the AI character reaches its current destination point. The Update method then rotates our enemy and moves it toward the new destination. Attach this script to our AI character so that it can move around in the scene.

Using the Sense class

The Sense class is the interface of our sensory system that the other custom senses can implement. It defines two virtual methods, Initialize and UpdateSense, which will be implemented in custom senses and executed from the Start and Update methods, respectively. Virtual methods are methods that can be overridden using the override modifier in derived classes. Unlike abstract methods, virtual methods do not require you to override them. The code in the Sense.cs file looks like this:

    using UnityEngine;

    public class Sense : MonoBehaviour
    {
        public bool enableDebug = true;
        public Aspect.AspectTypes aspectName = Aspect.AspectTypes.ENEMY;
        public float detectionRate = 1.0f;

        protected float elapsedTime = 0.0f;

        protected virtual void Initialize() { }
        protected virtual void UpdateSense() { }

        // Use this for initialization
        void Start()
        {
            elapsedTime = 0.0f;
            Initialize();
        }

        // Update is called once per frame
        void Update()
        {
            UpdateSense();
        }
    }

The basic properties include the detection rate at which the sensing operation executes, as well as the name of the aspect it should look for. This script will not be attached to any of our objects, since we'll be deriving from it for our actual senses.

Giving a little perspective

The perspective sense will detect whether a specific aspect is within its field of view and visible distance. If it sees anything, it will take the specified action, which in this case is to print a message to the console.
The code in the Perspective.cs file looks like this:

    using UnityEngine;

    public class Perspective : Sense
    {
        public int fieldOfView = 45;
        public int viewDistance = 100;

        private Transform playerTransform;
        private Vector3 rayDirection;

        protected override void Initialize()
        {
            playerTransform = GameObject.FindGameObjectWithTag("Player").transform;
        }

        protected override void UpdateSense()
        {
            elapsedTime += Time.deltaTime;
            if (elapsedTime >= detectionRate)
            {
                DetectAspect();
            }
        }

        // Detect the perspective field of view for the AI character
        void DetectAspect()
        {
            RaycastHit hit;
            rayDirection = playerTransform.position - transform.position;

            if ((Vector3.Angle(rayDirection, transform.forward)) < fieldOfView)
            {
                // Detect if the player is within the field of view
                if (Physics.Raycast(transform.position, rayDirection, out hit, viewDistance))
                {
                    Aspect aspect = hit.collider.GetComponent<Aspect>();
                    if (aspect != null)
                    {
                        // Check the aspect
                        if (aspect.aspectType != aspectName)
                        {
                            print("Enemy Detected");
                        }
                    }
                }
            }
        }

We need to implement the Initialize and UpdateSense methods, which will be called from the Start and Update methods of the parent Sense class, respectively. In the DetectAspect method, we first check the angle between the player and the AI's current facing direction. If it's within the field of view range, we shoot a ray in the direction of the player tank. The ray length is the value of the visible distance property. The Raycast method returns as soon as it hits another object; this way, even if the player is within the visible range, the AI character will not be able to see the player if it's hidden behind a wall. We then check for an Aspect component; the detection only counts if the object that was hit has an Aspect component and its aspectType is different from the sense's own.

The OnDrawGizmos method draws lines based on the perspective field of view angle and viewing distance so that we can see the AI character's line of sight in the editor window during play testing. Attach this script to our AI character and be sure that the aspect type is set to ENEMY. The method looks like this (it closes out the Perspective class):

        void OnDrawGizmos()
        {
            if (playerTransform == null)
            {
                return;
            }

            Debug.DrawLine(transform.position, playerTransform.position, Color.red);

            Vector3 frontRayPoint = transform.position + (transform.forward * viewDistance);

            // Approximate perspective visualization
            Vector3 leftRayPoint = frontRayPoint;
            leftRayPoint.x += fieldOfView * 0.5f;

            Vector3 rightRayPoint = frontRayPoint;
            rightRayPoint.x -= fieldOfView * 0.5f;

            Debug.DrawLine(transform.position, frontRayPoint, Color.green);
            Debug.DrawLine(transform.position, leftRayPoint, Color.green);
            Debug.DrawLine(transform.position, rightRayPoint, Color.green);
        }
    }

Touching is believing

The next sense we'll be implementing is Touch.cs, which triggers when the player tank entity is within a certain area near the AI entity. Our AI character has a box collider component with its IsTrigger flag turned on. We need to implement the OnTriggerEnter event, which is called whenever another collider enters the collision area of this game object's collider. Since our tank entity also has collider and rigid body components, collision events will be raised as soon as the colliders of the AI character and the player tank overlap.

Unity provides two other trigger events besides OnTriggerEnter: OnTriggerExit and OnTriggerStay. Use these to detect when a collider leaves a trigger, and to fire off every frame that a collider is inside the trigger, respectively.
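For reference, a minimal sketch of those two callbacks inside a MonoBehaviour attached to a trigger collider might look like this (the messages are placeholders, not part of the demo):

    void OnTriggerExit(Collider other)
    {
        // Called once when another collider stops overlapping this trigger
        print("Collider left the sensor area: " + other.name);
    }

    void OnTriggerStay(Collider other)
    {
        // Called every physics update while another collider stays inside this trigger
        print("Collider still inside the sensor area: " + other.name);
    }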
The code in the Touch.cs file is as follows:

    using UnityEngine;

    public class Touch : Sense
    {
        void OnTriggerEnter(Collider other)
        {
            Aspect aspect = other.GetComponent<Aspect>();
            if (aspect != null)
            {
                // Check the aspect
                if (aspect.aspectType != aspectName)
                {
                    print("Enemy Touch Detected");
                }
            }
        }
    }

Our sample NPC and tank already have BoxCollider components on them, and the NPC has its sensor collider set to IsTrigger = true. If you're setting up the scene on your own, make sure you add the BoxCollider component yourself, and that it covers a wide enough area to trigger easily for testing purposes. Our trigger can be seen in the following screenshot.

The previous screenshot shows the box collider on our enemy AI that we'll use to trigger the touch sense event. In the following screenshot, we can see how our AI character is set up. For demo purposes, we just print a message saying that the enemy aspect has been detected by the touch sense, but in your own games you can implement any events and logic that you want.

Testing the results

Hit play in the Unity editor and move the player tank near the wandering AI NPC by clicking on the ground to direct the tank to the clicked location. You should see the "Enemy Touch Detected" message in the console log window whenever our AI character gets close to our player tank. The previous screenshot shows an AI agent with touch and perspective senses looking for another aspect. Move the player tank in front of the NPC, and you'll get the "Enemy Detected" message. If you switch to the editor view while running the game, you should see the debug lines being rendered. This is because of the OnDrawGizmos method implemented in the Perspective sense class.

To summarize, we introduced the concept of using sensors and implemented two distinct senses, perspective and touch, for our AI character. If you enjoyed this excerpt, check out the book Unity 2017 Game AI Programming - Third Edition to explore the brand-new features in Unity 2017.

How to use arrays, lists, and dictionaries in Unity for 3D game development
How to create non-player characters (NPC) with Unity 2018

Implementing fuzzy logic to bring AI characters alive in Unity based 3D games

Kunal Chaudhari
01 Jun 2018
16 min read
Fuzzy logic is a fantastic way to represent the rules of your game in a more nuanced way. Perhaps more so than other concepts, fuzzy logic is a very math-heavy topic; most of the information can be represented purely by mathematical functions. For the sake of teaching the important concepts as they apply to Unity, most of the math has been simplified and implemented using Unity's built-in features. In this tutorial, we will take a look at the concepts behind fuzzy logic systems and implement them in your AI system. Implementing fuzzy logic will make your game characters more believable and better reflect real-world attributes.

This article is an excerpt from a book written by Ray Barrera, Aung Sithu Kyaw, and Thet Naing Swe titled Unity 2017 Game AI Programming - Third Edition. This book will help you leverage the power of artificial intelligence to program smart entities for your games.

Defining fuzzy logic

The simplest way to define fuzzy logic is by comparison to binary logic. In binary logic, rules are evaluated as true or false, 0 or 1, values. Is something visible? Is it at least a certain distance away? Even in instances where multiple values are evaluated, all of the values have exactly two outcomes; thus, they are binary. In contrast, fuzzy values represent a much richer range of possibilities, where each value is represented as a float rather than an integer. We stop looking at values as 0 or 1, and we start looking at them as 0 to 1.

A common example used to describe fuzzy logic is temperature. Fuzzy logic allows us to make decisions based on non-specific data. I can step outside on a sunny Californian summer's day and ascertain that it is warm, without knowing the temperature precisely. Conversely, if I were to find myself in Alaska during the winter, I would know that it is cold, again without knowing the exact temperature. These concepts of cold, cool, warm, and hot are fuzzy ones. There is a good amount of ambiguity as to the point at which we go from warm to hot. Fuzzy logic allows us to model these concepts as sets and determine their validity, or truth, by using a set of rules.

When people make decisions, there are gray areas. That is to say, it's not always black and white. The same concept applies to agents that rely on fuzzy logic. Say you hadn't eaten in a few hours, and you were starting to feel a little hungry. At which point are you hungry enough to go grab a snack? You could look at the time right after a meal as 0, and 1 would be the point where you approach starvation. The following figure illustrates this point.

When making decisions, there are many factors that determine the ultimate choice. This leads into another aspect of fuzzy logic controllers: they can take into account as much data as necessary. Let's continue to look at our "should I eat?" example. We've only considered one value for making that decision, which is the time since the last meal. However, there are other factors that can affect this decision, such as how much energy you're expending and how lazy you are at that particular moment. Or am I the only one to use that as a deciding factor? Either way, you can see how multiple input values can affect the output, which we can think of as the "likeliness to have another meal."

Fuzzy logic systems can be very flexible due to their generic nature. You provide input, the fuzzy logic provides an output. What that output means to your game is entirely up to you.
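Going back to the hunger example for a moment, a tiny sketch of turning it into a fuzzy value might look like this; the script name and the 12-hour figure are illustrative assumptions, not values from the book:

    using UnityEngine;

    public class HungerSketch : MonoBehaviour
    {
        public float hoursSinceLastMeal = 3f;  // illustrative input
        public float starvationHours = 12f;    // assumed point treated as "approaching starvation"

        float HungerDegree()
        {
            // 0 right after a meal, 1 as we approach starvation, fuzzy everywhere in between
            return Mathf.Clamp01(hoursSinceLastMeal / starvationHours);
        }
    }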
We've primarily looked at how the inputs affect a decision, which, in reality, is taking the output and using it in a way the computer, our agent, can understand. However, the output can also be used to determine how much of something to do, how fast something happens, or for how long something happens. For example, imagine your agent is a car in a sci-fi racing game that has a "nitro-boost" ability that lets it expend a resource to go faster. Our 0 to 1 value can represent a normalized amount of time for it to use that boost, or perhaps a normalized amount of fuel to use.

Picking fuzzy systems over binary systems

As with most things in game programming, we must evaluate the requirements of our game and the technology and hardware limitations when deciding on the best way to tackle a problem. As you might imagine, there is a performance cost associated with going from a simple yes/no system to a more nuanced fuzzy logic one, which is one of the reasons we may opt out of using it. Of course, being a more complex system doesn't necessarily always mean it's a better one. There will be times when you just want the simplicity and predictability of a binary system because it may fit your game better.

While there is some truth to the old adage, "the simpler, the better", one should also take into account the saying, "everything should be made as simple as possible, but not simpler". Though the quote is widely attributed to Albert Einstein, the father of relativity, it's not entirely clear who said it. The important thing to consider is the meaning of the quote itself: you should make your AI as simple as your game needs it to be, but not simpler. Pac-Man's AI works perfectly for that game, because it's simple enough; however, AI that simple would be out of place in a modern shooter or strategy game.

Using fuzzy logic

Once you understand the simple concepts behind fuzzy logic, it's easy to start thinking of the many ways in which it can be useful. In reality, it's just another tool in our belt, and each job requires different tools. Fuzzy logic is great at taking some data, evaluating it in a similar way to how a human would (albeit in a much simpler way), and then translating the data back into information that is usable by the system.

Fuzzy logic controllers have several real-world use cases. Some are more obvious than others, and while these are by no means one-to-one comparisons to our usage in game AI, they serve to illustrate a point:

- Heating, ventilation, and air conditioning (HVAC) systems: The temperature example is not only a good theoretical approach to explaining fuzzy logic, but also a very common real-world example of fuzzy logic controllers in action.
- Automobiles: Modern automobiles come equipped with very sophisticated computerized systems, from the air conditioning system (again), to fuel delivery, to automated braking systems. In fact, putting computers in automobiles has resulted in far more efficient systems than the old binary systems that were sometimes used.
- Your smartphone: Ever notice how your screen dims and brightens depending on how much ambient light there is? Modern smartphone operating systems look at ambient light, the color of the data being displayed, and the current battery life to optimize screen brightness.
- Washing machines: Not my washing machine necessarily, as it's quite old, but most modern washers (from the last 20 years) make some use of fuzzy logic. Load size, water dirtiness, temperature, and other factors are taken into account from cycle to cycle to optimize water use, energy consumption, and time.
If you take a look around your house, there is a good chance you'll find a few interesting uses of fuzzy logic, and I mean besides your computer, of course. While these are neat uses of the concept, they're not particularly exciting or game-related. I'm partial to games involving wizards, magic, and monsters, so let's look at a more relevant example.

Implementing a simple fuzzy logic system

For this example, we're going to use my good friend Bob, the wizard. Bob lives in an RPG world, and he has some very powerful healing magic at his disposal. Bob has to decide when to cast this magic on himself based on his remaining health points (HPs). In a binary system, Bob's decision-making process might look like this:

    if (healthPoints <= 50)
    {
        CastHealingSpell(me);
    }

We see that Bob's health can be in one of two states: above 50, or not. Nothing wrong with that, but let's have a look at what the fuzzy version of this same scenario might look like, starting with determining Bob's health status.

Before the panic sets in upon seeing charts and values that may not quite mean anything to you right away, let's dissect what we're looking at. Our first impulse might be to try to map the probability that Bob will cast a healing spell to how much health he is missing. That would, in simple terms, just be a linear function. Nothing really fuzzy about that; it's a linear relationship, and while it is a step above a binary decision in terms of complexity, it's still not truly fuzzy.

Enter the concept of a membership function. It's key to our system, as it allows us to determine how true a statement is. In this example, we're not simply looking at raw values to determine whether or not Bob should cast his spell; instead, we're breaking the value up into logical chunks of information for Bob to use in order to determine his course of action. In this example, we're comparing three statements and evaluating not only how true each one is, but which is the most true:

- Bob is in a critical condition
- Bob is hurt
- Bob is healthy

If you're into official terminology, we call this determining the degree of membership to a set. Once we have this information, our agent can determine what to do with it next. At a glance, you'll notice it's possible for two statements to be true at a time. Bob can be in a critical condition and hurt. He can also be somewhat hurt and a little bit healthy. You're free to pick the thresholds for each, but, in this example, let's evaluate these statements as per the preceding graph. The vertical axis represents the degree of truth of a statement as a normalized float (0 to 1):

- At 0 percent health, the critical statement evaluates to 1. It is absolutely true that Bob is critical when his health is gone.
- At 40 percent health, Bob is hurt, and that is the truest statement.
- At 100 percent health, the truest statement is that Bob is healthy.

Anything outside of these absolutely true statements is squarely in fuzzy territory. For example, let's say Bob's health is at 65 percent. We can visualize this as a vertical line drawn through the chart at 65, representing Bob's health. As we can see, it intersects both sets, which means that Bob is a little bit hurt, but he's also kind of healthy. At a glance, we can tell, however, that the vertical line intercepts the Hurt set at a higher point in the graph. We can take this to mean that Bob is more hurt than he is healthy. To be specific, Bob is 37.5 percent hurt, 12.5 percent healthy, and 0 percent critical.
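The exact shapes of the three sets aren't spelled out in the text, but one choice of straight-line (triangular) sets that reproduces those numbers is sketched below; the vertex values are assumptions for illustration:

    using UnityEngine;

    public class BobMembershipSketch : MonoBehaviour
    {
        // Assumed set shapes, chosen to match the values quoted above:
        // Critical falls from 1 at 0 HP to 0 at 40 HP,
        // Hurt rises from 0 at 0 HP to 1 at 40 HP and falls back to 0 at 80 HP,
        // Healthy rises from 0 at 60 HP to 1 at 100 HP.
        float CriticalDegree(float hp) { return Mathf.Clamp01((40f - hp) / 40f); }
        float HurtDegree(float hp)     { return Mathf.Clamp01(hp <= 40f ? hp / 40f : (80f - hp) / 40f); }
        float HealthyDegree(float hp)  { return Mathf.Clamp01((hp - 60f) / 40f); }

        void Start()
        {
            float hp = 65f;
            print(CriticalDegree(hp)); // (40 - 65) / 40 clamps to 0  -> 0 percent critical
            print(HurtDegree(hp));     // (80 - 65) / 40 = 0.375      -> 37.5 percent hurt
            print(HealthyDegree(hp));  // (65 - 60) / 40 = 0.125      -> 12.5 percent healthy
        }
    }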
Let's take a look at this in code. Open up the FuzzySample scene in Unity. The hierarchy will look like this:

The important game object to look at is Fuzzy Example. This contains the logic that we'll be looking at. In addition to that, we have our Canvas containing all of the labels, the input field, and the button that make this example work. Lastly, there's the Unity-generated EventSystem and Main Camera, which we can disregard. There isn't anything special going on with the setup for the scene, but it's a good idea to become familiar with it, and you are encouraged to poke around and tweak it to your heart's content after we've looked at why everything is there and what it all does. With the Fuzzy Example game object selected, the inspector will look similar to the following image:

Our sample implementation is not necessarily something you'll take and implement in your game as-is, but it is meant to illustrate the previous points in a clear manner. We use Unity's AnimationCurve for each different set. It's a quick and easy way to visualize the very same lines in our earlier graph. Unfortunately, there is no straightforward way to plot all the lines in the same graph, so we use a separate AnimationCurve for each set. In the preceding screenshot, they are labeled Critical, Hurt, and Healthy. The neat thing about these curves is that they come with a built-in method to evaluate them at a given point (t). For us, t does not represent time, but rather the amount of health Bob has. As in the preceding graph, the Unity example looks at an HP range of 0 to 100.

These curves also provide a simple user interface for editing the values. You can simply click on the curve in the inspector, which opens up the curve editing window. You can add points, move points, change tangents, and so on, as shown in the following screenshot:

Unity's curve editor window

Our example focuses on triangle-shaped sets, that is, linear graphs for each set. You are by no means restricted to this shape, though it is the most common. You could use a bell curve or a trapezoid, for that matter. To keep things simple, we'll stick to the triangle. You can learn more about Unity's AnimationCurve editor at http://docs.unity3d.com/ScriptReference/AnimationCurve.html.

The rest of the fields are just references to the different UI elements used in the code that we'll be looking at later in this article. The names of these variables are fairly self-explanatory, however, so there isn't much guesswork to be done here.

Next, we can take a look at how the scene is set up. If you play the scene, the game view will look something similar to the following screenshot:

A simple UI to demonstrate fuzzy values

We can see that we have three distinct groups, representing each question from the "Bob, the wizard" example: how healthy is Bob, how hurt is Bob, and how critical is Bob? For each set, upon evaluation, the value that starts off as "0 true" will dynamically adjust to represent the actual degree of membership. There is an input box in which you can type a percentage of health to use for the test. No fancy controls are in place for this, so be sure to enter a value from 0 to 100. For the sake of consistency, let's enter a value of 65 into the box and then press the Evaluate! button. This will run some code, look at the curves, and yield the exact same results we saw in our graph earlier.
While this shouldn't come as a surprise (the math is what it is, after all), few things are more important in game programming than testing your assumptions, and sure enough, we've tested and verified our earlier statement. After running the test by hitting the Evaluate! button, the game scene will look similar to the following screenshot:

This is how Bob is doing at 65 percent health

Again, the values turn out to be 0.125 (or 12.5 percent) healthy and 0.375 (or 37.5 percent) hurt. At this point, we're still not doing anything with this data, but let's take a look at the code that's handling everything:

    using UnityEngine;
    using UnityEngine.UI;
    using System.Collections;

    public class FuzzySample1 : MonoBehaviour
    {
        private const string labelText = "{0} true";

        public AnimationCurve critical;
        public AnimationCurve hurt;
        public AnimationCurve healthy;

        public InputField healthInput;

        public Text healthyLabel;
        public Text hurtLabel;
        public Text criticalLabel;

        private float criticalValue = 0f;
        private float hurtValue = 0f;
        private float healthyValue = 0f;

We start off by declaring some variables. The labelText is simply a constant we use to plug into our labels; we replace {0} with the real value. Next, we declare the three AnimationCurve variables that we mentioned earlier. Making these public or otherwise accessible from the inspector is key to being able to edit them visually (though it is possible to construct curves in code), which is the whole point of using them. The following four variables are just references to the UI elements that we saw earlier in the screenshot of our inspector, and the last three variables are the actual float values that our curves will evaluate into:

        private void Start()
        {
            SetLabels();
        }

        /*
         * Evaluates all the curves and returns float values
         */
        public void EvaluateStatements()
        {
            if (string.IsNullOrEmpty(healthInput.text))
            {
                return;
            }
            float inputValue = float.Parse(healthInput.text);

            healthyValue = healthy.Evaluate(inputValue);
            hurtValue = hurt.Evaluate(inputValue);
            criticalValue = critical.Evaluate(inputValue);

            SetLabels();
        }

The Start() method doesn't require much explanation. We simply update our labels here so that they initialize to something other than the default text. The EvaluateStatements() method is much more interesting. We first do some simple null checking for our input string. We don't want to try to parse an empty string, so we return out of the function if it is empty. As mentioned earlier, there is no check in place to validate that you've entered a numerical value, so be sure not to accidentally input a non-numerical value or you'll get an error.

For each of the AnimationCurve variables, we call the Evaluate(float t) method, where we replace t with the parsed value we get from the input field. In the example we ran, that value would be 65. Then, we update our labels once again to display the values we got. The code looks similar to this:

        /*
         * Updates the GUI with the evaluated values based
         * on the health percentage entered by the user.
         */
        private void SetLabels()
        {
            healthyLabel.text = string.Format(labelText, healthyValue);
            hurtLabel.text = string.Format(labelText, hurtValue);
            criticalLabel.text = string.Format(labelText, criticalValue);
        }
    }

We simply take each label and replace its text with a formatted version of our labelText constant, substituting the real value for {0}.

To summarize, we learned how fuzzy logic is used in the real world, and how it can help illustrate vague concepts in a way binary systems cannot.
We also learned to implement our own fuzzy logic controllers using the concepts of membership functions, degrees of membership, and fuzzy sets. If you enjoyed this excerpt, check out the book Unity 2017 Game AI Programming - Third Edition, to build exciting and richer games by mastering advanced artificial intelligence concepts in Unity.

Unity Machine Learning Agents: Transforming Games with Artificial Intelligence
Put your game face on! Unity 2018.1 is now available
How to create non-player characters (NPC) with Unity 2018

Working with Unity Variables to script powerful Unity 2017 games

Amarabha Banerjee
23 May 2018
12 min read
In this tutorial, you will learn how to work with the different variables available with the Unity 2017 platform. We will show you how to use these variables through use cases in order to script powerful Unity games. This article is an excerpt from the book Learning C# by Developing Games with Unity 2017, written by Micael DaGraca and Greg Lukosek.

Unity, the most popular game engine of our generation, is a preferred choice among game developers due to the flexibility it provides to code and script a game in C#. To understand and leverage the power of C# in your games, it is utterly necessary to get a proper understanding of how C# coding works. We are going to show you exactly that in the sections given below.

Writing C# statements properly

When you do normal writing, it's in the form of a sentence, with a period used to end the sentence. When you write a line of code, it's called a statement, with a semicolon used to end the statement. This is necessary because the console reads the code one statement at a time, and the semicolon tells the console that the statement is over and that it can jump to the next one. (This is happening so fast that it looks like the computer is reading all of them at the same time, but it isn't.) When we start learning how to code, forgetting about this detail is very common, so don't forget to check for this error if the code isn't working.

The code for a C# statement does not have to be on a single line, as shown in the following example:

    public int number1 = 2;

The statement can be spread over several lines. Whitespace and carriage returns are ignored, so, if you really want to, you can write it as follows:

    public
    int
    number1 = 2;

However, I do not recommend writing your code like this, because it's terrible to read code that is formatted like the preceding example. Nevertheless, there will be times when you'll have to write statements that are longer than one line. Unity won't care; it just needs to see the semicolon at the end.

Understanding component properties in Unity's Inspector

GameObjects have components that make them behave in a certain way. For instance, select Main Camera and look at the Inspector panel. One of the components is the camera. Without that component, it would cease being a camera. It would still be a GameObject in your scene, just no longer a functioning camera.

Variables become component properties

When we refer to components, we are basically referring to the available functions of a GameObject. For example, the human body has many functions, such as talking, moving, and observing. Now let's say that we want the human body to move faster. What is the function linked to that action? Movement. So, in order to make our body move faster, we would need to create a script that had access to the movement component and then use that to make the body move faster. Just like in real life, different GameObjects can also have different components; for example, the camera component can only be accessed from a camera. There are plenty of components that already exist, created by Unity's programmers, but we can also write our own. This means that all the properties that we see in the Inspector are just variables of some type. They simply store data that will be used by some method.
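For example, a component like the following sketch (the two variables match the LearningScript example used later in this article; the rest of the script is assumed) exposes two editable properties in the Inspector:

    using UnityEngine;

    public class LearningScript : MonoBehaviour
    {
        public int number1 = 2;  // appears in the Inspector as "Number 1"
        public int number2 = 9;  // appears in the Inspector as "Number 2"
    }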
Unity changes script and variable names slightly

When we create a script, one of the first things that we need to do is give the script a name, and it's always good practice to use a name that identifies the content of the script. For example, if we are creating a script that is used to control the player movement, ideally that would be the name of the script. The best practice is to write playerMovement, where the first word is uncapitalized and the second one is capitalized. This is the standard way Unity developers name scripts and variables.

Now let's say that we created a script named playerMovement. After assigning that script to a GameObject, we'll see in the Inspector panel that Unity adds a space to separate the words of the name: Player Movement. Unity does this modification to variable names too, where, for example, a variable named number1 is shown as Number 1 and number2 as Number 2. Unity capitalizes the first letter as well. These changes improve readability in the Inspector.

Changing a property's value in the Inspector panel

There are two situations where you can modify a property value:

- During the Play mode
- During the development stage (not in the Play mode)

When you are in the Play mode, you will see that your changes take effect immediately, in real time. This is great when you're experimenting and want to see the results. Write down any changes that you want to keep, because when you stop the Play mode, any changes you made will be lost.

When you are in the development mode, changes that you make to the property values will be saved by Unity. This means that if you quit Unity and start it again, the changes will be retained. Of course, you won't see the effect of your changes until you click Play.

The changes that you make to the property values in the Inspector panel do not modify your script. The only way your script can be changed is by you editing it in the script editor (MonoDevelop). The values shown in the Inspector panel override any values you might have assigned in your script. If you want to undo the changes you've made in the Inspector panel, you can reset the values to the defaults assigned in your script. Click on the cog icon (the gear) on the far right of the component script, and then select Reset, as shown in the following screenshot.

Displaying public variables in the Inspector panel

You might still be wondering what the word public at the beginning of a variable statement means:

    public int number1 = 2;

We mentioned it before: it means that the variable will be visible and accessible. It will be visible as a property in the Inspector panel so that you can manipulate the value stored in the variable. The word also means that it can be accessed from other scripts using the dot syntax.

Private variables

Not all variables need to be public. If there's no need for a variable to be changed in the Inspector panel or accessed from other scripts, it doesn't make sense to clutter the Inspector panel with needless properties. In the LearningScript, perform the following steps:

1. Change line 6 to this: private int number1 = 2;
2. Then change line 7 to the following: int number2 = 9;
3. Save the file
4. In Unity, select Main Camera

You will notice in the Inspector panel that both properties, Number 1 and Number 2, are gone.

Line 6: private int number1 = 2;

The preceding line explicitly states that the number1 variable has to be private. Therefore, the variable is no longer a property in the Inspector panel.
It is now a private variable for storing data.

Line 7: int number2 = 9;

The number2 variable is no longer visible as a property either, but you didn't specify it as private. If you don't explicitly state whether a variable will be public or private, the variable will implicitly be private in C# by default. It is good coding practice to explicitly state whether a variable will be public or private.

So now, when you click Play, the script works exactly as it did before. You just can't manipulate the values manually in the Inspector panel anymore.

Naming Unity variables properly

As we explored previously, naming a script or variable is a very important step. It won't change the way that the code runs, but it will help us to stay organized and, by using best practices, we avoid errors and save time trying to find the piece of code that isn't working.

Always use meaningful names for your variables. If you don't do that, six months down the line, you will be lost. I'm going to exaggerate here a bit to make a point. Let's say you name a variable as shown in this code:

    public bool areRoadConditionsPerfect = true;

That's a descriptive name. In other words, you know what it means just by reading the variable. So 10 years from now, when you look at that name, you'll know exactly what I meant.

Now suppose that instead of areRoadConditionsPerfect, you had named this variable as shown in the following code:

    public bool perfect = true;

Sure, you know what perfect is, but would you know that it refers to perfect road conditions? I know that right now you'll understand it because you just wrote it, but six months down the line, after writing hundreds of other scripts for all sorts of different projects, you'll look at this word and wonder what you meant. You'll have to read several lines of the code you wrote to try to figure it out. You may look at the code and wonder who in their right mind would write such terrible code.

So, take your time to write descriptive code that even a stranger can look at and understand. Believe me, in six months, or probably less, you will be that stranger. Using meaningful names for variables and methods is helpful not only for you but also for any other game developer who will be reading your code. Whether or not you work in a team, you should always write easy-to-read code.

Beginning variable names with lowercase

You should begin a variable name with a lowercase letter because it helps distinguish between a class name and a variable name in your code. There are some other guidelines in the C# documentation as well, but we don't need to worry about them at this stage. Component names (class names) begin with an uppercase letter. For example, it's easy to know that Transform is a class and transform is a variable. There are, of course, exceptions to this general rule, and every programmer has a preferred way of using lowercase, uppercase, and perhaps an underscore to begin a variable name. In the end, you will have to decide upon a naming convention that you like. If you read the Unity forums, you will notice that there are some heated debates on naming variables. In this book, I will show you my preferred way, but you can use whatever is more comfortable for you.

Using multiword variable names

Let's use the same example again, as follows:

    public bool areRoadConditionsPerfect = true;

You can see that the variable name is actually four words squeezed together.
Since variable names can be only one word, begin the first word with a lowercase letter and then capitalize the first letter of every additional word. This greatly helps create descriptive names that the reader is still able to parse. There's a term for this: it's called camel casing. I have already mentioned that, for public variables, Unity's Inspector will separate each word and capitalize the first one. Go ahead! Add the previous statement to the LearningScript and see what Unity does with it in the Inspector panel.

Declaring a variable and its type

Every variable that we want to use in a script must be declared in a statement. What does that mean? Well, before Unity can use a variable, we have to tell Unity about it first. Okay then, what are we supposed to tell Unity about the variable? There are only three absolute requirements for declaring a variable, and they are as follows:

- We have to specify the type of data that the variable can store
- We have to provide a name for the variable
- We have to end the declaration statement with a semicolon

The following is the syntax we use to declare a variable:

    typeOfData nameOfTheVariable;

Let's use one of the LearningScript variables as an example; the following is how we declare a variable with the bare minimum requirements:

    int number1;

This is what we have:

- Requirement #1 is the type of data that number1 can store, which in this case is an int, meaning an integer
- Requirement #2 is the name, which is number1
- Requirement #3 is the semicolon at the end

The second requirement of naming a variable has already been discussed, as has the third requirement of ending a statement with a semicolon. The first requirement, specifying the type of data, will be covered next. The following is what we know about this bare minimum declaration as far as Unity is concerned:

- There's no public modifier, which means it's private by default
- It won't appear in the Inspector panel or be accessible from other scripts
- The value stored in number1 defaults to zero

We discussed working with Unity 2017 variables and how you can start using them to create fun-filled games effectively. If you liked this article, be sure to go through the book Learning C# by Developing Games with Unity 2017 to create exciting games with C# and Unity 2017.

Read More
Unity 2D & 3D game kits simplify Unity game development for beginners
Build a Virtual Reality Solar System in Unity for Google Cardboard
Unity Machine Learning Agents: Transforming Games with Artificial Intelligence

How to use arrays, lists, and dictionaries in Unity for 3D game development

Amarabha Banerjee
16 May 2018
14 min read
A key ingredient in scripting 3D games with Unity is the ability to work with C# to create arrays, lists, objects, and dictionaries within the Unity platform. In this tutorial, we help you get started with creating arrays, lists, and dictionaries effectively. This article is an excerpt from Learning C# by Developing Games with Unity 2017.

An array stores a sequential collection of values of the same type, in the simplest terms. We can use arrays to store lists of values in a single variable. Imagine we want to store a number of student names. Simple! Just create a few variables and name them student1, student2, and so on:

    public string student1 = "Greg";
    public string student2 = "Kate";
    public string student3 = "Adam";
    public string student4 = "Mia";

There's nothing wrong with this. We can print them and assign new values to them. The problem starts when you don't know how many student names you will be storing; the numbered variable names suggest that the list will keep growing. There is a much cleaner way of storing lists of data. Let's store the same names using a C# array variable type:

    public string[] familyMembers = new string[] { "Greg", "Kate", "Adam", "Mia" };

As you can see, all the preceding values are stored in a single variable called familyMembers.

Declaring an array

To declare a C# array, you must first say what type of data will be stored in the array. As you can see in the preceding example, we are storing strings of characters. After the type, we have an open square bracket and then immediately a closed square bracket, [ ]. This makes the variable an actual array. We also need to declare the size of the array, which simply means how many places there are in our variable to be accessed. The minimum code required to declare a variable looks similar to this:

    public string[] myArrayName = new string[4];

The array size is set during assignment. As you have learned before, all code after the variable declaration and the equals sign is an assignment. To assign empty values to all places in the array, simply write the new keyword followed by the type, an open square bracket, a number describing the size of the array, and then a closed square bracket. If you feel confused, give yourself a bit more time; you will fully understand why arrays are helpful. Take a look at the following examples of arrays; don't worry about testing how they work yet:

    string[] familyMembers = new string[] { "John", "Amanda", "Chris", "Amber" };
    string[] carsInTheGarage = new string[] { "VWPassat", "BMW" };
    int[] doorNumbersOnMyStreet = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 };
    GameObject[] carsInTheScene = GameObject.FindGameObjectsWithTag("car");

As you can see, we can store different types of data as long as the elements in the array are of the same type. You are probably wondering why the last example, shown here, looks different:

    GameObject[] carsInTheScene = GameObject.FindGameObjectsWithTag("car");

In fact, we are just declaring a new array variable to store a collection of GameObjects in the scene that use the "car" tag. Jump into the Unity scripting documentation and search for GameObject.FindGameObjectsWithTag. As you can see, GameObject.FindGameObjectsWithTag is a special built-in Unity function that takes a string parameter (a tag) and returns an array of GameObjects using this tag.

Storing items in a List

Using a List instead of an array can be much easier to work with in a script. Look at some forum sites related to C# and Unity, and you'll discover that plenty of programmers simply don't use an array unless they have to; they prefer to use a List. It is up to the developer's preference and task. Let's stick to Lists for now. Here are the basics of why a List is better and easier to use than an array:

- An array is of fixed size and unchangeable
- The size of a List is adjustable
- You can easily add and remove elements from a List
- To mimic adding a new element to an array, we would need to create a whole new array with the desired number of elements and then copy the old elements across

The first thing to understand is that a List has the ability to store any type of object, just like an array. Also, like an array, we must specify which type of object we want a particular List to store. This means that if you want a List of integers of the int type, then you can create a List that will store only the int type. Let's go back to the first array example and store the same data in a List. To use a List in C#, you need to add the following line at the beginning of your script:

    using System.Collections.Generic;

As you can see, using Lists is slightly different from using arrays. Line 9 is the declaration and assignment of the familyMembers List (the script this refers to is sketched below). When declaring the List, you must give the type of object that it will be storing; simply write the type between the < > characters. In this case, we are using string. As we are adding the actual elements later, in lines 14 to 17, instead of assigning elements in the declaration line, we need to assign an empty List to be stored temporarily in the familyMembers variable. Confused? If so, just take a look at the right-hand side of the equals sign on line 9. This is how you create a new instance of the List for a given type, string for this example:

    new List<string>();

Lines 14 to 17 are very simple to understand. Each line adds an object at the end of the List, passing the string value in the parentheses. In various documentation, Lists of a type look like this: List<T>. Here, T stands for the type of data. This simply means that you can insert any type in place of T and the List will become a List of that specific type. From now on, we will be using it.
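The script behind those line-number references came from a screenshot that isn't reproduced in this extract. A minimal sketch consistent with the description is shown below; the class name and exact line positions are assumptions:

    using System.Collections.Generic;
    using UnityEngine;

    public class ListExample : MonoBehaviour
    {
        // Line 9 (approximately): declare the List and assign an empty instance to it
        public List<string> familyMembers = new List<string>();

        void Start()
        {
            // Lines 14 to 17 (approximately): add each element to the end of the List
            familyMembers.Add("Greg");
            familyMembers.Add("Kate");
            familyMembers.Add("Adam");
            familyMembers.Add("Mia");
        }
    }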
Storing items in a List

Using a List instead of an array can be much easier to work with in a script. Look at some forum sites related to C# and Unity, and you'll discover that plenty of programmers simply don't use an array unless they have to; they prefer to use a List. It comes down to the developer's preference and the task at hand. Let's stick to Lists for now. Here are the basics of why a List is better and easier to use than an array:

An array is of fixed size and unchangeable
The size of a List is adjustable
You can easily add and remove elements from a List
To mimic adding a new element to an array, we would need to create a whole new array with the desired number of elements and then copy over the old elements

The first thing to understand is that a List can store any type of object, just like an array. Also, like an array, we must specify which type of object we want a particular List to store. This means that if you want a List of integers of the int type, then you can create a List that will store only the int type. To use a List in C#, you need to add the following line at the beginning of your script:

using System.Collections.Generic;

Using a List is slightly different from using an array. Declaring a List requires the type of the objects you will be storing in it: simply write the type between the < > characters. In this case, we are using string. Because the actual elements are added later, instead of assigning elements in the declaration line, we assign an empty List to be stored temporarily in the familyMembers variable. Confused? If so, just look at the right-hand side of the equals sign in the declaration. This is how you create a new instance of the List for a given type, string for this example:

new List<string>();

After the declaration, each element is added by calling Add on the List and passing the string value in the parentheses. In various documentation, Lists of a type look like this: List<T>. Here, T stands for the type of data. This simply means that you can insert any type in place of T and the List will become a list of that specific type. From now on, we will be using this notation.
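The original book shows this example as a screenshot; the following sketch is a reconstruction of it, so the class name and the exact layout are assumptions:

using System.Collections.Generic;
using UnityEngine;

public class ListExample : MonoBehaviour
{
    // Declare and assign an empty List of strings.
    public List<string> familyMembers = new List<string>();

    void Start()
    {
        // Add each name at the end of the List.
        familyMembers.Add("Greg");
        familyMembers.Add("Kate");
        familyMembers.Add("Adam");
        familyMembers.Add("Mia");
    }
}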
Common operations with Lists

List<T> is very easy to use, and there is a huge set of operations that you can perform with it. We have already spoken about adding an element at the end of a List. Very briefly, let's look at the common ones that we will most likely be using at later stages:

Add: This adds an object at the end of List<T>.
Remove: This removes the first occurrence of a specific object from List<T>.
Clear: This removes all elements from List<T>.
Contains: This determines whether an element is in List<T> or not. It is very useful for checking whether an element is stored in the list.
Insert: This inserts an element into List<T> at the specified index.
ToArray: This copies the elements of List<T> to a new array.

You don't need to understand all of these at this stage. All I want you to know is that there are many out-of-the-box operations that you can use. If you want to see them all, I encourage you to dive into the C# documentation and search for the List<T> class.

List<T> versus arrays

Now you are probably thinking, "Okay, which one should I use?" There isn't a general rule for this. Arrays and List<T> can serve the same purpose, and you can find plenty of material online arguing for one or the other. Arrays are generally faster, but for what we are doing at this stage, we don't need to worry about processing speed. Some time from now, however, you might need a bit more speed if your game slows down, so this is good to remember. List<T> offers great flexibility: you don't need to know the size of the list during declaration, and there is a massive set of out-of-the-box operations that you can use with it, so it is my recommendation. In short, an array is faster; List<T> is more flexible.

Retrieving the data from an array or List<T>

Declaring and storing data in an array or list is clear to us now. The next thing to learn is how to get stored elements back. To get a stored element from an array, write the array variable name followed by square brackets. You must write an int value within the brackets; that value is called an index. The index is simply a position in the array. So, to get the first element stored in the array, we write the following code:

myArray[0];

Unity will return the data stored in the first place in myArray. It works exactly the same way as a method with a return type: if myArray stores a string value at index 0, that string will be returned to the place where you are calling it. Complex? It's not. The index value starts at 0, not 1, so the first element in an array containing 10 elements is accessible through an index value of 0 and the last one through a value of 9.

Let's extend the familyMembers example. In the book's screenshot, one extra line creates a new variable called thirdFamilyMember and assigns it the third value stored in the familyMembers list. We use an index value of 2 instead of 3 because in programming counting starts at 0. Try to memorize this; it is a common mistake made by beginners. Go ahead and click Play, and you will see the name Adam being printed in the Unity Console. While accessing objects stored in an array, make sure you use an index value between zero and the size of the array minus one. In simpler words, we cannot access index 10 in an array that contains only four objects. Makes sense?

Checking the size

It is very common to need to check the size of an array or list. There is a slight difference between a C# array and List<T>. To get the size as an integer value, we write the name of the variable, then a dot, and then Length for an array or Count for List<T>:

arrayName.Length: This returns an integer value with the size of the array
listName.Count: This returns an integer value with the size of the list

As we need to focus on one of the choices here and move on, from now on we will be using List<T>.

ArrayList

We definitely know how to use Lists now. We also know how to declare a new List and add, remove, and retrieve elements. Moreover, you have learned that the data stored in List<T> must be of the same type across all elements. Let's throw in a little curveball. ArrayList is basically List<T> without a specified type of data. This means that we can store whatever objects we want, and storing elements of different types is also possible. ArrayList is very flexible. The book shows its example as a screenshot; a reconstruction follows below. Notice that ArrayList supports all the common operations, such as .Add(): the first two elements added are of the integer type, the third is a string, and the last one is a GameObject. All mixed types of elements in one variable!
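Here is a minimal sketch of that mixed-type ArrayList; the class name, the specific values, and the cube GameObject are assumptions standing in for the book's screenshot:

using System.Collections;
using UnityEngine;

public class ArrayListExample : MonoBehaviour
{
    // No element type is specified for an ArrayList.
    public ArrayList mixedCollection = new ArrayList();

    void Start()
    {
        GameObject cube = GameObject.CreatePrimitive(PrimitiveType.Cube);

        // Mixed types stored in a single collection.
        mixedCollection.Add(12);
        mixedCollection.Add(3);
        mixedCollection.Add("Greg");
        mixedCollection.Add(cube);

        // GetType() reports the type of a stored element, not its value.
        Debug.Log(mixedCollection[1].GetType()); // System.Int32
        Debug.Log(mixedCollection[2].GetType()); // System.String
    }
}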
When using ArrayList, you might need to check what type of element is stored under a specific index in order to know how to treat it in code. .NET provides a very useful method that you can call on virtually any object: GetType() returns the type of the object, not its value. The sketch above uses it to print the types of the second and third elements. Go ahead, write that code, and click Play; the Console window will show the types of the stored elements.

Dictionaries

When we talk about collection data, we need to mention dictionaries. A dictionary is similar to a List; however, instead of accessing a certain element by an index value, we use a string called a key. The dictionary that you will probably be using most often is called Hashtable. Feel free to dive into the C# documentation after reading this chapter to discover all the bits of this powerful class. Here are a few key properties of Hashtable:

Hashtable can be resized dynamically, like List<T> and ArrayList
Hashtable can store multiple data types at the same time, like ArrayList
A public member Hashtable isn't visible in the Unity Inspector panel due to default inspector limitations

To make sure you won't feel confused, let's go straight to a simple example; a reconstruction of the book's personalDetails script is shown below.

Accessing values

To access a specific key in a Hashtable, you must know the string key the value is stored under. Remember, the key is the first value in the parentheses when adding an element to a Hashtable. Ideally, you should also know the type of data you are trying to access; in most cases, that won't be an issue. Take a look at this line:

Debug.Log((string)personalDetails["firstName"]);

We already know that Debug.Log serves to display a message on the Unity Console, so what are we trying to display? A string value (one that can contain letters and numbers), and then we specify where that value is stored. In this case, the information is stored in the Hashtable personalDetails, and the content that we want to display sits under the firstName key. Now look at the sketch below once again and see if you can display the age; remember that the value we are trying to access here is a number, so we should cast to int instead of string.

Similar to ArrayList, we can store mixed-type data in a Hashtable, and Unity requires the developer to specify how an accessed element should be treated. To do this, we cast the element into a specific data type. The syntax is very simple: there are parentheses with the data type inside, followed by the Hashtable variable name, and then, in square brackets, the string key the value is stored under. Ufff, confusing! As you can see in the preceding line, we are casting to string (inside the parentheses). If we were to access another type of data, for example an integer number, the syntax would look like this:

(int)personalDetails["age"];

I hope that this is clear now. If it isn't, why not search for more examples on the Unity forums?

How do I know what's inside my Hashtable?

A Hashtable, by default, isn't displayed in the Unity Inspector panel; you cannot simply look at the Inspector tab and preview all keys and values in your public member Hashtable. We can do this in code, however. You know how to access a value and cast it, but what if you are trying to access the value under a key that isn't stored in the Hashtable? Unity will spit out a null reference error and your program is likely to crash.
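Here is a minimal sketch of the personalDetails Hashtable discussed above; only the firstName and age keys come from the text, so the stored values and the class name are assumptions:

using System.Collections;
using UnityEngine;

public class HashtableExample : MonoBehaviour
{
    // Keys are strings; values can be of mixed types.
    public Hashtable personalDetails = new Hashtable();

    void Start()
    {
        personalDetails.Add("firstName", "Greg");
        personalDetails.Add("age", 40);

        // Cast each value back to the type it was stored as.
        Debug.Log((string)personalDetails["firstName"]);
        Debug.Log((int)personalDetails["age"]);
    }
}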
To check whether an element exists in the Hashtable before accessing it, we can use the .Contains(object) method, passing the key as the parameter, for example personalDetails.Contains("age"). This determines whether the Hashtable contains the key; if it does, the code can continue, and otherwise it can stop there, preventing any error.

We discussed how to use C# to create arrays, lists, dictionaries, and objects in Unity. The code samples and the examples will help you implement these on the platform. Do check out the book Learning C# by Developing Games with Unity 2017 to develop your first interactive 2D and 3D platform game.

Read More
Unity 2D & 3D game kits simplify Unity game development for beginners
How to create non-player Characters (NPC) with Unity 2018
Game Engine Wars: Unity vs Unreal Engine

Implementing lighting & camera effects in Unity 2018

Amarabha Banerjee
04 May 2018
12 min read
Today, we will explore lighting & camera effects in Unity 2018. We will start with cameras to include perspectives, frustums, and Skyboxes. Next, we will learn a few uses of multiple cameras to include mini-maps. We will also cover the different types of lighting, explore reflection probes, and conclude with a look at shadows. Working with cameras Cameras render scenes so that the user can view them. Think about the hidden complexity of this statement. Our games are 3D, but people playing our games view them on 2D displays such as television, computer monitors, or mobile devices. Fortunately for us, Unity makes implementing cameras easy work. Cameras are GameObjects and can be edited using transform tools in the Scene view as well as in the Inspector panel. Every scene must have at least one camera. In fact, when a new scene is created, Unity creates a camera named Main Camera. As you will see later in this chapter, a scene can have multiple cameras. In the Scene view, cameras are indicated with a white camera silhouette, as shown in the following screenshot: When we click our Main Camera in the Hierarchy panel, we are provided with a Camera Preview in the Scene view. This gives us a preview of what the camera sees as if it were in game mode. We also have access to several parameters via the Inspector panel. The Camera component in the Inspector panel is shown here: Let's look at each of these parameters with relation to our Cucumber Beetle game: The Clear Flags parameter lets you switch between Skybox, Solid Color, Depth Only, and Don't Clear. The selection here informs Unity which parts of the screen to clear. We will leave this setting as Skybox. You will learn more about Skyboxes later in this chapter. The Background parameter is used to set the default background fill (color) of your game world. This will only be visible after all game objects have been rendered and if there is no Skybox. Our Cucumber Beetle game will have a Skybox, so this parameter can be left with the default color. The Culling Mask parameter allows you to select and deselect the layers you want the camera to render. The default selection options are Nothing, Everything, Default, TransparentFX, Ignore Raycast, Water, and UI. For our game, we will select Everything. If you are not sure which layer a game object is associated with, select it and look at the Layer parameter in the top section of the Inspector panel. There you will see the assigned layer. You can easily change the layer as well as create your own unique layers. This gives you finite rendering control. The Projection parameter allows you to select which projection, perspective or orthographic, you want for your camera. We will cover both of those projections later in this chapter. When perspective projection is selected, we are given access to the Field of View parameter. This is for the width of the camera's angle of view. The value range is 1-179°. You can use the slider to change the values and see the results in the Camera Preview window. When orthographic projection is selected, an additional Size parameter is available. This refers to the viewport size. For our game, we will select perspective projection with the Field of View set to 60. The Clipping Planes parameters include Near and Far. These settings set the closest and furthest points, relative to the camera, that rendering will happen at. For now, we will leave the default settings of 0.3 and 1000 for the Near and Far parameters, respectively. 
The Viewport Rect parameter has four components – X, Y, W, and H – that determine where the camera will be drawn on the screen. As you would expect, the X and Y components refer to horizontal and vertical positions, and the W and H components refer to width and height. You can experiment with these values and see the changes in the Camera Preview. For our game, we will leave the default settings. The Depth parameter is used when we implement more than one camera. We can set a value here to determine the camera's priority in relation to others. Larger values indicate a higher priority. The default setting is sufficient for our game. The Rendering Path parameter defines what rendering methods our camera will use. The options are Use Graphics Settings, Forward, Deferred, Legacy Vertex Lit, and Legacy Deferred (light prepass). We will use the Use Graphics Settings option for our game, which also uses the default setting. The Target Texture parameter is not something we will use in our game. When a render texture is set, the camera is not able to render to the screen. The Occlusion Culling parameter is a powerful setting. If enabled, Unity will not render objects that are occluded, or not seen by the camera. An example would be objects inside a building. If the camera can currently only see the external walls of the building, then none of the objects inside those walls can be seen. So, it makes sense to not render those. We only want to render what is absolutely necessary to help ensure our game has smooth gameplay and no lag. We will leave this as enabled for our game. The Allow HDR parameter is a checkbox that toggles a camera's High Dynamic Range (HDR) rendering. We will leave the default setting of enabled for our game. The Allow MSAA parameter is a toggle that determines whether our camera will use a Multisample Anti-Aliasing (MSAA) render target. MSAA is a computer graphics optimization technique and we want this enabled for our game. Understanding camera projections There are two camera projections used in Unity: perspective and orthographic. With perspective projection, the camera renders a scene based on the camera angle, as it exists in the scene. Using this projection, the further away an object is from the camera, the smaller it will be displayed. This mimics how we see things in the real world. Because of the desire to produce realistic games, or games that approximate the realworld, perspective projection is the most commonly used in modern games. It is also what we will use in our Cucumber Beetle game. The other projection is orthographic. An orthographic perspective camera renders a scene uniformly without any perspective. This means that objects further away will not be displayed smaller than objects closer to the camera. This type of camera is commonly used for top-down games and is the default camera projection used in 2D and Unity's UI system. We will use perspective projection for our Cucumber Beetle game. Orientating your frustum When a camera is selected in the Hierarchy view, its frustum is visible in the Scene view. A frustum is a geometric shape that looks like a pyramid that has had its top cut off, as illustrated here: The near, or top, plane is parallel to its base. The base is also referred to as the far plane. The frustum's shape represents the viable region of your game. Only objects in that region are rendered. Using the camera object in Scene view, we can change our camera's frustum shape. 
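Although this chapter configures the camera through the Inspector, the same settings can also be applied from a script. The following sketch is illustrative only; the class name is an assumption, and the values simply mirror the choices made above for the Cucumber Beetle game:

using UnityEngine;

public class CameraSetup : MonoBehaviour
{
    void Start()
    {
        Camera cam = Camera.main;

        // Perspective projection with the 60-degree Field of View chosen above.
        cam.orthographic = false;
        cam.fieldOfView = 60f;

        // Near and Far clipping planes, matching the defaults of 0.3 and 1000.
        cam.nearClipPlane = 0.3f;
        cam.farClipPlane = 1000f;
    }
}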
Creating a Skybox When we create game worlds, we typically create the ground, buildings, characters, trees, and other game objects. What about the sky? By default, there will be a textured blue sky in your Unity game projects. That sky is sufficient for testing but does not add to an immersive gaming experience. We want a bit more realism, such as clouds, and that can be accomplished by creating a Skybox. A Skybox is a six-sided cube visible to the player beyond all other objects. So, when a player looks beyond your objects, what they see is your Skybox. As we said, Skyboxes are six-sided cubes, which means you will need six separate images that can essentially be clamped to each other to form the cube. The following screenshot shows the Default Skybox that Unity projects start with as well as the completed Custom Skybox you will create in this section: Perform the following steps to create a Skybox: In the Project panel, create a Skybox subfolder in the Assets folder. We will use this folder to store our textures and materials for the Skybox. Drag the provided six Skybox images, or your own, into the new Skybox folder. Ensure the Skybox folder is selected in the Project panel. From the top menu, select Assets | Create | Material. In the Project panel, name the material Skybox. With the Skybox material selected, turn your attention to the Inspector panel. Select the Shader drop-down menu and select SkyBox | 6 Sided. Use the Select button for each of the six images and navigate to the images you added in step 2. Be sure to match the appropriate texture to the appropriate cube face. For example, the SkyBox_Front texture matches the Front[+Z] cube face on the Skybox Material. In order to assign our new Skybox to our scene, select Window | Lighting | Settings from the top menu. This will bring up the Lighting settings dialog window. In the Lighting settings dialog window, click on the small circle to the right of the Skybox Material input field. Then, close the selection window and the Lighting window. Refer to the following screenshot: You will now be able to see your Skybox in the Scene view. When you click on the Camera in the Hierarchy panel, you will also see the Skybox as it will appear from the camera's perspective. Be sure to save your scene and your project. Using multiple cameras Our Unity games must have a least one camera, but we are not limited to using just one. As you will see we will attach our main camera, or primary camera, to our player character. It will be as if the camera is following the character around the game environment. This will become the eyes of our character. We will play the game through our character's view. A common use of a second camera is to create a mini-map that can be seen in a small window on top of the game display. These mini-maps can be made to toggle on and off or be permanent/fixed display components. Implementations might consist of a fog-of-war display, a radar showing enemies, or a global top-down view of the map for orientation purposes. You are only limited by your imagination. Another use of multiple cameras is to provide the player with the ability to switch between third-person and first-person views. You will remember that the first-person view puts the player's arms in view, while in the third-person view, the player's entire body is visible. We can use two cameras in the appropriate positions to support viewing from either camera. 
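The next paragraph suggests binding such a camera switch to the C key; as a preview, here is a minimal sketch of what that toggle could look like. The class name, the two camera references, and the key binding are assumptions, not code from the book:

using UnityEngine;

public class CameraToggle : MonoBehaviour
{
    // Assign both cameras in the Inspector; enable only one of them to start with.
    public Camera firstPersonCamera;
    public Camera thirdPersonCamera;

    void Update()
    {
        // Swap which camera is enabled when the C key is pressed.
        if (Input.GetKeyDown(KeyCode.C))
        {
            firstPersonCamera.enabled = !firstPersonCamera.enabled;
            thirdPersonCamera.enabled = !thirdPersonCamera.enabled;
        }
    }
}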
In a game, you might make this a toggle—say, with the C keyboard key—that switches from one camera to the other. Depending on what is happening in the game, the player might enjoy this ability. Some single-player games feature multiple playable characters. Giving the player the ability to switch between these characters gives them greater control over the game strategy. To achieve this, we would need to have cameras attached to each playable character and then give the player the ability to swap characters. We would do this through scripting. This is a pretty advanced implementation of multiple characters. Another use of multiple cameras is adding specialty views in a game. These specialty views might include looking through a door's peep-hole, looking through binoculars at the top of a skyscraper, or even looking through a periscope. We can attach cameras to objects and change their viewing parameters to create unique camera use in our games. We are only limited by our own game designs and imagination. We can also use cameras as cameras. That's right! We can use the camera game object to simulate actual in-game cameras. One example is implementing security cameras in a prison-escape game. Working with lighting In the previous sections, we explored the uses of cameras for Unity games. Just like in the real world, cameras need lights to show us objects. In Unity games, we use multiple lights to illuminate the game environment. In Unity, we have both dynamic lighting techniques as well as light baking options for better performance. We can add numerous light sources throughout our scenes and selectively enable or disable an object's ability to cast or receive shadows. This level of specificity gives us tremendous opportunity to create realistic game scenes. Perhaps the secret behind Unity's ability to so realistically render light and shadows is that Unity models the actual behavior of lights and shadows. Real-time global illumination gives us the ability to instantiate multiple lights in each scene, each with the ability to directly or indirectly impact objects in the scene that are within range of the light sources. We can also add and manipulate ambient light in our game scenes. This is often done with Skyboxes, a tri-colored gradient, or even a single color. Each new scene in Unity has default ambient lighting, which we can control by editing the values in the Lighting window. In that window, you have access to the following settings: Environment Real-time Lighting Mixed Lighting Lightmapping Settings Other Settings Debug Settings No changes to these are required for our game at this time. We have already set the environmental lighting to our Skybox. When we create our scenes in Unity, we have three options for lighting. We can use real-time dynamic light, use the baked lighting approach, or use a mixture of the two. Our games perform more efficiently with baked lighting, compared to real-time dynamic lighting, so if performance is a concern, try using baked lighting where you can. To summarize, we have discussed how to create interesting lighting and camera effects using Unity 2018. This article is an extract from the book Getting Started with Unity 2018 written by Dr. Edward Lavieri. This book will help you create fun filled real world games with Unity 2018. Game Engine Wars: Unity vs Unreal Engine Unity plugins for augmented reality application development How to create non-player Characters (NPC) with Unity 2018    

Developing Games Using AI

Packt
08 Aug 2017
13 min read
In this article by Micael DaGraça, the author of the book Practical Game AI Programming, we will cover the following points to introduce you to AI programming for games:

A brief history of and solutions to game AI
Enemy AI in video games
From simple to smart and human-like AI
Visual and Audio Awareness

A brief history of and solutions to game AI

To better understand how to overcome the problems that game developers currently face, we need to dig a little into the history of video game development and look at the problems and solutions that were so important at the time. Some of them were so avant-garde that they actually changed the entire history of video game design, and we still use the same methods today to create unique and enjoyable games.

One of the first marks worth mentioning when talking about game AI is the chess computer programmed to compete against humans. Chess was the perfect game to start experimenting with artificial intelligence because it requires a lot of thought and planning ahead, something a computer couldn't do at the time because it lacked the human features needed to successfully play and win the game. So the first step was to make the computer able to process the game rules and think for itself, in order to make a good judgment about the next move it should take toward the final goal of winning by checkmate. The problem was that chess has so many possibilities that even if the computer had a perfect strategy to beat the game, it would need to recalculate that strategy, adapting it, changing it, or even creating a new one every time something went wrong with the original plan. Humans can play differently every time, which made it a huge task for programmers to input all of the possibility data into the computer to win the game. Writing out every possibility that could exist wasn't a viable solution, so programmers needed to rethink the problem. Eventually they came up with a better approach: make the computer decide for itself every turn, choosing the most plausible option each time, so that it could adapt to any possibility in the game. Yet this introduced another problem: the computer would only think in the short term, without creating any plan to defeat the human over future moves, so it was easy to play against, but at least it was a start.

It would take decades until someone defined the term "artificial intelligence" by solving the problem many researchers faced when trying to create a computer capable of defeating a human player. Arthur Samuel is the person responsible for creating a computer that could learn for itself and memorize all the possible combinations. That way, no human intervention was necessary and the computer could actually think on its own, a huge step that is still impressive even by today's standards.

Enemy AI in video games

Now let's move to the video game industry and analyze how the first enemies and game obstacles were programmed. Was it that different from what we are doing now? Let's find out.
Single-player games with AI enemies started to appear in the 1970s, and soon some titles began to raise the quality of, and expectations for, video game AI. Some of those examples were released for arcade machines, like Speed Race from Taito (a racing video game) and Qwak (duck hunting using a light gun) or Pursuit (an aircraft fighter), both from Atari. Other notable examples are the text-based games released for the first personal computers, like Hunt the Wumpus and Star Trek, which also had enemies. What made those games so enjoyable was precisely the AI enemies, which didn't react like anything seen before: they mixed random elements with the traditional stored patterns, making them unpredictable and a unique experience every time you played. That was only possible thanks to the incorporation of microprocessors, which expanded what a programmer could do at the time. Space Invaders brought movement patterns, Galaxian improved on them and added more variety, making the AI even more complex, and Pac-Man later brought movement patterns to the maze genre.

The influence that the AI design in Pac-Man had is just as significant as the influence of the game itself. This classic arcade game makes the player believe that the enemies are chasing him, but not in a crude manner. The ghosts chase (or evade) the player in different ways, as if each had an individual personality. This gives people the illusion that they are actually playing against four or five individual ghosts rather than copies of the same computer enemy.

After that, Karate Champ introduced the first AI fighting character, Dragon Quest introduced a tactical system for the RPG genre, and over the years the list of games that explored artificial intelligence and used it to create unique game concepts kept expanding. All of that came from a single question: how can we make a computer capable of beating a human at a game?

All the games mentioned above belong to different genres and are unique in their style, but all of them used the same method for their AI, called the finite state machine. Here the programmer inputs all the behaviors necessary for the computer to challenge the player, just like the first computer that played chess. The programmer defines exactly how the computer should behave on different occasions in order to move, avoid, attack, or perform any other behavior that challenges the player, and that method is used even in the latest big-budget games of today.

From simple to smart and human-like AI

Programmers face many challenges while developing an AI character, but one of the greatest is adapting the AI's movement and behavior in relation to what the player is currently doing, or will do in future actions. The difficulty exists because the AI is programmed with predetermined states, using probability or possibility maps in order to adapt its movement and behavior according to the player. This technique can become very complex if the programmer extends the possibilities of the AI's decisions, just like the chess machine that accounts for every situation that may occur in the game. It's a huge task for the programmer, because it's necessary to determine what the player can do and how the AI will react to each of the player's actions, and that takes a lot of CPU power. To overcome that challenge, programmers started to mix possibility maps with probabilities and other techniques that let the AI decide for itself how it should react to the player's actions.
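The predetermined states mentioned above are usually organized as a finite state machine: an enum of states plus a switch that selects the behavior for the current state. The following Unity C# sketch only illustrates the idea and is not code from any of the games discussed; the state names, ranges, and transition checks are assumptions:

using UnityEngine;

public class GuardStateMachine : MonoBehaviour
{
    // The finite set of behaviors this enemy can be in.
    private enum State { Patrol, Chase, Attack }

    private State currentState = State.Patrol;
    public Transform player;
    public float chaseRange = 10f;
    public float attackRange = 2f;

    void Update()
    {
        float distance = Vector3.Distance(transform.position, player.position);

        switch (currentState)
        {
            case State.Patrol:
                // Walk the patrol route; switch state when the player gets close.
                if (distance < chaseRange) { currentState = State.Chase; }
                break;
            case State.Chase:
                // Move toward the player; attack in range, give up when far away.
                if (distance < attackRange) { currentState = State.Attack; }
                else if (distance > chaseRange) { currentState = State.Patrol; }
                break;
            case State.Attack:
                // Deal damage; fall back to chasing if the player retreats.
                if (distance > attackRange) { currentState = State.Chase; }
                break;
        }
    }
}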
These factors are important to consider while developing an AI that elevates the game's quality, as we are about to discover. Games kept evolving and players became even more demanding, not only about visual quality but also about the capabilities of the AI enemies and allied characters. To deliver new games that took player expectations into consideration, programmers started to write even more states for each character, creating new possibilities, more engaging enemies, important allied characters, more things for the player to do, and a lot more features that helped redefine different genres and create new ones. Of course, this was also possible because the technology kept improving, allowing developers to explore artificial intelligence in video games even further.

A great example worth mentioning is Metal Gear Solid. The game brought a new genre to the video game industry by implementing stealth elements instead of the popular straightforward run-and-shoot approach. But those elements couldn't be fully explored as Hideo Kojima intended because of the hardware limitations of the time. Jumping forward from the 3rd to the 5th generation of consoles, Konami and Hideo Kojima presented the same title, but this time with far more interactions, possibilities, and behaviors from the AI elements of the game, making it so successful and important in video game history that its influence is easy to see in a large number of games that came after Metal Gear Solid.

Metal Gear Solid - Sony PlayStation 1

Visual and Audio Awareness
If we analyze it closely the AI that is present on a sports game is structured like an FPS or RTS game is, using different animation states, general movements, interactions, individual decisions and finally tactic and collective decisions. So it shouldn't be a surprise that sports games could reach the same level of realism as the other genres that greatly evolved in terms of AI development, .However there's a few problems that only sport games had at the time and it was how to make so many characters on the same screen react differently but working together to achieve the same objective. With this problem in mind, developers started to improve the individual behaviors of each character, not only for the AI that was playing against the player, but also the AI that was playing alongside with the player. Once again Finite State Machines made a crucial part of the Artificial Intelligence but the special touch that helped to create a realistic approach in the sports genre was the anticipation and awareness used on stealth games. The computer needed to calculate what the player was doing, where the ball was going and coordinate all of that, plus giving a false impression of a team mindset towards the same plan. Combining the newly features used on the new genre of stealth games with a vast number of characters on the same screen, it was possible to innovate the sports genre by creating a sports simulation type of game that has gained so much popularity over the years. This helps us to understand that we can use almost the same methods for any type of game even if it looks completely different, the core principles that we saw on the computer that played chess it's still valuable to the sport game released 30 years later. Let's move on to our last example that also has a great value in terms of how an AI character should behave to make it more realistic, the game is F.E.A.R. developed by Monolith Productions. What made this game so special in terms of Artificial Intelligence was the dialogues between the enemy characters. While it wasn't an improvement in a technical point of view, it was definitely something that helped to showcase all of the development work that was put on the characters AI and this is so crucial because if the AI don't say it, it didn't happen. This is an important factor to take in consideration while creating a realistic AI character, giving the illusion that it's real, the false impression that the computer reacts like humans and humans interact so AI should do the same. Not only the dialogues help to create a human like atmosphere, it also helps to exhale all of the development put on the character that otherwise the player wouldn't notice that it was there. When the AI detects the player for the first time, he shouts that he found it, when the AI loses sight of the player, he also express that emotion. When the squad of AI's are trying to find the player or ambush him, they speak about that, living the player imagining that the enemy is really capable of thinking and planning against him. Why is this so important? Because if we only had numbers and mathematical equations to the characters, they will react that way, without any human feature, just math and to make it look more human it's necessary to input mistakes, errors and dialogues inside the character AI, just to distract the player from the fact that he's playing against a machine. 
The history of video game artificial intelligence is still far away from perfect and it's possible that it would take us decades to improve just a little bit what we achieve from the early 50's until this present day, so don't be afraid of exploring what you are about to learn, combine, change or delete some of the things to find different results, because great games did it in the past and they had a lot of success with it. Summary In this article we learned about the AI impact in the video game history, how everything started from a simple idea to have a computer to compete against humans in traditional games and how that naturally evolved into the the world of video games. We also learned about the challenges and difficulties that were present since the day one and how coincidentally programmers kept facing and still face the same problems.

Your first Unity project

Packt
15 Jun 2017
11 min read
In this article by Tommaso Lintrami, the author of the book Unity 2017 Game Development Essentials - Third Edition, we will see that, when starting out in game development, one of the best ways to learn the various parts of the discipline is to prototype your idea. Unity excels in assisting you with this, with its visual scene editor and public member variables that become settings in the Inspector, and this article will get you to grips with working in the Unity editor. In this article, you will learn about:

Creating a new project in Unity
Working with GameObjects in the Scene view and Hierarchy

(For more resources related to this topic, see here.)

Unity comes in two main forms: a standard, free download and a paid Pro developer license. We'll stick to using features that users of the standard free edition will have access to. If you're launching Unity for the very first time, you'll be presented with a Unity demonstration project. While this is useful for looking into the best practices for the development of high-end projects, if you're starting out, looking over some of the assets and scripting may feel daunting, so we'll leave this behind and start from scratch! Take a look at the following steps for setting up your Unity project:

In Unity, go to File | New Project and you will be presented with the Project Wizard (the screenshot in the book shows the Mac version). From here, select the NEW tab and the 3D type of project. Be aware that if at any time you wish to launch Unity and be taken directly to the Project Wizard, simply launch the Unity Editor application and immediately hold the Alt key (Mac and Windows). This can be set as the default launch behavior in the Unity preferences.
Click the Set button and choose where you would like to save your new Unity project folder on your hard drive. The new project has been named UGDE after this book and stored on the desktop for easy access.
The Project Wizard also offers the ability to import many asset packages into your new project, provided free for use in your game development by Unity Technologies. Comprising scripts, ready-made objects, and other artwork, these packages are a useful way to get started on various types of new project. You can also import these packages at any time from the Assets menu within Unity, by selecting Import Package and choosing from the list of available packages. You can also import a package from anywhere on your hard drive by choosing the Custom Package option here. This import method is also used to share assets with others, and when receiving assets you have downloaded through the Asset Store; see Window | Asset Store to view this part of Unity later.
From the list of packages to be imported, select the following: Characters, Cameras, Effects, TerrainAssets, Environment.
When you are happy with your selection, simply choose Create Project at the bottom of the dialog window. Unity will then create your new project and you will see progress bars representing the import of the selected packages.

A basic prototyping environment

To create a simple environment for prototyping some game mechanics, we'll begin with a basic series of objects with which we can introduce gameplay that allows the player to aim and shoot at a wall of primitive cubes. When complete, your prototyping environment will feature a floor made from a cube primitive, a main camera through which to view the 3D world, and a point light set up to highlight the area where our gameplay will be introduced.
It will look something like the scene shown in the book's screenshot.

Setting the scene

As all new scenes come with a Main Camera object by default, we'll begin by adding a floor for our prototyping environment:

On the Hierarchy panel, click the Create button and, from the drop-down menu, choose Cube. The items listed in this drop-down menu can also be found in the GameObject | Create Other top menu.
You will now see an object in the Hierarchy panel called Cube. Select it and press Return (Mac)/F2 (Windows), or double-click the object name slowly (both platforms), to rename this object. Type in Floor and press Return (both platforms) to confirm the change.
For consistency's sake, we will begin our creation at world zero, the center of the 3D environment we are working in. To ensure that the floor cube you just added is at this position, make sure it is still selected in the Hierarchy panel and then check the Transform component in the Inspector panel, ensuring that the Position values for X, Y, and Z are all 0. If not, change them all to zero, either by typing them in or by clicking the cog icon to the right of the component and selecting Reset Position from the pop-out menu.
Next, we'll turn the cube into a floor by stretching it out in the X and Z axes. Into the X and Z values under Scale in the Transform component, type a value of 100, leaving Y at a value of 1.

Adding simple lighting

Now we will highlight part of our prototyping floor by adding a point light. Select the Create button on the Hierarchy panel (or go to GameObject | Create Other) and choose Point Light. Position the new point light at (0, 20, 0) using the Position values in the Transform component, so that it is 20 units above the floor. You will notice that this leaves the floor out of range of the light, so expand the range by dragging the yellow dot handles that intersect the outline of the point light in the Scene view, until the Range value shown in the Light component in the Inspector reaches somewhere around 40 and the light creates a lit area on the floor object. Bear in mind that most components and visual editing tools in the Scene view are inextricably linked, so altering values such as Range in the Light component will update the visual display in the Scene view as you type, and the change sticks as soon as you press Return to confirm the values entered.

Another brick in the wall

Now let's make a wall of cubes that we can launch a projectile at. We'll do this by creating a single master brick, adding components as necessary, and then duplicating it until our wall is complete.

Building the master brick

In order to create a template for all of our bricks, we'll start by creating a master object, something to create clones of. This is done as follows:

Click the Create button at the top of the Hierarchy and select Cube. Position it at (0, 1, 0) using the Position values in the Transform component in the Inspector. Then, focus your view on this object by ensuring it is still selected in the Hierarchy, hovering your cursor over the Scene view, and pressing F.
Add physics to your Cube object by choosing Component | Physics | Rigidbody from the top menu. This means that your object is now a Rigidbody: it has mass and gravity, and it is affected by other objects using the physics engine for realistic reactions in the 3D world.
Finally, we'll color this object by creating a material. Materials are a way of applying color and imagery to our 3D geometry.
To make a new one, go to the Create button on the Project panel and choose Material from the drop-down menu. Press Return (Mac) or F2 (Windows) to rename this asset to Red instead of the default name New Material. You can also right-click in the Materials folder of the Project panel and choose Create | Material, or alternatively use the editor's main menu: Assets | Create | Material.

With this material selected, the Inspector shows its properties. Click on the color block to the right of Main Color (image label 1 in the book) to open the Color Picker (image label 2). This will differ in appearance depending on whether you are using Mac or Windows. Simply choose a shade of red and then close the window; the Main Color block should now have been updated. To apply this material, drag it from the Project panel and drop it onto either the cube as seen in the Scene view, or onto the name of the object in the Hierarchy. The material is then applied to the Mesh Renderer component of this object and immediately appears after the other components of the object in the Inspector. Most importantly, your cube should now be red! Adjusting settings using the preview of this material on any object will edit the original asset, as this preview is simply a link to the asset itself, not a newly editable instance.

Now that our cube has a color, and physics applied through the Rigidbody component, it is ready to be duplicated and act as one brick in a wall of many. However, before we do that, let's have a quick look at the physics in action. With the cube still selected, set the Y Position value to 15 and the X Rotation value to 40 in the Transform component in the Inspector. Press Play at the top of the Unity interface and you should see the cube fall and then settle, having fallen at an angle. The shortcut for Play is Ctrl+P on Windows and Command+P on Mac. Press Play again to stop testing. Do not press Pause, as this will only temporarily halt the test, and changes made thereafter to the scene will not be saved. Set the Y Position value for the cube back to 1, and set the X Rotation back to 0. Now that we know our brick behaves correctly, let's start creating a row of bricks to form our wall.

And snap! It's a row

To help you position objects, Unity allows you to snap to specific increments when dragging; these increments can be redefined by going to Edit | Snap Settings. To use snapping, hold down Command (Mac) or Ctrl (Windows) when using the Translate tool (W) to move objects in the Scene view. So, in order to start building the wall, duplicate the cube brick we already have using the shortcut Command+D (Mac) or Ctrl+D (PC), then drag the red axis handle while holding the snapping key. This will snap one unit at a time by default, so snap-move your cube one unit in the X axis so that it sits next to the original cube. Repeat this procedure of duplication and snap-dragging until you have a row of 10 cubes in a line. This is the first row of bricks, and to simplify building the rest of them we will now group this row under an empty object and then duplicate the parent empty object.

Vertex snapping

The basic snapping technique used here works well because our cubes have a generic scale of 1, but when positioning more detailed shapes, you should use vertex snapping instead. To do this, ensure that the Translate tool is selected and hold down V on the keyboard. Now hover your cursor over a vertex point on your selected object and drag to any other vertex of another object to snap to it.
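If you would rather build the whole wall from code instead of duplicating cubes by hand, a small runtime script along these lines could spawn the same grid of bricks; the prefab reference, the dimensions, and the class name are assumptions rather than part of the book's steps:

using UnityEngine;

public class WallBuilder : MonoBehaviour
{
    // Assign the red master brick as a prefab in the Inspector.
    public GameObject brickPrefab;
    public int columns = 10;
    public int rows = 8;

    void Start()
    {
        for (int y = 0; y < rows; y++)
        {
            for (int x = 0; x < columns; x++)
            {
                // Cubes are 1 unit wide, so each brick sits exactly beside or above its neighbor.
                Vector3 position = new Vector3(x, y + 0.5f, 0f);
                Instantiate(brickPrefab, position, Quaternion.identity, transform);
            }
        }
    }
}

The editor-based approach described next achieves the same result by hand and keeps every row visible and editable in the Hierarchy.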
Grouping and duplicating with empty objects

Create an empty object by choosing GameObject | Create Empty from the top menu, then position it at (4.5, 0.5, -1) using the Transform component in the Inspector. Rename it from the default name GameObject to CubeHolder. Now select all of the cube objects in the Hierarchy by selecting the top one, holding the Shift key, and then selecting the last. Drag this list of cubes in the Hierarchy onto the empty object named CubeHolder in order to make it their parent object. You'll notice that the parent empty object now has an arrow to the left of its object title, meaning you can expand and collapse it. To save space in the Hierarchy, click the arrow now to hide all of the child objects, and then re-select the CubeHolder.

Now that we have a complete row made and parented, we can simply duplicate the parent object and use snap-dragging to lift a whole new row up in the Y axis. Use the duplicate shortcut (Command/Ctrl + D) as before, then select the Translate tool (W) and use the snap-drag technique (hold Command on Mac, Ctrl on PC) outlined earlier to lift the new row by 1 unit in the Y axis by pulling the green axis handle. Repeat this procedure to create eight rows of bricks in all, one on top of the other. In the book's screenshot of the result, all the CubeHolder row objects are selected in the Hierarchy.

Summary

In this article, you should have become familiar with the basics of using the Unity interface and working with GameObjects.

Resources for Article:
Further resources on this subject:
Components in Unity [article]
Component-based approach of Unity [article]
Using Specular in Unity [article]

Instance and Devices

Packt
13 Feb 2017
4 min read
In this article by Pawel Lapinski, the author of the book Vulkan Cookbook, we will learn how to destroy a logical device, destroy a Vulkan Instance, and then release the Vulkan Loader library.

(For more resources related to this topic, see here.)

Destroying a logical device

When we are done and want to quit the application, we should clean up after ourselves. Even though all resources should be destroyed automatically by the driver when the Vulkan Instance is destroyed, we should do this explicitly in the application to follow good programming practice. Resources should be released in the reverse of the order in which they were created. In this article, the logical device was the last object created, so it will be destroyed first.

How to do it…

Take the handle of the created logical device that was stored in a variable of type VkDevice named logical_device.
Call vkDestroyDevice( logical_device, nullptr ), providing the logical_device variable as the first argument and nullptr as the second.
For safety reasons, assign the VK_NULL_HANDLE value to the logical_device variable.

How it works…

The implementation of logical device destruction is very straightforward:

if( logical_device ) {
  vkDestroyDevice( logical_device, nullptr );
  logical_device = VK_NULL_HANDLE;
}

First, we check whether the logical device handle is valid; we shouldn't destroy objects that weren't created. Then, we destroy the device with a vkDestroyDevice() function call and assign the VK_NULL_HANDLE value to the variable in which the logical device handle was stored. We do this just in case: if there is some kind of mistake in our code, we won't destroy the same object twice. Remember that once we destroy a logical device, we can't use the device-level functions acquired from it.

See also: Creating a logical device

Destroying a Vulkan Instance

After all other resources are destroyed, we can destroy the Vulkan Instance.

How to do it…

Take the handle of the created Vulkan Instance object stored in a variable of type VkInstance named instance.
Call vkDestroyInstance( instance, nullptr ), providing the instance variable as the first argument and nullptr as the second argument.
For safety reasons, assign the VK_NULL_HANDLE value to the instance variable.

How it works…

Before we can close the application, we should make sure the created resources are released. The Vulkan Instance is destroyed with the following code:

if( instance ) {
  vkDestroyInstance( instance, nullptr );
  instance = VK_NULL_HANDLE;
}

See also: Creating a Vulkan Instance

Releasing a Vulkan Loader library

Libraries that are loaded dynamically must be explicitly closed (released). To be able to use Vulkan in our application, we opened the Vulkan Loader (the vulkan-1.dll library on Windows or the libvulkan.so.1 library on Linux), so before we can close the application, we should free it.

How to do it…

On the Windows operating system family:

Take the variable of type HMODULE named vulkan_library, in which the handle of the loaded Vulkan Loader was stored.
Call FreeLibrary( vulkan_library ), providing the vulkan_library variable as the only argument.
For safety reasons, assign the nullptr value to the vulkan_library variable.

On the Linux operating system family:

Take the variable of type void* named vulkan_library, in which the handle of the loaded Vulkan Loader was stored.
Call dlclose( vulkan_library ), providing the vulkan_library variable as the only argument.
For safety reasons, assign the nullptr value to the vulkan_library variable.

How it works…

On the Windows operating system family, dynamic libraries are opened using the LoadLibrary() function. Such libraries must be closed (released) by calling the FreeLibrary() function, to which the handle of the previously opened library must be provided. On the Linux operating system family, dynamic libraries are opened using the dlopen() function. Such libraries must be closed (released) by calling the dlclose() function, to which the handle of the previously opened library must be provided.

#if defined _WIN32
FreeLibrary( vulkan_library );
#elif defined __linux
dlclose( vulkan_library );
#endif
vulkan_library = nullptr;

See also: Connecting with a Vulkan Loader library

Summary

In this article, you learned how to clean up a Vulkan application: destroying the logical device, destroying the Vulkan Instance, and finally releasing the Vulkan Loader library, always in the reverse of the order in which these objects were created.

Resources for Article:
Further resources on this subject:
Introducing an Android platform [article]
Get your Apps Ready for Android N [article]
Drawing and Drawables in Android Canvas [article]


Normal maps

Packt
19 Jan 2017
12 min read
In this article by Raimondas Pupius, the author of the book Mastering SFML Game Development, we will learn about normal maps and specular maps. (For more resources related to this topic, see here.) Lighting can be used to create visually complex and breath-taking scenes. One of the massive benefits of having a lighting system is the ability it provides to add extra details to your scene, which wouldn't have been possible otherwise. One way of doing so is using normal maps. Mathematically speaking, the word "normal" in the context of a surface is simply a directional vector that is perpendicular to the said surface. Consider the following illustration: In this case, the normal is facing up because that's the direction perpendicular to the plane. How is this helpful? Well, imagine you have a really complex model with many vertices; it'd be extremely taxing to render the said model because of all the geometry that would need to be processed with each frame. A clever trick to work around this, known as normal mapping, is to take the information of all of those vertices and save them on a texture that looks similar to this one: It probably looks extremely funky, especially if you're looking at it in grayscale in a physical copy of the book, but try not to think of this in terms of colors, but directions. The red channel of a normal map encodes the –x and +x values. The green channel does the same for –y and +y values, and the blue channel is used for –z to +z. Looking back at the previous image now, it's easier to confirm which direction each individual pixel is facing. Using this information on geometry that's completely flat would still allow us to light it in such a way that it would make it look like it has all of the detail in there; yet, it would still remain flat and light on performance: These normal maps can be hand-drawn or simply generated using software such as Crazybump. Let's see how all of this can be done in our game engine. Implementing normal map rendering In the case of maps, implementing normal map rendering is extremely simple. We already have all the material maps integrated and ready to go, so at this time, it's simply a matter of sampling the texture of the tile-sheet normals: void Map::Redraw(sf::Vector3i l_from, sf::Vector3i l_to) { ... if (renderer->UseShader("MaterialPass")) { // Material pass. auto shader = renderer->GetCurrentShader(); auto textureName = m_tileMap.GetTileSet().GetTextureName(); auto normalMaterial = m_textureManager-> GetResource(textureName + "_normal"); for (auto x = l_from.x; x <= l_to.x; ++x) { for (auto y = l_from.y; y <= l_to.y; ++y) { for (auto layer = l_from.z; layer <= l_to.z; ++layer) { auto tile = m_tileMap.GetTile(x, y, layer); if (!tile) { continue; } auto& sprite = tile->m_properties->m_sprite; sprite.setPosition( static_cast<float>(x * Sheet::Tile_Size), static_cast<float>(y * Sheet::Tile_Size)); // Normal pass. if (normalMaterial) { shader->setUniform("material", *normalMaterial); renderer->Draw(sprite, &m_normals[layer]); } } } } } ... } The process is exactly the same as drawing a normal tile to a diffuse map, except that here we have to provide the material shader with the texture of the tile-sheet normal map. Also note that we're now drawing to a normal buffer texture. The same is true for drawing entities as well: void S_Renderer::Draw(MaterialMapContainer& l_materials, Window& l_window, int l_layer) { ... if (renderer->UseShader("MaterialPass")) { // Material pass.
auto shader = renderer->GetCurrentShader(); auto textures = m_systemManager-> GetEntityManager()->GetTextureManager(); for (auto &entity : m_entities) { auto position = entities->GetComponent<C_Position>( entity, Component::Position); if (position->GetElevation() < l_layer) { continue; } if (position->GetElevation() > l_layer) { break; } C_Drawable* drawable = GetDrawableFromType(entity); if (!drawable) { continue; } if (drawable->GetType() != Component::SpriteSheet) { continue; } auto sheet = static_cast<C_SpriteSheet*>(drawable); auto name = sheet->GetSpriteSheet()->GetTextureName(); auto normals = textures->GetResource(name + "_normal"); // Normal pass. if (normals) { shader->setUniform("material", *normals); drawable->Draw(&l_window, l_materials[MaterialMapType::Normal].get()); } } } ... } You can try obtaining a normal texture through the texture manager. If you find one, you can draw it to the normal map material buffer. Dealing with particles isn't much different from what we've seen already, except for one little piece of detail: void ParticleSystem::Draw(MaterialMapContainer& l_materials, Window& l_window, int l_layer) { ... if (renderer->UseShader("MaterialValuePass")) { // Material pass. auto shader = renderer->GetCurrentShader(); for (size_t i = 0; i < container->m_countAlive; ++i) { if (l_layer >= 0) { if (positions[i].z < l_layer * Sheet::Tile_Size) { continue; } if (positions[i].z >= (l_layer + 1) * Sheet::Tile_Size) { continue; } } else if (positions[i].z < Sheet::Num_Layers * Sheet::Tile_Size) { continue; } // Normal pass. shader->setUniform("material", sf::Glsl::Vec3(0.5f, 0.5f, 1.f)); renderer->Draw(drawables[i], l_materials[MaterialMapType::Normal].get()); } } ... } As you can see, we're actually using the material value shader in order to give particles' static normals, which are always sort of pointing to the camera. A normal map buffer should look something like this after you render all the normal maps to it: Changing the lighting shader Now that we have all of this information, let's actually use it when calculating the illumination of the pixels inside the light pass shader: uniform sampler2D LastPass; uniform sampler2D DiffuseMap; uniform sampler2D NormalMap; uniform vec3 AmbientLight; uniform int LightCount; uniform int PassNumber; struct LightInfo { vec3 position; vec3 color; float radius; float falloff; }; const int MaxLights = 4; uniform LightInfo Lights[MaxLights]; void main() { vec4 pixel = texture2D(LastPass, gl_TexCoord[0].xy); vec4 diffusepixel = texture2D(DiffuseMap, gl_TexCoord[0].xy); vec4 normalpixel = texture2D(NormalMap, gl_TexCoord[0].xy); vec3 PixelCoordinates = vec3(gl_FragCoord.x, gl_FragCoord.y, gl_FragCoord.z); vec4 finalPixel = gl_Color * pixel; vec3 viewDirection = vec3(0, 0, 1); if(PassNumber == 1) { finalPixel *= vec4(AmbientLight, 1.0); } // IF FIRST PASS ONLY! vec3 N = normalize(normalpixel.rgb * 2.0 - 1.0); for(int i = 0; i < LightCount; ++i) { vec3 L = Lights[i].position - PixelCoordinates; float distance = length(L); float d = max(distance - Lights[i].radius, 0); L /= distance; float attenuation = 1 / pow(d/Lights[i].radius + 1, 2); attenuation = (attenuation - Lights[i].falloff) / (1 - Lights[i].falloff); attenuation = max(attenuation, 0); float normalDot = max(dot(N, L), 0.0); finalPixel += (diffusepixel * ((vec4(Lights[i].color, 1.0) * attenuation))) * normalDot; } gl_FragColor = finalPixel; } First, the normal map texture needs to be passed to it as well as sampled, which is where the first two highlighted lines of code come in. 
Once this is done, for each light we're drawing on the screen, the normal directional vector is calculated. This is done by first making sure that it can go into the negative range and then normalizing it. A normalized vector only represents a direction. Since the color values range from 0 to 255, negative values cannot be directly represented. This is why we first bring them into the right range by multiplying them by 2.0 and subtracting 1.0. A dot product is then calculated between the normal vector and the normalized L vector, which now represents the direction from the light to the pixel. How much a pixel is lit up by a specific light depends directly on this dot product, which, once clamped, is a value from 0.0 to 1.0. The dot product is an algebraic operation that takes two vectors and produces a scalar equal to the product of their magnitudes and the cosine of the angle between them; for two unit vectors, it essentially represents how closely aligned they are. We use this property to light pixels less and less, given greater and greater angles between their normals and the light. Finally, the dot product is used again when calculating the final pixel value. The entire influence of the light is multiplied by it, which allows every pixel to be drawn differently, as if it had some underlying geometry that was pointing in a different direction. The last thing left to do now is to pass the normal map buffer to the shader in our C++ code: void LightManager::RenderScene() { ... if (renderer->UseShader("LightPass")) { // Light pass. ... shader->setUniform("NormalMap", m_materialMaps[MaterialMapType::Normal]->getTexture()); ... } ... } This effectively enables normal mapping and gives us beautiful results such as this: The leaves, the character, and pretty much everything in this image now look like they have definition, ridges, and crevices; everything is lit as if it had geometry, although it's paper-thin. Note the lines around each tile in this particular instance. This is one of the main reasons why normal maps for pixel art, such as tile sheets, shouldn't be automatically generated; the generator can sample the tiles adjacent to a given tile and incorrectly add bevelled edges. Specular maps While normal maps provide us with the possibility to fake how bumpy a surface is, specular maps allow us to do the same with the shininess of a surface. This is what the same segment of the tile sheet we used as an example for a normal map looks like in a specular map: It's not as complex as a normal map since it only needs to store one value: the shininess factor. We can leave it up to each light to decide how much shine it will cast upon the scenery by letting it have its own values: struct LightBase { ... float m_specularExponent = 10.f; float m_specularStrength = 1.f; }; Adding support for specularity Similar to normal maps, we need to use the material pass shader to render to a specularity buffer texture: void Map::Redraw(sf::Vector3i l_from, sf::Vector3i l_to) { ... if (renderer->UseShader("MaterialPass")) { // Material pass. ... auto specMaterial = m_textureManager->GetResource( textureName + "_specular"); for (auto x = l_from.x; x <= l_to.x; ++x) { for (auto y = l_from.y; y <= l_to.y; ++y) { for (auto layer = l_from.z; layer <= l_to.z; ++layer) { ... // Normal pass. // Specular pass. if (specMaterial) { shader->setUniform("material", *specMaterial); renderer->Draw(sprite, &m_speculars[layer]); } } } } } ...
} The texture for specularity is once again attempted to be obtained; it is passed down to the material pass shader if found. The same is true when you render entities: void S_Renderer::Draw(MaterialMapContainer& l_materials, Window& l_window, int l_layer) { ... if (renderer->UseShader("MaterialPass")) { // Material pass. ... for (auto &entity : m_entities) { ... // Normal pass. // Specular pass. if (specular) { shader->setUniform("material", *specular); drawable->Draw(&l_window, l_materials[MaterialMapType::Specular].get()); } } } ... } Particles, on the other hand, also use the material value pass shader: void ParticleSystem::Draw(MaterialMapContainer& l_materials, Window& l_window, int l_layer) { ... if (renderer->UseShader("MaterialValuePass")) { // Material pass. auto shader = renderer->GetCurrentShader(); for (size_t i = 0; i < container->m_countAlive; ++i) { ... // Normal pass. // Specular pass. shader->setUniform("material", sf::Glsl::Vec3(0.f, 0.f, 0.f)); renderer->Draw(drawables[i], l_materials[MaterialMapType::Specular].get()); } } } For now, we don't want any of them to be specular at all. This can obviously be tweaked later on, but the important thing is that we have that functionality available and yielding results, such as the following: This specularity texture needs to be sampled inside a light-pass shader, just like a normal texture. Let's see what this involves. Changing the lighting shader Just as before, a uniform sampler2D needs to be added to sample the specularity of a particular fragment: uniform sampler2D LastPass; uniform sampler2D DiffuseMap; uniform sampler2D NormalMap; uniform sampler2D SpecularMap; uniform vec3 AmbientLight; uniform int LightCount; uniform int PassNumber; struct LightInfo { vec3 position; vec3 color; float radius; float falloff; float specularExponent; float specularStrength; }; const int MaxLights = 4; uniform LightInfo Lights[MaxLights]; const float SpecularConstant = 0.4; void main() { ... vec4 specularpixel = texture2D(SpecularMap, gl_TexCoord[0].xy); vec3 viewDirection = vec3(0, 0, 1); // Looking at positive Z. ... for(int i = 0; i < LightCount; ++i){ ... float specularLevel = 0.0; specularLevel = pow(max(0.0, dot(reflect(-L, N), viewDirection)), Lights[i].specularExponent * specularpixel.a) * SpecularConstant; vec3 specularReflection = Lights[i].color * specularLevel * specularpixel.rgb * Lights[i].specularStrength; finalPixel += (diffusepixel * ((vec4(Lights[i].color, 1.0) * attenuation)) + vec4(specularReflection, 1.0)) * normalDot; } gl_FragColor = finalPixel; } We also need to add in the specular exponent and strength to each light's struct, as it's now part of it. Once the specular pixel is sampled, we need to set up the direction of the camera as well. Since that's static, we can leave it as is in the shader. The specularity of the pixel is then calculated by taking into account the dot product between the pixel’s normal and the light, the color of the specular pixel itself, and the specular strength of the light. Note the use of a specular constant in the calculation. This is a value that can and should be tweaked in order to obtain best results, as 100% specularity rarely ever looks good. Then, all that's left is to make sure the specularity texture is also sent to the light-pass shader in addition to the light's specular exponent and strength values: void LightManager::RenderScene() { ... if (renderer->UseShader("LightPass")) { // Light pass. ... 
shader->setUniform("SpecularMap", m_materialMaps[MaterialMapType::Specular]->getTexture()); ... for (auto& light : m_lights) { ... shader->setUniform(id + ".specularExponent", light.m_specularExponent); shader->setUniform(id + ".specularStrength", light.m_specularStrength); ... } } } The result may not be visible right away, but upon closer inspection of moving a light stream, we can see that correctly mapped surfaces will have a glint that will move around with the light: While this is nearly perfect, there's still some room for improvement. Summary Lighting is a very powerful tool when used right. Different aspects of a material may be emphasized depending on the setup of the game level, additional levels of detail can be added in without too much overhead, and the overall aesthetics of the project will be leveraged to new heights. The full version of “Mastering SFML Game Development” offers all of this and more by not only utilizing normal and specular maps, but also using 3D shadow-mapping techniques to create Omni-directional point light shadows that breathe new life into the game world. Resources for Article: Further resources on this subject: Common Game Programming Patterns [article] Sprites in Action [article] Warfare Unleashed Implementing Gameplay [article]
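To see the core math from the light-pass shader in isolation, here is a small standalone C++ sketch (not part of the book's engine code; the names are made up for illustration) that decodes a normal-map texel and computes the same clamped dot-product term the shader uses:

#include <algorithm>
#include <cmath>
#include <cstdint>
#include <iostream>

struct Vec3 { float x, y, z; };

// Decode an 8-bit RGB normal-map texel into a direction in [-1, 1] per axis.
Vec3 DecodeNormal(uint8_t r, uint8_t g, uint8_t b) {
    return { r / 255.f * 2.f - 1.f,
             g / 255.f * 2.f - 1.f,
             b / 255.f * 2.f - 1.f };
}

Vec3 Normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Clamped Lambert term: how strongly a light from direction l hits a pixel
// with normal n (both unit vectors), just like max(dot(N, L), 0.0) in GLSL.
float LambertTerm(Vec3 n, Vec3 l) {
    float d = n.x * l.x + n.y * l.y + n.z * l.z;
    return std::max(d, 0.f);
}

int main() {
    // The "flat" normal-map color (128, 128, 255) decodes to roughly (0, 0, 1),
    // that is, a surface facing straight out of the screen.
    Vec3 n = Normalize(DecodeNormal(128, 128, 255));
    Vec3 l = Normalize(Vec3{ 0.3f, 0.4f, 0.86f }); // direction towards the light
    std::cout << LambertTerm(n, l) << '\n';
}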


Game objective

Packt
04 Jan 2017
5 min read
In this article by Alan Thorn, author of the book Mastering Unity 5.x, we will see what the game objective is and cover asset preparation. Every game (except for experimental and experiential games) needs an objective for the player; something they must strive to do, not just within specific levels, but across the game overall. This objective is important not just for the player (to make the game fun), but also for the developer, for deciding how challenge, diversity, and interest can be added to the mix. Before starting development, have a clearly stated and identified objective in mind. Challenges are introduced primarily as obstacles to the objective, and bonuses are 'things' that facilitate the objective; they make it possible and easier to achieve. For Dead Keys, the primary objective is to survive and reach the level end. Zombies threaten that objective by attacking and damaging the player, and bonuses exist along the way to make things more interesting. I highly recommend using project management and team collaboration tools to chart, document, and time-track tasks within your project. And you can do this for free too. Some online tools for this include Trello (https://trello.com), Bitrix 24 (https://www.bitrix24.com), BaseCamp (https://basecamp.com), FreedCamp (https://freedcamp.com), UnFuddle (https://unfuddle.com), BitBucket (https://bitbucket.org), Microsoft Visual Studio Team Services (https://www.visualstudio.com/en-us/products/visual-studio-team-services-vs.aspx), Concord Contract Management (http://www.concordnow.com). Asset preparation When you've reached a clear decision on initial concept and design, you're ready to prototype! This means building a Unity project demonstrating the core mechanic and game rules in action, as a playable sample. After this, you typically refine the design more, and repeat prototyping until arriving at an artefact you want to pursue. From here, the art team must produce assets (meshes and textures) based on concept art, the game design, and photographic references. When producing meshes and textures for Unity, some important guidelines should be followed to achieve optimal graphical performance in-game. This is about structuring and building assets in a smart way, so they export cleanly and easily from their originating software, and can then be imported with minimal fuss, performing as well as they can at run-time. Let's see some of these guidelines for meshes and textures. Meshes - work only with good topology Good mesh topology consists of all polygons having only three or four sides in the model (not more). Additionally, Edge Loops should flow in an ordered, regular way along the contours of the model, defining its shape and form. Clean Topology Unity automatically converts, on import, any NGons (polygons with more than four sides) into triangles, if the mesh has any. But it's better to build meshes without NGons, as opposed to relying on Unity's automated methods. Not only does this cultivate good habits at the modelling phase, but it avoids any automatic and unpredictable retopology of the mesh, which affects how it's shaded and animated. Meshes - minimize polygon count Every polygon in a mesh entails a rendering performance hit insofar as a GPU needs time to process and render each polygon. Consequently, it's sensible to minimize the number of polygons in a mesh, even though modern graphics hardware is adept at working with many polygons.
It's good practice to minimize polygons where possible and to the degree that it doesn't detract from your central artistic vision and style. High-Poly Meshes! (Try reducing polygons where possible) There are many techniques available for reducing polygon counts. Most 3D applications (like 3DS Max, Maya and Blender) offer automated tools that decimate polygons in a mesh while retaining its basic shape and outline. However, these methods frequently make a mess of topology; leaving you with faces and edge loops leading in all directions. Even so, this can still be useful for reducing polygons in static meshes (Meshes that never animate), like statues or houses or chairs. However, it's typically bad for animated meshes where topology is especially important. Reducing Mesh Polygons with Automated Methods can produce messy topology! If you want to know the total vertex and face count of a mesh, you can use your 3D Software statistics. Blender, Maya, 3DS Max, and most 3D software, let you see vertex and face counts of selected meshes directly from the viewport. However, this information should only be considered a rough guide! This is because, after importing a mesh into Unity, the vertex count frequently turns out higher than expected! There are many reasons for this, explained in more depth online, here: http://docs.unity3d.com/Manual/OptimizingGraphicsPerformance.html In short, use the Unity Vertex Count as the final word on the actual Vertex Count of your mesh. To view the vertex-count for an imported mesh in Unity, click the right-arrow on the mesh thumbnail in the Project Panel. This shows the Internal Mesh asset. Select this asset, and then view the Vertex Count from the Preview Pane in the Object Inspector. Viewing the Vertex and Face Count for meshes in Unity Summary In this article, we've learned about what are game objectives and about asset preparation.

Cooking cupcakes towers

Packt
03 Jan 2017
6 min read
In this article by Francesco Sapio, author of the book Getting Started with Unity 2D Game Development - Second Edition, we will see how to create our towers. This is not an easy task, but by the end we will have acquired a lot of scripting skills. (For more resources related to this topic, see here.) What a cupcake Tower does First of all, it's useful to write down what we want to achieve and define what exactly a cupcake tower is supposed to do. The best way is to write down a list, to have a clear idea of what we are trying to achieve: A cupcake tower is able to detect pandas within a certain range. A cupcake tower shoots a different kind of projectile according to its typology against the pandas within a certain range. Furthermore, among the pandas in this range, it uses a policy to decide which one to shoot. There is a reload time, before the cupcake tower is able to shoot again. The cupcake tower can be upgraded (into a bigger cupcake!), increasing its stats and therefore changing its appearance. Scripting the cupcake tower There are a lot of things to implement. Let's start by creating a new script and naming it CupcakeTowerScript. As we already mentioned for the Projectile Script, in this article we implement the main logic, but of course there is always space to improve. Shooting at pandas Even if we don't have enemies yet, we can already start to program the behavior of the cupcake towers to shoot at the enemies. In this article we will learn a bit about using Physics to detect objects within a range. Let's start by defining four variables. The first three are public, so we can set them in the Inspector; the last one is private, since we only need it to check how much time has elapsed. In particular, the first three variables store the parameters of our tower: the projectile prefab, its range, and its reload time. We can write the following: public float rangeRadius; //Maximum distance that the Cupcake Tower can shoot public float reloadTime; //Time before the Cupcake Tower is able to shoot again public GameObject projectilePrefab; //Projectile type that is fired from the Cupcake Tower private float elapsedTime; //Time elapsed from the last time the Cupcake Tower has shot Now, in the Update() function we need to check whether enough time has elapsed in order to shoot. This can be easily done by using an if-statement. In any case, at the end, the elapsed time should be increased: void Update () { if (elapsedTime >= reloadTime) { //Rest of the code } elapsedTime += Time.deltaTime; } Within the if statement, we need to reset the elapsed time, so that we are able to shoot the next time. Then, we need to check whether there are any game objects within its range: if (elapsedTime >= reloadTime) { //Reset elapsed Time elapsedTime = 0; //Find all the gameObjects with a collider within the range of the Cupcake Tower Collider2D[] hitColliders = Physics2D.OverlapCircleAll(transform.position, rangeRadius); //Check if there is at least one gameObject found if (hitColliders.Length != 0) { //Rest of the code } } If there are enemies within range, we need to decide a policy about which enemy the tower should target. There are different ways to do this and different strategies that the tower itself could choose. Here, we are going to implement one where the nearest enemy to the tower will be the one targeted. To implement this policy, we need to loop over all the game objects that we have found in range, check if they actually are enemies, and, using distances, pick the nearest one.
To achieve this, write the following code inside the previous if statement: if (hitColliders.Length != 0) { //Loop over all the gameObjects to identify the closest to the Cupcake Tower float min = int.MaxValue; int index = -1; for (int i = 0; i < hitColliders.Length; i++) { if (hitColliders[i].tag == "Enemy") { float distance = Vector2.Distance(hitColliders[i].transform.position, transform.position); if (distance < min) { index = i; min = distance; } } } if (index == -1) return; //Rest of the code } Once we have the target, we need to get the direction that the tower will use to throw the projectile. So, let's write this: //Get the direction of the target Transform target = hitColliders[index].transform; Vector2 direction = (target.position - transform.position).normalized; Finally, we need to instantiate a new Projectile and assign to it the direction of the enemy, as follows: //Create the Projectile GameObject projectile = GameObject.Instantiate(projectilePrefab, transform.position, Quaternion.identity) as GameObject; projectile.GetComponent<ProjectileScript>().direction = direction; Instantiating game objects is usually slow, and it should be avoided where possible. However, for learning purposes we can live with that. And that is it for shooting at the enemies. Upgrading the cupcake tower, making it even tastier In order to create a function to upgrade the tower, we first need to define a variable to store the actual level of the tower: public int upgradeLevel; //Level of the Cupcake Tower Then, we need an array with all the Sprites for the different upgrades, like the following: public Sprite[] upgradeSprites; //Different sprites for the different levels of the Cupcake Tower Finally, we can create our Upgrade function. We need to upgrade the graphics and increase the stats. Feel free to tweak these values as you prefer. However, don't forget to increase the level of the tower as well as to assign the new sprite. At the end, you should have something like the following: public void Upgrade() { rangeRadius += 1f; reloadTime -= 0.5f; upgradeLevel++; GetComponent<SpriteRenderer>().sprite = upgradeSprites[upgradeLevel]; } Save the script; for now, we are done with it. A pre-cooked cupcake tower through Prefabs As we have done with the Sprinkle, we need to do something similar for the cupcake Tower. In the Prefabs folder in the Project Panel, create a new Prefab by right-clicking and then navigating to Create | Prefab. Name it SprinklesCupcakeTower. Now, drag and drop the Sprinkles_Cupcake_Tower_0 from the Graphics/towers folder (within the cupcake_tower_sheet-01 file) into the Scene View. Attach the CupcakeTowerScript to the object by navigating to Add Component | Script | CupcakeTowerScript. The Inspector should look like the following: We need to assign the Pink_Sprinkle_Projectile_Prefab to the Projectile Prefab variable. Then, we need to assign the different Sprites for the upgrades. In particular, we can use Sprinkles_Cupcake_Tower_* (replacing the * with the level of the cupcake tower) from the same sheet as before. Don't worry too much about the other parameters of the tower, like the range radius or the reload time, since we will see how to balance the game later on. At the end, this is what we should see: The last step is to drag this game object onto the prefab. As a result, our cupcake tower is ready. Summary In this article we covered the topic of creating a cupcake tower and scripting it. A consolidated version of the script is sketched below.
Resources for Article: Further resources on this subject: Animating a Game Character [article] What's Your Input? [article] Components in Unity [article]
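For reference, here is a minimal consolidated sketch of CupcakeTowerScript assembled from the snippets above. It is only an illustrative assembly under the article's assumptions (for example, the ProjectileScript component and the "Enemy" tag must exist in the project), not the book's final version of the script:

using UnityEngine;

public class CupcakeTowerScript : MonoBehaviour {
    public float rangeRadius;           // Maximum distance the tower can shoot
    public float reloadTime;            // Time before the tower can shoot again
    public GameObject projectilePrefab; // Projectile fired by the tower

    public int upgradeLevel;            // Current level of the tower
    public Sprite[] upgradeSprites;     // One sprite per upgrade level

    private float elapsedTime;          // Time elapsed since the last shot

    void Update() {
        if (elapsedTime >= reloadTime) {
            elapsedTime = 0;
            // Find all colliders within range and target the closest enemy.
            Collider2D[] hitColliders = Physics2D.OverlapCircleAll(transform.position, rangeRadius);
            float min = int.MaxValue;
            int index = -1;
            for (int i = 0; i < hitColliders.Length; i++) {
                if (hitColliders[i].tag == "Enemy") {
                    float distance = Vector2.Distance(hitColliders[i].transform.position, transform.position);
                    if (distance < min) { index = i; min = distance; }
                }
            }
            if (index != -1) {
                // Fire a projectile towards the chosen enemy.
                Vector2 direction = (hitColliders[index].transform.position - transform.position).normalized;
                GameObject projectile = GameObject.Instantiate(projectilePrefab, transform.position, Quaternion.identity) as GameObject;
                projectile.GetComponent<ProjectileScript>().direction = direction;
            }
        }
        elapsedTime += Time.deltaTime;
    }

    public void Upgrade() {
        rangeRadius += 1f;
        reloadTime -= 0.5f;
        upgradeLevel++;
        GetComponent<SpriteRenderer>().sprite = upgradeSprites[upgradeLevel];
    }
}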


Android Game Development with Unity3D

Packt
23 Nov 2016
8 min read
In this article by Wajahat Karim, author of the book Mastering Android Game Development with Unity, we will be creating addictive fun games by using a very famous game engine called Unity3D. In this article, we will cover the following topics: Game engines and Unity3D Features of Unity3D Basics of Unity game development (For more resources related to this topic, see here.) Game engines and Unity3D A game engine is a software framework designed for the creation and development of video games. Many tools and frameworks are available for game designers and developers to code a game quickly and easily without building from the ground up. As time passed by, game engines became more mature and easier for developers, with feature-rich environments. Starting from native code frameworks for Android such as AndEngine, Cocos2d-x, LibGDX, and so on, game engines started providing clean user interfaces and drag-and-drop functionality to make game development easier for developers. These engines include lots of tools which differ in user interface, features, porting, and many more things; but all have one thing in common: they create video games in the end. Unity (http://unity3d.com) is a cross-platform game engine developed by Unity Technologies. It made its first public announcement at the Apple Worldwide Developers Conference in 2005, and supported only game development for Mac OS, but since then it has been extended to target more than 15 platforms for desktop, mobile, and consoles. It is notable for its one-click ability to port games to multiple platforms including BlackBerry 10, Windows Phone 8, Windows, OS X, Linux, Android, iOS, Unity Web Player (including Facebook), Adobe Flash, PlayStation 3, PlayStation 4, PlayStation Vita, Xbox 360, Xbox One, Wii U, and Wii. Unity has a fantastic interface, which lets the developers manage the project really efficiently from the word go. It has a nice drag-and-drop functionality, with behavior scripts written in either C#, JavaScript (or UnityScript), or Boo connected to visual objects to define custom logic and functionality quite easily. Unity has proven quite easy to learn for new developers who are just starting out with game development. Now larger studios have also started using it, and for good reasons. Unity is one of those engines that provide support for both 2D and 3D games without putting developers in trouble or confusing them. Due to its popularity all over the game development industry, it has a vast collection of online tutorials, great documentation, and a very helpful community of developers. Features of Unity3D Unity is a game development ecosystem comprising a powerful rendering engine, intuitive tools, rapid workflows for 2D and 3D games, all-in-one deployment support, and thousands of already created free and paid assets with a helpful developer community.
The feature list includes the following: Easy workflow allowing developers to rapidly assemble scenes in an intuitive editor workspace Quality game creation like AAA visuals, high-definition audio, full-throttle action without any glitches on screen Dedicated tools for both 2D and 3D game creation with shared conventions to make it easy for developers A very unique and flexible animation system to create natural animations with very less time-consuming efforts Smooth frame rate with reliable performance on all the platforms where developers publish their games One-click ability to deploy to all platforms from desktops, browsers, and mobiles to consoles within minutes Reduces time of development by using already created reusable assets available at the huge asset store Basics of Unity game development Before delving into details of Unity3D and game development concepts, let's have a look at some of the very basics of Unity 5.0. We will go through the Unity interface, menu items, using assets, creating scenes, and publishing builds. Unity editor interface When you launch Unity 5.0 for the first time, you will be presented with an editor with a few panels on the left, right, and bottom of the screen. The following screenshot shows the editor interface when it's first launched: Fig 1.7 Unity 5 editor interface at first launch First of all, take your time to look over the editor, and become a little familiar with it. The Unity editor is divided into different small panels and views, which can be dragged to customize the workspace according to the developer/designer's needs. Unity 5 comes with some prebuilt workspace layout templates, which can be selected from the Layout drop-down menu at top-right corner of the screen, as shown in the following screenshot: Fig 1.8 Unity 5 editor layouts The layout currently displayed in the editor shown in the preceding screenshot is the Default layout. You can select these layouts, and see how the editor's interface changes, and how the different panels are placed at different positions in each layout. This book uses the 2 by 3 workspace layout for the whole game. The following figure shows the 2 by 3 workspace with the names of the views and panels highlighted: Fig 1.9 Unity 5 2 by 3 Layout with views and panel names As you can see in the preceding figure, the Unity editor contains different views and panels. Every panel and view have a specific purpose, which is described as follows: Scene view The Scene view is the whole stage for the game development, and it contains every asset in the game from a tiny point to any heavy 3D model. The Scene view is used to select and position environments, characters, enemies, the player, camera, and all other objects which can be placed on the stage for the game. All those objects which can be placed and shown in the game are called game objects. The Scene view allows developers to manipulate game objects such as selecting, scaling, rotating, deleting, moving, and so on. It also provides some controls such as navigation and transformation.  In simple words, the Scene view is the interactive sandbox for developers and designers. Game view The Game view is the final representation of how your game will look when published and deployed on the target devices, and it is rendered from the cameras of the scene. This view is connected to the play mode navigation bar in the center at the top of the whole Unity workspace. The play mode navigation bar is shown in the following: figure. 
Fig 1.14 Play mode bar When the game is played in the editor, this control bar gets changed to blue color. A very interesting feature of Unity is that it allows developers to pause the game and code while running, and the developers can see and change the properties, transforms, and much more at runtime, without recompiling the whole game, for quick workflow. Hierarchy view The Hierarchy view is the first point to select or handle any game object available in the scene. This contains every game object in the current scene. It is tree-type structure, which allows developers to utilize the parent and child concept on the game objects easily. The following figure shows a simple Hierarchy view: Fig 1.16 Hierarchy view Project browser panel This panel looks like a view, but it is called the Project browser panel. It is an embedded files directory in Unity, and contains all the files and folders included in the game project. The following figure shows a simple Project browser panel: Fig 1.17 Project browser panel The left side of the panel shows a hierarchal directory, while the rest of the panel shows the files, or, as they are called, assets in Unity. Unity represents these files with different icons to differentiate these according to their file types. These files can be sprite images, textures, model files, sounds, and so on. You can search any specific file by typing in the search text box. On the right side of search box, there are button controls for further filters such as animation files, audio clip files, and so on. An interesting thing about the Project browser panel is that if any file is not available in the Assets, then Unity starts looking for it on the Unity Asset Store, and presents you with the available free and paid assets. Inspector panel This is the most important panel for development in Unity. Unity structures the whole game in the form of game objects and assets. These game objects further contain components such as transform, colliders, scripts, meshes, and so on. Unity lets developers manage these components of each game object through the Inspector panel. The following figure shows a simple Inspector panel of a game object: Fig 1.18 Inspector panel These components vary in types, for example, Physics, Mesh, Effects, Audio, UI, and so on. These components can be added in any object by selecting it from the Component menu. The following figure shows the Component menu: Fig 1.19 Components menu Summary In this article, you learned about game engines, such as Unity3D, which is used to create games for Android devices. We also discussed the important features of Unity along with the basics of its development environment. Resources for Article: Further resources on this subject: The Game World [article] Customizing the Player Character [article] Animation features in Unity 5 [article]
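To make the idea of a behavior script mentioned earlier more concrete, here is a minimal, hypothetical example of such a script; the class and field names are illustrative, not taken from the book. Attaching it to any game object in the Hierarchy makes that object spin, and its public field shows up in the Inspector panel described above:

using UnityEngine;

// A minimal Unity behavior script: attach it to a game object
// to make the object rotate at a configurable speed.
public class Spinner : MonoBehaviour {
    // Public fields are editable from the Inspector panel.
    public float degreesPerSecond = 90f;

    // Update is called once per frame by Unity.
    void Update() {
        transform.Rotate(0f, degreesPerSecond * Time.deltaTime, 0f);
    }
}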


Debugging in Vulkan

Packt
23 Nov 2016
16 min read
In this article by Parminder Singh, author of Learning Vulkan, we learn Vulkan debugging in order to avoid unpleasant mistakes. Vulkan allows you to perform debugging through validation layers. These validation layer checks are optional and can be injected into the system at runtime. Traditional graphics APIs perform validation right up front using some sort of error-checking mechanism, which is a mandatory part of the pipeline. This is indeed useful in the development phase, but actually, it is an overhead during the release stage because the validation bugs might have already been fixed at the development phase itself. Such compulsory checks cause the CPU to spend a significant amount of time in error checking. On the other hand, Vulkan is designed to offer maximum performance, where the optional validation process and debugging model play a vital role. Vulkan assumes the application has done its homework using the validation and debugging capabilities available at the development stage, and it can be trusted flawlessly at the release stage. In this article, we will learn the validation and debugging process of a Vulkan application. We will cover the following topics: Peeking into Vulkan debugging Understanding LunarG validation layers and their features Implementing debugging in Vulkan (For more resources related to this topic, see here.) Peeking into Vulkan debugging Vulkan debugging validates the application implementation. It not only surfaces the errors, but also other validations, such as proper API usage. It does so by verifying each parameter passed to it, warning about the potentially incorrect and dangerous API practices in use and reporting any performance-related warnings when the API is not used optimally. By default, debugging is disabled, and it's the application's responsibility to enable it. Debugging works only for those layers that are explicitly enabled at the instance level at the time of the instance creation (VkInstance). When debugging is enabled, it inserts itself into the call chain for the Vulkan commands the layer is interested in. For each command, the debugging visits all the enabled layers and validates them for any potential error, warning, debugging information, and so on. Debugging in Vulkan is simple. The following is an overview that describes the steps required to enable it in an application: Enable the debugging capabilities by adding the VK_EXT_DEBUG_REPORT_EXTENSION_NAME extension at the instance level. Define the set of the validation layers that are intended for debugging. For example, we are interested in the following layers at the instance and device level. For more information about these layer functionalities, refer to the next section: VK_LAYER_GOOGLE_unique_objects VK_LAYER_LUNARG_api_dump VK_LAYER_LUNARG_core_validation VK_LAYER_LUNARG_image VK_LAYER_LUNARG_object_tracker VK_LAYER_LUNARG_parameter_validation VK_LAYER_LUNARG_swapchain VK_LAYER_GOOGLE_threading The Vulkan debugging APIs are not part of the core command, which can be statically loaded by the loader. These are available in the form of extension APIs that can be retrieved at runtime and dynamically linked to the predefined function pointers. So, as the next step, the debug extension APIs vkCreateDebugReportCallbackEXT and vkDestroyDebugReportCallbackEXT are queried and linked dynamically. These are used for the creation and destruction of the debug report. 
Once the function pointers for the debug report are retrieved successfully, the former API (vkCreateDebugReportCallbackEXT) creates the debug report object. Vulkan returns the debug reports in a user-defined callback, which has to be linked to this API. Destroy the debug report object when debugging is no more required. Understanding LunarG validation layers and their features The LunarG Vulkan SDK supports the following layers for debugging and validation purposes. In the following points, we have described some of the layers that will help you understand the offered functionalities: VK_LAYER_GOOGLE_unique_objects: Non-dispatchable handles are not required to be unique; a driver may return the same handle for multiple objects that it considers equivalent. This behavior makes the tracking of the object difficult because it is not clear which object to reference at the time of deletion. This layer packs the Vulkan objects into a unique identifier at the time of creation and unpacks them when the application uses it. This ensures there is proper object lifetime tracking at the time of validation. As per LunarG's recommendation, this layer must be last in the chain of the validation layer, making it closer to the display driver. VK_LAYER_LUNARG_api_dump: This layer is helpful in knowing the parameter values passed to the Vulkan APIs. It prints all the data structure parameters along with their values. VK_LAYER_LUNARG_core_validation: This is used for validating and printing important pieces of information from the descriptor set, pipeline state, dynamic state, and so on. This layer tracks and validates the GPU memory, object binding, and command buffers. Also, it validates the graphics and compute pipelines. VK_LAYER_LUNARG_image: This layer can be used for validating texture formats, rendering target formats, and so on. For example, it verifies whether the requested format is supported on the device. It validates whether the image view creation parameters are reasonable for the image that the view is being created for. VK_LAYER_LUNARG_object_tracker: This keeps track of object creation along with its use and destruction, which is helpful in avoiding memory leaks. It also validates that the referenced object is properly created and is presently valid. VK_LAYER_LUNARG_parameter_validation: This validation layer ensures that all the parameters passed to the API are correct as per the specification and are up to the required expectation. It checks whether the value of a parameter is consistent and within the valid usage criteria defined in the Vulkan specification. Also, it checks whether the type field of a Vulkan control structure contains the same value that is expected for a structure of that type. VK_LAYER_LUNARG_swapchain: This layer validates the use of the WSI swapchain extensions. For example, it checks whether the WSI extension is available before its functions could be used. Also, it validates that an image index is within the number of images in a swapchain. VK_LAYER_GOOGLE_threading: This is helpful in the context of thread safety. It checks the validity of multithreaded API usage. This layer ensures the simultaneous use of objects using calls running under multiple threads. It reports threading rule violations and enforces a mutex for such calls. Also, it allows an application to continue running without actually crashing, despite the reported threading problem. VK_LAYER_LUNARG_standard_validation: This enables all the standard layers in the correct order. 
For more information on validation layers, visit LunarG's official website. Check out https://vulkan.lunarg.com/doc/sdk and specifically refer to the Validation layer details section for more details. Implementing debugging in Vulkan Since debugging is exposed by validation layers, most of the core implementation of the debugging will be done under the VulkanLayerAndExtension class (VulkanLED.h/.cpp). In this section, we will learn about the implementation that will help us enable the debugging process in Vulkan: The Vulkan debug facility is not part of the default core functionalities. Therefore, in order to enable debugging and access the report callback, we need to add the necessary extensions and layers: Extension: Add the VK_EXT_DEBUG_REPORT_EXTENSION_NAME extension to the instance level. This will help in exposing the Vulkan debug APIs to the application: vector<const char *> instanceExtensionNames = { . . . . // other extensios VK_EXT_DEBUG_REPORT_EXTENSION_NAME, }; Layer: Define the following layers at the instance level to allow debugging at these layers: vector<const char *> layerNames = { "VK_LAYER_GOOGLE_threading", "VK_LAYER_LUNARG_parameter_validation", "VK_LAYER_LUNARG_device_limits", "VK_LAYER_LUNARG_object_tracker", "VK_LAYER_LUNARG_image", "VK_LAYER_LUNARG_core_validation", "VK_LAYER_LUNARG_swapchain", “VK_LAYER_GOOGLE_unique_objects” }; In addition to the enabled validation layers, the LunarG SDK provides a special layer called VK_LAYER_LUNARG_standard_validation. This enables basic validation in the correct order as mentioned here. Also, this built-in metadata layer loads a standard set of validation layers in the optimal order. It is a good choice if you are not very specific when it comes to a layer. a) VK_LAYER_GOOGLE_threading b) VK_LAYER_LUNARG_parameter_validation c) VK_LAYER_LUNARG_object_tracker d) VK_LAYER_LUNARG_image e) VK_LAYER_LUNARG_core_validation f) VK_LAYER_LUNARG_swapchain g) VK_LAYER_GOOGLE_unique_objects These layers are then supplied to the vkCreateInstance() API to enable them: VulkanApplication* appObj = VulkanApplication::GetInstance(); appObj->createVulkanInstance(layerNames, instanceExtensionNames, title); // VulkanInstance::createInstance() VkResult VulkanInstance::createInstance(vector<const char *>& layers, std::vector<const char *>& extensionNames, char const*const appName) { . . . VkInstanceCreateInfo instInfo = {}; // Specify the list of layer name to be enabled. instInfo.enabledLayerCount = layers.size(); instInfo.ppEnabledLayerNames = layers.data(); // Specify the list of extensions to // be used in the application. instInfo.enabledExtensionCount = extensionNames.size(); instInfo.ppEnabledExtensionNames = extensionNames.data(); . . . vkCreateInstance(&instInfo, NULL, &instance); } The validation layer is very specific to the vendors and SDK version. Therefore, it is advisable to first check whether the layers are supported by the underlying implementation before passing them to the vkCreateInstance() API. This way, the application remains portable throughout when ran against another driver implementation. The areLayersSupported() is a user-defined utility function that inspects the incoming layer names against system-supported layers. 
The unsupported layers are informed to the application and removed from the layer names before feeding them into the system: // VulkanLED.cpp VkBool32 VulkanLayerAndExtension::areLayersSupported (vector<const char *> &layerNames) { uint32_t checkCount = layerNames.size(); uint32_t layerCount = layerPropertyList.size(); std::vector<const char*> unsupportLayerNames; for (uint32_t i = 0; i < checkCount; i++) { VkBool32 isSupported = 0; for (uint32_t j = 0; j < layerCount; j++) { if (!strcmp(layerNames[i], layerPropertyList[j]. properties.layerName)) { isSupported = 1; } } if (!isSupported) { std::cout << "No Layer support found, removed” “ from layer: "<< layerNames[i] << endl; unsupportLayerNames.push_back(layerNames[i]); } else { cout << "Layer supported: " << layerNames[i] << endl; } } for (auto i : unsupportLayerNames) { auto it = std::find(layerNames.begin(), layerNames.end(), i); if (it != layerNames.end()) layerNames.erase(it); } return true; } The debug report is created using the vkCreateDebugReportCallbackEXT API. This API is not a part of Vulkan's core commands; therefore, the loader is unable to link it statically. If you try to access it in the following manner, you will get an undefined symbol reference error: vkCreateDebugReportCallbackEXT(instance, NULL, NULL, NULL); All the debug-related APIs need to be queried using the vkGetInstanceProcAddr() API and linked dynamically. The retrieved API reference is stored in a corresponding function pointer called PFN_vkCreateDebugReportCallbackEXT. The VulkanLayerAndExtension::createDebugReportCallback() function retrieves the create and destroy debug APIs, as shown in the following implementation: /********* VulkanLED.h *********/ // Declaration of the create and destroy function pointers PFN_vkCreateDebugReportCallbackEXT dbgCreateDebugReportCallback; PFN_vkDestroyDebugReportCallbackEXT dbgDestroyDebugReportCallback; /********* VulkanLED.cpp *********/ VulkanLayerAndExtension::createDebugReportCallback(){ . . . // Get vkCreateDebugReportCallbackEXT API dbgCreateDebugReportCallback=(PFN_vkCreateDebugReportCallbackEXT) vkGetInstanceProcAddr(*instance,"vkCreateDebugReportCallbackEXT"); if (!dbgCreateDebugReportCallback) { std::cout << "Error: GetInstanceProcAddr unable to locate vkCreateDebugReportCallbackEXT function.n"; return VK_ERROR_INITIALIZATION_FAILED; } // Get vkDestroyDebugReportCallbackEXT API dbgDestroyDebugReportCallback= (PFN_vkDestroyDebugReportCallbackEXT)vkGetInstanceProcAddr (*instance, "vkDestroyDebugReportCallbackEXT"); if (!dbgDestroyDebugReportCallback) { std::cout << "Error: GetInstanceProcAddr unable to locate vkDestroyDebugReportCallbackEXT function.n"; return VK_ERROR_INITIALIZATION_FAILED; } . . . } The vkGetInstanceProcAddr() API obtains the instance-level extensions dynamically; these extensions are not exposed statically on a platform and need to be linked through this API dynamically. The following is the signature of this API: PFN_vkVoidFunction vkGetInstanceProcAddr( VkInstance instance, const char* name); The following table describes the API fields: Parameters Description instance This is a VkInstance variable. If this variable is NULL, then the name must be one of these: vkEnumerateInstanceExtensionProperties, vkEnumerateInstanceLayerProperties, or vkCreateInstance. name This is the name of the API that needs to be queried for dynamic linking.   Using the dbgCreateDebugReportCallback()function pointer, create the debugging report object and store the handle in debugReportCallback. 
The second parameter of the API accepts a VkDebugReportCallbackCreateInfoEXT control structure. This data structure defines the behavior of the debugging, such as what the debug information should include: errors, general warnings, information, performance-related warnings, debug information, and so on. In addition, it also takes the reference of a user-defined function (debugFunction); this helps filter and print the debugging information once it is retrieved from the system. Here's the syntax for creating the debugging report: struct VkDebugReportCallbackCreateInfoEXT { VkStructureType type; const void* next; VkDebugReportFlagsEXT flags; PFN_vkDebugReportCallbackEXT fnCallback; void* userData; }; The following table describes the purpose of the mentioned API fields: Parameters Description type This is the type information of this control structure. It must be specified as VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT. flags This is to define the kind of debugging information to be retrieved when debugging is on; the list below defines these flags. fnCallback This field refers to the function that filters and displays the debug messages. The flags field can be a bitwise combination of the following VkDebugReportFlagBitsEXT values: VK_DEBUG_REPORT_INFORMATION_BIT_EXT (general information), VK_DEBUG_REPORT_WARNING_BIT_EXT (potentially incorrect API usage), VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT (non-optimal use of the API), VK_DEBUG_REPORT_ERROR_BIT_EXT (errors that may cause undefined results), and VK_DEBUG_REPORT_DEBUG_BIT_EXT (diagnostic information from the loader and layers). The createDebugReportCallback function implements the creation of the debug report. First, it fills the VkDebugReportCallbackCreateInfoEXT control structure (dbgReportCreateInfo) with relevant information. This primarily includes two things: first, assigning a user-defined function (pfnCallback) that will print the debug information received from the system (see the next point), and second, assigning the debugging flags (flags) in which the programmer is interested: /********* VulkanLED.h *********/ // Handle of the debug report callback VkDebugReportCallbackEXT debugReportCallback; // Debug report callback create information control structure VkDebugReportCallbackCreateInfoEXT dbgReportCreateInfo = {}; /********* VulkanLED.cpp *********/ VulkanLayerAndExtension::createDebugReportCallback(){ . . . // Define the debug report control structure, // provide the reference of 'debugFunction', // this function prints the debug information on the console. dbgReportCreateInfo.sType = VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT; dbgReportCreateInfo.pfnCallback = debugFunction; dbgReportCreateInfo.pUserData = NULL; dbgReportCreateInfo.pNext = NULL; dbgReportCreateInfo.flags = VK_DEBUG_REPORT_WARNING_BIT_EXT | VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT | VK_DEBUG_REPORT_ERROR_BIT_EXT | VK_DEBUG_REPORT_DEBUG_BIT_EXT; // Create the debug report callback and store the handle // into 'debugReportCallback' result = dbgCreateDebugReportCallback (*instance, &dbgReportCreateInfo, NULL, &debugReportCallback); if (result == VK_SUCCESS) { cout << "Debug report callback object created successfully\n"; } return result; } Define the debugFunction() function that prints the retrieved debug information in a user-friendly way.
It describes the type of debug information along with the reported message: VKAPI_ATTR VkBool32 VKAPI_CALL VulkanLayerAndExtension::debugFunction( VkFlags msgFlags, VkDebugReportObjectTypeEXT objType, uint64_t srcObject, size_t location, int32_t msgCode, const char *pLayerPrefix, const char *pMsg, void *pUserData){ if (msgFlags & VK_DEBUG_REPORT_ERROR_BIT_EXT) { std::cout << "[VK_DEBUG_REPORT] ERROR: [" <<layerPrefix<<"] Code" << msgCode << ":" << msg << std::endl; } else if (msgFlags & VK_DEBUG_REPORT_WARNING_BIT_EXT) { std::cout << "[VK_DEBUG_REPORT] WARNING: ["<<layerPrefix<<"] Code" << msgCode << ":" << msg << std::endl; } else if (msgFlags & VK_DEBUG_REPORT_INFORMATION_BIT_EXT) { std::cout<<"[VK_DEBUG_REPORT] INFORMATION:[" <<layerPrefix<<"] Code" << msgCode << ":" << msg << std::endl; } else if(msgFlags& VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT){ cout <<"[VK_DEBUG_REPORT] PERFORMANCE: ["<<layerPrefix<<"] Code" << msgCode << ":" << msg << std::endl; } else if (msgFlags & VK_DEBUG_REPORT_DEBUG_BIT_EXT) { cout << "[VK_DEBUG_REPORT] DEBUG: ["<<layerPrefix<<"] Code" << msgCode << ":" << msg << std::endl; } else { return VK_FALSE; } return VK_SUCCESS; } The following table describes the various fields from the debugFunction()callback: Parameters Description msgFlags This specifies the type of debugging event that has triggered the call, for example, an error, warning, performance warning, and so on. objType This is the type object that is manipulated by the triggering call. srcObject This is the handle of the object that's being created or manipulated by the triggered call. location This refers to the place of the code describing the event. msgCode This refers to the message code. layerPrefix This is the layer responsible for triggering the debug event. msg This field contains the debug message text. userData Any application-specific user data is specified to the callback using this field.  The debugFunction callback has a Boolean return value. The true return value indicates the continuation of the command chain to subsequent validation layers even after an error is occurred. However, the false value indicates the validation layer to abort the execution when an error occurs. It is advisable to stop the execution at the very first error. Having an error itself indicates that something has occurred unexpectedly; letting the system run in these circumstances may lead to undefined results or further errors, which could be completely senseless sometimes. In the latter case, where the execution is aborted, it provides a better chance for the developer to concentrate and fix the reported error. In contrast, it may be cumbersome in the former approach, where the system throws a bunch of errors, leaving the developers in a confused state sometimes. In order to enable debugging at vkCreateInstance, provide dbgReportCreateInfo to the VkInstanceCreateInfo’spNext field: VkInstanceCreateInfo instInfo = {}; . . . instInfo.pNext = &layerExtension.dbgReportCreateInfo; vkCreateInstance(&instInfo, NULL, &instance); Finally, once the debug is no longer in use, destroy the debug callback object: void VulkanLayerAndExtension::destroyDebugReportCallback(){ VulkanApplication* appObj = VulkanApplication::GetInstance(); dbgDestroyDebugReportCallback(instance,debugReportCallback,NULL); } The following is the output from the implemented debug report. Your output may differ from this based on the GPU vendor and SDK provider. 
Also, the explanation of the errors or warnings reported is very specific to the SDK itself. But at a higher level, the specification will hold; this means you can expect to see a debug report with warnings, information, debugging help, and so on, based on the debugging flags you have turned on.

Summary

This article was short, precise, and full of practical implementations. Working on Vulkan without debugging capabilities is like shooting in the dark. We know very well that Vulkan demands an appreciable amount of programming and developers make mistakes for obvious reasons; they are humans after all. We learn from our mistakes, and debugging allows us to find and correct these errors. It also provides insightful information to build quality products.

Let's do a quick recap. We learned the Vulkan debugging process. We looked at the various LunarG validation layers and understood the roles and responsibilities offered by each one of them. Next, we added a few selected validation layers that we were interested in debugging. We also added the debug extension that exposes the debugging capabilities; without this, the API's definition could not be dynamically linked to the application. Then, we implemented the Vulkan create debug report callback and linked it to our debug reporting callback; this callback decorates the captured debug report in a user-friendly and presentable fashion. Finally, we implemented the API to destroy the debugging report callback object.

Resources for Article:

Further resources on this subject:

Get your Apps Ready for Android N [article]

Multithreading with Qt [article]

Manage Security in Excel [article]
Turn Your Life into a Gamified Experience with Unity

Packt
14 Nov 2016
29 min read
In this article by Lauren S. Ferro, from the book Gamification with Unity 5.x, we will look into a gamified experience with Unity. In a world full of work, chores, and dull things, we all must find the time to play. We must allow ourselves to be immersed in enchanted worlds of fantasy and to explore faraway and uncharted exotic islands that form the mysterious worlds. We may also find hidden treasure while confronting and overcoming some of our worst fears. As we enter these utopian and dystopian worlds, mesmerized by the magic of games, we realize anything and everything is possible and all that we have to do is imagine. Have you ever wondered what Gamification is? Join us as we dive into the weird and wonderful world of gamifying real-life experiences, where you will learn all about game design, motivation, prototyping, and bringing all your knowledge together to create an awesome application. Each chapter in this book is designed to guide you through the process of developing your own gamified application, from the initial idea to getting it ready and then published. The following is just a taste of what to expect from the journey that this book will take you on.

(For more resources related to this topic, see here.)

Not just pixels and programming

The origins of gaming have an interesting and ancient history. It stems as far back as the ancient Egyptians with the game Senet; and long since the reign of great Egyptian Kings, we have seen games as a way to demonstrate our strength and stamina, with the ancient Greeks and Romans. However, as time elapsed, games have not only developed from the marble pieces of Senet or the glittering swords of battles, they have also adapted to changes in the medium: from stone to paper, and from paper to technology. We saw the rise and development of physical games (such as table top and card games) through to games that require us to physically move our characters using our bodies and peripherals (PlayStation Move and WiiMote), in order to interact with the gaming environment (Wii Sports and Heavy Rain). So, now we not only have the ability to create 3D virtual worlds with virtual reality, but we can also enter these worlds and have them enter ours with augmented reality. Therefore, it is important to remember that, as the following image of Dungeons and Dragons shows, games don't have to take on a digital form; they can also be physical:

Dungeons and Dragons board with figurines and dice

Getting contextual

At the beginning of designing a game or game-like experience, designers need to consider the context in which the experience is to take place. Context is an important consideration for how it may influence the design and development of the game (such as hardware, resources, and target group). The way in which a designer may create a game-like experience varies. For example, a game-like experience aimed to encourage students to submit assessments on time will be designed differently from one promoting customer loyalty. In this way, the designer should be more context aware and, as a result, may be more likely to keep the context in view during the design process. Education: Games can be educational, and they may be designed specifically to teach or to have elements of learning entwined into them to support learning materials. Depending on the type of learning game, it may include formal (educational institutions) or informal educational environments (learning a language for a business trip).
Therefore, if you are thinking about creating an educational game, you might need to think about these considerations in more detail. Business: Maybe your intention is get your employees to arrive on time or to finish reports in the afternoon rather than right before they go home. Designing content for use within a business context targets situations that occur within the workplace. It can include objectives such as increasing employee productivity (individual/group). Personal: Getting personal with game-like applications can relate specifically to creating experiences to achieve personal objectives. These may include personal development, personal productivity, organization, and so on. Ultimately, only one person maintains these experiences; however, other social elements, such as leaderboards and group challenges, can bring others into the personal experience as well. Game: If it is not just educational, business, or personal development, chances are that you probably want to create a game to be a portal into lustrous worlds of wonder or to pass time on the evening commute home. Pure gaming contexts have no personal objectives (other than to overcome challenges of course). Who is our application targeting and where do they come from? Understanding the user is one of the most important considerations for any approach to be successful. User considerations not only include the demographics of the user (for example, who they are and where they are from), but also the aim of the experience, the objectives that you aim to achieve, and outcomes that the objectives lead to. In this book, this section considers the real-life consequences that your application/game will have on its audience. For example, will a loyalty application encourage people to engage with your products/store in the areas that you're targeting it toward. Therefore, we will explore ways that your application can obtain demographic data in Unity. Are you creating a game to teach Spanish to children, teenagers, or adults? This will change the way that you will need to think about your audience. For example, children tend to be users who are encouraged to play by their parents, teenagers tend to be a bit more autonomous but may still be influenced by their parents, and adults are usually completely autonomous. Therefore, this can influence the amount and the type of feedback that you can give and how often. Where are your audience from? For example, are you creating an application for a global reward program or a local one? This will have an effect on whether or not you will incorporate things like localization features so that the application adapts to your audience automatically or whether it's embedded into the design. What kind of devices does your audience use? Do they live in an area where they have access to a stable Internet connection? Do they need to have a powerful system to run your game or application? Chances are if the answer is yes for the latter question then you should probably take a look at how you will optimize your application. What is game design? Many types of games exist and so do design approaches. There are different ways that you can design and implement games. Now, let's take a brief look at how games are made, and more importantly, what they are made of: Generating ideas: This involves thinking about the story that we want to tell, or a trip that we may want the player to go on. At this stage, we're just getting everything out of our head and onto the paper. 
Everything and anything should be written; the stranger and abstract the idea, the better. It's important at this stage not to feel trapped that an idea may not be suitable. Often, the first few ideas that we create are the worst, and the great stuff comes from iterating all the ideas that we put down in this stage. Talk about your ideas with friends and family, and even online forums are a great place to get feedback on your initial concepts. One of the first things that any aspiring game designer can begin with is to look at what is already out there. A lot is learned when we succeed—or fail—especially why and how. Therefore, at this stage, you will want to do a bit of research about what you are designing. For instance, if you're designing an application to teach English, not only should you see other similar applications that are out there but also how English is actually taught, even in an educational environment. While you are generating ideas, it is also useful to think about the technology and materials that you will use along the way. What game engine is better for your game's direction? Do you need to purchase licenses if you are intending to make your game commercial? Answering these kinds of questions earlier can save many headaches later on when you have your concept ready to go. Especially, if you will need to learn how to use the software, as some have steep learning curves. Defining your idea: This is not just a beautiful piece of art that we see when a game is created; it can be rough, messy, and downright simple, but it communicates the idea. Not just this; it also communicates the design of the game's space and how a player may interact and even traverse it. Concept design is an art in itself and includes concepts on environments, characters puzzles, and even the quest itself. We will take the ideas that we had during the idea generation and flesh them out. We begin to refine it, to see what works and what doesn't. Again, get feedback. The importance of feedback is vital. When you design games, you often get caught up; you are so immersed in your ideas, and they make sense to you. You have sorted out every details (at least for the most part, it feels like that). However, you aren't designing for you, you are designing for your audience, and getting an outsiders opinion can be crucial and even offer a perspective that you may not necessarily would have thought of. This stage also includes the story. A game without a story is like a life without existence. What kind of story do you want your player to be a part of? Can they control it, or is it set in stone? Who are the characters? The answers to these questions will breathe soul into your ideas. While you design your story, keep referring to the concept that you created, the atmosphere, the characters, and the type of environment that you envision. Some other aspects of your game that you will need to consider at this stage are as follows: How will your players learn how to play your game? How will the game progress? This may include introducing different abilities, challenges, levels, and so on. Here is where you will need to observe the flow of the game. Too much happening and you will have a recipe for chaos, not enough and your player will get bored. What is the number of players that you envision playing your game, even if you intend for a co-op or online mode? What are the main features that will be in your game? How will you market your game? Will there be an online blog that documents the stages of development? 
Will it include interviews with different members of the team? Will there be different content that is tailored for each network (for example, Twitter, Facebook, Instagram, and so on). Bringing it together: This involves thinking about how all your ideas will come together and how they will work, or won't. Think of this stage as creating a painting. You may have all pieces, but you need to know how to use them to create the piece of art. Some brushes (for example, story, characters) work better with some paints (for example, game elements, mechanics), and so on. This stage is about bringing your ideas and concepts into reality. This stage features design processes, such as the following: Storyboards that will give an overview of how the story and the gameplay evolve throughout the game. Character design sheets that will outline characteristics about your characters and how they fit into the story. Game User Interfaces (GUIs) that will provide information to the player during gameplay. This may include elements, such as progress bars, points, and items that they will collect along the way. Prototyping: This is where things get real…well, relatively. It may be something as simple as a piece of paper or something more complex as a 3D model. You then begin to create the environments or the levels that your player will explore. As you develop your world, you will take your content and populate the levels. Prototyping is where we take what was in our head and sketched out on paper and use it to sculpt the gameful beast. The main purpose of this stage is to see how everything works, or doesn't. For example, the fantastic idea of a huge mech-warrior with flames shooting out of an enormous gun on its back was perhaps not the fantastic idea that was on paper, at least not in the intended part of the game. Rapid prototyping is fast and rough. Remember when you were in school and you had things, such as glue, scissors, pens, and pencils; well, that is what you will need for this. It gets the game to a functioning point before you spend tireless hours in a game engine trying to create your game. A few bad rapid prototypes early on can save a lot of time instead of a single digital one. Lastly, rapid prototyping isn't just for the preliminary prototyping phase. It can be used before you add in any new features to your game once it's already set up. Iteration: This is to the game what an iron is to a creased shirt. You want your game to be on point and iterating it gets it to that stage. For instance, that awesome mech-warrior that you created for the first level was perhaps better as the final boss. Iteration is about fine-tuning the game, that is, to tweak it so that it not only flows better overall, but also improves the gameplay. Playtesting: This is the most important part of the whole process once you have your game to a relatively functioning level. The main concept here is to playtest, playtest, and playtest. The importance of this stage cannot be emphasized enough. More often than not, games are buggy when finally released, with problems and issues that could be avoided during this stage. As a result, players lose interest and reviews contain frustration and disappointment, which—let's face it—we don't want after hours and hours of blood, sweat, and tears. The key here is not only to playtest your game but also to do it in multiple ways and on multiple devices with a range of different people. If you release your game on PC, test it on a high performance one and a low performance one. 
The same process should be applied for mobile devices (phones, tablets) and operating systems. Evaluate: Evaluate your game based on the playtesting. Iterating, playtesting, and evaluating are three steps that you will go through on a regular basis, more so as you implement a new feature or tweak an existing one. This cycle is important. You wouldn't buy a car that has parts added without being tested first, so why should a player buy a game with untested features? Build: Build your game and get it ready for distribution, whether on CD or online as a digital download. Publish: Publish your game! Your baby has come of age and is ready to be released out into the wild, where it will be a portal for players around the world to enter the world that you (and your team) created from scratch.

Getting gamified

When we merge everyday objectives with games, we create gamified experiences. The aim of these experiences is to improve something about ourselves in ways that are ideally more motivating than how we perceive them in real life. For example, think of something that you find difficult to stay motivated with. This may be anything from managing your finances, to learning a new language, or even exercising. Now, if you make a deal with yourself to buy a new dress once you finish managing your finances or to go on a trip once you have learned a new language, you are turning the experience into a game. The rules are simply to finish the task; the condition of finishing it results in a reward—in the preceding example, either a dress or the trip. The fundamental thing to remember is that gamified experiences aim to make ordinary tasks extraordinary and enjoyable for the player. Games, gaming, and game-like experiences can give rise to many types of opportunities for us to play or even escape reality. To finish this brief exploration into the design of games, we must realize that games are not solely about sitting in front of the TV, playing on the computer, or being glued to the seat transfixed on a digital character dodging bullets. The use of game mechanics to make a task more engaging and fun is what we define as "Gamification." Gamification relates to games, and not play; while the term has become popular, the concept is not entirely new. Think about loyalty cards, not just frequent flyer mile programs, but maybe even at your local butcher or café. Do you get a discount after a certain number of purchases? For example, maybe the tenth coffee is free. Various reward schemes like these have been in place for a while: giving children a reward for completing household chores or for good behavior and rewarding "gold stars" for academic excellence is gamification. If you consider some social activities, such as Scouts, they utilize "gamification" as part of their procedures. Scouts learn new skills and cooperate, and through doing so, they achieve status and receive badges of honor that demonstrate levels of competency. Gamification has become a favorable approach to "engaging" clients with new and exciting design schemes to maintain interest and promote a more enjoyable and ideally "fun" product. The product in question does not have to be "digital." Therefore, "gamification" can exist both in a physical realm (as mentioned before with the rewarding of gold stars) as well as in a more prominent digital sense (for example, badge and point reward systems) as an effective way to motivate and engage users.
Some common examples of gamification include the following: Loyalty programs: Each time you engage with the company in a particular way, such as buying certain products or amount of, you are rewarded. These rewards can include additional products, points toward items, discounts, and even free items. School House points: A pastime that some of us may remember, especially fans of Harry Potter is that each time you do the right thing, such as follow the school rules, you get some points. Alternatively, you do the wrong thing, and you lose points. Scouts: It rewards levels of competency with badges and ranks. The more skilled you are, the more badges you collect, wear, and ultimately, the faster you work your way up the hierarchy. Rewarding in general: This will often be associated with some rules, and these rules determine whether or not you will get a reward. Eat your vegetables, you will get dessert; do your math homework, you will get to play. Both have winning conditions. Tests: As horrifying as it might sound, tests can be considered as a game. For example, we're on a quest to learn about history. Each assignment you get is like a task, preparing you for the final battle—the exam. At the end of all these assessments, you get a score or a grade that indicates to you your progress as you pass from one concept to the next. Ultimately, your final exam will determine your rank among your peers and whether or not you made it to the next level (that being anywhere from your year level to a university). It may be also worth noting that just as in games, you also have those trying to work the system, searching for glitches in the system that they can exploit. However, just as in games, they too eventually are kicked. One last thing to remember when you design anything targeted toward kids is that they can be a lot more perceptive than what we sometimes give them credit for. Therefore, if you "disguise" educational content with gameplay, it is likely that they will see through it. It's the same with adults; they know that they are monitoring their health or spending habits, it's your job to make it a little less painful. Therefore, be upfront, transparent, and cut through the "disguise." Of course, kids don't want to be asked to "play a game about maths" but they will be more interested in "going on adventures to beat the evil dragon with trigonometry." The same goes for adults; creating an awesome character that can be upgraded to a level-80 warrior for remembering to take out the trash, keep hydrated, and eat healthier is a lot better than telling them this is a "fun" application to become a better person. There is no I in Team Working on our own can be good, sometimes working with others can be better! However, the problem with working in a team is that we're all not equal. Some of us are driven by the project, with the aim to get the best possible outcome, whereas, others are driven by fame, reward, money, and the list goes on. If you ever worked on a group project in school, then you know exactly what it's like. Agile gamification is, to put simply, getting teams to work better together. Often, large complex projects encounter a wide range of problems from keeping on top of schedules, different perspectives, undefined roles, and a lack of overall motivation. Agile frameworks in this context are associated with the term Scrum. This describes an overall framework used to formalize software development projects. 
The Scrum process works as follows: The owner of the product will create a wish list known as the product backlog. Once the sprint planning begins, members of the team (between 3-9 people) will take sections from the top of the product backlog. Sprint planning involves the following: It involves listing all of the items that are needed to be completed for the project (in a story format—who, what, and why). This list needs to be prioritized. It includes estimating each task relatively (using the Fibonacci system). It involves planning the work sprint (1-2 week long, but less than 1 month long) and working toward a demo. It also involves making the work visible using a storyboard that contains the following sections: To do, Doing, and Done. Items begin in the To do section; once they have begun, they move to the Doing section; and once they are completed, they are then put in the Done section. The idea is that the team works through tasks in the burn down chart. Ideally, the amount of points that the sprint began with (in terms of tasks to be done) decreases in value each day you get closer to finishing the sprint. The team engages with daily meetings (preferably standing up) run by the Sprint/Scrum master. These meetings discuss what was done, what is planned to be done during the day, any issues that come up or might come up, and how can improvements be made. It provides a demonstration of the product's basic (working) features. During this stage, feedback is provided by the product owner as to whether or not they are happy with what has been done, the direction that it is going, and how it will relate to the remaining parts of the project. At this stage, the owner may ask you to improve it, iterate it, and so forth, for the next sprint. Lastly, the idea is to get the team together and to review the development of the project as a whole: what went well and what didn't go so well and what are the areas of improvement that can then be used to make the next Scrum better? Next, they will decide on how to implement each section. They will meet each day to not only assess the overall progress made for the development of each section but also to ensure that the work will be achieved within the time frame. Throughout the process, the team leader known as the Scrum/Sprint Master has the job of ensuring that the team stays focused and completes sections of the product backlog on time. Once the sprint is finished, the work should be at a level to be shipped, sold to the customer, or to at least show to a stakeholder. At the end of the sprint, the team and Scrum/Sprint Master assess the completed work and determine whether it is at an acceptable level. If the work is approved, the next sprint begins. Just as the first sprint, the team chooses another chunk of the product backlog and begins the process again.An overview of the Scrum process However, in the modern world, Scrum is adopted and applied to a range of different contexts outside of software development. As a result, it has gone through some iterations, including gamification. Agile Gamification, as it is more commonly known as, takes the concept of Scrum and turns it into a playful experience. Adding an element of fun to agile frameworks To turn the concept of Scrum into something a bit more interesting and at the same time to boost the overall motivation of your team, certain parts of it can be transformed with game elements. 
For example, implementing leaderboards based on the number of tasks that each team member is able to complete (and on time) results in a certain number of points. By the end of the sprint, the team member with the most points may be able to obtain a reward, such as a bonus in their next pay or an extended lunch break. It is also possible to make the burn down chart a bit more exciting by placing various bonuses if certain objectives are met within a certain time frame or at a certain point during the burn down; as a result, giving added incentive to team members to get things delivered on time. In addition, to ensure that quality standards are also maintained, Scrum/Sprint Masters can also provide additional rewards if there is little or no feedback regarding things such as quality or the overall cohesiveness of the output from the sprint. An example of a gamified framework can be seen in the image below. While setting up a DuoLingo Classroom account, users are presented with various game elements (for example, a progress bar) and a checklist to ensure that everything that needs to be completed is done.

Playtesting

This is one of the most important parts of your game design. In fact, you cannot expect to have a great game without it. Playtesting is not just about checking whether your game works, or if there are bugs; it is also about finding out what people really think about it before you put it out in the world to see. In some cases, playtesting can make the difference between succeeding or failing epically. Consider this scenario: you have spent the last year, your blood, sweat and tears, and even your soul to create something fantastic. You probably think it's the best thing out there. Then, after you release it, you realize that only half the game was balanced, or worse, only half interesting. At this stage, you will feel pretty down, but all of this could have been avoided if you had taken the time to get some feedback. As humans, we don't necessarily like to hear our greatest love being criticized, especially if we have committed so much of our lives to it. However, the thing to keep in mind is that this stage shapes the final details. Playtesting is not meant only for the final stages, when your game is close to being finished. At each stage, even when you begin to get a basic prototype completed, it should be playtested. During these stages, it does not have to be large-scale testing; it can be done by a few colleagues, friends, or even family who can give you an idea of whether or not you're heading in the right direction. Of course, the other important thing to keep in mind is that the people who are testing your game should be as close as possible to, if not exactly, your target audience. For instance, a gamified application that you're creating to encourage people to take medication on a regular basis is not ideal to test with people who do not take medication. Sure, they may be able to give general feedback, such as on user interface elements or even interaction, but in terms of its effectiveness, you're better off taking the time to recruit more specific people.

Iterating

After we have done all the playtesting, it is time to re-plan another development cycle. In fact, the work of tuning your application doesn't stop after the first tests. On the contrary, it goes through different iterations many times. The iteration cycle starts with the planning stage, which includes brainstorming, organizing the work (as we saw, for instance, in Scrum), and so on.
In the next phase, development, we actually create the application, as we did in the previous chapter. Then, there is the playtesting, which we saw earlier in this chapter. In the latter stage, we tune and tweak values and fix bugs from our application. Afterward, we iterate the whole cycle again, by entering in the planning stage again. Here, we will need to plan the next iteration: what should be left and what should be done better or what to remove. All these decisions should be based on what we have collected in the playtesting stage. The cycle is well represented in the following diagram as a spiral that goes on and on through the process: The point of mentioning it now is because after you finish playtesting your game, you will need to repeat the stages that we have done previously, again. You will have to modify your design; you may need to even redesign things again. So, it is better to think of this as upgrading your design, rather than a tedious and repetitive process. When to stop? In theory, there is no stopping; the more the iteration, the better the application will be. Usually, the iterations stop when the application is well enough for your standards or when external constrains, such as the market or deadlines, don't allow you to perform any more iteration. The question when to stop? is tricky, and the answer really depends on many factors. You will need to take into account the resources needed to perform another iteration and time constraints. Of course, remember that your final goal is to deliver a quality product to your audience and each iteration is a step closer. Taking in the view with dashboards Overviews, summaries, and simplicity make life easier. Dashboards are a great way for keeping a lot of information relatively concise and contained, without being too overwhelming to a player. Of course, if the players want to obtain more detailed information, perhaps statistics about their accuracy since they began, they will have the ability to do so. So, what exactly is a dashboard? A dashboard is a central hub to view all of your progress, achievements, points, and rewards. If we take a look at the following screenshot, we can get a rough idea about what kind of information that they display. The image on the left is the dashboard for Memrise and displays current language courses, in this case, German; the players' achievements and streak; and the progress that they are making in the course. On the right is the dashboard for DuoLingo. Similar to Memrise, it also features information about daily streaks, amount of time committed, and the strength of each category learned for the new language, in this case, Italian. By just looking at these dashboards, the player can get a very quick idea about how well or bad they are doing.   Different dashboards (left) Memrise (right) DuoLingo Different approaches to dashboards can encourage different behaviors depending on the data displayed and how it is displayed. For example, you can have a dashboard that provides reflective information more dominantly, such as progress bars and points. Others can provide a more social approach by displaying the players rank among friends and comparing their statistics to others who are also engaged with the application. Some dashboards may even suggest friends that have similar elements in common, such as the language that is being learned. Ideally, the design of dashboards can be as simple or as complicated as the designer decides, but typically, the less is more approach is better. 
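To make this a little more concrete in Unity terms, here is a small, hedged C# sketch of the kind of script that could drive a very simple dashboard: a fill-style progress bar plus a points label built with Unity's UI system. The ProgressDashboard class and its field names are illustrative assumptions, not code from the book:

using UnityEngine;
using UnityEngine.UI;

// A minimal dashboard script: shows overall progress as a filled bar
// and the player's current points as text.
public class ProgressDashboard : MonoBehaviour
{
    [SerializeField] private Image progressFill;   // UI Image with its type set to Filled
    [SerializeField] private Text pointsLabel;     // UI Text element for the points total

    // Call this whenever the player's stats change.
    public void Refresh(int currentPoints, int pointsForNextLevel)
    {
        if (pointsForNextLevel <= 0)
        {
            return; // avoid dividing by zero on misconfigured values
        }

        // fillAmount expects a value between 0 and 1.
        progressFill.fillAmount = Mathf.Clamp01((float)currentPoints / pointsForNextLevel);
        pointsLabel.text = currentPoints + " pts";
    }
}

Keeping the dashboard logic in one small component like this also makes the "less is more" approach easier to maintain as more statistics are added later.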
Summary Everything that we discussed in this chapter is just a taste of what this book offers. Each aspect of the design process is explained in more detail, giving you not only the information, but also the practical skills that you can use to build upon and develop any gamified application from start to finish. If you want to find out about gamification, how to use it, and more importantly how to implement it into Unity, then this book is a great foundation to get you going. In particular, you will learn how to apply all these concepts into Unity and create gamified experiences. Furthermore, the book will bring you to create a gamified application starting from the basic pieces, with a particular focus to your audience and your goals. Learning about the uses of gamification does not have to stop with this book. In fact, there are many ways that you can develop the knowledge that you have gained and apply it to other tasks. Some other Packt books, such as the Unity UI Cookbook by Francesco Sapio, which you can obtain at https://www.packtpub.com/game-development/unity-ui-cookbook features a range of different recipes to implement a range of different UI elements that can even be featured in your dashboard. In fact, UIs are the key for the development of gamifed experiences and applications. The main thing is that you continue to learn, adapt, and to apply your knowledge in many different types of contexts. Resources for Article: Further resources on this subject: Buildbox 2 Game Development: peek-a-boo [article] Customizing the Player Character [article] Sprites in Action [article]
Designing Games with Swift

Packt
03 Nov 2016
16 min read
In this article by Stephen Haney, the author of the book Swift 3 Game Development - Second Edition, we will see that apple's newest version of its flagship programming language, Swift 3, is the perfect choice for game developers. As it matures, Swift is realizing its opportunity to be something special, a revolutionary tool for app creators. Swift is the gateway for developers to create the next big game in the Apple ecosystem. We have only started to explore the wonderful potential of mobile gaming, and Swift is the modernization we need for our toolset. Swift is fast, safe, current, and attractive to developers coming from other languages. Whether you are new to the Apple world, or a seasoned veteran of Objective-C, I think you will enjoy making games with Swift. (For more resources related to this topic, see here.) Apple's website states the following: "Swift is a successor to the C and Objective-C languages." My goal is to guide you step-by-step through the creation of a 2D game for iPhones and iPads. We will start with installing the necessary software, work through each layer of game development, and ultimately publish our new game to the App Store. We will also have some fun along the way! We aim to create an endless flyer game featuring a magnificent flying penguin named Pierre. What is an endless flyer? Picture hit games like iCopter, Flappy Bird, Whale Trail, Jetpack Joyride, and many more—the list is quite long. Endless flyer games are popular on the App Store, and the genre necessitates that we cover many reusable components of 2D game design. I will show you how to modify our mechanics to create many different game styles. My hope is that our demo project will serve as a template for your own creative works. Before you know it, you will be publishing your own game ideas using the techniques we explore together. In this article, we will learn the following topics: Why you will love Swift What you will learn in this article New in Swift 3 Setting up your development environment Creating your first Swift game Why you will love Swift Swift, as a modern programming language, benefits from the collective experience of the programming community; it combines the best parts of other languages and avoids poor design decisions. Here are a few of my favorite Swift features: Beautiful syntax: Swift's syntax is modern and approachable, regardless of your existing programming experience. Apple balanced syntax with structure to make Swift concise and readable. Interoperability: Swift can plug directly into your existing projects and run side-by-side with your Objective-C code. Strong typing: Swift is a strongly typed language. This means the compiler will catch more bugs at compile time, instead of when your users are playing your game! The compiler will expect your variables to be of a certain type (int, string, and so on) and will throw a compile-time error if you try to assign a value of a different type. While this may seem rigid if you are coming from a weakly typed language, the added structure results in safer, more reliable code. Smart type inference: To make things easier, type inference will automatically detect the types of your variables and constants based upon their initial value. You do not need to explicitly declare a type for your variables. Swift is smart enough to infer variable types in most expressions. Automatic memory management: As the Apple Swift developer guide states, "memory management just works in Swift". 
Swift uses a method called Automatic Reference Counting (ARC) to manage your game's memory usage. Besides a few edge cases, you can rely on Swift to safely clean up and turn off the lights. An even playing field: One of my favorite things about Swift is how quickly the language is gaining mainstream adoption. We are all learning and growing together, and there is a tremendous opportunity to break new ground. Open source: From version 2.2 onwards, Apple made Swift open source, curating it through the website www.swift.org, and launched a package manager with Swift 3. This is a welcome change as it fosters greater community involvement and a larger ecosystem of third party tools and add-ons. Eventually, we should see Swift migrate to new platforms.

Prerequisites

I will try to make this text easy to understand for all skill levels:

I will assume you are brand new to Swift as a language
It requires no prior game development experience, though it will help
I will assume you have a fundamental understanding of common programming concepts

What you will learn in this article

You will be capable of creating and publishing your own iOS games. You will know how to combine the techniques we learned to create your own style of game, and you will be well prepared to dive into more advanced topics with a solid foundation in 2D game design.

Embracing SpriteKit

SpriteKit is Apple's 2D game development framework and your main tool for iOS game design. SpriteKit will handle the mechanics of our graphics rendering, physics, and sound playback. As far as game development frameworks go, SpriteKit is a terrific choice. It is built and supported by Apple and thus integrates perfectly with Xcode and iOS. You will learn to be highly proficient with SpriteKit as we will be using it exclusively in our demo game. We will learn to use SpriteKit to power the mechanics of our game in the following ways:

Animate our player, enemies, and power-ups
Paint and move side-scrolling environments
Play sounds and music
Apply physics, such as gravity and impulses, for movement
Handle collisions between game objects

Reacting to player input

The control schemes in mobile games must be inventive. Mobile hardware forces us to simulate traditional controller inputs, such as directional pads and multiple buttons, on the screen. This takes up valuable visible area, and provides less precision and feedback than with physical devices. Many games operate with only a single input method: a single tap anywhere on the screen. We will learn how to make the best of mobile input, and explore new forms of control by sensing device motion and tilt.

Structuring your game code

It is important to write well-structured code that is easy to re-use and modify as your game design inevitably changes. You will often find mechanical improvements as you develop and test your games, and you will thank yourself for a clean working environment. Though there are many ways to approach this topic, we will explore some best practices to build an organized system with classes, protocols, inheritance, and composition.

Building UI/menus/levels

We will learn to switch between scenes in our game with a menu screen. We will cover the basics of user experience design and menu layout as we build our demo game.

Integrating with Game Center

Game Center is Apple's built-in social gaming network. Your game can tie into Game Center to store and share high scores and achievements. We will learn how to register for Game Center, tie it into our code, and create a fun achievement system.
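As a quick taste of what that tie-in can look like, the following is a minimal, hedged Swift 3 sketch of authenticating the local player with GameKit. The GameCenterHelper class and the presenting view controller are illustrative assumptions rather than part of the book's project; the finished game may wire this up differently:

import UIKit
import GameKit

class GameCenterHelper {
    // Ask Game Center to authenticate the local player. If iOS needs to
    // show a sign-in screen, we present the view controller it hands back;
    // otherwise we are ready to report scores and achievements.
    func authenticateLocalPlayer(from presenter: UIViewController) {
        let localPlayer = GKLocalPlayer.localPlayer()
        localPlayer.authenticateHandler = { viewController, error in
            if let viewController = viewController {
                presenter.present(viewController, animated: true, completion: nil)
            } else if localPlayer.isAuthenticated {
                print("Game Center: player authenticated")
            } else {
                print("Game Center unavailable: \(error?.localizedDescription ?? "unknown error")")
            }
        }
    }
}

Once the local player is authenticated, the same GameKit APIs can be used to submit scores and unlock achievements.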
Maximizing fun If you are like me, you will have dozens of ideas for games floating around your head. Ideas come easily, but designing fun game play is difficult! It is common to find that your ideas need game play enhancements once you see your design in action. We will look at how to avoid dead-ends and see your project through to the finish line. Plus, I will share my tips and tricks to ensure your game will bring joy to your players. Crossing the finish line Creating a game is an experience you will treasure. Sharing your hard work will only sweeten the satisfaction. Once our game is polished and ready for public consumption, we will navigate the App Store submission process together. You will finish feeling confident in your ability to create games with Swift and bring them to market in the App Store. Monetizing your work Game development is a fun and rewarding process, even without compensation, but the potential exists to start a career, or side-job, selling games on the App Store. Successfully promoting and marketing your game is an important task. I will outline your options and start you down the path to monetization. New in Swift 3 The largest feature in Swift 3 is syntax compatibility and stability. Apple is trying to refine its young, shifting language into its final foundational shape. Each successive update of Swift has introduced breaking syntax changes that made older code incompatible with the newest version of Swift; this is very inconvenient for developers. Going forward, Swift 3 aims to reach maturity and maintain source compatibility with future releases of the language. Swift 3 also features the following:  A package manager that will help grow the ecosystem A more consistent, readable API that often results in less code for the same result Improved tooling and bug fixes in the IDE, Xcode Many small syntax improvements in consistency and clarity Swift has already made tremendous steps forward as a powerful, young language. Now Apple is working on polishing Swift into a mature, production-ready tool. The overall developer experience improves with Swift 3. Setting up your development environment Learning a new development environment can be a roadblock. Luckily, Apple provides some excellent tools for iOS developers. We will start our journey by installing Xcode. Introducing and installing Xcode Xcode is Apple's Integrated Development Environment (IDE). You will need Xcode to create your game projects, write and debug your code, and build your project for the App Store. Xcode also comes bundled with an iOS simulator to test your game on virtualized iPhones and iPads on your computer. Apple praises Xcode as "an incredibly productive environment for building amazing apps for Mac, iPhone, and iPad".   To install Xcode, search for Xcode in the AppStore, or visit http://developer.apple.com and select Developer and then Xcode. Swift is continually evolving, and each new Xcode release brings changes to Swift. If you run into errors because Swift has changed, you can always use Xcode's built-in syntax update tool. Simply use Xcode's Edit | Convert to Latest Syntax option to update your code. Xcode performs common IDE features to help you write better, faster code. If you have used IDEs in the past, then you are probably familiar with auto completion, live error highlighting, running and debugging a project, and using a project manager pane to create and organize your files. However, any new program can seem overwhelming at first. 
We will walk through some common interface functions over the next few pages. I have also found tutorial videos on YouTube to be particularly helpful if you are stuck. Most common search queries result in helpful videos. Creating our first Swift game Do you have Xcode installed? Let us see some game code in action in the simulator! We will start by creating a new project in Xcode. For our demo game, we will create a side-scrolling endless flyer featuring an astonishing flying penguin named Pierre. I am going to name this project Pierre Penguin Escapes the Antarctic, but feel free to name your project whatever you like. Follow these steps to create a new project in Xcode: Launch Xcode and navigate to File | New | Project. You will see a screen asking you to select a template for your new project. Select iOS | Application in the left pane, and Game in the right pane. It should look like this: Once you select Game, click Next. The following screen asks us to enter some basic information about our project. Don’t worry; we are almost at the fun bit. Fill in the Product Name field with the name of your game. Let us fill in the Team field. Do you have an active Apple developer account? If not, you can skip over the Team field for now. If you do, your Team is your developer account. Click Add Team and Xcode will open the accounts screen where you can log in. Enter your developer credentials as shown in the following screenshot: Once you're authenticated, you can close the accounts screen. Your developer account should appear in the Team dropdown. You will want to pick a meaningful Organization Name and Organization Identifier when you create your own games for publication. Your Organization Name is the name of your game development studio. For me, that's Joyful Games. By convention, your Organization Identifier should follow a reverse domain name style. I will use io.JoyfulGames since my website is JoyfulGames.io. After you fill out the name fields, be sure to select Swift for the Language, SpriteKit for Game Technology, and Universal for Devices. For now, uncheck Integrate GameplayKit, uncheck Include Unit Tests, uncheck Include UI Tests. We will not use these features in our demo game. Here are my final project settings: Click Next and you will see the final dialog box. Save your new project. Pick a location on your computer and click Next. And we are in! Xcode has pre-populated our project with a basic SpriteKit template. Navigating our project Now that we have created our project, you will see the project navigator on the left-hand side of Xcode. You will use the project navigator to add, remove, and rename files and generally organize your project. You might notice that Xcode has created quite a few files in our new project. We will take it slow; don’t feel that you have to know what each file does yet, but feel free to explore them if you are curious: Exploring the SpriteKit Demo Use the project navigator to open up the file named GameScene.swift. Xcode created GameScene.swift to store the default scene of our new game. What is a scene? SpriteKit uses the concept of scenes to encapsulate each unique area of a game. Think of the scenes in a movie; we will create a scene for the main menu, a scene for the Game Over screen, a scene for each level in our game, and so on. If you are on the main menu of a game and you tap Play, you move from the menu scene to the Level 1 scene. SpriteKit prepends its class names with the letters "SK"; consequently, the scene class is SKScene. 
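To make the scene idea concrete before we look at the template's code, here is a small, hedged sketch of how one scene can present another, for example moving from a menu to the first level when the player taps the screen. The MenuScene and LevelScene names are illustrative and are not the classes used in the book's project:

import UIKit
import SpriteKit

class MenuScene: SKScene {
    // When the player touches the menu, transition to the first level.
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        // Create the next scene at the same size and scale mode as this one.
        let levelScene = LevelScene(size: self.size)
        levelScene.scaleMode = self.scaleMode

        // Present it with a short fade, via the SKView hosting this scene.
        let transition = SKTransition.fade(withDuration: 0.75)
        self.view?.presentScene(levelScene, transition: transition)
    }
}

class LevelScene: SKScene {
    override func didMove(to view: SKView) {
        backgroundColor = .black
    }
}

The presentScene(_:transition:) call on the hosting SKView is the standard SpriteKit mechanism for swapping scenes, and it is what sits behind the menu-to-level flow described above.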
You will see there is already some code in this scene. The SpriteKit project template comes with a very small demo. Let's take a quick look at this demo code and use it to test the iOS simulator. Please do not be concerned with understanding the demo code at this point. Your focus should be on learning the development environment. Look for the run toolbar at the top of the Xcode window. It should look something like the following: Select the iOS device of your choice to simulate using the dropdown on the far right. Which iOS device should you simulate? You are free to use the device of your choice. I will be using an iPhone 6 for the screenshots, so choose iPhone 6 if you want your results to match my images perfectly. Unfortunately, expect your game to play poorly in the simulator. SpriteKit suffers from poor FPS in the iOS simulator. Once our game becomes relatively complex, we will see our FPS drop, even on high-end computers. The simulator will get you through, but it is best if you can plug in a physical device to test. It is time for our first glimpse of SpriteKit in action! Press the gray play arrow in the toolbar (handy keyboard shortcut: command + r). Xcode will build the project and launch the simulator. The simulator starts in a new window, so make sure you bring it to the front. You should see a gray background with white text: Hello, World. Click around on the gray background. You will see colorful, spinning boxes spawning wherever you click: If you have made it this far, congratulations! You have successfully installed and configured everything you need to make your first Swift game. Once you have finished playing with the spinning squares, you can close the simulator down and return to Xcode. Note: You can use the keyboard command command + q to exit the simulator or press the stop button inside Xcode. If you use the stop button, the simulator will remain open and launch your next build faster. Examining the demo code Let's quickly explore the demo code. Do not worry about understanding everything just yet; we will cover each element in depth later. At this point, I am hoping you will acclimatize to the development environment and pick up a few things along the way. If you are stuck, keep going! Make sure you have GameScene.swift open in Xcode. The demo GameScene class implements some functions you will use in your games. Let’s examine these functions. Feel free to read the code inside each function, but I do not expect you to understand the specific code just yet. The game invokes the didMove function whenever it switches to the GameScene. You can think of it a bit like an initialize, or main, function for the scene. The SpriteKit demo uses it to draw the Hello, World text to the screen and set up the spinning square shape that shows up when we tap. There are seven functions involving touch which handle the user's touch input to the iOS device screen. The SpriteKit demo uses these functions to spawn the spinning square wherever we touch the screen. Do not worry about understanding these functions at this time. The update function runs once for every frame drawn to the screen. The SpriteKit demo does not use this function, but we may have reason to implement it later. Cleaning up I hope that you have absorbed some Swift syntax and gained an overview of Swift and SpriteKit. It is time to make room for our own game; let us clear all of that demo code out! We want to keep a little bit of the boilerplate, but we can delete most of what is inside the functions. 
To be clear, I do not expect you to understand this code yet. This is simply a necessary step towards the start of our journey. Please remove lines from your GameScene.swift file until it looks like the following code:

import SpriteKit

class GameScene: SKScene {
    override func didMove(to view: SKView) {

    }
}

Summary

You have already accomplished a lot. You have had your first experience with Swift, installed and configured your development environment, and launched code successfully into the iOS simulator. Great work!

Resources for Article:

Further resources on this subject:

Swift for Open Source Developers [Article]

Swift Power and Performance [Article]

Introducing the Swift Programming Language [Article]