
How-To Tutorials - Game Development


That's One Fancy Hammer!

Packt
13 Jan 2014
8 min read
(For more resources related to this topic, see here.)

Introducing Unity 3D

Unity 3D is a new piece of technology that strives to make life better and easier for game developers. Unity is a game engine or a game authoring tool that enables creative folks like you to build video games. By using Unity, you can build video games more quickly and easily than ever before. In the past, building games required an enormous stack of punch cards, a computer that filled a whole room, and a burnt sacrificial offering to an ancient god named Fortran. Today, instead of spanking nails into boards with your palm, you have Unity. Consider it your hammer—a new piece of technology for your creative tool belt.

Unity takes over the world

We'll be distilling our game development dreams down to small, bite-sized nuggets instead of launching into any sweepingly epic open-world games. The idea here is to focus on something you can actually finish instead of getting bogged down in an impossibly ambitious opus. When you're finished, you can publish these games on the Web, Mac, or PC. The team behind Unity 3D is constantly working on packages and export options for other platforms. At the time of this writing, Unity could additionally create games that can be played on the iPhone, iPod, iPad, Android devices, Xbox Live Arcade, PS3, and Nintendo's WiiWare service. Each of these tools is an add-on functionality to the core Unity package, and comes at an additional cost. As we're focusing on what we can do without breaking the bank, we'll stick to the core Unity 3D program for the remainder of this article.

The key is to start with something you can finish, and then for each new project that you build, to add small pieces of functionality that challenge you and expand your knowledge. Any successful plan for world domination begins by drawing a territorial border in your backyard.

Browser-based 3D – welcome to the future

Unity's primary and most astonishing selling point is that it can deliver a full 3D game experience right inside your web browser. It does this with the Unity Web Player—a free plugin that embeds and runs Unity content on the Web.

Time for action – install the Unity Web Player

Before you dive into the world of Unity games, download the Unity Web Player. Much the same way the Flash player runs Flash-created content, the Unity Web Player is a plugin that runs Unity-created content in your web browser.

1. Go to http://unity3D.com.
2. Click on the install now! button to install the Unity Web Player.
3. Click on Download Now!
4. Follow all of the on-screen prompts until the Web Player has finished installing.

Welcome to Unity 3D!

Now that you've installed the Web Player, you can view the content created with the Unity 3D authoring tool in your browser.

What can I build with Unity?

In order to fully appreciate how fancy this new hammer is, let's take a look at some projects that other people have created with Unity. While these games may be completely out of our reach at the moment, let's find out how game developers have pushed this amazing tool to its very limits.

FusionFall

The first stop on our whirlwind Unity tour is FusionFall—a Massively Multiplayer Online Role-Playing Game (MMORPG). You can find it at fusionfall.com. You may need to register to play, but it's definitely worth the extra effort! FusionFall was commissioned by the Cartoon Network television franchise, and takes place in a re-imagined, anime-style world where popular Cartoon Network characters are all grown up. Darker, more sophisticated versions of the Powerpuff Girls, Dexter, Foster and his imaginary friends, and the kids from Codename: Kids Next Door run around battling a slimy green alien menace.

Completely hammered

FusionFall is a very big and very expensive high-profile game that helped draw a lot of attention to the then-unknown Unity game engine when the game was released. As a tech demo, it's one of the very best showcases of what your new technological hammer can really do! FusionFall has real-time multiplayer networking, chat, quests, combat, inventory, NPCs (non-player characters), basic AI (artificial intelligence), name generation, avatar creation, and costumes. And that's just a highlight of the game's feature set. This game packs a lot of depth.

Should we try to build FusionFall?

At this point, you might be thinking to yourself, "Heck YES! FusionFall is exactly the kind of game I want to create with Unity, and this article is going to show me how!" Unfortunately, a step-by-step guide to creating a game the size and scope of FusionFall would likely require its own flatbed truck to transport, and you'd need a few friends to help you turn each enormous page. It would take you the rest of your life to read, and on your deathbed, you'd finally realize the grave error that you had made in ordering it online in the first place, despite having qualified for free shipping.

Here's why: check out the game credits link on the FusionFall website: http://www.fusionfall.com/game/credits.php. This page lists all of the people involved in bringing the game to life. Cartoon Network enlisted the help of an experienced Korean MMO developer called Grigon Entertainment. There are over 80 names on that credits list! Clearly, only two courses of action are available to you:

- Build a cloning machine and make 79 copies of yourself. Send each of those copies to school to study various disciplines, including marketing, server programming, and 3D animation. Then spend a year building the game with your clones. Keep track of who's who by using a sophisticated armband system.
- Give up now because you'll never make the game of your dreams.

Another option

Before you do something rash and abandon game development for farming, let's take another look at this. FusionFall is very impressive, and it might look a lot like the game that you've always dreamed of making. This article is not about crushing your dreams. It's about dialing down your expectations, putting those dreams in an airtight jar, and taking baby steps. Confucius said: "A journey of a thousand miles begins with a single step." I don't know much about the man's hobbies, but if he was into video games, he might have said something similar about them—creating a game with a thousand awesome features begins by creating a single, less feature-rich game.

So, let's put the FusionFall dream in an airtight jar and come back to it when we're ready. We'll take a look at some smaller Unity 3D game examples and talk about what it took to build them.

Off-Road Velociraptor Safari

No tour of Unity 3D games would be complete without a trip to Blurst.com—the game portal owned and operated by indie game developer Flashbang Studios. In addition to hosting games by other indie game developers, Flashbang has packed Blurst with its own slate of kooky content, including Off-Road Velociraptor Safari. (Note: Flashbang Studios is constantly toying around with ways to distribute and sell its games. At the time of this writing, Off-Road Velociraptor Safari could be played for free only as a Facebook game. If you don't have a Facebook account, try playing another one of the team's creations, like Minotaur China Shop or Time Donkey.)

In Off-Road Velociraptor Safari, you play a dinosaur in a pith helmet and a monocle driving a jeep equipped with a deadly spiked ball on a chain (just like in the archaeology textbooks). Your goal is to spin around in your jeep doing tricks and murdering your fellow dinosaurs (obviously).

For many indie game developers and reviewers, Off-Road Velociraptor Safari was their first introduction to Unity. Some reviewers said that they were stunned that a fully 3D game could play in the browser. Other reviewers were a little bummed that the game was sluggish on slower computers. We'll talk about optimization a little later, but it's not too early to keep performance in mind as you start out.

Fewer features, more promise

If you play Off-Road Velociraptor Safari and some of the other games on the Blurst site, you'll get a better sense of what you can do with Unity without a team of experienced Korean MMO developers. The game has 3D models, physics (code that controls how things move around somewhat realistically), collisions (code that detects when things hit each other), music, and sound effects. Just like FusionFall, the game can be played in the browser with the Unity Web Player plugin. Flashbang Studios also sells downloadable versions of its games, demonstrating that Unity can produce standalone executable game files too.

Maybe we should build Off-Road Velociraptor Safari?

Right then! We can't create FusionFall just yet, but we can surely create a tiny game like Off-Road Velociraptor Safari, right? Well... no. Again, this article isn't about crushing your game development dreams. But the fact remains that Off-Road Velociraptor Safari took five supremely talented and experienced guys eight weeks of full-time work to build, and they've been tweaking and improving it ever since. Even a game like this, which may seem quite small in comparison to a full-blown MMO like FusionFall, is a daunting challenge for a solo developer. Put it in a jar up on the shelf, and let's take a look at something you'll have more success with.


Creating the ice and snow materials

Packt
24 Dec 2013
8 min read
(For more resources related to this topic, see here.)

Getting ready

We will create the ice and the snow using a single material, and mix them using a new technique. Select the iceberg mesh and add a new material to it.

How to do it...

We will now see the steps required to create the ice as well as the snow material.

Creating ice

The following are the steps to create ice:

1. Add a Glossy BSDF and a Glass BSDF and mix them using a Mix Shader node. Let's put Glossy in the first socket and Glass in the second one.
2. As input for the color of both the BSDFs, we will use a color Mix node with Color1 as white and Color2 as RGB 0.600, 1.00, 0.760.
3. As input for the Fac value of the color Mix node, we will use a Voronoi Texture node with the Generated coordinates, Intensity mode, and Scale 100. Invert the color output using an Invert node and plug it into the Fac value of the color Mix node.
4. As the input for Roughness of the Glossy BSDF, we will use the Layer Weight node's Facing output with a Blend value of 0.800. Then we will plug this into a ColorRamp node and set the color stops as shown in the following screenshot. The first color stop is HSV 0.000, 0.000, 0.090 and the second one is HSV 0.000, 0.000, 0.530. Remember to plug the ColorRamp node into the Glossy BSDF roughness socket.
5. Finally, set the Glass BSDF node's IOR to 1.310 and Roughness to 0.080.
6. Now we will create the Fac input for the Mix Shader node of the two BSDFs. Add a Noise Texture node with the Generated coordinates, a Scale of 130, Detail of 1, and Distortion of 0.500. Plug this into a ColorRamp node and set the color stops as shown in the following screenshot.
7. Let's now add a Subsurface Scattering node. Set the mode to Compatible, the Scale to 10.000, and the Radius to 0.070, 0.070, 0.10. As a color input, let's add another color Mix node with Color1 as RGB 0.780, 0.960, 1.000 and Color2 as RGB 0.320, 0.450, 0.480. The Fac input for this color Mix node will be the same as for the color Mix node of the Glass and Glossy BSDFs.
8. Now mix the SSS node with the mix of the other two BSDFs, using an Add Shader node.
9. Now, we will create the normals for the shader. Add three Image Texture nodes. In the first one, let's load the IceScratches.jpg file. We will use the Generated coordinates with a Scale of XYZ 20.000, 20.000, 20.000. Set the projection mode to Box and the Blend to 0.500. In the second Image Texture node, load the ice_snow_DISP.png file, using the UV coordinates. Finally, load the ice_snow_NRM.png file in the third Image Texture node, again using the UV coordinates.
10. Now let's mix the IceScratches.jpg and ice_snow_DISP.png textures, using a color Mix node with the displacement texture in the Color1 socket and the scratches texture in the Color2 socket. Set the Fac value to 0.100.
11. Plug the mix of the textures into the Height socket of a Bump node, and then plug the ice_snow_NRM.png texture into the Color socket of a Normal Map node. Finally, plug this one into the Normal socket of the Bump node. Set the Normal Map node's Strength to 0.050, the Bump node's Strength to 0.500, and the Distance to 1.000. Plug the Bump node into all of the BSDFs we added so far.
12. Frame every node we created and label the frame ICE.

Creating snow

The nodes we will add now will still be within the same material, but outside the ICE frame we just created.

1. Add a Glossy BSDF and a Subsurface Scattering node. Mix them using a Mix Shader node with 20 percent of influence from the Glossy BSDF node. Set both the colors to white. Also, set the SSS Scale to 3.00 and the Radius to 0.400, 0.400, 0.450. Set the Glossy mode to GGX and the Roughness to 0.600.
2. Add a Noise Texture node and set Scale to 2000.000, Detail to 2.000, and Distortion to 0.000. We will use Generated coordinates for this texture. Connect the Fac output of the Noise Texture node to the Height socket of a Bump node, and set the Strength to 0.200 and the Distance to 1.000. Connect the Normal output of the Bump node to the Normal input of the Subsurface Scattering and Glossy BSDF nodes.
3. Now let's mix the Mix Shader node of the Subsurface Scattering and Glossy BSDF nodes with an Emission shader, using another Mix Shader node.
4. Add a new Noise Texture node, this time with Scale as 2500.000, Detail as 2.000, and Distortion as 0.000. Connect the Fac output of the Noise Texture node to the Color input of a Gamma node with a Gamma value of 8.000, and then connect the Color output of the Gamma node to the Fac input of a ColorRamp node. We will set up the color stops as seen in the next screenshot.
5. Connect the ColorRamp node's Color socket to the Fac socket of the previous Mix Shader node. Remember to use the Color output of the ColorRamp node.
6. Frame all these nodes and label the frame SNOW.

Mixing ice and snow

1. Add a Geometry node (Add | Input) and a Normal node (Add | Vectors). Connect the Normal output from the Geometry node to the Normal input of the Normal node.
2. Now connect the Dot output of the Normal node to the first socket of a Math node, set the mode to Multiply, and set the second value to 2.000.
3. Add a Mix Shader node and connect the ICE frame into the first Shader socket and the SNOW frame into the second one. Finally, connect the output of the Multiply node into the Fac value of the Mix Shader node. (A scripted sketch of this mixing setup appears at the end of this excerpt.)

How it works...

Let's see the most interesting points of this material in detail.

For the ice material, we used a Voronoi Texture node to create a pattern for the surface color. Then we mixed the Glass and Glossy BSDF nodes using a Noise Texture node to simulate both the more and the less transparent areas: for example, the ice may be less transparent because parts of it are covered with snow, because of differences in the purity of the water, or because of the thickness of the ice. Then we mixed the two BSDFs with a Subsurface Scattering node to simulate the dispersion of the light inside the ice. Note that we used the ColorRamp node quite often in order to fine-tune the various mixes and inputs.

The snow material is quite similar in its main concept, but is missing the refractive part of the ice. Here we did something else: we used a Noise Texture node, tweaked with a Gamma node and a ColorRamp node to make it strongly contrasted, to mix an Emission shader into the rest of the material. This creates small emission dots over the snow surface that we will use in compositing to create the flakes.

It is really interesting how we mixed the two materials. We wanted the snow to be placed only on the flat surfaces of the iceberg, while we wanted the slopes to be just ice. To obtain this effect, we extracted the normal information of the mesh and used it to understand which parts of the mesh are facing upward. We can imagine the normals working like sunlight falling on the surface of the earth: half of the sphere is in darkness, half is hit by the light, and we can decide from which direction the light hits the surface. Now imagine the same principle applied to the shape of our mesh. In this way we can create a white mask on the areas that are hit by the normal sphere's direction. With the Normal node, we can orient this effect wherever we want. The default position of the sphere is exactly what we needed: the parts of the mesh that face upward are made white, while the rest of the mesh is black. Turning the sphere around makes the direction of the resulting ramp change accordingly. The sphere is perhaps not the most precise way to set this kind of thing (the same goes for the sky settings), but this will probably change in the future with something that allows more precise control. Finally, we used a Multiply node to multiply the value coming from the Normal node and increase the contrast of the mask.

There's more...

The normal method we just saw in this article is not the only way of mixing materials. Some time ago, two new plugins were developed. The first one allows us to obtain the same results we got in this article by creating a weight map or a vertex paint based on the slope of the mesh, while the second creates the same based on the altitude. This opens up many possibilities, not only in terms of material creation, but also for the distribution of particle systems! The plugins can be downloaded from the following links, where we can find some instructions about them:

http://blenderthings.blogspot.nl/2013/09/height-to-vertex-weights-blender-addon.html

See also

In the following link, Andrew Price teaches us how to create a different kind of ice material; for example, material that is more suitable for ice cubes. Surely worth a watch!

http://www.blenderguru.com/videos/how-to-create-realistic-ice/
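As a footnote to the Mixing ice and snow steps above, the same slope-based mask can also be wired up with Blender's Python API instead of by hand. The following is only a rough sketch and is not part of the original recipe: the material name "Iceberg" is an assumption, and ice_shader and snow_shader are placeholders for the output sockets of the ICE and SNOW node groups built in the recipe.

import bpy

mat = bpy.data.materials["Iceberg"]  # assumed material name
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links

geometry = nodes.new("ShaderNodeNewGeometry")  # Geometry (Add | Input)
normal = nodes.new("ShaderNodeNormal")         # Normal (Add | Vectors)
multiply = nodes.new("ShaderNodeMath")         # Math node set to Multiply
multiply.operation = "MULTIPLY"
multiply.inputs[1].default_value = 2.0         # second value set to 2.000
mix = nodes.new("ShaderNodeMixShader")         # ICE in slot 1, SNOW in slot 2
output = nodes.new("ShaderNodeOutputMaterial")

# Geometry normal -> Normal node; the Dot output drives the multiplied mask
links.new(geometry.outputs["Normal"], normal.inputs["Normal"])
links.new(normal.outputs["Dot"], multiply.inputs[0])
links.new(multiply.outputs["Value"], mix.inputs["Fac"])
# links.new(ice_shader, mix.inputs[1])   # placeholder: ICE frame's shader output
# links.new(snow_shader, mix.inputs[2])  # placeholder: SNOW frame's shader output
links.new(mix.outputs["Shader"], output.inputs["Surface"])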


Planning a character's look

Packt
23 Dec 2013
10 min read
(For more resources related to this topic, see here.)

Creating a character is not an easy job; it can be done by creative insight or careful calculation. The main hero is the soul of the game, and it is the avatar a player uses to explore the game world. Players identify themselves with that picture on the screen, empathizing with it, enjoying, grumbling, and taking all the situations to heart. Therefore, the objective is to generate an emotional connection between the character and the player. It is not necessary to provoke only positive feelings; ironically, sometimes the player can hate his avatar. The graphic look of the character can induce a wide spectrum of emotions, and these emotions can be calculated in advance. This is based on the fact that humans usually have nearly the same reactions; in this case, the rules of their behavior are pretty conspicuous.

To illustrate the principles, I created the Scale of attractiveness, made of four main entries. At the very left, there is the Cute mark, in the middle is Brutal, followed by Human likeness, and to the right we see Scary. So, the chart begins with something very adorable and sweet but ends with a scary character. Each position has its own collection of qualities.

Making characters cute

Cuteness is one of the most popular and demanded features that game designers want for their creations. Protagonists of the majority of casual games are cute to some degree; they look and act like sweet and comely creatures. It is simply impossible to ignore them and not fall in love with them. Recall the famous image of Nintendo's Mario: he is 100 percent cute. What is the secret of such popularity?

First, cuteness is not about beauty (it is hard to call a baby alligator truly attractive, but it definitely is cute), which depends on personal taste and preferences, but about some basic patterns and proportions. Here is a citation from Natalie Angier's article The Cute Factor, published in The New York Times:

"Scientists who study the evolution of visual signaling have identified a wide and still expanding assortment of features and behaviors that make something look cute: bright forward-facing eyes set low on a big round face, a pair of big round ears, floppy limbs and a side-to-side, teeter-totter gait, among many others."

All the listed features are general descriptions of one class of creatures on Earth: little babies and cubs. They are small, their heads are noticeably bigger than their bodies, their limbs are short, their eyes are large, and so on. When we see something like that, a special system inside us tends to react in a common way. It says that in front of us is probably a defenseless young creature that needs protection, care, and tenderness; a list of positive feelings is switched on. Figuratively, we are filled with light. By introducing a cute character in a game or other media, the authors simply exploit one of the natural human reactions. This is possible because it is pretty unconditional; the brain only needs some basic patterns, and the factual meaning of an object is totally irrelevant in this case. Thus, we consider something as cute despite the fact that it is not a baby at all. Kittens are super cute, but adult cats can be cute too because they are small, have round and smooth bodies, and big eyes. Another popular example is owls: they have big round heads and large expressive eyes, making them one of the cutest birds on the planet.

Moreover, some mechanical objects are cute as well: the majority of European compact cars from the 1950s are adorable. Remember the BMW Isetta, Fiat 500, original Mini, and VW Beetle? All of them look so nice and sweet that you want to hug them, cover them with a plaid, and give them some milk in a plate, as though they are small mechanical babies of bigger adult cars. The industrial design of that period was inclined toward cuteness (maybe it correlated with the baby boom). Even utility vehicles such as buses and trucks were cute, in addition to household devices such as radio sets and refrigerators. But the most amazing thing was cute weapons. Of course, I'm not talking about the real ones, but the imaginary ray guns that appeared in sci-fi art and in the form of toys were definitely adorable. It is clear why illustrators prefer to use objects from previous decades in their pictures: the final illustrations look warmer.

Therefore, to create a cute character, you need to follow some evident visual rules:

- Short body
- Rounded angles
- Curved contours and chubby figures
- Smooth surfaces without folds and wrinkles
- Big head with a large forehead and a small low jaw
- Small mouth and teeth
- Large eyes (or eyes that stand wide apart) with big pupils
- Wide-open eyes with eyebrows lifted up
- For animals, a big nose
- Short arms, legs, and fingers without visible joints

Making characters scary

Generally, cuteness is necessary for a protagonist to have a corner in the player's heart. An antagonist must give birth to the opposite feelings: loathing and fear. Good enemies are creepy characters. To choose their visual appearance, let's again exploit some ancient mechanics from the human brain. There are a lot of alarm systems that alert us when something looks or behaves suspiciously. Deep-seated fears of various forms are inside most people. A game, of course, should not provoke a real panic reaction, but it can tickle some sensitive zones, playing with associations. Creepiness is the complete opposite of cuteness; it gives not a feeling of warmth, but that of cold.

To find an effective scary factor, it is good to look at common fears. Traditionally, many people try to avoid insects or even have phobias. Attributes of such creatures are interpreted as unpleasant or frightening. The only exceptions are ladybugs (they are round with white dots on their head and large eyes) and butterflies, which are primarily associated with the petals of bright flowers. It is important to note that in most cases, insects are not aggressive and harmful, but they remind us of creatures from our worst nightmares, giving us the creeps (a few examples are mole crickets and earwigs, eek!). Besides them, there are other types of arthropods with a high potential for creepiness (and some of them are really dangerous!): spiders, scorpions, and centipedes. They have adverse visual features such as jointed legs, spikes, exoskeletons, tails, multiple eyes, mandibles, and pincers. The key visual element is the gad, a sharp point associated with cuts, injuries, and so on, which contradicts a cute creature's properties: a cute creature has no acute angles.

Moreover, fishes, reptiles, birds, and mammals look dangerous when they show their teeth, canines, tusks, horns, claws, or sharp beaks. A predator is frightening because it openly threatens with its weaponry: the potential danger is pretty obvious and its current intention is questionable. Now the eyes come into play. If they are fixed on you and are not blinking, it is most likely that the predator is paying attention to you, and that is super scary. Furthermore, the eyes can appear red because of the way they reflect light. Dangerous creatures are fast, so they can move and attack quickly; this means their limbs are pretty long, but their bodies are narrow and streamlined. The following figure shows a scary creature:

Besides aggressive elements, other unpleasant properties can be used to increase the emotional impact; for example, the character can be additionally disgusting if it is covered with strange skin and even mucus. That turns on the dread of biological substances and the fear of germs and parasites. Squeamishness is one of the protection systems of a human, and sometimes it is very unconditional. Such an approach was used in Ridley Scott's science fiction classic Alien: apparently the xenomorph was inspired by various creepy creatures, including arthropods and reptiles. In addition, it had a very disgusting feature: toxic saliva was always dripping from its mouth, causing the viewers to feel terror and revulsion, a doubled negative emotion.

While creating a scary character, remember some basic features it needs to express through its design:

- Long and skinny body
- Nonhumanoid structure
- Many angles
- Many legs, or at least noticeable joints
- Acute elements, such as spikes, claws, and horns
- Small head
- Naked eyes that sit very close together and can be red
- Weaponry openly displayed (biological tools such as pincers, real guns, and grenades)
- Unpleasant skin with warts, folds, wrinkles, and some mucus
- Warning colors that can mean that the character is venomous

Making characters brutal

Brutality, in the middle of the Scale of attractiveness, describes the properties of a character that has to carry out heroic duties, being a soldier or a mercenary. It is obvious that such a person cannot have cute characteristics, otherwise he would look comical. Adorable creatures are associated with something very young, but the heroic character should be an adult. He must demonstrate strength and confidence with a little aggression. So, his look should be a little scary, but only a little, as far as he is not a creepy creature from the end of the scale. Since the brutal hero performs various acts of bravery, he must be fast and agile. So, his anatomical proportions should be close to the hyperbolic athletic ones of heavy action heroes from the 1980s, featuring a well-developed muscular system and military toys. The following figure shows such a heavy action hero:

An apparent illustration of a brutal character is Duke Nukem, the protagonist of the game series of the same name originally developed by Apogee Software. He is brutal and fearless, and definitely not cute. A bunch of good examples is included in the game Gears of War from Epic Games. The members of Delta Squad are canonically brutal and tough guys. This type of character is generally used in action games such as shooters. Figuratively speaking, they are a mix of a human and an armed vehicle, since they carry heavy weapons and armor.

The following are the basic visual properties of such a hero (it is important to note that brutality is gender independent, and although many such characters are men, nobody has forbidden you from creating a strong woman protagonist):

- Figures with no acute angles or rounded corners, but ones that are roughed down
- Strong legs
- Heavy feet to lean on the ground reliably
- Big hands with tenacious fingers to hold weapons and other objects
- Wide chest with ram-like powerful shoulders that are bigger than the legs
- Normal head with a mid-sized forehead and a big low jaw
- Naked big eyes

It is pretty apparent that the extreme positions on the scale are too categorical a benchmark to follow, whereas in most cases, nobody needs super cute or extremely scary characters. Something more universal is a mix of different properties, which is far more expressive. It is interesting that, as a rule, designs of a protagonist are situated between cuteness and brutality, so the character is anatomically adapted to perform complex actions: move fast, climb ladders, fight, among others, while at the same time being pretty attractive.

Summary

The game lets you create a universe and populate it with characters, describing the rules of their behavior. This is a real gift for your imagination! And it is not about realism. The game universe can be based on your own principles and artistic taste. It can be cartoon-like, or gloomy and dark. It all depends on the mood of the story you are going to tell the players. Now you know how to create a cute character and why you should be careful with real anatomical proportions. If the animation does not look scary, you can easily animate the protagonist and other characters.

Resources for Article:

Further resources on this subject:

- Development of iPhone Applications [Article]
- iPhone JavaScript: Installing Frameworks [Article]
- iPhone User Interface: Starting, Stopping, and Multitasking Applications [Article]


Getting Started with GLSL

Packt
20 Dec 2013
15 min read
(For more resources related to this topic, see here.)

In this article, we will cover the following recipes:

- Using a function loader to access the latest OpenGL functionality
- Using GLM for mathematics
- Determining the GLSL and OpenGL version
- Compiling a shader
- Linking a shader program

The OpenGL Shading Language (GLSL) Version 4 brings unprecedented power and flexibility to programmers interested in creating modern, interactive, and graphical programs. It allows us to harness the power of modern Graphics Processing Units (GPUs) in a straightforward way by providing a simple yet powerful language and API. Of course, the first step towards using GLSL is to create a program that utilizes the latest version of the OpenGL API. GLSL programs don't stand on their own; they must be a part of a larger OpenGL program. In this article, we will provide some tips and techniques for getting a basic program up and running. First, let's start with some background.

The OpenGL Shading Language

The GLSL is now a fundamental and integral part of the OpenGL API. Going forward, every program written using the OpenGL API will internally utilize one or several GLSL programs. These "mini-programs" are often referred to as shader programs. A shader program usually consists of several components called shaders. Each shader executes within a different section of the OpenGL pipeline. Each shader runs on the GPU and, as the name implies, (typically) implements the algorithms related to the lighting and shading effects of an image. However, shaders are capable of doing much more than just implementing a shading algorithm. They are also capable of performing animation, tessellation, or even generalized computation. The field of study dubbed GPGPU (General Purpose Computing on Graphics Processing Units) is concerned with the utilization of GPUs (often using specialized APIs such as CUDA or OpenCL) to perform general-purpose computations such as fluid dynamics, molecular dynamics, cryptography, and so on. With compute shaders, introduced in OpenGL 4.3, we can now do GPGPU within OpenGL.

Shader programs are designed for direct execution on the GPU and are executed in parallel. For example, a fragment shader might be executed once for every pixel, with each execution running simultaneously on a separate GPU thread. The number of processors on the graphics card determines how many can be executed at one time. This makes shader programs incredibly efficient, and provides the programmer with a simple API for implementing highly parallel computation. The computing power available in modern graphics cards is impressive. The following table shows the number of shader processors available for several models in the NVIDIA GeForce series cards (source: http://en.wikipedia.org/wiki/Comparison_of_Nvidia_graphics_processing_units):

Model           | Unified shader processors
GeForce GTS 450 | 192
GeForce GTX 480 | 480
GeForce GTX 780 | 2304

Shader programs are intended to replace parts of the OpenGL architecture referred to as the fixed-function pipeline. Prior to OpenGL Version 2.0, the shading algorithm was "hard-coded" into the pipeline and had only limited configurability. This default lighting/shading algorithm was a core part of the fixed-function pipeline. When we, as programmers, wanted to implement more advanced or realistic effects, we used various tricks to force the fixed-function pipeline into being more flexible than it really was. The advent of GLSL helps by providing us with the ability to replace this "hard-coded" functionality with our own programs written in GLSL, thus giving us a great deal of additional flexibility and power. In fact, recent (core) versions of OpenGL not only provide this capability, but they require shader programs as part of every OpenGL program. The old fixed-function pipeline has been deprecated in favor of a new programmable pipeline, a key part of which is the shader program written in GLSL.

Profiles – Core vs. Compatibility

OpenGL Version 3.0 introduced a deprecation model, which allowed for the gradual removal of functions from the OpenGL specification. Functions or features can be marked as deprecated, meaning that they are expected to be removed from a future version of OpenGL. For example, immediate mode rendering using glBegin/glEnd was marked deprecated in version 3.0 and removed in version 3.1. In order to maintain backwards compatibility, the concept of compatibility profiles was introduced with OpenGL 3.2. A programmer who is writing code intended to be used with a particular version of OpenGL (with older features removed) would use the so-called core profile. Someone who also wanted to maintain compatibility with older functionality could use the compatibility profile.

It may be somewhat confusing that there is also the concept of a forward compatible context, which is distinguished slightly from the concept of a core/compatibility profile. A context that is considered forward compatible basically indicates that all deprecated functionality has been removed. In other words, if a context is forward compatible, it only includes functions that are in the core, but not those that were marked as deprecated. Some window APIs provide the ability to select forward compatible status along with the profile.

The steps for selecting a core or compatibility profile are window system API dependent. For example, in GLFW, one can select a forward compatible, 4.3 core profile using the following code:

glfwWindowHint( GLFW_CONTEXT_VERSION_MAJOR, 4 );
glfwWindowHint( GLFW_CONTEXT_VERSION_MINOR, 3 );
glfwWindowHint( GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE );
glfwWindowHint( GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE );
GLFWwindow *window = glfwCreateWindow(640, 480, "Title", NULL, NULL);

All programs in this article are designed to be compatible with a forward compatible OpenGL 4.3 core profile.

Using a function loader to access the latest OpenGL functionality

The OpenGL ABI (application binary interface) is frozen to OpenGL version 1.1 on Windows. Unfortunately for Windows developers, that means that it is not possible to link directly to functions that are provided in newer versions of OpenGL. Instead, one must get access to these functions by acquiring a function pointer at runtime. Getting access to the function pointers is not difficult, but requires somewhat tedious work, and has a tendency to clutter your code. Additionally, Windows typically comes with a standard OpenGL gl.h file that also conforms to OpenGL 1.1. The OpenGL wiki states that Microsoft has no plans to ever update the gl.h and opengl32.lib that come with their compilers.

Thankfully, others have provided libraries that manage all of this for us by transparently providing the needed function pointers, while also exposing the needed functionality in header files. There are several libraries available that provide this kind of support. One of the oldest and most common is GLEW (OpenGL Extension Wrangler). However, there are a few serious issues with GLEW that might make it less desirable, and insufficient for my purposes when writing this article. First, at the time of writing, it doesn't yet support core profiles properly, and for this article, I want to focus only on the latest non-deprecated functionality. Second, it provides one large header file that includes everything from all versions of OpenGL. It might be preferable to have a more streamlined header file that only includes functions that we might use. Finally, GLEW is distributed as a library that needs to be compiled separately and linked into our project. It is often preferable to have a loader that can be included into a project simply by adding the source files and compiling them directly into our executable, avoiding the need to support another link-time dependency.

In this recipe, we'll use the OpenGL Loader Generator (GLLoadGen), available from https://bitbucket.org/alfonse/glloadgen/wiki/Home. This very flexible and efficient library solves all three of the issues described in the previous paragraph. It supports core profiles, it can generate a header that includes only the needed functionality, and it generates just a couple of files (a source file and a header) that we can add directly into our project.

Getting ready

To use GLLoadGen, you'll need Lua. Lua is a lightweight embeddable scripting language that is available for nearly all platforms. Binaries are available at http://luabinaries.sourceforge.net, and a fully packaged install for Windows (LuaForWindows) is available at https://code.google.com/p/luaforwindows.

Download the GLLoadGen distribution from https://bitbucket.org/alfonse/glloadgen/downloads. The distribution is compressed using 7zip, which is not widely installed, so you may need to install a 7zip utility, available at http://7-zip.org/. Extract the distribution to a convenient location on your hard drive. Since GLLoadGen is written in Lua, there's nothing to compile; once the distribution is uncompressed, you're ready to go.

How to do it...

The first step is to generate the header and source files for the OpenGL version and profile of choice. For this example, we'll generate files for an OpenGL 4.3 core profile. We can then copy the files into our project and compile them directly alongside our code:

1. To generate the header and source files, navigate to the GLLoadGen distribution directory, and run GLLoadGen with the following arguments:

lua LoadGen.lua -style=pointer_c -spec=gl -version=4.3 -profile=core core_4_3

The previous step should generate two files: gl_core_4_3.c and gl_core_4_3.h.

2. Move these files into your project and include gl_core_4_3.c in your build. Within your program code, you can include the gl_core_4_3.h file whenever you need access to the OpenGL functions. However, in order to initialize the function pointers, you need to make sure to call a function to do so. The needed function is called ogl_LoadFunctions. Somewhere just after the GL context is created (typically in an initialization function), and before any OpenGL functions are called, use the following code:

int loaded = ogl_LoadFunctions();
if(loaded == ogl_LOAD_FAILED) {
  //Destroy the context and abort
  return;
}
int num_failed = loaded - ogl_LOAD_SUCCEEDED;
printf("Number of functions that failed to load: %i.\n", num_failed);

That's all there is to it!

How it works...

The lua command in step 1 generates a pair of files, that is, a header and a source file. The header provides prototypes for all of the selected OpenGL functions and redefines them as function pointers, and defines all of the OpenGL constants as well. The source file provides initialization code for the function pointers as well as some other utility functions. We can include the gl_core_4_3.h header file wherever we need prototypes for OpenGL functions, so all function entry points are available at compile time. At run time, the ogl_LoadFunctions() function will initialize all available function pointers. If some functions fail to load, the number of failures can be determined by the subtraction operation shown in step 2. If a function is not available in the selected OpenGL version, the code may not compile, because only function prototypes for the selected OpenGL version and profile are available in the header (depending on how it was generated).

The command-line arguments available to GLLoadGen are fully documented here: https://bitbucket.org/alfonse/glloadgen/wiki/Command_Line_Options. The previous example shows the most commonly used setup, but there's a good amount of flexibility built into this tool. Now that we have generated this source/header pair, we no longer have any dependency on GLLoadGen and our program can be compiled without it. This is a significant advantage over tools such as GLEW.

There's more...

GLLoadGen includes a few additional features that are quite useful. We can generate more C++ friendly code, manage extensions, and generate files that work without the need to call an initialization function.

Generating a C++ loader

GLLoadGen supports generation of C++ header/source files as well. This can be selected via the -style parameter. For example, to generate C++ files, use -style=pointer_cpp as in the following example:

lua LoadGen.lua -style=pointer_cpp -spec=gl -version=4.3 -profile=core core_4_3

This will generate gl_core_4_3.cpp and gl_core_4_3.hpp. This places all OpenGL functions and constants within the gl:: namespace, and removes their gl (or GL) prefix. For example, to call the function glBufferData, you might use the following syntax:

gl::BufferData(gl::ARRAY_BUFFER, size, data, gl::STATIC_DRAW);

Loading the function pointers is also slightly different. The return value is an object rather than just a simple integer, and LoadFunctions is in the gl::sys namespace:

gl::exts::LoadTest didLoad = gl::sys::LoadFunctions();
if(!didLoad) {
  // Clean up (destroy the context) and abort.
  return;
}
printf("Number of functions that failed to load: %i.\n", didLoad.GetNumMissing());

No-load styles

GLLoadGen supports the automatic initialization of function pointers. This can be selected using the noload_c or noload_cpp options for the style parameter. With these styles, there is no need to call the initialization function ogl_LoadFunctions. The pointers are loaded automatically the first time a function is called. This can be convenient, but there's very little overhead to loading them all at initialization.

Using Extensions

GLLoadGen does not automatically support extensions. Instead, you need to ask for them with command-line parameters. For example, to request the ARB_texture_view and ARB_vertex_attrib_binding extensions, you might use the following command:

lua LoadGen.lua -style=pointer_c -spec=gl -version=3.3 -profile=core core_3_3 -exts ARB_texture_view ARB_vertex_attrib_binding

The -exts parameter is a space-separated list of extensions. GLLoadGen also provides the ability to load a list of extensions from a file (via the -extfile parameter) and provides some common extension files on the website. You can also use GLLoadGen to check for the existence of an extension at run time. For details, see the GLLoadGen wiki.

See also

- GLEW, an older and more common loader and extension manager, available from glew.sourceforge.net

Using GLM for mathematics

Mathematics is core to all of computer graphics. In earlier versions, OpenGL provided support for managing coordinate transformations and projections using the standard matrix stacks (GL_MODELVIEW and GL_PROJECTION). In recent versions of core OpenGL, however, all of the functionality supporting the matrix stacks has been removed. Therefore, it is up to us to provide our own support for the usual transformation and projection matrices, and then to pass them into our shaders. Of course, we could write our own matrix and vector classes to manage this, but some might prefer to use a ready-made, robust library.

One such library is GLM (OpenGL Mathematics) written by Christophe Riccio. Its design is based on the GLSL specification, so the syntax is very similar to the mathematical support in GLSL. For experienced GLSL programmers, this makes GLM very easy to use and familiar. Additionally, it provides extensions that include functionality similar to some of the much-missed OpenGL functions such as glOrtho, glRotate, or gluLookAt.

Getting ready

Since GLM is a header-only library, installation is simple. Download the latest GLM distribution from http://glm.g-truc.net. Then, unzip the archive file, and copy the glm directory contained inside to anywhere in your compiler's include path.

How to do it...

To use the GLM libraries, it is simply a matter of including the core header file, and headers for any extensions. For this example, we'll include the matrix transform extension as follows:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

Then the GLM classes are available in the glm namespace. The following is an example of how you might go about making use of some of them:

glm::vec4 position = glm::vec4( 1.0f, 0.0f, 0.0f, 1.0f );
glm::mat4 view = glm::lookAt( glm::vec3(0.0,0.0,5.0), glm::vec3(0.0,0.0,0.0), glm::vec3(0.0,1.0,0.0) );
glm::mat4 model(1.0f); // The identity matrix
model = glm::rotate( model, 90.0f, glm::vec3(0.0f,1.0f,0.0f) );
glm::mat4 mv = view * model;
glm::vec4 transformed = mv * position;

How it works...

The GLM library is a header-only library. All of the implementation is included within the header files. It doesn't require separate compilation and you don't need to link your program to it. Just placing the header files in your include path is all that's required!

The previous example first creates a vec4 (four-coordinate vector) representing a position. Then it creates a 4 x 4 view matrix by using the glm::lookAt function. This works in a similar fashion to the old gluLookAt function. Here, we set the camera's location at (0, 0, 5), looking towards the origin, with the "up" direction in the direction of the y-axis. We then go on to create the model matrix by first storing the identity matrix in the variable model (via the single-argument constructor), and multiplying by a rotation matrix using the glm::rotate function. The multiplication here is implicitly done by the glm::rotate function. It multiplies its first parameter by the rotation matrix (on the right) that is generated by the function. The second parameter is the angle of rotation (in degrees), and the third parameter is the axis of rotation. Since, before this statement, model is the identity matrix, the net result is that model becomes a rotation matrix of 90 degrees around the y-axis. Finally, we create our modelview matrix (mv) by multiplying the view and model variables, and then using the combined matrix to transform the position. Note that the multiplication operator has been overloaded to behave in the expected way.

There's more...

It is not recommended to import all of the GLM namespace by using the following command:

using namespace glm;

This will most likely cause a number of namespace clashes. Instead, it is preferable to import symbols one at a time, as needed. For example:

#include <glm/glm.hpp>
using glm::vec3;
using glm::mat4;

Using the GLM types as input to OpenGL

GLM supports directly passing a GLM type to OpenGL using one of the OpenGL vector functions (with the suffix v). For example, to pass a mat4 named proj to OpenGL, we can use the following code:

glm::mat4 proj = glm::perspective( viewAngle, aspect, nearDist, farDist );
glUniformMatrix4fv(location, 1, GL_FALSE, &proj[0][0]);

See also

- The Qt SDK includes many classes for vector/matrix mathematics, and is another good option if you're already using Qt
- The GLM website, http://glm.g-truc.net, has additional documentation and examples


What is Lumion?

Packt
20 Dec 2013
9 min read
(For more resources related to this topic, see here.)

Why use Lumion?

The short answer is that Lumion is easy to use and the final product is of a good quality. The long answer is that every construction project needs technical drawings and documents. Although this technical information is fine for a construction crew, usually the client has no idea what a CAD plan means. They may have an idea of where the kitchen or the living room will be, but translating that 2D information into 3D is not always easy in the client's mind. This can be an issue if we need to give a presentation or if we are trying to sell something that is not built yet. And truth be told, an image sells more than words. That's where Lumion comes in.

Lumion is the fastest way to render high-quality still pictures and videos, and it makes it very easy to import our 3D models from any 3D modeling software, such as SketchUp, AutoCAD, Revit, ArchiCAD, and 3ds Max, and create a scene in minutes. So, Lumion 3D is a distinct architectural visualization software not only because it is faster to render, but also because it is very user friendly and intuitive. Another reason to use Lumion for architectural visualizations is that we can get a great idea of how our project will look in natural surroundings at any time of the day or season, and this in just a few minutes. Now, if you are an architect, you doubtless want to show off your project's characteristics in the best way possible. Lumion can help you achieve this in hours instead of the inevitable days and weeks of rendering time. The following screenshot is an example of what you can get with Lumion in just a few minutes:

However, this tool is not exclusively meant for architects. For example, if you are an interior designer, you may want to present how the textures, furniture, and colors would look at different angles, places, moods, and light conditions. Lumion provides nice interior visualization with furniture, good lighting, and realistic textures. In conclusion, Lumion is a great tool that improves the process of creating a building, an art piece, or an architectural project. The time we need to get those results is less in comparison to other solutions, such as using 3ds Max and V-Ray.

What can we get from Lumion?

Asking what we can get from Lumion is a double-edged question. Looking at the previous screenshots, we can get an idea of the final result. The final quality depends only on your creativity and time. I have seen amazing videos created with Lumion, but you may need a touch of other software to create eye-catching compositions. Now, the package that we get with Lumion is another thing. You already know that we can easily create beautiful still images and videos, but we need to bear in mind that Lumion is not a tool designed to create photo-realistic renders. Nevertheless, we can get so much from this application that you will forget photo-realistic renders. Lumion is a powerful and advanced 3D engine that allows us to work with heavier models, and we can make our scene come alive with the click-and-drag technique. To do this, Lumion comes with a massive library where we can find:

- 2409 models of trees, transports, exterior and interior models, and people
- 103 ambient, people, and machine sounds
- 518 types of materials and textures
- 28 types of landscape

In addition to this extensive collection, there are more features that we can add: we can include realistic water in our scene (oceans, rivers, pools, waterfalls, and fountains), we can sculpt the landscape to adapt it to our needs, we can add rain, snow, fog, and wind, we can animate objects, and we can add camera effects. You just need a 3D model; import it and start working because, as you can see, Lumion is well equipped with almost everything we need to create architectural visualizations.

Lumion's 3D interface

Now that we know what we can do with Lumion and the content available, we will take some time to explore Lumion and get our hands dirty. In my opinion and experience, it is much easier to learn something if we apply what we are learning at the same time. So, in the next section we are going to explore the Lumion interface with its menus and different settings. To do that, we will use a small tutorial as a quick start. By doing this, we will explore Lumion and at the same time see how easy it is to produce great results. We will see that Lumion is easier to learn and more accessible than other software. So go ahead and fire up Lumion, and let's have a quick tour before we start working with it. I am going to explain what each tab does, to help you see how you can do simple tasks, such as saving and loading a scene, changing the settings, and creating a new scene. Let's start with the first tab that Lumion shows us.

A look into the New tab

On startup, Lumion goes straight to the New tab. The New tab, as the name indicates, is a good place to start when you want to create a new scene. We can create a new scene based on a few presets or just create an empty scene. We can choose from Night, Sunset, Sunny Day, Flatlands, Island, Lake, Desert, Cold Climate, and an Empty scene. I see these presets as a quick way to save some time, because in the end, everything we get from these presets we can create ourselves in a few minutes. So, there is nothing special about them. When you start Lumion, this will be the first thing you will see:

The nine presets you can find on the New tab

Now, we will finally see the Objects menu, and the following is what this menu looks like:

The Objects menu

Here is where the fun starts. We have at our disposal eight categories of objects and more, such as Nature, Transport, Sound, Effects, Indoor, People and Animals, Outdoor, and Lights and special objects. Each of these menus has subcategories. If you are working with the Lumion PRO version, you can choose from more than 2000 models. Even if you don't have this version, cheer up! You can still import your own models and textures.

It is really simple to add a model. First, we need to select the category we want to use. So, in this case, click on the Nature button. Now that we have this category selected, click on the thumbnail above the Place button and a new window with the Nature library will appear. We don't just have trees; we have grass, plants, flowers, rocks, and clusters. Now let me show you one trick. Click on the Grass tab and select Rough_Grass1_RT. Now that we are back in the Build mode, press the Ctrl key and click on the ground. We are randomly adding 10 copies of the object, which in this case is really handy. So, after playing a little with Lumion, we can get something like the following screenshot:

Our scene after adding some trees, grass, and animals

Just think, it took me about 30 minutes to create something like this. Now imagine what you can do. Let's save our scene and turn our attention to the right-hand side of the Lumion 3D interface, where we can find the menus shown in the following screenshot:

The Build mode button

Starting from the top of the preceding screenshot, we can see the blue rectangle with a question mark. If we put our mouse cursor over this rectangle, we can see a quick help guideline for our interface. The next button informs us that we are in the Build mode and if, for example, you are working in the Photo or Video mode, this button lets you go back to the scene.

Lumion materials

Lumion helps us with this important step by offering many types of materials: 518 materials that are ready to use. You may need to do some adjustments, but the major hard work has already been done for you. The materials that we can assign to our model are as follows:

- Wood: 45 materials
- Wood floor: 67 materials
- Brick: 32 materials
- Tiles: 99 materials
- Ground: 39 materials
- Concrete: 43 materials
- Carpet: 20 materials
- Misc: 109 materials
- Asphalt: 12 materials
- Metal: 47 materials

This is one of the reasons why it is so easy to create still images and videos with Lumion. Everything is set up for you, including parameters, details, and textures. However, we may need to do some minor adjustments, and for that, it is important to understand or at least have a basic notion of what each setting does. So, let's have a quick look at how we can configure materials in Lumion.

The Landscape material

The best way to explain this material is by showing you an example. So, let's say that along with the model, you also created a terrain like the one you can see in the following screenshot:

The house along with a terrain model

Import the terrain if needed, and add a new material to this terrain. Go to the Custom menu and click on the Landscape material. The Landscape material allows you to seamlessly blend or merge parts of the model with the landscape. Make sure that the terrain intersects with the ground so that it can be perfectly blended. The following screenshot shows this Landscape material applied to my 3D terrain:

The terrain model merged with the landscape

Adding this material not only allows you to use this cool feature, but as you can see in the picture, we can also start painting soil types of the landscape onto the imported terrain. The other two materials that I want to introduce you to are the Standard and Water materials. The Standard material is a simple material without any textures or settings, and we can use this material to start something from scratch. The Water material can have several applications, but perhaps the most common one is, for example, pools.

Summary

This article helped you get started with Lumion, and gave you a taste of how easily and quickly you can get great images and videos. In particular, you have learned the basic steps to save and load scenes, import models, add materials, change the terrain and weather, and create photos and videos. You also learned how to use and configure the prebuilt materials in Lumion and found out how to use the Landscape material to create a terrain.

Resources for Article:

Further resources on this subject:

- The Spotfire Architecture Overview [Article]
- augmentedTi: The application architecture [Article]
- The architecture of JavaScriptMVC [Article]

Going Isometric

Packt
19 Dec 2013
12 min read
(For more resources related to this topic, see here.)

Cartesian to isometric equations

A very important thing to understand here is that the level data still remains the same 2D array, and we will be altering only the rendering process. Later on, we will need to update the level data to accommodate large tiles, which will contain items that are bigger than the current tile size. Our two-dimensional top-down coordinates for a tile can be called Cartesian coordinates. The relationship between Cartesian and isometric coordinates is shown in the following code:

//Cartesian to isometric:
x_Iso = x_Cart - y_Cart;
y_Iso = ( x_Cart + y_Cart ) / 2;
//Isometric to Cartesian:
x_Cart = ( 2 * y_Iso + x_Iso ) / 2;
y_Cart = ( 2 * y_Iso - x_Iso ) / 2;

Now, that is very simple, isn't it? We will use an IsoHelper class for this conversion, where we can pass in a point and get back the converted point.

An isometric view via a matrix transformation

Although the equations are simple and straightforward, the art needed for an isometric tile is a bit complicated. The artist needs to create the rhombus-shaped tile art with pixel precision, and it should mostly be tileable in all four directions. An alternative approach is to use the square tiles themselves and skew them dynamically in code. Let us try to create the isometric view for the level data with the same tiles using this approach. The transformation matrix for the isometric transformation is as follows, which is essentially a rotation of 45 degrees and a scaling by half along the Y axis:

var m:Matrix = new Matrix(1,0.5,-1,0.5,0,0);

The code for the IsometricLevel class is shared as follows. You should initialize this class from the Starling document class using new Starling(IsometricLevel, stage). The following approach just applies the isometric transformation matrix to the RenderTexture image. Minor changes in the init function are shown in the following code:

var m:Matrix = new Matrix(1,0.5,-1,0.5,0,0);
for(var i:int=0;i<levelData.length;i++){
  for(var j:int=0;j<levelData[0].length;j++){
    img=new Image(texAtlas.getTexture(paddedName(levelData[i][j])));
    img.x=j*tileWidth+borderX;
    img.y=i*tileWidth+borderY;
    rTex.draw(img);
  }
}
m.translate( 300, 0 );
rTexImage.transformationMatrix = m;

We apply the transformation matrix to the RenderTexture image and translate it by 300 pixels so that the whole of it is visible. Skewing would otherwise push part of the image out of the visible area of the screen. We will get the following result: An alternate approach is to apply the transformation matrix to each individual tile image, find the corresponding isometric coordinates, and move and place individual tiles accordingly, as shown in the following code:

var m:Matrix = new Matrix(1,0.5,-1,0.5,0,0);
var pt:Point=new Point();
for(var i:int=0;i<levelData.length;i++){
  for(var j:int=0;j<levelData[0].length;j++){
    img = new Image(texAtlas.getTexture(paddedName(levelData[i][j])));
    img.transformationMatrix = m;
    pt.x=j*tileWidth+borderX;
    pt.y=i*tileWidth+borderY;
    pt=IsoHelper.cartToIso(pt);
    img.x=pt.x+300;
    img.y=pt.y;
    rTex.draw(img);
  }
}

Here, we use the convenient cartToIso(pt) conversion function of our IsoHelper class to find the isometric coordinates corresponding to our Cartesian coordinates. We are offsetting the drawing by 300 pixels to handle the skewing offset for the image. This approach will work in some cases, but not all top-down tiles can simply be skewed and made into an isometric tile.
For example, consider a tree in the top-down view; it will simply look like a skewed tree graphic after we apply the isometric transformation. So, the right approach is to create isometric tile art specifically and use the isometric equations to place it correctly. Let us use the isometric tiles provided in the assets pack to create a sample level.

Implementing the isometric view via isometric art

Please refer to the SampleIsometricDemo source folder, which implements a sample level of our game using isometric art and the previously mentioned equations. There are some differences in the approach that I will be explaining in the following sections. Most of it has to do with the change in level data, altering the registration point of larger tiles, and handling depth. We also need to offset the image drawing so that it fits in the screen area. We use a variable called screenOffset for this purpose. The render code is as follows:

var pt:Point=new Point();
for(var i:int=0;i<groundArray.length;i++){
  for(var j:int=0;j<groundArray[0].length;j++){
    //draw the ground
    img=new Image(texAtlas.getTexture(String(groundArray[i][j]).split(".")[0]));
    pt.x=j*tileWidth;
    pt.y=i*tileWidth;
    pt=IsoHelper.cartToIso(pt);
    img.x=pt.x+screenOffset.x;
    img.y=pt.y+screenOffset.y;
    rTex.draw(img);
    //draw overlay
    if(overlayArray[i][j]!="*"){
      img=new Image(texAtlas.getTexture(String(overlayArray[i][j]).split(".")[0]));
      img.x=pt.x+screenOffset.x;
      img.y=pt.y+screenOffset.y;
      if(regPoints[overlayArray[i][j]]!=null){
        img.x+=regPoints[overlayArray[i][j]].x;
        img.y-=regPoints[overlayArray[i][j]].y;
      }
      rTex.draw(img);
    }
  }
}

The result is shown in the following screenshot:

Level data structure

The level data for our isometric level is not just a simple 2D array with index numbers any more, but a combination of multiple data structures. We have a 2D array for the ground tiles, another 2D array for overlay tiles, and a dictionary to store the altered registration points of the overlay tiles. Ground tiles are those tiles which exactly fit the isometric tile dimensions, which in this case are 80 x 40, and they make up the bottom-most layer of the isometric level. These tiles won't take part in any depth sorting as they are always rendered below all other items that populate the level. Overlay tiles are items which may not fit into the isometric tile dimensions and have height, for instance, buildings, trees, bushes, rocks, and so on. Some of these could be made to fit into the tile dimensions, but keeping them separate gives us various advantages: we are free to place an overlay tile over any ground tile, which adds flexibility; we would need a lot of tiles if we tried to combine overlay tiles and ground tiles for all permutations and combinations; effects such as tinting can be applied independently to the overlay tiles; depth handling becomes much easier; and overlay tiles which are smaller than the tile size reduce the game size.

Altering registration points

Starling considers all images as rectangular blocks with their registration point at the top-left corner. The registration point is the point which can be considered as the (0,0) of that image. Traditional Flash gave us the capability to alter registration points by embedding images inside a Sprite or MovieClip. We can still do the same, but it would require the unnecessary creation of a lot of Sprites. Alternatively, we can use the pivotX and pivotY properties of Starling objects for the same result.
In our isometric level, we will need to precisely place overlay tiles inside the isometric grid space. An overlay tile does not have any standard size as it can be any item: a tree, a building, a character, and so on. So, placing them correctly is a tricky thing and very specific to the tile concerned. This leads us to have independent registration points for each overlay tile. We use a dictionary structure to save these values and use them as offsets while placing overlay tiles. For example, we need to place a bush image, nonwalk0009.png, exactly at the middle of an isometric grid cell, which means moving it 12 pixels to the left and 19 pixels to the top for proper alignment. We save (12,19) as a new point inside our dictionary for the ID nonwalk0009.png, as follows:

regPoints["nonwalk0009.png"]=new Point(12,19);

Finding a tile's precise placement point needs to involve visual interaction; hence, we will build a level editor, which makes this easier.

Depth sorting

An isometric view needs us to handle the depth of items manually. For ground tiles, there is no depth issue as they always form the lowest layer over which all the overlay items and characters are drawn. But overlay tiles and characters need to be drawn at specific depths for the scene to look right. By depth, I mean the order in which the images are drawn. An image drawn later will overlap the one drawn earlier, thereby making it appear to be in front of the latter. For a level which does not change and has no moving items, we need to find the depth only once for the initial render. But for a level with moving characters or vehicles, we need to determine the depth every frame in the game loop before rendering. The current sample level does not change over time, so we can simply render the level by looping through the array. Any overlay item placed at a higher I or J value will be rendered later, and hence will be shown in front, where I and J are array indices. Thus, items placed at higher indices appear closer to the camera; that is, for the same I, a higher J is closer to the camera and vice versa. When we have a moving item, we need to find the corresponding array position it occupies based on its current screen position. By using these newly found array indices, we can compare with the overlay tiles' indices and decide on the drawing sequence. The code to find array indices from the screen position is as follows:

//capture screen position
var screenPos:Point=new Point(hero.x,hero.y);
//convert to cartesian coordinates
var cartPos:Point=IsoHelper.isoToCart(screenPos);
//find tile indices from cartesian values
var tilePos:Point=IsoHelper.getTileIndices(cartPos,tileWidth);

Understanding isometric movement

Isometric movement is very straightforward to implement. All we need to do is move the item in top-down Cartesian coordinates and draw it on the screen after converting into isometric coordinates. For example, if our character is at a point heroCart in the Cartesian system, then the following code moves him/her to the right:

heroCart.x+=heroSpeed;
//convert to isometric coordinates
heroIso=IsoHelper.cartToIso(heroCart);
heroImage.x=heroIso.x;
heroImage.y=heroIso.y;
rTex.draw(heroImage);

Detecting isometric collision

Collision detection for any tile-based game is done based on tiles. When designing, we will make sure that certain tiles are walkable while others are nonwalkable, which means that the characters can move over some tiles but not over others.
So, when we calculate the movement of any character, we first make sure that the character won't end up on a nonwalkable tile. Thus, after each movement, we check whether the resulting position falls in a nonwalkable tile by finding the array indices as mentioned previously. If it does, we ignore the movement; otherwise, we proceed with the movement and update the on-screen character position.

heroCart.x+=heroSpeed;
//find new tile point
var tilePos:Point=IsoHelper.getTileIndices(heroCart,tileWidth);
//this checks if the new tile position is occupied, else proceeds
if(checkWalkable(tilePos)){
  //convert to isometric coordinates
  heroIso=IsoHelper.cartToIso(heroCart);
  heroImage.x=heroIso.x;
  heroImage.y=heroIso.y;
  rTex.draw(heroImage);
}

You may be wondering whether the hero character needs special consideration to be drawn at the right depth, but because of the way we draw things, it gets handled automatically. We do not allow the hero to move onto a nonwalkable tile, that is, bushes, trees, and so on. So, any tile remains either walkable or nonwalkable. The character gets drawn on top of a walkable tile, which does not contain any overlay items, and hence it will occupy the right depth. In this method, a full tile has to be made either walkable or nonwalkable, but this may not be the case for all games. We may need tiles which block entry or exit in a particular direction, such as a fence along one border of a tile. In such cases, the tile is still walkable, but valid movement is also checked by tracking the direction in which the character is moving. For our game, the first method will be used, along with four-way freedom of movement. In an isometric view, movement can be either in four directions or eight directions, which is called a four-way movement or an eight-way movement respectively. A four-way movement is when we move along the X or Y axis alone in the Cartesian space. An eight-way movement happens when, in addition to the four-way movement, we also move the item diagonally. The logic still remains the same.

Summary

In this article, we learned about the isometric projection and the equations that help us to implement it based on the simpler Cartesian system. We implemented a sample isometric level using isometric art, and also learned about matrix-based fake isometric rendering. We analyzed the IsoHelper class, which facilitates easy conversion between Cartesian and isometric coordinates and also helps in finding array indices. We learned why altering the registration points is essential for perfectly placing the overlay tiles, and we found that our level data needs to track these registration points as well. We also learned how depth sorting, collision detection, and isometric movement are done based on our tile-based approach.

Resources for Article: Further resources on this subject: Introduction to Game Development Using Unity 3D [Article] Flash Game Development: Making of Astro-PANIC! [Article] Collision Detection and Physics in Panda3D Game Development [Article]

Component-based approach of Unity

Packt
18 Dec 2013
4 min read
(For more resources related to this topic, see here.) First of all, you have a project, which is essentially a folder that contains all of the files and information about your game. Some of the files are called scenes (think of them as levels). A scene contains a number of game objects that you have added to it. The contents of your scenes are determined by you, and you can have as many of them as you want. You can also make your game switch between different scenes, thus making different sets of game objects active. On a smaller scale, you have game objects and components. A game object by itself is simply an invisible container that does not do anything. Without adding appropriate components to it, it cannot, for instance, appear in the scene, receive input from the player, or move and interact with other objects. Using components, you can easily assemble powerful game objects while reusing several small parts, each responsible for a simple task or behavior, such as rendering the game object, handling the input, taking damage, or playing an audio effect, making your game much simpler to develop and manage. Unity relies heavily on this approach, so the better you grasp it, the faster you will get good at it. The only component that each and every game object in Unity has attached to it by default is Transform. It lets you define the game object's position, rotation, and scale. Normally, you can attach, detach, and destroy components in any given game object at will, but you cannot remove Transform. Each component has a number of properties that you can access and change: these can be integer or floating point numbers, strings of text, textures, scripts, or references to game objects or other components. They are used to change the way a certain component behaves, and to influence its appearance or interaction. Some examples are the position, rotation, and scale properties of the Transform component. The following screenshot shows the Wall game object with the Transform, Mesh Filter, Box Collider, Mesh Renderer, and Script components attached to it. The properties of Transform are displayed. In order to reveal or hide a component's properties, you need to left-click on its name or on the small arrow to the left of its icon. Unity has a number of predefined game objects that already have components attached to them, such as cameras, lights, and primitives. You can access them by choosing GameObject | Create from the main menu. Alternatively, you can create empty game objects by pressing command + Shift + N (Ctrl + Shift + N in Windows) and attach components to them using the Components submenu. The following figure shows the project structure that we have discussed. Note that there can be any number of scenes within a single project, any number of game objects within a single scene, any number of components attached to a single game object, and finally, any number of properties within a single component. One final thing that you need to know about components right now is that you can copy them by right-clicking on the name of the component in the Inspector panel and selecting Copy Component from the contextual menu shown in the following screenshot. You can also reset the properties of components to their default values, remove components, and move them up or down for your convenience.
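Before we summarize, the following is a minimal, hypothetical C# sketch of how a script, which is itself just another component, can work with the components described above at runtime. The class name SpinExample and the spinSpeed property are invented for this illustration and are not part of any existing project; the Unity calls shown (GetComponent, AddComponent, and the Transform properties) are standard API.

using UnityEngine;

// A hypothetical component: attach it to a game object to see how scripts
// read and modify other components at runtime.
public class SpinExample : MonoBehaviour
{
    // A property of this component, editable in the Inspector like any other.
    public float spinSpeed = 90.0f;

    void Start()
    {
        // Every game object has a Transform; we can read and change its properties.
        transform.position = new Vector3(0.0f, 1.0f, 0.0f);

        // Components can be attached from code as well as from the editor.
        if (GetComponent<BoxCollider>() == null)
        {
            gameObject.AddComponent<BoxCollider>();
        }
    }

    void Update()
    {
        // Drive the Transform component every frame: rotate around the Y axis.
        transform.Rotate(0.0f, spinSpeed * Time.deltaTime, 0.0f);
    }
}

Attaching this script to a game object is itself just one more example of adding a component, which is exactly the workflow described above.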
Summary

This article has covered the basic concept of the component-based approach of Unity, and the figures and screenshots demonstrate various aspects of it.

Resources for Article: Further resources on this subject: Mobile Game Design [Article] Unity Game Development: Welcome to the 3D world [Article] Interface Designing for Games in iOS [Article]

Saying Hello to Unity and Android

Packt
17 Dec 2013
21 min read
Understanding what makes Unity great Perhaps the greatest feature of Unity is how open-ended it is. Nearly all game engines currently on the market are limited in what one can build. It makes perfect sense but it can limit the capabilities of a team. The average game engine has been highly optimized for creating a specific game type. This is great if all you plan on making is the same game again and again. When one is struck with inspiration for the next great hit, only to find that the game engine can't handle it and everyone has to retrain in a new engine or double the development time to make it capable, it can be quite frustrating. Unity does not suffer this problem. The developers of Unity have worked very hard to optimize every aspect of the engine, without limiting what types of games can be made. Everything ranging from simple 2D platformers to massive online role-playing games is possible in Unity. A development team that just finished an ultra-realistic first-person shooter can turn right around and make 2D fighting games without having to learn an entirely new system. Being so open ended does, however, bring a drawback. There are no default tools optimized for building that perfect game. To combat this, Unity grants the ability to create any tool one can imagine, using the same scripting that creates the game. On top of that, there is a strong community of users that have supplied a wide selection of tools and pieces, both free and paid, to be quickly plugged in and used. This results in a large selection of available content, ready to jump-start you on your way to the next great game. When many prospective users look at Unity, they think that because it is so cheap, it is not as good as an expensive AAA game engine. This is simply not true. Throwing more money at the game engine is not going to make a game any better. Unity supports all of the fancy shaders, normal maps, and particle effects you could want. The best part is, nearly all of the fancy features you could want are included in the free version of Unity and 90 percent of the time beyond that, one does not need to even use the Pro only features. One of the greatest concerns when selecting a game engine, especially for the mobile market, is how much girth it will add to the final build size. Most are quite hefty. With Unity's code stripping, it becomes quite small. Code stripping is the process by which Unity removes every extra little bit of code from the compiled libraries. A blank project, compiled for Android, that utilizes full code stripping ends up being around 7 megabytes. Perhaps one of the coolest features of Unity is the multi-platform compatibility. With a single project one can build for several different platforms. This includes the ability to simultaneously target mobile, PC, and consoles. This allows one to focus on real issues, such as handling inputs, resolution, and performance. In the past, if a company desired to deploy their product on more than one platform, they had to nearly double the development costs in order to essentially reprogram the game. Every platform did, and still does, run by its own logic and language. Thanks to Unity, game development has never been simpler. We can develop games using simple and fast scripting, letting Unity handle the complex translation to each platform. There are of course several other options for game engines. Two major ones that come to mind are cocos2d and Unreal Engine. 
While both are excellent choices, we can always find them to be a little lacking in certain respects. The engine of Angry Birds, cocos2d, could be a great choice for your next mobile hit. However, as the name suggests, it is pretty much limited to 2D games. A game can look great in it, but if you ever want that third dimension, it can be tricky to add. A second major problem with cocos2d is how bare bones it is. Any tool for building or importing assets needs to be created from scratch, or they need to be found. Unless you have the time and experience, this can seriously slow down development. Then there is the staple of major game development, Unreal Engine. This game engine has been used successfully by developers for many years, bringing great games to the world; Unreal Tournament and Gears of War not the least among them. These are both, however, console and computer games, which is the fundamental problem with the engine. Unreal is a very large and powerful engine. Only so much optimization can be done for mobile platforms. It has always had the same problem; it adds a lot of girth to a project and its final build. The other major issue with Unreal is its rigidity in being a first-person shooter engine. While it is technically possible to create other types of games in it, such tasks are long and complex. A strong working knowledge of the underlying system is a must before achieving such a feat. All in all, Unity definitely stands strong among the rest. But these are still great reasons for choosing Unity for game development. Projects can look just as great as AAA titles. Overhead and girth in the final build is small and very important when working on mobile platforms. The system's potential is open enough to allow you to create any type of game you might want, where other engines tend to be limited to a single type of game. And should your needs change at any point in the project's life cycle, it is very easy to add, remove, or change your choice of target platforms. Understanding what makes Android great With over 30-million devices in the hands of users, why would you not choose the Android platform for your next mobile hit? Apple may have been the first one out of the gate with their iPhone sensation, but Android is definitely a step ahead when it comes to smartphone technology. One of its best features is its blatant ability to be opened up so you can take a look at how the phone works, both physically and technically. One can swap out the battery and upgrade the micro SD card, should the need arise. Plugging the phone into a computer does not have to be a huge ordeal; it can simply function as removable storage media. From the point of view of cost of development, the Android market is superior as well. Other mobile app stores require an annual registration fee of about 100 dollars. Some also have a limit on the number of devices that can be registered for development at one time. The Google Play market has a one-time registration fee, and there is no concern about how many or what type of Android devices you are using for development. One of the drawbacks about some of the other mobile development kits is that you have to pay an annual registration fee before you have access to the SDK. With some, registration and payment are required before you can view their documentation. Android is much more open and accessible. Anybody can download the Android SDK for free. The documentation and forums are completely viewable without having to pay any fee. 
This means development for Android can start earlier, with device testing being a part of it from the very beginning. Understanding how Unity and Android work together Because Unity handles projects and assets in a generic way, there is no need to create multiple projects for multiple target platforms. This means that you could easily start development with the free version of Unity and target personal computers. Then, at a later date, you can switch targets to the Android platform with the click of a button. Perhaps, shortly after your game is launched, it takes the market by storm and there is a great call to bring it to other mobile platforms. With just another click of the button, you can easily target iOS without changing anything in your project. Most systems require a long and complex set of steps to get your project running on a device. For the first application, we will be going through that process because it is important to know about it. However, once your device is set up and recognized by the Android SDK, a single-button click will allow Unity to build your application, push it to a device, and start running it. There is nothing that has caused more headaches for some developers than trying to get an application on a device. Unity makes it simple. With the addition of a free Android application, Unity Remote, it is simple and easy to test mobile inputs without going through the whole build process. While developing, there is nothing more annoying than waiting for 5 minutes for a build every time you need to test a minor tweak, especially in the controls and interface. After the first dozen little tweaks the build time starts to add up. Unity Remote makes it simple and easy to test it all without ever having to hit the Build button. These are the big three: generic projects, a one-click build process, and Unity Remote. We could, of course, come up with several more great ways in which Unity and Android can work together. But these three are the major time and money savers. You could have the greatest game in the world but, if it takes 10 times as long to build and test, what is the point? Differences between Pro and Basic Unity comes with two licensing options, Pro and Basic, which can be found at https://store.unity3d.com. In order to follow, Unity Basic is all that is required. If you are not quite ready to spend the $3,000 required to purchase a full Unity Pro license with the Android add-on, there are other options. Unity Basic is free and comes with a 30-day free trial of Unity Pro. This trial is full and complete, just as if one has purchased Unity Pro. It is also possible to upgrade your license at a later date. Where Unity Basic comes with mobile options for free, Unity Pro requires the purchase of Pro add-ons for each of the mobile platforms. License comparison overview License comparisons can be found at http://unity3d.com/unity/licenses. This section will cover the specific differences between Unity Android Pro and Unity Android Basic. We will explore what the feature is and how useful it is. NavMeshes, Pathfinding, and crowd Simulation: This feature is Unity's built-in pathfinding system. It allows characters to find their way from point to point around your game. Just bake your navigation data in the editor and let Unity take over at runtime. This feature is great if you don't have the ability or inclination to program a pathfinding system yourself. There is a whole slew of tutorials online about how to program pathfinding and do crowd simulation. 
It is completely possible to do all of this in Unity Basic; you just need to provide the tools yourself. LOD Support: LOD(Level-of-detail) lets you control how complex a mesh is, based on its distance from the camera. When the camera is close to an object, render a complex mesh with a bunch of detail in it. When the camera is far from that object, render a simple mesh, because all that detail is not going to be seen anyway. Unity Pro provides a built-in system to manage this. However, this is another system that could be created in Unity Basic. Whether using Pro or not, this is an important feature for game efficiency. By rendering less complex meshes at a distance, everything can be rendered faster, leaving more room for awesome gameplay. Audio Filter: Audio filters allow you to add effects to audio clips at runtime. Perhaps you created gravel footstep sounds for your character. Your character is running, and we can hear the footsteps just fine, when suddenly they enter a tunnel and a solar flare hits, causing a time warp and slowing everything down. Audio filters would allow us to warp the gravel footstep sounds to sound like they are coming from within a tunnel and are slowed by a time warp. Of course, you could also just have the audio guy create a new set of tunnel gravel footsteps in the time warp sounds. But this might double the amount of audio in your game and limits how dynamic we can be with it at runtime. We either are or are not playing the time warp footsteps. Audio filters would allow us to control how much time warp is affecting our sounds. Video Playback and Streaming: When dealing with complex or high-definition cut scenes, being able to play a video becomes very important. Including them in a build especially with a mobile target can require a lot of space. This is where the streaming part of this feature comes in. This feature not only lets us play video, it also lets us stream that video from the internet. There is, however, a drawback to this feature. On mobile platforms, the video has to go through the device's builtin, video-playing system. This means the video can only be played full-screen and cannot be used as a texture. Theoretically, though, you could break your video into individual pictures for each frame and flip through them at runtime, but this is not recommended for build size and video quality reasons. Fully Fledged Streaming with Asset Bundles: Asset bundles are a great feature provided by Unity Pro. They allow you to create extra content and stream it to the users, without ever requiring an update to the game. You could add new characters, levels, or just about any other content you can think of. Their only drawback is that you cannot add more code. The functionality cannot change, but the content can. This is one of the best features of Unity Pro. 100,000 Dollar Turnover: This one isn't so much a feature as it is a guideline. According to Unity's End User License Agreement, the basic version of Unity cannot be licensed by any group or individual that made $100,000 in the previous fiscal year. This basically means, if you make a bunch of money, you have to buy Unity Pro. Of course, if you are making that much money, you can probably afford it without issue. That is the view of Unity at least, and the reason why it is there. Mecanim: IK Rigs: Unity's new animation system, Mecanim, supports many exciting new features, one of which is IK. 
If you are unfamiliar with the term, IK allows one to define the target point of an animation and let the system figure out how to get there. Imagine you have a cup sitting on a table and a character that wants to pick it up. You could animate the character to bend over and pick it up, but what if the character is slightly to the side? Or any number of other slight offsets that a player could cause, completely throwing off your animation. It is simply impractical to animate for every possibility. With IK, it hardly matters that the character is slightly off. We just define the goal point for the hand and leave the arm to the IK system. It calculates for us how the arm needs to move in order to get the hand to the cup. Another fun use is making characters look at interesting things as they walk around a room. A guard could track the nearest person, the player character could look at things that they can interact with, or a tentacle monster could lash out at the player without all the complex animation. This will be an exciting one to play with. Mecanim: Sync Layers & Additional Curves Sync layers, inside Mecanim, allow us to keep multiple sets of animation states in time with each other. Say you have a soldier that you want to animate differently based on how much health he has. When at full health, he walks around briskly. After a little damage, it becomes more of a trudge. If health is below half, a limp is introduced to his walk. And when almost dead, he crawls along the ground. With sync layers, we can create one animation state machine and duplicate it to multiple layers. By changing the animations and syncing the layers, we can easily transition between the different animations while maintaining the state machine. Additional curves are simply the ability to add curves to your animations. This means we can control various values with the animation. For example, in the game world, when a character picks up their feet for a jump, gravity will pull them down almost immediately. By adding an extra curve to that animation, in Unity, we can control how much gravity is affecting the character, allowing them to actually get in the air when jumping. This is a useful feature for controlling such values right alongside the animations, but one could just as easily create a script that holds and controls the curves. Custom Splash Screen: Though pretty self-explanatory, it is perhaps not immediately evident why this feature is specified, unless you have worked with Unity before. When an application built in Unity initializes on any platform, it displays a splash screen. In Unity Basic this will always be the Unity logo. By purchasing Unity Pro, you can substitute the Unity logo with any image you want. Build Size Stripping: This is an important feature for mobile platforms. Build size stripping removes all of the excess from your final build. Unity does a very good job at only including the assets that you have created that are used in the final build. With the stripping, it also only includes the parts of the engine itself that are used in the game. This is of great use when you absolutely have to get under that limit for downloading from the cell towers. On the other hand, you could create something similar to the asset bundles. Just let the users buy the framework, and download the assets later. Realtime Directional Shadows: Lights and shadows add a lot to the mood of a scene. This feature allows us to go beyond blob shadows and use realistic looking shadows. 
This is all well and good if you have the processing space for it. Most mobile devices do not. This feature should also never be used for static scenery. Instead, use static lightmaps, which is what they are for. But if you can find a good balance between simple needs and quality, this could be the feature that creates the difference between an alright and an awesome game. HDR, tone mapping: HDR(High Dynamic Range) and tone mapping allow us to create more realistic lighting effects. Standard rendering uses values from zero to one to represent how much of each color in a pixel is on. This does not allow for a full spectrum of lighting options to be explored. HDR lets the system use values beyond this range and process them using tone mapping to create better effects, such as a bright morning room or the bloom from a car window reflecting the sun. The downside of this feature is in the processor. The device can still only handle values between zero and one, so converting them takes time. Additionally, the more complex the effect, the more time it takes to render it. It would be surprising to see this used well on handheld devices, even in a simple game. Maybe the modern tablets could handle it. Light Probes: Light probes are an interesting little feature. When placed in the world, light probes figure out how an object should be lit. Then, as a character walks around, they tell it how to be shaded. The character is, of course, lit by the lights in the scene but there are limits on how many lights can shade an object at once. Light probes do all the complex calculations beforehand, allowing for better shading at runtime. Again, however, there are concerns about the processing power. Too little and you won't get a good effect; too much and there will be no processing left for playing the game. Lightmapping with Global Illumination and area lights: All versions of Unity support lightmaps, allowing for the baking of complex static shadows and lighting effects. With the addition of global illumination and area lights, you can add another touch of realism to your scenes. However, every version of Unity also lets you import your own lightmaps. This means, you could use some other program to render the lightmaps and import them separately. Static Batching: This feature speeds up the rendering process. Instead of spending time on each frame grouping objects for faster rendering, this allows the system to save the groups generated beforehand. Reducing the number of draw calls is a powerful step towards making a game run faster. That is exactly what this feature does. Render-to-Texture Effects: This is a fun feature, but of limited use. It simply allows you to redirect the rendering of the camera from going to the screen and instead go to a texture. This texture could then, in its most simple form, be put onto a mesh and act like a surveillance camera. You could also do some custom post processing, such as removing the color from the world as the player loses their health. However, that option could become very processor-intensive. Full-Screen Post-Processing Effects: This is another processor-intensive feature that probably will not make it into your mobile game. But you can add some very cool effects to your scene. Such as, adding motion blur when the player is moving really fast, or a vortex effect to warp the scene as the ship passes through a warped section of space. One of the best is using the bloom effect to give things a neon-like glow. 
Occlusion Culling: This is another great optimization feature. The standard camera system renders everything that is within the camera's view frustum, the view space. Occlusion culling lets us set up volumes in the space our camera can enter. These volumes are used to calculate what the camera can actually see from those locations. If there is a wall in the way, what is the point of rendering everything behind it? Occlusion culling calculates this and stops the camera from rendering anything behind that wall. Navmesh: Dynamic Obstacles and Priority: This feature works in conjunction with the pathfinding system. In scripts, we can dynamically set obstacles, and characters will find their way around them. Being able to set priorities means different types of characters can take different types of objects into consideration when finding their way around. A soldier must go around the barricades to reach his target. The tank, however, could just crash through, should it desire to. .Net Socket Support: This feature is only useful if you plan on doing fancy things over a user's network. Multiplayer networking is already supported in every version of Unity. The multiplayer that is available, though, does require a master server. With the use of sockets, one could create connections to other devices locally. Profiler and GPU profiling: This is a very useful feature. The profiler provides tons of information about how much load your game puts on the processor. With this information we can get right down into the nitty-gritties and determine exactly how long a script takes to process. Towards the end, though, we will also create a tool for determining how long specific parts of your code take to process. Script Access to Asset Pipeline: This is an alright feature. With full access to the pipeline, there is a lot of custom processing that can be done on assets and builds. The full range of possibilities are beyond our scope. But think of it as being able to tint all of the imported textures slightly blue. Dark Skin: This is entirely a cosmetic feature. Its point and purpose are questionable. But if a smooth, dark-skinned look is what you desire, this is the feature you want. There is an option in the editor to change it to the color scheme used in Unity Basic. For this feature, whatever floats your boat goes. Setting up the development environment Before we can create the next great game for Android, we need to install a few programs. In order to make the Android SDK work, we will first install the JDK. Then, we will be installing the Android SDK. After that is the installation of Unity. We then have to install an optional code editor. To make sure everything is set up correctly, we will connect to our devices and take a look at some special strategies if the device is a tricky one. Finally, we will install Unity Remote, a program that will become invaluable in your mobile development.
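The LOD Support entry earlier in this feature comparison noted that a comparable system can be built in Unity Basic if you provide the tools yourself. As a rough illustration of that point, here is a minimal, hypothetical C# sketch of a distance-based switcher; the class and field names (SimpleLODSwitcher, detailedRenderer, simpleRenderer, switchDistance) and the distance threshold are invented for this example and are not part of any Unity API or project.

using UnityEngine;

// A hypothetical stand-in for a built-in LOD system: show a detailed mesh
// when the camera is near and a simpler one when it is far away.
public class SimpleLODSwitcher : MonoBehaviour
{
    public Renderer detailedRenderer;   // assigned in the Inspector
    public Renderer simpleRenderer;     // assigned in the Inspector
    public float switchDistance = 30.0f;

    void Update()
    {
        if (Camera.main == null || detailedRenderer == null || simpleRenderer == null)
        {
            return;
        }

        float distance = Vector3.Distance(Camera.main.transform.position, transform.position);
        bool useDetailed = distance < switchDistance;

        // Enable exactly one of the two renderers, based on camera distance.
        detailedRenderer.enabled = useDetailed;
        simpleRenderer.enabled = !useDetailed;
    }
}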

Organizing a Virtual Filesystem

Packt
23 Nov 2013
13 min read
(For more resources related to this topic, see here.) Files are the building blocks of any computer system. This article deals with the portable handling of read-only application resources, and provides recipes to store the application data. Let us briefly consider the problems covered in this article. The first one is access to application data files. Often, application data for desktop operating systems resides in the same folder as the executable file. With Android, things get a little more complicated. The application files are packaged in the .apk file, and we simply cannot use the standard fopen()-like functions, or the std::ifstream and std::ofstream classes. The second problem results from the different rules for filenames and paths. Windows and Linux-based systems use different path separator characters, and provide different low-level file access APIs. The third problem comes from the fact that file I/O operations can easily become the slowest part of the whole application. User experience can become problematic if interaction lags are involved. To avoid delays, we should perform the I/O on a separate thread and handle the results of the read operation on yet another thread. To implement this, we have all the tools required. We start with abstract I/O interfaces, implement a portable .zip archive handling approach, and proceed to asynchronous resource loading.

Abstracting file streams

File I/O APIs differ slightly between Windows and Android (POSIX) operating systems, and we have to hide these differences behind a consistent set of C++ interfaces.

Getting ready

Please make sure you are familiar with the UNIX concepts of files and memory mapping. Wikipedia may be a good start (http://en.wikipedia.org/wiki/Memory-mapped_file).

How to do it...

From now on, our programs will read input data using the following simple interface. The base class iObject is used to add an intrusive reference counter to instances of this class:

class iIStream: public iObject
{
public:
  virtual std::string GetVirtualFileName() const = 0;
  virtual std::string GetFileName() const = 0;
  virtual void Seek( const uint64 Position ) = 0;
  virtual uint64 BlockRead( void* Buf, const uint64 Size ) = 0;
  virtual bool Eof() const = 0;
  virtual uint64 GetFileSize() const = 0;
  virtual uint64 GetFilePos() const = 0;

The following are a few methods that take advantage of memory-mapped files:

  virtual const ubyte* MapStream() const = 0;
  virtual const ubyte* MapStreamFromCurrentPos() const = 0;
};

This interface supports both memory-mapped access, using the MapStream() and MapStreamFromCurrentPos() member functions, and sequential access, with the BlockRead() and Seek() methods. To write some data to the storage, we use an output stream interface, as follows (again, the base class iObject is used to add a reference counter):

class iOStream: public iObject
{
public:
  virtual void Seek( const uint64 Position ) = 0;
  virtual uint64 GetFilePos() const = 0;
  virtual uint64 Write( const void* B, const uint64 Size ) = 0;
};

The Seek(), GetFileSize(), GetFilePos(), and filename-related methods of the iIStream interface can be implemented in a single class called FileMapper:

class FileMapper: public iIStream
{
public:
  explicit FileMapper( clPtr<iRawFile> File );
  virtual ~FileMapper();
  virtual std::string GetVirtualFileName() const { return FFile->GetVirtualFileName(); }
  virtual std::string GetFileName() const { return FFile->GetFileName(); }

Read a continuous block of data from this stream and return the number of bytes actually read:

  virtual uint64 BlockRead( void* Buf, const uint64 Size )
  {
    uint64 RealSize = ( Size > GetBytesLeft() ) ?
      GetBytesLeft() : Size;

Return zero if we have already read everything:

    if ( RealSize < 0 ) { return 0; }

    memcpy( Buf, ( FFile->GetFileData() + FPosition ), static_cast<size_t>( RealSize ) );

Advance the current position and return the number of copied bytes:

    FPosition += RealSize;
    return RealSize;
  }
  virtual void Seek( const uint64 Position ) { FPosition = Position; }
  virtual uint64 GetFileSize() const { return FFile->GetFileSize(); }
  virtual uint64 GetFilePos() const { return FPosition; }
  virtual bool Eof() const { return ( FPosition >= GetFileSize() ); }
  virtual const ubyte* MapStream() const { return FFile->GetFileData(); }
  virtual const ubyte* MapStreamFromCurrentPos() const { return ( FFile->GetFileData() + FPosition ); }
private:
  clPtr<iRawFile> FFile;
  uint64 FPosition;
};

The FileMapper uses the following iRawFile interface to abstract the data access:

class iRawFile: public iObject
{
public:
  iRawFile() {};
  virtual ~iRawFile() {};
  void SetVirtualFileName( const std::string& VFName );
  void SetFileName( const std::string& FName );
  std::string GetVirtualFileName() const;
  std::string GetFileName() const;
  virtual const ubyte* GetFileData() const = 0;
  virtual uint64 GetFileSize() const = 0;
protected:
  std::string FFileName;
  std::string FVirtualFileName;
};

Along with the trivial GetFileName() and SetFileName() methods implemented here, in the following recipes we implement the GetFileData() and GetFileSize() methods.

How it works...

The iIStream::BlockRead() method is useful when handling non-seekable streams. For the fastest access possible, we use memory-mapped files, implemented in the following recipe. The MapStream() and MapStreamFromCurrentPos() methods are there to provide access to memory-mapped files in a convenient way. These methods return a pointer to the memory where your file, or a part of it, is mapped. The iOStream::Write() method works similarly to the standard ofstream::write() function. Refer to the project 1_AbstractStreams for the full source code of this and the following recipe.

There's more...

An important problem while programming for multiple platforms, in our case for Windows and Linux-based Android, is the conversion of filenames. We define the following PATH_SEPARATOR constant, using OS-specific macros, to determine the path separator character:

#if defined( _WIN32 )
  const char PATH_SEPARATOR = '\\';
#else
  const char PATH_SEPARATOR = '/';
#endif

The following simple function helps us to make sure we use valid filenames for our operating system:

inline std::string Arch_FixFileName( const std::string& VName )
{
  std::string s( VName );
  std::replace( s.begin(), s.end(), '\\', PATH_SEPARATOR );
  std::replace( s.begin(), s.end(), '/', PATH_SEPARATOR );
  return s;
}

See also: Implementing portable memory-mapped files; Working with in-memory files

Implementing portable memory-mapped files

Modern operating systems provide a powerful mechanism called memory-mapped files. In short, it allows us to map the contents of a file into the application address space. In practice, this means we can treat files as ordinary arrays and access them using C pointers.

Getting ready

To understand the implementation of the interfaces from the previous recipe, we recommend reading about memory mapping. An overview of this mechanism's implementation on Windows can be found on the MSDN page at http://msdn.microsoft.com/en-us/library/ms810613.aspx. To find out more about memory mapping, the reader may refer to the mmap() function documentation.

How to do it...
In Windows, memory-mapped files are created using the CreateFileMapping() and MapViewOfFile() API calls. Android uses the mmap() function, which works pretty much the same way. Here we declare the RawFile class implementing the iRawFile interface. RawFile holds a pointer to a memory-mapped file and its size:

ubyte* FFileData;
uint64 FSize;

For the Windows version, we use two handles, for the file and the memory-mapping object, and for Android, we use only the file handle:

#ifdef _WIN32
  HANDLE FMapFile;
  HANDLE FMapHandle;
#else
  int FFileHandle;
#endif

We use the following function to open the file and create the memory mapping:

bool RawFile::Open( const string& FileName, const string& VirtualFileName )
{

At first, we need to obtain a valid file descriptor associated with the file:

#ifdef _WIN32
  FMapFile = (void*)CreateFileA( FFileName.c_str(), GENERIC_READ, FILE_SHARE_READ, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL | FILE_FLAG_RANDOM_ACCESS, NULL );
#else
  FFileHandle = open( FileName.c_str(), O_RDONLY );
  if ( FFileHandle == -1 )
  {
    FFileData = NULL;
    FSize = 0;
  }
#endif

Using the file descriptor, we can create a file mapping. Here we omit error checks for the sake of clarity. However, the example in the supplementary materials contains more error checks:

#ifdef _WIN32
  FMapHandle = (void*)CreateFileMapping( (HANDLE)FMapFile, NULL, PAGE_READONLY, 0, 0, NULL );
  FFileData = (ubyte*)MapViewOfFile( (HANDLE)FMapHandle, FILE_MAP_READ, 0, 0, 0 );
  DWORD dwSizeLow = 0, dwSizeHigh = 0;
  dwSizeLow = ::GetFileSize( FMapFile, &dwSizeHigh );
  FSize = ((uint64)dwSizeHigh << 32) | (uint64)dwSizeLow;
#else
  struct stat FileInfo;
  fstat( FFileHandle, &FileInfo );
  FSize = static_cast<uint64>( FileInfo.st_size );
  FFileData = (ubyte*)mmap( NULL, FSize, PROT_READ, MAP_PRIVATE, FFileHandle, 0 );
  close( FFileHandle );
#endif
  return true;
}

The correct deinitialization function closes all the handles:

bool RawFile::Close()
{
#ifdef _WIN32
  if ( FFileData ) UnmapViewOfFile( FFileData );
  if ( FMapHandle ) CloseHandle( (HANDLE)FMapHandle );
  CloseHandle( (HANDLE)FMapFile );
#else
  if ( FFileData ) munmap( (void*)FFileData, FSize );
#endif
  return true;
}

The main functions of the iRawFile interface, GetFileData() and GetFileSize(), have trivial implementations here:

virtual const ubyte* GetFileData() const { return FFileData; }
virtual uint64 GetFileSize() const { return FSize; }

How it works...

To use the RawFile class, we create an instance and wrap it into a FileMapper class instance:

clPtr<RawFile> F = new RawFile();
F->Open( "SomeFileName", "SomeFileName" );
clPtr<FileMapper> FM = new FileMapper( F );

The FM object can be used with any function supporting the iIStream interface. The hierarchy of all our iRawFile implementations looks like what is shown in the following figure:

Implementing file writers

Quite frequently, our application might want to store some of its data on the disk. Another typical use case we have already encountered is the downloading of some file from the network into a memory buffer. Here, we implement two variations of the iOStream interface, for ordinary and in-memory files.

How to do it...

Let us derive the FileWriter class from the iOStream interface. We add the Open() and Close() member functions on top of the iOStream interface and carefully implement the Write() operation.
Our output stream implementation does not use memory-mapped files and instead uses ordinary file descriptors, as shown in the following code:

class FileWriter: public iOStream
{
public:
  FileWriter(): FPosition( 0 ) {}
  virtual ~FileWriter() { Close(); }

  bool Open( const std::string& FileName )
  {
    FFileName = FileName;

We split the Android-specific and Windows-specific code paths using defines:

#ifdef _WIN32
    FMapFile = CreateFile( FFileName.c_str(), GENERIC_WRITE, FILE_SHARE_READ, NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL );
    return !( FMapFile == ( void* )INVALID_HANDLE_VALUE );
#else
    FMapFile = open( FFileName.c_str(), O_WRONLY | O_CREAT, 0644 );
    FPosition = 0;
    return !( FMapFile == -1 );
#endif
  }

The same technique is used in the other methods. The difference between the two operating systems is trivial, so we decided to keep everything inside a single class and separate the code using defines:

  void Close()
  {
#ifdef _WIN32
    CloseHandle( FMapFile );
#else
    if ( FMapFile != -1 ) { close( FMapFile ); }
#endif
  }

  virtual std::string GetFileName() const { return FFileName; }
  virtual uint64 GetFilePos() const { return FPosition; }

  virtual void Seek( const uint64 Position )
  {
#ifdef _WIN32
    SetFilePointerEx( FMapFile, *reinterpret_cast<const LARGE_INTEGER*>( &Position ), NULL, FILE_BEGIN );
#else
    if ( FMapFile != -1 ) { lseek( FMapFile, Position, SEEK_SET ); }
#endif
    FPosition = Position;
  }

However, things may get more complex if you decide to support more operating systems. It can be a good refactoring exercise.

  virtual uint64 Write( const void* Buf, const uint64 Size )
  {
#ifdef _WIN32
    DWORD written;
    WriteFile( FMapFile, Buf, DWORD( Size ), &written, NULL );
#else
    if ( FMapFile != -1 ) { write( FMapFile, Buf, Size ); }
#endif
    FPosition += Size;
    return Size;
  }

private:
  std::string FFileName;
#ifdef _WIN32
  HANDLE FMapFile;
#else
  int FMapFile;
#endif
  uint64 FPosition;
};

How it works...

Now we can also present an implementation of iOStream that stores everything in a memory block. To store arbitrary data in a memory block, we declare the Blob class, as shown in the following code:

class Blob: public iObject
{
public:
  Blob();
  virtual ~Blob();

Set the blob data pointer to some external memory block:

  void SetExternalData( void* Ptr, size_t Sz );

Direct access to the data inside this blob:

  void* GetData();
  ...

Get the current size of the blob:

  size_t GetSize() const;

Check if this blob is responsible for managing the dynamic memory it uses:

  bool OwnsData() const;
  ...

Increase the size of the blob and add more data to it. This method is very useful in a network downloader:

  bool AppendBytes( void* Data, size_t Size );
  ...
};

There are lots of other methods in this class. You can find the full source code in the Blob.h file.
We use this Blob class, and declare the MemFileWriter class, which implements our iOStream interface, in the following way: class MemFileWriter: public iOStream { public: MemFileWriter(clPtr<Blob> Container); Change the absolute position inside a file, where new data will be written to: virtual void Seek( const uint64 Position ) { if ( Position > FContainer->GetSize() ) { Check if we are allowed to resize the blob: if ( Position > FMaxSize - 1 ) { return; } And try to resize it: if ( !FContainer->SafeResize(static_cast<size_t>( Position ) + 1 )) { return; } } FPosition = Position; } Write data to the current position of this file: virtual uint64 Write( const void* Buf, const uint64 Size ) { uint64 ThisPos = FPosition; Ensure there is enough space: Seek( ThisPos + Size ); if ( FPosition + Size > FMaxSize ) { return 0; } void* DestPtr = ( void* )( &( ( ( ubyte* )(FContainer->GetData() ))[ThisPos] ) ); Write the actual data: memcpy( DestPtr, Buf, static_cast<size_t>( Size ) ); return Size; } } private: … }; We omit the trivial implementations of GetFileName(), GetFilePos(), GetMaxSize(), SetContainer(), GetContainer(), GetMaxSize(), and SetMaxSize() member functions, along with fields declarations. You will find the full source code of them in the code bundle of the book. See also Working with in-memory files Working with in-memory files Sometimes it is very convenient to be able to treat some arbitrary in-memory runtime generated data as if it were in a file. As an example, let's consider using a JPEG image downloaded from a photo hosting, as an OpenGL texture. We do not need to save it into the internal storage, as it is a waste of CPU time. We also do not want to write separate code for loading images from memory. Since we have our abstract iIStream and iRawFile interfaces, we just implement the latter to support memory blocks as the data source. Getting ready In the previous recipes, we already used the Blob class, which is a simple wrapper around a void* buffer. How to do it... Our iRawFile interface consists of two methods: GetFileData() and GetFileSize(). We just delegate these calls to an instance of Blob: class ManagedMemRawFile: public iRawFile { public: ManagedMemRawFile(): FBlob( NULL ) {} virtual const ubyte* GetFileData() const { return ( const ubyte* )FBlob->GetData(); } virtual uint64 GetFileSize() const { return FBlob->GetSize(); } void SetBlob( const clPtr<Blob>& Ptr ) { FBlob = Ptr; } private: clPtr<Blob> FBlob; }; Sometimes it is useful to avoid the overhead of using a Blob object, and for such cases we provide another class, MemRawFile, that holds a raw pointer to a memory block and optionally takes care of the memory allocation: class MemRawFile: public iRawFile { public: virtual const ubyte* GetFileData() const { return (const ubyte*) FBuffer; } virtual uint64 GetFileSize() const { return FBufferSize; } void CreateFromString( const std::string& InString ); void CreateFromBuffer( const void* Buf, uint64 Size ); void CreateFromManagedBuffer( const void* Buf, uint64 Size ); private: bool FOwnsBuffer; const void* FBuffer; uint64 FBufferSize; }; How it works... We use the MemRawFile as an adapter for the memory block extracted from a .zip file and ManagedMemRawFile as the container for data downloaded from photo sites.

Unity Networking – The Pong Game

Packt
22 Nov 2013
15 min read
(For more resources related to this topic, see here.) Multiplayer is everywhere. It's a staple of AAA games and small-budget indie offerings alike. Multiplayer games tap into our most basic human desires. Whether it be teaming up with strangers to survive a zombie apocalypse, or showing off your skills in a round of "Capture the Flag" on your favorite map, no artificial intelligence in the world comes close to the feeling of playing with a living, breathing, and thinking human being. Unity3D has a sizable number of third-party networking middleware aimed at developing multiplayer games, and is arguably one of the easiest platforms to prototype multiplayer games. The first networking system most people encounter in Unity is the built-in Unity Networking API . This API simplifies a great many tasks in writing networked code by providing a framework for networked objects rather than just sending messages. This works by providing a NetworkView component, which can serialize object state and call functions across the network. Additionally, Unity provides a Master server, which essentially lets players search among all public servers to find a game to join, and can also help players in connecting to each other from behind private networks. In this article, we will cover: Introducing multiplayer Introducing UDP communication Setting up your own Master server for testing What a NetworkView is Serializing object state Calling RPCs Starting servers and connecting to them Using the Master server API to register servers and browse available hosts Setting up a dedicated server model Loading networked levels Creating a Pong clone using Unity networking Introducing multiplayer games Before we get started on the details of communication over the Internet, what exactly does multiplayer entail in a game? As far as most players are concerned, in a multiplayer game they are sharing the same experience with other players. It looks and feels like they are playing the same game. In reality, they aren't. Each player is playing a separate game, each with its own game state. Trying to ensure that all players are playing the exact same game is prohibitively expensive. Instead, games attempt to synchronize just enough information to give the illusion of a shared experience. Games are almost ubiquitously built around a client-server architecture, where each client connects to a single server. The server is the main hub of the game, ideally the machine for processing the game state, although at the very least it can serve as a simple "middleman" for messages between clients. Each client represents an instance of the game running on a computer. In some cases the server might also have a client, for instance some games allow you to host a game without starting up an external server program. While an MMO ( Massively Multiplayer Online ) might directly connect to one of these servers, many games do not have prior knowledge of the server IPs. For example, FPS games often let players host their own servers. In order to show the user a list of servers they can connect to, games usually employ another server, known as the "Master Server" or alternatively the "Lobby server". This server's sole purpose is to keep track of game servers which are currently running, and report a list of these to clients. Game servers connect to the Master server in order to announce their presence publicly, and game clients query the Master server to get an updated list of game servers currently running. 
Alternatively, this Master server sometimes does not keep track of servers at all. Sometimes games employ "matchmaking", where players connect to the Lobby server and list their criteria for a game. The server places this player in a "bucket" based on their criteria, and whenever a bucket is full enough to start a game, a host is chosen from these players and that client starts up a server in the background, which the other players connect to. This way, the player does not have to browse servers manually and can instead simply tell the game what they want to play. Introducing UDP communication The built-in Unity networking is built upon RakNet . RakNet uses UDP communication for efficiency. UDP ( User Datagram Protocols ) is a simple way to send messages to another computer. These messages are largely unchecked, beyond a simple checksum to ensure that the message has not been corrupted. Because of this, messages are not guaranteed to arrive, nor are they guaranteed to only arrive once (occasionally a single message can be delivered twice or more), or even in any particular order. TCP, on the other hand, guarantees each message to be received just once, and in the exact order they were sent, although this can result in increased latency (messages must be resent several times if they fail to reach the target, and messages must be buffered when received, in order to be processed in the exact order they were sent). To solve this, a reliability layer must be built on top of UDP. This is known as rUDP ( reliable UDP ). Messages can be sent unreliably (they may not arrive, or may arrive more than once), or reliably (they are guaranteed to arrive, only once per message, and in the correct order). If a reliable message was not received or was corrupt, the original sender has to resend the message. Additionally, messages will be stored rather than immediately processed if they are not in order. For example, if you receive messages 1, 2, and 4, your program will not be able to handle those messages until message 3 arrives. Allowing unreliable or reliable switching on a per-message basis affords better overall performance. Messages, such as player position, are better suited to unreliable messages (if one fails to arrive, another one will arrive soon anyway), whereas damage messages must be reliable (you never want to accidentally drop a damage message, and having them arrive in the same order they were sent reduces race conditions). In Unity, you can serialize the state of an object (for example, you might serialize the position and health of a unit) either reliably or unreliably (unreliable is usually preferred). All other messages are sent reliably. Setting up the Master Server Although Unity provide their own default Master Server and Facilitator (which is connected automatically if you do not specify your own), it is not recommended to use this for production. We'll be using our own Master Server, so you know how to connect to one you've hosted yourself. Firstly, go to the following page: http://unity3d.com/master-server/ We're going to download two of the listed server components: the Master Server and the Facilitator as shown in the following screenshot: The servers are provided in full source, zipped. If you are on Windows using Visual Studio Express, open up the Visual Studio .sln solution and compile in the Release mode. Navigate to the Release folder and run the EXE (MasterServer.exe or Facilitator.exe). 
If you are on a Mac, you can either use the included XCode project, or simply run the Makefile (the Makefile works under both Linux and Mac OS X). The Master Server, as previously mentioned, enables our game to show a server lobby to players. The Facilitator is used to help clients connect to each other by performing an operation known as NAT punch-through . NAT is used when multiple computers are part of the same network, and all use the same public IP address. NAT will essentially translate public and private IPs, but in order for one machine to connect to another, NAT punch-through is necessary. You can read more about it here: http://www.raknet.net/raknet/manual/natpunchthrough.html The default port for the Master Server is 23466, and for the Facilitator is 50005. You'll need these later in order to configure Unity to connect to the local Master Server and Facilitator instead of the default Unity-hosted servers. Now that we've set up our own servers, let's take a look at the Unity Networking API itself. NetworkViews and state serialization In Unity, game objects that need to be networked have a NetworkView component. The NetworkView component handles communication over the network, and even helps make networked state serialization easier. It can automatically serialize the state of a Transform, Rigidbody, or Animation component, or in one of your own scripts you can write a custom serialization function. When attached to a game object, NetworkView will generate a NetworkViewID for NetworkView. This ID serves to uniquely identify a NetworkView across the network. An object can be saved as part of a scene with NetworkView attached (this can be used for game managers, chat boxes, and so on), or it can be saved in the project as a prefab and spawned later via Network.Instantiate (this is used to generate player objects, bullets, and so on). Network.Instantiate is the multiplayer equivalent to GameObject.Instantiate —it sends a message over the network to other clients so that all clients spawn the object. It also assigns a network ID to the object, which is used to identify the object across multiple clients (the same object will have the same network ID on every client). A prefab is a template for a game object (such as the player object). You can use the Instantiate methods to create a copy of the template in the scene. Spawned network game objects can also be destroyed via Network.Destroy. It is the multiplayer counterpart of GameObject.Destroy. It sends a message to all clients so that they all destroy the object. It also deletes any RPC messages associated with that object. NetworkView has a single component that it will serialize. This can be a Transform, a Rigidbody, an Animation, or one of your own components that has an OnSerializeNetworkView function. Serialized values can either be sent with the ReliableDeltaCompressed option, where values are always sent reliably and compressed to include only changes since the last update, or they can be sent with the Unreliable option, where values are not sent reliably and always include the full values (not the change since the last update, since that would be impossible to predict over UDP). Each method has its own advantages and disadvantages. If data is constantly changing, such as player position in a first person shooter, in general Unreliable is preferred to reduce latency. If data does not often change, use the ReliableDeltaCompressed option to reduce bandwidth (as only changes will be serialized). 
NetworkView can also call methods across the network via Remote Procedure Calls ( RPC ). RPCs are always completely reliable in Unity Networking, although some networking libraries allow you to send unreliable RPCs, such as uLink or TNet. Writing a custom state serializer While initially a game might simply serialize Transform or Rigidbody for testing, eventually it is often necessary to write a custom serialization function. This is a surprisingly easy task. Here is a script that sends an object's position over the network: using UnityEngine; using System.Collections; public class ExampleUnityNetworkSerializePosition : MonoBehaviour { public void OnSerializeNetworkView( BitStream stream, NetworkMessageInfo info ) { // we are currently writing information to the network if( stream.isWriting ) { // send the object's position Vector3 position = transform.position; stream.Serialize( ref position ); } // we are currently reading information from the network else { // read the first vector3 and store it in 'position' Vector3 position = Vector3.zero; stream.Serialize( ref position ); // set the object's position to the value we were sent transform.position = position; } } } Most of the work is done with BitStream. This is used to check if NetworkView is currently writing the state, or if it is reading the state from the network. Depending on whether it is reading or writing, stream.Serialize behaves differently. If NetworkView is writing, the value will be sent over the network. However, if NetworkView is reading, the value will be read from the network and saved in the referenced variable (thus the ref keyword, which passes Vector3 by reference rather than value). Using RPCs RPCs are useful for single, self-contained messages that need to be sent, such as a character firing a gun, or a player saying something in chat. In Unity, RPCs are methods marked with the [RPC] attribute. This can be called by name via networkView.RPC( "methodName", … ). For example, the following script prints to the console on all machines when the space key is pressed. using UnityEngine; using System.Collections; public class ExampleUnityNetworkCallRPC : MonoBehavior { void Update() { // important – make sure not to run if this networkView is notours if( !networkView.isMine ) return; // if space key is pressed, call RPC for everybody if( Input.GetKeyDown( KeyCode.Space ) ) networkView.RPC( "testRPC", RPCMode.All ); } [RPC] void testRPC( NetworkMessageInfo info ) { // log the IP address of the machine that called this RPC Debug.Log( "Test RPC called from " + info.sender.ipAddress ); } } Also note the use of NetworkView.isMine to determine ownership of an object. All scripts will run 100 percent of the time regardless of whether your machine owns the object or not, so you have to be careful to avoid letting some logic run on remote machines; for example, player input code should only run on the machine that owns the object. RPCs can either be sent to a number of players at once, or to a specific player. You can either pass an RPCMode to specify which group of players to receive the message, or a specific NetworkPlayer to send the message to. You can also specify any number of parameters to be passed to the RPC method. 
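To make the parameter-passing point concrete, here is a small sketch in the same style as the article's examples. The class name, the chat scenario, and the button label are illustrative assumptions rather than code from the article, and the script assumes the object it is attached to has a NetworkView component:

    using UnityEngine;
    using System.Collections;

    public class ExampleUnityNetworkChatRPC : MonoBehaviour
    {
        void OnGUI()
        {
            // only the machine that owns this object should send chat messages
            if( !networkView.isMine )
                return;

            if( GUILayout.Button( "Say hello" ) )
            {
                // any number of supported parameters (int, float, string, Vector3,
                // Quaternion, NetworkPlayer, NetworkViewID) can follow the RPCMode
                networkView.RPC( "ReceiveChat", RPCMode.All,
                    "Hello from " + Network.player.ipAddress, Network.player );
            }
        }

        [RPC]
        void ReceiveChat( string message, NetworkPlayer sender )
        {
            // runs on every connected machine, including the sender
            Debug.Log( "Chat from " + sender.ipAddress + ": " + message );
        }
    }

The RPCMode argument in the call above decides who receives the message, which brings us to the available options.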
RPCMode includes the following entries: All (the RPC is called for everyone) AllBuffered (the RPC is called for everyone, and then buffered for when new players connect, until the object is destroyed) Others (the RPC is called for everyone except the sender) OthersBuffered (the RPC is called for everyone except the sender, and then buffered for when new players connect, until the object is destroyed) Server (the RPC is sent to the host machine) Initializing a server The first thing you will want to set up is hosting games and joining games. To initialize a server on the local machine, call Network.InitializeServer. This method takes three parameters: the number of allowed incoming connections, the port to listen on, and whether to use NAT punch-through. The following script initializes a server on port 25000 which allows 8 clients to connect: using UnityEngine; using System.Collections; public class ExampleUnityNetworkInitializeServer : MonoBehavior { void OnGUI() { if( GUILayout.Button( "Launch Server" ) ) { LaunchServer(); } } // launch the server void LaunchServer() { // Start a server that enables NAT punchthrough, // listens on port 25000, // and allows 8 clients to connect Network.InitializeServer( 8, 25005, true ); } // called when the server has been initialized void OnServerInitialized() { Debug.Log( "Server initialized" ); } } You can also optionally enable an incoming password (useful for private games) by setting Network.incomingPassword to a password string of the player's choice, and initializing a general-purpose security layer by calling Network.InitializeSecurity(). Both of these should be set up before actually initializing the server. Connecting to a server To connect to a server you know the IP address of, you can call Network.Connect. The following script allows the player to enter an IP, a port, and an optional password and attempts to connect to the server: using UnityEngine; using System.Collections; public class ExampleUnityNetworkingConnectToServer : MonoBehavior { private string ip = ""; private string port = ""; private string password = ""; void OnGUI() { GUILayout.Label( "IP Address" ); ip = GUILayout.TextField( ip, GUILayout.Width( 200f ) ); GUILayout.Label( "Port" ); port = GUILayout.TextField( port, GUILayout.Width( 50f ) ); GUILayout.Label( "Password (optional)" ); password = GUILayout.PasswordField( password, '*',GUILayout.Width( 200f ) ); if( GUILayout.Button( "Connect" ) ) { int portNum = 25005; // failed to parse port number – a more ideal solution is tolimit input to numbers only, a number of examples can befound on the Unity forums if( !int.TryParse( port, out portNum ) ) { Debug.LogWarning( "Given port is not a number" ); } // try to initiate a direct connection to the server else { Network.Connect( ip, portNum, password ); } } } void OnConnectedToServer() { Debug.Log( "Connected to server!" ); } void OnFailedToConnect( NetworkConnectionError error ) { Debug.Log( "Failed to connect to server: " +error.ToString() ); } } Connecting to the Master Server While we could just allow the player to enter IP addresses to connect to servers (and many games do, such as Minecraft), it's much more convenient to allow the player to browse a list of public servers. This is what the Master Server is for. Now that you can start up a server and connect to it, let's take a look at how to connect to the Master Server you downloaded earlier. First, make sure both the Master Server and Facilitator are running. 
I will assume you are running them on your local machine (IP is 127.0.0.1), but of course you can run these on a different computer and use that machine's IP address. Keep in mind, if you want the Master Server publicly accessible, it must be installed on a machine with a public IP address (it cannot be in a private network). Let's configure Unity to use our Master Server rather than the Unity-hosted test server. The following script configures the Master Server and Facilitator to connect to a given IP (by default 127.0.0.1): using UnityEngine; using System.Collections; public class ExampleUnityNetworkingConnectToMasterServer : MonoBehaviour { // Assuming Master Server and Facilitator are on the same machine public string MasterServerIP = "127.0.0.1"; void Awake() { // set the IP and port of the Master Server to connect to MasterServer.ipAddress = MasterServerIP; MasterServer.port = 23466; // set the IP and port of the Facilitator to connect to Network.natFacilitatorIP = MasterServerIP; Network.natFacilitatorPort = 50005; } }
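The excerpt stops short of actually registering and browsing hosts, so here is a hedged sketch of how that typically looks with the same legacy Unity Networking API used above. The gameTypeName value, the session name, and the class name are assumptions for illustration only; gameTypeName should be a string unique to your game:

    using UnityEngine;
    using System.Collections;

    public class ExampleUnityNetworkHostBrowser : MonoBehaviour
    {
        // assumed value; replace with a string unique to your game
        public string gameTypeName = "MyUniqueGameTypeName";

        void OnGUI()
        {
            // a running server can announce itself to the Master Server
            if( Network.isServer && GUILayout.Button( "Register host" ) )
                MasterServer.RegisterHost( gameTypeName, "My game session", "Example comment" );

            // ask the Master Server for the current list of hosts
            if( GUILayout.Button( "Refresh host list" ) )
            {
                MasterServer.ClearHostList();
                MasterServer.RequestHostList( gameTypeName );
            }

            // PollHostList returns whatever hosts have been received so far
            foreach( HostData host in MasterServer.PollHostList() )
            {
                string label = host.gameName + " (" + host.connectedPlayers +
                    "/" + host.playerLimit + ")";

                // clicking an entry connects to that server; NAT punch-through
                // is handled via the Facilitator configured earlier
                if( GUILayout.Button( label ) )
                    Network.Connect( host );
            }
        }

        void OnMasterServerEvent( MasterServerEvent msEvent )
        {
            if( msEvent == MasterServerEvent.RegistrationSucceeded )
                Debug.Log( "Registered with the Master Server" );
            else if( msEvent == MasterServerEvent.HostListReceived )
                Debug.Log( "Host list received" );
        }
    }

Note that because the Awake method shown above points MasterServer and the Facilitator at your own machines, the registration and browsing calls themselves stay exactly the same whether you use the Unity-hosted test servers or your own.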

Limits of Game Data Analysis

Packt
20 Nov 2013
7 min read
(For more resources related to this topic, see here.) Which game analytics should be used This section will focus on the role data that should take in your production process. As a studio, the first step is to identify your needs and to choose the goals you will attribute to game analytics. Game analytics as a tool Firstly, it is important to understand that game analytics are a tool, which means they can serve several purposes. You can use them for marketing, science, sociological studies, and so on. Following this statement, you will need different tools and different approaches to reach your goal. As this article has tried to highlight it, tools are chosen according to problems, regardless if the choice is technique or analysis. You must not choose a tool because it is said to be the best performing tool ever made, or because it is fashionable. Instead, you must choose a tool because it is said to be the most efficient tool for your needs. Try to answer the following questions: What are the long-term uses I plan to do with game analytics? Is it simply reporting the Key Performance Indicators or is it the building a user-centric framework for deep analysis? What are the types and the level of skills of the people who will work on it?Do I have all of the skills, from data scientists to game analysts, or do I need to choose a solution which will offset some lacks in a particular field? How much data will be collected? How do I plan to deal with possible peaks of frequentation? How do I adapt temporalities of reporting and analysis with the rhythmof production I have on my project? Do I split them weekly or monthly? What are the main goals of my process? Do I want to build a predictive model (for example, based on correlations) in order to define the next acquisition campaign I will run? Do I want to increase the monetization rate on the current player base? Do I want to perform A/B testing? And the list goes on. Game analytics must serve your team Secondly, it is important to ensure that the use of game analytics must serve your team as a whole. They should not have any disagreements about the long-term objectives that you have chosen. They must accompany it and especially improve it, but the general objective should remain the same. Given the current state of the field, withdrawing the "human touch" from the design process entirely and listening only to data would be a mistake. That's why the game analytics process should be thought through the prism of your own team; and therefore, should be presented as a new tool. This will help them to make good decisions for the game. The best example for the democratization of "game analytics way of thinking" inside your team is certainly the A/B testing aspect. If you experience debates about particular features in the game, instead of taking part you can propose to use A/B tests for some of those features. Following this, there are no particular limits to the use of the tool. A game designer can test different balancing on the virtual economy of a game and an artist can experience different graphic styles. When starting, focus your attention on simple practices If you are new to the field, the following list may help you to start defining your first objectives. It contains most of the typical use for online games, especially free-to-play games: Producing KPIs on a weekly or monthly basis, according to your needs. 
These KPIs will help you to orient the upcoming development of your game and to anticipate the return on investment of your acquisition campaigns. Identifying if some of the steps of your tutorial phase are poorly designed; for example, if you have a sudden player loss at a particular step of your tutorial. On the same idea, having the loss of players at each level is also very useful to improve the general balancing of your game, especially the progress curve and the difficulty. This topic is more important if you have a part of your business model based on purchasable goods, which can increase the progression rate of the player. You can evaluate which area and which purchasable goods of your game are generating the best income. You can perform A/B testing on particular key features of your game in order to see which ones are the most efficient. What game analytics should not be used for On the other hand, there are a few limits that you need to know before using methods and processes from game analytics. Keep away from numbers You must always be careful about the fact that numbers are used to represent a given situation during a "T" instant. From this statement, the predictive models must always be revised and improved. they should never be considered as the perfect truth. In order for the process to be efficient, it is quite important to keep research on the data inside the structure defined by the initial goals. Otherwise, you might split your efforts and no actionable insights would be identified. In other words, numbers must remain at their place. They are a tool in the hands of a human subject, and they should not become an obsession. Try to reason if they make any sense and if you are asking the right question. Practices that need to be avoided As mentioned in the the previous section, if you are new to this field, be aware of the following situations: Data cannot dictate the full content of your next update. If it is the case, you may first re-evaluate the general intention behind your product and talk with the game designer. When starting, try to avoid complex questions that involve external factors in the game, even if they seem crucial for you. For example, trying to understand why people stopped playing your game over a long period of time is usually impossible. Old players might stop playing because another game came out or they just got bored. Data cannot make miracles at this point of the engagement. Data must not take too much ampleness in the creative process. There are some human intentions and ideas, and only then the data comes in order to verify and improve the potential success of those intentions. Data must not slow down the performances of the game. One of the common methods to avoid this is to send the data when the player logs in or logs out and not at each click or each action. Summary This is the end of this article, and the most important thing you need to remember about game analytics in general is the importance of the definition of your objectives. The reason why you choose this tool instead of another (and this article has tried to list a maximum of them, from data mining to pure analysis) is because it fits your needs as much as possible. This statement is true at every stage of the refiection process which surrounds game analytics, from the choice of the storage solution to the type of analysis you want to perform. 
The rise of a fully connected state in the video game industry offers developers the opportunity to change the way they create games, but there is no doubt that this tool has not yet reached full maturity. Therefore, even if the benefits of game analytics are great, be prepared to make mistakes as well, and keep your own process open to criticism from your team. Resources for Article: Further resources on this subject: Flash 10 Multiplayer Game: Game Interface Design [Article] GNU Octave: Data Analysis Examples [Article] HTML5 Games Development: Using Local Storage to Store Game Data [Article]

Creating and Utilizing Custom Entities

Packt
20 Nov 2013
16 min read
(For more resources related to this topic, see here.) Introducing the entity system The entity system exists to spawn and manage entities in the game world. Entities are logical containers, allowing drastic changes in behavior at runtime. For example, an entity can change its model, position, and orientation at any point in the game. Consider this; every item, weapon, vehicle, and even player that you have interacted with in the engine is an entity. The entity system is one of the most important modules present in the engine, and is dealt regularly by programmers. The entity system, accessible via the IEntitySystem interface, manages all entities in the game. Entities are referenced to using the entityId type definition, which allows 65536 unique entities at any given time. If an entity is marked for deletion, for example, IEntity::Remove(bool bNow = false), the entity system will delete this prior to updating at the start of the next frame. If the bNow parameter is set to true, the entity will be removed right away. Entity classes Entities are simply instances of an entity class, represented by the IEntityClass interface. Each entity class is assigned a name that identifies it, for example, SpawnPoint. Classes can be registered via, IEntityClassRegistry::RegisterClass, or via IEntityClassRegistry::RegisterStdClass to use the default IEntityClass implementation. Entities The IEntity interface is used to access the entity implementation itself. The core implementation of IEntity is contained within, CryEntitySystem.dll, and cannot be modified. Instead, we are able to extend entities using game object extensions (have a look at the Game object extensions section in this article) and custom entity classes. entityId Each entity instance is assigned a unique identifier, which persists for the duration of the game session. EntityGUID Besides the entityId parameter, entities are also given globally unique identifiers, which unlike entityId can persist between game sessions, in the case of saving games and more. Game objects When entities need extended functionality, they can utilize game objects and game object extensions. This allows for a larger set of functionality that can be shared by any entity. Game objects allow the handling of binding entities to the network, serialization, per-frame updates, and the ability to utilize existing (or create new) game object extensions such as Inventory and AnimatedCharacter. Typically in CryENGINE development, game objects are only necessary for more important entity implementations, such as actors. The entity pool system The entity pool system allows "pooling" of entities, allowing efficient control of entities that are currently being processed. This system is commonly accessed via flowgraph, and allows the disabling/enabling groups of entities at runtime based on events. Pools are also used for entities that need to be created and released frequently, for example, bullets. Once an entity has been marked as handled by the pool system, it will be hidden in the game by default. Until the entity has been prepared, it will not exist in the game world. It is also ideal to free the entity once it is no longer needed. For example, if you have a group of AI that only needs to be activated when the player reaches a predefined checkpoint trigger, this can be set up using AreaTrigger (and its included flownode) and the Entity:EntityPool flownode. Creating a custom entity Now that we've learned the basics of the entity system, it's time to create our first entity. 
For this exercise, we'll be demonstrating the ability to create an entity in Lua, C#, and finally C++. . Creating an entity using Lua Lua entities are fairly simple to set up, and revolve around two files: the entity definition, and the script itself. To create a new Lua entity, we'll first have to create the entity definition in order to tell the engine where the script is located: <Entity Name="MyLuaEntity" Script="Scripts/Entities/Others/MyLuaEntity.lua" /> Simply save this file as MyLuaEntity.ent in the Game/Entities/ directory, and the engine will search for the script at Scripts/Entities/Others/MyLuaEntity.lua. Now we can move on to creating the Lua script itself! To start, create the script at the path set previously and add an empty table with the same name as your entity: MyLuaEntity = { } When parsing the script, the first thing the engine does is search for a table with the same name as the entity, as you defined it in the .ent definition file. This main table is where we can store variables, Editor properties, and other engine information. For example, we can add our own property by adding a string variable: MyLuaEntity = { Properties = { myProperty = "", }, } It is possible to create property categories by adding subtables within the Properties table. This is useful for organizational purposes. With the changes done, you should see the following screenshot when spawning an instance of your class in the Editor, via RollupBar present to the far right of the Editor by default: Common Lua entity callbacks The script system provides a set of callbacks that can be utilized to trigger specific logic on entity events. For example, the OnInit function is called on the entity when it is initialized: function MyEntity:OnInit() end Creating an entity in C# The third-party extension, CryMono allows the creation of entities in .NET, which leads us to demonstrate the capability of creating our very own entity in C#. To start, open the Game/Scripts/Entities directory, and create a new file called MyCSharpEntity.cs. This file will contain our entity code, and will be compiled at runtime when the engine is launched. Now, open the script (MyCSharpEntity.cs) IDE of your choice. We'll be using Visual Studio in order to provide IntelliSense and code highlighting. Once opened, let's create a basic skeleton entity. We'll need to add a reference to the CryENGINE namespace, in which the most common CryENGINE types are stored. using CryEngine; namespace CryGameCode { [Entity] public class MyCSharpEntity : Entity { } } Now, save the file and start the Editor. Your entity should now appear in RollupBar, inside the Default category. Drag MyEntity into the viewport in order to spawn it: We use the entity attribute ([Entity]) as a way of providing additional information for the entity registration progress, for example, using the Category property will result in using a custom Editor category, instead of Default. [Entity(Category = "Others")] Adding Editor properties Editor properties allow the level designer to supply parameters to the entity, perhaps to indicate the size of a trigger area, or to specify an entity's default health value. In CryMono, this can be done by decorating supported types (have a look at the following code snippet) with the EditorProperty attribute. 
For example, if we want to add a new string property: [EditorProperty] public string MyProperty { get; set; } Now when you start the Editor and drag MyCSharpEntity into the viewport, you should see MyProperty appear in the lower part of RollupBar. The MyProperty string variable in C# will be automatically updated when the user edits this via the Editor. Remember that Editor properties will be saved with the level, allowing the entity to use Editor properties defined by the level designer even in pure game mode. Property folders As with Lua scripts, it is possible for CryMono entities to place Editor properties in folders for organizational purposes. In order to create folders, you can utilize the Folder property of the EditorProperty attribute as shown: [EditorProperty(Folder = "MyCategory")] You now know how to create entities with custom Editor properties using CryMono! This is very useful when creating simple gameplay elements for level designers to place and modify at runtime, without having to reach for the nearest programmer. Creating an entity in C++ Creating an entity in C++ is slightly more complex than making one using Lua or C#, and can be done differently based on what the entity is required for. For this example, we'll be detailing the creation of a custom entity class by implementing IEntityClass. Creating a custom entity class Entity classes are represented by the IEntityClass interface, which we will derive from and register via IEntityClassRegistry::RegisterClass(IEntityClass *pClass). To start off, let's create the header file for our entity class. Right-click on your project in Visual Studio, or any of its filters, and go to Add | New Item in the context menu. When prompted, create your header file ( .h). We'll be calling CMyEntityClass. Now, open the generated MyEntityClass.h header file, and create a new class which derives from IEntityClass: #include <IEntityClass.h> class CMyEntityClass : public IEntityClass { }; Now that we have the class set up, we'll need to implement the pure virtual methods we inherit from IEntityClass in order for our class to compile successfully. For most of the methods, we can simply return a null pointer, zero, or an empty string. However, there are a couple of methods which we have to handle for the class to function: Release(): This is called when the class should be released, should simply perform "delete this;" to destroy the class GetName(): This should return the name of the class GetEditorClassInfo(): This should return the ClassInfo struct, containing Editor category, helper, and icon strings to the Editor SetEditorClassInfo(): This is called when something needs to update the Editor ClassInfo explained just now. IEntityClass is the bare minimum for an entity class, and does not support Editor properties yet (we will cover this a bit further later). To register an entity class, we need to call IEntityClassRegistry::RegisterClass. This has to be done prior to the IGameFramework::CompleteInit call in CGameStartup. 
We'll be doing it inside GameFactory.cpp, in the InitGameFactory function: IEntityClassRegistry::SEntityClassDesc classDesc; classDesc.sName = "MyEntityClass"; classDesc.editorClassInfo.sCategory = "MyCategory"; IEntitySystem *pEntitySystem = gEnv->pEntitySystem; IEntityClassRegistry *pClassRegistry = pEntitySystem- >GetClassRegistry(); bool result = pClassRegistry->RegisterClass(new CMyEntityClass(classDesc)); Implementing a property handler In order to handle Editor properties, we'll have to extend our IEntityClass implementation with a new implementation of IEntityPropertyHandler. The property handler is responsible for handling the setting, getting, and serialization of properties. Start by creating a new header file named MyEntityPropertyHandler.h. Following is the bare minimum implementation of IEntityPropertyHandler. In order to properly support properties, you'll need to implement SetProperty and GetProperty, as well as LoadEntityXMLProperties (the latter being required to read property values from the Level XML). Then create a new class which derives from IEntityPropertyHandler: class CMyEntityPropertyHandler : public IEntityPropertyHandler { }; In order for the new class to compile, you'll need to implement the pure virtual methods defined in IEntityPropertyHandler. Methods crucial for the property handler to work properly can be seen as shown: LoadEntityXMLProperties: This is called by the Launcher when a level is being loaded, in order to read property values of entities saved by the Editor GetPropertyCount: This should return the number of properties registered with the class GetPropertyInfo: This is called to get the property information at the specified index, most importantly when the Editor gets the available properties SetProperty: This is called to set the property value for an entity GetProperty: This is called to get the property value of an entity GetDefaultProperty: This is called to retrieve the default property value at the specified index To make use of the new property handler, create an instance of it (passing the requested properties to its constructor) and return the newly created handler inside IEntityClass::GetPropertyHandler(). We now have a basic entity class implementation, which can be easily extended to support Editor properties. This implementation is very extensible, and can be used for vast amount of purposes, for example, the C# script seen later has simply automated this process, lifting the responsibility of so much code from the programmer. Entity flownodes You may have noticed that when right-clicking inside a graph, one of the context options is Add Selected Entity. This functionality allows you to select an entity inside a level, and then add its entity flownode to the flowgraph. By default, the entity flownode doesn't contain any ports, and will therefore be mostly useless as shown to the right. However, we can easily create our own entity flownode that targets the entity we selected in all three languages. Creating an entity flownode in Lua By extending the entity we created in the Creating an entity using Lua section, we can add its very own entity flownode: function MyLuaEntity:Event_OnBooleanPort() BroadcastEvent(self, "MyBooleanOutput"); end MyLuaEntity.FlowEvents = { Inputs = { MyBooleanPort = { MyLuaEntity.Event_OnBooleanPort, "bool" }, }, Outputs = { MyBooleanOutput = "bool", }, } We just created an entity flownode for our MyLuaEntity class. 
If you start the Editor, spawn your entity, select it and then click on Add Selected Entity in your flowgraph, you should see the node appearing. Creating an entity flownode using C# Creating an entity flownode in C# is very simple due to being almost exactly identical in implementation as the regular flownodes. To create a new flownode for your entity, simply derive from EntityFlowNode, where T is your entity class name: using CryEngine.Flowgraph; public class MyEntity : Entity { } public class MyEntityNode : EntityFlowNode { [Port] public void Vec3Test(Vec3 input) { } [Port] public void FloatTest(float input) { } [Port] public void VoidTest() { } [Port] OutputPort BoolOutput { get; set; } } We just created an entity flownode in C#. This allows us to utilize TargetEntity in our new node's logic. Creating an entity flownode in C++ In short, entity flownodes are identical in implementation to regular nodes. The difference being the way the node is registered, as well as the prerequisite for the entity to support TargetEntity. Registering the entity node We utilize same methods for registering entity nodes as before, the only difference being that the category has to be entity, and the node name has to be the same as the entity it belongs to: REGISTER_FLOW_NODE("entity:MyCppEntity", CMyEntityFlowNode); The final code Finally, from what we've learned now, we can easily create our first entity flownode in C++: #include "stdafx.h" #include "Nodes/G2FlowBaseNode.h" class CMyEntityFlowNode : public CFlowBaseNode { enum EInput { EIP_InputPort, }; enum EOutput { EOP_OutputPort }; public: CMyEntityFlowNode(SActivationInfo *pActInfo) { } virtual IFlowNodePtr Clone(SActivationInfo *pActInfo) { return new CMyEntityFlowNode(pActInfo); } virtual void ProcessEvent(EFlowEvent evt, SActivationInfo *pActInfo) { } virtual void GetConfiguration(SFlowNodeConfig &config) { static const SInputPortConfig inputs[] = { InputPortConfig_Void("Input", "Our first input port"), {0} }; static const SOutputPortConfig outputs[] = { OutputPortConfig_Void("Output", "Our first output port"), {0} }; config.pInputPorts = inputs; config.pOutputPorts = outputs; config.sDescription = _HELP("Entity flow node sample"); config.nFlags |= EFLN_TARGET_ENTITY; } virtual void GetMemoryUsage(ICrySizer *s) const { s->Add(*this); } }; REGISTER_FLOW_NODE("entity:MyCppEntity", CMyEntityFlowNode); Game objects As mentioned at the start of the article, game objects are used when more advanced functionality is required of an entity, for example, if an entity needs to be bound to the network. There are two ways of implementing game objects, one being by registering the entity directly via IGameObjectSystem::RegisterExtension (and thereby having the game object automatically created on entity spawn), and the other is by utilizing the IGameObjectSystem::CreateGameObjectForEntity method to create a game object for an entity at runtime. Game object extensions It is possible to extend game objects by creating extensions, allowing the developer to hook into a number of entity and game object callbacks. This is, for example, how actors are implemented by default. We will be creating our game object extension in C++. The CryMono entity we created earlier in the article was made possible by a custom game object extension contained in CryMono.dll, and it is currently not possible to create further extensions via C# or Lua. 
Creating a game object extension in C++ CryENGINE provides a helper class template for creating a game object extension, called CGameObjectExtensionHelper. This helper class is used to avoid duplicating common code that is necessary for most game object extensions, for example, basic RMI functionality. To properly implement IGameObjectExtension, simply derive from the CGameObjectExtensionHelper template, specifying the first template argument as the class you're writing (in our case, CMyEntityExtension) and the second as IGameObjectExtension you're looking to derive from. Normally, the second argument is IGameObjectExtension, but it can be different for specific implementations such as IActor (which in turn derives from IGameObjectExtension). class CMyGameObjectExtension : public CGameObjectExtensionHelper<CMyGameObjectExtension, IGameObjectExtension> { }; Now that you've derived from IGameObjectExtension, you'll need to implement all its pure virtual methods to spare yourself from a bunch of unresolved externals. Most can be overridden with empty methods that return nothing or false, while more important ones have been listed as shown: Init: This is called to initialize the extension. Simply performSetGameObject(pGameObject); and then return true. NetSerialize: This is called to serialize things over the network. You'll also need to implement IGameObjectExtensionCreatorBase in a new class that will serve as an extension factory for your entity. When the extension is about to be activated, our factory's Create() method will be called in order to obtain the new extension instance: struct SMyGameObjectExtensionCreator : public IGameObjectExtensionCreatorBase { virtual IGameObjectExtension *Create() { return new CMyGameObjectExtension(); } virtual void GetGameObjectExtensionRMIData(void **ppRMI, size_t *nCount) { return CMyGameObjectExtension::GetGameObjectExtensionRMIData (ppRMI, nCount); } }; Now that you've created both your game object extension implementation, as well as the game object creator, simply register the extension: static SMyGameObjectExtensionCreator creator; gEnv->pGameFramework->GetIGameObjectSystem()- >RegisterExtension("MyGameObjectExtension", &creator, myEntityClassDesc); By passing the entity class description to IGameObjectSystem::RegisterExtension, you're telling it to create a dummy entity class for you. If you have already done so, simply pass the last parameter pEntityCls as NULL to make it use the class you registered before. Activating our extension In order to activate your game object extension, you'll need to call IGameObject::ActivateExtension after the entity is spawned. One way to do this is using the entity system sink, IEntitySystemSink, and listening to the OnSpawn events. We've now registered our own game object extension. When the entity is spawned, our entity system sink's OnSpawn method will be called, allowing us to create an instance of our game object extension. Summary In this article, we have learned how the core entity system is implemented and exposed and created our own custom entity. You should now be aware of the process of creating accompanying flownodes for your entities, and be aware of the working knowledge surrounding game objects and their extensions. If you want to get more familiar with the entity system, you can try and create a slightly more complex entity on your own. 
Resources for Article: Further resources on this subject: CryENGINE 3: Breaking Ground with Sandbox [Article] CryENGINE 3: Fun Physics [Article] How to Create a New Vehicle in CryENGINE 3 [Article]

Mobile Game Design

Packt
15 Nov 2013
12 min read
(For more resources related to this topic, see here.) The basic game design process The game design process shares many stages with any type of software design; identify what you want the game to do, define how it does it, find someone to program it, then test/ fix the hell out of it until it does what you expect it to do. Let's discuss these stages in a bit more detail. Find an idea. Unless you are one of the lucky few who start with an idea, sitting there staring at a blank piece of paper trying to force an idea out of your blank slate of a brain, may feel like trying to give birth when you're not pregnant: lots of effort with no payoff. Getting the right idea can be the hardest part of the entire design process and it usually takes several brainstorming sessions to achieve a good gameplay idea. In case you get stuck and feel like you're pondering too much, we suggest you to stop trying to be creative; go for a walk, watch a movie, read a book, or play a (gasp!) video game! Give the subconscious mind some space to percolate something cool up to the surface. Rough concept document: Once you have an idea for a game firmly embedded in your consciousness, it's time to write it down. This sounds simple and at this stage it should be. Write down the highlights of your idea; what is/are the fun parts, how does one win, what gets in the way of winning, how the player overcomes their obstacles to winning, and who you imagine would like to play this game. Storyboarding: The best way to test an idea is, well, to test it! Use pen and paper to create storyboards of your game and try to play it out on paper. These can save a lot of (expensive) programming time by eliminating unsuccessful ideas early and by working through interface organization on the cheap. The goal of storyboarding is to get something on paper that at least somewhat resembles the game you imagine in your head and it can go from very basic sketches, also called wire-frames, to detail schematics in Azure. Either way you should try to capture as many elements in the sketch as possible. The following figure represents the sketch of the double jump mechanic for a mobile platform made by one of the authors: Once you have concrete proof that your idea is good, invest some time and resources to create a playable demo that focuses on the action(s) the player will do most during the gameplay. It should have nothing extra such as fancy graphics and sound effects. It should include any pertinent actions that rely on the action in question and vice versa, for example if a previous action contributes to the action being tested, include it in the prototype. The question the prototype should answer is: do I still like my initial idea? While prototyping, it is acceptable to use existing assets scavenged from the net, other projects, and so on. Just be aware of the subtle risks of having the project become inadvertently associated with those assets, especially if they are high quality. For example, one of the authors was working on a simple (but clever!) real-time strategy game for Game Boy Advance. It was decided to add on a storyline to support the gameplay, which included a cast of characters. Instead of immediately creating original art for these characters, the team used the art from a defunct epic RPG project. 
The problem was that the quality of this placeholder art was so high (done by a world class fantasy/sci-fi artist) that when it was time to do final art for the game, the art the in-house artist did just wasn't up to the team's expectations. And the project didn't have enough money in the budget to hire the world-renowned artist to do the art for it. So both the team and the client (Nintendo) felt like the art was second rate, even though it was appropriate for the game being made. The project was later cancelled, but not necessarily due to the art. The following screenshot shows an adventure title prototype made by one of the authors with GameMaker studio by using assets taken from the Zelda saga: Test it once you have a working prototype, it is time to submit your idea to the public. Get a variety of people in to test your game like crazy. Include team members, former testers (if any), and fresh testers. Have people play often and get initial reactions as well as studied responses and collect all the data you can. Fix the issues that emerge from those testing sessions and be ready to discard anything that doesn't really fit the gameplay experience you had in mind. This can be a tough decision, especially for an element that the designer/design team have grown attached to. A good rule of thumb is if this element is on its third go around on being fixed; cut it if it doesn't pass. By then it is taking up too much of the project's resources. Refine the design document as implemented features pass the tests and the test, fix, or discard cycle is repeated on all the main features of your games, take the changes that were implemented during prototyping and update the design document to reflect them. By the end of this process, you will have a design document, a document that will be what you built for your final product. You can read an interesting article on Gamasutra about the layout of one such document, intended for a mobile team of developers at http://www.gamasutra.com/blogs/JasonBakker/20090604/84211/A_GDD_Template_for_the_Indie_Developer.php. Please note that this does not mean there won't be more changes! Hopefully it means there won't be any major changes, but be prepared for plenty of minor ones. End the preproduction once you have a clear idea of what your gameplay will be and a detailed document about what needs to be done, it is time to approach game production by creating the programming, graphics, audio, and interface of your game. As one works towards realization of the final product, continue using the evaluation procedures implemented during the prototyping process. Continually ask "is this fun for my target audience?" and don't fall into the trap of "well that's how I've always done that". Constantly question the design, and/or its implementation. If it's fun, leave it alone. If not, change it, no matter how late it is in the development process. Remember, you only have one chance to make a good first impression. When is the design really done? By now you have reached the realization that a project is never complete, you're simply done with it. No doubt you have many things you'd like to change, remove, or add but you've run out of time, money, or both. Make sure all those good ideas are recorded somewhere. It is a good idea to gather the team after release, and over snacks and refreshments capture what the team members would change. This is good for team morale as well as a good practice to follow. 
Mobile design constraints There are a few less obvious design considerations, based on the player's play behavior with a mobile device. What are the circumstances that players use mobile devices to play games? Usually they are waiting for something else to happen: waiting to board the bus, waiting to get off the bus, waiting in line, waiting in the waiting room, and so on. This affects several aspects of game design, as we will show in the following sections. Play time The most obvious design limitation is play time. The player should have a satisfying play experience in three minutes or less. A satisfying play experience usually means accomplishing a goal within the context of the game. A good point of reference is reaching a save game point. If save game points are placed about two and a half minutes of an average game player's ability apart, the average game player will never lose more than a couple of minutes of progress. For example, let's say an average player is waiting for a bus. She plays for three minutes and hits a save game point. The bus comes one minute later so the player stops playing and loses one minute of game progress (assuming there is no pause feature). Game depth Generally speaking, mobile games tend not to have much longevity, when compared to titles, such as Dragon Age or Fallout 3. There are several reasons for this, the most obvious one being the (usually) simple mechanics mobile games are built around. We don't mean that players cannot play Fruit Ninja or Angry Birds for a total of 60 hours or so, but it's not very likely that the average casual player will spend even 10 hours to unfold the story that may be told in a mobile game. At five hours of total gameplay, the player must in fact complete 120 two and a half minute save games. At 50 hours of the total gameplay, the player must complete 1200 two and a half minute save games. Are you sure your gameplay is sustainable over 1200 save game points? Mobile environment Mobile games are frequently played outdoors, in crowded, noisy, and even "shifting" or "scuffling" environments. Such factors must be considered while designing a mobile game. Does direct sunlight prevent players from understanding what's happening on the screen? Does a barking dog prevent the players from listening to important game instructions? Does the gameplay require control finesse and pixel precision to perform actions? If the answer to any of these questions is yes, you should iterate a little more around your design because these are all factors which could sink the success of your product. Smartphones Smartphones are still phones, after all. It is thus necessary that mobile games can handle unexpected events, which may occur while playing on your phone: incoming calls and messages, automatic updates, automatic power management utilities that activate alarms. You surely don't want your players to lose their progress due to an incoming call. Pause and auto-save features are thus mandatory design requirements of any successful mobile game. Single player versus multiplayer Multiplayer is generally much more fun than single player, no question. But how can you set up a multiplayer game in a two and half minute window? For popular multiplayer titles it is possible. Thanks to a turn-based, asynchronous play model where one player submits a move in the two and half minute window and then responds to the player's move. 
Very popular titles such as Ruzzle, Hero Academy, and Skulls of the Shogun use this kind of game system, but keep in mind that supporting asynchronous gameplay requires servers, which cost money, and complex networking routines that must be programmed. Are these extra difficulties worth their costs?
The mobile market
The success of any commercial project cannot arise in isolation from its reference market, and mobile games are no exception. We, the authors, believe that if you are reading this article, you are aware that the mobile market is evolving rapidly. The Newzoo market research trend report for the games industry for 2012 states that there are more than 500 million mobile gamers in the world, that around 175 million of them pay for games, and that the mobile market was worth 9 billion dollars in 2012 (source: http://www.newzoo.com/insights/placing-mobile-games-in-perspective-of-the-total-games-market-free-mobile-trend-report/). The following screenshot shows the 2012 mobile gaming market numbers reported by Newzoo:
As Juniper Research, a market intelligence firm, states, "smartphones and tablets are going to be primary devices for gamers to make in-app purchases in the future. Juniper projects 64.1 billion downloads of game apps to mobile devices in 2017, compared to the 21 billion downloaded in 2012." (source: http://www.gamesindustry.biz/articles/2013-04-30-mobile-to-be-primary-hardware-for-gaming-by-2016). Even handheld consoles, such as Nintendo's 3DS and Sony's PS Vita, are suffering from competition from mobile phones and tablets, thanks to improvements in mobile hardware and the quality of mobile games. With regard to market share, a study by Strategy Analytics (source: http://www.strategyanalytics.com/default.aspx?mod=reportabstractviewer&a0=8437) shows that Android was the leading platform in Q1 2013, with 64 percent of all handset sales, Japan being the only market where iOS was in the lead; though, as Apple is fond of pointing out, iOS users generally spend more money than Android users. All the data tells us that the positive growth trend in mobile devices will continue for several years, and that with almost one billion mobile devices in the world, the mobile market cannot be ignored by game developers. Android is growing faster than Apple, but Apple is still the most lucrative market for mobile apps and games. Microsoft phones and tablets, on the other hand, haven't shown growth comparable to that of iOS and Android. So the question is: how can an indie team enter this market and have a chance of success?
Summary
In this article, we discussed best practices for designing mobile games that have a chance to stand out in the highly competitive mobile market. We covered the factors that come into play when designing games for the mobile platform, from hardware and design limitations to the characteristics of the mobile market and the most successful mobile business models.
Resources for Article:
Further resources on this subject:
Unity Game Development: Welcome to the 3D world [Article]
So, what is Spring for Android? [Article]
Interface Designing for Games in iOS [Article]

Installing Gideros

Packt
08 Nov 2013
8 min read
(For more resources related to this topic, see here.)
About Gideros
Gideros is a set of software packages created and managed by a company named Gideros Mobile. It provides developers with the ability to create 2D games for multiple platforms by reusing the same code. Games created with Gideros run as native applications, thus having all the benefits of high performance and the utilization of the hardware power of a mobile device. Gideros uses Lua as its programming language, a lightweight scripting language with an easy learning curve that is quite popular in game development. A few of the greatest Gideros features are as follows:
Rapid prototyping and fast development time: single-click on-device testing lets you compile and run your game from your computer on a device in an instant
A clean object-oriented approach that enables you to write clean and reusable code
Additionally, Gideros is not limited to its provided API and can be extended to offer virtually any native platform feature through its plugin system
You can use all of these to create and even publish your game for free, if you don't mind a small Gideros splash screen being shown before your game starts
Installing Gideros
Currently, Gideros has no registration requirements for downloading its SDK, so you can simply navigate to the download page (http://giderosmobile.com/download) and download the version that is suitable for your operating system. Gideros can be used on Linux only through the WINE compatibility layer, which means that even on Linux you have to download the Windows version of Gideros. So, to sum it up:
Download the Windows version for Windows and Linux
Download the Mac version for OS X
Gideros consists of multiple programs that together provide the basic package needed to develop your own mobile games. This software package includes the following:
Gideros Studio: A lightweight IDE to manage Gideros projects
Gideros Player: A fast and lightweight desktop, iOS, and Android player that runs your app with one click when testing
Gideros Texture Packer: Used to pack multiple textures into one texture for faster texture rendering
Gideros Font Creator: Used to create bitmap fonts from different font formats for faster font rendering
Gideros License Manager: Used to license your downloaded copy of Gideros before exporting a project (required even for free accounts)
An offline copy of the Gideros documentation and API reference to get you started
Creating your first project
After you have downloaded and installed Gideros, you can try to create your first Gideros project. Although Gideros is IDE-independent, and a lot of other IDEs such as Lua Glider, ZeroBrane, IntelliJ IDEA, and even Sublime Text can support Gideros, I would recommend that first-time users choose the provided Gideros Studio. That is what we will be using in this article.
Trying out Gideros Studio
You should note that I will be using the Windows version for screenshots and explanations, but Gideros Studio on other operating systems is quite similar, if not exactly the same. Therefore, it should not cause any confusion if you are using another version of Gideros. When you open Gideros Studio, you will see a lot of different sections, or what we will call panes.
The largest pane will be the Start Page, which provides you with the following options:
Create New Project
Access the Getting Started guide offline
Access the Reference Manual offline
Browse and try out the Gideros example projects
Go ahead and click on Create New Project; a New Project dialog will open. Now enter the name of your project, for example, New Project. Change the location of the project if you want to, or leave it set to the default value, and click on OK when you are ready. Note that the Start Page is automatically closed and the space occupied by the Start Page is now free. This will be your coding pane, where all the code will be displayed. But first let's turn our attention to the Project pane, where you can see your chosen project name. In this pane, you will manage all the files used by your app. One important thing to note is that the file/folder structure in the Gideros Project pane is completely independent of your filesystem. This means that you will have to add files manually to the Gideros Studio Project pane. They won't show up automatically when you copy them into the project folder. And in your filesystem, files and folders may be organized completely differently from those in Gideros Studio. This feature gives you the flexibility of managing multiple projects with the same code or asset base. When you, for example, want to include specific things in the iOS version of the game that Android won't have, you can create two different projects in the same project directory, which can reuse the same files while each having their own independent, platform-specific files. So let's see how it actually works. Right-click on your project name inside the Project pane and select Add New File.... This will pop up the Add New File dialog. As in many Lua development environments, an application should start with a main.lua file, so name your file main.lua and click on OK. You will now see that main.lua was added to your Project pane. And if you check the directory of your project in your filesystem, you will see that it also contains the main.lua file. Now double-click on main.lua inside the Project pane and it will open this file inside the code pane, where you can write code for it. So let's try it out. Write a simple line of code:

print("Hello world!")

What this line does is simply print the provided string (Hello world!) to the output console. Now save the project by either using the File menu or the diskette icon on the toolbar, and let's run this project on a local desktop player.
Using the Gideros desktop player
To run our app, we first need to launch Gideros Player by clicking on the small joystick icon on the toolbar. This will open up the Gideros desktop player. The default screen of Gideros Player shows the current version of Gideros used and the IP address the player is bound to. Additionally, the desktop player provides different customizations:
You can make it appear on top of every window by navigating to View | Always on Top.
You can change the zoom by navigating to View | Zoom. This is helpful when running the player at high resolutions, which might not fit the screen.
You can select the orientation (portrait or landscape) of the player by navigating to Hardware | Orientation, to suit the needs of your app.
You can choose the resolution you want to test your app in by navigating to Hardware | Resolution. It provides the most popular resolution templates to choose from.
You can also set the frame rate of your app by navigating to Hardware | Frame Rate.
The resolution selected in the Gideros Player settings should correspond to the physical device you want to test your application on. All these options give you the flexibility to test your app across different device configurations from within one single desktop player. Now that the player is launched, you should see that the start and stop buttons of Gideros Studio are enabled. To run your project, all you need to do is click on the start button. You might need to launch Gideros Player and Gideros Studio with proper permissions and even add them to your antivirus or firewall's exceptions list to allow them to connect. The IP address and Gideros version of the player should disappear and you should see only a white screen. That is because we did not actually display any graphical object such as an image. What we did do was print some information to the console, so let's check the Output pane in Gideros Studio. As you can see in the Output pane, there are some informational messages, such as the fact that main.lua was uploaded and that the upload to Gideros Player finished successfully; it also displays any text we pass to the Lua print command, which in our case was Hello world!. The Output pane is very handy for simple debugging by printing out information with the print command. It also provides error information if something is wrong with the project and it cannot be built. Now that we know what the Output pane is, let's actually display something on the player's screen.
Summary
In this article, you've learned the basics of Gideros Studio: installing Gideros on your machine, creating your first project, using the Gideros Player, and trying out your first project.
Resources for Article:
Further resources on this subject:
Getting Started with PlayStation Mobile [Article]
Getting Started with Marmalade [Article]
Getting Started with GameSalad [Article]

Major SDK components

Packt
24 Oct 2013
11 min read
(For more resources related to this topic, see here.)
Controller
The Leap::Controller class is a liaison between the controller and your code. Whenever you wish to do anything at all with the device you must first go through your controller. From a controller instance we can interact with the device configuration, detected displays, current and past frames, and set up event handling with our listener subclass.
Config
An instance of the Config class can be obtained from a controller. It provides a key/value interface to modify the operation of the Leap device and driver behavior. Some of the options available are:
Robust mode: Somewhat slower frame processing, but works better with less light.
Low resource mode: Less accurate and responsive tracking, but uses less CPU and USB bandwidth.
Tracking priority: Can prioritize either the precision of tracking data or the rate at which data is sampled (resulting in approximately a 4x data frame-rate boost), or a balance between the two (approximately 2x faster than the precise mode).
Flip tracking: Allows you to use the controller with the USB cable coming out of either side. This setting simply flips the positive and negative coordinates on the X-axis.
Screen
A controller may have one or more calibratedScreens: computer displays in the field of view of the controller that have a known position and dimensions. Given a pointable direction and a screen, we can determine what the user is pointing at.
Math
Several math-related functions and types such as Leap::Vector, Leap::Matrix, and Leap::FloatArray are provided by LeapMath.h. All points in space, screen coordinates, directions, and normals are returned by the API as three-element vectors representing X, Y, and Z coordinates or unit vectors.
Frame
The real juicy information is stored inside each Frame. A Frame instance represents a point in time in which the driver was able to generate an updated view of its world and detect where screens, your hands, and pointables are.
Hand
At present the only body parts you can use with the controller are your hands. Given a frame instance, we can inspect the number of hands in the frame, their position and rotation, normal vectors, and gestures. The hand motion API allows you to compare two frames and determine whether the user has performed a translation, rotation, or scaling gesture with their hands in that interval. The methods we can call to check for these interactions are:
Leap::Hand::translation(sinceFrame): Translation (also known as movement) returned as a Leap::Vector, including the direction of the movement of the hand and the distance travelled in millimeters.
Leap::Hand::rotationMatrix(sinceFrame), ::rotationAxis(sinceFrame), ::rotationAngle(sinceFrame, axisVector): Hand rotation, described either as a rotation matrix, a vector around an axis, or a float angle around a vector between -π and π radians (that's -180° to 180° for those of you who are a little rusty with your trigonometry).
Leap::Hand::scaleFactor(sinceFrame): Scaling based on the distance between two hands. If the hands are closer together in the current frame compared to sinceFrame, the return value will be less than 1.0 but greater than 0.0. If the hands are further apart, the return value will be greater than 1.0 to indicate the factor by which the distance has increased.
Pointable
A Hand can also contain information about Pointable objects that were recognized in the frame as being attached to the hand.
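Before we look at pointables in more detail, here is a minimal sketch of how these pieces might fit together. It is not taken from the article's own listings; it assumes the Leap SDK headers are available and that a connected Leap::Controller instance named controller exists, just as the later examples do. The sketch compares the current frame with one captured a few frames earlier using the hand motion API, and then counts the pointables attached to the first hand:

const Leap::Frame current = controller.frame();     // the most recent frame
const Leap::Frame earlier = controller.frame(10);   // the frame captured ten updates ago

if (!current.hands().empty()) {
    const Leap::Hand hand = current.hands()[0];

    // Millimeters travelled since the earlier frame, as an (x, y, z) vector
    const Leap::Vector moved = hand.translation(earlier);

    // Rotation in radians (between -pi and pi) since the earlier frame
    const float angle = hand.rotationAngle(earlier);

    // Greater than 1.0 if the hands have moved apart, less than 1.0 if closer together
    const float scale = hand.scaleFactor(earlier);

    std::cout << "Moved " << moved.magnitude() << " mm, rotated " << angle
              << " rad, scale factor " << scale
              << ", pointables on this hand: " << hand.pointables().count()
              << std::endl;
}

Passing a history index to controller.frame() is what provides the earlier sinceFrame to compare against; the controller keeps a short rolling history of recent frames for exactly this purpose.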
A distinction is made between the two subclasses of pointable objects: Tool, which can be any slender, long object such as a chopstick or a pencil, and Finger, whose meaning should be apparent. You can request either fingers or tools from a Hand, or a list of pointables to get both if you don't care.
Finger positioning
Suppose we want to know where a user's fingertips are in space. Here's a short snippet of code to output the spatial coordinates of the tips of the fingers on a hand that is being tracked by the controller:

if (frame.hands().empty()) return;
const Leap::Hand firstHand = frame.hands()[0];
const Leap::FingerList fingers = firstHand.fingers();

Here we obtain a list of the fingers on the first hand of the frame. For an enjoyable diversion, let's output the locations of the fingertips, given in the Leap coordinate system:

for (int i = 0; i < fingers.count(); i++) {
    const Leap::Finger finger = fingers[i];
    std::cout << "Detected finger " << i << " at position ("
              << finger.tipPosition().x << ", "
              << finger.tipPosition().y << ", "
              << finger.tipPosition().z << ")" << std::endl;
}

This demonstrates how to get the position of the fingertips of the first hand that is recognized in the current frame. If you hold three fingers out, the following dazzling output is printed:

Detected finger 0 at position (-119.867, 213.155, -65.763)
Detected finger 1 at position (-90.5347, 208.877, -61.1673)
Detected finger 2 at position (-142.919, 211.565, -48.6942)

While this is clearly totally awesome, the exact meaning of these numbers may not be immediately apparent. For points in space returned by the SDK, the Leap coordinate system is used. Much like our forefathers believed the Earth to be the cornerstone of our solar system, your Leap device has similar notions of centricity. It measures locations by their distance from the Leap origin, a point centered on the top of the device. Negative X values represent a point in space to the left of the device, positive values are to the right. The Z coordinates work in much the same way, with positive values extending towards the user and negative values in the direction of the display. The Y coordinate is the distance from the top of the device, starting 25 millimeters above it and extending to about 600 millimeters (two feet) upwards. Note that the device cannot see below itself, so all Y coordinates will be positive.
An example of cursor control
By now we are feeling pretty saucy, having diligently run the sample code thus far and controlled our computer in a way never before possible. While there is certain utility and endless amusement afforded by printing out finger coordinates while waving your hands in the air and pretending to be a magician, there are even more exciting applications waiting to be written, so let's continue onwards and upwards.
Until computer-gesture interaction is commonplace, pretending to be a magician while you test the functionality of the Leap SDK is not recommended in public places such as coffee shops.
In some cultures it is considered impolite to point at people. Fortunately your computer doesn't have feelings and won't mind if we use a pointing gesture to move its cursor around (you can even use a customarily offensive finger if you so choose). In order to determine where to move the cursor, we must first locate the position on the display that the user is pointing at. To accomplish this we will make use of the screen calibration and detection API in the SDK.
If you happen to leave your controller near a computer monitor, it will do its best to determine the location and dimensions of the monitor by looking for a large, flat surface in its field of view. In addition, you can use the complementary Leap calibration functionality to improve its accuracy if you are willing to take a couple of minutes to point at various dots on your screen. Note that once you have calibrated your screen, you should ensure that the relative positions of the Leap and the screen do not change. Once your controller has oriented itself within your surroundings, hands, and display, you can ask your trusty controller instance for a list of detected screens:

// get list of detected screens
const Leap::ScreenList screens = controller.calibratedScreens();
// make sure we have a detected screen
if (screens.empty()) return;
const Leap::Screen screen = screens[0];

We now have a screen instance that we can use to find out the physical location in space of the screen as well as its boundaries and resolution. Who cares about all that though, when we can use the SDK to compute where we're pointing with the intersect() method?

// find the first finger or tool
const Leap::Frame frame = controller.frame();
const Leap::HandList hands = frame.hands();
if (hands.empty()) return;
const Leap::PointableList pointables = hands[0].pointables();
if (pointables.empty()) return;
const Leap::Pointable firstPointable = pointables[0];

// get x, y coordinates on the first screen
const Leap::Vector intersection = screen.intersect(
    firstPointable,
    true, // normalize
    1.0f  // clampRatio
);

The vector intersection contains what we want to know here: the pixel pointed at by our pointable. If the pointable argument to intersect() is not actually pointing at the screen, then the return value will be (NaN, NaN, NaN). NaN stands for not a number. We can easily check for the presence of non-finite values in a vector with the isValid() method:

if (!intersection.isValid()) return;

// print intersection coordinates
std::cout << "You are pointing at ("
          << intersection.x << ", "
          << intersection.y << ", "
          << intersection.z << ")" << std::endl;

Prepare to be astounded when you point at the middle of your screen and the transfixing message You are pointing at (0.519522, 0.483496, 0) is revealed. Assuming your screen resolution is larger than one pixel on either side, this output may be somewhat unexpected, so let's talk about what screen.intersect(const Pointable &pointable, bool normalize, float clampRatio=1.0f) is returning. The intersect() method draws an imaginary ray from the tip of pointable extending in the same direction as your finger or tool, and returns a three-element vector containing the coordinates of the point of intersection between the ray and the screen. If the second parameter, normalize, is set to false, then intersect() will return the location in the Leap coordinate system. Since we have no interest in the real world, we have set normalize to true, which causes the coordinates of the returned intersection vector to be fractions of the screen width and height. When intersect() returns normalized coordinates, (0, 0, 0) is considered the bottom-left pixel, and (1, 1, 0) is the top-right pixel. It is worth noting that many computer graphics coordinate systems define the top-left pixel as (0, 0), so use caution when using these coordinates with other libraries.
There is one last (optional) parameter to the intersect() method, clampRatio, which is used to expand or contract the boundaries of the area at which the user can point, should you want to allow pointing beyond the edges of the screen. Now that we have our normalized screen position, we can easily work out the pixel coordinate in the direction of the user's rude gesticulations:

unsigned int x = screen.widthPixels() * intersection.x;
// flip y coordinate to standard top-left origin
unsigned int y = screen.heightPixels() * (1.0f - intersection.y);
std::cout << "You are offending the pixel at ("
          << x << ", " << y << ")" << std::endl;

Since intersection.x and intersection.y are fractions of the screen dimensions, simply multiply by the boundary sizes to get our intersection coordinates on the screen. We'll go ahead and leave out the Z coordinate since it's usually (OK, always) zero. Now for the coup de grace: moving the cursor. Here's how to do it on Mac OS X:

CGPoint destPoint = CGPointMake(x, y);
CGDisplayMoveCursorToPoint(kCGDirectMainDisplay, destPoint);

You will need to #include <CoreGraphics/CoreGraphics.h> and link it (-framework CoreGraphics) to make use of CGDisplayMoveCursorToPoint(). Now all of our hard efforts are rewarded, and we can while away the rest of our days making the cursor zip around with nothing more than a twitch of the finger. At least until our arm gets tired. After a few seconds (or minutes, for the easily amused) it may become apparent that the utility of such an application is severely limited, as we can't actually click on anything. So maybe you shouldn't throw your mouse away just yet, but read on if you are ready to escape from the shackles of such an antiquated input device.
Summary
In this article, we went through the major components of the Leap SDK and saw how they fit together.
Resources for Article:
Further resources on this subject:
Kinect in Motion – An Overview [Article]
Getting started with Kinect for Windows SDK Programming [Article]
Getting Started with Kinect [Article]