
How-To Tutorials - 3D Game Development

115 Articles

Unreal Engine 4.23 releases with major new features like Chaos, Virtual Production, improvement in real-time ray tracing and more

Vincy Davis
09 Sep 2019
5 min read
Last week, Epic released the stable version of Unreal Engine 4.23 with a whopping 192 improvements. The major features include beta offerings like Chaos - Destruction, Multi-Bounce Reflection fallback in Real-Time Ray Tracing, Virtual Texturing, Unreal Insights, HoloLens 2 native support, Niagara improvements, and many more. Unreal Engine 4.23 will no longer support iOS 10, as iOS 11 is now the minimum required version.

What's new in Unreal Engine 4.23?

Chaos - Destruction

Labelled as "Unreal Engine's new high-performance physics and destruction system", Chaos is available in beta for users to attain cinematic-quality visuals in real-time scenes. It also gives artists high-level control over content creation and destruction.

https://youtu.be/fnuWG2I2QCY

Chaos supports many distinct features:

Geometry Collections: A new type of asset in Unreal for short-lived objects. Geometry Collection assets can be built using one or more Static Meshes, giving the artist flexibility in choosing what to simulate and how to organize and author the destruction.

Fracturing: A Geometry Collection can be broken into pieces either individually, or by applying one pattern across multiple pieces using the Fracturing tools.

Clustering: Artists use sub-fracturing to optimize a simulation. Every sub-fracture adds an extra level to the Geometry Collection. The Chaos system keeps track of these extra levels and stores the information in a Cluster, which the artist can control.

Fields: Fields control the simulation and other attributes of a Geometry Collection. They let users vary the mass, make something static, make a corner more breakable than the middle, and more.

Unreal Insights

Currently in beta, Unreal Insights enables developers to collect and analyze data about Unreal Engine's behavior in a unified way. One of its components, the Trace System API, collects information from runtime systems consistently. Another component, the Unreal Insights Tool, supplies interactive visualization of data through the Analysis API. For in-depth details about Unreal Insights and other features, you can also check out the first preview release of Unreal Engine 4.23.

Virtual Production Pipeline Improvements

Unreal Engine 4.23 advances the virtual production pipeline, making it easier to virtually scout environments, compose shots by connecting live broadcast elements with digital representations, and more.

In-Camera VFX: With improvements to In-Camera VFX, users can achieve final shots live on set by combining real-world actors and props with Unreal Engine environment backgrounds.

VR Scouting for Filmmakers: Filmmakers can use the new VR Scouting tools to navigate and interact with the virtual world in VR. Controllers and settings can also be customized in Blueprints, rather than by rebuilding the engine in C++.

Live Link Datatypes and UX Improvements: The Live Link Plugin can be used to drive character animation, cameras, lights, and basic 3D transforms dynamically from other applications and data sources in the production pipeline. Other improvements include the ability to save and load presets for Live Link setups, better status indicators to show the current Live Link sources, and more.

Remote Control over HTTP: Unreal Engine 4.23 users can send commands to Unreal Engine and Unreal Editor remotely over HTTP. This makes it possible to create customized web user interfaces that trigger changes in the project's content.
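To make the idea concrete, here is a minimal C# sketch of a client sending such a remote command over HTTP. The port, route, and JSON payload below are illustrative assumptions made for this article, not the documented Remote Control API, so consult Epic's documentation for the real endpoints and request format.

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

// Hypothetical client that sends a remote command to a running Unreal Editor.
// The route "/remote/object/call" and the payload fields are assumptions made
// for illustration only; check the Remote Control documentation for real ones.
class RemoteControlSketch
{
    static async Task Main()
    {
        using (var client = new HttpClient { BaseAddress = new Uri("http://localhost:30010") })
        {
            // Example payload: ask the editor to call a function on an object.
            string json = "{ \"objectPath\": \"/Game/Maps/Stage.Stage:PersistentLevel.StageLight\"," +
                          "  \"functionName\": \"SetIntensity\"," +
                          "  \"parameters\": { \"NewIntensity\": 5000 } }";

            var content = new StringContent(json, Encoding.UTF8, "application/json");
            HttpResponseMessage response = await client.PutAsync("/remote/object/call", content);

            Console.WriteLine($"Editor responded with {(int)response.StatusCode}");
        }
    }
}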
Read Also: Epic releases Unreal Engine 4.22, focuses on adding "photorealism in real-time environments"

Real-Time Ray Tracing Improvements

Performance and Stability:
Expanded DirectX 12 Support
Improved Denoiser quality
Increased Ray Traced Global Illumination (RTGI) quality

Additional Geometry and Material Support:
Landscape Terrain
Hierarchical Instanced Static Meshes (HISM) and Instanced Static Meshes (ISM)
Procedural Meshes
Transmission with SubSurface Materials
World Position Offset (WPO) support for Landscape and Skeletal Mesh geometries

Multi-Bounce Reflection Fallback

Unreal Engine 4.23 provides improved support for multi-bounce Ray Traced Reflections (RTR) by using Reflection Captures. This will increase the performance of all types of intra-reflections.

Virtual Texturing

The beta version of Virtual Texturing in Unreal Engine 4.23 enables users to create and use large textures for a lower and more constant memory footprint at runtime.

Streaming Virtual Texturing: Streaming Virtual Texturing uses Virtual Texture assets to offer an option to stream textures from disk rather than using the existing Mip-based streaming. It minimizes the texture memory overhead and increases performance when using very large textures.

Runtime Virtual Texturing: Runtime Virtual Texturing provides a Runtime Virtual Texture asset. It can be used to supply shading data over large areas, making it suitable for Landscape shading.

Unreal Engine 4.23 also presents new features like Skin Weight Profiles, Animation Streaming, Dynamic Animation Graphs, Open Sound Control, Sequencer Curve Editor Improvements, and more. As expected, users love the new features in Unreal Engine 4.23, especially Chaos.

https://twitter.com/rista__m/status/1170608746692673537
https://twitter.com/jayakri59101140/status/1169553133518782464
https://twitter.com/NoisestormMusic/status/1169303013149806595

To know about the full updates in Unreal Engine 4.23, users can head over to the Unreal Engine blog.

Other news in Game Development

Japanese Anime studio Khara is switching its primary 3D CG tools to Blender
Following Epic Games, Ubisoft joins Blender Development fund; adopts Blender as its main DCC tool
Epic Games grants Blender $1.2 million in cash to improve the quality of their software development projects


2D Twin-stick Shooter

Packt
11 Nov 2014
21 min read
This article, written by John P. Doran, the author of Unity Game Development Blueprints, teaches us how to use Unity to build a well-formed game. It also gives people experienced in this field a chance to create some great stuff. (For more resources related to this topic, see here.)

The shoot 'em up genre is one of the earliest kinds of games. In shoot 'em ups, the player character is a single entity fighting a large number of enemies. They are typically played with a top-down perspective, which is perfect for 2D games. Shoot 'em up games come in many categories, based upon their design elements. Elements of a shoot 'em up were first seen in the 1961 Spacewar! game, but the concept wasn't popularized until 1978 with Space Invaders. The genre was quite popular throughout the 1980s and 1990s and went in many different directions, including bullet hell games, such as the titles of the Touhou Project. The genre has gone through a resurgence in recent years with games such as Bizarre Creations' Geometry Wars: Retro Evolved, which is more famously known as a twin-stick shooter.

Project overview

Over the course of this article, we will be creating a 2D multidirectional shooter game similar to Geometry Wars. In this game, the player controls a ship. This ship can move around the screen using the keyboard and shoot projectiles in the direction that the mouse points. Enemies and obstacles will spawn toward the player, and the player will avoid/shoot them. This article will also serve as a refresher on a lot of the concepts of working in Unity and give an overview of the recent addition of native 2D tools into Unity.

Your objectives

This project will be split into a number of tasks. It will be a simple step-by-step process from beginning to end. Here is the outline of our tasks:

Setting up the project
Creating our scene
Adding in player movement
Adding in shooting functionality
Creating enemies
Adding GameController to spawn enemy waves
Particle systems
Adding in audio
Adding in points, score, and wave numbers
Publishing the game

Prerequisites

Before we start, we will need to get the latest Unity version, which you can always get by going to http://unity3d.com/unity/download/ and downloading it there. At the time of writing this article, the version is 4.5.3, but this project should work in future versions with minimal changes.

Navigate to the preceding URL, and download the Chapter1.zip package and unzip it. Inside the Chapter1 folder, there are a number of things, including an Assets folder, which will have the art, sound, and font files you'll need for the project, as well as the Chapter_1_Completed.unitypackage (this is the complete article package that includes the entire project for you to work with). I've also added in the complete game exported (TwinstickShooter Exported) as well as the entire project zipped up in the TwinstickShooter Project.zip file.

Setting up the project

At this point, I have assumed that you have Unity freshly installed and have started it up. With Unity started, go to File | New Project. Select a Project Location of your choice somewhere on your hard drive, and ensure you have Setup defaults for set to 2D. Once completed, select Create. At this point, we will not need to import any packages, as we'll be making everything from scratch. It should look like the following screenshot:

From there, if you see the Welcome to Unity pop up, feel free to close it out as we won't be using it.
At this point, you will be brought to the general Unity layout, as follows:

Again, I'm assuming you have some familiarity with Unity before reading this article; if you would like more information on the interface, please visit http://docs.unity3d.com/Documentation/Manual/LearningtheInterface.html.

Keeping your Unity project organized is incredibly important. As your project moves from a small prototype to a full game, more and more files will be introduced to your project. If you don't start organizing from the beginning, you'll keep planning to tidy it up later on, but as deadlines keep coming, things may get quite out of hand. This organization becomes even more vital when you're working as part of a team, especially if your team is telecommuting. Differing project structures across different coders/artists/designers is an awful mess to find yourself in. Setting up a project structure at the start and sticking to it will save you countless minutes of time in the long run and only takes a few seconds, which is what we'll be doing now. Perform the following steps:

Click on the Create drop-down menu below the Project tab in the bottom-left side of the screen.
From there, click on Folder, and you'll notice that a new folder has been created inside your Assets folder.
After the folder is created, you can type in the name for your folder. Once done, press Enter for the folder to be created. We need to create folders for the following directories:

Animations
Prefabs
Scenes
Scripts
Sprites

If you happen to create a folder inside another folder, you can simply drag-and-drop it from the left-hand side toolbar. If you need to rename a folder, simply click on it once and wait, and you'll be able to edit it again. You can also use Ctrl + D to duplicate a folder if it is selected.

Once you're done with the aforementioned steps, your project should look something like this: (a scripted alternative to these folder-creation steps is sketched just below)
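If you would rather automate the folder setup than click through the Create menu, a small editor utility can do it. The following is a minimal sketch using Unity's AssetDatabase API; the menu path and class name are just examples, not part of the book's project.

using UnityEditor;
using UnityEngine;

// Editor-only utility that creates the standard project folders in one click.
// Place this file in an "Editor" folder so Unity compiles it as editor code.
public static class ProjectFolderSetup
{
    [MenuItem("Tools/Create Default Folders")]
    private static void CreateDefaultFolders()
    {
        string[] folders = { "Animations", "Prefabs", "Scenes", "Scripts", "Sprites" };

        foreach (string folder in folders)
        {
            // Only create the folder if it does not exist yet.
            if (!AssetDatabase.IsValidFolder("Assets/" + folder))
            {
                AssetDatabase.CreateFolder("Assets", folder);
            }
        }

        AssetDatabase.Refresh();
        Debug.Log("Default folders created.");
    }
}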
Creating our scene

Now that we have our project set up, let's get started with creating our player:

Double-click on the Sprites folder. Once inside, go to your operating system's browser window, open up the Chapter 1/Assets folder that we provided, and drag the playerShip.png file into the folder to move it into our project.
Once added, confirm that the image is a Sprite by clicking on it and confirming from the Inspector tab that Texture Type is Sprite. If it isn't, simply change it to that, and then click on the Apply button. Have a look at the following screenshot:

If you do not want to drag-and-drop the files, you can also right-click within the folder in the Project Browser (bottom-left corner) and select Import New Asset to select a file from a folder to bring it in. The art assets used for this tutorial were provided by Kenney. To see more of their work, please check out www.kenney.nl.

Next, drag-and-drop the ship into the scene (the center part that's currently dark gray). Once completed, set the position of the sprite to the center of the screen (0, 0) by right-clicking on the Transform component and then selecting Reset Position. Have a look at the following screenshot:

Now, with the player in the world, let's add in a background. Drag-and-drop the background.png file into your Sprites folder. After that, drag-and-drop a copy into the scene.

If you put the background on top of the ship, you'll notice that the background is currently in front of the player (Unity puts newly added objects on top of previously created ones if their position on the Z axis is the same; this is commonly referred to as the z-order), so let's fix that. Objects on the same Z axis without a sorting layer are considered to be equal in terms of draw order; so just because a scene looks a certain way this time, when you reload the level it may look different. The only way to guarantee that an object is drawn in front of another one in 2D space is by giving them different Z values or by using sorting layers.

Select your background object, and go to the Sprite Renderer component in the Inspector tab. Under Sorting Layer, select Add Sorting Layer. After that, click on the + icon for Sorting Layers, and then give Layer 1 a name, Background. Now, create sorting layers for Foreground and GUI. Have a look at the following screenshot:

Now, place the player ship on the Foreground layer and the background on the Background layer by selecting each object once again and then setting the Sorting Layer property via the drop-down menu. Now, if you play the game, you'll see that the ship is in front of the background, as follows:

At this point, we could just duplicate our background a number of times to create our full background by selecting the object in the Hierarchy, but that is tedious and time-consuming. Instead, we can create all of the duplicates by either using code or creating a tileable texture. For our purposes, we'll just create a texture.

Delete the background sprite by left-clicking on the background object in the Hierarchy tab on the left-hand side and then pressing the Delete key. Then select the background sprite in the Project tab, change Texture Type in the Inspector tab to Texture, and click on Apply.
Now let's create a 3D cube by selecting Game Object | Create Other | Cube from the top toolbar. Change the object's name from Cube to Background. In the Transform component, change the Position to (0, 0, 1) and the Scale to (100, 100, 1). If you are using Unity 4.6, you will need to go to Game Object | 3D Object | Cube to create the cube.

Since our camera is at 0, 0, -10 and the player is at 0, 0, 0, putting the object at position 0, 0, 1 will put it behind all of our sprites. By creating a 3D object and scaling it, we are making it really large, much larger than the player's monitor. If we scaled a sprite, it would be one really large image with pixelation, which would look really bad. By using a 3D object, the texture that is applied to the faces of the 3D object is repeated, and since the image is tileable, it looks like one big continuous image.

Remove Box Collider by right-clicking on it and selecting Remove Component.
Next, we will need to create a material for our background to use. To do so, under the Project tab, select Create | Material, and name the material BackgroundMaterial. Under the Shader property, click on the drop-down menu, and select Unlit | Texture. Click on the Texture box on the right-hand side, and select the background texture. Once completed, set the Tiling property's x and y to 25. Have a look at the following screenshot:

In addition to just selecting from the menu, you can also drag-and-drop the background texture directly onto the Texture box, and it will set the property. Tiling tells Unity how many times the image should repeat in the x and y positions, respectively.
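As an aside, both of the Inspector settings we just used, the sorting layer and the material tiling, can also be driven from a script. The following is an illustrative sketch, not part of the book's project; it assumes the Foreground sorting layer already exists and that the background material's shader exposes a main texture.

using UnityEngine;

// Illustrative component: assigns a sorting layer to a sprite and tiles a
// material the same way we configured them in the Inspector above.
public class BackgroundSetup : MonoBehaviour
{
    public SpriteRenderer playerSprite;   // drag the player ship here
    public Renderer backgroundRenderer;   // the scaled cube's renderer

    void Start()
    {
        // Equivalent to picking the layer from the Sorting Layer drop-down.
        playerSprite.sortingLayerName = "Foreground";

        // Equivalent to setting the material's Tiling x/y to 25 in the Inspector.
        backgroundRenderer.material.mainTextureScale = new Vector2(25f, 25f);
    }
}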
Finally, go back to the Background object in the Hierarchy. Under the Mesh Renderer component, open up Materials by left-clicking on the arrow, and change Element 0 to our BackgroundMaterial material. Consider the following screenshot:

Now, when we play the game, you'll see that we have a complete background that tiles properly.

Scripting 101

In Unity, the behavior of game objects is controlled by the different components that are attached to them in a form of association called composition. These components are things that we can add and remove at any time to create much more complex objects. If you want to do anything that isn't already provided by Unity, you'll have to write it on your own through a process we call scripting. Scripting is an essential element in all but the simplest of video games.

Unity allows you to code in C#, Boo, or UnityScript, a language designed specifically for use with Unity and modelled after JavaScript. For this article, we will use C#. C# is an object-oriented programming language—an industry-standard language similar to Java or C++. The majority of plugins from the Asset Store are written in C#, and code written in C# can port to other platforms, such as mobile, with very minimal code changes. C# is also a strongly-typed language, which means that if there is any issue with the code, it will be identified within Unity and will stop you from running the game until it's fixed. This may seem like a hindrance, but when working with code, I very much prefer to write correct code and solve problems before they escalate into something much worse.

Implementing player movement

Now, at this point, we have a great-looking game, but nothing at all happens. Let's change that now using our player. Perform the following steps:

Right-click on the Scripts folder you created earlier, click on Create, and select the C# Script label. Once you click on it, a script will appear in the Scripts folder, and it should already have focus and should be asking you to type a name for the script—call it PlayerBehaviour.
Double-click on the script in Unity, and it will open MonoDevelop, which is an open source integrated development environment (IDE) that is included with your Unity installation.

After MonoDevelop has loaded, you will be presented with the C# stub code that was created automatically for you by Unity when you created the C# script. Let's break down what's currently there before we replace some of it with new code. At the top, you will see two lines:

using UnityEngine;
using System.Collections;

The engine knows that if we refer to a class that isn't located inside this file, then it has to reference the files within these namespaces for the referenced class before giving an error. We are currently using two namespaces. The UnityEngine namespace contains interfaces and class definitions that let MonoDevelop know about all the addressable objects inside Unity. The System.Collections namespace contains interfaces and classes that define various collections of objects, such as lists, queues, bit arrays, hash tables, and dictionaries. We will be using a list, so we will change the second line to the following:

using System.Collections.Generic;

The next line you'll see is:

public class PlayerBehaviour : MonoBehaviour {

You can think of a class as a kind of blueprint for creating a new component type that can be attached to GameObjects, the objects inside our scenes that start out with just a Transform and then have components added to them.
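Composition is easiest to see in code. The following is a small illustrative sketch, separate from the PlayerBehaviour script we are about to write, that queries and adds components on a GameObject at runtime.

using UnityEngine;

// Illustrative only: shows composition in action by adding and querying
// components on a GameObject at runtime.
public class CompositionExample : MonoBehaviour
{
    void Start()
    {
        // Fetch a component that is already attached to this GameObject.
        SpriteRenderer sprite = GetComponent<SpriteRenderer>();
        if (sprite != null)
        {
            Debug.Log("This object is drawn by a SpriteRenderer.");
        }

        // Attach a new component, extending the object's behavior on the fly.
        Rigidbody2D body = gameObject.AddComponent<Rigidbody2D>();
        body.gravityScale = 0f; // top-down game, so no gravity
    }
}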
When Unity created our C# stub code, it took care of defining this class for us; we can see the result, as our file is called PlayerBehaviour and the class is also called PlayerBehaviour. Make sure that your .cs file and the name of the class match, as they must be the same to enable the script component to be attached to a game object.

Next up is the : MonoBehaviour section of the code. The : symbol signifies that we inherit from a particular class; in this case, we'll use MonoBehaviour. All behavior scripts must inherit from MonoBehaviour, directly or indirectly, by being derived from it. Inheritance is the idea of having an object based on another object or class, using the same implementation. With this in mind, all the functions and variables that exist inside the MonoBehaviour class will also exist in the PlayerBehaviour class, because PlayerBehaviour is a MonoBehaviour. For more information on the MonoBehaviour class and all the functions and properties it has, check out http://docs.unity3d.com/ScriptReference/MonoBehaviour.html.

Directly after this line, we will want to add some variables to help us with the project. Variables are pieces of data that we wish to hold on to for one reason or another, typically because they will change over the course of a program, and we will do different things based on their values. Add the following code under the class definition:

// Movement modifier applied to directional movement.
public float playerSpeed = 2.0f;

// What the current speed of our player is
private float currentSpeed = 0.0f;

/*
 * Allows us to have multiple inputs and supports keyboard,
 * joystick, etc.
 */
public List<KeyCode> upButton;
public List<KeyCode> downButton;
public List<KeyCode> leftButton;
public List<KeyCode> rightButton;

// The last movement that we've made
private Vector3 lastMovement = new Vector3();

Between the variable definitions, you will notice comments that explain what each variable is and how we'll use it. To write a comment, you can simply add // to the beginning of a line and everything after that is commented out so that the compiler/interpreter won't see it. If you want to write something that is longer than one line, you can use /* to start a comment, and everything inside will be commented until you write */ to close it. It's always a good idea to do this in your own coding endeavors for anything that doesn't make sense at first glance.

For those of you working on your own projects in teams, there is an additional form of commenting that Unity supports, which may make your life much nicer: XML comments. They take up more space than the comments we are using, but they also document your code for you. For a nice tutorial about that, check out http://unitypatterns.com/xml-comments/.

In our game, the player may want to move up using either the arrow keys or the W key. You may even want to use something else. Rather than restricting the player to just one button, we will store all the possible ways to go up, down, left, or right in their own containers. To do this, we are going to use a list, which is a holder for multiple objects that we can add to or remove from while the game is being played. For more information on lists, check out http://msdn.microsoft.com/en-us/library/6sh2ey19(v=vs.110).aspx.

One of the things you'll notice is the public and private keywords before the variable type. These are access modifiers that dictate who can and cannot use these variables.
The public keyword means that any other class can access that property, while private means that only this class will be able to access this variable. Here, currentSpeed is private because we don't want our current speed to be modified or set anywhere else. But you'll notice something interesting with the public variables that we've created.

Go back into the Unity project and drag-and-drop the PlayerBehaviour script onto the playerShip object. Before going back to the Unity project though, make sure that you save your PlayerBehaviour script. Not saving is a very common mistake made by people working with MonoDevelop. Have a look at the following screenshot:

You'll notice now that the public variables that we created are located inside the Inspector for the component. This means that we can actually set those variables inside the Inspector without having to modify the code, allowing us to tweak values in our code very easily, which is a godsend for many game designers. You may also notice that the names have changed to be more readable. This is because of the naming convention that we are using, with each word starting with a capital letter. This convention is called CamelCase (more specifically, headlessCamelCase).

Now change the Size of each of the Button variables to 2, and fill in the Element 0 value with the appropriate arrow key and Element 1 with W for up, A for left, S for down, and D for right. When this is done, it should look something like the following screenshot:

Now that we have our variables set, go back to MonoDevelop to work on the script some more.

The line after that is a function definition for a method called Start; it isn't a user method but one that belongs to MonoBehaviour. Where variables are data, functions are the things that modify and/or use that data. Functions are self-contained modules of code (enclosed within braces, { and }) that accomplish a certain task. The nice thing about using a function is that once a function is written, it can be used over and over again. Functions can be called from inside other functions:

void Start () {
}

Start is only called once in the lifetime of the behavior, when the game starts, and is typically used to initialize data. If you're used to other programming languages, you may be surprised that initialization of an object is not done using a constructor function. This is because the construction of objects is handled by the editor and does not take place at the start of gameplay as you might expect. If you attempt to define a constructor for a script component, it will interfere with the normal operation of Unity and can cause major problems with the project.

However, for this behavior, we will not need to use the Start function. Perform the following steps:

Delete the Start function and its contents.
The next function that we see included is the Update function. Also inherited from MonoBehaviour, this function is called for every frame that the component exists in and for each object that it's attached to. We want to update our player ship's rotation and movement every frame.
Inside the Update function (between { and }), put the following lines of code:

// Rotate player to face mouse
Rotation();

// Move the player's body
Movement();

Here, I called two functions, but these functions do not exist yet, because we haven't created them. Let's do that now!
Below the Update function and before the closing } of the class, put the following function:

// Will rotate the ship to face the mouse.
void Rotation()
{
    // We need to tell where the mouse is relative to the
    // player
    Vector3 worldPos = Input.mousePosition;
    worldPos = Camera.main.ScreenToWorldPoint(worldPos);

    /*
     * Get the differences from each axis (stands for
     * deltaX and deltaY)
     */
    float dx = this.transform.position.x - worldPos.x;
    float dy = this.transform.position.y - worldPos.y;

    // Get the angle between the two objects
    float angle = Mathf.Atan2(dy, dx) * Mathf.Rad2Deg;

    /*
     * The transform's rotation property uses a Quaternion,
     * so we need to convert the angle into a Vector
     * (The Z axis is for rotation for 2D).
     */
    Quaternion rot = Quaternion.Euler(new Vector3(0, 0, angle + 90));

    // Assign the ship's rotation
    this.transform.rotation = rot;
}

Now if you comment out the Movement line and run the game, you'll notice that the ship will rotate in the direction in which the mouse is. Have a look at the following screenshot:

Below the Rotation function, we now need to add our Movement function with the following code:

// Will move the player based off of keys pressed
void Movement()
{
    // The movement that needs to occur this frame
    Vector3 movement = new Vector3();

    // Check for input
    movement += MoveIfPressed(upButton, Vector3.up);
    movement += MoveIfPressed(downButton, Vector3.down);
    movement += MoveIfPressed(leftButton, Vector3.left);
    movement += MoveIfPressed(rightButton, Vector3.right);

    /*
     * If we pressed multiple buttons, make sure we're only
     * moving the same length.
     */
    movement.Normalize();

    // Check if we pressed anything
    if (movement.magnitude > 0)
    {
        // If we did, move in that direction
        currentSpeed = playerSpeed;
        this.transform.Translate(movement * Time.deltaTime * playerSpeed, Space.World);
        lastMovement = movement;
    }
    else
    {
        // Otherwise, move in the direction we were going
        this.transform.Translate(lastMovement * Time.deltaTime * currentSpeed, Space.World);

        // Slow down over time
        currentSpeed *= .9f;
    }
}

Inside this function I've called another function named MoveIfPressed, so we'll need to add that in as well. Below this function, add in the following:

/*
 * Will return the movement if any of the keys are pressed,
 * otherwise it will return (0,0,0)
 */
Vector3 MoveIfPressed(List<KeyCode> keyList, Vector3 Movement)
{
    // Check each key in our list
    foreach (KeyCode element in keyList)
    {
        if (Input.GetKey(element))
        {
            /*
             * It was pressed so we leave the function
             * with the movement applied.
             */
            return Movement;
        }
    }

    // None of the keys were pressed, so we don't need to move
    return Vector3.zero;
}

Now, save your file and move back into Unity. Save your current scene as Chapter_1.unity by going to File | Save Scene. Make sure to save the scene to the Scenes folder we created earlier. Run the game by pressing the play button. Have a look at the following screenshot:

Now you'll see that we can move using the arrow keys or the W A S D keys, and our ship will rotate to face the mouse. Great!

Summary

This article walked through building a 2D twin-stick shooter game and helps familiarize you with the game development features in Unity.

Resources for Article:

Further resources on this subject:
Components in Unity [article]
Customizing skin with GUISkin [article]
What's Your Input? [article]


Lighting an Outdoor Scene in Blender

Packt
19 Oct 2010
7 min read
Blender 2.5 Lighting and Rendering: Bring your 3D world to life with lighting, compositing, and rendering.

Render spectacular scenes with realistic lighting in any 3D application using interior and exterior lighting techniques
Give an amazing look to 3D scenes by applying light rigs and shadow effects
Apply color effects to your scene by changing the World and Lamp color values
A step-by-step guide with practical examples that help add dimensionality to your scene

Getting the right files

Before we get started, we need a scene to work with. There are three scenes provided for our use—an outdoor scene, an indoor scene, and a hybrid scene that incorporates elements found both inside and outside. All these files can be downloaded from http://www.cgshark.com/lighting-and-rendering/. The file we are going to use for this scene is called exterior.blend. This scene contains a tricycle, which we will light as if it were a product being promoted for a company. To download the files for this tutorial, visit http://www.cgshark.com/lighting-and-rendering/ and select exterior.blend.

Blender render settings

In computer graphics, a two-dimensional image is created from three-dimensional data through a computational process known as rendering. It's important to understand how to customize Blender's internal renderer settings to produce a final result that's optimized for our project, be it a single image or a full-length film. With the settings Blender provides us, we can set frame rates for animation, image quality, image resolution, and many other essential parts needed to produce that optimized final result.

The Scene menu

We can access these render settings through the Scene menu. Here, we can adjust a myriad of settings. For the sake of these projects, we are only going to be concerned with:

Which window Blender will render our image in
How render layers are set up
Image dimensions
Output location and file type

Render settings

The first settings we see when we look at the Scene menu are the Render settings. Here, we can tell Blender to render the current frame or an animation using the render buttons. We can also choose what type of window we want Blender to render our image in using the Display options. The first option (and the one chosen by default) is Full Screen. This renders our image in a window that overlaps the three-dimensional window in our scene. To restore the three-dimensional view, select the Back to Previous button at the top of the window. The next option is the Image Editor that Blender uses both for rendering and for UV editing. This is especially useful when using the Compositor, allowing us to see our result alongside our composite node setup. By default, Blender replaces the three-dimensional window with the Image Editor. The last option is the one that Blender has used, by default, since day one—New Window. This means that Blender will render the image in a newly created window, separate from the rest of the program's interface. For the sake of these projects, we're going to keep this setting at the default—Full Screen.

Dimensions settings

These are some of the most important settings to get right when optimizing our project output. We can set the image size, frame rate, frame range, and aspect ratio of our render.
Luckily for us, Blender provides us with preset render settings common in the film industry:

HDTV 1080P
HDTV 720P
TV NTSC
TV PAL
TV PAL 16:9

Because we want to keep our render times relatively low for our projects, we're going to set our preset dimensions to TV NTSC, which results in an image 720 pixels wide by 480 pixels high. If you're interested in learning more about how the other formats behave, feel free to visit http://en.wikipedia.org/wiki/Display_resolution.

Output settings

These settings are an important factor when determining how we want our final product to be viewed. Blender provides us with numerous image and video types to choose from. When rendering an animation or image sequence, it's always easier to manually set the folder we want Blender to save to. We can tell Blender where we want it to save by establishing the path in the output settings. By default on Macintosh, Blender saves to the /tmp/ folder. Now that we understand how Blender's renderer works, we can start working with our scene!

Establishing a workflow

The key to consistently producing high-quality work is to establish a well-tested and efficient workflow. Everybody's workflow is different, but we are going to follow this series of steps:

Evaluate what the scene we are lighting will require.
Plan how we want to lay out the lamps in our scene.
Set lamp positions, intensities, colors, and shadows, if applicable.
Add materials and textures.
Tweak until we're satisfied.

Evaluating our scene

Before we even begin to approach a computer, we need to think about our scene from a conceptual perspective. This is important, because knowing everything about our scene and the story that's taking place will help us produce a more realistic result. To help kick-start this process, we can ask ourselves a series of questions that will get us thinking about what's happening in our scene. These questions can pertain to an entire array of possibilities and conditions, including:

Weather: What is the weather like on this particular day? What was it like the day before or the day after? Is it cloudy, sunny, or overcast? Did it rain or snow?

Source of light: Where is the light coming from? Is it in front of, to the side, or even behind the object? Remember, light is reflected and refracted until all energy is absorbed; this not only affects the color of the light, but the quality as well. Do we need to add additional light sources to simulate this effect?

Scale of light sources: What is the scale of our light sources in relation to our three-dimensional scene? Believe it or not, this factor carries a lot of weight when it comes to the quality of the final render. If any lights feel out of place, it could potentially affect the believability of the final product.

The goal of these questions is to prove to ourselves that the scene we're lighting has the potential to exist in real life. It's much harder, if not impossible, to light a scene if we don't know how it could possibly act in the real world. Let's take a look at these questions.

What is the weather like?

In our case, we're not concerned with anything too challenging, weather-wise. The goal of this tutorial is to depict our tricycle in an environment that reflects the effects of a sunny, cloudless day. To achieve this, we are going to use lights with blue and yellow hues to simulate the effect the sun and sky will have on our tricycle.

What are the sources of our light and where are they coming from in relation to our scene?
In a real situation, the sun would provide most of the light, so we'll need a key light that simulates how the sun works. In our case, we can use a Sun lamp. The key to positioning light sources within a three-dimensional scene is to find a compromise between achieving the desired mood of the image and effectively illuminating the object being presented.

What is the scale of our light sources?

The sun is rather large, but because of the nature of the Sun lamp in Blender, we don't have to worry about the scale of the lamp in our three-dimensional scene. Sometimes—more commonly when working with indoor scenes, such as the scene we'll approach later—certain light sources need to be of certain sizes in relation to our scene, otherwise the final result will feel unnatural.

Although we will be using a realistic approach to materials, textures, and lighting, we are going to present this scene as a product visualization. This means that we won't explicitly show a ground plane, allowing the viewer to focus on the product being presented, in this case, our tricycle.


Installing Panda3D

Packt
11 Feb 2011
4 min read
Getting started with Panda3D installation packages

The kind folks who produce Panda3D have made it very easy to get Panda3D up and working. You don't need to worry about any compiling, library linking, or other difficult, multi-step processes. The Panda3D website provides executable files that take care of all the work for you. These files even install the version of Python they need to operate correctly, so you don't need to go elsewhere for it.

Time for action - downloading and installing Panda3D

I know what you're thinking: "Less talk, more action!" Here are the step-by-step instructions for installing Panda3D:

Navigate your web browser to www.Panda3D.org. Under the Downloads option, you'll see a link labeled SDK. Click it.
If you are using Windows, scroll down this page and you'll find a section titled Download other versions. Find the link to Panda3D SDK 1.6.2 and click it.
If you aren't using Windows, click on the platform you are using (Mac, Linux, or any other OS). That will take you to a page that has the downloads for that platform. Scroll down to the Download other versions section and find the link to Panda3D SDK 1.6.2, as before.
When the download is complete, run the file and this screen will pop up:
Click Next to continue and then accept the terms. After that, you'll be prompted about where you want to install Panda3D. The default location is just fine. Click the Install button to continue.
Wait for the progress bar to fill up. When it's done, you'll see another prompt. This step really isn't necessary. Just click No and move on.
When you have finished the installation, you can verify that it's working by going to Start Menu | All Programs | Panda3D 1.6.2 | Sample Programs | Ball in Maze | Run Ball in Maze. A window will open, showing the Ball in Maze sample game, where you tilt a maze to make a ball roll around while trying to avoid the holes.

What just happened?

You may be wondering why we skipped a part of the installation in the step where we clicked No. That part of the process caches some of the assets, such as 3D models, that come with Panda3D. Essentially, by spending a few minutes caching these files now, the sample programs that come with Panda3D will load a few seconds faster the first time we run them, that's all.

Now that we've got Panda3D up and running, let's get ourselves an advanced text editor to do our coding in.

Switching to an advanced text editor

The next thing we need is Notepad++. Why, you ask? Well, to code with Python all you really need is a text editor, like the Notepad that comes with Windows XP. After typing your code, you just have to save the file with a .py extension. Notepad itself is kind of dull, though, and it doesn't have many features to make coding easier. Notepad++ is a text editor very similar to Notepad. It can open pretty much any text file and it comes with a pile of features to make coding easier. To highlight some fan favorites, it provides language mark-up, a Find and Replace feature, and file tabs to organize multiple open files. The language mark-up will change the color and fonts of specific parts of your code to help you visually understand and organize it. With Find and Replace, you can easily change a large number of variable names and also quickly and easily update code. File tabbing keeps all of your open code files in one window and makes it easy to switch back and forth between them.


Creating Pseudo-3D Imagery with GIMP: Part 1

Packt
21 Oct 2009
9 min read
In a previous article (Creating Convincing Images with Blender Internal Renderer - Part 1), I discussed creating convincing 3D still images through color manipulation, proper shadowing, minimal lighting, and a bit of post-processing, all using but one application: Blender. This time, the article you're about to read will give us some thoughts on how to mimic a 3D scene with the use of some basic 2D tools. Here again, I would stress that nothing beats a properly planned image, and that applies to all genres you can think of. Some might think it's a waste of precious time to sit down and plan without having a concrete output at the end of the thought process. But believe me, the ideas you planned will be far more powerful and beautiful than the ideas you had while just messing around and playing with the tool directly.

In this article, I won't be teaching you how to paint, since I'm not good at it; rather, I'll be leading you through a series of steps on how to digitally sketch/draw your scenes, give them subtle color shifts, add fake lighting, and apply filter effects to further emulate how 3D does its job. Primarily, this is a guide to how I create my digital drawings (though I admit they're not the best of their kind), but I'm very proud that I eventually gave life to them, from concept stage to digital art stage. It might be a bit daunting at first, but as you go along the series, you'll notice it gets simpler.

Some might get confused as to how this applies to other applications, since we're focusing on The GIMP in this article. That's not a problem at all once you are familiar with your own tool; it will just be a matter of working around the tools and options. I have been using The GIMP for a long time already, and as far as I can remember, I haven't complained about its shortcomings, since those shortcomings are only bits of features which I wouldn't need at all. So to those of you who have been and are using other image editing programs like Adobe Photoshop, Corel, etc., you're welcome to wander around and feel free to interpret some GIMP tools to those of yours. It's all the same after all, just a tad bit of difference in the interface. Just like what Jeremy Birn said in one of his books: "Being an expert user of a 3D program, by itself, does not make the user into an artist more than learning to run a word processor makes someone into a good writer." Additionally, one vital skill you have to develop is the skill of observation, which I myself am yet to master.

Methods Used

Basic Drawing
Selection Addition, Subtraction, Intersection
Gradient Coloring
Color Mixing
Layering
Layer Modes
Layer Management
Using Filters

Requirements

Latest version of The GIMP (download at http://www.gimp.org/downloads)
Basic knowledge of image editing programs with layering capabilities
Patience

Let's Get Started!

I would already assume you have the latest version of GIMP installed on your system and that it is running properly; otherwise, fix the problem or ask for help from the forum (http://www.gimptalk.com). I'm also assuming you have all your previous tasks done before sitting down and going over this article (which I'm pretty much positive you have). And then lastly, be patient.

Sketch it out

The very first thing we're going to do is to sketch our ideas for the image, much like a single panel of a storyboard. It doesn't matter how well you draw it, as long as you understand it yourself and you know what's going on in the drawing.
This time, you can already visualize and create a picture of your final output, and it's great if you did; if not, that's fine still. The important thing is that we have laid down our scene one way or another. You can take your time sketching out your scenes and adding details to them, like how many objects are seen, how many are in focus, what colors they represent, how your characters' facial expressions look, what the size of your image is, and so on. So just in case we forget how it's going to look in the end, we have a reference to call upon, and that is your initial sketch. This way, you'll also be affected by the persistence of vision where, after hours and hours (yay!) of looking at your sketch, you somehow see an afterimage of what you are about to create, and that's a good thing! I'm not good at sketching, so please bear with my drawing:

After this, it's now time to open up The GIMP and begin the actual fun part!

First Run

After executing GIMP, this should (and most likely will) be the initial screen displayed:

The GIMP Initial Screen

We don't want Wilber (GIMP's mascot) to be glaring at us from a blank empty window all the time, do we? Right now we could go ahead and add a canvas to which we'll be adding our aesthetic elements, but before that you might want to inspect your application and tool preferences just to make sure you have set everything right. Activate the window with the menu bar at the top (since we currently have three windows to choose from), and then locate Edit > Preferences, as seen below:

Locating GIMP's Preferences

GIMP Preferences

Everything you see here should be self-explanatory; if it isn't, just leave it for the moment and check the manual later, since I'm pretty sure the thing you didn't understand in the Preferences is something we will not use here. So go ahead and save whatever changes you made; sometimes, GIMP might ask you to restart the application for the changes to take effect, so do what it says and we should be back on the blank canvas shortly after the application restarts. By now, we should have three windows: the main Toolbox window (located on the left), the main Image window (located in the middle), and the Layers window (located on the right). If, by any chance, the Layers window is not there, go ahead and activate the Image window and go to Windows > Layers or press Ctrl + L to bring up the Layers window.

Showing the Layers Window

Creating the Canvas

Now that everything's set up, we'll go ahead and add a properly-sized canvas that we'll paint on, which will be the entire universe for our creation at a later stage. Let's create that now by going to File > New or by pressing Ctrl + N. A window will pop up asking you to edit and confirm the image settings for the canvas you're creating. You can choose from a variety of templates or you can manually input sizes (which we are going to do). Before that, change the unit for coordinate display to inches just so we can have a better visual reference of how big our drawing canvas will be. Then in the Width input box, type 9 (for nine inches), and in the Height input box, type 6 (for six inches). This, however, is a very subjective choice, since you can use any size you prefer; I just chose nine inches by six inches for the purposes of this article. Clicking the Advanced Options drop-down menu will reveal more options for you.
But right now, we won't deal with that; the width and height are sufficient for what we'll need. When you're done setting up the dimensions and settings, click OK to confirm (is there a chance we could change the OK buttons to "Alright" buttons, which sounds, uhmmm, better?).

Creating a New Image

At this moment, we should be seeing a blank canvas with the dimensions that we set a while back. Then, in the right window (the Layers window), you'll notice there's already one layer present, as compared to the default, which is none. So every time we add a new layer (which is very vital), we'll be referencing them over in the Layers window. Since the creation of the layering system in image editors, it has been a blast to organize elements of an image and apply special effects to them as necessary. We can imagine layers as transparent sheets overlaying each other to form one final image; one transparent sheet can have a landscape drawn on it, another sheet contains trees and vegetation, and another sheet (which is above the tree sheet) is our main character. So together, we see a character with trees on a landscape in one image. But as far as traditional layering is concerned, digital layering has been far superior in terms of flexibility and the number of modes we can experiment with.

New Image with Layer

This might be a good time to save our file natively; by that I mean save it in a format that is recognizable only by GIMP and that is lossless, so even if we save it a couple of times, no image compression happens and the image quality is not compromised. However, the native format is only related to GIMP and is not known elsewhere, so uploading such a file to your website will show no image at all because it isn't recognized by the browser. In order to make it generally compatible, we export our image to known formats like JPEG, PNG, GIF, etc., depending on your need. Saving an image file in its native format preserves all the options we have, like selections, paths, layers, layer modes, palettes, and many more. This native format that GIMP uses is known as .XCF, which stands for "eXperimental Computing Facility". Throughout this article, we'll save our files mainly in .xcf format and later on, when our tasks are done and we call our image finished, that's the time we export it to a readable and viewable format. Let's go ahead and save our file by going to File > Save, or by pressing Ctrl + S. This brings up a window where we can type our filename and browse to or create the location for our files. Type whatever filename you wish and append the ".xcf" file extension at the end of the filename, or you can choose "GIMP xcf image" from a list in the lower half of the window.

Saving an Image as XCF


Working with Unity Variables to script powerful Unity 2017 games

Amarabha Banerjee
23 May 2018
12 min read
In this tutorial, you will learn how to work with the different variables available in the Unity 2017 platform. We will show you how to use these variables through use cases in order to script powerful Unity games. This article is an excerpt from the book Learning C# by Developing Games with Unity 2017, written by Micael DaGraca and Greg Lukosek.

Unity, the most popular game engine of our generation, is a preferred choice among game developers due to the flexibility it provides to code and script a game in C#. To understand and leverage the power of C# in your games, it is necessary to get a proper understanding of how C# coding works. We are going to show you exactly that in the sections below.

Writing C# statements properly

When you do normal writing, it's in the form of a sentence, with a period used to end the sentence. When you write a line of code, it's called a statement, with a semicolon used to end the statement. This is necessary because the console reads the code one line at a time, and the semicolon tells the console that the line of code is over so that it can jump to the next line. (This happens so fast that it looks like the computer is reading all of them at the same time, but it isn't.) When we start learning how to code, forgetting this detail is very common, so don't forget to check for it if the code isn't working.

The code for a C# statement does not have to be on a single line, as shown in the following example:

public int number1 = 2;

The statement can be on several lines. Whitespace and carriage returns are ignored, so, if you really want to, you can write it as follows:

public
int
number1
= 2;

However, I do not recommend writing your code like this because it's terrible to read code that is formatted like the preceding code. Nevertheless, there will be times when you'll have to write statements that are longer than one line. Unity won't care. It just needs to see the semicolon at the end.

Understanding component properties in Unity's Inspector

GameObjects have components that make them behave in a certain way. For instance, select Main Camera and look at the Inspector panel. One of the components is the camera. Without that component, it would cease being a camera. It would still be a GameObject in your scene, just no longer a functioning camera.

Variables become component properties

When we refer to components, we are basically referring to the available functions of a GameObject. For example, the human body has many functions, such as talking, moving, and observing. Now let's say that we want the human body to move faster. What is the function linked to that action? Movement. So in order to make our body move faster, we would need to create a script that had access to the movement component and then use that to make the body move faster. Just like in real life, different GameObjects can also have different components; for example, the camera component can only be accessed from a camera. There are plenty of components that already exist that were created by Unity's programmers, but we can also write our own components. This means that all the properties that we see in the Inspector are just variables of some type. They simply store data that will be used by some method.
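To make this concrete, here is a minimal sketch of a component whose public fields become editable properties in the Inspector. The class and field names are examples invented for illustration, not something from the book's project.

using UnityEngine;

// Minimal example component: the public fields below show up as
// editable properties in the Inspector once this script is attached
// to a GameObject.
public class BodyMovement : MonoBehaviour
{
    // Visible in the Inspector and accessible from other scripts.
    public float walkSpeed = 1.5f;
    public float runSpeed = 4.0f;

    void Update()
    {
        // A method using the stored data: move forward at walking speed.
        transform.Translate(Vector3.forward * walkSpeed * Time.deltaTime);
    }
}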
Unity changes script and variable names slightly

When we create a script, one of the first things that we need to do is give a name to the script, and it's always good practice to use a name that identifies the content of the script. For example, if we are creating a script that is used to control the player movement, ideally that would be the name of the script. The best practice is to write playerMovement, where the first word is uncapitalized and the second one is capitalized. This is the standard way Unity developers name scripts and variables. Now let's say that we created a script named playerMovement. After assigning that script to a GameObject, we'll see in the Inspector panel that Unity adds a space to separate the words of the name: Player Movement. Unity makes this modification to variable names too, where, for example, a variable named number1 is shown as Number 1 and number2 as Number 2. Unity capitalizes the first letter as well. These changes improve readability in the Inspector.

Changing a property's value in the Inspector panel

There are two situations where you can modify a property value:

During the Play mode
During the development stage (not in the Play mode)

When you are in the Play mode, you will see that your changes take effect immediately, in real time. This is great when you're experimenting and want to see the results. Write down any changes that you want to keep, because when you stop the Play mode, any changes you made will be lost. When you are in the Development mode, changes that you make to the property values will be saved by Unity. This means that if you quit Unity and start it again, the changes will be retained. Of course, you won't see the effect of your changes until you click Play.

The changes that you make to the property values in the Inspector panel do not modify your script. The only way your script can be changed is by you editing it in the script editor (MonoDevelop). The values shown in the Inspector panel override any values you might have assigned in your script. If you want to undo the changes you've made in the Inspector panel, you can reset the values to the default values assigned in your script. Click on the cog icon (the gear) on the far right of the component script, and then select Reset, as shown in the following screenshot:

Displaying public variables in the Inspector panel

You might still be wondering what the word public at the beginning of a variable statement means:

public int number1 = 2;

We mentioned it before. It means that the variable will be visible and accessible. It will be visible as a property in the Inspector panel so that you can manipulate the value stored in the variable. The word also means that it can be accessed from other scripts using the dot syntax.

Private variables

Not all variables need to be public. If there's no need for a variable to be changed in the Inspector panel or accessed from other scripts, it doesn't make sense to clutter the Inspector panel with needless properties. In the LearningScript, perform the following steps:

Change line 6 to this: private int number1 = 2;
Then change line 7 to the following: int number2 = 9;
Save the file
In Unity, select Main Camera

You will notice in the Inspector panel that both properties, Number 1 and Number 2, are gone:

Line 6: private int number1 = 2;

The preceding line explicitly states that the number1 variable has to be private. Therefore, the variable is no longer a property in the Inspector panel.
It is now a private variable for storing data: Line 7: int number2 = 9; The number2 variable is no longer visible as a property either, but you didn't specify it as private. If you don't explicitly state whether a variable will be public or private, by default, the variable will implicitly be private in C#. It is good coding practice to explicitly state whether a variable will be public or private. So now, when you click Play, the script works exactly as it did before. You just can't manipulate the values manually in the Inspector panel anymore. Naming Unity variables properly As we explored previously, naming a script or variable is a very important step. It won't change the way that the code runs, but it will help us to stay organized and, by using best practices, we are avoiding errors and saving time trying to find the piece of code that isn't working. Always use meaningful names to store your variables. If you don't do that, six months down the line, you will be lost. I'm going to exaggerate here a bit to make a point. Let's say you will name a variable as shown in this code: public bool areRoadConditionsPerfect = true; That's a descriptive name. In other words, you know what it means by just reading the variable. So 10 years from now, when you look at that name, you'll know exactly what I meant in the previous comment. Now suppose that instead of areRoadConditionsPerfect, you had named this variable as shown in the following code: public bool perfect = true; Sure, you know what perfect is, but would you know that it refers to perfect road conditions? I know that right now you'll understand it because you just wrote it, but six months down the line, after writing hundreds of other scripts for all sorts of different projects, you'll look at this word and wonder what you meant. You'll have to read several lines of code you wrote to try to figure it out. You may look at the code and wonder who in their right mind would write such terrible code. So, take your time to write descriptive code that even a stranger can look at and know what you mean. Believe me, in six months or probably less time, you will be that stranger. Using meaningful names for variables and methods is helpful not only for you but also for any other game developer who will be reading your code. Whether or not you work in a team, you should always write easy-to-read code. Beginning variable names with lowercase You should begin a variable name with a lowercase letter because it helps distinguish between a class name and a variable name in your code. There are some other guides in the C# documentation as well, but we don't need to worry about them at this stage. Component names (class names) begin with an uppercase letter. For example, it's easy to know that Transform is a class and transform is a variable. There are, of course, exceptions to this general rule, and every programmer has a preferred way of using lowercase, uppercase, and perhaps an underscore to begin a variable name. In the end, you will have to decide upon a naming convention that you like. If you read the Unity forums, you will notice that there are some heated debates on naming variables. In this book, I will show you my preferred way, but you can use whatever is more comfortable for you. Using multiword variable names Let's use the same example again, as follows: public bool areRoadConditionsPerfect = true; You can see that the variable name is actually four words squeezed together. 
Since variable names can be only one word, begin the first word with a lowercase and then just capitalize the first letter of every additional word. This greatly helps create descriptive names that the viewer is still able to read. There's a term for this, it's called camel casing. I have already mentioned that for public variables, Unity's Inspector will separate each word and capitalize the first word. Go ahead! Add the previous statement to the LearningScript and see what Unity does with it in the Inspector panel. Declaring a variable and its type Every variable that we want to use in a script must be declared in a statement. What does that mean? Well, before Unity can use a variable, we have to tell Unity about it first. Okay then, what are we supposed to tell Unity about the variable? There are only three absolute requirements to declare a variable and they are as follows: We have to specify the type of data that a variable can store We have to provide a name for the variable We have to end the declaration statement with a semicolon The following is the syntax we use to declare a variable: typeOfData nameOfTheVariable; Let's use one of the LearningScript variables as an example; the following is how we declare a variable with the bare minimum requirements: int number1; This is what we have: Requirement #1 is the type of data that number1 can store, which in this case is an int, meaning an integer Requirement #2 is a name, which is number1 Requirement #3 is the semicolon at the end The second requirement of naming a variable has already been discussed. The third requirement of ending a statement with a semicolon has also been discussed. The first requirement of specifying the type of data will be covered next. The following is what we know about this bare minimum declaration as far as Unity is concerned: There's no public modifier, which means it's private by default It won't appear in the Inspector panel or be accessible from other scripts The value stored in number1 defaults to zero We discussed working with the Unity 2017 variables and how you can start working with them to create fun-filled games effectively. If you liked this article, be sure to go through the book Learning C# by Developing games with Unity 2017 to create exciting games with C# and Unity 2017. Read More Unity 2D & 3D game kits simplify Unity game development for beginners Build a Virtual Reality Solar System in Unity for Google Cardboard Unity Machine Learning Agents: Transforming Games with Artificial Intelligence
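Pulling the declaration rules from this excerpt together, the following sketch (with made-up field names) shows the bare-minimum declaration next to explicitly public and private declarations written in camel case:

using UnityEngine;

// Field names below are made up for this sketch.
public class DeclarationExamples : MonoBehaviour
{
    // Bare minimum: a type, a name, and a semicolon. With no access modifier
    // the field is private by default, hidden from the Inspector, and an int
    // defaults to zero.
    int number1;

    // Being explicit is better practice.
    private int number2 = 9;                      // hidden from the Inspector
    public bool areRoadConditionsPerfect = true;  // shown as "Are Road Conditions Perfect"

    // Multiword names use camel casing: lowercase first word, capitalize the rest.
    public float playerMovementSpeed = 3.5f;
}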

Lights and Effects

Packt
29 Sep 2015
27 min read
 In this article by Matt Smith and Chico Queiroz, authors of Unity 5.x Cookbook, we will cover the following topics: Using lights and cookie textures to simulate a cloudy day Adding a custom Reflection map to a scene Creating a laser aim with Projector and Line Renderer Reflecting surrounding objects with Reflection Probes Setting up an environment with Procedural Skybox and Directional Light (For more resources related to this topic, see here.) Introduction Whether you're willing to make a better-looking game, or add interesting features, lights and effects can boost your project and help you deliver a higher quality product. In this article, we will look at the creative ways of using lights and effects, and also take a look at some of Unity's new features, such as Procedural Skyboxes, Reflection Probes, Light Probes, and custom Reflection Sources. Lighting is certainly an area that has received a lot of attention from Unity, which now features real-time Global Illumination technology provided by Enlighten. This new technology provides better and more realistic results for both real-time and baked lighting. For more information on Unity's Global Illumination system, check out its documentation at http://docs.unity3d.com/Manual/GIIntro.html. The big picture There are many ways of creating light sources in Unity. Here's a quick overview of the most common methods. Lights Lights are placed into the scene as game objects, featuring a Light component. They can function in Realtime, Baked, or Mixed modes. Among the other properties, they can have their Range, Color, Intensity, and Shadow Type set by the user. There are four types of lights: Directional Light: This is normally used to simulate the sunlight Spot Light: This works like a cone-shaped spot light Point Light: This is a bulb lamp-like, omnidirectional light Area Light: This baked-only light type is emitted in all directions from a rectangle-shaped entity, allowing for a smooth, realistic shading For an overview of the light types, check Unity's documentation at http://docs.unity3d.com/Manual/Lighting.html. Different types of lights Environment Lighting Unity's Environment Lighting is often achieved through the combination of a Skybox material and sunlight defined by the scene's Directional Light. Such a combination creates an ambient light that is integrated into the scene's environment, and which can be set as Realtime or Baked into Lightmaps. Emissive materials When applied to static objects, materials featuring the Emission colors or maps will cast light over surfaces nearby, in both real-time and baked modes, as shown in the following screenshot: Projector As its name suggests, a Projector can be used to simulate projected lights and shadows, basically by projecting a material and its texture map onto the other objects. Lightmaps and Light Probes Lightmaps are basically texture maps generated from the scene's lighting information and applied to the scene's static objects in order to avoid the use of processing-intensive real-time lighting. Light Probes are a way of sampling the scene's illumination at specific points in order to have it applied onto dynamic objects without the use of real-time lighting. The Lighting window The Lighting window, which can be found through navigating to the Window | Lighting menu, is the hub for setting and adjusting the scene's illumination features, such as Lightmaps, Global Illumination, Fog, and much more. 
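As a quick aside before the recipes, the same light types can also be created and configured from a script; the following is a minimal sketch (not part of the cookbook) using Unity's Light component API, with illustrative values:

using UnityEngine;

// Not part of the cookbook: creates a Directional Light and a Point Light
// from code using the Light component, with illustrative values.
public class LightSetupExample : MonoBehaviour
{
    void Start()
    {
        // Directional Light, typically used to simulate sunlight.
        GameObject sunObject = new GameObject("Sun");
        Light sun = sunObject.AddComponent<Light>();
        sun.type = LightType.Directional;
        sun.color = Color.white;
        sun.intensity = 1.0f;
        sun.shadows = LightShadows.Soft;
        sunObject.transform.rotation = Quaternion.Euler(50f, -30f, 0f);

        // Point Light, a bulb-like, omnidirectional light.
        GameObject bulbObject = new GameObject("Bulb");
        Light bulb = bulbObject.AddComponent<Light>();
        bulb.type = LightType.Point;
        bulb.range = 10f;
        bulb.intensity = 2.0f;
        bulb.color = new Color(1f, 0.9f, 0.7f);
    }
}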
It's strongly recommended that you take a look at Unity's documentation on the subject, which can be found at http://docs.unity3d.com/Manual/GlobalIllumination.html. Using lights and cookie textures to simulate a cloudy day As it can be seen in many first-person shooters and survival horror games, lights and shadows can add a great deal of realism to a scene, helping immensely to create the right atmosphere for the game. In this recipe, we will create a cloudy outdoor environment using cookie textures. Cookie textures work as masks for lights. It functions by adjusting the intensity of the light projection to the cookie texture's alpha channel. This allows for a silhouette effect (just think of the bat-signal) or, as in this particular case, subtle variations that give a filtered quality to the lighting. Getting ready If you don't have access to an image editor, or prefer to skip the texture map elaboration in order to focus on the implementation, please use the image file called cloudCookie.tga, which is provided inside the 1362_06_01 folder. How to do it... To simulate a cloudy outdoor environment, follow these steps: In your image editor, create a new 512 x 512 pixel image. Using black as the foreground color and white as the background color, apply the Clouds filter (in Photoshop, this is done by navigating to the Filter | Render | Clouds menu). Learning about the Alpha channel is useful, but you could get the same result without it. Skip steps 3 to 7, save your image as cloudCookie.png and, when changing texture type in step 9, leave Alpha from Greyscale checked. Select your entire image and copy it. Open the Channels window (in Photoshop, this can be done by navigating to the Window | Channels menu). There should be three channels: Red, Green, and Blue. Create a new channel. This will be the Alpha channel. In the Channels window, select the Alpha 1 channel and paste your image into it. Save your image file as cloudCookie.PSD or TGA. Import your image file to Unity and select it in the Project view. From the Inspector view, change its Texture Type to Cookie and its Light Type to Directional. Then, click on Apply, as shown: We will need a surface to actually see the lighting effect. You can either add a plane to your scene (via navigating to the GameObject | 3D Object | Plane menu), or create a Terrain (menu option GameObject | 3D Object | Terrain) and edit it, if you so you wish. Let's add a light to our scene. Since we want to simulate sunlight, the best option is to create a Directional Light. You can do this through the drop-down menu named Create | Light | Directional Light in the Hierarchy view. Using the Transform component of the Inspector view, reset the light's Position to X: 0, Y: 0, Z: 0 and its Rotation to X: 90; Y: 0; Z: 0. In the Cookie field, select the cloudCookie texture that you imported earlier. Change the Cookie Size field to 80, or a value that you feel is more appropriate for the scene's dimension. Please leave Shadow Type as No Shadows. Now, we need a script to translate our light and, consequently, the Cookie projection. Using the Create drop-down menu in the Project view, create a new C# Script named MovingShadows.cs. 
Open your script and replace everything with the following code: using UnityEngine; using System.Collections; public class MovingShadows : MonoBehaviour{ public float windSpeedX; public float windSpeedZ; private float lightCookieSize; private Vector3 initPos; void Start(){ initPos = transform.position; lightCookieSize = GetComponent<Light>().cookieSize; } void Update(){ Vector3 pos = transform.position; float xPos= Mathf.Abs (pos.x); float zPos= Mathf.Abs (pos.z); float xLimit = Mathf.Abs(initPos.x) + lightCookieSize; float zLimit = Mathf.Abs(initPos.z) + lightCookieSize; if (xPos >= xLimit) pos.x = initPos.x; if (zPos >= zLimit) pos.z = initPos.z; transform.position = pos; float windX = Time.deltaTime * windSpeedX; float windZ = Time.deltaTime * windSpeedZ; transform.Translate(windX, 0, windZ, Space.World); } } Save your script and apply it to the Directional Light. Select the Directional Light. In the Inspector view, change the parameters Wind Speed X and Wind Speed Z to 20 (you can change these values as you wish, as shown). Play your scene. The shadows will be moving. How it works... With our script, we are telling the Directional Light to move across the X and Z axis, causing the Light Cookie texture to be displaced as well. Also, we reset the light object to its original position whenever it traveled a distance that was either equal to or greater than the Light Cookie Size. The light position must be reset to prevent it from traveling too far, causing problems in real-time render and lighting. The Light Cookie Size parameter is used to ensure a smooth transition. The reason we are not enabling shadows is because the light angle for the X axis must be 90 degrees (or there will be a noticeable gap when the light resets to the original position). If you want dynamic shadows in your scene, please add a second Directional Light. There's more... In this recipe, we have applied a cookie texture to a Directional Light. But what if we were using the Spot or Point Lights? Creating Spot Light cookies Unity documentation has an excellent tutorial on how to make the Spot Light cookies. This is great to simulate shadows coming from projectors, windows, and so on. You can check it out at http://docs.unity3d.com/Manual/HOWTO-LightCookie.html. Creating Point Light Cookies If you want to use a cookie texture with a Point Light, you'll need to change the Light Type in the Texture Importer section of the Inspector. Adding a custom Reflection map to a scene Whereas Unity Legacy Shaders use individual Reflection Cubemaps per material, the new Standard Shader gets its reflection from the scene's Reflection Source, as configured in the Scene section of the Lighting window. The level of reflectiveness for each material is now given by its Metallic value or Specular value (for materials using Specular setup). This new method can be a real time saver, allowing you to quickly assign the same reflection map to every object in the scene. Also, as you can imagine, it helps keep the overall look of the scene coherent and cohesive. In this recipe, we will learn how to take advantage of the Reflection Source feature. Getting ready For this recipe, we will prepare a Reflection Cubemap, which is basically the environment to be projected as a reflection onto the material. It can be made from either six or, as shown in this recipe, a single image file. 
To help us with this recipe, it's been provided a Unity package, containing a prefab made of a 3D object and a basic Material (using a TIFF as Diffuse map), and also a JPG file to be used as the reflection map. All these files are inside the 1362_06_02 folder. How to do it... To add Reflectiveness and Specularity to a material, follow these steps: Import batteryPrefab.unitypackage to a new project. Then, select battery_prefab object from the Assets folder, in the Project view. From the Inspector view, expand the Material component and observe the asset preview window. Thanks to the Specular map, the material already features a reflective look. However, it looks as if it is reflecting the scene's default Skybox, as shown: Import the CustomReflection.jpg image file. From the Inspector view, change its Texture Type to Cubemap, its Mapping to Latitude - Longitude Layout (Cylindrical), and check the boxes for Glossy Reflection and Fixup Edge Seams. Finally, change its Filter Mode to Trilinear and click on the Apply button, shown as follows: Let's replace the Scene's Skybox with our newly created Cubemap, as the Reflection map for our scene. In order to do this, open the Lighting window by navigating to the Window | Lighting menu. Select the Scene section and use the drop-down menu to change the Reflection Source to Custom. Finally, assign the newly created CustomReflection texture as the Cubemap, shown as follows: Check out for the new reflections on the battery_prefab object. How it works... While it is the material's specular map that allows for a reflective look, including the intensity and smoothness of the reflection, the refection itself (that is, the image you see on the reflection) is given by the Cubemap that we have created from the image file. There's more... Reflection Cubemaps can be achieved in many ways and have different mapping properties. Mapping coordinates The Cylindrical mapping that we applied was well-suited for the photograph that we used. However, depending on how the reflection image is generated, a Cubic or Spheremap-based mapping can be more appropriate. Also, note that the Fixup Edge Seams option will try to make the image seamless. Sharp reflections You might have noticed that the reflection is somewhat blurry compared to the original image; this is because we have ticked the Glossy Reflections box. To get a sharper-looking reflection, deselect this option; in which case, you can also leave the Filter Mode option as default (Bilinear). Maximum size At 512 x 512 pixels, our reflection map will probably run fine on the lower-end machines. However, if the quality of the reflection map is not so important in your game's context, and the original image dimensions are big (say, 4096 x 4096), you might want to change the texture's Max Size at the Import Settings to a lower number. Creating a laser aim with Projector and Line Renderer Although using GUI elements, such as a cross-hair, is a valid way to allow players to aim, replacing (or combining) it with a projected laser dot might be a more interesting approach. In this recipe, we will use the Projector and Line components to implement this concept. Getting ready To help us with this recipe, it's been provided with a Unity package containing a sample scene featuring a character holding a laser pointer, and also a texture map named LineTexture. All files are inside the 1362_06_03 folder. Also, we'll make use of the Effects assets package provided by Unity (which you should have installed when installing Unity). 
How to do it... To create a laser dot aim with a Projector, follow these steps: Import BasicScene.unitypackage to a new project. Then, open the scene named BasicScene. This is a basic scene, featuring a player character whose aim is controlled via mouse. Import the Effects package by navigating to the Assets | Import Package | Effects menu. If you want to import only the necessary files within the package, deselect everything in the Importing package window by clicking on the None button, and then check the Projectors folder only. Then, click on Import, as shown: From the Inspector view, locate the ProjectorLight shader (inside the Assets | Standard Assets | Effects | Projectors | Shaders folder). Duplicate the file and name the new copy as ProjectorLaser. Open ProjectorLaser. From the first line of the code, change Shader "Projector/Light" to Shader "Projector/Laser". Then, locate the line of code – Blend DstColor One and change it to Blend One One. Save and close the file. The reason for editing the shader for the laser was to make it stronger by changing its blend type to Additive. However, if you want to learn more about it, check out Unity's documentation on the subject, which is available at http://docs.unity3d.com/Manual/SL-Reference.html. Now that we have fixed the shader, we need a material. From the Project view, use the Create drop-down menu to create a new Material. Name it LaserMaterial. Then, select it from the Project view and, from the Inspector view, change its Shader to Projector/Laser. From the Project view, locate the Falloff texture. Open it in your image editor and, except for the first and last columns column of pixels that should be black, paint everything white. Save the file and go back to Unity. Change the LaserMaterial's Main Color to red (RGB: 255, 0, 0). Then, from the texture slots, select the Light texture as Cookie and the Falloff texture as Falloff. From the Hierarchy view, find and select the pointerPrefab object (MsLaser | mixamorig:Hips | mixamorig:Spine | mixamorig:Spine1 | mixamorig:Spine2 | mixamorig:RightShoulder | mixamorig:RightArm | mixamorig:RightForeArm | mixamorig:RightHand | pointerPrefab). Then, from the Create drop-down menu, select Create Empty Child. Rename the new child of pointerPrefab as LaserProjector. Select the LaserProjector object. Then, from the Inspector view, click the Add Component button and navigate to Effects | Projector. Then, from the Projector component, set the Orthographic option as true and set Orthographic Size as 0.1. Finally, select LaserMaterial from the Material slot. Test the scene. You will be able to see the laser aim dot, as shown: Now, let's create a material for the Line Renderer component that we are about to add. From the Project view, use the Create drop-down menu to add a new Material. Name it as Line_Mat. From the Inspector view, change the shader of the Line_Mat to Particles/Additive. Then, set its Tint Color to red (RGB: 255;0;0). Import the LineTexture image file. Then, set it as the Particle Texture for the Line_Mat, as shown: Use the Create drop-down menu from Project view to add a C# script named LaserAim. Then, open it in your editor. 
Replace everything with the following code: using UnityEngine; using System.Collections; public class LaserAim : MonoBehaviour { public float lineWidth = 0.2f; public Color regularColor = new Color (0.15f, 0, 0, 1); public Color firingColor = new Color (0.31f, 0, 0, 1); public Material lineMat; private Vector3 lineEnd; private Projector proj; private LineRenderer line; void Start () { line = gameObject.AddComponent<LineRenderer>(); line.material = lineMat; line.material.SetColor("_TintColor", regularColor); line.SetVertexCount(2); line.SetWidth(lineWidth, lineWidth); proj = GetComponent<Projector> (); } void Update () { RaycastHit hit; Vector3 fwd = transform.TransformDirection(Vector3.forward); if (Physics.Raycast (transform.position, fwd, out hit)) { lineEnd = hit.point; float margin = 0.5f; proj.farClipPlane = hit.distance + margin; } else { lineEnd = transform.position + fwd * 10f; } line.SetPosition(0, transform.position); line.SetPosition(1, lineEnd); if(Input.GetButton("Fire1")){ float lerpSpeed = Mathf.Sin (Time.time * 10f); lerpSpeed = Mathf.Abs(lerpSpeed); Color lerpColor = Color.Lerp(regularColor, firingColor, lerpSpeed); line.material.SetColor("_TintColor", lerpColor); } if(Input.GetButtonUp("Fire1")){ line.material.SetColor("_TintColor", regularColor); } } } Save your script and attach it to the LaserProjector game object. Select the LaserProjector GameObject. From the Inspector view, find the Laser Aim component and fill the Line Material slot with the Line_Mat material, as shown: Play the scene. The laser aim is ready, and looks as shown: In this recipe, the width of the laser beam and its aim dot have been exaggerated. Should you need a more realistic thickness for your beam, change the Line Width field of the Laser Aim component to 0.05, and the Orthographic Size of the Projector component to 0.025. Also, remember to make the beam more opaque by setting the Regular Color of the Laser Aim component brighter. How it works... The laser aim effect was achieved by combining two different effects: a Projector and Line Renderer. A Projector, which can be used to simulate light, shadows, and more, is a component that projects a material (and its texture) onto other game objects. By attaching a projector to the Laser Pointer object, we have ensured that it will face the right direction at all times. To get the right, vibrant look, we have edited the projector material's Shader, making it brighter. Also, we have scripted a way to prevent projections from going through objects, by setting its Far Clip Plane on approximately the same level of the first object that is receiving the projection. The line of code that is responsible for this action is—proj.farClipPlane = hit.distance + margin;. Regarding the Line Renderer, we have opted to create it dynamically, via code, instead of manually adding the component to the game object. The code is also responsible for setting up its appearance, updating the line vertices position, and changing its color whenever the fire button is pressed, giving it a glowing/pulsing look. For more details on how the script works, don't forget to check out the commented code, available within the 1362_06_03 | End folder. Reflecting surrounding objects with Reflection Probes If you want your scene's environment to be reflected by game objects, featuring reflective materials (such as the ones with high Metallic or Specular levels), then you can achieve such effect using Reflection Probes. 
They allow for real-time, baked, or even custom reflections through the use of Cubemaps. Real-time reflections can be expensive in terms of processing; in which case, you should favor baked reflections, unless it's really necessary to display dynamic objects being reflected (mirror-like objects, for instance). Still, there are some ways real-time reflections can be optimized. In this recipe, we will test three different configurations for reflection probes: Real-time reflections (constantly updated) Real-time reflections (updated on-demand) via script Baked reflections (from the Editor) Getting ready For this recipe, we have prepared a basic scene, featuring three sets of reflective objects: one is constantly moving, one is static, and one moves whenever it is interacted with. The Probes.unitypackage package that is containing the scene can be found inside the 1362_06_04 folder. How to do it... To reflect the surrounding objects using the Reflection probes, follow these steps: Import Probes.unitypackage to a new project. Then, open the scene named Probes. This is a basic scene featuring three sets of reflective objects. Play the scene. Observe that one of the systems is dynamic, one is static, and one rotates randomly, whenever a key is pressed. Stop the scene. First, let's create a constantly updated real-time reflection probe. From the Create drop-down button of the Hierarchy view, add a Reflection Probe to the scene (Create | Light | Reflection Probe). Name it as RealtimeProbe and make it a child of the System 1 Realtime | MainSphere game object. Then, from the Inspector view, the Transform component, change its Position to X: 0; Y: 0; Z: 0, as shown: Now, go to the Reflection Probe component. Set Type as Realtime; Refresh Mode as Every Frame and Time Slicing as No time slicing, shown as follows: Play the scene. The reflections will be now be updated in real time. Stop the scene. Observe that the only object displaying the real-time reflections is System 1 Realtime | MainSphere. The reason for this is the Size of the Reflection Probe. From the Reflection Probe component, change its Size to X: 25; Y: 10; Z: 25. Note that the small red spheres are now affected as well. However, it is important to notice that all objects display the same reflection. Since our reflection probe's origin is placed at the same location as the MainSphere, all reflective objects will display reflections from that point of view. If you want to eliminate the reflection from the reflective objects within the reflection probe, such as the small red spheres, select the objects and, from the Mesh Renderer component, set Reflection Probes as Off, as shown in the following screenshot: Add a new Reflection Probe to the scene. This time, name it OnDemandProbe and make it a child of the System 2 On Demand | MainSphere game object. Then, from the Inspector view, Transform component, change its Position to X: 0; Y: 0; Z: 0. Now, go to the Reflection Probe component. Set Type as Realtime, Refresh Mode as Via scripting, and Time Slicing as Individual faces, as shown in the following screenshot: Using the Create drop-down menu in the Project view, create a new C# Script named UpdateProbe. 
Open your script and replace everything with the following code: using UnityEngine; using System.Collections; public class UpdateProbe : MonoBehaviour { private ReflectionProbe probe; void Awake () { probe = GetComponent<ReflectionProbe> (); probe.RenderProbe(); } public void RefreshProbe(){ probe.RenderProbe(); } } Save your script and attach it to the OnDemandProbe. Now, find the script named RandomRotation, which is attached to the System 2 On Demand | Spheres object, and open it in the code editor. Right before the Update() function, add the following lines: private GameObject probe; private UpdateProbe up; void Awake(){ probe = GameObject.Find("OnDemandProbe"); up = probe.GetComponent<UpdateProbe>(); } Now, locate the line of code called transform.eulerAngles = newRotation; and, immediately after it, add the following line: up.RefreshProbe(); Save the script and test your scene. Observe how the Reflection Probe is updated whenever a key is pressed. Stop the scene. Add a third Reflection Probe to the scene. Name it as CustomProbe and make it a child of the System 3 On Custom | MainSphere game object. Then, from the Inspector view, the Transform component, change its Position to X: 0; Y: 0; Z: 0. Go to the Reflection Probe component. Set Type as Custom and click on the Bake button, as shown: A Save File dialog window will show up. Save the file as CustomProbe-reflectionHDR.exr. Observe that the reflection map does not include the reflection of red spheres on it. To change this, you have two options: set the System 3 On Custom | Spheres GameObject (and all its children) as Reflection Probe Static or, from the Reflection Probe component of the CustomProbe GameObject, check the Dynamic Objects option, as shown, and bake the map again (by clicking on the Bake button). If you want your reflection Cubemap to be dynamically baked while you edit your scene, you can set the Reflection Probe Type to Baked, open the Lighting window (the Assets | Lighting menu), access the Scene section, and check the Continuous Baking option as shown. Please note that this mode won't include dynamic objects in the reflection, so be sure to set System 3 Custom | Spheres and System 3 Custom | MainSphere as Reflection Probe Static. How it works... The Reflection Probes element act like omnidirectional cameras that render Cubemaps and apply them onto the objects within their constraints. When creating Reflection Probes, it's important to be aware of how the different types work: Real-time Reflection Probes: Cubemaps are updated at runtime. The real-time Reflection Probes have three different Refresh Modes: On Awake (Cubemap is baked once, right before the scene starts); Every frame (Cubemap is constantly updated); Via scripting (Cubemap is updated whenever the RenderProbe function is used).Since Cubemaps feature six sides, the Reflection Probes features Time Slicing, so each side can be updated independently. There are three different types of Time Slicing: All Faces at Once (renders all faces at once and calculates mipmaps over 6 frames. Updates the probe in 9 frames); Individual Faces (each face is rendered over a number of frames. It updates the probe in 14 frames. The results can be a bit inaccurate, but it is the least expensive solution in terms of frame-rate impact); No Time Slicing (The Probe is rendered and mipmaps are calculated in one frame. It provides high accuracy, but it also the most expensive in terms of frame-rate). Baked: Cubemaps are baked during editing the screen. 
Cubemaps can be either manually or automatically updated, depending whether the Continuous Baking option is checked (it can be found at the Scene section of the Lighting window). Custom: The Custom Reflection Probes can be either manually baked from the scene (and even include Dynamic objects), or created from a premade Cubemap. There's more... There are a number of additional settings that can be tweaked, such as Importance, Intensity, Box Projection, Resolution, HDR, and so on. For a complete view on each of these settings, we strongly recommend that you read Unity's documentation on the subject, which is available at http://docs.unity3d.com/Manual/class-ReflectionProbe.html. Setting up an environment with Procedural Skybox and Directional Light Besides the traditional 6 Sided and Cubemap, Unity now features a third type of skybox: the Procedural Skybox. Easy to create and setup, the Procedural Skybox can be used in conjunction with a Directional Light to provide Environment Lighting to your scene. In this recipe, we will learn about different parameters of the Procedural Skybox. Getting ready For this recipe, you will need to import Unity's Standard Assets Effects package, which you should have installed when installing Unity. How to do it... To set up an Environment Lighting using the Procedural Skybox and Directional Light, follow these steps: Create a new scene inside a Unity project. Observe that a new scene already includes two objects: the Main Camera and a Directional Light. Add some cubes to your scene, including one at Position X: 0; Y: 0; Z: 0 scaled to X: 20; Y: 1; Z: 20, which is to be used as the ground, as shown: Using the Create drop-down menu from the Project view, create a new Material and name it MySkybox. From the Inspector view, use the appropriate drop-down menu to change the Shader of MySkybox from Standard to Skybox/Procedural. Open the Lighting window (menu Window | Lighting), access the Scene section. At the Environment Lighting subsection, populate the Skybox slot with the MySkybox material, and the Sun slot with the Directional Light from the Scene. From the Project view, select MySkybox. Then, from the Inspector view, set Sun size as 0.05 and Atmosphere Thickness as 1.4. Experiment by changing the Sky Tint color to RGB: 148; 128; 128, and the Ground color to a value that resembles the scene cube floor's color (such as RGB: 202; 202; 202). If you feel the scene is too bright, try bringing the Exposure level down to 0.85, shown as follows: Select the Directional Light and change its Rotation to X: 5; Y: 170; Z: 0. Note that the scene should resemble a dawning environment, something like the following scene: Let's make things even more interesting. Using the Create drop-down menu in the Project view, create a new C# Script named RotateLight. Open your script and replace everything with the following code: using UnityEngine; using System.Collections; public class RotateLight : MonoBehaviour { public float speed = -1.0f; void Update () { transform.Rotate(Vector3.right * speed * Time.deltaTime); } } Save it and add it as a component to the Directional Light. Import the Effects Assets package into your project (via the Assets | Import Package | Effects menu). Select the Directional Light. Then, from Inspector view, Light component, populate the Flare slot with the Sun flare. From the Scene section of the Lighting window, find the Other Settings subsection. Then, set Flare Fade Speed as 3 and Flare Strength as 0.5, shown as follows: Play the scene. 
You will see the sun rising and the Skybox colors changing accordingly. How it works... Ultimately, the appearance of Unity's native Procedural Skyboxes depends on the five parameters that make them up: Sun size: The size of the bright yellow sun that is drawn onto the skybox is located according to the Directional Light's Rotation on the X and Y axes. Atmosphere Thickness: This simulates how dense the atmosphere is for this skybox. Lower values (less than 1.0) are good for simulating the outer space settings. Moderate values (around 1.0) are suitable for the earth-based environments. Values that are slightly above 1.0 can be useful when simulating air pollution and other dramatic settings. Exaggerated values (like more than 2.0) can help to illustrate extreme conditions or even alien settings. Sky Tint: It is the color that is used to tint the skybox. It is useful for fine-tuning or creating stylized environments. Ground: This is the color of the ground. It can really affect the Global Illumination of the scene. So, choose a value that is close to the level's terrain and/or geometry (or a neutral one). Exposure: This determines the amount of light that gets in the skybox. The higher levels simulate overexposure, while the lower values simulate underexposure. It is important to notice that the Skybox appearance will respond to the scene's Directional Light, playing the role of the Sun. In this case, rotating the light around its X axis can create dawn and sunset scenarios, whereas rotating it around its Y axis will change the position of the sun, changing the cardinal points of the scene. Also, regarding the Environment Lighting, note that although we have used the Skybox as the Ambient Source, we could have chosen a Gradient or a single Color instead—in which case, the scene's illumination wouldn't be attached to the Skybox appearance. Finally, also regarding the Environment Lighting, please note that we have set the Ambient GI to Realtime. The reason for this was to allow the real-time changes in the GI, promoted by the rotating Directional Light. In case we didn't need these changes at runtime, we could have chosen the Baked alternative. Summary In this article you have learned and had hands-on approach to a number Unity's lighting system features, such as cookie textures, Reflection maps, Lightmaps, Light and Reflection probes, and Procedural Skyboxes. The article also demonstrated the use of Projectors. Resources for Article: Further resources on this subject: Animation features in Unity 5[article] Scripting Strategies[article] Editor Tool, Prefabs, and Main Menu [article]
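As a closing aside, the Procedural Skybox parameters described above can also be driven from a script. The following sketch is not part of the recipe; it assumes the Skybox/Procedural shader exposes its parameters under internal names such as _AtmosphereThickness and _Exposure, so verify those names in your Unity version before relying on it:

using UnityEngine;

// Not part of the recipe. Drives the Procedural Skybox material from code.
// _AtmosphereThickness and _Exposure are assumed internal property names of
// the Skybox/Procedural shader; check them in your Unity version.
public class SkyboxTweaker : MonoBehaviour
{
    public Material proceduralSkybox;   // assign the MySkybox material here
    [Range(0f, 5f)] public float atmosphereThickness = 1.4f;
    [Range(0f, 8f)] public float exposure = 0.85f;

    void Update()
    {
        if (proceduralSkybox == null) return;

        proceduralSkybox.SetFloat("_AtmosphereThickness", atmosphereThickness);
        proceduralSkybox.SetFloat("_Exposure", exposure);

        // Keep the ambient lighting in sync with the changing sky. Updating
        // the environment every frame is costly; do it sparingly in a real project.
        RenderSettings.skybox = proceduralSkybox;
        DynamicGI.UpdateEnvironment();
    }
}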

Google DeepMind’s AI AlphaStar beats StarCraft II pros TLO and MaNa; wins 10-1 against the gamers

Natasha Mathur
25 Jan 2019
5 min read
It was two days back when the Blizzard team announced an update about the demo of the progress made by Google’s DeepMind AI at StarCraft II, a real-time strategy video game. The demo was presented yesterday over a live stream where it showed, AlphaStar, DeepMind’s StarCraft II AI program, beating the top two professional StarCraft II players, TLO and MaNa. The demo presented a series of five separate test matches that were held earlier on 19 December, against Team Liquid’s Grzegorz "MaNa" Komincz, and Dario “TLO” Wünsch. AlphaStar beat the two professional players, managing to score 10-0 in total (5-0 against each). After the 10 straight wins, AlphaStar finally got beaten by MaNa in a live match streamed by Blizzard and DeepMind. https://twitter.com/LiquidTLO/status/1088524496246657030 https://twitter.com/Liquid_MaNa/status/1088534975044087808 How does AlphaStar learn? AlphaStar learns by imitating the basic micro and macro-strategies used by players on the StarCraft ladder. A neural network was trained initially using supervised learning from anonymised human games released by Blizzard. This initial AI agent managed to defeat the “Elite” level AI in 95% of games. Once the agents get trained from human game replays, they’re then trained against other competitors in the “AlphaStar league”. This is where a multi-agent reinforcement learning process starts. New competitors are added to the league (branched from existing competitors). Each of these agents then learns from games against other competitors. This ensures that each competitor performs well against the strongest strategies, and does not forget how to defeat earlier ones.                                          AlphaStar As the league continues to progress, new counter-strategies emerge, that can defeat the earlier strategies. Also, each agent has its own learning objective which gets adapted during the training. One agent might have an objective to beat one specific competitor, while another one might want to beat a whole distribution of competitors. So, the neural network weights of each agent get updated using reinforcement learning, from its games against competitors. This helps optimise their personal learning objective. How does AlphaStar play the game? TLO and MaNa, professional StarCraft players, can issue hundreds of actions per minute (APM) on average. AlphaStar had an average APM of around 280 in its games against TLO and MaNa, which is significantly lower than the professional players. This is because AlphaStar starts its learning using replays and thereby mimics the way humans play the game. Moreover, AlphaStar also showed the delay between observation and action of 350ms on average.                                                    AlphaStar AlphaStar might have had a slight advantage over the human players as it interacted with the StarCraft game engine directly via its raw interface. What this means is that it could observe the attributes of its own as well as its opponent’s visible units on the map directly, basically getting a zoomed out view of the game. Human players, however, have to split their time and attention to decide where to focus the camera on the map. But, the analysis results of the game showed that the AI agents “switched context” about 30 times per minute, akin to MaNa or TLO. This proves that AlphaStar’s success against MaNa and TLO is due to its superior macro and micro-strategic decision-making. It isn’t the superior click-rate, faster reaction times, or the raw interface, that made the AI win. 
MaNa managed to beat AlphaStar in one match DeepMind also developed a second version of AlphaStar, which played like human players, meaning that it had to choose when and where to move the camera. Two new agents were trained, one that used the raw interface and the other that learned to control the camera, against the AlphaStar league.                                                           AlphaStar “The version of AlphaStar using the camera interface was almost as strong as the raw interface, exceeding 7000 MMR on our internal leaderboard”, states the DeepMind team. But, the team didn’t get the chance to test the AI against a human pro prior to the live stream.   In a live exhibition match, MaNa managed to defeat the new version of AlphaStar using the camera interface, which was trained for only 7 days. “We hope to evaluate a fully trained instance of the camera interface in the near future”, says the team. DeepMind team states AlphaStar’s performance was initially tested against TLO, where it won the match. “I was surprised by how strong the agent was..(it) takes well-known strategies..I hadn’t thought of before, which means there may still be new ways of playing the game that we haven’t fully explored yet,” said TLO. The agents were then trained for an extra one week, after which they played against MaNa. AlphaStar again won the game. “I was impressed to see AlphaStar pull off advanced moves and different strategies across almost every game, using a very human style of gameplay I wouldn’t have expected..this has put the game in a whole new light for me. We’re all excited to see what comes next,” said MaNa. Public reaction to the news is very positive, with people congratulating the DeepMind team for AlphaStar’s win: https://twitter.com/SebastienBubeck/status/1088524371285557248 https://twitter.com/KaiLashArul/status/1088534443718045696 https://twitter.com/fhuszar/status/1088534423786668042 https://twitter.com/panicsw1tched/status/1088524675540549635 https://twitter.com/Denver_sc2/status/1088525423229759489 To learn about the strategies developed by AlphaStar, check out the complete set of replays of AlphaStar's matches against TLO and MaNa on DeepMind's website. Best game engines for Artificial Intelligence game development Deepmind’s AlphaZero shows unprecedented growth in AI, masters 3 different games Deepmind’s AlphaFold is successful in predicting the 3D structure of a protein making major inroads for AI use in healthcare

Installation of Ogre 3D

Packt
09 Feb 2011
6 min read
OGRE 3D 1.7 Beginner's Guide Downloading and installing Ogre 3D The first step we need to take is to install and configure Ogre 3D. Time for action – downloading and installing Ogre 3D We are going to download the Ogre 3D SDK and install it so that we can work with it later. Go to http://www.ogre3d.org/download/sdk. Download the appropriate package. If you need help picking the right package, take a look at the next What just happened section. Copy the installer to a directory you would like your OgreSDK to be placed in. Double-click on the Installer; this will start a self extractor. You should now have a new folder in your directory with a name similar to OgreSDK_vc9_v1-7-1. Open this folder. It should look similar to the following screenshot: (Move the mouse over the image to enlarge.) What just happened? We just downloaded the appropriate Ogre 3D SDK for our system. Ogre 3D is a cross-platform render engine, so there are a lot of different packages for these different platforms. After downloading we extracted the Ogre 3D SDK. Different versions of the Ogre 3D SDK Ogre supports many different platforms, and because of this, there are a lot of different packages we can download. Ogre 3D has several builds for Windows, one for MacOSX, and one Ubuntu package. There is also a package for MinGW and for the iPhone. If you like, you can download the source code and build Ogre 3D by yourself. This article will focus on the Windows pre-build SDK and how to configure your development environment. If you want to use another operating system, you can look at the Ogre 3D Wiki, which can be found at http://www.ogre3d.org/wiki. The wiki contains detailed tutorials on how to set up your development environment for many different platforms. Exploring the SDK Before we begin building the samples which come with the SDK, let's take a look at the SDK. We will look at the structure the SDK has on a Windows platform. On Linux or MacOS the structure might look different. First, we open the bin folder. There we will see two folders, namely, debug and release. The same is true for the lib directory. The reason is that the Ogre 3D SDK comes with debug and release builds of its libraries and dynamic-linked/shared libraries. This makes it possible to use the debug build during development, so that we can debug our project. When we finish the project, we link our project against the release build to get the full performance of Ogre 3D. When we open either the debug or release folder, we will see many dll files, some cfg files, and two executables (exe). The executables are for content creators to update their content files to the new Ogre version, and therefore are not relevant for us. The OgreMain.dll is the most important DLL. It is the compiled Ogre 3D source code we will load later. All DLLs with Plugin_ at the start of their name are Ogre 3D plugins we can use with Ogre 3D. Ogre 3D plugins are dynamic libraries, which add new functionality to Ogre 3D using the interfaces Ogre 3D offers. This can be practically anything, but often it is used to add features like better particle systems or new scene managers. The Ogre 3D community has created many more plugins, most of which can be found in the wiki. The SDK simply includes the most generally used plugins. The DLLs with RenderSystem_ at the start of their name are, surely not surprisingly, wrappers for different render systems that Ogre 3D supports. In this case, these are Direct3D9 and OpenGL. 
Additional to these two systems, Ogre 3D also has a Direct3D10, Direct3D11, and OpenGL ES(OpenGL for Embedded System) render system. Besides the executables and the DLLs, we have the cfg files. cfg files are config files that Ogre 3D can load at startup. Plugins.cfg simply lists all plugins Ogre 3D should load at startup. These are typically the Direct3D and OpenGL render systems and some additional SceneManagers. quakemap.cfg is a config file needed when loading a level in the Quake3 map format. We don't need this file, but a sample does. resources.cfg contains a list of all resources, like a 3D mesh, a texture, or an animation, which Ogre 3D should load during startup. Ogre 3D can load resources from the file system or from a ZIP file. When we look at resources.cfg, we will see the following lines: Zip=../../media/packs/SdkTrays.zip FileSystem=../../media/thumbnails Zip= means that the resource is in a ZIP file and FileSystem= means that we want to load the contents of a folder. resources.cfg makes it easy to load new resources or change the path to resources, so it is often used to load resources, especially by the Ogre samples. Speaking of samples, the last cfg file in the folder is samples.cfg. We don't need to use this cfg file. Again, it's a simple list with all the Ogre samples to load for the SampleBrowser. But we don't have a SampleBrowser yet, so let's build one. The Ogre 3D samples Ogre 3D comes with a lot of samples, which show all the kinds of different render effects and techniques Ogre 3D can do. Before we start working on our application, we will take a look at the samples to get a first impression of Ogre's capabilities. Time for action – building the Ogre 3D samples To get a first impression of what Ogre 3D can do, we will build the samples and take a look at them. Go to the Ogre3D folder. Open the Ogre3d.sln solution file. Right-click on the solution and select Build Solution. Visual Studio should now start building the samples. This might take some time, so get yourself a cup of tea until the compile process is finished. If everything went well, go into the Ogre3D/bin folder. Execute the SampleBrowser.exe. You should see the following on your screen: Try the different samples to see all the nice features Ogre 3D offers. What just happened? We built the Ogre 3D samples using our own Ogre 3D SDK. After this, we are sure to have a working copy of Ogre 3D.  
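For reference, a minimal plugins.cfg and resources.cfg pair might look like the following. The Zip and FileSystem lines are the ones quoted earlier; the section headers, plugin names, and extra paths are typical of an Ogre 1.7 SDK install and may differ in your copy:

# plugins.cfg -- which plugins Ogre 3D loads at startup
PluginFolder=.
Plugin=RenderSystem_Direct3D9
Plugin=RenderSystem_GL
Plugin=Plugin_ParticleFX
Plugin=Plugin_OctreeSceneManager

# resources.cfg -- where Ogre 3D looks for resources at startup
[Bootstrap]
Zip=../../media/packs/SdkTrays.zip

[General]
FileSystem=../../media/thumbnails
FileSystem=../../media/models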

Animation and Unity3D Physics

Packt
27 Oct 2014
5 min read
In this article, written by K. Aava Rani, author of the book, Learning Unity Physics, you will learn to use Physics in animation creation. We will see that there are several animations that can be easily handled by Unity3D's Physics. During development, you will come to know that working with animations and Physics is easy in Unity3D. You will find the combination of Physics and animation very interesting. We are going to cover the following topics: Interpolate and Extrapolate Cloth component and its uses in animation (For more resources related to this topic, see here.) Developing simple and complex animations As mentioned earlier, you will learn how to handle and create simple and complex animations using Physics, for example, creating a rope animation and hanging ball. Let's start with the Physics properties of a Rigidbody component, which help in syncing animation. Interpolate and Extrapolate Unity3D offers a way that its Rigidbody component can help in the syncing of animation. Using the interpolation and extrapolation properties, we sync animations. Interpolation is not only for animation, it works also with Rigidbody. Let's see in detail how interpolation and extrapolation work: Create a new scene and save it. Create a Cube game object and apply Rigidbody on it. Look at the Inspector panel shown in the following screenshot. On clicking Interpolate, a drop-down list that contains three options will appear, which are None, Interpolate, and Extrapolate. By selecting any one of them, we can apply the feature. In interpolation, the position of an object is calculated by the current update time, moving it backwards one Physics update delta time. Delta time or delta timing is a concept used among programmers in relation to frame rate and time. For more details, check out http://docs.unity3d.com/ScriptReference/Time-deltaTime.html. If you look closely, you will observe that there are at least two Physics updates, which are as follows: Ahead of the chosen time Behind the chosen time Unity interpolates between these two updates to get a position for the update position. So, we can say that the interpolation is actually lagging behind one Physics update. The second option is Extrapolate, which is to use for extrapolation. In this case, Unity predicts the future position for the object. Although, this does not show any lag, but incorrect prediction sometime causes a visual jitter. One more important component that is widely used to animate cloth is the Cloth component. Here, you will learn about its properties and how to use it. The Cloth component To make animation easy, Unity provides an interactive component called Cloth. In the GameObject menu, you can directly create the Cloth game object. Have a look at the following screenshot: Unity also provides Cloth components in its Physics sections. To apply this, let's look at an example: Create a new scene and save it. Create a Plane game object. (We can also create a Cloth game object.) Navigate to Component | Physics and choose InteractiveCloth. As shown in the following screenshot, you will see the following properties in the Inspector panel: Let's have a look at the properties one by one. Blending Stiffness and Stretching Stiffness define the blending and stretching stiffness of the Cloth while Damping defines the damp motion of the Cloth. Using the Thickness property, we decide thickness of the Cloth, which ranges from 0.001 to 10,000. If we enable the Use Gravity property, it will affect the Cloth simulation. 
Similarly, enabling Self Collision allows the Cloth to collide with itself. For a constant or random acceleration, we apply the External Acceleration and Random Acceleration properties, respectively. World Velocity Scale determines how much the character's movement in the world affects the Cloth vertices; the higher the value, the stronger the effect. World Acceleration works similarly. The Interactive Cloth component depends on the Cloth Renderer component, and using many Cloth components in a game reduces its performance. To simulate clothing on characters, we use the Skinned Cloth component. Important points while using the Cloth component The following are the important points to remember while using the Cloth component: Cloth simulation will not generate tangents. So, if you are using a tangent-dependent shader, the lighting will look wrong for a Cloth component that has been moved from its initial position. We cannot directly change the transform of a moving Cloth game object; this is not supported. Disabling the Cloth before changing the transform is supported. The SkinnedCloth component works together with SkinnedMeshRenderer to simulate clothing on a character. As shown in the following screenshot, we can apply Skinned Cloth: As you can see in the following screenshot, there are different properties that we can use to get the desired effect: We can disable or enable the Skinned Cloth component at any time to turn it on or off. Summary In this article, you learned how interpolation and extrapolation work. We also looked at the Cloth component and its uses in animation with an example. Resources for Article: Further resources on this subject: Animations in Cocos2d-x [article] Unity Networking – The Pong Game [article] The physics engine [article]
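As a final aside, the Interpolate setting discussed at the start of this excerpt can also be chosen from code rather than the Inspector drop-down; the following is a small illustrative sketch using the Rigidbody API:

using UnityEngine;

// Small illustrative sketch: choosing the Rigidbody interpolation mode from
// code instead of the Inspector drop-down.
public class InterpolationSetup : MonoBehaviour
{
    void Start()
    {
        Rigidbody body = GetComponent<Rigidbody>();

        // Smooths the rendered position by lagging one physics update behind.
        body.interpolation = RigidbodyInterpolation.Interpolate;

        // Alternatively, predict the next position (wrong guesses can jitter):
        // body.interpolation = RigidbodyInterpolation.Extrapolate;
    }
}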

Unity Networking – The Pong Game

Packt
22 Nov 2013
15 min read
(For more resources related to this topic, see here.) Multiplayer is everywhere. It's a staple of AAA games and small-budget indie offerings alike. Multiplayer games tap into our most basic human desires. Whether it be teaming up with strangers to survive a zombie apocalypse, or showing off your skills in a round of "Capture the Flag" on your favorite map, no artificial intelligence in the world comes close to the feeling of playing with a living, breathing, and thinking human being. Unity3D has a sizable number of third-party networking middleware aimed at developing multiplayer games, and is arguably one of the easiest platforms to prototype multiplayer games. The first networking system most people encounter in Unity is the built-in Unity Networking API . This API simplifies a great many tasks in writing networked code by providing a framework for networked objects rather than just sending messages. This works by providing a NetworkView component, which can serialize object state and call functions across the network. Additionally, Unity provides a Master server, which essentially lets players search among all public servers to find a game to join, and can also help players in connecting to each other from behind private networks. In this article, we will cover: Introducing multiplayer Introducing UDP communication Setting up your own Master server for testing What a NetworkView is Serializing object state Calling RPCs Starting servers and connecting to them Using the Master server API to register servers and browse available hosts Setting up a dedicated server model Loading networked levels Creating a Pong clone using Unity networking Introducing multiplayer games Before we get started on the details of communication over the Internet, what exactly does multiplayer entail in a game? As far as most players are concerned, in a multiplayer game they are sharing the same experience with other players. It looks and feels like they are playing the same game. In reality, they aren't. Each player is playing a separate game, each with its own game state. Trying to ensure that all players are playing the exact same game is prohibitively expensive. Instead, games attempt to synchronize just enough information to give the illusion of a shared experience. Games are almost ubiquitously built around a client-server architecture, where each client connects to a single server. The server is the main hub of the game, ideally the machine for processing the game state, although at the very least it can serve as a simple "middleman" for messages between clients. Each client represents an instance of the game running on a computer. In some cases the server might also have a client, for instance some games allow you to host a game without starting up an external server program. While an MMO ( Massively Multiplayer Online ) might directly connect to one of these servers, many games do not have prior knowledge of the server IPs. For example, FPS games often let players host their own servers. In order to show the user a list of servers they can connect to, games usually employ another server, known as the "Master Server" or alternatively the "Lobby server". This server's sole purpose is to keep track of game servers which are currently running, and report a list of these to clients. Game servers connect to the Master server in order to announce their presence publicly, and game clients query the Master server to get an updated list of game servers currently running. 
Alternatively, this Master server sometimes does not keep track of servers at all. Sometimes games employ "matchmaking", where players connect to the Lobby server and list their criteria for a game. The server places this player in a "bucket" based on their criteria, and whenever a bucket is full enough to start a game, a host is chosen from these players and that client starts up a server in the background, which the other players connect to. This way, the player does not have to browse servers manually and can instead simply tell the game what they want to play. Introducing UDP communication The built-in Unity networking is built upon RakNet . RakNet uses UDP communication for efficiency. UDP ( User Datagram Protocols ) is a simple way to send messages to another computer. These messages are largely unchecked, beyond a simple checksum to ensure that the message has not been corrupted. Because of this, messages are not guaranteed to arrive, nor are they guaranteed to only arrive once (occasionally a single message can be delivered twice or more), or even in any particular order. TCP, on the other hand, guarantees each message to be received just once, and in the exact order they were sent, although this can result in increased latency (messages must be resent several times if they fail to reach the target, and messages must be buffered when received, in order to be processed in the exact order they were sent). To solve this, a reliability layer must be built on top of UDP. This is known as rUDP ( reliable UDP ). Messages can be sent unreliably (they may not arrive, or may arrive more than once), or reliably (they are guaranteed to arrive, only once per message, and in the correct order). If a reliable message was not received or was corrupt, the original sender has to resend the message. Additionally, messages will be stored rather than immediately processed if they are not in order. For example, if you receive messages 1, 2, and 4, your program will not be able to handle those messages until message 3 arrives. Allowing unreliable or reliable switching on a per-message basis affords better overall performance. Messages, such as player position, are better suited to unreliable messages (if one fails to arrive, another one will arrive soon anyway), whereas damage messages must be reliable (you never want to accidentally drop a damage message, and having them arrive in the same order they were sent reduces race conditions). In Unity, you can serialize the state of an object (for example, you might serialize the position and health of a unit) either reliably or unreliably (unreliable is usually preferred). All other messages are sent reliably. Setting up the Master Server Although Unity provide their own default Master Server and Facilitator (which is connected automatically if you do not specify your own), it is not recommended to use this for production. We'll be using our own Master Server, so you know how to connect to one you've hosted yourself. Firstly, go to the following page: http://unity3d.com/master-server/ We're going to download two of the listed server components: the Master Server and the Facilitator as shown in the following screenshot: The servers are provided in full source, zipped. If you are on Windows using Visual Studio Express, open up the Visual Studio .sln solution and compile in the Release mode. Navigate to the Release folder and run the EXE (MasterServer.exe or Facilitator.exe). 
If you are on a Mac, you can either use the included XCode project, or simply run the Makefile (the Makefile works under both Linux and Mac OS X). The Master Server, as previously mentioned, enables our game to show a server lobby to players. The Facilitator is used to help clients connect to each other by performing an operation known as NAT punch-through . NAT is used when multiple computers are part of the same network, and all use the same public IP address. NAT will essentially translate public and private IPs, but in order for one machine to connect to another, NAT punch-through is necessary. You can read more about it here: http://www.raknet.net/raknet/manual/natpunchthrough.html The default port for the Master Server is 23466, and for the Facilitator is 50005. You'll need these later in order to configure Unity to connect to the local Master Server and Facilitator instead of the default Unity-hosted servers. Now that we've set up our own servers, let's take a look at the Unity Networking API itself. NetworkViews and state serialization In Unity, game objects that need to be networked have a NetworkView component. The NetworkView component handles communication over the network, and even helps make networked state serialization easier. It can automatically serialize the state of a Transform, Rigidbody, or Animation component, or in one of your own scripts you can write a custom serialization function. When attached to a game object, NetworkView will generate a NetworkViewID for NetworkView. This ID serves to uniquely identify a NetworkView across the network. An object can be saved as part of a scene with NetworkView attached (this can be used for game managers, chat boxes, and so on), or it can be saved in the project as a prefab and spawned later via Network.Instantiate (this is used to generate player objects, bullets, and so on). Network.Instantiate is the multiplayer equivalent to GameObject.Instantiate —it sends a message over the network to other clients so that all clients spawn the object. It also assigns a network ID to the object, which is used to identify the object across multiple clients (the same object will have the same network ID on every client). A prefab is a template for a game object (such as the player object). You can use the Instantiate methods to create a copy of the template in the scene. Spawned network game objects can also be destroyed via Network.Destroy. It is the multiplayer counterpart of GameObject.Destroy. It sends a message to all clients so that they all destroy the object. It also deletes any RPC messages associated with that object. NetworkView has a single component that it will serialize. This can be a Transform, a Rigidbody, an Animation, or one of your own components that has an OnSerializeNetworkView function. Serialized values can either be sent with the ReliableDeltaCompressed option, where values are always sent reliably and compressed to include only changes since the last update, or they can be sent with the Unreliable option, where values are not sent reliably and always include the full values (not the change since the last update, since that would be impossible to predict over UDP). Each method has its own advantages and disadvantages. If data is constantly changing, such as player position in a first person shooter, in general Unreliable is preferred to reduce latency. If data does not often change, use the ReliableDeltaCompressed option to reduce bandwidth (as only changes will be serialized). 
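Before moving on, here is a minimal sketch of the spawning flow described above, using Network.Instantiate and Network.Destroy. The prefab field, group number, and class name are example values of our own rather than anything prescribed by the article:

using UnityEngine;

// Minimal sketch: spawn a networked player object when we join a server,
// and remove it again when we disconnect.
public class ExamplePlayerSpawner : MonoBehaviour
{
    // A prefab with a NetworkView attached (example asset, assigned in the Inspector).
    public GameObject playerPrefab;

    private GameObject spawnedPlayer;

    void OnConnectedToServer()
    {
        // Network.Instantiate spawns the object on every connected client
        // and assigns it a network ID. The last argument is the RPC group (0 here).
        spawnedPlayer = (GameObject)Network.Instantiate(
            playerPrefab, transform.position, Quaternion.identity, 0 );
    }

    void OnDisconnectedFromServer( NetworkDisconnection info )
    {
        // Network.Destroy removes the object on all clients and clears its buffered RPCs.
        if( spawnedPlayer != null )
            Network.Destroy( spawnedPlayer );
    }
}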
NetworkView can also call methods across the network via Remote Procedure Calls (RPC). RPCs are always completely reliable in Unity Networking, although some networking libraries, such as uLink or TNet, allow you to send unreliable RPCs.

Writing a custom state serializer

While initially a game might simply serialize Transform or Rigidbody for testing, eventually it is often necessary to write a custom serialization function. This is a surprisingly easy task. Here is a script that sends an object's position over the network:

using UnityEngine;
using System.Collections;

public class ExampleUnityNetworkSerializePosition : MonoBehaviour
{
    public void OnSerializeNetworkView( BitStream stream, NetworkMessageInfo info )
    {
        // we are currently writing information to the network
        if( stream.isWriting )
        {
            // send the object's position
            Vector3 position = transform.position;
            stream.Serialize( ref position );
        }
        // we are currently reading information from the network
        else
        {
            // read the first Vector3 and store it in 'position'
            Vector3 position = Vector3.zero;
            stream.Serialize( ref position );

            // set the object's position to the value we were sent
            transform.position = position;
        }
    }
}

Most of the work is done with BitStream. This is used to check whether NetworkView is currently writing the state or reading it from the network. Depending on whether it is reading or writing, stream.Serialize behaves differently. If NetworkView is writing, the value will be sent over the network. However, if NetworkView is reading, the value will be read from the network and saved in the referenced variable (hence the ref keyword, which passes the Vector3 by reference rather than by value).

Using RPCs

RPCs are useful for single, self-contained messages that need to be sent, such as a character firing a gun or a player saying something in chat. In Unity, RPCs are methods marked with the [RPC] attribute. They can be called by name via networkView.RPC( "methodName", … ). For example, the following script prints to the console on all machines when the space key is pressed:

using UnityEngine;
using System.Collections;

public class ExampleUnityNetworkCallRPC : MonoBehaviour
{
    void Update()
    {
        // important: make sure not to run if this networkView is not ours
        if( !networkView.isMine )
            return;

        // if the space key is pressed, call the RPC for everybody
        if( Input.GetKeyDown( KeyCode.Space ) )
            networkView.RPC( "testRPC", RPCMode.All );
    }

    [RPC]
    void testRPC( NetworkMessageInfo info )
    {
        // log the IP address of the machine that called this RPC
        Debug.Log( "Test RPC called from " + info.sender.ipAddress );
    }
}

Also note the use of NetworkView.isMine to determine ownership of an object. All scripts will run 100 percent of the time regardless of whether your machine owns the object or not, so you have to be careful to avoid letting some logic run on remote machines; for example, player input code should only run on the machine that owns the object. RPCs can either be sent to a number of players at once, or to a specific player. You can either pass an RPCMode to specify which group of players should receive the message, or a specific NetworkPlayer to send the message to. You can also specify any number of parameters to be passed to the RPC method.
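As an illustration of passing parameters (the class, method, and parameter names here are our own, not from the article), a chat-style RPC that carries a string and can be sent either to a group or to one specific player might look like this:

using UnityEngine;

// Minimal sketch: an RPC that carries parameters. Allowed parameter types in
// Unity's legacy networking include int, float, string, Vector3, Quaternion,
// NetworkPlayer, and NetworkViewID.
public class ExampleChatRPC : MonoBehaviour
{
    // Broadcast a chat line to every connected player.
    public void Say( string message )
    {
        networkView.RPC( "ReceiveChat", RPCMode.All, message );
    }

    // Send a chat line to a single player instead of a group.
    public void Whisper( NetworkPlayer target, string message )
    {
        networkView.RPC( "ReceiveChat", target, message );
    }

    [RPC]
    void ReceiveChat( string message, NetworkMessageInfo info )
    {
        // the trailing NetworkMessageInfo is filled in automatically by Unity
        Debug.Log( info.sender.ipAddress + " says: " + message );
    }
}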
RPCMode includes the following entries:

All (the RPC is called for everyone)
AllBuffered (the RPC is called for everyone, and then buffered for when new players connect, until the object is destroyed)
Others (the RPC is called for everyone except the sender)
OthersBuffered (the RPC is called for everyone except the sender, and then buffered for when new players connect, until the object is destroyed)
Server (the RPC is sent to the host machine)

Initializing a server

The first thing you will want to set up is hosting games and joining games. To initialize a server on the local machine, call Network.InitializeServer. This method takes three parameters: the number of allowed incoming connections, the port to listen on, and whether to use NAT punch-through. The following script initializes a server on port 25005 that allows 8 clients to connect:

using UnityEngine;
using System.Collections;

public class ExampleUnityNetworkInitializeServer : MonoBehaviour
{
    void OnGUI()
    {
        if( GUILayout.Button( "Launch Server" ) )
        {
            LaunchServer();
        }
    }

    // launch the server
    void LaunchServer()
    {
        // Start a server that enables NAT punch-through,
        // listens on port 25005,
        // and allows 8 clients to connect
        Network.InitializeServer( 8, 25005, true );
    }

    // called when the server has been initialized
    void OnServerInitialized()
    {
        Debug.Log( "Server initialized" );
    }
}

You can also optionally enable an incoming password (useful for private games) by setting Network.incomingPassword to a password string of the player's choice, and initialize a general-purpose security layer by calling Network.InitializeSecurity(). Both of these should be set up before actually initializing the server.

Connecting to a server

To connect to a server you know the IP address of, you can call Network.Connect. The following script allows the player to enter an IP, a port, and an optional password, and attempts to connect to the server:

using UnityEngine;
using System.Collections;

public class ExampleUnityNetworkingConnectToServer : MonoBehaviour
{
    private string ip = "";
    private string port = "";
    private string password = "";

    void OnGUI()
    {
        GUILayout.Label( "IP Address" );
        ip = GUILayout.TextField( ip, GUILayout.Width( 200f ) );

        GUILayout.Label( "Port" );
        port = GUILayout.TextField( port, GUILayout.Width( 50f ) );

        GUILayout.Label( "Password (optional)" );
        password = GUILayout.PasswordField( password, '*', GUILayout.Width( 200f ) );

        if( GUILayout.Button( "Connect" ) )
        {
            int portNum = 25005;

            // failed to parse the port number - a more robust solution is to limit
            // input to numbers only; a number of examples can be found on the Unity forums
            if( !int.TryParse( port, out portNum ) )
            {
                Debug.LogWarning( "Given port is not a number" );
            }
            // try to initiate a direct connection to the server
            else
            {
                Network.Connect( ip, portNum, password );
            }
        }
    }

    void OnConnectedToServer()
    {
        Debug.Log( "Connected to server!" );
    }

    void OnFailedToConnect( NetworkConnectionError error )
    {
        Debug.Log( "Failed to connect to server: " + error.ToString() );
    }
}

Connecting to the Master Server

While we could just allow the player to enter IP addresses to connect to servers (and many games do, such as Minecraft), it's much more convenient to let the player browse a list of public servers. This is what the Master Server is for. Now that you can start up a server and connect to it, let's take a look at how to connect to the Master Server you downloaded earlier. First, make sure both the Master Server and Facilitator are running.
I will assume you are running them on your local machine (IP is 127.0.0.1), but of course you can run these on a different computer and use that machine's IP address. Keep in mind, if you want the Master Server publicly accessible, it must be installed on a machine with a public IP address (it cannot be in a private network). Let's configure Unity to use our Master Server rather than the Unity-hosted test server. The following script configures the Master Server and Facilitator to connect to a given IP (by default 127.0.0.1):

using UnityEngine;
using System.Collections;

public class ExampleUnityNetworkingConnectToMasterServer : MonoBehaviour
{
    // Assuming Master Server and Facilitator are on the same machine
    public string MasterServerIP = "127.0.0.1";

    void Awake()
    {
        // set the IP and port of the Master Server to connect to
        MasterServer.ipAddress = MasterServerIP;
        MasterServer.port = 23466;

        // set the IP and port of the Facilitator to connect to
        Network.natFacilitatorIP = MasterServerIP;
        Network.natFacilitatorPort = 50005;
    }
}
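The excerpt stops after pointing Unity at our own Master Server. As a rough sketch of the next step, which the article's topic list calls registering servers and browsing available hosts, the host registration and lobby polling could look like the following; the game type string and the simple OnGUI lobby are example choices of our own:

using UnityEngine;

// Minimal sketch: announce our server to the Master Server, and on clients,
// request and poll the list of registered hosts.
public class ExampleMasterServerUsage : MonoBehaviour
{
    // An arbitrary identifier for our game; clients filter the host list by it.
    private const string GameTypeName = "ExamplePongClone";

    // Called on the server once Network.InitializeServer has finished.
    void OnServerInitialized()
    {
        MasterServer.RegisterHost( GameTypeName, "My Pong Server", "Open to all" );
    }

    void OnGUI()
    {
        if( GUILayout.Button( "Refresh Host List" ) )
        {
            MasterServer.RequestHostList( GameTypeName );
        }

        // PollHostList returns whatever hosts have been received so far.
        foreach( HostData host in MasterServer.PollHostList() )
        {
            string label = host.gameName + " (" + host.connectedPlayers + "/" + host.playerLimit + ")";
            if( GUILayout.Button( label ) )
            {
                // Connecting via HostData lets Unity use NAT punch-through when needed.
                Network.Connect( host );
            }
        }
    }
}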

Introduction to Blender 2.5: Color Grading

Packt
11 Nov 2010
11 min read
Blender 2.5 Lighting and Rendering Bring your 3D world to life with lighting, compositing, and rendering Render spectacular scenes with realistic lighting in any 3D application using interior and exterior lighting techniques Give an amazing look to 3D scenes by applying light rigs and shadow effects Apply color effects to your scene by changing the World and Lamp color values A step-by-step guide with practical examples that help add dimensionality to your scene        I would like to thank a few people who have made this all possible and I wouldn't be inspired doing this now without their great aid: To Francois Tarlier (http://www.francois-tarlier.com) for patiently bearing with my questions, for sharing his thoughts on color grading with Blender, and for simply developing things to make these things existent in Blender. A clear example of this would be the addition of the Color Balance Node in Blender 2.5's Node Compositor (which I couldn't live without). To Matt Ebb (http://mke3.net/) for creating tools to make Blender's Compositor better and for supporting the efforts of making one. And lastly, to Stu Maschwitz (http://www.prolost.com) for his amazing tips and tricks on color grading. Now, for some explanation. Color grading is usually defined as the process of altering and/or enhancing the colors of a motion picture or a still image. Traditionally, this happens by altering the subject photo-chemically (color timing) in a laboratory. But with modern tools and techniques, color grading can now be achieved digitally. Software like Apple's Final Cut Pro, Adobe's After Effects, Red Giant Software’s Magic Bullet Looks, etc. Luckily, the latest version of Blender has support for color grading by using a selection and plethora of nodes that will then process our input accordingly. However, I really want to stress here that often, it doesn't matter what tools you use, it all really depends on how crafty and artistic you are, regardless of whatever features your application has. Normally, color grading could also be related to color correction in some ways, however strictly speaking, color correction deals majorly on a “correctional” aspect (white balancing, temperature changes, etc.) rather than a specific alteration that would otherwise be achieved when applied with color grading. With color grading, we can turn a motion picture or still image into different types of mood and time of the day, we can fake lens filters and distortions, highlight part of an image via bright spotting, remove red eye effects, denoise an image, add glares, and a lot more. With all the things mentioned above, they can be grouped into three major categories, namely: Color Balancing Contrasting Stylization Material Variation Compensation With Color Balancing, we are trying to fix tint errors and colorizations that occurred during hardware post-production, something that would happen when recording the data into, say, a camera's memory right after it has been internally processed. Or sometimes, this could also be applied to fix some white balance errors that were overlooked while shooting or recording. These are, however, non-solid rules that aren't followed all the time. We can, however, use color balancing to simply correct the tones of an image or frame such that the human skin will look more natural with respect to the scene it is located at. Contrasting deals with how subject/s are emphasized with respect to the scene it is located at. It could also refer to vibrance and high dynamic imaging. 
It could also be just a general method of “popping out” necessary details present in a frame. Stylization refers to effects that are added on top of the original footage/image after applying color correction, balancing, etc. Some examples would be: dreamy effect, day to night conversion, retro effect, sepia, and many more. And last but not the least is Material Variation Compensation. Often, as artists, there will come a point in time that after hours and hours of waiting for your renders to finish, you will realize at the last minute that something is just not right with how the materials are set up. If you're on a tight deadline, rerendering the entire sequence or frame is not an option. Thankfully, but not absolute all the time, we can compensate this by using color grading techniques to specifically tell Blender to adjust just a portion of an image that looks wrong and save us a ton of time if we were to rerender again. However, with the vast topics that Color Grading has, I can only assume that I will only be leading you to the introductory steps to get you started and for you to have a basis for your own experiments. To have a view of what we could possibly discuss, you can check some of the videos I've done here: http://vimeo.com/13262256 http://vimeo.com/13995077 And to those of you interested with some presets, Francois Tarlier has provided some in this page http://code.google.com/p/ft-projects/downloads/list. Outlining some of the aspects that we'll go through in Part 1 of this article, here's a list of the things we will be doing: Loading Image Files in the Compositor Loading Sequence Files in the Compositor Loading Movie Files in the Compositor Contrasting with Color Curves Colorizing with Color Curves Color Correcting with Color Curves And before we start, here are some prerequisites that you should have: Latest Blender 2.5 version (grab one from http://www.graphicall.org or from the latest svn updates) Movies, Footages, Animations (check http://www.stockfootageforfree.com for free stock footages) Still Images Intermediate Blender skill level Initialization With all the prerequisites met and before we get our hands dirty, there are some things we need to do. Fire up Blender 2.5 and you'll notice (by default) that Blender starts with a cool splash screen and with it on the upper right hand portion, you can see the Blender version number and the revision number. As much as possible, you would want to have a similar revision number as what we'll be using here, or better yet, a newer one. This will ensure that tools we'll be using are up to date, bug free, and possibly feature-pumped. Move the mouse over the image to enlarge it. (Blender 2.5 Initial Startup Screen) After we have ensured we have the right version (and revision number) of Blender, it's time to set up our scenes and screens accordingly to match our ideal workflow later on. Before starting any color grading session, make sure you have a clear plan of what you want to achieve and to do with your footages and images. This way you can eliminate the guessing part and save a lot of time in the process. Next step is to make sure we are in the proper screen for doing color grading. You'll see in the menu bar at the top that we are using the “Default” screen. This is useful for general-purpose Blender workflow like Modeling, Lighting, and Shading setup. To harness Blender's intuitive interface, we'll go ahead and change this screen to something more obvious and useful. 
(Screen Selection Menu)

Click the button on the left of the screen selection menu and you'll see a list of screens to choose from. For this purpose, we'll choose "Compositing". After enabling the screen, you'll notice that Blender's default layout has changed to something more varied, but not dramatically so.

(Choosing the Compositing Screen)

The Compositing screen enables us to work seamlessly with color grading in that, by default, it has everything we need to start our session. By default, the compositing screen has the Node Editor on top, the UV/Image Editor on the lower left-hand side, and the 3D View on the lower right-hand side. On the far right, matching the height of these three windows, is the Properties Window, and lastly (but not so obvious) is the Timeline Window, which sits just below the Properties Window in the far lower right corner of your screen. Since we won't be digging too much into Blender's 3D aspect here, we can go ahead and ignore the lower right view (3D View), or better yet, let's merge the UV/Image Editor into the 3D View such that the UV/Image Editor covers most of the lower half of the screen (as seen below). You could also merge the Properties Window and the Timeline Window such that the only thing present on the far right-hand side is the Properties Window.

(Merging the Screen Windows)

(Merged Screens)

Under the Node Editor Window, click on and enable Use Nodes. This tells Blender that we'll be using the node system in conjunction with the settings we'll be enabling later on.

(Enabling "Use Nodes")

After clicking on Use Nodes, you'll notice nodes start appearing in the Node Editor Window, namely the Render Layer and Composite nodes. This is a good hint that Blender now recognizes the nodes as part of its rendering process. But that's not enough yet. In the far right window (Properties Window), look for the Shading and Post Processing tabs under Render. If you can't see some parts, just scroll through until you do.

(Locating the Shading and Post Processing Tabs)

Under the Shading tab, disable all check boxes except for Texture. This ensures that we won't get any odd output later on, and it also simplifies debugging if we do run into errors.

(Disabling Shading Options)

Next, let's proceed to the Post Processing tab and disable Sequencer. Then let's make sure that Compositing is enabled and checked.

(Disabling Post Processing Options)

That's it for now, but we'll get back to the Properties Window whenever necessary. Let's move our attention back to the Node Editor Window above. The same keyboard shortcuts apply here as in the 3D Viewport. To review, here are the shortcuts we might find helpful while working in the Node Editor Window:

Select Node: Right Mouse Button
Confirm: Left Mouse Button
Zoom In: Mouse Wheel Up / CTRL + Mouse Wheel Drag
Zoom Out: Mouse Wheel Down / CTRL + Mouse Wheel Drag
Pan Screen: Middle Mouse Drag
Move Node: G
Box Selection: B
Delete Node: X
Make Links: F
Cut Links: CTRL + Left Mouse Button
Hide Node: H
Add Node: SHIFT A
Toggle Full Screen: SHIFT SPACE

Now, let's select the Render Layer node and delete it. We won't be needing it for now, since we're not directly working with Blender's internal render layer system yet; we'll be focusing solely on loading images and footage for grading work. Select the Composite node and move it to the far right, just to get it out of view for now.
(Deleting the Render Layer Node and Moving the Composite Node) Loading image files in the compositor Blender's Node Compositor can upload pretty much any image format you have. Most of the time, you might want only to work with JPG, PNG, TIFF, and EXR file formats. But choose what you prefer, just be aware though of the image format's compression features. For most of my compositing tasks, I commonly use PNG, it being a lossless type of image, meaning, even after processing it a few times, it retains its original quality and doesn't compress which results in odd results, like in a JPG file. However, if you really want to push your compositing project and use data such as z-buffer (depth), etc. you'll be good with EXR, which is one of the best out there, but it creates such huge file sizes depending on the settings you have. Play around and see which one is most comfortable with you. For ease, we'll load up JPG images for now. With the Node Editor Window active, left click somewhere on an empty space on the left side, imagine placing an imaginative cursor there with the left mouse button. This will tell Blender to place here the node we'll be adding. Next, press SHIFT A. This will bring up the add menu. Choose Input then click on Image. (Adding an Image Node) Most often, when you have the Composite Node selected before performing this action, Blender will automatically connect and link the newly added node to the composite node. If not, you can connect the Image Node's image output node to the Composite Node's image input node. (Image Node Connected to Composite Node) To load images into the Compositor, simply click on Open on the the Image Node and this will bring up a menu for you to browse on. Once you've chosen the desired image, you can double left click on the image or single click then click on Open. After that is done, you'll notice the Image Node's and the Composite Node's preview changed accordingly. (Image Loaded in the Compositor) This image is now ready for compositing work.

Introduction to HLSL language

Packt
28 Jun 2013
8 min read
(For more resources related to this topic, see here.)

Distance/Height-based fog

Distance/Height-based fog is an approximation of the fog you would normally see outdoors. Even on the clearest of days, you should be able to see some fog far in the distance. The main benefit of adding the fog effect is that it helps the viewer estimate how far away different elements in the scene are, based on the amount of fog covering them. In addition to the realism this effect adds, it has the additional benefit of hiding the end of the visible range. Without fog to cover the far plane, it becomes easier to notice when far scene elements are clipped by the camera's far plane. By tuning the height of the fog you can also add a darker atmosphere to your scene, as demonstrated by the following image:

This recipe will demonstrate how distance/height-based fog can be added to our deferred directional light calculation. See the How it works… section for details about adding the effect to other elements of your rendering code.

Getting ready

We will be passing additional fog-specific parameters to the directional light's pixel shader through a new constant buffer. The reason for separating the fog values into their own constant buffer is to allow the same parameters to be used by any other shader that takes fog into account. To create the new constant buffer, use the following buffer descriptor:

Usage: D3D11_USAGE_DYNAMIC
BindFlags: D3D11_BIND_CONSTANT_BUFFER
CPUAccessFlags: D3D11_CPU_ACCESS_WRITE
ByteWidth: 48

The rest of the descriptor fields should be set to zero. All the fog calculations will be handled in the deferred directional light pixel shader.

How to do it...

Our new fog constant buffer is declared in the pixel shader as follows:

cbuffer cbFog : register( b2 )
{
   float3 FogColor          : packoffset( c0 );
   float FogStartDepth      : packoffset( c0.w );
   float3 FogHighlightColor : packoffset( c1 );
   float FogGlobalDensity   : packoffset( c1.w );
   float3 FogSunDir         : packoffset( c2 );
   float FogHeightFalloff   : packoffset( c2.w );
}

The helper function used for calculating the fog is as follows:

float3 ApplyFog(float3 originalColor, float eyePosY, float3 eyeToPixel)
{
   float pixelDist = length( eyeToPixel );
   float3 eyeToPixelNorm = eyeToPixel / pixelDist;

   // Find the fog starting distance to pixel distance
   float fogDist = max(pixelDist - FogStartDepth, 0.0);

   // Distance based fog intensity
   float fogHeightDensityAtViewer = exp( -FogHeightFalloff * eyePosY );
   float fogDistInt = fogDist * fogHeightDensityAtViewer;

   // Height based fog intensity
   float eyeToPixelY = eyeToPixel.y * ( fogDist / pixelDist );
   float t = FogHeightFalloff * eyeToPixelY;
   const float thresholdT = 0.01;
   float fogHeightInt = abs( t ) > thresholdT ?
      ( 1.0 - exp( -t ) ) / t : 1.0;

   // Combine both factors to get the final factor
   float fogFinalFactor = exp( -FogGlobalDensity * fogDistInt * fogHeightInt );

   // Find the sun highlight and use it to blend the fog color
   float sunHighlightFactor = saturate(dot(eyeToPixelNorm, FogSunDir));
   sunHighlightFactor = pow(sunHighlightFactor, 8.0);
   float3 fogFinalColor = lerp(FogColor, FogHighlightColor, sunHighlightFactor);

   return lerp(fogFinalColor, originalColor, fogFinalFactor);
}

The ApplyFog function takes the color without fog, along with the camera height and the vector from the camera to the pixel the color belongs to, and returns the pixel color with fog applied.
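Note that the three float3/float pairs in cbFog pack into three 16-byte constant registers, which is where the ByteWidth of 48 in the descriptor comes from. Purely as an illustration of that packing (the book's host-side code is C++; this C# mirror is our own sketch and not part of the recipe), the CPU-side data you upload has to follow the same layout:

using System.Runtime.InteropServices;

// Illustrative only: a CPU-side mirror of cbFog. Each group of four floats below
// fills one 16-byte constant register, matching the packoffset layout (3 x 16 = 48 bytes).
[StructLayout( LayoutKind.Sequential, Pack = 16, Size = 48 )]
struct FogConstants
{
    public float FogColorR, FogColorG, FogColorB;              // c0.xyz
    public float FogStartDepth;                                // c0.w

    public float FogHighlightR, FogHighlightG, FogHighlightB;  // c1.xyz
    public float FogGlobalDensity;                             // c1.w

    public float FogSunDirX, FogSunDirY, FogSunDirZ;           // c2.xyz
    public float FogHeightFalloff;                             // c2.w
}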
To add fog to the deferred directional light, change the directional light entry point to the following code:

float4 DirLightPS(VS_OUTPUT In) : SV_TARGET
{
   // Unpack the GBuffer
   float2 uv = In.Position.xy; //In.UV.xy;
   SURFACE_DATA gbd = UnpackGBuffer_Loc(int3(uv, 0));

   // Convert the data into the material structure
   Material mat;
   MaterialFromGBuffer(gbd, mat);

   // Reconstruct the world position
   float2 cpPos = In.UV.xy * float2(2.0, -2.0) - float2(1.0, -1.0);
   float3 position = CalcWorldPos(cpPos, gbd.LinearDepth);

   // Get the AO value
   float ao = AOTexture.Sample(LinearSampler, In.UV);

   // Calculate the light contribution
   float4 finalColor;
   finalColor.xyz = CalcAmbient(mat.normal, mat.diffuseColor.xyz) * ao;
   finalColor.xyz += CalcDirectional(position, mat);
   finalColor.w = 1.0;

   // Apply the fog to the final color
   float3 eyeToPixel = position - EyePosition;
   finalColor.xyz = ApplyFog(finalColor.xyz, EyePosition.y, eyeToPixel);

   return finalColor;
}

With this change, we apply the fog on top of the lit pixel's color and return it to the light accumulation buffer.

How it works…

Fog is probably the first volumetric effect to have been implemented using a programmable pixel shader once those became commonly supported by GPUs. Originally, fog was implemented in hardware (the fixed pipeline) and only took distance into account. As GPUs became more powerful, the hardware distance-based fog was replaced by a programmable version that also takes things such as height and the sun's effect into account.

In reality, fog is just particles in the air that absorb and reflect light. A ray of light traveling from a position in the scene to the camera interacts with the fog particles and gets changed by those interactions. The further this ray has to travel before it reaches the camera, the larger the chance that it will get either partially or fully absorbed. In addition to absorption, a ray traveling in a different direction may get reflected towards the camera and add to the intensity of the original ray. Based on the amount of particles in the air and the distance a ray has to travel, the light reaching our camera may contain more reflection and less of the original ray, which leads to the homogenous color we perceive as fog.

The parameters used in the fog calculation are:

FogColor: The fog base color (this color's brightness should match the overall intensity so it won't get blown out by the bloom)
FogStartDepth: The distance from the camera at which the fog starts to blend in
FogHighlightColor: The color used for highlighting pixels whose pixel-to-camera vector is close to parallel with the camera-to-sun vector
FogGlobalDensity: Density factor for the fog (the higher this is, the denser the fog will be)
FogSunDir: Normalized sun direction
FogHeightFalloff: Height falloff value (the higher this value, the lower the height at which the fog disappears)

When tuning the fog values, make sure the ambient colors match the fog. This type of fog is designed for outdoor environments, so you should probably disable it when lighting interiors. You may have noticed that the fog requires the sun direction. We already store the inverted sun direction for the directional light calculation; you can remove that value from the directional light constant buffer and use the fog vector instead to avoid duplicating values.

This recipe implements the fog using the exponential function. The reason for using the exponential function is its asymptote on the negative side of its graph.
Our fog implementation uses that asymptote to blend the fog in from the starting distance. As a reminder, on the negative side of its graph the exponential function rises smoothly from zero towards 1 at zero.

The ApplyFog function starts off by finding the distance our ray has traveled in the fog (fogDist). In order to take the fog's height into account, we also look at how far the ray travels vertically inside the fog, which gives us a height-based intensity to go with the distance-based one (fogDistInt and fogHeightInt). Both values are negated and multiplied by the fog density to be used as the exponent; we negate them because it's more convenient to use the negative side of the exponential function's graph, which is limited to the range 0 to 1. As the function equals 1 when the exponent is 0, we have to invert the result. At this point we have one factor for the height, which gets larger the further the ray travels vertically into the fog, and a factor that gets larger the further the ray travels in the fog in any direction. Combining both gives the overall fog effect on the ray: the higher the result, the more of the original ray got absorbed and the more light got reflected towards the camera in its direction (this is stored in fogFinalFactor).

Before we can compute the final color value, we need to find the fog's color based on the camera and sun directions. We assume that the sun's intensity is high enough that more of its light gets reflected towards the camera when the camera direction and sun direction are close to parallel. We use the dot product between the two to determine the angle and narrow the result by raising it to the power of 8 (the result is stored in sunHighlightFactor). This result is used to lerp between the fog base color and the fog color highlighted by the sun. Finally, we use the fog factor to linearly interpolate between the input color and the fog color. The resulting color is then returned from the helper function and stored in the light accumulation buffer.

As you can see, the changes to the directional light entry point are very minor, as most of the work is handled inside the helper function ApplyFog. Adding the fog calculation to the rest of the deferred and forward light sources should be pretty straightforward. One thing to take into consideration is that fog also has to be applied to scene elements that don't get lit, like the sky or emissive elements. Again, all you have to do is call ApplyFog to get the final color with the fog effect.

Summary

In this article, we learned how to apply a fog effect and add atmosphere to our scenes.

Resources for Article:

Further resources on this subject:

Creating and Warping 3D Text with Away3D 3.6 [Article]
3D Vector Drawing and Text with Papervision3D: Part 1 [Article]
3D Vector Drawing and Text with Papervision3D: Part 2 [Article]

Learning NGUI for Unity

Packt
08 May 2015
2 min read
NGUI is a fantastic GUI toolkit for Unity 3D, allowing fast and simple GUI creation and powerful functionality, with practically zero code required. NGUI is a robust UI system that is both powerful and optimized. It is an effective plugin for Unity, which gives you the power to create beautiful and complex user interfaces while reducing performance costs.

Compared to Unity's built-in GUI features, NGUI is much more powerful and optimized. GUI development in Unity requires users to create UI features by scripting lines that display labels, textures, and other UI elements on the screen. These lines have to be written inside a special function, OnGUI(), that is called every frame. However, this is no longer necessary with NGUI, as it makes the GUI process much simpler.

This book by Charles Pearson, the author of Learning NGUI for Unity, will help you leverage the power of NGUI for Unity to create stunning mobile and PC games and user interfaces. Based on this, the book covers the following topics:

Getting started with NGUI
Creating NGUI widgets
Enhancing your UI
C# with NGUI
Atlas and font customization
The in-game user interface
3D user interface
Going mobile
Screen sizes and aspect ratios
User experience and best practices

This book is a practical tutorial that will guide you through creating a fully functional and localized main menu along with 2D and 3D in-game user interfaces. The book starts by teaching you about NGUI's workflow and creating a basic UI, before gradually moving on to building widgets and enhancing your UI. You will then switch to the Android platform to take care of the different issues mobile devices may encounter. By the end of this book, you will have the knowledge to create ergonomic user interfaces for your existing and future PC or mobile games and applications developed with Unity 3D and NGUI. The best part of this book is that it covers the user experience and also talks about the best practices to follow when using NGUI for Unity.

If you are a Unity 3D developer who wants to create an effective and user-friendly GUI using NGUI for Unity, then this book is for you. Prior knowledge of C# scripting is expected; however, no knowledge of NGUI is required.

Resources for Article:

Further resources on this subject:

Unity Networking – The Pong Game [article]
Unit and Functional Tests [article]
Components in Unity [article]

Creating a Coin Material

Packt
10 Mar 2016
7 min read
In this article by Alan Thorn, the author of Unity 5.x By Example, the coin object, as a concept, represents a basic or fundamental unit in our game logic because the player character should be actively searching the level looking for coins to collect before a timer runs out. This means that the coin is more than mere appearance; its purpose in the game is not simply eye candy, but is functional. It makes an immense difference to the game outcome whether the coin is collected by the player or not. Therefore, the coin object, as it stands, is lacking in two important respects. Firstly, it looks dull and grey—it doesn't really stand out and grab the player's attention. Secondly, the coin cannot actually be collected yet. Certainly, the player can walk into the coin, but nothing appropriate happens in response. Figure 2.1: The coin object so far The completed CollectionGame project, as discussed in this article and the next, can be found in the book companion files in the Chapter02/CollectionGame folder. (For more resources related to this topic, see here.) In this section, we'll focus on improving the coin appearance using a material. A material defines an algorithm (or instruction set) specifying how the coin should be rendered. A material doesn't just say what the coin should look like in terms of color; it defines how shiny or smooth a surface is, as opposed to rough and diffuse. This is important to recognize and is why a texture and material refer to different things. A texture is simply an image file loaded in memory, which can be wrapped around a 3D object via its UV mapping. In contrast, a material defines how one or more textures can be combined together and applied to an object to shape its appearance. To create a new material asset in Unity, right-click on an empty area in the Project panel, and from the context menu, choose Create | Material. See Figure 2.2. You can also choose Assets | Create | Material from the application menu. Figure 2.2: Creating a material A material is sometimes called a Shader. If needed, you can create custom materials using a Shader Language or you can use a Unity add-on, such as Shader Forge. After creating a new material, assign it an appropriate name from the Project panel. As I'm aiming for a gold look, I'll name the material mat_GoldCoin. Prefixing the asset name with mat helps me know, just from the asset name, that it's a material asset. Simply type a new name in the text edit field to name the material. You can also click on the material name twice to edit the name at any time later. See Figure 2.3: Figure 2.3: Naming a material asset Next, select the material asset in the Project panel, if it's not already selected, and its properties display immediately in the object Inspector. There are lots of properties listed! In addition, a material preview displays at the bottom of the object Inspector, showing you how the material would look, based on its current settings, if it were applied to a 3D object, such as a sphere. As you change material settings from the Inspector, the preview panel updates automatically to reflect your changes, offering instant feedback on how the material would look. See the following screenshot: Figure 2.4: Material properties are changed from the Object Inspector Let's now create a gold material for the coin. When creating any material, the first setting to choose is the Shader type because this setting affects all other parameters available to you. 
The Shader type determines which algorithm will be used to shade your object. There are many different choices, but most material types can be approximated using either Standard or Standard (Specular setup). For the gold coin, we can leave the Shader as Standard. See the following screenshot: Figure 2.5: Setting the material Shader type Right now, the preview panel displays the material as a dull grey, which is far from what we need. To define a gold color, we must specify the Albedo. To do this, click on the Albedo color slot to display a Color picker, and from the Color picker dialog, select a gold color. The material preview updates in response to reflect the changes. Refer to the following screenshot: Figure 2.6: Selecting a gold color for the Albedo channel The coin material is looking better than it did, but it's still supposed to represent a metallic surface, which tends to be shiny and reflective. To add this quality to our material, click and drag the Metallic slider in the object Inspector to the right-hand side, setting its value to 1. This indicates that the material represents a fully metal surface as opposed to a diffuse surface such as cloth or hair. Again, the preview panel will update to reflect the change. See Figure 2.7: Figure 2.7: Creating a metallic material We now have a gold material created, and it's looking good in the preview panel. If needed, you can change the kind of object used for a preview. By default, Unity assigns the created material to a sphere, but other primitive objects are allowed, including cubes, cylinders, and torus. This helps you preview materials under different conditions. You can change objects by clicking on the geometry button directly above the preview panel to cycle through them. See Figure 2.8: Figure 2.8: Previewing a material on an object When your material is ready, you can assign it directly to meshes in your scene just by dragging and dropping. Let's assign the coin material to the coin. Click and drag the material from the Project panel to the coin object in the scene. On dropping the material, the coin will change appearance. See Figure 2.9: Figure 2.9: Assigning the material to the coin You can confirm that material assignment occurred successfully and can even identify which material was assigned by selecting the coin object in the scene and viewing its Mesh Renderer component from the object Inspector. The Mesh Renderer component is responsible for making sure that a mesh object is actually visible in the scene when the camera is looking. The Mesh Renderer component contains a Materials field. This lists all materials currently assigned to the object. By clicking on the material name from the Materials field, Unity automatically selects the material in the Project panel, making it quick and simple to locate materials. See Figure 2.10, The Mesh Renderer component lists all materials assigned to an object: Mesh objects may have multiple materials with different materials assigned to different faces. For best in-game performance, use as few unique materials on an object as necessary. Make the extra effort to share materials across multiple objects, if possible. Doing so can significantly enhance the performance of your game. For more information on optimizing rendering performance, see the online documentation at http://docs.unity3d.com/Manual/OptimizingGraphicsPerformance.html. Figure 2.10: The Mesh Renderer component lists all materials assigned to an object That's it! 
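Drag-and-drop assignment is all this project needs, but the same hookup can also be done from a script through the Mesh Renderer component. Here is a minimal sketch (the class and field names are our own, not from the book):

using UnityEngine;

// Minimal sketch: assign a material to this object's Mesh Renderer at runtime
// and read back which materials are currently in use.
public class CoinMaterialAssigner : MonoBehaviour
{
    // Example field: drag mat_GoldCoin here in the Inspector.
    public Material goldCoinMaterial;

    void Start()
    {
        MeshRenderer meshRenderer = GetComponent<MeshRenderer>();

        // Assigning to .material affects only this renderer (Unity creates an instance);
        // use .sharedMaterial to change the asset itself for every object using it.
        meshRenderer.material = goldCoinMaterial;

        // The materials array mirrors the Materials field shown in the Inspector.
        foreach (Material mat in meshRenderer.sharedMaterials)
        {
            Debug.Log("Assigned material: " + mat.name);
        }
    }
}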
You now have a complete and functional gold material for the collectible coin. It's looking good. However, we're still not finished with the coin. The coin looks right, but it doesn't behave right. Specifically, it doesn't disappear when touched, and we don't yet keep track of how many coins the player has collected overall. To address this, then, we'll need to script. Summary Excellent work! In this article, you've completed the coin collection game as well as your first game in Unity. Resources for Article: Further resources on this subject: Animation features in Unity 5 [article] Saying Hello to Unity and Android [article] Learning NGUI for Unity [article]
Read more
  • 0
  • 0
  • 4497