How-To Tutorials - Game Development

368 Articles

Polygon Modeling of a Handgun using Blender 3D 2.49: Part 2

Packt
30 Nov 2009
4 min read
Modeling the hand wrap

Set the view to front again, and select the vertices indicated in the following image. This time we will need two extrusions. Select the vertices in the top-left corner of the model, and move them down to align them with the image. Then, select the other three vertices shown in the following image and extrude them once.

As the extruded geometry doesn't fit the guides of our reference image, we will have to select the lower vertices and place them as shown in the following image. They don't have to be in exactly the same position, but they should be placed so that the shape of the model matches our reference image. Remember that you can also select vertices with the brush select tool: press the B key twice, and you will be able to paint the selection.

Right after you place the vertices in their new positions, make another extrusion. By the end of the extrusion, try to align the lower vertices with the right side of the guidelines. Just by looking at the image, you'll notice that one side of the model won't be aligned. So, select the vertices on the left or right (whichever side isn't aligned with the image) and move them until they are aligned.

From here on, the work is a repetition of extrusions until the model is complete. Select the vertices indicated in the next images, and extrude them until you get the final shape. At this point in the project, you should be familiar with the technique. It's important to remember that for these operations, most of the alignment of the objects with the reference image is done by eye. At the end of each extrusion, use the S key to scale the new geometry until it fits the reference image. If you prefer, the transformation can be executed in face select mode to speed up the selection of the faces used in the extrusion.

Here, we'll use the Skin Faces/Edge-Loops option again to connect the two selected faces. Sometimes, the faces created with this option will be generated with the Set Smooth option selected. This may cause the faces to look odd, with different shading from the other faces. To make them look exactly the same as the other faces of the model, select the created faces and click on the Set Solid button. If you don't know where this button is located, you will find it in the Editing panel.

After the Skin Faces/Edge-Loops option is applied, the shading of the object will look a bit strange. This is because Blender's smooth option is being used. Use the Set Solid option to make the faces appear in flat shade mode.

A big part of the modeling is complete, but a few parts of the weapon are still missing. Our next task is to create additional parts of the gun, such as a detail for the hand wrap and the energy tank. For this project, we will create separate objects for those parts to make our modeling easier. As you can see in the following image, we have created a big part of the gun with a well-organized topology and a fairly clean mesh, represented by a minimum number of faces and vertices and made only of quad faces. This type of mesh can easily be edited later using subdivisions and new extrusions, which is a good reason to keep it as clean as possible.

Adding Sound, Music, and Video in 3D Game Development with Microsoft Silverlight 3: Part 2

Packt
19 Nov 2009
5 min read
Time for action – animating projections

Your project manager wants you to animate the perspective transform applied to the video while it is being reproduced. We are going to add a Storyboard in XAML code to animate the PlaneProjection instance:

1. Stay in the project, 3DInvadersSilverlight.
2. Open MainPage.xaml and replace the PlaneProjection definition with the following line (we have to add a name to refer to it):

<PlaneProjection x:Name="proIntroduction" RotationX="-40" RotationY="15" RotationZ="-6" LocalOffsetX="-70" LocalOffsetY="-105" />

3. Add the following lines of code before the end of the definition of the cnvVideo Canvas:

<Canvas.Resources>
    <Storyboard x:Name="introductionSB">
        <DoubleAnimation Storyboard.TargetName="proIntroduction"
                         Storyboard.TargetProperty="RotationX"
                         From="-40" To="0" Duration="0:0:5"
                         AutoReverse="False" RepeatBehavior="1x" />
    </Storyboard>
</Canvas.Resources>

4. Now, add the following line of code before the end of the PlayIntroductoryVideo method (to start the animation):

introductionSB.Begin();

5. Build and run the solution. Click on the button and the video will start playing after the transition effect. While the video is being played, the projection will be animated, as shown in the following diagram:

What just happened?

Now, the projection that shows the video is animated while the video is being reproduced.

Working with a Storyboard in XAML to animate a projection

First, we added a name to the existing PlaneProjection (proIntroduction). Then, we were able to create a new Storyboard with a DoubleAnimation instance as a child, with the Storyboard's TargetName set to proIntroduction and its TargetProperty set to RotationX. Thus, the DoubleAnimation controls proIntroduction's RotationX value. The RotationX value will go from -40 to 0 in five seconds, the same time as the video's duration:

From="-40" To="0" Duration="0:0:5"

The animation will run once (1x) and it won't reverse its behavior:

AutoReverse="False" RepeatBehavior="1x"

We added the Storyboard inside Canvas.Resources. Thus, we were able to start it by calling its Begin method in the PlayIntroductoryVideo method:

introductionSB.Begin();

We can define Storyboard instances and instances of the different Animation subclasses (in System.Windows.Media.Animation), such as DoubleAnimation, using XAML code. This way, we can create amazing animations for many properties of many other UIElements defined in XAML code.

Time for action – solving navigation problems

When the game starts, there is an undesired side effect: the projected video appears in the right background, as shown in the following screenshot. This usually happens when working with projections. Now, we are going to solve this small problem:

1. Stay in the 3DInvadersSilverlight project.
2. Open MainPage.xaml.cs and add the following line before the first one in the medIntroduction_MediaEnded method:

cnvVideo.Visibility = Visibility.Collapsed;

3. Build and run the solution. Click on the button and, after the video reproduction and animation, the game will start without the undesired background, as shown in the following screenshot:

What just happened?

Once the video finishes its reproduction and associated animation, we hide the Canvas that contains it. Hence, no parts of the previous animation are visible when the game starts.
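As a side note before moving on to music: the storyboard defined above in XAML can equally be built in code-behind. The following is a minimal sketch, assuming the proIntroduction projection defined earlier and Silverlight's System.Windows.Media.Animation API; the method name is illustrative, not part of the project:

// Minimal sketch: building the introduction storyboard in code-behind.
// Requires: using System; using System.Windows; using System.Windows.Media.Animation;
private Storyboard BuildIntroductionStoryboard()
{
    DoubleAnimation animation = new DoubleAnimation
    {
        From = -40,
        To = 0,
        Duration = new Duration(TimeSpan.FromSeconds(5)),
        AutoReverse = false,
        RepeatBehavior = new RepeatBehavior(1)
    };
    // Target the projection instance directly instead of resolving it by name
    Storyboard.SetTarget(animation, proIntroduction);
    Storyboard.SetTargetProperty(animation, new PropertyPath("RotationX"));

    Storyboard storyboard = new Storyboard();
    storyboard.Children.Add(animation);
    return storyboard;
}

Calling BuildIntroductionStoryboard().Begin() would then start the rotation, just as introductionSB.Begin() does for the XAML version.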
Time for action – reproducing music

Great games have appealing background music. Now, we are going to find and add background music to our game.

As with other digital content, sound and music have a copyright owner and a license. Hence, we must be very careful when downloading sound and music for our games. We must read the licenses before deploying our games with this digital content embedded.

1. One of the 3D digital artists found a very cool electro music sample for reproduction as background music. You have to pay to use it. However, you can download a free demo (Distorted velocity. 1) from http://www.musicmediatracks.com/music/Style/Electro/. Save the downloaded MP3 file (distorted_velocity._1.mp3) in the previously created media folder (C:\Silverlight3DInvaders3D\Media). You can use any other MP3 sound for this exercise; the aforementioned MP3 demo is not included in the accompanying source code.
2. Stay in the 3DInvadersSilverlight project. Right-click on the Media sub-folder in the 3DInvadersSilverlight.Web project and select Add | Existing Item… from the context menu that appears.
3. Go to the folder in which you copied the downloaded MP3 file (C:\Silverlight3DInvaders3D\Media). Select the MP3 file and click on Add. This way, the audio file will be part of the web project, in the Media folder, as shown in the following screenshot:
4. Now, add the following lines of code at the beginning of the btnStartGame button's Click event handler. This code will enable the new background music to start playing:

// Background music
MediaElement backgroundMusic = new MediaElement();
LayoutRoot.Children.Add(backgroundMusic);
backgroundMusic.Volume = 0.8;
backgroundMusic.Source = new Uri("Media/distorted_velocity._1.mp3", UriKind.Relative);
backgroundMusic.Play();

5. Build and run the solution. Click on the button and turn on your speakers. You will hear the background music while the transition effect starts.
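The same MediaElement pattern can be reused for short, event-driven sound effects, not just background music. Below is a minimal hedged sketch of a helper method, assumed to live in MainPage alongside LayoutRoot; the helper name and file path are illustrative:

// Sketch: play a one-shot sound effect with a throwaway MediaElement.
// Assumes this method is a member of MainPage; the file path is illustrative.
private void PlaySoundEffect(string relativePath, double volume)
{
    MediaElement effect = new MediaElement();
    effect.AutoPlay = false;
    effect.Volume = volume;
    LayoutRoot.Children.Add(effect);
    // Detach the element when playback finishes so repeated effects don't accumulate
    effect.MediaEnded += (sender, args) => LayoutRoot.Children.Remove(effect);
    effect.Source = new Uri(relativePath, UriKind.Relative);
    effect.Play();
}

// Usage, for example when a spaceship shoots a laser beam:
// PlaySoundEffect("Media/laser_shot.mp3", 0.6);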

Adding Sound, Music, and Video in 3D Game Development with Microsoft Silverlight 3: Part 1

Packt
19 Nov 2009
3 min read
A game needs sound, music, and video. It has to offer the player attractive background music. It must also generate sounds associated with certain game events: when a spaceship shoots a laser beam, a sound must accompany this action. Reproducing videos showing high-quality, previously rendered animations is a good idea during transitions between one stage and the next.

Hear the UFOs coming

So far, we have worked with 3D scenes showing 3D models with textures and different kinds of lights. We took advantage of C# object-oriented capabilities, and we animated 3D models and moved the cameras. We have read values from many different input devices, and we added physics, artificial intelligence, amazing effects, gauges, statistics, skill levels, environments, and stages. However, the game does not use the speakers at all, because there is no background music and there are no in-game sounds. Thus, we have to sort this issue out.

Modern games use videos to dazzle the player before starting each new stage. They use amazing sound effects and music custom prepared for the game by renowned artists. How can we add videos, music, and sounds in Silverlight? We can do this by taking advantage of the powerful multimedia classes offered by Silverlight 3. However, as a game uses more multimedia resources than other, simpler applications, we must be careful to avoid including unnecessary resources in the files that must be downloaded before starting the application.

Time for action – installing tools to manipulate videos

The 3D digital artists used Blender to create an introductory video showing a high-quality rendered animation for five seconds. They took advantage of Blender's animation creation features, as shown in the following screenshot: a spaceship flies in a starry universe for a few seconds, then the camera navigates through the stars.

Your project manager wants you to add this video as an introduction to the game. However, as the video file is in AVI (Audio Video Interleave) format and Silverlight 3 does not support this format, you have to convert the video to an appropriate format. The creation of video animations for a game is very complex and requires specialist skills. We are going to simplify this process by using an existing video.

First, we must download and install an additional tool that will help us convert an existing video to the most appropriate file formats for Silverlight 3. The necessary tools will depend on the applications the digital artists use to create the videos. However, we will be using some tools that will work fine with our examples.

1. Download one of the following files:

Application's name: Expression Encoder 2
Download link: http://www.microsoft.com/expression/try-it/try-it-v2.aspx
File name: Encoder_Trial_en.exe
Description: A commercial tool, but the trial offers a free, fully functional version for 30 days. This tool will enable us to encode videos to the appropriate format for use in Silverlight 3.

Application's name: Expression Encoder 3
Download link: http://www.microsoft.com/expression/try-it
File name: Encoder_Trial_en.exe
Description: The newest trial version of the aforementioned commercial tool.

2. Run the installers and follow the steps to complete the installation wizards.
3. If you installed Expression Encoder 2, download and install its Service Pack 1. The download link for it is http://www.microsoft.com/expression/tryit/try-it-v2.aspx#encodersp1 (file name: EncoderV2SP1_en.exe).
Once you have installed one of the versions of Expression Encoder, you will be able to load and encode many video files in different file formats, as shown in the following screenshot:
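Once the introduction has been re-encoded (typically to WMV, which Silverlight 3 can play), showing it follows the same MediaElement pattern used for audio. A minimal hedged sketch, assuming the encoded file is named introduction.wmv and sits in the web project's Media folder (both names are illustrative):

// Sketch: playing the encoded introduction video (file name is illustrative).
MediaElement introVideo = new MediaElement();
introVideo.Width = 640;
introVideo.Height = 480;
LayoutRoot.Children.Add(introVideo);
// Remove the video element once it has finished playing
introVideo.MediaEnded += (sender, args) => LayoutRoot.Children.Remove(introVideo);
introVideo.Source = new Uri("Media/introduction.wmv", UriKind.Relative);
introVideo.Play();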

Unity Game Development: Interactions (Part 2)

Packt
18 Nov 2009
14 min read
Opening the outpost

In this section, we will look at two differing approaches for triggering the door-opening animation, giving you an overview of two techniques that will both become useful in many other game development situations. In the first approach, we'll use collision detection, a crucial concept to get to grips with as you begin to work on games in Unity. In the second approach, we'll implement a simple ray cast forward from the player.

Approach 1 – Collision detection

To begin writing the script that will trigger the door-opening animation and thereby grant access to the outpost, we need to consider which object to write a script for. In game development, it is often more efficient to write a single script for an object that will interact with many other objects, rather than writing many individual scripts that check for a single object. With this in mind, we will write a script to be applied to the player character in order to check for collisions with many objects in our environment, rather than a script for each object the player may interact with, which checks for the player.

Creating new assets

Before we introduce any new kind of asset into our project, it is good practice to create a folder in which we will keep assets of that type. In the Project panel, click on the Create button, and choose Folder from the drop-down menu that appears. Rename this folder Scripts by selecting it and pressing Return (Mac) or F2 (PC).

Next, create a new JavaScript file within this folder simply by leaving the Scripts folder selected and clicking on the Project panel's Create button again, this time choosing JavaScript. By selecting the folder you want a newly created asset to be in before you create it, you will not have to create and then relocate your asset, as the new asset will be made within the selected folder. Rename the newly created script from the default (NewBehaviourScript) to PlayerCollisions. JavaScript files have the file extension .js, but the Unity Project panel hides file extensions, so there is no need to add it when renaming your assets. You can also spot the file type of a script by looking at its icon in the Project panel. JavaScript files have 'JS' written on them, C# files simply have 'C#', and Boo files have an image of a Pacman ghost, a nice little informative pun from the guys at Unity Technologies!

Scripting for character collision detection

To start editing the script, double-click on its icon in the Project panel to launch it in the script editor for your platform: Unitron on Mac, or Uniscite on PC.

Working with OnControllerColliderHit

By default, all new JavaScripts include the Update() function, and this is why you'll find it present when you open the script for the first time. Let's kick off by declaring variables we can utilize throughout the script. Our script begins with the definition of six variables: three public member variables and three private variables. Their purposes are as follows:

- doorIsOpen: a private true/false (boolean) variable acting as a switch for the script to check whether the door is currently open.
- doorTimer: a private floating-point (decimal-placed) number variable, used as a timer so that once our door is open, the script can count a defined amount of time before self-closing the door.
- currentDoor: a private GameObject-storing variable used to store the specific door that was opened most recently. Should you wish to add more than one outpost to the game at a later date, this will ensure that opening one of the doors does not open them all, as the script remembers the most recent door hit.
- doorOpenTime: a floating-point (potentially decimal) numeric public member variable, which will allow us to set, in the Inspector, the amount of time we wish the door to stay open.
- doorOpenSound/doorShutSound: two public member variables of data type AudioClip, allowing sound clip drag-and-drop assignment in the Inspector panel.

Define the variables above by writing the following at the top of the PlayerCollisions script you are editing:

private var doorIsOpen : boolean = false;
private var doorTimer : float = 0.0;
private var currentDoor : GameObject;
var doorOpenTime : float = 3.0;
var doorOpenSound : AudioClip;
var doorShutSound : AudioClip;

Next, we'll leave the Update() function briefly while we establish the collision detection function itself. Move down two lines from:

function Update(){
}

And write in the following function:

function OnControllerColliderHit(hit : ControllerColliderHit){
}

This establishes a new function called OnControllerColliderHit. This collision detection function is specifically for use with player characters such as ours, which use the CharacterController component. Its only parameter, hit, is a variable that stores information on any collision that occurs. By addressing the hit variable, we can query information on the collision, including, for starters, the specific game object our player has collided with. We will do this by adding an if statement to our function. So within the function's braces, add the following if statement:

function OnControllerColliderHit(hit : ControllerColliderHit){
    if(hit.gameObject.tag == "outpostDoor" && doorIsOpen == false){
    }
}

In this if statement, we are checking two conditions: firstly, that the object we hit is tagged with the tag outpostDoor, and secondly, that the variable doorIsOpen is currently set to false. Remember here that two equals symbols (==) are used as a comparative, and the two ampersand symbols (&&) simply say 'and also'. The end result means that if we hit the door's collider that we have tagged, and we have not already opened the door, then the script may carry out a set of instructions. We have utilized the dot syntax to address the object we are checking for collisions with by narrowing down from hit (our variable storing information on collisions) to gameObject (the object hit) to the tag on that object.

If this if statement is valid, then we need to carry out a set of instructions to open the door. This will involve playing a sound, playing one of the animation clips on the model, and setting our boolean variable doorIsOpen to true. As we are to call multiple instructions, and may need to call these instructions as a result of a different condition later when we implement the ray casting approach, we will place them into our own custom function called OpenDoor. We will write this function shortly, but first, we'll call the function in the if statement we have, by adding:

OpenDoor();

So your full collision function should now look like this:

function OnControllerColliderHit(hit : ControllerColliderHit){
    if(hit.gameObject.tag == "outpostDoor" && doorIsOpen == false){
        OpenDoor();
    }
}

Writing custom functions

Storing sets of instructions you may wish to call at any time should be done by writing your own functions.
Instead of having to write out a set of instructions or "commands" many times within a script, writing your own functions containing the instructions means that you can simply call that function at any time to run that set of instructions again. This also makes tracking mistakes in code, known as debugging, a lot simpler, as there are fewer places to check for errors.

In our collision detection function, we have written a call to a function named OpenDoor. The brackets after OpenDoor are used to store parameters we may wish to send to the function; using a function's brackets, you may pass additional behavior to the instructions inside the function. We'll take a look at this in more depth later in this article under the heading Function Efficiency. Our brackets are empty here, as we do not wish to pass any behavior to the function yet.

Declaring the function

To write the function we need to call, we simply begin by writing:

function OpenDoor(){
}

In between the braces of the function, much in the same way as the instructions of an if statement, we place any instructions to be carried out when this function is called.

Playing audio

Our first instruction is to play the audio clip assigned to the variable called doorOpenSound. To do this, add the following line to your function by placing it within the curly braces, after "{" and before "}":

audio.PlayOneShot(doorOpenSound);

To be certain, it should look like this:

function OpenDoor(){
    audio.PlayOneShot(doorOpenSound);
}

Here we are addressing the Audio Source component attached to the game object this script is applied to (our player character object, First Person Controller), and as such, we'll need to ensure later that we have this component attached; otherwise, this command will cause an error. Addressing the audio source using the term audio gives us access to four functions: Play(), Stop(), Pause(), and PlayOneShot(). We are using PlayOneShot because it is the best way to play a single instance of a sound, as opposed to playing a sound and then switching clips, which would be more appropriate for continuous music than for sound effects. In the brackets of the PlayOneShot command, we pass the variable doorOpenSound, which will cause whatever sound file is assigned to that variable in the Inspector to play. We will download and assign this and the clip for closing the door after writing the script.

Checking door status

One condition of our if statement within our collision detection function was that our boolean variable doorIsOpen must be set to false. As a result, the second command inside our OpenDoor() function is to set this variable to true. This is because the player character may collide with the door several times when bumping into it, and without this boolean, they could potentially trigger the OpenDoor() function many times, causing sound and animation to recur and restart with each collision. By adding a variable that, when false, allows the OpenDoor() function to run, and then immediately disallows it by setting doorIsOpen to true, any further collisions will not re-trigger the OpenDoor() function. Add the line:

doorIsOpen = true;

to your OpenDoor() function now by placing it between the curly braces, after the previous command you just added.

Playing animation

We have already imported the outpost asset package and looked at various settings on the asset before introducing it to the game in this article. One of the tasks performed in the import process was the setting up of animation clips using the Inspector.
By selecting the asset in the Project panel, we specified in the Inspector that it would feature three clips:

- idle (a 'do nothing' state)
- dooropen
- doorshut

In our OpenDoor() function, we'll call upon a named clip using a String of text to refer to it. However, first we'll need to state which object in our scene contains the animation we wish to play. Because the script we are writing is to be attached to the player, we must refer to another object before referring to the animation component. We do this by stating the line:

var myOutpost : GameObject = GameObject.Find("outpost");

Here we are declaring a new variable called myOutpost, setting its type to GameObject, and then selecting a game object with the name outpost by using GameObject.Find. The Find command selects an object in the current scene by its name in the Hierarchy and can be used as an alternative to using tags. Now that we have a variable representing our outpost game object, we can use this variable with dot syntax to call the animation attached to it by stating:

myOutpost.animation.Play("dooropen");

This simply finds the animation component attached to the outpost object and plays the animation called dooropen. The Play() command can be passed any string of text characters, but it will only work if an animation clip with that name has been set up on the object in question. Your finished OpenDoor() custom function should now look like this:

function OpenDoor(){
    audio.PlayOneShot(doorOpenSound);
    doorIsOpen = true;
    var myOutpost : GameObject = GameObject.Find("outpost");
    myOutpost.animation.Play("dooropen");
}

Reversing the procedure

Now that we have created a set of instructions that will open the door, how will we close it once it is open? To aid playability, we will not force the player to actively close the door, but instead establish some code that will cause it to shut after a defined time period. This is where our doorTimer variable comes into play. We will begin counting as soon as the door becomes open by adding a value of time to this variable, and then check when this variable has reached a particular value by using an if statement. Because we will be dealing with time, we need to utilize a function that will constantly update, such as the Update() function that was awaiting us when we created the script earlier.

Create some empty lines inside the Update() function by moving its closing curly brace } a few lines down. Firstly, we should check whether the door has been opened, as there is no point in incrementing our timer variable if the door is not currently open. Write in the following if statement to increment the timer variable with time if the doorIsOpen variable is set to true:

if(doorIsOpen){
    doorTimer += Time.deltaTime;
}

Here we check whether the door is open; this variable is set to false by default and will only become true as a result of a collision between the player object and the door. If the doorIsOpen variable is true, then we add the value of Time.deltaTime to the doorTimer variable. Bear in mind that simply writing the variable name as we have done in our if statement's condition is the same as writing doorIsOpen == true. Time.deltaTime is a member of the Time class that runs independently of the game's frame rate. This is important because your game may be run on varying hardware when deployed, and it would be odd if time slowed down on slower computers and ran faster on better ones.
As a result, when adding time, we can use Time.deltaTime, which measures the time taken to complete the last frame; with this information, we can automatically correct our counting to real time.

Next, we need to check whether our timer variable, doorTimer, has reached a certain value, which means that a certain amount of time has passed. We will do this by nesting an if statement inside the one we just added; this means the if statement we are about to add will only be checked if the doorIsOpen condition is valid. Add the following code below the time-incrementing line, inside the existing if statement:

if(doorTimer > doorOpenTime){
    shutDoor();
    doorTimer = 0.0;
}

This addition to our code will be checked constantly as soon as the doorIsOpen variable becomes true, and waits until the value of doorTimer exceeds the value of the doorOpenTime variable. Because we are using Time.deltaTime as an incremental value, this means three real-time seconds have passed, unless, of course, you change the value of this variable from its default of 3 in the Inspector. Once doorTimer has exceeded that value, a function called shutDoor() is called, and the doorTimer variable is reset to zero so that it can be used again the next time the door is triggered. If this were not included, the doorTimer would get stuck above the threshold, and the door would close again as soon as it was opened. Your completed Update() function should now look like this:

function Update(){
    if(doorIsOpen){
        doorTimer += Time.deltaTime;
        if(doorTimer > doorOpenTime){
            shutDoor();
            doorTimer = 0.0;
        }
    }
}

Now, add the following function, called shutDoor(), to the bottom of your script. Because it performs largely the same job as OpenDoor(), we will not discuss it in depth. Simply observe that a different animation is called on the outpost and that our doorIsOpen variable gets reset to false so that the entire procedure may start over:

function shutDoor(){
    audio.PlayOneShot(doorShutSound);
    doorIsOpen = false;
    var myOutpost : GameObject = GameObject.Find("outpost");
    myOutpost.animation.Play("doorshut");
}
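As noted earlier, Unity scripts can also be written in C#. For comparison, here is a hedged C# sketch of the same finished behavior, assuming Unity's legacy scripting API of this era (the audio and animation shortcut properties and the legacy animation system, as used by the JavaScript version above); it is a sketch rather than code from this article:

using UnityEngine;

// Hedged C# equivalent of the PlayerCollisions script assembled above.
public class PlayerCollisions : MonoBehaviour
{
    private bool doorIsOpen = false;
    private float doorTimer = 0.0f;
    private GameObject currentDoor;   // reserved, as in the JavaScript version, for multi-door support

    public float doorOpenTime = 3.0f;
    public AudioClip doorOpenSound;
    public AudioClip doorShutSound;

    void Update()
    {
        if (doorIsOpen)
        {
            // Frame-rate independent counting, as explained above
            doorTimer += Time.deltaTime;
            if (doorTimer > doorOpenTime)
            {
                ShutDoor();
                doorTimer = 0.0f;
            }
        }
    }

    void OnControllerColliderHit(ControllerColliderHit hit)
    {
        // Only react to the tagged door, and only if it isn't already open
        if (hit.gameObject.tag == "outpostDoor" && doorIsOpen == false)
        {
            OpenDoor();
        }
    }

    void OpenDoor()
    {
        audio.PlayOneShot(doorOpenSound);
        doorIsOpen = true;
        GameObject myOutpost = GameObject.Find("outpost");
        myOutpost.animation.Play("dooropen");
    }

    void ShutDoor()
    {
        audio.PlayOneShot(doorShutSound);
        doorIsOpen = false;
        GameObject myOutpost = GameObject.Find("outpost");
        myOutpost.animation.Play("doorshut");
    }
}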

Papervision3D External Models: Part 1

Packt
18 Nov 2009
22 min read
This article covers the following:

- Modeling for Papervision3D
- Preparing for loading models
- Creating and loading models using Autodesk 3ds Max
- Loading an animation from Autodesk 3ds Max
- Creating and loading models using SketchUp
- Creating and loading models using Blender
- Controlling loaded materials

Let's start off by having a look at some general practices to keep in mind when modeling for Papervision3D.

Modeling for Papervision3D

In this section, we will discuss several techniques that relate to modeling for Papervision3D. As Papervision3D is commonly used for web-based projects, modeling requires a different mindset than modeling for an animated movie, a visualization, or a game. Most of the techniques discussed relate to improving performance. This section is especially useful for modelers who need to create models for Papervision3D.

Papervision3D Previewer
Papervision3D Previewer is a small program that should be part of every modeler's toolbox. This tool comes in handy for testing purposes. It allows a modeler to render an exported model in Papervision3D, and it displays some statistics that show how the model performs. At the time of writing, this tool was not compatible with Papervision3D 2.1, which could result in small problems when loading external models. Papervision3D Previewer can be downloaded from http://code.google.com/p/mrdoob/wiki/pv3dpreviewer

Keep your polygon count low

Papervision3D is a cutting-edge technology that brings 3D to the Flash Player. It does this at an amazing speed relative to the capabilities of the Flash Player. However, the performance of Papervision3D is just a fraction of the performance that can be achieved with hardware-accelerated engines such as those used by console games. Even with hardware-accelerated games there is a limit to the number of polygons that can be rendered, meaning there is always a compromise between detail and performance. This counts even more for Papervision3D, so always try to model using as few polygons as possible.

Papervision3D users often wonder what the maximum number of triangles is that the Flash Player can handle. There is no generic answer to this question, as performance depends on more factors than just the number of triangles. On average, the total triangle count should be no more than 3000, which equals 1500 polygons (remember that one polygon is made of two triangles). Unlike most 3D modeling programs, Papervision3D is triangle based and not polygon based.

Add polygons to resolve artifacts

Although this seems to contradict the previous suggestion to keep your polygon count low, sometimes you need more polygons to get rid of texture distortion or to reduce z-sorting artifacts. Z-sorting artifacts will often occur in areas where objects intersect or closely intersect each other. Subdividing polygons in those areas can make z-sorting more accurate. Often this needs to be done by creating new polygons of approximately the same size as the intersecting triangles. There are several approaches to prevent z-sorting problems. Depending on the object you're working with, it can be very time consuming to tweak and find the optimal amount and location of polygons. The number of polygons you add in order to solve the problem should still be kept as low as possible. Finding the optimal values for your model will often mean switching a lot between Papervision3D and the 3D modeling program.

Keep your textures small

Textures used in the 3D modeling tool can be exported along with the model to a format that is readable by Papervision3D.
This is a valuable feature, as the texture will automatically be loaded by Papervision3D. However, the image as defined in the 3D authoring tool will be used exactly as provided. If you choose a 1024 by 1024 pixel image as the texture for, say, the wheels of a car, Papervision3D loads the entire image and draws it on a wheel that appears on screen at a size of only 50 by 50 pixels. There are several problems related to this:

- It's a waste of bandwidth to load such a large image. Loading any image takes time, which should be kept as short as possible.
- It's a waste of processing capacity, as Papervision3D needs to resize the 1024 by 1024 pixel image down to one that will be at most, for example, 50 by 50 pixels on screen.

Always choose texture dimensions that make sense for the application using them, and keep in mind that they have to be a power of two. This will enable mipmapping and smoothing, which come without extra performance costs.

Use textures that Flash can read

3D modeling programs usually read a variety of image sources. Some even support reading Adobe Photoshop's native file format, PSD. Flash can load only GIF, JPG, or PNG files at run time. Therefore, stick to these formats in your model so that you do not have to convert the textures when the model needs to be exported to Papervision3D.

Use UV maps

If your model is made up of several objects and textures, it's a good idea to use UV mapping, which is the process of unwrapping the model and defining all its textures in one single image. This way, we can speed up the initial loading of an application by making one request from Flash to load this image instead of loading dozens of images. UV mapping can also be used to tile or reuse parts of the image. The more parts of the UV-mapped image you can reuse, the more bandwidth you'll save. Always try to keep your UV-mapped image as small as possible, just as with keeping your normal textures small. In case you have a lot of objects sharing the same UV map and you need a large canvas to unwrap it, be aware that the maximum image size supported by Flash Player 9 is 2880 by 2880 pixels. With the benefits of power-of-two textures in mind, the maximum usable width and height is 2048 by 2048 pixels.

Baking textures

Baking textures is the process of integrating shadows, lighting, reflection, or entire 3D objects into a single image. Most 3D modeling tools support this. It contradicts what has been said about tiling images in UV maps, as baking results in images that usually can be used only once because of the baked-in information on the texture. However, it can increase the level of realism of your application, just as shading does, but without the loss of performance caused by calculating shading in real time. Never use baked textures in combination with a tiling image, as repeated shading, for instance, will result in unnatural-looking renders. Each baked texture needs to be unique, which will cause longer loading times before you can show a scene.

Use recognizable names for objects and materials

It is always a good convention to use recognizable names for all your objects. This counts for the classes, methods, and properties in your code, and also for the names of the 3D objects in your modeling tool. Always think twice before renaming an object that is used by an application. The application might use the name of an object as the identifier to do something with it, for example, making it clickable.
When working in a team of modelers and programmers, you really need to make this clear to the modelers, as changing the name of an object can easily break your application.

Size and positioning

Maintaining the same relative size for your modeled objects as you would use for instantiating primitives in your scene is a good convention. Although you could always adjust the scale property of a loaded 3D model, it is very convenient when both Papervision3D and your modeling tool use the same scale. Remember that Papervision3D doesn't have a metric system defining units of a certain value such as meters, yards, or pixels; it just uses units.

Another convention is to position your object or objects at the origin of the 3D space in the modeling tool. Especially when exporting a single object from a 3D modeling tool, it is really helpful if it is located at position 0 on all axes. This way, you can position the 3D object in Papervision3D using absolute values, without needing to take an offset into account. You can compare this with adding movie clips to your library in Flash: in most cases, it is pretty useful when the elements of a movie clip are centered on their registration point.

Finding the balance between quality and performance

For each project you should try to find the balance between lightweight modeling and quality. Because each project differs in requirements, scale, and quality, there is no rule that applies to all. Keep the tips mentioned in the previous sections in mind and try to be creative with them. If you see a way to optimize your model, do not hesitate to use it. Before we have a look at how to create and export models for Papervision3D, we will create a basic application for this purpose.

Creating a template class to load models

In order to show an imported 3D model using Papervision3D, we will create a basic application. Based on the orbit example (chapter 6 of the code bundle, which can be downloaded from http://www.packtpub.com/files/code/5722_Code.zip), we create the following class. Each time we load a new model, we just have to alter the init() method. First, have a look at the following base code for this example:

package {
    import flash.events.Event;

    import org.papervision3d.events.FileLoadEvent;
    import org.papervision3d.materials.WireframeMaterial;
    import org.papervision3d.materials.utils.MaterialsList;
    import org.papervision3d.objects.DisplayObject3D;
    import org.papervision3d.objects.primitives.Plane;
    import org.papervision3d.view.BasicView;

    public class ExternalModelsExample extends BasicView {
        private var model:DisplayObject3D;
        private var rotX:Number = 0.1;
        private var rotY:Number = 0.1;
        private var camPitch:Number = 90;
        private var camYaw:Number = 270;
        private var easeOut:Number = 0.1;

        public function ExternalModelsExample() {
            stage.frameRate = 40;
            init();
            startRendering();
        }

        private function init():void {
            model = new Plane(new WireframeMaterial());
            scene.addChild(model);
        }

        private function modelLoaded(e:FileLoadEvent):void {
            // To be added
        }

        override protected function onRenderTick(e:Event = null):void {
            var xDist:Number = mouseX - stage.stageWidth * 0.5;
            var yDist:Number = mouseY - stage.stageHeight * 0.5;
            camPitch += ((yDist * rotX) - camPitch + 90) * easeOut;
            camYaw += ((xDist * rotY) - camYaw + 270) * easeOut;
            camera.orbit(camPitch, camYaw);
            super.onRenderTick();
        }
    }
}

We have created a new plane using a wireframe as its material. The plane is assigned to a class property named model, which is of the DisplayObject3D type. In fact, any external model is a do3D.
No matter what type of model we load in the following examples, we can always assign it to the model property. The classes that we'll use for loading 3D models all inherit from DisplayObject3D. Now that we have created a default application, we are ready to create our first model in 3ds Max, export it, and then import it into Papervision3D.

Creating models in Autodesk 3ds Max and loading them into Papervision3D

Autodesk 3ds Max (also known as 3D Studio Max or 3ds Max) is one of the widely known commercial 3D modeling and animation programs. It is a good authoring tool to start with, as it can save to two of the file formats Papervision3D can handle. These are:

- COLLADA (extension *.dae): An open source 3D file type, which is supported by Papervision3D. This is the most advanced format and has been supported since Papervision3D's first release. It also supports animations, and it is actually just a plain-text XML file.
- 3D Studio (extension *.3ds): As the name suggests, this is one of the formats that 3ds Max natively supports. Generally speaking, it is also one of the most common formats for saving 3D models.

As of 3ds Max version 9, there is a built-in exporter plugin available that supports exporting to COLLADA. However, you should avoid using it, as at the time of writing, the models it exports are not suitable for Papervision3D.

Don't have a license for 3ds Max and want to follow along with the examples? Go to www.autodesk.com to download a 30-day trial.

Installing COLLADA Max

An exporter that does support COLLADA files suitable for Papervision3D is called COLLADA Max. This is a free and open source exporter that works with all versions of 3ds Max 7 and higher. Installing this exporter is easy. Just follow the steps mentioned below:

1. Make sure you have installed 3ds Max version 7 or higher.
2. Go to http://sourceforge.net/projects/colladamaya/.
3. Click on View all files and select the latest COLLADA Max version. (At the time of writing this is COLLADA Max NextGen 0.9.5, which is still in beta, but it is the only version that works with 3ds Max 2010.)
4. Save the download somewhere on your computer.
5. Run the installer.
6. Click Next until the installer confirms that the exporter is installed.
7. Start 3ds Max and double-check that you can export using the COLLADA or COLLADA NextGen file type, as shown in the following screenshot:

If the only COLLADA export option is Autodesk Collada, then something went wrong during the installation of COLLADA Max, as this is not the exporter that works with Papervision3D. Now that 3ds Max is configured correctly for exporting a file format that can be read by Papervision3D, we will have a look at how to create a basic textured model in 3ds Max and export it to Papervision3D.

Creating the Utah teapot and exporting it for Papervision3D

If you already know how to work with 3ds Max, this step is quite easy. All we need to do is create the Utah teapot, add UV mapping, add a material to it, and export it as COLLADA. However, if you are new to 3ds Max, the following steps need some clarification:

1. Start 3ds Max and create a new scene. The creation of a new scene happens by default on startup.
2. The Utah teapot is one of the objects that comes as a standard primitive in 3ds Max. This means you can select it from the default primitives menu and draw it in one of the viewports. Draw it in the top viewport so that the teapot will not appear rotated over one of its axes.
3. Give it a Radius of 250 in the properties panel on the right, in order to make it match the units that we'll use in Papervision3D.
4. Position the teapot at the origin of the scene. You can do this by selecting it and changing the x, y, and z properties at the bottom of your screen. You might expect to set all three axes to 0, but this is not the case. In this respect, the teapot differs from other primitives in 3ds Max, as its pivot point is located at the bottom of the teapot. Therefore, we need to define a different value for the teapot on the z-axis; approximately -175 is a good value.
5. To map a material to the teapot, we need to define a UV map first. UV mapping is also known as UVW mapping (some call it UV mapping and others call it UVW mapping; 3ds Max uses the term UVW mapping). While the teapot is still selected, go to Modify and select UVW Mapping from the modifier list. Select Shrink Wrap and click Fit in the Alignment section. This will create a UVW map for us.
6. Open the material editor using the keyboard shortcut m. Here we define the materials that we use in 3ds Max.
7. Click on the first material sphere, in the upper-left corner of the Material Editor window.
8. Give the new material a name. Replace 01 – Default with a material name of your choice, for example teapotMaterial.
9. Provide a bitmap as the diffuse material. You can do this by clicking on the square button to the right of the Diffuse value within the Blinn Basic Parameters section. A new window called Material/Map Browser will open.
10. Double-click Bitmap to load an external image.
11. Select an image of your choice; we will use teapotMaterial.jpg. The material editor will now update and show the selected material on an illustrative sphere. This is your newly created material, which you need to drag onto the created teapot.
12. The teapot model can now be exported. Depending on the version of the installed COLLADA exporter, select COLLADA or COLLADA NextGen. Note that you should not export using Autodesk Collada, as this exporter doesn't work properly for Papervision3D.
13. Give it a filename of your choice, for example teapot, and hit Save. The exporter window will pop up. The default settings are fine for exporting to Papervision3D, so click OK to save the file.
14. Save the model in the default 3ds Max file format (.max) somewhere on your local disk, so we can use it later when discussing other ways to export this model to Papervision3D.

The model that we have created and exported is now ready to be imported by Papervision3D. Let's take a look at how this works.

Importing the Utah teapot into Papervision3D

To work with the exported Utah teapot, we will use the ExternalModelsExample project that we created previously in this article. Browse to the folder inside your project where you saved your document class. Create a new folder called assets and copy into it the created COLLADA file, along with the image used as the material of the teapot. The class used to load an external COLLADA file is called DAE, so let's import it:

import org.papervision3d.objects.parsers.DAE;

This type of class is also known as a parser, as it parses the model from a loaded file. When you take a closer look at the source files of Papervision3D and its model parsers, you will probably find the Collada class. This might be a little confusing, as we use the DAE parser to load a COLLADA file and do not use the Collada parser. Although you could use either, this article uses the DAE parser exclusively, as it is a more recent class supporting more features, such as animation.
There is no feature that is supported by the Collada parser and not supported by the DAE parser. Replace all the code inside the init() method with the following code, which loads a COLLADA file:

model = new DAE();
model.addEventListener(FileLoadEvent.LOAD_COMPLETE, modelLoaded);
DAE(model).load("assets/teapot.DAE");

Because model is defined as being of the DisplayObject3D type, we need to cast it to DAE in order to call the load() method. An event listener is defined, waiting for the model to be completely loaded and parsed. Once it is loaded, the modelLoaded() method will be triggered. It is a good convention to add models to the scene only once they are completely loaded. Add the following line of code to the modelLoaded() method:

scene.addChild(model);

Publishing this code will result in the teapot with the texture as created in 3ds Max.

In real-world applications it is good practice to keep your models in one folder and your textures in another. You might want to organize the files similar to the following structure:

- Models in /assets/models/
- Textures in /assets/textures/

By default, textures are loaded from the same folder the model is loaded from or, optionally, from the location specified in the COLLADA file. To include the /assets/textures/ folder, we can add a file search path, which tells the parser to look in the specified folder in case no file can be found on the default paths. This can be defined as follows:

DAE(model).addFileSearchPath("assets/textures");

You can call this method multiple times in order to define multiple folders. Internally, Papervision3D will loop through an array of file paths.

Exporting and importing the Utah teapot in 3ds format

Now that we have seen how to get an object from 3ds Max into a Papervision3D project, we will have a look at another format that is supported by both 3ds Max and Papervision3D. This format is called 3D Studio and uses the 3ds extension. It is one of the established 3D file formats supported by most 3D modeling tools. Exporting and importing are very similar to COLLADA. Let's first export the file to the 3D Studio format:

1. Open the Utah teapot, which we modeled earlier in this article.
2. Leave the model as it is, and go straight to export. This time we select 3D Studio (*.3DS) as the file type.
3. Save it into your project folder and name it teapot.
4. Click OK when asked whether to preserve Max's texture coordinates.

If your model uses teapotMaterial.jpg, or any image with more than eight characters in its filename, the exporter will output a warning. You can close this warning, but you need to be aware of the message. It says that the bitmap filename is a non-8.3 filename, that is, it does not fit within a maximum of 8 characters for the filename plus a 3-character extension. The 3D Studio format is an old format, released at the time when there was a DOS version of 3ds Max. Back then, it was an OS naming convention to use short filenames, known as 8.3 filenames. This convention still applies to the 3D Studio format, for the sake of backward compatibility. Therefore, the reference to the bitmap has been renamed inside the exported 3D Studio file. Because the export changed only the internal reference to the bitmap filename and did not affect the file it refers to, we need to create a file matching this renamed reference; otherwise, the parser won't be able to find the image.
In this case, we need to create a version of the image called teapotMa.jpg. Save this file in the same folder as the exported 3D Studio file. As you can see, it is very easy to export a model from 3ds Max to a format Papervision3D can read. Modeling the 3D object is definitely the hardest and most time-consuming part, simply because creating models takes a lot of time.

Loading the model into Papervision3D is just as easy as exporting it. First, copy the 3D Studio file plus the renamed image to the assets folder of your project. We can then alter the document class in order to load the 3ds file. The class that is used to parse a 3D Studio file is called Max3DS and needs to be imported:

import org.papervision3d.objects.parsers.Max3DS;

In the init() method, replace or comment out the code that loads the COLLADA model from our previous example, and add the following:

model = new Max3DS();
model.addEventListener(FileLoadEvent.LOAD_COMPLETE, modelLoaded);
Max3DS(model).load("assets/teapot.3ds", null, "./assets/");

As the first parameter of the load() method, we pass a file reference to the model we want to load. The second parameter defines a materials list, which we will not use for this example. The third and final parameter defines the texture folder; this folder is relative to the location of the published SWF. Note that this works slightly differently from the DAE parser, which loads referenced images from the path relative to the folder in which the COLLADA file is located, or from the paths specified with the addFileSearchPath() method.

Publish the code and you'll see the same teapot, this time using the 3D Studio file format as its source.

Importing animated models

The teapot is a static model that we exported from a 3D program and loaded into Papervision3D. It is also possible to load animated models, which contain one or multiple animations. 3ds Max is one of the programs in which you can create an animation for use in Papervision3D. Animating doesn't require any additional steps: you can just create the animation and export it. This also goes for other modeling tools that support exporting animations to COLLADA. For the sake of simplicity, this example will make use of a model that is already animated in 3ds Max. The model contains two animations, which together make up one long animation on a shared timeline. We will export this model and its animation to COLLADA, load it into Papervision3D, and play the two animations.

1. Open animatedMill.max in 3ds Max. This file can be found in the zip file that can be downloaded from http://www.packtpub.com/files/code/5722_Code.zip. You can see the animation of the model directly in 3ds Max by clicking the play button in the menu at the bottom-right corner, which will animate the blades of the mill. The first 180 frames animate the blades from left to right; frames 181 to 360 animate the blades from right to left.
2. As the model is already animated, we can go ahead with exporting, without making any changes to the model. Export it using the COLLADA file type and save it somewhere on your computer.
3. When the COLLADA Max exporter settings window pops up, we need to check the Sample animation checkbox. By default, Start and End are set to the length of the timeline as defined in 3ds Max. In case you want to export just a part of it, you can define the start and end frames you want to export. For this example we leave them as they are: 0 and 360.
By completing these steps, you have successfully exported an animation in the COLLADA format for Papervision3D. Now, let's have a look at how we can load the animated model into Papervision3D. First, you need to copy the exported COLLADA file and the applied materials (Blades.jpg, House.jpg, and Stand.jpg) to the assets folder of your project. To load an animated COLLADA, we can use the DAE class again. We only need to define some parameters at instantiation so that the animation will loop:

model = new DAE(true, null, true);
model.addEventListener(FileLoadEvent.LOAD_COMPLETE, modelLoaded);
DAE(model).load("assets/animatedMill.dae");

Take a look at what these parameters stand for.

Applying Special Effects in 3D Game Development with Microsoft Silverlight 3: Part 1

Packt
18 Nov 2009
7 min read
A 3D game must be attractive. It has to offer amazing effects for the main characters and in the background. A spaceship has to fly through a meteor shower. An asteroid belt has to draw waves while a UFO pursues a spaceship. A missile should make a plane explode. The real world shows us things moving everywhere. Most of these scenes, however, aren't repetitive sequences. Hence, we have to combine great designs, artificial intelligence (AI), and advanced physics to create special effects.

Working with 3D characters in the background

So far, we have added physics, collision detection capabilities, life, and action to our 3D scenes. We were able to simulate real-life effects for the collision of two 3D characters by adding some artificial intelligence. However, we need to combine this action with additional effects to create a realistic 3D world. Players want to move the camera while playing so that they can watch amazing effects. They want to be part of each 3D scene as if it were a real-life situation. How can we create complex and realistic backgrounds capable of adding realistic behavior to the game? We can do this by combining everything we have learned so far with a good object-oriented design. We have to create random situations combined with more advanced physics. We have to add more 3D characters with movement to the scenes. We must add complexity to the backgrounds. We can work with many independent physics engines to work with parallel worlds. In real life, there are concurrent and parallel worlds. We have to reproduce this behavior in our 3D scenes.

Time for action – adding a transition to start the game

Your project manager does not want the game to start immediately. He wants you to add a button in order to allow the player to start the game by clicking on it. As you are using Balder, adding a button is not as simple as expected. We are going to add a button to the main page, and we are going to change Balder's default game initialization:

1. Stay in the 3DInvadersSilverlight project.
2. Expand App.xaml in the Solution Explorer and open App.xaml.cs, the C# code for App.xaml.
3. Comment out the following line of code (we are not going to use Balder's services in this class):

//using Balder.Silverlight.Services;

4. Comment out the following line of code in the event handler for the Application_Startup event, after the line this.RootVisual = new MainPage();:

//TargetDevice.Initialize<InvadersGame>();

5. Open the XAML code for MainPage.xaml and add the following lines of code (you will see a button with the title Start the game!):

<!-- A button to start the game -->
<Button x:Name="btnStartGame" Content="Start the game!"
        Canvas.Left="200" Canvas.Top="20" Width="200" Height="30"
        Click="btnStartGame_Click"></Button>

6. Now, expand MainPage.xaml in the Solution Explorer and open MainPage.xaml.cs, the C# code for MainPage.xaml.
7. Add the following line of code at the beginning (as we are going to use many of Balder's classes and interfaces):

using Balder.Silverlight.Services;

8. Add the following lines of code to program the event handler for the button's Click event (this code will initialize the game using Balder's services):

private void btnStartGame_Click(object sender, RoutedEventArgs e)
{
    btnStartGame.Visibility = Visibility.Collapsed;
    TargetDevice.Initialize<InvadersGame>();
}

9. Build and run the solution. Click on the Start the game! button and the UFOs will begin their chase game.
The button will make a transition to start the game, as shown in the following screenshots:

What just happened?

You could use a Start the game! button to start a game using Balder's services. Now, you will be able to offer the player more control over some parameters before starting the game.

We commented the code that started the game during the application start-up. Then, we added a button on the main page (MainPage). The code programmed in its Click event handler initializes the desired Balder.Core.Game subclass (InvadersGame) using just one line:

    TargetDevice.Initialize<InvadersGame>();

This initialization adds a new specific Canvas as another layout root's child, controlled by Balder to render the 3D scenes. Thus, we had to make some changes to add a simple button to control this initialization.

Time for action – creating a low polygon count meteor model

The 3D digital artists are creating models for many aliens. They do not have the time to create simple models. Hence, they teach you to use Blender and 3D Studio Max to create simple models with a low polygon count. Your project manager wants you to add dozens of meteors to the existing chase game. A gravitational force must attract these meteors, and they have to appear at random initial positions in the 3D world.

First, we are going to create a low polygon count meteor using 3D Studio Max. Then, we are going to add a texture based on a PNG image and export the 3D model to the ASE format, compatible with Balder. As previously explained, we have to do this in order to export the ASE format with a bitmap texture definition enveloping the meshes. We can also use Blender or any other 3D DCC tool to create this model. We have already learned how to export to the ASE format from Blender. Thus, this time, we are going to learn the necessary steps to do it using 3D Studio Max.

1. Start 3D Studio Max and create a new scene.
2. Add a sphere with six segments. Locate the sphere in the scene's center.
3. Use the Uniform Scale tool to resize the low polygon count sphere to 11.329 in the three axes, as shown in the following screenshot:
4. Click on the Material Editor button. Click on the first material sphere, in the Material Editor window's upper-left corner.
5. Click on the small square at the right side of the Diffuse color rectangle, as shown in the following screenshot:
6. Select Bitmap from the list shown in the Material/Map Browser window that pops up and click on OK.
7. Select the PNG file to be used as a texture to envelope the sphere. You can use Bricks.PNG, previously downloaded from http://www.freefoto.com/. You just need to add a reference to a bitmap file. Then, click on Open. The Material Editor preview panel will show a small sphere thumbnail enveloped by the selected bitmap, as shown in the following screenshot:
8. Drag the new material and drop it on the sphere. If you are facing problems, remember that the 3D digital artist created a similar sphere a few days ago and left the meteor.max file in the folder C:\Silverlight3D\Invaders3D\3DModels\METEOR.
9. Save the file using the name meteor.max in the previously mentioned folder.
10. Now, you have to export the model to the ASE format with the reference to the texture. Therefore, select File | Export and choose ASCII Scene Export (*.ASE) in the Type combo box. Select the aforementioned folder, enter the file name meteor.ase, and click on Save.
11. Check the following options in the ASCII Export dialog box
(they are unchecked by default):

    Mesh Normals
    Mapping Coordinates
    Vertex Colors

The dialog box should be similar to the one shown in the following screenshot:

12. Click on OK. Now, the model is available as an ASE 3D model with a reference to the texture. You will have to change the absolute path for the bitmap that defines the texture in order to allow Balder to load the model in a Silverlight application.
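As a quick, hedged illustration of that edit: the exported meteor.ase is a plain-text file, and the texture reference lives in a *BITMAP entry. The exact line depends on your exporter and folder layout, but the change is typically from the absolute Windows path the exporter wrote, for example:

    *BITMAP "C:\Silverlight3D\Invaders3D\3DModels\METEOR\Bricks.PNG"

to a relative reference the Silverlight application can resolve, such as:

    *BITMAP "Bricks.PNG"

The relative form shown here is an assumption for this example; use whatever location your project actually serves the bitmap from.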
Applying Special Effects in 3D Game Development with Microsoft Silverlight 3: Part 2

Packt
18 Nov 2009
6 min read
Time for action – simulating fluids with movement

Your project manager is amazed by the shower of dozens of meteors in the background. However, he wants a more realistic background. He shows you a water simulation sample using Farseer Physics Engine. He wants you to use the wave simulation capabilities offered by this powerful physics simulator to create an asteroid belt.

First, we are going to create a new class to define a fluid model capable of setting the initial parameters and updating a wave controller provided by the physics simulator. We will use Farseer Physics Engine's wave controller to add real-time fluids with movement to our games. The following code is based on the Silverlight water sample offered with the physics simulator. However, in this case, we are not interested in collision detection capabilities, because we are going to create an asteroid belt in the background.

1. Stay in the 3DInvadersSilverlight project. Create a new class—FluidModel.
2. Replace the default using declarations with the following lines of code (we are going to use many classes and interfaces from Farseer Physics Engine):

    using System;
    using FarseerGames.FarseerPhysics;
    using FarseerGames.FarseerPhysics.Controllers;
    using FarseerGames.FarseerPhysics.Mathematics;

3. Add the following public property to hold the WaveController instance:

    public WaveController WaveController { get; private set; }

4. Add the following public properties to define the wave generator parameters:

    public float WaveGeneratorMax { get; set; }
    public float WaveGeneratorMin { get; set; }
    public float WaveGeneratorStep { get; set; }

5. Add the following constructor without parameters:

    public FluidModel()
    {
        // Assign the initial values for the wave generator parameters
        WaveGeneratorMax = 0.20f;
        WaveGeneratorMin = -0.15f;
        WaveGeneratorStep = 0.025f;
    }

6. Add the Initialize method to create and configure the WaveController instance using the PhysicsSimulator instance received as a parameter:

    public void Initialize(PhysicsSimulator physicsSimulator)
    {
        // The wave controller controls how the waves move.
        // It defines how big and how fast the wave is.
        // It is represented as a set of points equally spaced
        // horizontally along the width of the wave.
        WaveController = new WaveController();
        WaveController.Position = ConvertUnits.ToSimUnits(-20, 5);
        WaveController.Width = ConvertUnits.ToSimUnits(30);
        WaveController.Height = ConvertUnits.ToSimUnits(3);
        // The number of vertices that make up the surface of the wave
        WaveController.NodeCount = 40;
        // Determines how quickly the wave will dissipate
        WaveController.DampingCoefficient = .95f;
        // Establishes how fast the wave algorithm runs (in seconds)
        WaveController.Frequency = .16f;
        // The wave generator parameters simply move an end-point of the wave
        WaveController.WaveGeneratorMax = WaveGeneratorMax;
        WaveController.WaveGeneratorMin = WaveGeneratorMin;
        WaveController.WaveGeneratorStep = WaveGeneratorStep;
        WaveController.Initialize();
    }

7. Add the Update method to update the wave controller and the points that draw the waves' shapes:

    public void Update(TimeSpan elapsedTime)
    {
        WaveController.Update((float) elapsedTime.TotalSeconds);
    }

What just happened?

We now have a FluidModel class that creates, configures, and updates a WaveController instance according to an associated physics simulator. As we are going to work with different gravitational forces, we are going to use another, independent physics simulator to work with the FluidModel instance in our game.
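As a minimal sketch of that wiring—assuming Farseer's PhysicsSimulator constructor that takes a gravity vector, with the variable names and the gravity value invented for this example—the fluid gets its own simulator, and both objects are stepped once per frame:

    // Hypothetical setup: a second, independent simulator used only by the fluid
    PhysicsSimulator fluidSimulator = new PhysicsSimulator(new Vector2(0, 150));
    FluidModel fluidModel = new FluidModel();
    fluidModel.Initialize(fluidSimulator);

    // Hypothetical per-frame step (elapsed is the frame's TimeSpan)
    fluidSimulator.Update((float) elapsed.TotalSeconds);
    fluidModel.Update(elapsed);

Keeping this simulator separate means its gravity and wave motion never interfere with the simulator that already governs the meteors and the UFOs.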
Simulating waves

The wave controller offers many parameters to represent a set of points equally spaced horizontally along the width of one or many waves. The waves can be:

    Big or small
    Fast or slow
    Tall or short

The wave controller's parameters allow us to determine the number of vertices that make up the surface of the wave by assigning a value to its NodeCount property. In this case, we are going to create waves with 40 nodes, and each point is going to be represented by an asteroid:

    WaveController.NodeCount = 40;

The Initialize method defines the position, width, height, and other parameters for the wave controller. We have to convert our position values to the simulator's units. Thus, we use the ConvertUnits.ToSimUnits method. For example, this line defines the 2D vector for the wave's upper-left corner (X = -20 and Y = 5):

    WaveController.Position = ConvertUnits.ToSimUnits(-20, 5);

The best way to understand each parameter is to change its values and run the example using the new values. Using a wave controller, we can create amazing fluids with movement.

Time for action – creating a subclass for a complex asteroid belt

Now, we are going to create a specialized subclass of Actor (Balder.Core.Runtime.Actor) to load, create, and update a fluid with waves. This class will enable us to encapsulate an independent asteroid belt and add it to the game. In this case, it is a 3D character composed of many models (many instances of Mesh).

1. Stay in the 3DInvadersSilverlight project. Create a new class, FluidWithWaves (a subclass of Actor), using the following declaration:

    public class FluidWithWaves : Actor

2. Replace the default using declarations with the following lines of code (we are going to use many classes and interfaces from Balder, Farseer Physics Engine, and lists):

    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Media;
    using System.Windows.Shapes;
    // BALDER
    using Balder.Core;
    using Balder.Core.Geometries;
    using Balder.Core.Math;
    using Balder.Core.Runtime;
    // FARSEER PHYSICS
    using FarseerGames.FarseerPhysics;
    using FarseerGames.FarseerPhysics.Collisions;
    using FarseerGames.FarseerPhysics.Dynamics;
    using FarseerGames.FarseerPhysics.Factories;
    using FarseerGames.FarseerPhysics.Mathematics;
    // LISTS
    using System.Collections.Generic;

3. Add the following protected variables to hold references to the RealTimeGame and the Scene instances:

    protected RealTimeGame _game;
    protected Scene _scene;

4. Add the following private variables to hold the associated FluidModel instance, the collection of points that define the wave, and the list of meshes (asteroids):

    private FluidModel _fluidModel;
    private PointCollection _points;
    private List<Mesh> _meshList;

5. Add the following constructor with three parameters—the RealTimeGame, the Scene, and the PhysicsSimulator instances:

    public FluidWithWaves(RealTimeGame game, Scene scene, PhysicsSimulator physicsSimulator)
    {
        _game = game;
        _scene = scene;
        _fluidModel = new FluidModel();
        _fluidModel.Initialize(physicsSimulator);
        int count = _fluidModel.WaveController.NodeCount;
        _points = new PointCollection();
        for (int i = 0; i < count; i++)
        {
            _points.Add(new Point(
                ConvertUnits.ToDisplayUnits(_fluidModel.WaveController.XPosition[i]),
                ConvertUnits.ToDisplayUnits(_fluidModel.WaveController.CurrentWave[i])));
        }
    }

6. Override the LoadContent method to load the meteors' meshes and set their initial positions according to the points that define the wave:

    public override void LoadContent()
    {
        base.LoadContent();
        _meshList = new List<Mesh>(_points.Count);
        for (int i = 0; i < _points.Count; i++)
        {
            Mesh mesh = _game.ContentManager.Load<Mesh>("meteor.ase");
            _meshList.Add(mesh);
            _scene.AddNode(mesh);
            mesh.Position.X = (float) _points[i].X;
            mesh.Position.Y = (float) _points[i].Y;
            mesh.Position.Z = 0;
        }
    }

7. Override the Update method to update the fluid model and then change the meteors' positions, taking into account the points that define the wave according to the elapsed time:

    public override void Update()
    {
        base.Update();
        // Update the fluid model with the real-time game's elapsed time
        _fluidModel.Update(_game.ElapsedTime);
        _points.Clear();
        for (int i = 0; i < _fluidModel.WaveController.NodeCount; i++)
        {
            Point p = new Point(
                ConvertUnits.ToDisplayUnits(_fluidModel.WaveController.XPosition[i]),
                ConvertUnits.ToDisplayUnits(_fluidModel.WaveController.CurrentWave[i])
                + ConvertUnits.ToDisplayUnits(_fluidModel.WaveController.Position.Y));
            _points.Add(p);
        }
        // Update the positions of the meshes that define the wave's points
        for (int i = 0; i < _points.Count; i++)
        {
            _meshList[i].Position.X = (float) _points[i].X;
            _meshList[i].Position.Y = (float) _points[i].Y;
        }
    }
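Although the excerpt stops here, a short usage sketch helps to see where the class plugs in. The following is only an assumption based on the constructor's signature: the game passes itself, its scene, and the fluid's independent simulator, then drives LoadContent and Update as it does for its other actors. The _scene, _fluidSimulator, and _actors names are hypothetical:

    // Hypothetical registration inside the InvadersGame subclass
    FluidWithWaves asteroidBelt = new FluidWithWaves(this, _scene, _fluidSimulator);
    _actors.Add(asteroidBelt);  // assumed actor list polled by the game loop
    // The game loop would then call asteroidBelt.LoadContent() once,
    // and asteroidBelt.Update() on each frame, as with any other Actor.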
Unity Game Development: Interactions (Part 1)

Packt
18 Nov 2009
8 min read
To detect physical interactions between game objects, the most common method is to use a Collider component—an invisible net that surrounds an object's shape and is in charge of detecting collisions with other objects. The act of detecting and retrieving information from these collisions is known as collision detection.

Not only can we detect when two colliders interact, but we can also pre-empt a collision and perform many other useful tasks by utilizing a technique called Ray Casting. This draws a Ray—put simply, an invisible (non-rendered) vector line between two points in 3D space—which can also be used to detect an intersection with a game object's collider. Ray casting can also be used to retrieve lots of other useful information, such as the length of the ray (and therefore distance) and the point of impact at the end of the line.

In the given example, a ray facing the forward direction from our character is demonstrated. In addition to the direction, a ray can also be given a specific length, or allowed to cast until it finds an object.

Over the course of the article, we will work with the outpost model. Because this asset has been animated for us, the animation of the outpost's door opening and closing is ready to be triggered—once the model is placed into our scene. This can be done with either collision detection or ray casting, and we will explore what you will need to do to implement either approach. Let's begin by looking at collision detection and when it may be appropriate to use ray casting instead of, or in complement to, collision detection.

Exploring collisions

When objects collide in any game engine, information about the collision event becomes available. By recording a variety of information upon the moment of impact, the game engine can respond in a realistic manner. For example, in a game involving physics, if an object falls to the ground from a height, then the engine needs to know which part of the object hit the ground first. With that information, it can correctly and realistically control the object's reaction to the impact. Of course, Unity handles these kinds of collisions and stores the information on your behalf, and you only have to retrieve it in order to do something with it.

In the example of opening a door, we would need to detect collisions between the player character's collider and a collider on or near the door. It would make little sense to detect collisions elsewhere, as we would likely need to trigger the animation of the door when the player is near enough to walk through it, or to expect it to open for them. As a result, we would check for collisions between the player character's collider and the door's collider. However, we would need to extend the depth of the door's collider so that the player character's collider did not need to be pressed up against the door in order to trigger a collision, as shown in the following illustration.

However, the problem with extending the depth of the collider is that the game's interaction with it becomes unrealistic. In the example of our door, the extended collider protruding from the visual surface of the door would mean that we would bump into an invisible surface which would cause our character to stop in their tracks. Although we would use this collision to trigger the opening of the door through animation, the initial bump into the extended collider would seem unnatural to the player and thus detract from their immersion in the game.
So while collision detection will work perfectly well between the player character collider and the door collider, there are drawbacks that call for us, as creative game developers, to look for a more intuitive approach, and this is where ray casting comes in.

Ray casting

While we can detect collisions between the player character's collider and a collider that fits the door object, a more appropriate method may be to check for when the player character is facing the door we expect to open and is within a certain distance of it. This can be done by casting a ray forward from the player's forward direction and restricting its length. This means that when approaching the door, the player needn't walk right up to it—or bump into an extended collider—in order for it to be detected. It also ensures that the player cannot walk up to the door facing away from it and still open it—with ray casting they must be facing the door in order to use it, which makes sense.

In common usage, ray casting is done where collision detection is simply too imprecise to respond correctly. For example, reactions that need to occur with a frame-by-frame level of detail may happen too quickly for a collision to take place. In this instance, we need to preemptively detect whether a collision is likely to occur, rather than detect the collision itself. Let's look at a practical example of this problem.

The frame miss

In the example of a gun in a 3D shooter game, ray casting is used to predict the impact of a gunshot when a gun is fired. Because of the speed of an actual bullet, simulating the flight path of a bullet heading toward a target is very difficult to represent visually in a way that would satisfy and make sense to the player. This is down to the frame-based nature of the way in which games are rendered. If you consider that when a real gun is fired, it takes a tiny amount of time to reach its target—and as far as an observer is concerned it could be said to happen instantly—we can assume that even when rendering over 25 frames of our game per second, the bullet would need to have reached its target within only a few frames.

In the example above, a bullet is fired from a gun. In order to be realistic, the bullet has to move at a speed of 500 feet per second. If the frame rate is 25 frames per second, then the bullet moves 20 feet per frame. The problem is that a person is only about 2 feet in diameter, so a bullet that only exists at 20-foot intervals will very likely skip straight over enemies standing between those positions—for example, enemies shown at 5 and 25 feet away that should have been hit. This is where prediction comes into play.

Predictive collision detection

Instead of checking for a collision with an actual bullet object, we find out whether a fired bullet will hit its target. By casting a ray forward from the gun object (thus using its forward direction) on the same frame that the player presses the fire button, we can immediately check which objects intersect the ray. We can do this because rays are drawn immediately. Think of them like a laser pointer—when you switch on the laser, we do not see the light moving forward because it travels at the speed of light—to us it simply appears. Rays work in the same way, so that whenever the player in a ray-based shooting game presses fire, they draw a ray in the direction that they are aiming. With this ray, they can retrieve information on the collider that is hit. Moreover, by identifying the collider, the game object itself can be addressed and scripted to behave accordingly.
Even detailed information, such as the point of impact, can be returned and used to affect the resultant reaction, for example, causing the enemy to recoil in a particular direction. In our shooting game example, we would likely invoke scripting to kill or physically repel the enemy whose collider the ray hits, and as a result of the immediacy of rays, we can do this on the frame after the ray collides with, or intersects, the enemy collider. This gives the effect of a real gunshot because the reaction is registered immediately.

It is also worth noting that shooting games often use the otherwise invisible rays to render brief visible lines to help with aim and give the player visual feedback, but do not confuse these lines with ray casts—the rays are simply used as a path for line rendering.

Adding the outpost

Before we begin to use both collision detection and ray casting to open the door of our outpost, we'll need to introduce it to the scene. To begin, drag the outpost model from the Project panel to the Scene view and drop it anywhere—bear in mind you cannot position it as you drag-and-drop; this is done once you have dropped the model (that is, let go of the mouse). Once the outpost is in the Scene, you'll notice its name has also appeared in the Hierarchy panel and that it has automatically become selected. Now you're ready to position and scale it!
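Before moving on, here is a rough C# sketch of the forward ray check described above (the article's own scripts use JavaScript). Physics.Raycast is the real Unity call; the 3-unit reach and the "Door" tag are assumptions invented for this example, not part of the article's project:

    using UnityEngine;

    public class DoorRayCheck : MonoBehaviour
    {
        // How far in front of the player we look for the door (assumed value)
        public float reach = 3f;

        void Update()
        {
            RaycastHit hit;
            // Cast a ray from the player's position along its forward direction
            if (Physics.Raycast(transform.position, transform.forward, out hit, reach))
            {
                // "Door" is a hypothetical tag identifying the outpost door's collider
                if (hit.collider.CompareTag("Door"))
                {
                    // A real script would trigger the door-opening animation here
                    Debug.Log("Door in reach at distance " + hit.distance);
                }
            }
        }
    }

Because the ray is restricted to both the player's facing direction and a short length, the door only reacts when the player is close and looking at it—exactly the behavior the extended collider could not provide.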
Papervision3D External Models: Part 2

Packt
17 Nov 2009
7 min read
Creating and loading models using SketchUp

SketchUp is a free 3D modeling program, which was acquired by Google in 2006. It has been designed as a 3D modeling program that's easier to use than other 3D modeling programs. A key to its success is its easy learning curve compared to other 3D tools. Mac OS X, Windows XP, and Windows Vista are supported by this program, which can be downloaded from http://sketchup.google.com/. Although there is a commercial SketchUp Pro version available, the free version works fine in conjunction with Papervision3D.

An interesting feature for non-3D modelers is the integration with Google's 3D Warehouse. This makes it possible to search for models that have been contributed by other SketchUp users. These models are free of any rights and can be used in commercial (Papervision3D) projects.

Exporting a model from Google's 3D Warehouse for Papervision3D

There are several ways to load a model coming from Google 3D Warehouse into Papervision3D. One of them is by downloading a SketchUp file and exporting it to a format Papervision3D works with. This approach will be explained here.

The strength of Google 3D Warehouse is also its weakness. Anybody with a Google account can add models to the warehouse, which results in widely varying model quality. Some are very optimized and work fluently, whereas others reveal problems when you try to make them work in Papervision3D—or they may not work at all, as they're made of too many polygons to run in Papervision3D. Take this into account while searching for a model in the 3D Warehouse. For our example, we're going to export a picnic table that was found on Google 3D Warehouse.

1. Start SketchUp. Choose a template when prompted. This example uses Simple Template – Meters, although there shouldn't be a problem with using one of the other templates.
2. Go to File | 3D Warehouse | Get models to open 3D Warehouse inside SketchUp.
3. Enter a keyword to search for. In this example that will be picnic table.
4. Select a model of your choice. Keep in mind that it has to be low poly, which is something you usually find out by trial and error.
5. Click on Download Model to import the model into SketchUp, and click OK when asked if you want to load the model directly into your Google SketchUp model.
6. Place the model at the origin of the scene. To follow these steps, it doesn't have to be the exact origin—approximately is good enough. By default, a 2D character called Sang appears in the scene; you do not necessarily have to remove it, as it will be ignored during export. Because the search returned a lot of picnic tables varying in quality, there is a SketchUp file (which can be downloaded from http://www.packtpub.com/files/code/5722_Code.zip) that has a picnic table already placed at the origin. Of course, you could also choose another picnic table, or any other object of your choice.
7. Leave the model as it is and export it. Go to File | Export | 3D Model. Export it using the Google Earth (*.kmz) format and save it in your assets folder.

The file format we're exporting to was originally meant to display 3D objects in Google Earth. The file ends with .kmz as its extension and is actually a ZIP archive that contains a COLLADA file and the textures. In the early days of Papervision3D, it was a common trick to create a model using SketchUp and then get the COLLADA file out of the exported Google Earth file, as the Google Earth KMZ file format wasn't supported back then.
Importing a Google Earth model into Papervision3D

Now that we have successfully exported a model from SketchUp, we will import it into Papervision3D. This doesn't really differ from loading a COLLADA or 3D Studio file. The class we use for parsing the created PicnicTable.kmz file is called KMZ and can be found in the parsers package.

1. Add the following line to the import section of your document class:

    import org.papervision3d.objects.parsers.KMZ;

2. Replace or comment the code that loads the animated COLLADA model and defines the animations from the previous example.
3. In the init() method we can then instantiate the KMZ class, assign it to the model class property, and load the KMZ file. Make sure you have saved the PicnicTable.kmz file into the assets folder of your project.

    model = new KMZ();
    model.addEventListener(FileLoadEvent.LOAD_COMPLETE, modelLoaded);
    KMZ(model).load("assets/PicnicTable.kmz");

That looks familiar—right? Now let's publish the project, and your model should appear on the screen.

Notice that in many cases, a model downloaded and exported from Google 3D Warehouse might appear very small on your screen in Papervision3D. This is because such models are made with other metric units than we use in Papervision3D. Our example application places the camera 1000 units away from the origin of the scene. Many 3D Warehouse models are made using units that define meters or feet, which makes sense if you were to translate them to real-world units. When a model is, for example, 1 meter wide in SketchUp, this equals 1 unit in Papervision3D. As you can imagine, a 1-unit wide object in Papervision3D will barely be visible when placing the camera at a distance of 1000. To solve this you could use one of the following options:

- Use other units in Papervision3D and place your camera at a distance of 5 instead of 1000. Usually you can do this at the beginning of your project, but not while the project is already in progress, as this might involve a lot of changes in your code due to other objects, animations, and calculations that are made with a different scale.
- Scale your model inside SketchUp to a value that matches the units as you use them in Papervision3D. When the first option can't be realized, this option is recommended.
- Scale the loaded model in Papervision3D by changing the scale property of your model:

    model.scale = 20;

  Although this is an option that works, it's not recommended. Papervision3D has some issues with scaled 3D objects at the time of writing.

It is a good convention to use the same units in Papervision3D and your 3D modeling tool. If you want to learn more about modeling with SketchUp, visit the support page on the SketchUp web site, http://sketchup.google.com/support. You'll find help resources such as video tutorials and a help forum.

Creating and loading models using Blender

Blender is an open source, platform-independent 3D modeling tool, which was first released in 1998 as a shareware program. It went open source in 2002. Its features are similar to those in commercial tools such as 3ds Max, Maya, and Cinema4D. However, it has a reputation of having a difficult learning curve compared to other 3D modeling programs. Blender is strongly based on the usage of keyboard shortcuts rather than menus, which makes it hard for new users to find the options they're looking for. In the last few years, more menu-driven interfaces have been added.
It's not in the scope of this article to teach you everything about the modeling tools that can be used with Papervision3D, and the same goes for Blender. There are many resources, such as online tutorials and books, that cover how to work with Blender. A link to the Blender installer download can be found on its web site: www.blender.org.
Unity Game Development: Welcome to the 3D world

Packt
27 Oct 2009
21 min read
Getting to grips with 3D

Let's take a look at the crucial elements of 3D worlds, and how Unity lets you develop games in the third dimension.

Coordinates

If you have worked with any 3D artworking application before, you'll likely be familiar with the concept of the Z-axis. The Z-axis, in addition to the existing X for horizontal and Y for vertical, represents depth. In 3D applications, you'll see information on objects laid out in X, Y, Z format—this is known as the Cartesian coordinate method. Dimensions, rotational values, and positions in the 3D world can all be described in this way. In any documentation of 3D, you'll see such information written with parentheses, shown as follows:

    (10, 15, 10)

This is mostly for neatness, and also due to the fact that in programming, these values must be written in this way. Regardless of their presentation, you can assume that any set of three values separated by commas will be in X, Y, Z order.

Local space versus World space

A crucial concept to begin looking at is the difference between Local space and World space. In any 3D package, the world you will work in is technically infinite, and it can be difficult to keep track of the location of objects within it. In every 3D world, there is a point of origin, often referred to as zero, as it is represented by the position (0,0,0). All world positions of objects in 3D are relative to world zero. However, to make things simpler, we also use Local space (also known as Object space) to define object positions in relation to one another. Local space assumes that every object has its own zero point, which is the point from which its axis handles emerge. This is usually the center of the object. By creating relationships between objects, we can compare their positions in relation to one another. Such relationships, known as parent-child relationships, mean that we can calculate distances from other objects using Local space, with the parent object's position becoming the new zero point for any of its child objects.

Vectors

You'll also see 3D vectors described in Cartesian coordinates. Like their 2D counterparts, 3D vectors are simply lines drawn in the 3D world that have a direction and a length. Vectors can be moved in world space, but remain unchanged themselves. Vectors are useful in a game engine context, as they allow us to calculate distances, relative angles between objects, and the direction of objects.

Cameras

Cameras are essential in the 3D world, as they act as the viewport for the screen. Having a pyramid-shaped field of vision, cameras can be placed at any point in the world, animated, or attached to characters or objects as part of a game scenario. With an adjustable Field of Vision (FOV), 3D cameras are your viewport on the 3D world.

In game engines, you'll notice that effects such as lighting, motion blurs, and other effects are applied to the camera to help with game simulation of a person's eye view of the world—you can even add a few cinematic effects that the human eye will never experience, such as lens flares when looking at the sun!

Most modern 3D games utilize multiple cameras to show parts of the game world that the character camera is not currently looking at—like a 'cutaway' in cinematic terms. Unity does this with ease by allowing many cameras in a single scene, which can be scripted to act as the main camera at any point during runtime. Multiple cameras can also be used in a game to control the rendering of particular 2D and 3D elements separately as part of the optimization process. For example, objects may be grouped in layers, and cameras may be assigned to render objects in particular layers. This gives us more control over individual renders of certain elements in the game.
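As a quick, hedged aside, here is how the coordinate, local-space, and vector ideas above surface in a Unity script. It is shown in C# (the examples later in this article use JavaScript); position, localPosition, and magnitude are real Unity API members, while the class and values are invented for illustration:

    using UnityEngine;

    public class SpaceDemo : MonoBehaviour
    {
        void Start()
        {
            // World space: position relative to world zero (0, 0, 0)
            transform.position = new Vector3(10f, 15f, 10f);

            // Local space: position relative to this object's parent,
            // whose own position acts as the child's zero point
            transform.localPosition = Vector3.zero;

            // A vector has direction and length: here, from this object toward world zero
            Vector3 toOrigin = Vector3.zero - transform.position;
            Debug.Log("Distance to world zero: " + toOrigin.magnitude);
        }
    }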
Polygons, edges, vertices, and meshes

In constructing 3D shapes, all objects are ultimately made up of interconnected 2D shapes known as polygons. On importing models from a modelling application, Unity converts all polygons to polygon triangles. Polygon triangles (also referred to as faces) are in turn made up of three connected edges. The locations at which these edges meet are known as points or vertices. By knowing these locations, game engines are able to make calculations regarding the points of impact, known as collisions, when using complex collision detection with Mesh Colliders—such as in shooting games, to detect the exact location at which a bullet has hit another object. By combining many linked polygons, 3D modelling applications allow us to build complex shapes, known as meshes.

In addition to building 3D shapes, the data stored in meshes can have many other uses. For example, it can be used as surface navigational data, by making objects in a game follow the vertices.

In game projects, it is crucial for the developer to understand the importance of polygon count. The polygon count is the total number of polygons, often in reference to a model, but also in reference to an entire game level. The higher the number of polygons, the more work your computer must do to render the objects onscreen. This is why, in the past decade or so, we've seen an increase in the level of detail from early 3D games to those of today—simply compare the visual detail in a game such as Id's Quake (1996) with the detail seen in a game such as Epic's Gears Of War (2006). As a result of faster technology, game developers are now able to model 3D characters and worlds with a much higher polygon count, and this trend will inevitably continue.

Materials, textures, and shaders

Materials are a common concept in all 3D applications, as they provide the means to set the visual appearance of a 3D model. From basic colors to reflective image-based surfaces, materials handle everything.

Starting with a simple color, and with the option of using one or more images—known as textures—in a single material, the material works with the shader, which is a script in charge of the style of rendering. For example, in a reflective shader, the material will render reflections of surrounding objects, but maintain its color or the look of the image applied as its texture.

In Unity, the use of materials is easy. Any materials created in your 3D modelling package will be imported and recreated automatically by the engine and created as assets to use later. You can also create your own materials from scratch, assigning images as texture files and selecting a shader from a large library that comes built-in. You may also write your own shader scripts, or implement those written by members of the Unity community, giving you more freedom for expansion beyond the included set.

Crucially, when creating textures for a game in a graphics package such as Photoshop, you must be aware of the resolution. Game textures are expected to be square, and sized to a power of 2.
This means that dimensions should run as follows:

    128 x 128
    256 x 256
    512 x 512
    1024 x 1024

Creating textures of these sizes will mean that they can be tiled successfully by the game engine. You should also be aware that the larger the texture file you use, the more processing power you'll be demanding from the player's computer. Therefore, always remember to try resizing your graphics to the smallest power-of-2 dimensions possible, without sacrificing too much in the way of quality.

Rigid Body physics

For developers working with game engines, physics engines provide an accompanying way of simulating real-world responses for objects in games. In Unity, the game engine uses Nvidia's PhysX engine, a popular and highly accurate commercial physics engine.

In game engines, there is no assumption that an object should be affected by physics—firstly because it requires a lot of processing power, and secondly because it simply doesn't always make sense. For example, in a 3D driving game, it makes sense for the cars to be under the influence of the physics engine, but not the track or surrounding objects, such as trees, walls, and so on—they simply don't need to be. For this reason, when making games, a Rigid Body component is given to any object you want under the control of the physics engine.

Physics engines for games use the Rigid Body dynamics system of creating realistic motion. This simply means that instead of objects being static in the 3D world, they can have the following properties:

    Mass
    Gravity
    Velocity
    Friction

As the power of hardware and software increases, rigid body physics is becoming more widely applied in games, as it offers the potential for more varied and realistic simulation.

Collision detection

While more crucial in game engines than in 3D animation, collision detection is the way we analyze our 3D world for inter-object collisions. By giving an object a Collider component, we are effectively placing an invisible net around it. This net mimics its shape and is in charge of reporting any collisions with other colliders, making the game engine respond accordingly. For example, in a ten-pin bowling game, a simple spherical collider will surround the ball, while the pins themselves will have either a simple capsule collider or, for a more realistic collision, employ a Mesh collider. On impact, the colliders of any affected objects will report to the physics engine, which will dictate their reaction, based on the direction of impact, speed, and other factors.

In this example, employing a mesh collider to fit exactly to the shape of the pin model would be more accurate, but it is more expensive in processing terms. This simply means that it demands more processing power from the computer, the cost of which is reflected in slower performance—hence the term expensive.

Essential Unity concepts

Unity makes the game production process simple by giving you a set of logical steps to build any conceivable game scenario. Renowned for being non-game-type specific, Unity offers you a blank canvas and a set of consistent procedures to let your imagination be the limit of your creativity. By establishing its use of the Game Object (GO) concept, you are able to break down parts of your game into easily manageable objects, which are made of many individual Component parts. By making individual objects within the game and introducing functionality to them with each component you add, you are able to infinitely expand your game in a logical, progressive manner.
Component parts in turn have variables—essentially settings to control them with. By adjusting these variables, you'll have complete control over the effect that Component has on your object. Let's take a look at a simple example.

The Unity way

If I wished to have a bouncing ball as part of a game, then I'd begin with a sphere. This can quickly be created from the Unity menus, and will give you a new Game Object with a sphere mesh (a net of a 3D shape), and a Renderer component to make it visible. Having created this, I can then add a Rigidbody. A Rigidbody (Unity refers to most two-word phrases as a single-word term) is a component which tells Unity to apply its physics engine to an object. With this comes mass, gravity, and the ability to apply forces to the object, either when the player commands it or simply when it collides with another object. Our sphere will now fall to the ground when the game runs, but how do we make it bounce? This is simple! The collider component has a variable called Physic Material—this is a setting for the Rigidbody, defining how it will react to other objects' surfaces. Here we can select Bouncy, an available preset, and voila! Our bouncing ball is complete, in only a few clicks.

This streamlined approach for the most basic of tasks, such as the previous example, seems pedestrian at first. However, you'll soon find that by applying this approach to more complex tasks, they become very simple to achieve.
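The same result could also be reached in script rather than through the menus. The following C# sketch is only an illustration—CreatePrimitive and AddComponent are real Unity calls, but loading a Physic Material named Bouncy from a Resources folder is an assumption made for this example, not part of the walkthrough above:

    using UnityEngine;

    public class BouncingBallSetup : MonoBehaviour
    {
        void Start()
        {
            // Create a sphere Game Object with a mesh, renderer, and sphere collider
            GameObject ball = GameObject.CreatePrimitive(PrimitiveType.Sphere);
            ball.transform.position = new Vector3(0f, 10f, 0f);

            // Hand the sphere over to the physics engine
            ball.AddComponent<Rigidbody>();

            // Hypothetical: assumes a Physic Material asset called "Bouncy"
            // has been placed in a Resources folder of the project
            PhysicMaterial bouncy = Resources.Load<PhysicMaterial>("Bouncy");
            ball.GetComponent<SphereCollider>().material = bouncy;
        }
    }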
Here is an overview of those key Unity concepts, plus a few more.

Assets

These are the building blocks of all Unity projects. From graphics in the form of image files, through 3D models, to sound files, Unity refers to the files you'll use to create your game as assets. This is why, in any Unity project folder, all files used are stored in a child folder named Assets.

Scenes

In Unity, you should think of scenes as individual levels, or areas of game content (such as menus). By constructing your game with many scenes, you'll be able to distribute loading times and test different parts of your game individually.

Game Objects

When an asset is used in a game scene, it becomes a new Game Object—referred to in Unity terms, especially in scripting, using the contracted term "GameObject". All GameObjects contain at least one component to begin with, that is, the Transform component. Transform simply tells the Unity engine the position, rotation, and scale of an object—all described in X, Y, Z coordinate (or, in the case of scale, dimensional) order. In turn, the component can then be addressed in scripting in order to set an object's position, rotation, or scale. From this initial component, you will build upon game objects with further components, adding the required functionality to build every part of any game scenario you can imagine.

Components

Components come in various forms. They can be for creating behavior, defining appearance, and influencing other aspects of an object's function in the game. By 'attaching' components to an object, you can immediately apply new parts of the game engine to your object. Common components of game production come built-in with Unity, from the Rigidbody component mentioned earlier, down to simpler elements such as lights, cameras, particle emitters, and more. To build further interactive elements of the game, you'll write scripts, which are treated as components in Unity.

Scripts

While being considered by Unity to be Components, scripts are an essential part of game production, and deserve a mention as a key concept. You can write scripts in JavaScript, but you should be aware that Unity also offers you the opportunity to write in C# and Boo (a derivative of the Python language). I've chosen to demonstrate Unity with JavaScript, as it is a functional programming language with a simple-to-follow syntax that some of you may already have encountered in other endeavors, such as Adobe Flash development in ActionScript, or in using JavaScript itself for web development.

Unity does not require you to learn how the coding of its own engine works or how to modify it, but you will be utilizing scripting in almost every game scenario you develop. The beauty of using Unity scripting is that any script you write for your game will be straightforward enough after a few examples, as Unity has its own built-in Behavior class—a set of scripting instructions for you to call upon. For many new developers, getting to grips with scripting can be a daunting prospect, and one that threatens to put off new Unity users who are simply accustomed to design only. I will introduce scripting one step at a time, with a mind to showing you not only the importance, but also the power of effective scripting for your Unity games.

To write scripts, you'll use Unity's standalone script editor. On Mac, this is an application called Unitron, and on PC, Uniscite. These separate applications can be found in the Unity application folder on your PC or Mac, and will be launched any time you edit a new script or an existing one. Amending and saving scripts in the script editor will immediately update the script in Unity. You may also designate your own script editor in the Unity preferences if you wish.

Prefabs

Unity's development approach hinges on the GameObject concept, but it also has a clever way to store objects as assets to be reused in different parts of your game, and then 'spawned' or 'cloned' at any time. By creating complex objects with various components and settings, you'll effectively be building a template for something you may want to spawn multiple instances of, with each instance then being individually modifiable. Consider a crate as an example—you may have given the object in the game a mass and written scripted behaviors for its destruction; chances are you'll want to use this object more than once in a game, and perhaps even in games other than the one it was designed for. Prefabs allow you to store the object, complete with components and current configuration. Comparable to the MovieClip concept in Adobe Flash, think of prefabs simply as empty containers that you can fill with objects to form a data template you'll likely recycle.

The interface

The Unity interface, like many other working environments, has a customizable layout. Consisting of several dockable spaces, you can pick which parts of the interface appear where. Let's take a look at a typical Unity layout:

As the previous image demonstrates (PC version shown), there are five different elements you'll be dealing with:

    Scene [1]—where the game is constructed
    Hierarchy [2]—a list of GameObjects in the scene
    Inspector [3]—settings for the currently selected asset/object
    Game [4]—the preview window, active only in play mode
    Project [5]—a list of your project's assets; acts as a library

The Scene window and Hierarchy

The Scene window is where you will build the entirety of your game project in Unity. This window offers a perspective (full 3D) view, which is switchable to orthographic (top-down, side-on, and front-on) views.
This acts as a fully rendered 'Editor' view of the game world you build. Dragging an asset to this window will make it an active game object. The Scene view is tied to the Hierarchy, which lists all active objects in the currently open scene in ascending alphabetical order.

The Scene window is also accompanied by four useful control buttons, as shown in the previous image. Accessible from the keyboard using the keys Q, W, E, and R, these perform the following operations:

    The Hand tool [Q]: This tool allows navigation of the Scene window. By itself, it allows you to drag around in the Scene window to pan your view. Holding down Alt with this tool selected will allow you to rotate your view, and holding the Command key (Apple) or Ctrl key (PC) will allow you to zoom. Holding the Shift key down will also speed up both of these functions.
    The Translate tool [W]: This is your active selection tool. As you can completely interact with the Scene window, selecting objects either in the Hierarchy or Scene means you'll be able to drag the object's axis handles in order to reposition them.
    The Rotate tool [E]: This works in the same way as Translate, using visual 'handles' to allow you to rotate your object around each axis.
    The Scale tool [R]: Again, this tool works as the Translate and Rotate tools do. It adjusts the size or scale of an object using visual handles.

Having selected objects in either the Scene or Hierarchy, they immediately become selected in both. Selection of objects in this way will also show the properties of the object in the Inspector. Given that you may not be able to see in the Scene window an object you've selected in the Hierarchy, Unity also provides the F key to focus your Scene view on that object. Simply select an object from the Hierarchy, hover your mouse cursor over the Scene window, and press F.

The Inspector

Think of the Inspector as your personal toolkit to adjust every element of any game object or asset in your project. Much like the Property Inspector concept utilized by Adobe in Flash and Dreamweaver, this is a context-sensitive window. All this means is that whatever you select, the Inspector will change to show its relevant properties—it is sensitive to the context in which you are working.

The Inspector will show every component part of anything you select, and allow you to adjust the variables of these components using simple form elements such as text input boxes, slider scales, buttons, and drop-down menus. Many of these variables are tied into Unity's drag-and-drop system, which means that rather than selecting from a drop-down menu, if it is more convenient, you can drag-and-drop to choose settings.

This window is not only for inspecting objects. It will also change to show the various options for your project when choosing them from the Edit menu, as it acts as an ideal space to show you preferences—changing back to showing component properties as soon as you reselect an object or asset.

In this screenshot, the Inspector is showing properties for a target object in the game. The object itself features two components—Transform and Animation. The Inspector will allow you to make changes to settings in either of them. Also notice that to temporarily disable any component at any time—which will become very useful for testing and experimentation—you can simply deselect the box to the left of the component's name.
Likewise, if you wish to switch off an entire object at a time, then you may deselect the box next to its name at the top of the Inspector window.

The Project window

The Project window is a direct view of the Assets folder of your project. Every Unity project is made up of a parent folder containing three subfolders—Assets, Library, and, while the Unity Editor is running, a Temp folder. Placing assets into the Assets folder means you'll immediately be able to see them in the Project window, and they'll also be automatically imported into your Unity project. Likewise, changing any asset located in the Assets folder and resaving it from a third-party application, such as Photoshop, will cause Unity to reimport the asset, reflecting your changes immediately in your project and in any active scenes that use that particular asset.

It is important to remember that you should only alter asset locations and names using the Project window—using Finder (Mac) or Windows Explorer (PC) to do so may break connections in your Unity project. Therefore, to relocate or rename objects in your Assets folder, use Unity's Project window instead.

The Project window is accompanied by a Create button. This allows the creation of any assets that can be made within Unity, for example, scripts, prefabs, and materials.

The Game window

The Game window is invoked by pressing the Play button and acts as a realistic test of your game. It also has settings for screen ratio, which will come in handy when testing how much of the player's view will be restricted in certain ratios, such as 4:3 (as opposed to wide) screen resolutions. Having pressed Play, it is crucial that you bear in mind the following advice:

In play mode, the adjustments you make to any parts of your game scene are merely temporary—it is meant as a testing mode only, and when you press Play again to stop the game, all changes made during play mode will be undone. This can often trip up new users, so don't forget about it!

The Game window can also be set to Maximize when you invoke play mode, giving you a better view of the game at nearly fullscreen—the window expands to fill the interface. It is worth noting that you can expand any part of the interface in this way, simply by hovering over the part you wish to expand and pressing the Space bar.

Summary

Here we have looked at the key concepts you'll need to understand for developing games with Unity. Due to space constraints, I cannot cover everything in depth, as 3D development is a vast area of study. With this in mind, I strongly recommend that you continue to read more on the topics discussed in this article, in order to supplement your study of 3D development. Each individual piece of software you encounter will have its own dedicated tutorials and resources for learning it. If you wish to learn 3D artwork to complement your work in Unity, I recommend that you familiarize yourself with your chosen package, after researching the list of tools that work with the Unity pipeline and choosing the one that suits you best.
Building your First Application with Papervision3D: Part 1

Packt
26 Oct 2009
7 min read
This article covers the following:

    Introduction to classes and object-oriented programming
    Working with the document class/main application file

Introduction to classes and object-oriented programming

In this article we will learn to write our own classes that can be used in Flash, along with Flex Builder and Flash Builder. If you're a developer using the Flash IDE, it might be the first time you'll write your own classes. Don't worry about the difficulty level if you are new to classes. Learning how to program in an OOP way using Papervision3D is a good way to become more familiar with classes, and it might motivate you to learn more about this subject. You will not be the first to learn OOP programming as a side effect of learning an external library such as Papervision3D.

So what are classes? In fact, they are nothing more than a set of functions (methods) and variables (properties) grouped together in a single file, which is known as the class definition. A class forms the blueprint for new objects that you create.

Sounds a bit vague? What if you were told that you've probably already used classes? Each object you create from the Flash API is based on classes. For example, Sprites, MovieClips, and TextFields are objects that you have probably used in your code before. In fact, these objects are classes. The blueprints for these objects and their classes are already incorporated in Flash. First, have a look at how you can use them to create a new Sprite object:

    var mySprite:Sprite = new Sprite();

Looks familiar—right? By doing this, you create a new copy of the Sprite class as an object called mySprite. This is called instantiation of an object. There's no difference between instantiating built-in classes and instantiating custom-written classes. Papervision3D is a set of custom classes.

    var myObject3D:DisplayObject3D = new DisplayObject3D();

So, although you know how to use classes, creating your own classes might be new to you.

Creating a custom class

An ActionScript class is basically a text-based file with an .as extension, stored somewhere on your computer and containing ActionScript code. This code works as the previously mentioned blueprint for an object. Let's see what that blueprint looks like:

    package
    {
        public class ExampleClass
        {
            public var myName:String = "Paul";

            public function ExampleClass()
            {
            }

            public function returnMyName():String
            {
                return "My name is " + myName;
            }
        }
    }

On the first line, you'll find the package statement, followed by an opening curly bracket and ended with a closing curly bracket at the bottom of the class. Packages are a way to group classes together and represent the folder in which you saved the file. Imagine you have created a folder called myPackage inside the same folder where you've saved an FLA, or inside a defined source folder. In order to have access to the folder and its classes, you will need to define the package using the folder's name, as shown next:

    package myPackage
    {
        ...
    }

This works the same way for subfolders. Let's imagine a folder called subPackage has been added to the imaginary folder myPackage. The package definition for classes inside this subfolder should then look like this:

    package myPackage.subPackage
    {
        ...
    }

If you don't create a special folder to group your classes, you can use the so-called default package instead of defining a name. All the examples in this article will use default packages. However, for real projects it's good practice to set up a structure in order to organize your files.
After the package definition, you'll find the class definition, which looks as follows:

    public class ExampleClass
    {
        ...
    }

The name of the class must be the same as the name of the class file. In this example, the file needs to be saved as ExampleClass.as. Besides the fact that working with packages is a good way to organize your files, they can also be used to uniquely identify each class in a project.

At the top of the class definition you'll see the word public, a keyword defining that the class is accessible to all other code in the project. This keyword is called an access modifier. The defined name of the class will be used to instantiate new copies of the class. Instantiating this class could be done like this:

    var classExample:ExampleClass = new ExampleClass();

Inside the class, a string variable is defined in pretty much the same way as you would when working with timeline scripting:

    public var myName:String = "Paul";

The definition of a variable inside a class is called a class property. You can add as many properties to a class as you want. Each definition starts off with an access modifier. In this case the access modifier is set to public, meaning that the property is both readable and writeable by code located outside the class:

    var classExample:ExampleClass = new ExampleClass();
    classExample.myName = "Jeff";

As you can see, this creates an instance of ExampleClass. We also changed the myName property from Paul to Jeff. When a property is defined as public, this is allowed to happen. In case you want this property to be accessible only inside the class itself, you can mark the property as private:

    private var myName:String = "Paul";

Executing the previous code to change myName from Paul to Jeff will then result in a compile-time error.

In the next lines we see the creation of a new function called ExampleClass in the class. Functions that have the same name as the class are known as constructors:

    public function ExampleClass()
    {
    }

Constructors always have a public access modifier and are called automatically each time the class is instantiated. This means that the code inside this function will be executed automatically. All other function definitions inside the class are called methods:

    public function returnMyName():String
    {
        return "My name is " + myName;
    }

At the end of this method definition, you'll notice a colon followed by a data type. This defines the type of object the method returns. When you work with functions on the timeline in Flash, you can define this as well, but it's not required to do so. The method returnMyName() is defined to return a string. In case you do not want to return any data type, you can define this by using void as the return type:

    public function returnNothing():void
    {
        // Do not return anything
    }

We need to define the return type only for methods and not for constructors.

Classes, as well as properties and methods, have access modifiers that define from where each of these objects can be accessed. So far we've seen that the public keyword allows code outside the class to access a property, or call a method inside the class. When you allow access from other code, you need to be aware that this code can mess up your class. You can prevent this by using the private keyword, which makes the property or method accessible only inside your class. Two other access modifiers that are often used are internal and protected. Classes inside the same package can access internal methods or properties, and protected methods can only be used inside a related subclass.
Subclasses are a part of inheritance, which will be explained shortly. As long as you have no plans to give scripts outside your class access, it's good practice to mark all properties and methods of a class as private by default. When defining a property or method, always ask yourself whether you want it to be accessible from outside your class.
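To see the four access modifiers side by side, here is a minimal sketch; the class and property names are invented purely for illustration:

package
{
  public class AccessExample
  {
    public var visibleToAll:String = "readable and writable from anywhere";
    private var secret:String = "visible only inside this class";
    internal var packageOnly:String = "visible to classes in the same package";
    protected var forSubclasses:String = "visible only to related subclasses";

    public function AccessExample()
    {
    }

    // A public method is the usual way to expose a private property safely
    public function getSecret():String
    {
      return secret;
    }
  }
}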
Building your First Application with Papervision3D: Part 2

Packt
26 Oct 2009
10 min read
This article covers:

Basics of a 3D composition in Papervision3D
Building your first application

Basics of a 3D scene in Papervision3D

Before we're ready to write code for Papervision3D, we need to know a little bit more about how 3D works in Papervision3D. The following section is meant to give you a better understanding of the code that you will write. Each of these concepts relates directly to classes that are used by Papervision3D, and all the object types described are active elements in a 3D scene. Let's have a look at each of these objects:

Scene

The scene is the entire composition of 3D objects in a 3D space. Think of it as your stage in Flash with three axes—x, y, and z. Each object that you want to be visible should be added to the scene. If you don't add objects to the scene, they will not appear on your screen.

Camera

You can think of this as a real camera somewhere in 3D space, recording activity inside the scene. The camera defines the point of view from which you are viewing the scene. Because a camera in 3D is not a visible object, it is not part of the scene and you don't need to add it. Like a real camera you can, for example, zoom and focus. Cameras in 3D space can usually do more than real cameras, though. For example, in 3D you can exclude objects from rendering when they are too close to or too far away from the camera. The camera is able to ignore objects that are outside a certain, defined range. This is done for performance reasons: objects that are too far away, or even behind the camera, don't need to be recorded, saving a lot of calculation for objects in positions where the camera simply can't see them.

Viewport

A viewport is a container sprite on your stage that shows the output of what the camera sees. In the illustrative object overview, this has been compared with the lens of the camera. The lens is our window onto the 3D scene. We can make this window small and see just a small part of the scene, or make it big and see a lot. This works exactly as a real window in a wall—making it bigger will not affect how the world outside looks, but it will affect how much we can see of this world. The following illustration contains the visualization of a relatively big viewport on the left, along with a relatively small and wide viewport on the right. The thick dark outlines represent the viewports. In the right image, you can clearly see how the window to the 3D world works and how changing its size does not stretch the view of the 3D object in the scene; it only affects how much we see of it.

3D objects

A shape in 3D space is called a 3D object, or DisplayObject3D in Papervision3D. You can think of 3D objects as more advanced display objects, just like sprites and movie clips. The difference is that a DisplayObject3D has a third axis, so we can place it anywhere in 3D space and rotate it over each of the three axes.

Material

A material is the texture that is printed on an object. When an object doesn't have a material applied, it will be invisible. There is a variety of materials available for you to use. For example, a very simple material is a plain color; a more advanced example of a material is a live streaming video.

Render engine

A render engine is like a rolling camera. As long as you trigger the render engine, it outputs the information of the scene, as recorded by the camera, to the viewport. When you stop triggering the render engine, your viewport will not show any new image from the scene; it keeps showing the last rendered image.
Rendering is actually the most intensive task for your computer, as it needs to recalculate each object placed inside your scene and output the result to the viewport.

Left-handed Cartesian coordinate system

Flash, as well as Papervision3D, makes use of the Cartesian coordinate system. In Flash, a regular visual object can be positioned on the stage by entering an x and y value. The object is positioned according to these values, relative to the upper left corner of the stage. A higher x value moves the object to the right, whereas a higher y value moves it downward. The coordinate system in Papervision3D works essentially the same way, except for the following two differences:

Flash uses Cartesian coordinates on two axes, whereas Papervision3D makes use of them on three axes
The y-axis in Flash is inverted compared to the y-axis in Papervision3D

In Papervision3D, we want to position objects not only on the x and y axes, but also on an extra axis, in order to position objects in depth. This third axis is called the z-axis. In Flash, objects are positioned relative to the upper left corner, which is the 0, 0 coordinate. This is called the center point, or the origin. Predictably, the position of the origin in 3D space is located at 0, 0, 0. The three axes and the origin point are illustrated by the following image. The plus and minus signs illustrate whether the coordinate values increase or decrease in the direction away from the origin. Notice that the y-axis is inverted compared to 2D coordinates in Flash: when you want to position an object lower on your screen in Flash, you need to give it a higher y position. This is how 2D coordinates on a computer screen work. The figure shows the y-axis according to the coordinate system as it is used by Papervision3D.

Creating a basic class for Papervision3D

In the previous sections you've learned what a basic scene is made of, and how you can create your own classes. Now that we have gone through all this theory, it's time to get our hands dirty by writing some code. Start your favorite authoring tool. The first application we'll build is a 3D scene containing a sphere that rotates over its y-axis. A sphere is basically a "3D ball" and is built into Papervision3D as one of the default primitives.

The basic document class

First, have a look at the document class that defines the basic structure for the rotating sphere application.

package
{
  import flash.display.Sprite;

  import org.papervision3d.cameras.Camera3D;
  import org.papervision3d.objects.primitives.Sphere;
  import org.papervision3d.render.BasicRenderEngine;
  import org.papervision3d.scenes.Scene3D;
  import org.papervision3d.view.Viewport3D;

  public class FirstApplication extends Sprite
  {
    private var scene:Scene3D;
    private var viewport:Viewport3D;
    private var camera:Camera3D;
    private var renderEngine:BasicRenderEngine;
    private var sphere:Sphere;

    public function FirstApplication()
    {
      scene = new Scene3D();
      camera = new Camera3D();

      sphere = new Sphere();
      scene.addChild(sphere);

      viewport = new Viewport3D();
      addChild(viewport);

      renderEngine = new BasicRenderEngine();
      renderEngine.renderScene(scene, camera, viewport);
    }
  }
}

Let's have a look at this in detail. While we do that, you can create a new project in Flash, Flex Builder, or Flash Builder with a document class called FirstApplication. If you're using Flex Builder or Flash Builder, don't forget to configure the path to the Papervision3D source.
We know that we need a scene, a camera, a viewport, a 3D object with a material, and a render engine in order to create rendered output on the screen. Remember that the document class needs to extend Sprite. To make all these classes available inside our class, we need to import them first.

import flash.display.Sprite;
import org.papervision3d.cameras.Camera3D;
import org.papervision3d.objects.primitives.Sphere;
import org.papervision3d.render.BasicRenderEngine;
import org.papervision3d.scenes.Scene3D;
import org.papervision3d.view.Viewport3D;

Now that the imports are set, we define the properties of the class.

private var scene:Scene3D;
private var viewport:Viewport3D;
private var camera:Camera3D;
private var renderEngine:BasicRenderEngine;
private var sphere:Sphere;

We only define the properties and their class types here, without assigning any values. We'll do that inside the constructor. Let's first create the constructor.

public function FirstApplication()
{
}

We start by creating a scene inside the constructor. A scene is needed as the holder for our 3D objects, and it's very easy to create one.

scene = new Scene3D();

All you have to do is create an instance of the Scene3D class, which doesn't take any parameters. Next, we add a Camera3D, which is just as easy as creating the scene.

camera = new Camera3D();

A camera can take parameters when you create a new instance; however, the default values will do for now. The scene and camera are useless as long as there are no 3D objects in the scene to film. For this example a sphere will be used, which is one of the built-in 3D objects.

sphere = new Sphere();

A sphere, like a camera, takes parameters when you instantiate it. However, when you do not pass any parameters, a default sphere is created. Now that we have created our sphere, we need to add it to the scene, otherwise it will not be seen by the camera. This works in exactly the same way as adding regular display objects to the display list in Flash, by using the addChild() method.

scene.addChild(sphere);

That looks familiar, right? Now that we have a scene, a camera, and a sphere, we need to set up the window to the 3D scene, so we can see what is going on in there. We define our viewport and add it to the stage to make it visible.

viewport = new Viewport3D();
addChild(viewport);

When creating a new viewport, you can pass optional parameters to it, but the default parameters are again fine for now. We are now just one step away from publishing our application. First we need to get our camera rolling. This is done by defining the render engine, which renders the current view of the camera to the viewport.

renderEngine = new BasicRenderEngine();
renderEngine.renderScene(scene, camera, viewport);

Now that the render engine is defined and an instruction to render the scene has been added, we are ready to publish this project. This should result in the following image of triangles that together form something like a circle:

Case sensitivity
Pay attention to uppercase and lowercase in the examples, as not following these conventions might result in an error. A good example of this is the definition and instantiation of a sphere, as we did in our FirstApplication:

var sphere:Sphere = new Sphere();

Here we have defined a class property called sphere (lowercase) of the Sphere (uppercase) type, containing an instance of Sphere (uppercase). Giving a variable or property the same name as the defined object type will often cause problems.
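As noted in the walkthrough, the sphere and viewport constructors accept optional parameters. A hedged sketch, assuming the usual Papervision3D 2 signatures Sphere(material, radius, segmentsW, segmentsH) and Viewport3D(width, height), with arbitrary values:

import org.papervision3d.materials.ColorMaterial;

// A red sphere with a larger radius and a finer mesh (values are arbitrary)
sphere = new Sphere(new ColorMaterial(0xFF0000), 250, 16, 12);

// A viewport with an explicit size instead of the defaults
viewport = new Viewport3D(800, 600);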
The result of these few lines of code may not look very impressive yet, but in fact this is the beginning of something that will soon become exciting, and that could never be accomplished this easily with regular Flash. When you have managed to compile this application and see the triangles, you've made it through the hardest part of this example. Let's continue building your first application and see how we can add the illusion of 3D to our object, which still looks very 2D.
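That next step is covered further on, but as a preview, a minimal sketch of the usual approach is to re-render on every frame while nudging the sphere's rotationY; the handler name onRenderTick is invented here:

import flash.events.Event;

// In the constructor, after the first renderScene() call:
addEventListener(Event.ENTER_FRAME, onRenderTick);

// Called once per frame: rotate the sphere over its y-axis and re-render
private function onRenderTick(event:Event):void
{
  sphere.rotationY += 2;
  renderEngine.renderScene(scene, camera, viewport);
}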
Blender 3D: Interview with Allan Brito

Packt
23 Oct 2009
8 min read
Meeba Abraham: Hi Allan, thank you for talking to us today. Why don't you tell us a bit about yourself and your background? How did you start working with Blender?

Allan Brito: Hi, and thanks for this opportunity to talk a bit about myself. Well, I'm a 29-year-old architect from Brazil. After my graduation, I started working on visualization projects, mostly in 3ds Max, for a small studio here in Brazil. After two years I started teaching 3D modeling and animation, and I fell in love with teaching. I still teach 3D animation and modeling at a college here. With the help of my teaching experience, I began writing manuals and tutorials about 3D animation. Eventually, I decided to write a book about Blender in Portuguese, and the book was a huge success in Brazil. Currently I'm working on the third edition of this book. With the book, I also needed a way to keep in touch with the readers and discuss Blender and 3D-related subjects, so I started a web site (www.allanbrito.com), where I regularly write short articles and tutorials about Blender and how it compares with other 3D packages. Today the web site has grown considerably, and I continue to update it with content on Blender and other 3D software tools.

Meeba Abraham: How long have you been working with it?

Allan Brito: My first contact with Blender 3D was in 2003. I was invited by a friend to check out a great open source software package for 3D visualization. I was really impressed by Blender, its potential, and how lightweight the software is. Coming from a 3ds Max background, it was a bit hard to get used to the interface and the keyboard shortcuts, but after a few weeks I started getting used to it. After the learning process, I started to use Blender as the main tool for my projects. I can't say that it was easy to use at first, but with time Blender simply grew on me and became my main tool.

Meeba Abraham: Can you tell us about some of the key features of Blender that make it a viable alternative to other professional 3D software?

Allan Brito: There are many features in Blender that other professional 3D suites do not have. For instance, the integrated game engine, which allows you to produce interactive animations, is just awesome! For 3D modeling, Blender has a sculpt module where artists can create 3D models by sculpting geometry, in a way similar to what sculpting tools such as ZBrush and Mudbox provide. The node editor in Blender is also an incredible tool for creating materials and for post-production.

Post-production is a powerful capability in Blender. There is a sequence editor that works like a video editor: you can cut, join, and post-process videos in it. For instance, an animator can create a full animation without needing any other software. Recently, the Big Buck Bunny project introduced some great tools for character animation in Blender, like better fur, a new and improved particle system, new and improved UV mapping, and much more. I strongly recommend a visit to www.blender.org to check out the full list of features, which is huge.

Meeba Abraham: Why is Blender an important 3D application that an aspiring graphics artist should consider using?

Allan Brito: I believe that Blender has a great set of features that can help a graphic artist create some impressive artwork. Why Blender? I guess the best answer is: why not? All the features offered by other 3D animation software are also available in Blender, such as character animation, physics simulation, particle animation, and much more.
And since Blender is free software, you won't have to buy a license or be bound to a single workstation. Besides the features, I believe in the community nature of Blender. If you feel a tool or feature is missing, just make a suggestion to the community, or make the feature yourself!

Meeba Abraham: Over the years, Blender has grown in popularity. What, in your opinion, are some of the main reasons for this?

Allan Brito: In the last few years Blender has gained many features that only the so-called high-end and expensive 3D software had. This puts the spotlight right on Blender, and some experienced professionals are using Blender today to take a look at these advanced features, and they like it. Besides the features, the Blender Foundation is doing a great job of supporting Blender and promoting it outside the community. They organize conferences and projects to show the potential of Blender as a 3D animation package. The last open movie—Big Buck Bunny—supported by the community is a great example of that.

Meeba Abraham: Since Blender is an open source 3D application, the Blender community plays an important role in its growth. Can you shed some light on the Blender community? How have they helped to popularize Blender?

Allan Brito: What can I say? The Blender community is great and has been supporting the development of Blender for a long time. The last open movie is a great example of what this community can do. Big Buck Bunny is a project created mainly by the Blender community. Artists could buy the DVD of the animation even before the project started. And when the animation was finished, all Blender users could buy a shiny DVD of the animation that contains tutorials and all the source files of the animation. Imagine if Pixar gave away all the production files of their animations! And even if you don't want to buy the DVD, you can still download all of the content for free from the project web site, www.bigbuckbunny.org. This is a great example of the Blender community spirit and how much support Blender gets from around the world.

Meeba Abraham: You have just authored a book on Blender; how did you find the experience? Is this the first book you've written?

Allan Brito: Writing a book on Blender was quite a challenge for me. Even with the experience of writing tutorials and short articles about Blender, writing a book was not easy! But after a few weeks, I was able to write the chapters naturally and almost on schedule. The biggest challenge for me was to write about a subject that no one else had written about yet. In my first book, "Blender 3D – Guia do Usuário", written in Brazilian Portuguese, the challenge was even bigger. When I started writing that book, there wasn't any up-to-date documentation on Blender's features, so I had to do a lot of research myself. With this book, the challenge again was to write about something that no one else had ever written about. Even with a few short tutorials around, there wasn't any full set of procedures or tips for working with architectural visualization in Blender. The experience was great, and I hope this is just the first book in a long series! I have a few ideas for more books about Blender, and I'm already working on some of them.

Meeba Abraham: How do you anticipate it will help the Blender community? Is it different from other Blender books?

Allan Brito: I believe that a lot of users want to use Blender for architectural visualization but have only found tutorials and books on character modeling and animation.
This book was written with architectural visualization in mind, so every example and Blender tool is described specifically through architectural examples.

Meeba Abraham: You make regular contributions to www.BlenderNation.com. How did you get involved with the site, and what does it offer to the community?

Allan Brito: BlenderNation is the most comprehensive web site for Blender-related news. So if anyone is curious about what's going on in the Blender community, the first place to look, after the Foundation web site, is BlenderNation. My involvement with BlenderNation began with my writing articles about Blender in Brazilian Portuguese for my own web site (www.allanbrito.com). A few months later, I was invited by Bart Veldhuizen to write a few tutorials, and I guess they liked my work! After that, I kept writing articles for BlenderNation as a contributing editor. And I have to say that it's really great to be a part of it and to keep the Blender community updated. The experience with BlenderNation and the books inspired me to start a new project called Blender 3D Architect (www.blender3darchitect.com), where I write articles on how to use Blender for architectural visualization, along with tips and tutorials.

Meeba Abraham: Thanks for your time and contributions!
Textures in Blender

Packt
22 Oct 2009
10 min read
Procedural Textures vs. Bitmap Textures

Blender has basically two types of textures: procedural textures and bitmap textures. Each one has both positive and negative points; which one is best will depend on your project needs.

Procedural: This kind of texture is generated by the software at rendering time, just like vector lines. This means that it doesn't depend on any type of image file. The best thing about this type of texture is that it is resolution independent, so we can render it at high resolutions with minimal loss of quality. The negative point is that it's harder to achieve realistic textures with it.

Bitmap: To use this kind of texture, we need an image file, such as a JPEG, PNG, or TGA file. The good thing about these textures is that we can achieve very realistic materials with them quickly. On the other hand, we must find the texture file before using it. And there is more: if you are creating a high-resolution render, the texture file must be big.

Texture Library

Do you remember the way we organized materials? We can do exactly the same thing with textures. Besides setting names and storing the Blender files to import and use again later, collecting bitmap textures is another important point. Even if you don't start right away, it's important to know where to look for textures. Here is a small list of websites that provide free texture downloads:

http://www.blender-textures.org
http://www.cgtextures.com
http://blender-archi.tuxfamily.org/textures

Applying Textures

To use a texture, we must apply a material to an object, and then use the texture with this material; we always use the texture inside a material. For instance, to make a plane that simulates a marble floor, we have to use a texture and set up how the surface reacts to light, which can give the surface a proper marble look. To do that, we use the texture panel, which is located right next to the materials button. We can use a keyboard shortcut to open this panel: just hit F6. There is also a way to add a texture from the material panel, through a menu called Texture, but the best way to get all the options is to add the texture on the texture panel.

On this panel, we see a lot of buttons, which represent the texture channels. Each of these channels can hold a texture, and the final texture will be a mix of all the channels. If we have a texture in channel 1 and another texture in channel 2, these textures will be blended and represented in the material. Before adding a new texture, we must select a channel by clicking on it. Usually the first channel is selected, but if you want to use another one, just click on that channel. When the channel is selected, just click the Add New button to add a new texture.

The texture controls are very similar to the material controls. We can set a name for the texture at the top, or erase the texture if we don't want it anymore. With the selector, we can also choose a previously created texture; just click and select.

Now comes the fun part. Having added a texture, we have to choose a texture type. To do that, we click on the texture type combo box. There are a lot of texture types, but most of them are procedural and we won't use them much. The only texture type that isn't procedural is the Image type. We can use textures like Clouds and Wood to create some effects and give surfaces a more complex look, or even create a grass texture with some dirt on it.
Most of the time, though, the texture type we will be using is the Image type. Each texture has its own set of parameters that determine how it will look on the object. If we add a Wood texture, its configuration parameters are shown on the right; if we choose Clouds as the texture type, the parameters shown on the right will be completely different. The Image texture type is no different; it has its own setup. This is the control panel:

To show how to set up a texture, let's use an image file that represents a wood floor, and a plane. We can apply the texture to this plane and set up how it's going to look, testing all the parameters. The first thing to do is assign a material to the plane, and add a texture to this material, choosing the Image option as the texture type. It will show the configuration options for this kind of texture. To apply the image as a texture to the plane, just click the Load button on the Image menu. When we hit this button, we are able to select the image file. Locate the image file, and the texture will be applied.

If we want more control over how this texture is organized and placed on the plane, we need to learn how the controls work. Every time you make a change to the setup of a texture, the change is shown in the preview window; use it often to evaluate your changes. Here is a list of what some of the buttons do for the texture:

UseAlpha: If the texture has an alpha channel, we have to press this button for Blender to calculate the channel. An image has an alpha channel when some kind of transparency is stored in the image. For instance, a .png file with a transparent background has an alpha channel. We can use this to create a texture with a logo for a bottle, or to add an image of a tree or a person to a plane.

Rot90: With this option we can rotate the texture by 90 degrees.

Repeat: Every texture must be distributed over the object's surface, and repeating the texture in lines and columns is the default way to do that.

Extended: If this button is pressed, the texture will be adjusted to fit the whole surface area of the object.

Clip: With this option, the texture will be cropped, and we will be able to show only a part of it. To adjust which parts of the texture are displayed, use the Min/Max X/Y options.

Xrepeat / Yrepeat: These options determine how many times a texture is repeated when the Repeat option is turned on.

Normal Map: If the texture will be used to create normal maps, press this button. These are textures used to change the face normals of an object.

Still: With this button selected, we declare that the image used as a texture is a still image. This option is selected by default.

Movie: If you have to use a movie file as a texture, press this button. This is very useful if you need to make something like a theater projection screen or a TV screen.

Sequence: We can use a sequence of images as a texture too; just press this button. It works the same way as with a movie file.

There are a few more parameters, like the Reload button: if your texture file changes in any way, press this button for Blender to pick up the changes. The X button erases this texture; use it if you need to select another image file. When we add a texture to any material, an external link to this file is created. This link can be absolute or relative.
When we add a texture called "wood.png" located at the root of your main hard disk (C:), a link to this texture is created as "C:\wood.png", so every time you open the file, the software will look for the texture at that exact place. This is an absolute link. We can use a relative link as well: when we add a texture located in the same folder as our scene, a relative link is created. Every time we use an absolute link and have to move the ".blend" file to another computer, the texture file must go with it. To embed the image file in the .blend file, just press the gift package icon. To save all the textures used in a scene, open the File menu and use the Pack Data option. It will embed all the texture files in the source .blend file.

Mapping

Every time we add a texture to an object, we must choose a mapping type to set how the texture is applied to the object. For instance, if we have a wall and apply a wood texture, it must be placed like wallpaper. But for cylindrical or spherical objects, or even walls, we have to set things up in a way that makes the texture adapt to the topology of the surface, to avoid effects such as a stretched texture. To set this up, we use the mapping options located in the Map Input menu. In this menu, we can choose between four basic mapping types: Cube, Sphere, Flat, and Tube. If you have a wall, choose the option that matches the topology of the model. In this case, the best choice is Cube. Another important option here is the UV button, which allows us to use another very powerful type of texturing, based on UV mapping.

Normal Map

This is a special and useful type of texture that can change the normals of surfaces. If we have a floor and a texture of ceramic tiles, the surface can be represented with the smaller details of that tiling using this kind of map. It's almost like modeling the tiles, but everything is created using just a normal map. To use this kind of texture, we must turn on the Nor button in the Map To menu. When this button is turned on, we can use the Nor slider to determine the intensity of the normal displacement. It works based on the pixel color of the texture: with white pixels, the normals are not affected, and with black pixels, the normals are fully translated.

If you want to optimize the normal mapping, using a texture specially prepared for it is highly recommended. Some texture libraries even have this type of normal map ready for use; they can be called bump maps too. Here is an example of how we can use them. We take a stone texture and a tiled texture with a white background and black lines. The stone texture is applied to the floor, and the tiled texture is used to create the tiling for the floor. The setup for this is really simple: just apply the texture in a lower channel, and turn off the Col button for that channel. Turn on the Nor button, and this texture will affect only the normals and not the material color. Any image can be used as a normal map, but we will always get better results with a grayscale image prepared to be used as a normal map. Now, just set the Nor intensity with the slider, and see the render.

Turn on positive and turn on negative
Some of the buttons in the Map To menu can be turned on with positive and negative values. For instance, the Nor option can be turned on with one click. If we click on it again, the Nor text turns yellow. This means that Nor is inverted, with negative values.
Some other buttons may present the same option.
Modeling Furniture in Blender

Packt
22 Oct 2009
10 min read
Create Models or Use a Library?

There are two possibilities when working with furniture: we can create new models, or use pre-made models from a library. The question is: when should we use each approach? Some people say that using a pre-made model is not a very professional thing to do, but what they forget to mention is that most projects have tight deadlines, and we need a quick modeling process to be ready on time. So, what's most important for professionals? Getting things done, or telling the client that all the models were created just for their project? Of course, the deadline is the most important thing, and your clients normally won't mind if you use pre-made models. They probably won't even notice. So don't be ashamed to use pre-made models; they won't make your projects any less professional. It's even recommended to use these models to speed up the process and allow you to spend more time on lighting or texturing.

Is there any situation that demands the creation of a furniture model from scratch? Well, there are some. First, if you can't find the model in any library that you know, then you will have to create it from scratch. If you are working with an architect who designs the spaces and the furniture as well, you will probably have to model the furniture too, since it won't be available in any public library. Any project that deals with customized furniture will require that we model the furniture ourselves.

Create your own library
A good practice for anyone doing architectural visualization is to collect a lot of 3D models from public libraries for use in future projects. Keep these models for later, but don't forget to check whether the author has released the models with no restrictions for commercial use. Otherwise, you must get their permission to use them. If you want to create your own library with no restrictions, why not create your own models? This can be a good exercise: take a few examples and start creating some furniture. In time, you will have a good number of models.

How to Get Started?

In most cases, we will have to start from scratch, with no blueprints available. The only references we will have are photos, either provided by our clients or found in web resources. If you have the time, visit a real store and take some pictures and measurements on your own. Sometimes these stores will give you fliers and brochures, especially if you work with architecture. In time, you will gather a lot of good reference material, some of it with measurements. If you don't know where to get started, let me point out some great web resources:

http://www.e-interiors.net
http://resources.blogoscopia.com
http://blender-archi.tuxfamily.org/Models
http://www.katorlegaz.com/
http://sketchup.google.com/3dwarehouse

The first link has a lot of reference images classified by furniture type and designer, and sometimes they even provide free 3D models. Most models there are saved in the DXF or 3DS file formats.

Appending Models

Before we start to model, let's see how we can bring a model from an external library into Blender. The process is very simple: open the File menu and access the Append or Link option. There is a shortcut for this too - just press SHIFT+F1 to call the same function. With this option, we have to select a file that is already in the Blender file format; this option won't import files in other formats.
When we select a file, a list of the elements available in that particular file is displayed, so we can select what we want to import. In most cases, the models will be stored under Object. When we click the Object option, all of the objects available in the file are listed. If you know the name of the object that you want to import, just select the name and click Load Library. The object will be loaded into our scene. Here, we have two options to handle this object: Append or Link.

Append: If we choose this option, the object will be merged into our current scene.

Link: With this option, an external link to the object's file will be created. Any modification to the original file will be reflected in our current scene.

Which is the best method to use? It depends on whether we want to track the modifications applied to our furniture models. Using the Link method is a great way of keeping the furniture updated, because every modification to the original file is reflected immediately in the scene in which the model is placed. However, we will have to take the original file along with the scene file every time we need to put our scene on another computer; they always have to go together. If you choose the Append option, things will be a bit simpler, because the object is incorporated into the scene file, and we won't have to worry about moving the furniture file along with the scene. Always use the Append option when you want to use furniture, or any other model, saved in another Blender file. To use a furniture model saved in a file type other than ".blend", we have to use the Import option.

Importing Models

To import a model, the process is very simple. We open the File menu and select Import, and then select the proper file type from the list. The best file type, and the most common for furniture blocks, is the 3DS file format, which belongs to the old 3D Studio application. There are some other good formats that work well with Blender, such as OBJ and LWO. The 3DS file format can store lights, and it works well with Blender. The only thing we have to take extra care about is that most imported models come with triangular faces, which are a bit harder to edit. But if you don't need to make any modifications to the model, this won't be a problem.

Append or Import?
Just to make things clearer: if you download a furniture model from a web site and it's saved in the Blender native file format (.blend), you should append the model. If you download or get a furniture model in any file format other than ".blend", you will have to import it. Since most models aren't saved in the Blender native file format, we can safely say that almost all furniture models you find will require an import action to be placed in your scenes.

Modeling a Chair

Let's start with something simple, such as a chair. Even though it's a simple model, it will help us deal with smaller dimensions and details. Here is an image of the model:

What's the main objective of this modeling? We have to create this chair with the minimum use of faces and vertices. A good amount of detail can be left to textures, and it's always a good choice to use a lower number of vertices and faces in a model. For a single model it won't matter much, but with a large number of chairs, such as in a theater room, it can make a difference in render time. Let's get started with a simple cube. Select this cube, and change the work mode to Edit. Select all vertices and press the W key.
This will open the Specials menu; choose Subdivide, just once, from this menu. This will create new vertices and edges. Once these new vertices have been created, as shown in the image on the left below, press the A key to deselect all vertices. Now, select the vertices on the right, using the B key. Remember to change the view mode to Wireframe before using the B key; otherwise, we won't be able to select the vertices behind the visible faces. When these vertices are selected, press the X key and choose Vertices to erase only the selected vertices. Using the CTRL+R key, add a new edge loop to the model, as shown in the following image:

The next step is to change the scale of our model. Rotate the view to see the model in perspective. Select all vertices, press the S key, and immediately after press the Z key. This constrains the scaling to the Z axis only. Now, select the vertices identified in the following image and erase them using the X key. Change the selection mode to Edges, and select the edges identified in the following image. With the edges selected, press the E key to extrude them. With the new faces created, we can now add some detail to the model. Select only the top edge of the previously created faces, and move this edge down just a bit. This will add a slight slope to the seat.

Now we can move on to the next extrusion, which must be made from the selected edges identified in the following image. I'm not using any kind of measurement for this example, but if you prefer to work only with real measurements, remember to hold the CTRL key every time a new extrusion or edge is moved; this way, all transformations will snap to the grid lines. For this model, I'm not using vertex snapping. With the new faces created, select just the two edges identified in the following image. Extrude these edges until they reach the other side of the base model, holding the CTRL key while you extrude to help with precision. If you want to remove duplicated vertices right away, select all vertices, press the W key, and choose Remove Doubles to erase any duplicated vertices.

Select the edges identified in the image to keep adding more parts to the chair. Extrude the edges three times until you have the same structure shown here. Now, we have to close the top with a face. To do that, select all four vertices on the top; when the vertices are selected, press the F key to create a new face. The next step is to select the small side edges to create some detail. Select just one edge, working from bottom to top, and move it just a bit. Repeat this operation with the other edges until the edges are positioned as in the following image. The basic shape of our chair has now been created, and we can make some adjustments to improve the overall proportions. Select all edges or vertices on the left side, and move them a bit to the left; this will make the model wider.

Did you notice that we have modeled only half a chair? Now we can create the other half using the Mirror modifier. Add the modifier, and choose the right axis to make a perfect copy. If the center point of the model has been moved, you might need to edit the model to create a perfect mirrored match. Don't worry if you have moved the model by accident - this can happen sometimes. Along with the Mirror modifier, add a Subsurf modifier too. With the Subsurf modifier, we realize that this model needs a new edge loop on the left side. Just press CTRL+R and add a new loop, as in the following image.