
How-To Tutorials - Game Development

368 Articles
Ogre 3D FAQs

Packt
14 Mar 2011
8 min read
From OGRE 3D 1.7 Beginner's Guide: Create real-time 3D applications using OGRE 3D from scratch.

Q: What is Ogre 3D?

A: Creating 3D scenes and worlds is an interesting and challenging problem, but the results are hugely rewarding and the process to get there can be a lot of fun. Ogre 3D is one of the biggest open source 3D render engines; it helps you create your own scenes and worlds and lets you interact freely with them.

Q: What are the system requirements for Ogre 3D?

A: You need a compiler to build your applications, and your computer should have a graphics card with 3D capabilities, ideally one that supports DirectX 9.0.

Q: From where can I download the Ogre 3D software?

A: Ogre 3D is a cross-platform render engine, so there are different packages for the different platforms. The following are the steps to download and install the Ogre 3D SDK:

1. Go to http://www.ogre3d.org/download/sdk
2. Download the appropriate package.
3. Copy the installer to the directory you would like your OgreSDK to be placed in.
4. Double-click on the installer; this will start a self-extractor.
5. You should now have a new folder in your directory with a name similar to OgreSDK_vc9_v1-7-1. Open this folder.

Q: Which are the different versions of the Ogre 3D SDK?

A: Ogre supports many different platforms, and because of this, there are a lot of different packages we can download. Ogre 3D has several builds for Windows, one for Mac OS X, and one Ubuntu package. There are also packages for MinGW and for the iPhone. If you like, you can download the source code and build Ogre 3D yourself. If you want to use another operating system, you can look at the Ogre 3D wiki at http://www.ogre3d.org/wiki, which contains detailed tutorials on how to set up your development environment for many different platforms.

Q: What do you mean by a scene graph?

A: A scene graph is one of the most used concepts in graphics programming. Simply put, it's a way to store information about a scene. A scene graph has a root and is organized like a tree. The important thing about a scene graph is that each node's transformation is relative to its parent: if we modify the orientation of the parent, the children are affected by the change as well.

Q: What are spotlights?

A: Spotlights are just like flashlights in their effect. They have a position and a direction in which they illuminate the scene. The direction was the first thing we set after creating the light; it simply defines where the spotlight is pointed. The next two parameters we set were the inner and outer angles of the spotlight's cone. The inner part of the cone illuminates the area with the full power of the light source's color; the outer part uses less power to light the illuminated objects. This is done to emulate the effect of a real flashlight.

Q: What is the difference between frame-based and time-based movement?

A: With frame-based movement, the entity is moved the same distance each frame; with time-based movement, the entity is moved the same distance each second, regardless of the frame rate.
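As a rough illustration of time-based movement, here is a minimal C++ sketch; it is not taken from the book, and assumes a scene node obtained elsewhere:

```cpp
#include <Ogre.h>

// Time-based movement: scale the step by the elapsed frame time, so the
// node covers `speed` units per second at any frame rate. Call this once
// per frame, for example passing evt.timeSinceLastFrame from a FrameListener.
void moveTimeBased(Ogre::SceneNode* node, Ogre::Real secondsSinceLastFrame)
{
    const Ogre::Real speed = 10.0f; // units per second
    node->translate(Ogre::Vector3(0, 0, speed * secondsSinceLastFrame));
}
```

With frame-based movement, by contrast, the translation would be a fixed constant per call, so the entity would speed up or slow down with the frame rate.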
Q: What is a window handle and how is it used by our application and the operating system?

A: A window handle is simply a number that is used as an identifier for a certain window. This number is created by the operating system, and each window has a unique handle. The input system needs this handle because without it, it couldn't get the input events. Ogre 3D creates a window for us, so to get the window handle, we need to ask for it with the following line:

```cpp
win->getCustomAttribute("WINDOW", &windowHnd);
```

Q: What does a scene manager do?

A: A scene manager does a lot of things, which becomes obvious when we take a look at the documentation: there are lots of functions that start with create, destroy, get, set, and has. One important task the scene manager fulfills is the management of objects, which can be scene nodes, entities, lights, or many other object types that Ogre 3D has. The scene manager acts as a factory for these objects and also destroys them. Ogre 3D works on the principle that he who creates an object also destroys it. Every time we want an entity or scene node deleted, we must go through the scene manager; otherwise, Ogre 3D might try to free the same memory later, which might result in an ugly application crash. Besides object management, it manages the scene, as its name suggests. This can include optimizing the scene and calculating the position of each object in the scene for rendering, and it also implements efficient culling algorithms.

Q: Which three functions does the FrameListener interface offer, and at which point is each of them called?

A: A FrameListener is based on the observer pattern. We can add a class instance which inherits from the Ogre::FrameListener interface to our Ogre 3D root instance using the addFrameListener() method of Ogre::Root. Once added, our class gets notified when certain events happen. The three functions the interface offers are:

- frameStarted(), which is called before the frame is rendered
- frameRenderingQueued(), which is called after the frame is rendered but before the buffers are swapped
- frameEnded(), which is called after the current frame has been rendered and displayed
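A minimal sketch of such a listener follows; the class name and registration comment are illustrative, not code from the book:

```cpp
#include <Ogre.h>

// A FrameListener stub: Ogre 3D calls each method at the corresponding
// point of every frame. Returning false from any of them stops rendering.
class MyFrameListener : public Ogre::FrameListener
{
public:
    virtual bool frameStarted(const Ogre::FrameEvent& evt)
    {
        return true; // runs before the frame is rendered
    }
    virtual bool frameRenderingQueued(const Ogre::FrameEvent& evt)
    {
        return true; // runs after rendering, before the buffer swap
    }
    virtual bool frameEnded(const Ogre::FrameEvent& evt)
    {
        return true; // runs after the frame has been displayed
    }
};

// Registration with the root instance (assuming root already exists):
//   root->addFrameListener(new MyFrameListener());
```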
Q: What is a particle system?

A: A particle system consists of two to three different constructs: an emitter, a particle, and an optional affector. The most important of the three is the particle itself, as the name suggests. A particle displays a color or texture using a quad or the point render capability of the graphics card; when a particle uses a quad, the quad is always rotated to face the camera. Each particle has a set of parameters, including a time to live, a direction, and a velocity. There are a lot of other parameters, but these three are the most important to the concept. The time to live parameter controls the life and death of a particle: normally, a particle doesn't live for more than a few seconds before it is destroyed. This effect can be seen in the demo when we look up at the smoke cone; there is a point where the smoke vanishes, because for those particles the time to live counter reached zero and they were destroyed. An emitter creates a predefined number of particles per second and can be seen as the source of the particles. Affectors, on the other hand, don't create particles but change some of their parameters; an affector could change the direction, velocity, or color of the particles created by the emitter.

Q: Which add-ons are available for Ogre 3D? Where can I get them?

A: The following are some of the add-ons available for Ogre 3D:

- Hydrax is an add-on that adds the capability of rendering pretty water scenes to Ogre 3D. With this add-on, water can be added to a scene, and a lot of different settings are available, such as setting the depth of the water, adding foam effects, underwater light rays, and so on. The add-on can be found at http://www.ogre3d.org/tikiwiki/Hydrax.
- Caelum is another add-on, which introduces sky rendering with day and night cycles to Ogre 3D. It renders the sun and moon correctly using a date and time. It also renders weather effects like snow or rain and a complex cloud simulation to make the sky look as real as possible. The wiki site for this add-on is http://www.ogre3d.org/tikiwiki/Caelum.
- Particle Universe, a commercial add-on, adds a new particle system to Ogre 3D, which allows many more effects than the normal Ogre 3D particle system does. It also comes with a Particle Editor, allowing artists to create particles in a separate application; the programmer can then load the created particle script. This plugin can be found at http://www.ogre3d.org/tikiwiki/Particle+Universe+plugin.

Summary

In this article we took a look at some of the most frequently asked questions on Ogre 3D. The article Common Mistakes: Ogre Wiki would be helpful for further queries pertaining to Ogre 3D.

Further resources on this subject:

- Starting Ogre 3D [Article]
- Installation of Ogre 3D [Article]
- Materials with Ogre 3D [Article]
- The Ogre Scene Graph [Article]
- OGRE 3D 1.7 Beginner's Guide [Book]

Away3D 3.6: Applying Light and Pixel Bender materials

Packt
09 Mar 2011
4 min read
From Away3D 3.6 Essentials: Take Flash to the next dimension by creating detailed, animated, and interactive 3D worlds with Away3D.

This article will be easier to follow if you refer to the previous articles in the series:

- Away3D 3.6: Applying Animated and Composite materials
- Materials, Lights and Shading Techniques with Away3D 3.6
- Away3D 3.6: Applying Basic and Bitmap Materials
- Models and Animations with Away3D 3.6

Light materials

Light materials can be illuminated by an external light source. There are three different types of lights that can be applied to these materials: ambient, point, and directional. Remember that these materials will not necessarily recognize each type of light, or more than one light source; the table under the Lights and materials section of the book lists which light sources can be applied to which materials.

WhiteShadingBitmapMaterial

The WhiteShadingBitmapMaterial class uses flat shading to apply lighting over a bitmap texture. As the class name suggests, the lighting is always white in color, ignoring the color of the source light.

```actionscript
protected function applyWhiteShadingBitmapMaterial():void
{
    initSphere();
    initPointLight();
    materialText.text = "WhiteShadingBitmapMaterial";
    var newMaterial:WhiteShadingBitmapMaterial =
        new WhiteShadingBitmapMaterial(Cast.bitmap(EarthDiffuse));
    currentPrimitive.material = newMaterial;
}
```

The WhiteShadingBitmapMaterial class extends the BitmapMaterial class, so the init object parameters listed for BitmapMaterial are also valid for WhiteShadingBitmapMaterial.

ShadingColorMaterial

The ShadingColorMaterial class uses flat shading to apply lighting over a solid base color.

```actionscript
protected function applyShadingColorMaterial():void
{
    initSphere();
    initPointLight();
    materialText.text = "ShadingColorMaterial";
    var newMaterial:ShadingColorMaterial =
        new ShadingColorMaterial(Cast.trycolor("deepskyblue"));
    currentPrimitive.material = newMaterial;
}
```

The ShadingColorMaterial class extends the ColorMaterial class, so the init object parameters listed for ColorMaterial are also valid for ShadingColorMaterial. The color parameter can accept an int or String value; however, due to a bug in the ColorMaterial class, only an int value will work correctly. In the example above, we have manually converted the color represented by the string deepskyblue into an int with the trycolor() function from the Cast class.

PhongBitmapMaterial

The PhongBitmapMaterial uses phong shading to apply lighting over a TransformBitmapMaterial base material.

```actionscript
protected function applyPhongBitmapMaterial():void
{
    initSphere();
    initDirectionalLight();
    materialText.text = "PhongBitmapMaterial";
    var newMaterial:PhongBitmapMaterial =
        new PhongBitmapMaterial(Cast.bitmap(EarthDiffuse));
    currentPrimitive.material = newMaterial;
}
```

PhongBitmapMaterial is a composite material that passes the init object to a contained instance of the TransformBitmapMaterial class, so the init object parameters listed for TransformBitmapMaterial are also valid for PhongBitmapMaterial.
PhongColorMaterial

The PhongColorMaterial uses phong shading to apply lighting over a solid color base material.

```actionscript
protected function applyPhongColorMaterial():void
{
    initSphere();
    initDirectionalLight();
    materialText.text = "PhongColorMaterial";
    var newMaterial:PhongColorMaterial = new PhongColorMaterial("deepskyblue");
    currentPrimitive.material = newMaterial;
}
```

PhongMovieMaterial

The PhongMovieMaterial uses phong shading to apply lighting over an animated MovieMaterial base material.

```actionscript
protected function applyPhongMovieMaterial():void
{
    initSphere();
    initDirectionalLight();
    materialText.text = "PhongMovieMaterial";
    var newMaterial:PhongMovieMaterial = new PhongMovieMaterial(new Bear());
    currentPrimitive.material = newMaterial;
}
```

PhongMovieMaterial is a composite material that passes the init object to a contained instance of the MovieMaterial class, so the init object parameters listed for MovieMaterial are also valid for PhongMovieMaterial.

Dot3BitmapMaterial

The Dot3BitmapMaterial uses normal mapping to add depth to a 3D object.

```actionscript
protected function applyDot3BitmapMaterial():void
{
    initSphere();
    initDirectionalLight();
    materialText.text = "Dot3BitmapMaterial";
    var newMaterial:Dot3BitmapMaterial = new Dot3BitmapMaterial(
        Cast.bitmap(EarthDiffuse),
        Cast.bitmap(EarthNormal)
    );
    currentPrimitive.material = newMaterial;
}
```

Dot3BitmapMaterial is a composite material that passes the init object to a contained instance of the BitmapMaterial class, so the init object parameters listed for BitmapMaterial are also valid for Dot3BitmapMaterial.

Away3D 3.6: Applying Animated and Composite materials

Packt
04 Mar 2011
3 min read
From Away3D 3.6 Essentials: Take Flash to the next dimension by creating detailed, animated, and interactive 3D worlds with Away3D.

Animated materials

As mentioned above, a number of materials can be used to display animations on the surface of a 3D object. These animations are usually movies that have been encoded into a SWF file. You can also display an interactive SWF file, like a form, on the surface of a 3D object.

MovieMaterial

The MovieMaterial displays the output of a Sprite object, which can be animated. The sprite usually originates from another SWF file, which in this case we have embedded and referenced via the Bear class. A new instance of the Bear class is then passed to the MovieMaterial constructor.

```actionscript
protected function applyMovieMaterial():void
{
    initCube();
    materialText.text = "MovieMaterial";
    var newMaterial:MovieMaterial = new MovieMaterial(new Bear());
    currentPrimitive.material = newMaterial;
}
```

The MovieMaterial class extends the TransformBitmapMaterial class, so the init object parameters listed for TransformBitmapMaterial are also valid for MovieMaterial.

AnimatedBitmapMaterial

The AnimatedBitmapMaterial class displays the frames of a MovieClip object. In order to increase performance, it first renders each frame of the supplied MovieClip into a bitmap. These bitmaps are stored in a cache, which increases playback performance at the cost of additional memory. Because of this memory overhead, the AnimatedBitmapMaterial cannot be used to display movie clips longer than two seconds; if you pass a longer movie clip, an exception will be thrown.

The MovieClip object passed to the AnimatedBitmapMaterial constructor usually originates from another SWF file. This source SWF file needs to be implemented in the ActionScript Virtual Machine 2 (AVM2) format, which is the format used by Flash Player 9 and above. This is an important point, because a large number of video conversion tools output AVM1 SWF files. If you need to display a SWF movie in AVM1 format, use the MovieMaterial class instead. If you try to use an AVM1 SWF file with the AnimatedBitmapMaterial class, an exception similar to the following will be thrown:

```
TypeError: Error #1034: Type Coercion failed: cannot convert
flash.display::AVM1Movie@51e8e51 to flash.display.MovieClip.
```

FFmpeg is a free, cross-platform tool that can be used to convert video files into AVM2 SWF files. Precompiled Windows binaries can be downloaded from http://sourceforge.net/projects/mplayer-win32/files/FFmpeg/. The following command will convert a WMV video into a two-second AVM2 SWF file with a resolution of 320 x 240 and no audio:

```
ffmpeg -i Butterfly.wmv -t 2 -s 320x240 -an -f avm2 Butterfly.SWF
```

```actionscript
protected function applyAnimatedBitmapMaterial():void
{
    initCube();
    materialText.text = "AnimatedBitmapMaterial";
    var newMaterial:AnimatedBitmapMaterial =
        new AnimatedBitmapMaterial(new Butterfly());
    currentPrimitive.material = newMaterial;
}
```

The AnimatedBitmapMaterial class extends the TransformBitmapMaterial class, so the init object parameters listed for TransformBitmapMaterial are also valid for AnimatedBitmapMaterial.

Interactive MovieMaterial

By setting the interactive parameter to true, a MovieMaterial object can pass mouse events to the Sprite object it is displaying.
This allows you to interact with the material as if it were added directly to the Flash stage, even while it is wrapped around a 3D object.

```actionscript
protected function applyInteractiveMovieMaterial():void
{
    initCube();
    materialText.text = "MovieMaterial - Interactive";
    var newMaterial:MovieMaterial = new MovieMaterial(
        new InteractiveTexture(),
        { interactive: true, smooth: true }
    );
    currentPrimitive.material = newMaterial;
}
```

Refer to the table given earlier for the MovieMaterial class for the list of constructor parameters.

Animating in Panda3D

Packt
01 Mar 2011
8 min read
From Panda3D 1.6 Game Engine Beginner's Guide: a step-by-step, tutorial-focused guide to building a finished game with this 3D rendering and game development framework.

Actors and Animations

An Actor is a kind of object in Panda3D that adds more functionality to a static model. Actors can include joints within them. These joints have parts of the model tied to them and are rotated and repositioned by animations to make the model move and change. Actors are stored in .egg and .bam files, just like models.

Animation files include information on the position and rotation of joints at specific frames in the animation. They tell the Actor how to posture itself over the course of the animation. These files are also stored in .egg and .bam files.

Time for action – loading Actors and Animations

Let's load up an Actor with an animation and start it playing to get a feel for how this works:

1. Open a blank document in Notepad++ and save it as Anim_01.py in the Chapter09 folder.
2. We need a few imports to start with. Put these lines at the top of the file:

```python
import direct.directbase.DirectStart
from pandac.PandaModules import *
from direct.actor.Actor import Actor
```

3. We won't need a lot of code for our class' __init__ method, so let's just plow through it here:

```python
class World:
    def __init__(self):
        base.disableMouse()
        base.camera.setPos(0, -5, 1)
        self.setupLight()
        self.kid = Actor("../Models/Kid.egg",
                         {"Walk": "../Animations/Walk.egg"})
        self.kid.reparentTo(render)
        self.kid.loop("Walk")
        self.kid.setH(180)
```

4. The next thing we want to do is steal our setupLight() method from the Track class and paste it into this class:

```python
    def setupLight(self):
        primeL = DirectionalLight("prime")
        primeL.setColor(VBase4(.6, .6, .6, 1))
        self.dirLight = render.attachNewNode(primeL)
        self.dirLight.setHpr(45, -60, 0)
        render.setLight(self.dirLight)

        ambL = AmbientLight("amb")
        ambL.setColor(VBase4(.2, .2, .2, 1))
        self.ambLight = render.attachNewNode(ambL)
        render.setLight(self.ambLight)
        return
```

5. Lastly, we need to instantiate the World class and call the run() method:

```python
w = World()
run()
```

6. Save the file and run it from the command prompt to see our loaded model with an animation playing on it.

What just happened?

Now we have an animated Actor in our scene, slowly looping through a walk animation. We made that happen with only three lines of code:

```python
self.kid = Actor("../Models/Kid.egg",
                 {"Walk": "../Animations/Walk.egg"})
self.kid.reparentTo(render)
self.kid.loop("Walk")
```

The first line creates an instance of the Actor class. Unlike with models, we don't need to use a method of loader. The Actor class constructor takes two arguments: the first is the filename for the model that will be loaded (this file may or may not contain animations in it); the second is for loading additional animations from separate files. It's a dictionary of animation names and the files they are contained in. The names in the dictionary don't need to correspond to anything; they can be any string.
```python
myActor = Actor(modelPath,
                {"NameForAnim1": anim1Path,
                 "NameForAnim2": anim2Path})
```

The names we give animations when the Actor is created are important because we use those names to control the animations. For instance, the last line calls the method loop() with the name of the walking animation as its argument.

If the reference to the Actor is removed, the animations will be lost. Make sure not to remove the reference to the Actor until both the Actor and its animations are no longer needed.

Controlling animations

Since we're talking about the loop() method, let's start discussing some of the different controls for playing and stopping animations. There are four basic methods we can use:

- play("AnimName"): This method plays the animation once from beginning to end.
- loop("AnimName"): This method is similar to play, but the animation doesn't stop when it's over; it replays again from the beginning.
- stop() or stop("AnimName"): If called without an argument, this method stops all the animations currently playing on the Actor; if called with an argument, it only stops the named animation. Note that the Actor will remain in whatever posture it is in when this method is called.
- pose("AnimName", FrameNumber): Places the Actor in the pose dictated by the supplied frame without playing the animation.

We have some more advanced options as well. Firstly, we can provide optional fromFrame and toFrame arguments to play or loop to restrict the animation to specific frames:

```python
myActor.play("AnimName", fromFrame=fromFrame, toFrame=toFrame)
```

We can provide both arguments, or just one of them. For the loop() method, there is also the optional argument restart, which can be set to 0 or 1. It defaults to 1, which means the animation restarts from the beginning; if given a 0, it will start looping from the current frame.

We can also use the getNumFrames("AnimName") and getCurrentFrame("AnimName") methods to get more information about a given animation. The getCurrentAnim() method returns a string that tells us which animation is currently playing on the Actor.

The final method in our list of basic animation controls sets the speed of the animation:

```python
myActor.setPlayRate(1.5, "AnimName")
```

The setPlayRate() method takes two arguments. The first is the new play rate, expressed as a multiplier of the original frame rate. If we feed in .5, the animation will play half as fast; if we feed in 2, it will play twice as fast. If we feed in -1, the animation will play at its normal speed, but in reverse.

Have a go hero – basic animation controls

Experiment with the various animation control methods we've discussed to get a feel for how they work. Load the Stand and Thoughtful animations from the animations folder as well, and use player input or delayed tasks to switch between animations and change frame rates. Once we're comfortable with what we've gone over so far, we'll move on.

Animation blending

Actors aren't limited to playing a single animation at a time. Panda3D is advanced enough to offer us a very handy piece of functionality called blending. To explain blending, it's important to understand that an animation is really a series of offsets to the basic pose of the model. They aren't absolutes; they are changes from the original. With blending turned on, Panda3D can combine these offsets.

Time for action – blending two animations

We'll blend two animations together to see how this works:

1. Open Anim_01.py in the Chapter09 folder.
2. We need to load a second animation to be able to blend. Change the line where we create our Actor to look like the following code:

```python
self.kid = Actor("../Models/Kid.egg",
                 {"Walk": "../Animations/Walk.egg",
                  "Thoughtful": "../Animations/Thoughtful.egg"})
```

3. Now, we just need to add this code above the line where we start looping the Walk animation:

```python
self.kid.enableBlend()
self.kid.setControlEffect("Walk", 1)
self.kid.setControlEffect("Thoughtful", 1)
```

4. Resave the file as Anim_02.py and run it from the command prompt.

What just happened?

Our Actor is now performing both animations to their full extent at the same time. This is possible because we made the call to the self.kid.enableBlend() method and then set the amount of effect each animation would have on the model with the self.kid.setControlEffect() method. We can turn off blending later on by using the self.kid.disableBlend() method, which returns the Actor to the state where playing or looping a new animation stops any previous animations.

Using the setControlEffect() method, we can alter how much each animation controls the model. The numeric argument we pass to setControlEffect() represents the percentage of the animation's offset that will be applied, with 1 being 100%, 0.5 being 50%, and so on.

When blending animations together, the look of the final result depends a great deal on the model and animations being used; much of the work needed to achieve a good result falls to the artist. Blending works well for transitioning between animations, and in that case it can be handy to use tasks to dynamically alter the effect animations have on the model over time.

Honestly, though, the result we got with blending is pretty unpleasant. Our model is hardly walking at all, and he looks like he has a nervous twitch. This is because both animations are affecting the entire model at full strength, so the Walk and Thoughtful animations are fighting for control over the arms, legs, and everything else, and what we end up with is a combination of both animations' offsets.

Furthermore, it's important to understand that when blending is enabled, every animation with a control effect higher than 0 will always affect the model, even if the animation isn't currently playing. The only way to remove an animation's influence is to set its control effect to 0. This obviously causes problems when we want to play an animation that moves the character's legs and another that moves his arms at the same time, without having them interfere with each other. For that, we have to use subparts.

Installing Panda3D

Packt
11 Feb 2011
4 min read
Getting started with Panda3D installation packages

The kind folks who produce Panda3D have made it very easy to get Panda3D up and working. You don't need to worry about any compiling, library linking, or other difficult, multi-step processes. The Panda3D website provides executable files that take care of all the work for you. These files even install the version of Python they need to operate correctly, so you don't need to go elsewhere for it.

Time for action – downloading and installing Panda3D

I know what you're thinking: "Less talk, more action!" Here are the step-by-step instructions for installing Panda3D:

1. Navigate your web browser to www.panda3d.org. Under the Downloads option, you'll see a link labeled SDK. Click it.
2. If you are using Windows, scroll down the page and you'll find a section titled Download other versions. Find the link to Panda3D SDK 1.6.2 and click it.
3. If you aren't using Windows, click on the platform you are using (Mac, Linux, or any other OS). That will take you to a page that has the downloads for that platform. Scroll down to the Download other versions section and find the link to Panda3D SDK 1.6.2, as before.
4. When the download is complete, run the file.
5. Click Next to continue and then accept the terms. After that, you'll be prompted about where you want to install Panda3D. The default location is just fine. Click the Install button to continue.
6. Wait for the progress bar to fill up. When it's done, you'll see another prompt. This step really isn't necessary; just click No and move on.
7. When you have finished the installation, you can verify that it's working by going to Start Menu | All Programs | Panda3D 1.6.2 | Sample Programs | Ball in Maze | Run Ball in Maze. A window will open, showing the Ball in Maze sample game, where you tilt a maze to make a ball roll around while trying to avoid the holes.

What just happened?

You may be wondering why we skipped a part of the installation during step 6. That step of the process caches some of the assets, like 3D models, that come with Panda3D. Essentially, by spending a few minutes caching these files now, the sample programs that come with Panda3D will load a few seconds faster the first time we run them, that's all.

Now that we've got Panda3D up and running, let's get ourselves an advanced text editor to do our coding in.

Switching to an advanced text editor

The next thing we need is Notepad++. Why, you ask? Well, to code with Python all you really need is a text editor, like the Notepad that comes with Windows XP. After typing your code you just have to save the file with the .py extension. Notepad itself is kind of dull, though, and it doesn't have many features to make coding easier.

Notepad++ is a text editor very similar to Notepad. It can open pretty much any text file, and it comes with a pile of features to make coding easier. To highlight some fan favorites, it provides language mark-up, a Find and Replace feature, and file tabs to organize multiple open files. The language mark-up changes the color and fonts of specific parts of your code to help you visually understand and organize it. With Find and Replace you can easily change a large number of variable names and quickly update code. File tabbing keeps all of your open code files in one window and makes it easy to switch back and forth between them.

Creating and Warping 3D Text with Away3D 3.6

Packt
11 Feb 2011
7 min read
From Away3D 3.6 Essentials.

The external library, swfvector, is contained in the wumedia package. More information about the swfvector library can be found at http://code.google.com/p/swfvector/. This library was not developed as part of the Away3D engine, but has been integrated since versions 2.4 and 3.4 to enable Away3D to create and display text 3D objects within the scene.

Embedding fonts

Creating a text 3D object in Away3D requires a source SWF file with an embedded font. To accommodate this, we will create a very simple application using the Fonts class below. This class embeds a single TrueType font called Vera Sans from the Vera.ttf file. When compiled, the resulting SWF file can then be referenced by our Away3D application, allowing the embedded font to be accessed.

When embedding fonts using the Flex 4 SDK, you may need to set the embedAsCFF property to false, like:

```actionscript
[Embed(mimeType="application/x-font", source="Vera.ttf",
       fontName="Vera Sans", embedAsCFF=false)]
```

This is due to the new way fonts can be embedded with the latest versions of the Flex SDK. You can find more information on the embedAsCFF property at http://help.adobe.com/en_US/flex/using/WS2db454920e96a9e51e63e3d11c0bf6320a-7fea.html.

```actionscript
package
{
    import flash.display.Sprite;

    public class Fonts extends Sprite
    {
        [Embed(mimeType="application/x-font", source="Vera.ttf",
               fontName="Vera Sans")]
        public var VeraSans:Class;
    }
}
```

The font used here is Bitstream Vera, which can be freely distributed, and can be obtained from http://www.gnome.org/fonts/. However, not all fonts can be freely redistributed, so be mindful of the copyright or license restrictions that may be imposed by a particular font.

Displaying text in the scene

Text 3D objects are represented by the TextField3D class, from the away3d.primitives package. Creating a text 3D object requires two steps:

1. Extracting the fonts that were embedded inside a separate SWF file.
2. Creating a new TextField3D object.

Let's create an application called FontDemo that creates a 3D text field and adds it to the scene.

```actionscript
package
{
```

We import the TextField3D class, making it available within our application.

```actionscript
    import away3d.primitives.TextField3D;
```

The VectorText class will be used to extract the fonts from the embedded SWF file.

```actionscript
    import wumedia.vector.VectorText;

    public class FontDemo extends Away3DTemplate
    {
```

The Fonts.swf file was created by compiling the Fonts class above. We want to embed this SWF file as raw data, so we specify the MIME type to be application/octet-stream.

```actionscript
        [Embed(source="Fonts.swf", mimeType="application/octet-stream")]
        protected var Fonts:Class;

        public function FontDemo()
        {
            super();
        }

        protected override function initEngine():void
        {
            super.initEngine();
```

Before any TextField3D objects can be created, we need to extract the fonts from the embedded SWF file. This is done by calling the static extractFont() function in the VectorText class and passing in a new instance of the embedded SWF file. Because we specified the MIME type of the embedded file to be application/octet-stream, a new instance of the class is created as a ByteArray.

```actionscript
            VectorText.extractFont(new Fonts());
        }

        protected override function initScene():void
        {
            super.initScene();
            this.camera.z = 0;
```

Here we create the new instance of the TextField3D class. The first parameter is the font name, which corresponds to the font name included in the embedded SWF file. The TextField3D constructor also takes an init object.
```actionscript
            var text:TextField3D = new TextField3D("Vera Sans", {
                text: "Away3D Essentials",
                align: VectorText.CENTER,
                z: 300
            });
            scene.addChild(text);
        }
    }
}
```

When the application is run, the scene contains a single 3D object that has been created to spell out the words "Away3D Essentials", formatted using the supplied font. At this point, the text 3D object can be transformed and interacted with just like any other 3D object.

3D text materials

You may be aware of applying bitmap materials to the surface of a 3D object according to its UV coordinates. The default UV coordinates defined by a TextField3D object generally do not allow bitmap materials to be applied in a useful manner. However, simple colored materials like WireframeMaterial, WireColorMaterial, and ColorMaterial can be applied to a TextField3D object.

Extruding 3D text

By default, a text 3D object has no depth (although it is visible from both sides). One of the extrusion classes, TextExtrusion, can be used to create an additional 3D object that takes the shape of a text 3D object and extends it into the third dimension. When combined, the TextExtrusion and TextField3D objects can be used to create the appearance of a solid block of text. The FontExtrusionDemo class in the following code snippet gives an example of this process:

```actionscript
package
{
    import away3d.containers.ObjectContainer3D;
    import away3d.extrusions.TextExtrusion;
    import away3d.primitives.TextField3D;
    import flash.events.Event;
    import wumedia.vector.VectorText;

    public class FontExtrusionDemo extends Away3DTemplate
    {
        [Embed(source="Fonts.swf", mimeType="application/octet-stream")]
        protected var Fonts:Class;
```

The TextField3D object and the extrusion 3D object are both added as children of an ObjectContainer3D object, referenced by the container property.

```actionscript
        protected var container:ObjectContainer3D;
```

The text property will reference the TextField3D object used to display the 3D text.

```actionscript
        protected var text:TextField3D;
```

The extrusion property will reference the TextExtrusion object used to give the 3D text some depth.

```actionscript
        protected var extrusion:TextExtrusion;

        public function FontExtrusionDemo()
        {
            super();
        }

        protected override function initEngine():void
        {
            super.initEngine();
            this.camera.z = 0;
            VectorText.extractFont(new Fonts());
        }

        protected override function initScene():void
        {
            super.initScene();
            text = new TextField3D("Vera Sans", {
                text: "Away3D Essentials",
                align: VectorText.CENTER
            });
```

The TextExtrusion constructor takes a reference to the TextField3D object (or any other Mesh object). It also accepts an init object, which we have used to specify the depth of the 3D text and to make both sides of the extruded mesh visible.

```actionscript
            extrusion = new TextExtrusion(text, {
                depth: 10,
                bothsides: true
            });
```

The ObjectContainer3D object is created, supplying the TextField3D and TextExtrusion 3D objects created above as children. The initial position of the ObjectContainer3D object is set to 300 units down the positive end of the Z-axis.

```actionscript
            container = new ObjectContainer3D(text, extrusion, { z: 300 });
```

The container is then added as a child of the scene.

```actionscript
            scene.addChild(container);
        }

        protected override function onEnterFrame(event:Event):void
        {
            super.onEnterFrame(event);
```

The container is slowly rotated around its Y-axis by modifying the rotationY property in every frame.
In previous examples, we have simply incremented the rotation property without any regard for when the value became larger than 360 degrees; after all, rotating a 3D object by 180 or 540 degrees has the same overall effect. But in this case, we do want to keep the value of the rotationY property between 0 and 360, so we can easily test whether the rotation is within a given range. To do this, we use the mod (%) operator.

```actionscript
            container.rotationY = (container.rotationY + 1) % 360;
```

Z-sorting issues can arise due to the fact that the TextExtrusion and TextField3D objects are so closely aligned. This issue results in parts of the TextField3D or TextExtrusion 3D objects showing through where it is obvious that they should be hidden. To solve this problem, we can force the sorting order of the 3D objects. Here we assign a positive value to the TextField3D object's screenZOffset property to force it to be drawn behind the TextExtrusion object when the container has been rotated between 90 and 270 degrees around the Y-axis, which is when the TextField3D object is at the back of the scene. Otherwise, the TextField3D object is drawn in front by assigning a negative value to the screenZOffset property.

```actionscript
            if (container.rotationY > 90 && container.rotationY < 270)
                text.screenZOffset = 10;
            else
                text.screenZOffset = -10;
        }
    }
}
```
Installation of Ogre 3D

Packt
09 Feb 2011
6 min read
From OGRE 3D 1.7 Beginner's Guide.

Downloading and installing Ogre 3D

The first step we need to take is to install and configure Ogre 3D.

Time for action – downloading and installing Ogre 3D

We are going to download the Ogre 3D SDK and install it so that we can work with it later:

1. Go to http://www.ogre3d.org/download/sdk.
2. Download the appropriate package. If you need help picking the right package, take a look at the next What just happened section.
3. Copy the installer to the directory you would like your OgreSDK to be placed in.
4. Double-click on the installer; this will start a self-extractor.
5. You should now have a new folder in your directory with a name similar to OgreSDK_vc9_v1-7-1. Open this folder.

What just happened?

We just downloaded the appropriate Ogre 3D SDK for our system. Ogre 3D is a cross-platform render engine, so there are a lot of different packages for the different platforms. After downloading, we extracted the Ogre 3D SDK.

Different versions of the Ogre 3D SDK

Ogre supports many different platforms, and because of this, there are a lot of different packages we can download. Ogre 3D has several builds for Windows, one for Mac OS X, and one Ubuntu package. There are also packages for MinGW and for the iPhone. If you like, you can download the source code and build Ogre 3D yourself. This article will focus on the Windows pre-built SDK and how to configure your development environment. If you want to use another operating system, you can look at the Ogre 3D wiki, which can be found at http://www.ogre3d.org/wiki. The wiki contains detailed tutorials on how to set up your development environment for many different platforms.

Exploring the SDK

Before we begin building the samples which come with the SDK, let's take a look at the SDK itself. We will look at the structure the SDK has on a Windows platform; on Linux or Mac OS the structure might look different. First, we open the bin folder. There we will see two folders, namely, debug and release. The same is true for the lib directory. The reason is that the Ogre 3D SDK comes with debug and release builds of its libraries and dynamic-linked/shared libraries. This makes it possible to use the debug build during development, so that we can debug our project. When we finish the project, we link it against the release build to get the full performance of Ogre 3D.

When we open either the debug or release folder, we will see many DLL files, some cfg files, and two executables (exe). The executables are for content creators to update their content files to the new Ogre version, and are therefore not relevant for us.

OgreMain.dll is the most important DLL; it is the compiled Ogre 3D source code that we will load later. All DLLs with Plugin_ at the start of their name are Ogre 3D plugins we can use with Ogre 3D. Ogre 3D plugins are dynamic libraries which add new functionality to Ogre 3D using the interfaces Ogre 3D offers. This can be practically anything, but often it is used to add features like better particle systems or new scene managers. The Ogre 3D community has created many more plugins, most of which can be found in the wiki; the SDK simply includes the most generally used ones.

The DLLs with RenderSystem_ at the start of their name are, not surprisingly, wrappers for the different render systems that Ogre 3D supports, in this case Direct3D9 and OpenGL. In addition to these two systems, Ogre 3D also has Direct3D10, Direct3D11, and OpenGL ES (OpenGL for Embedded Systems) render systems.
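As a rough illustration of how these render system plugins are used, here is a minimal C++ sketch, not from the book, loading one plugin in code rather than via Plugins.cfg:

```cpp
#include <Ogre.h>

// Hypothetical helper: load the OpenGL render system plugin and select it.
// The plugin name matches the RenderSystem_GL DLL described above.
void pickOpenGLRenderSystem(Ogre::Root* root)
{
    root->loadPlugin("RenderSystem_GL"); // loads RenderSystem_GL.dll
    Ogre::RenderSystem* rs =
        root->getRenderSystemByName("OpenGL Rendering Subsystem");
    root->setRenderSystem(rs);
}
```

Normally, though, the plugins listed in Plugins.cfg are loaded for you at startup, as described next.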
Besides the executables and the DLLs, we have the cfg files. cfg files are config files that Ogre 3D can load at startup. Plugins.cfg simply lists all plugins Ogre 3D should load at startup. These are typically the Direct3D and OpenGL render systems and some additional scene managers. quakemap.cfg is a config file needed when loading a level in the Quake 3 map format; we don't need this file, but one of the samples does.

resources.cfg contains a list of all resources, like a 3D mesh, a texture, or an animation, which Ogre 3D should load during startup. Ogre 3D can load resources from the file system or from a ZIP file. When we look at resources.cfg, we will see lines like the following:

```
Zip=../../media/packs/SdkTrays.zip
FileSystem=../../media/thumbnails
```

Zip= means that the resource is in a ZIP file, and FileSystem= means that we want to load the contents of a folder. resources.cfg makes it easy to load new resources or change the path to resources, so it is often used to load resources, especially by the Ogre samples.

Speaking of samples, the last cfg file in the folder is samples.cfg. We don't need to use this cfg file; it's simply a list of all the Ogre samples the SampleBrowser should load. But we don't have a SampleBrowser yet, so let's build one.

The Ogre 3D samples

Ogre 3D comes with a lot of samples, which show all the different render effects and techniques Ogre 3D can do. Before we start working on our own application, we will take a look at the samples to get a first impression of Ogre's capabilities.

Time for action – building the Ogre 3D samples

To get a first impression of what Ogre 3D can do, we will build the samples and take a look at them:

1. Go to the Ogre3D folder.
2. Open the Ogre3d.sln solution file.
3. Right-click on the solution and select Build Solution. Visual Studio should now start building the samples. This might take some time, so get yourself a cup of tea until the compile process is finished.
4. If everything went well, go into the Ogre3D/bin folder.
5. Execute SampleBrowser.exe.
6. Try the different samples to see all the nice features Ogre 3D offers.

What just happened?

We built the Ogre 3D samples using our own Ogre 3D SDK. After this, we are sure to have a working copy of Ogre 3D.
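Tying back to the resources.cfg format shown above, here is a minimal C++ sketch, not from the book, of how an application can parse that file and register each entry, much as the samples do at startup:

```cpp
#include <Ogre.h>

// Parse resources.cfg with Ogre::ConfigFile and register every location
// with the ResourceGroupManager. Assumes Ogre::Root has been created.
void loadResourceLocations()
{
    Ogre::ConfigFile cf;
    cf.load("resources.cfg");

    Ogre::ConfigFile::SectionIterator seci = cf.getSectionIterator();
    while (seci.hasMoreElements())
    {
        Ogre::String section = seci.peekNextKey();
        Ogre::ConfigFile::SettingsMultiMap* settings = seci.getNext();
        for (Ogre::ConfigFile::SettingsMultiMap::iterator it = settings->begin();
             it != settings->end(); ++it)
        {
            // it->first is the location type ("Zip" or "FileSystem");
            // it->second is the path given in the cfg file.
            Ogre::ResourceGroupManager::getSingleton().addResourceLocation(
                it->second, it->first, section);
        }
    }
}
```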

Managing Blender Materials

Packt
09 Feb 2011
15 min read
From Blender 2.5 Materials and Textures Cookbook: over 80 recipes for creating life-like Blender objects, covering surfaces from believable natural materials up to complex simulations such as oceans, smoke, fire, and explosions.

Introduction

Organizing your work as you develop any project will ensure that you achieve your task sooner and more efficiently. How to do this in Blender may not be immediately obvious; however, Blender has a raft of tools that will make your life as a materials creator much easier. This article deals with the techniques that can be used to organize your textures and materials, and thus bring some order to complex tasks.

While Blender can be a very flexible 3D suite, allowing the designer more than a single approach to a simulation, it is better to develop an ordered strategy for your material and texture creation. We will explore several recipes that attempt to show how to control material creation. However, apart from the inbuilt tools, there are several setups that will depend on personal preference. You are therefore encouraged to modify any of these recipes to suit your own approach to organizing material production.

Setting a default scene for materials creation

It's always a good idea to set the initial state of Blender to suit your needs. For us, the primary task is to explore materials and texture creation. When you first install Blender, a default layout will be presented. From here, you can perform most tasks, such as modeling and rendering, as you create your desired objects. We can aid the process of surface creation by improving the lighting in the default setup: adding a second light can give better definition to objects that are rendered.

Getting ready

When you first download Blender, the default factory settings provide a simple cube illuminated by a single light. Even if you have already changed some of these defaults, you will be able to apply the changes suggested in this recipe on top of your personalized settings. So, you can start either with the factory settings or with your own.

How to do it...

1. Start Blender, or select New from the File menu. This will ensure that any previous default settings are loaded.
2. Move the mouse cursor into the main 3D view, press SHIFT+A to bring up the Add menu, and select Lamp of type Hemi.
3. Move and rotate the lamp so that it will illuminate the shaded side of the default cube. Try to adjust its height and distance from the object to be similar to the default lamp's.
4. From the lamp menu, set the Energy value between 0.200 and 0.500. Render a quick scene and adjust as necessary.
5. Move to the Render panel. In the Dimensions tab, select Render Presets and, from the list, select HDTV 1080p. This will give a render size of 1,920 x 1,080 square pixels. However, alter the Resolution percentage slider to 25%.
6. Just below the Aspect Ratio settings are two buttons. Check Border and then Crop. Now, uncheck Border. This might seem strange, but although the Crop checkbox is grayed out, it is still set.
7. Ensure the Anti-Aliasing tab is selected and the sample value below it is set to 8, with the anti-aliasing method set to Mitchell-Netravali. Ensure that Full Sample is NOT set.
8. Under Shading, ensure Textures, Shadows, Ray Tracing, and Color Management are set, while Subsurface Scattering and Environment Map are not.
9. Move down to the Output tab and, from the list of choices, select PNG. You can also change the Compression percentage, from 0% for a losslessly saved image up to 100% for full compression. I usually set it to 0% to produce the clearest images.
10. Finally, in the Performance tab, ensure that Threads is set to Auto-detect.
11. You can save these settings as the default scene by pressing CTRL+U and selecting Save User Settings.

Now, whenever you restart Blender, or select a new scene, you will have a better-lit setup with render settings providing an ideal environment for creating materials.

How it works...

The recipe provides a relatively simple set of changes to the factory default scene. However, they are ideal for materials creation because they make it easier to judge the surface characteristics as you develop a material.

We started by improving the default lighting, adding a second light to give a little more illumination to the shaded side of the objects you will be creating materials and textures for. Being able to set this as the default scene means we don't have to worry about special light setups every time we create a new material. It also helps with consistency, because the levels of light will be very similar between every new material you create. It is not there to provide the finished lighting for every scene you create, but just to give a more even illumination when you test render materials you are developing. What we have produced is a key light and a fill light, which is the minimum in almost any 3D lighting arrangement. The Hemi light offers nice broad illumination but will not cast shadows; this is ideal for a fill light, as it can represent light bounced off ceilings, walls, or the outside world. The Point light source acting as our key light will cast shadows, just as the strongest light in a natural environment would. When you have developed your materials, you can light the actual scene with a more complex or artistic lighting setup if you wish.

For the majority of digital work, we need to use square pixels and a resolution that matches the size we wish to render to. Here, we have set the render size and resolution from the presets to HDTV 1080p. This produces a relatively large render area with square pixels. Square pixels are really important when developing objects or materials for digital work. It's possible to set different aspect ratios that would alter the screen and render proportions, but these are of no value when creating models or developing and placing textures on them. If you eventually want to render out to these non-square pixel ratios, do so only when all your modeling and scene creation is finalized. The Render panel offers several useful presets, based on screen resolution and pixel aspect ratio, to exactly match the desired output. Be careful not to inadvertently select one of the non-square pixel ratios, like HDV 1080p.

In the same step, we set the render resolution down to 25%. This will still give a render size of 480 x 270, which is fine for initial quick renders to check how a material is progressing. You can easily scale that up to 50% or 75% for more detailed renders.
However, these will obviously increase the render times. If you create a border in the camera view, by pressing SHIFT+B and dragging the orange dotted border, Blender will only render what's inside that rectangle. This is why we also checked the Crop checkbox: with it set, Blender automatically crops the rendered image to the border. If you perform a bordered render without Crop set, the unrendered part of the image is filled with black pixels instead. Cropping saves valuable render time and produces smaller image files.

Anti-aliasing

Even if we are rendering to a large size, we should set Blender to anti-alias the resulting render to remove the jagged edges that would otherwise appear. Here, we have set the anti-aliasing method to Mitchell-Netravali. This is probably the best of the available options; it will give very reasonable anti-aliasing at the relatively low setting of 8 without unreasonable render times. For a final render, you might want to consider raising the level to 16.

Turning off unneeded render settings

Subsurface scattering and environment maps are not always required, so they can be turned off in the default scene. They can always be turned on for a particular material simulation that requires them; normally they are not needed, and render times will be reduced by having them off. If you are working on a material that requires environment mapping or subsurface scattering, you can set these and then save your first file of the simulation. Saving a blendfile will save all additional settings as well as objects and materials.

Blender offers an enormous range of output formats for your rendered still or animation masterpieces. You will not need them all, so which should we choose as a default? PNG (Portable Network Graphics) has lossless data compression, as compared to JPG, which is lossy and therefore degrades the picture every time you save. PNG can efficiently compress images without loss of quality, it can handle alpha channels, and it can be read by most web browsers, so it is suitable for the Internet. Because Blender can render an animation as a series of still images, PNG is ideal for producing animations as well: several video editors, including QuickTime, Adobe Premiere, and of course Blender, can take these sequenced PNG images and combine them into movie formats like .mov, .avi, .mp4, .mpeg, and so on. I would therefore suggest that PNG is probably the best all-round image format to set as default. If your primary work is in the game development or print fields, you might want to consider using TGA (Truevision Advanced Raster Graphics Adapter) or TIFF (Tagged Image File Format). However, Windows-based systems will not display thumbnails of TIFF-formatted images at this time.

There's more...

There are other settings that you may want to consider as appropriate in a default scene. You can set the locations of often-used resources from the Blender User Preferences window (CTRL+ALT+U). Under the File menu of this preferences window, you can set file paths for such things as fonts, textures, render output, and so on. Many of these locations will be specific to your operating system and where you choose them to be. The Blender defaults are fine, but if you want to be specific, go to this window and enter your choices. To ensure they are saved as default, click the Save User Defaults button, or press CTRL+U.
Additional settings for default scene

If you have a powerful enough computer system, you might want to consider setting some more advanced options to make your test renders look really special. In some ways, what you will be doing with this recipe is creating a more production-ready material creation environment. However, each render will take longer, and if it is a complex mesh object with transparency and multiple large-scale image textures, you may have to wait several minutes for renders to complete. Although this may not seem a significant disadvantage, the extra render time can build up as you produce multiple renders to test the look of a material simulation. The renders reproduced in this article, and online, were created with these additional settings, and the majority of the development work used them too. While most of the simulations only took a few minutes to render at maximum resolution, one or two took a little longer. If that slows down your material development, just turn some of these additional settings off before your first save of the blendfile. Any settings will be saved with the blendfile.

Getting ready

As we are adding additional settings to the default scene, ensure you have either just started Blender, or selected New from the File menu. This will set Blender back to the default scene, ready for you to append the additional features suggested here.

How to do it...

1. In the 3D window, ensure that the cursor is at the center: SHIFT+S, Cursor to Center.
2. Add a new object of type Plane: SHIFT+A, Mesh, Plane.
3. Scale the plane by a factor of 50. The easiest way to achieve this is to type S, enter 50, and press the ENTER key to confirm.
4. Grab the plane and translate it down 1 Blender unit in the Z direction. Either select the plane, then type G, Z, -1, and ENTER to confirm, or press N to bring up the Properties panel and alter the Z location to -1.000. That ensures the plane is below any object you create from the origin. New objects are created at the cursor position and always extend 1 Blender unit from their own origin, which means they should stand on the ground plane you have just created.
5. Move to the Materials panel and create a new material, naming it ground.
6. Under the Diffuse tab, change its color to R, G, B 1.000, or pure white.
7. In the Specular tab, change the type to WardIso, with an Intensity of 0.250 and a Slope of 0.300.
8. Under the Shadow tab, select Receive and Receive Transparent. That sets up the ground, ready to act as a shadow-receiving backdrop to our material creations. However, we need to set up a better shadow than the one in the default key light setup.
9. Select the key light. From the Light panel, select the Shadow tab and set Ray Shadow, with Sampling set to Adaptive QMC, Samples set to 6, Soft Size 1.000, and Threshold 0.001.
10. Finally, we will set up ambient occlusion to give our models a little more shape. Move to the World panel and select Ambient Occlusion. Set its Factor to 1.30 and its Mix type to Multiply.
11. In the Gathering tab, select Approximate, with Attenuation/Falloff selected and its Strength changed to 0.900. Set Sampling/Passes to 6.
12. Select the Render tab, then save this as the new default setting by pressing CTRL+U (a scripted version of these steps appears at the end of this recipe).

How it works...

The plane is there just so that any shadows will have somewhere to fall. Shadow casting is normally enabled by default once an object has a material assigned. When creating the ground material, we also set the plane to receive transparent shadows.
That means that objects casting shadows with an alpha component, such as windows or transparent materials, will have accurate shadows showing that transparency. We also set up soft raytraced shadows by turning on this feature for the key light. The important setting here is the number of samples. Too low, and the shadow will look fake. Too high, and the render times will become rather lengthy. Setting this to 6 is a good compromise between accuracy and speed.

Finally, we have set up ambient occlusion, which simulates the darkening of shadows in crevices and shaded portions of a model. The higher the Factor level, the darker the ambient occlusion will become. Essentially, the darkening is being multiplied on top of the rendered image, although it can also be set to Add. Full raytraced ambient occlusion can take some time to compute, so it is good news that Blender has an excellent Approximate method, which is very quick. The Attenuation and Passes can be tweaked to give the best balance between accuracy and render time. Too low a setting will produce a spotty darkening that is not very realistic. We set Passes to 6, which is another excellent compromise. Note that raytraced ambient occlusion requires rendering with raytracing enabled; the Approximate gather method does not. To save all these extra settings as the default, we only had to press CTRL+U. Now, whenever you start a new scene, these settings will be pre-set.

There's more...

Occasionally, you may go too far with default settings and find that render times become too long when you only want to check how a material is progressing. Another common problem is that users will sometimes inadvertently save a pre-created scene as the default. If this happens, you can always return to the factory settings by pressing CTRL+N or choosing Load Factory Settings from the File menu. For regular test renders, the default light setup is perfectly adequate. However, for a 'hero' render, I would recommend the following settings:

Lights

Your key light, the one to the right of frame, should be set to Energy 0.500. Under the Light panel, set Shadow to Ray Shadow. Under Sampling, set Adaptive QMC, with Soft Size 5.318 and Samples 6. This will create a nice soft shadow, which is more realistic than the sharp ray shadows produced by the default settings.

Ambient occlusion

Ambient occlusion can produce a nice darkening of the overall illumination around the shaded parts of our renders. It adds a decent approximation of how real light and shadow spread through an environment, giving the render depth. In our example, a Blend sky has been set in the World panel, with the Horizon color R and G set to 0.80 and B set to 0.69. The Zenith color has been set to R 0.69, G 0.75, and B 0.80. Ambient Occlusion has been selected with the following settings: Factor 1.00, and Multiply. Under Gather, Raytrace is selected and Sampling is set to Adaptive QMC, with Samples 24. Threshold and Adapt To Speed are left at their defaults. Under Attenuation, Distance is set to 10.000, with Falloff selected and Strength set to 0.220.
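The scripted sketch promised in the steps above: a minimal bpy version of the ground plane, its material, the key light shadow, and the ambient occlusion setup. It assumes the Blender 2.5-series Python API and the default lamp name 'Lamp'; property names such as use_transparent_shadows and falloff_strength are quoted from memory, so verify them against your build.

    import bpy

    # Ground plane, scaled by 50 and dropped 1 unit below the origin
    bpy.ops.mesh.primitive_plane_add(location=(0.0, 0.0, -1.0))
    plane = bpy.context.object
    plane.scale = (50.0, 50.0, 50.0)

    # White, shadow-receiving ground material with a soft WardIso highlight
    mat = bpy.data.materials.new("ground")
    mat.diffuse_color = (1.0, 1.0, 1.0)
    mat.specular_shader = 'WARDISO'
    mat.specular_intensity = 0.25
    mat.specular_slope = 0.3
    mat.use_shadows = True               # Receive
    mat.use_transparent_shadows = True   # Receive Transparent
    plane.data.materials.append(mat)

    # Key light: soft raytraced shadows (assumes the default lamp, named 'Lamp')
    lamp = bpy.data.lamps['Lamp']
    lamp.shadow_method = 'RAY_SHADOW'
    lamp.shadow_ray_sample_method = 'ADAPTIVE_QMC'
    lamp.shadow_ray_samples = 6
    lamp.shadow_soft_size = 1.0

    # Approximate ambient occlusion, multiplied over the render
    light_settings = bpy.context.scene.world.light_settings
    light_settings.use_ambient_occlusion = True
    light_settings.ao_factor = 1.3
    light_settings.ao_blend_type = 'MULTIPLY'
    light_settings.gather_method = 'APPROXIMATE'
    light_settings.use_falloff = True
    light_settings.falloff_strength = 0.9
    light_settings.passes = 6

As in the recipe, press CTRL+U afterwards to make this the default scene.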


Away3D 3.6: Applying Basic and Bitmap Materials

Packt
07 Feb 2011
7 min read
Away3D 3.6 Essentials: Take Flash to the next dimension by creating detailed, animated, and interactive 3D worlds with Away3D. Create stunning 3D environments with highly detailed textures; animate and transform all types of 3D objects, including 3D text; eliminate the need for expensive hardware with proven Away3D optimization techniques, without compromising on visual appeal. Written in a practical and illustrative style, which will appeal to Away3D beginners and Flash developers alike.

To demonstrate the basic materials available in Away3D, we will create a new demo called MaterialsDemo.

    package
    {

Some primitives show off a material better than others. To accommodate this, we will apply the various materials to the sphere, torus, cube, and plane primitive 3D objects in this demo. All primitives extend the Mesh class, which makes it the logical choice for the type of the variable that will reference instances of all four primitives.

    import away3d.core.base.Mesh;

The Cast class provides a number of handy functions that deal with the casting of objects between types.

    import away3d.core.utils.Cast;

As we saw previously, those materials that can be illuminated support point or directional light sources (and sometimes both). To show off materials that can be illuminated, one of these types of lights will be added to the scene.

    import away3d.lights.DirectionalLight3D;
    import away3d.lights.PointLight3D;

In order to load textures from external image files, we need to import the TextureLoadQueue and TextureLoader classes.

    import away3d.loaders.utils.TextureLoadQueue;
    import away3d.loaders.utils.TextureLoader;

The various material classes demonstrated by the MaterialsDemo class are imported from the away3d.materials package.

    import away3d.materials.AnimatedBitmapMaterial;
    import away3d.materials.BitmapFileMaterial;
    import away3d.materials.BitmapMaterial;
    import away3d.materials.ColorMaterial;
    import away3d.materials.CubicEnvMapPBMaterial;
    import away3d.materials.DepthBitmapMaterial;
    import away3d.materials.Dot3BitmapMaterial;
    import away3d.materials.Dot3BitmapMaterialF10;
    import away3d.materials.EnviroBitmapMaterial;
    import away3d.materials.EnviroColorMaterial;
    import away3d.materials.FresnelPBMaterial;
    import away3d.materials.MovieMaterial;
    import away3d.materials.PhongBitmapMaterial;
    import away3d.materials.PhongColorMaterial;
    import away3d.materials.PhongMovieMaterial;
    import away3d.materials.PhongMultiPassMaterial;
    import away3d.materials.PhongPBMaterial;
    import away3d.materials.ShadingColorMaterial;
    import away3d.materials.TransformBitmapMaterial;
    import away3d.materials.WhiteShadingBitmapMaterial;
    import away3d.materials.WireColorMaterial;
    import away3d.materials.WireframeMaterial;

These materials will all be applied to a number of primitive types, which are all imported from the away3d.primitives package.

    import away3d.primitives.Cube;
    import away3d.primitives.Plane;
    import away3d.primitives.Sphere;
    import away3d.primitives.Torus;

The CubeFaces class defines a number of constants that identify each of the six sides of a cube.

    import away3d.primitives.utils.CubeFaces;

The following Flash classes are used when loading textures from external image files, to handle events, to display a text field on the screen, and to define a position or vector within the scene.
    import flash.geom.Vector3D;
    import flash.net.URLRequest;
    import flash.display.BitmapData;
    import flash.events.Event;
    import flash.events.KeyboardEvent;
    import flash.text.TextField;

The MaterialsDemo class extends the Away3DTemplate class (download code Ch:1).

    public class MaterialsDemo extends Away3DTemplate
    {

One of the ways to manage resources that was discussed in the Resource management section was to embed them. Here, we see how an external JPG image file, referenced by the source parameter, has been embedded using the Embed keyword. Embedding an image file in this way means that instantiating the EarthDiffuse class will result in a Bitmap object populated with the image data contained in the earth_diffuse.jpg file.

    [Embed(source = "earth_diffuse.jpg")]
    protected var EarthDiffuse:Class;

A number of additional images have been embedded in the same way.

    [Embed(source = "earth_normal.jpg")]
    protected var EarthNormal:Class;
    [Embed(source = "earth_specular.jpg")]
    protected var EarthSpecular:Class;
    [Embed(source = "checkerboard.jpg")]
    protected var Checkerboard:Class;
    [Embed(source = "bricks.jpg")]
    protected var Bricks:Class;
    [Embed(source = "marble.jpg")]
    protected var Marble:Class;
    [Embed(source = "water.jpg")]
    protected var Water:Class;
    [Embed(source = "waternormal.jpg")]
    protected var WaterNormal:Class;
    [Embed(source = "spheremap.gif")]
    protected var SphereMap:Class;
    [Embed(source = "skyleft.jpg")]
    protected var Skyleft:Class;
    [Embed(source = "skyfront.jpg")]
    protected var Skyfront:Class;
    [Embed(source = "skyright.jpg")]
    protected var Skyright:Class;
    [Embed(source = "skyback.jpg")]
    protected var Skyback:Class;
    [Embed(source = "skyup.jpg")]
    protected var Skyup:Class;
    [Embed(source = "skydown.jpg")]
    protected var Skydown:Class;

Here we are embedding three SWF files. These are embedded just like the preceding images.

    [Embed(source = "Butterfly.swf")]
    protected var Butterfly:Class;
    [Embed(source = "InteractiveTexture.swf")]
    private var InteractiveTexture:Class;
    [Embed(source = "Bear.swf")]
    private var Bear:Class;

A TextField object is used to display the name of the current material on the screen.

    protected var materialText:TextField;

The currentPrimitive property is used to reference the primitive to which we will apply the various materials.

    protected var currentPrimitive:Mesh;

The directionalLight and pointLight properties each reference a light that is added to the scene to illuminate certain materials.

    protected var directionalLight:DirectionalLight3D;
    protected var pointLight:PointLight3D;

The bounce property is set to true when we want the sphere to bounce along the Z-axis. This bouncing motion will be used to show off the effect of the DepthBitmapMaterial class.

    protected var bounce:Boolean;

The frameCount property maintains a count of the frames that have been rendered while the bounce property is set to true.

    protected var frameCount:int;

The constructor calls the Away3DTemplate constructor, which will initialize the Away3D engine.

    public function MaterialsDemo()
    {
        super();
    }

The removePrimitive() function removes the current primitive 3D object from the scene, in preparation for a new primitive to be created.

    protected function removePrimitive():void
    {
        if (currentPrimitive != null)
        {
            scene.removeChild(currentPrimitive);
            currentPrimitive = null;
        }
    }

The initSphere() function first removes the existing primitive from the scene by calling the removePrimitive() function, and then creates a new sphere primitive and adds it to the scene.
Optionally, it can set the bounce property to true, which indicates that the primitive should bounce along the Z-axis.

    protected function initSphere(bounce:Boolean = false):void
    {
        removePrimitive();
        currentPrimitive = new Sphere();
        scene.addChild(currentPrimitive);
        this.bounce = bounce;
    }

The initTorus(), initCube(), and initPlane() functions all work like the initSphere() function, to add a specific type of primitive to the scene. These functions all set the bounce property to false, as none of the materials that will be applied to these primitives gain anything by having the primitive bounce within the scene.

    protected function initTorus():void
    {
        removePrimitive();
        currentPrimitive = new Torus();
        scene.addChild(currentPrimitive);
        this.bounce = false;
    }

    protected function initCube():void
    {
        removePrimitive();
        currentPrimitive = new Cube( { width: 200, height: 200, depth: 200 } );
        scene.addChild(currentPrimitive);
        this.bounce = false;
    }

    protected function initPlane():void
    {
        removePrimitive();
        currentPrimitive = new Plane( { bothsides: true, width: 200, height: 200, yUp: false } );
        scene.addChild(currentPrimitive);
        this.bounce = false;
    }

The removeLights() function will remove any lights that have been added to the scene, in preparation for a new light to be created.

    protected function removeLights():void
    {
        if (directionalLight != null)
        {
            scene.removeLight(directionalLight);
            directionalLight = null;
        }
        if (pointLight != null)
        {
            scene.removeLight(pointLight);
            pointLight = null;
        }
    }

The initPointLight() and initDirectionalLight() functions each remove any existing lights in the scene by calling the removeLights() function, and then add their specific type of light to the scene. Note that the direction a directional light points in is set to (0, 0, 0) by default, which effectively means the light is not pointing anywhere. If you have a directional light that is not being reflected off the surface of a lit material, leaving the direction property at this default value may be the cause. Here we override the default to make the light point back to the origin.

    protected function initPointLight():void
    {
        removeLights();
        pointLight = new PointLight3D( { x: -300, y: -300, radius: 1000 } );
        scene.addLight(pointLight);
    }

    protected function initDirectionalLight():void
    {
        removeLights();
        directionalLight = new DirectionalLight3D(
            {
                x: 300,
                y: 300,
                direction: new Vector3D(-1, -1, 0)
            } );
        scene.addLight(directionalLight);
    }

The initScene() function has been overridden to call the applyWireColorMaterial() function, which will display a sphere with the WireColorMaterial material applied to it. We also set the position of the camera back to the origin.

    protected override function initScene():void
    {
        super.initScene();
        this.camera.z = 0;
        applyWireColorMaterial();
    }


Implementing multithreaded operations and rendering in OpenSceneGraph

Packt
04 Feb 2011
8 min read
OpenThreads basics

OpenThreads is a lightweight, cross-platform thread API for OSG classes and applications. It supports the fundamental elements required by a multithreaded program, that is, the thread object (OpenThreads::Thread), the mutex for locking data that may be shared by different threads (OpenThreads::Mutex), the barrier (OpenThreads::Barrier), and the condition (OpenThreads::Condition). The latter two are often used for thread synchronization. To create a new thread for certain purposes, we have to derive the OpenThreads::Thread base class and re-implement some of its virtual methods. There are also some global functions for conveniently handling threads and thread attributes, for example:

- The GetNumberOfProcessors() function gets the number of processors available for use.
- The SetProcessorAffinityOfCurrentThread() function sets the processor affinity (that is, which processor is used to execute this thread) of the current thread. It should be called while the thread is running.
- The CurrentThread() static method of OpenThreads::Thread returns a pointer to the currently running thread instance.
- The YieldCurrentThread() static method of OpenThreads::Thread yields the current thread and lets other threads take over control of the processor.
- The microSleep() static method of OpenThreads::Thread makes the current thread sleep for a specified number of microseconds. It can be used in single-threaded applications, too.

Time for action – using a separate data receiver thread

In this example, we will design a new thread with the OpenThreads library and use it to read characters from the standard input. At the same time, the main process, that is, the OSG viewer and rendering backend, will try retrieving the input characters and displaying them on the screen with the osgText library. The entire program can only quit normally when the data thread and the main process are both completed.

Include the necessary headers:

    #include <osg/Geode>
    #include <osgDB/ReadFile>
    #include <osgText/Text>
    #include <osgViewer/Viewer>
    #include <iostream>

Declare our new DataReceiverThread class as being derived from OpenThreads::Thread. Two virtual methods should be implemented to ensure that the thread can work properly: the cancel() method defines the cancelling process of the thread, and the run() method defines what happens from the beginning to the end of the thread's life. We also define a mutex variable for inter-thread synchronization, and make use of the singleton pattern for convenience:

    class DataReceiverThread : public OpenThreads::Thread
    {
    public:
        static DataReceiverThread* instance()
        {
            static DataReceiverThread s_thread;
            return &s_thread;
        }
        virtual int cancel();
        virtual void run();

        void addToContent( int ch );
        bool getContent( std::string& str );

    protected:
        OpenThreads::Mutex _mutex;
        std::string _content;
        bool _done;
        bool _dirty;
    };

The cancelling work is simple: set the variable _done (which is checked repeatedly during the run() implementation) to true, and wait until the thread finishes:

    int DataReceiverThread::cancel()
    {
        _done = true;
        while( isRunning() ) YieldCurrentThread();
        return 0;
    }

The run() method is the core of a thread class. It usually includes a loop in which the actual work is executed. In our data receiver thread, we use std::cin.get() to read characters from the keyboard input and decide if they can be added to the member string _content.
When _done is set to true, the run() method will reach the end of its lifetime, and so will the whole thread:

    void DataReceiverThread::run()
    {
        _done = false;
        _dirty = true;
        do
        {
            YieldCurrentThread();
            int ch = std::cin.get();  // std::cin.get() returns the character as an int
            switch (ch)
            {
            case 0: break;                    // We don't want '\0' to be added
            case 9: _done = true; break;      // ASCII code of Tab = 9
            default: addToContent(ch); break;
            }
        } while( !_done );
    }

Be careful of the std::cin.get() function: it first reads one or more characters from the user input, until the Enter key is pressed and a '\n' is received. Then it picks characters one by one from the buffer, and continues to add them to the member string. When all characters in the buffer have been traversed, it clears the buffer and waits for user input again.

The customized addToContent() method adds a new character to _content. This method is always called in the data receiver thread, so we have to lock the mutex object while changing the _content variable, to prevent other threads and the main process from dirtying it:

    void DataReceiverThread::addToContent( int ch )
    {
        OpenThreads::ScopedLock<OpenThreads::Mutex> lock(_mutex);
        _content += ch;
        _dirty = true;
    }

The customized getContent() method is used to obtain the _content variable and append it to the input string argument. This method, the opposite of the previous addToContent() method, must only be called by the following OSG callback implementation. The scoped locking operation of the mutex object makes the entire work thread-safe, as is done in addToContent():

    bool DataReceiverThread::getContent( std::string& str )
    {
        OpenThreads::ScopedLock<OpenThreads::Mutex> lock(_mutex);
        if ( _dirty )
        {
            str += _content;
            _dirty = false;
            return true;
        }
        return false;
    }

The thread implementation is finished. Now let's go back to rendering. What we want here is a text object that can dynamically change its content according to the string data received from the main process. An update callback of the text object is necessary to realize such functionality. In the virtual update() method of the customized update callback (it is for drawables, so osg::NodeCallback is not needed here), we simply retrieve the osgText::Text object and the receiver thread instance, and then reset the displayed text:

    class UpdateTextCallback : public osg::Drawable::UpdateCallback
    {
    public:
        virtual void update( osg::NodeVisitor* nv, osg::Drawable* drawable )
        {
            osgText::Text* text = static_cast<osgText::Text*>(drawable);
            if ( text )
            {
                std::string str("# ");
                if ( DataReceiverThread::instance()->getContent(str) )
                    text->setText( str );
            }
        }
    };

In the main entry, we first create the osgText::Text drawable and apply a new instance of our text updating callback. The setAxisAlignment() here defines the text as a billboard in the scene, and setDataVariance() ensures that the text object is "dynamic" during updating and drawing. There is also a setInitialBound() method, which accepts an osg::BoundingBox variable as the argument. It forces the definition of the minimum bounding box of the drawable and computes the initial view matrix according to it:

    osg::ref_ptr<osgText::Text> text = new osgText::Text;
    text->setFont( "fonts/arial.ttf" );
    text->setAxisAlignment( osgText::TextBase::SCREEN );
    text->setDataVariance( osg::Object::DYNAMIC );
    text->setInitialBound(
        osg::BoundingBox(osg::Vec3(), osg::Vec3(400.0f, 20.0f, 20.0f)) );
    text->setUpdateCallback( new UpdateTextCallback );

Add the text object to an osg::Geode node and turn off lighting.
Before starting the viewer, we also have to make sure that the scene is rendered in a fixed-size window. That's because we also have to use the console window for keyboard entry:

    osg::ref_ptr<osg::Geode> geode = new osg::Geode;
    geode->addDrawable( text.get() );
    geode->getOrCreateStateSet()->setMode( GL_LIGHTING,
        osg::StateAttribute::OFF );

    osgViewer::Viewer viewer;
    viewer.setSceneData( geode.get() );
    viewer.setUpViewInWindow( 50, 50, 640, 480 );

Start the data receiver thread before the viewer runs, and quit it after that:

    DataReceiverThread::instance()->startThread();
    viewer.run();
    DataReceiverThread::instance()->cancel();
    return 0;

Two windows will appear if you are compiling your project with the console subsystem. Set focus to the console window and type some characters. Press Enter when you are finished, and then press Tab followed by Enter in order to quit the receiver thread. You will notice that the same characters come out in the OSG rendering window. This can be treated as a very basic text editor, with the text source in a separate receiver thread, and the drawing interface implemented in the OSG scene graph.

What just happened?

It is very common for applications to use separate threads to load huge files from disk or from the Local Area Network (LAN). Other applications use threads to continuously receive data from network services and client computers, or from user-defined input devices including GPS and radar signals, with great speed and efficiency. Extra data-handling threads can even specify an affinity processor to work on, and thus make use of today's dual-core and quad-core CPUs. The OpenThreads library provides a minimal and complete object-oriented thread interface for OSG developers, and even for general C++ threading programmers. It is used by the osgViewer library to implement multithreaded scene updating, culling, and drawing, which is the secret of highly efficient rendering in OSG. Note here that multithreaded rendering doesn't simply mean executing OpenGL calls in different threads, because the related rendering context (HGLRC under Win32) is thread-specific. One OpenGL context can only be current in one thread (using the wglMakeCurrent() function). Thus, one OSG rendering window, which wraps only one OpenGL context, will never be activated and accept OpenGL calls synchronously in multiple threads. It requires accurate control of the threading model to make everything work well.

Materials, Lights and Shading Techniques with Away3D 3.6

Packt
04 Feb 2011
6 min read
The difference between textures and materials

Throughout this article, a number of references will be made to materials and textures. A texture is simply an image, like you would create in an image-editing application like Photoshop or view in a web page. Textures are then used by materials, which in Away3D are classes that can be applied to the surface of a 3D object.

Resource management

Quite a number of the materials included in Away3D rely on textures that exist in external image files, like PNG, JPG, or GIF files. There are two ways of dealing with external files: embedding them or accessing them at runtime. ActionScript includes the Embed keyword, which can be used to embed external files directly inside a compiled SWF file. There are a number of benefits to embedded resources:

- The Flash application can be distributed as a single file
- There is no wait when accessing the resources at runtime
- The security issues associated with accessing remote resources are avoided
- There is no additional network traffic once the SWF is downloaded
- The SWF file can be run offline
- The embedded files can have additional compression applied

The downside to embedding resources is that the size of the final SWF is increased, resulting in a longer initial download time. Alternatively, the external files can be saved separately and accessed at runtime, which has the following advantages:

- The SWF file is smaller, resulting in shorter initial download times
- Resources are only downloaded when they are needed, and cached for future access
- Resources can be updated or modified without recompiling the SWF file

There are several downsides to accessing resources at runtime:

- Permissions on the server hosting the resources may need to be configured before the external files can be accessed
- Distribution of the final Flash application is more difficult due to the increased number of individual files
- There will be a delay when the application is run, as the remote resources are downloaded

Away3D supports the use of both embedded and external resources, and both methods will be demonstrated below. Embedding the resources is usually the best option when managing resources. It prevents a number of possible errors due to unreliable networks and security restrictions, and produces a SWF file that is much simpler to distribute and publish. However, for applications where it is not possible to know what resources will be required beforehand, like a 3D image gallery, loading external resources is the only option. You may also want to load external resources for applications where there is a large volume of data that does not need to be downloaded immediately, like a large game with levels that the player won't necessarily see in a single sitting.

Defining colors in Away3D

The appearance of a number of materials can be modified by supplying a color. A good example is the WireColorMaterial material (the same one that is applied to a 3D object when no material is specified), the fill and outline colors of which can be defined via the color and wirecolor init object parameters. Colors can be defined in Away3D in a number of different formats. Common to all the formats is the idea that a color is made up of red, green, and blue components. For example, the color purple is made up of red and blue, while yellow is made up of red and green.

By integer

Colors can be defined as an integer. These int values are usually defined in their hexadecimal form, which looks like 0x12CD56.
The characters that make up the int can be digits between 0 and 9, and characters between A and F. You can think of the characters A through F as representing the numbers 10 to 15, allowing each character to represent 16 different values. For each color component, 00 is the lowest value, and FF is the highest. The first two characters define the red component of the color, the next two define the green component, and the final two define the blue component. It is sometimes necessary to define the transparency of a color. This is done by adding two additional characters to the beginning of the hexadecimal notation, such as 0xFF12CD56. In this form, the two leading characters define the transparency, or alpha, of the color. The last six characters represent the red, green, and blue components. Smaller alpha values make a color more transparent, while higher alpha values make a color more opaque. You can see an example of a color being defined as an int in the applyWireframeMaterial() function from the MaterialsDemo class.

By string

The same hexadecimal format used by integers can also be represented as a String. The only difference is that the 0x prefix is left off. An example would be "12CD56", or "FF12CD56". The MaterialsDemo applyColorMaterial() function demonstrates the use of this color format. Away3D also recognizes a number of colors by name. These are listed in the following table. The MaterialsDemo applyWireColorMaterial() function demonstrates the use of colors defined by name.

Pixel Bender

Pixel Bender is a technology, new to Flash Player 10, that implements generalized graphics processing in the Pixel Bender language. The programs written using Pixel Bender are known as kernels or shaders; the two terms can be used interchangeably with respect to Pixel Bender. Shaders have the advantage of being able to run across multiple CPUs and CPU cores, unlike the graphics processing done via the Flash graphics API. This gives shaders the potential to be much faster. One of the advantages of using Away3D version 3.x over version 2.x is the ability to use Pixel Bender shaders. The implementation of these shaders is largely hidden by the material classes that utilize them, meaning that they can be used much like the regular material classes, while at the same time offering a much higher level of detail. A common misconception is that Flash Player 10 uses the Graphics Processing Unit (GPU), which is common to most video chipsets these days, to execute shaders. This is incorrect. Unlike some other Adobe products that also make use of Pixel Bender shaders, Flash Player 10 does not utilize the GPU when executing shaders. Adobe has indicated that GPU rendering support for Pixel Bender may be included in future releases of Flash Player.
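To make the hexadecimal packing described under "By integer" above concrete, here is a small illustrative sketch in Python (chosen for neutrality; the bitwise layout is identical in ActionScript). The example value 0xFF12CD56 is taken from the text:

    color = 0xFF12CD56  # alpha FF, red 12, green CD, blue 56

    alpha = (color >> 24) & 0xFF   # 255: fully opaque
    red   = (color >> 16) & 0xFF   # 0x12
    green = (color >> 8)  & 0xFF   # 0xCD
    blue  =  color        & 0xFF   # 0x56

    # Packing the components back into a single integer
    repacked = (alpha << 24) | (red << 16) | (green << 8) | blue
    assert repacked == color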

OpenSceneGraph: methods for improving rendering efficiency

Packt
04 Feb 2011
11 min read
Improving your application

There are a lot of tricks to improve the rendering performance of applications with a large amount of data. But the essence of them is easy to understand: the smaller the number of resources (geometries, display lists, texture objects, and so on) allocated, the faster and smoother the user application is. You might benefit from the previous article on Implementing Multithreaded Operations and Rendering in OpenSceneGraph.

There are lots of ideas on how to find the bottleneck of an inefficient application. For example, you can replace certain objects with simple boxes, or replace the textures in your application with 1x1 images, to see if the performance increases thanks to the reduction of geometries and texture objects. The statistics class (osgViewer::StatsHandler, or press the S key in osgviewer) can also provide helpful information. To keep scene resources to a minimum, we can refer to the following table and try to optimize our applications if they are not running in good shape:

Problem: Too many geometries.
Influence: Low frame rate and huge resource cost.
Possible solutions: Use LOD and culling techniques to reduce the vertices of the drawables. Use primitive sets and the index mechanism rather than duplicated vertices. Merge geometries into one, if possible; one geometry object allocates one display list, and too many display lists occupy too much of the video memory. Share geometries, vertices, and nodes as often as possible.

Problem: Too many dynamic objects (configured with the setDataVariance() method).
Influence: Low frame rate, because the DRAW phase must wait until all dynamic objects finish updating.
Possible solutions: Don't use the DYNAMIC flag on nodes and drawables that do not need to be modified on the fly. Don't set the root node to be dynamic unless you are sure that you require this, because data variance can be inherited in the scene graph.

Problem: Too many texture objects.
Influence: Low frame rate and huge resource cost.
Possible solutions: Share rendering states and textures as much as you can. Lower the resolution and compress textures using the DXTC format if possible. Use osg::TextureRectangle to handle non-power-of-two sized textures, and osg::Texture2D for regular 2D textures. Use LOD to simplify and manage nodes with large-sized textures.

Problem: The scene graph structure is "loose", that is, nodes are not grouped together effectively.
Influence: Very high cull and draw time, and many redundant state changes.
Possible solutions: If there are too many parent nodes, each with only one child, which means the scene has as many group nodes as leaf nodes, and even as many drawables as leaf nodes, the performance will be totally ruined. You should rethink your scene graph and group nodes that have close features and behaviors more effectively.

Problem: Loading and unloading resources too frequently.
Influence: Lower and lower running speed and wasteful memory fragmentation.
Possible solution: Use a buffer pool to allocate and release resources. OSG has already done this for textures and buffer objects, by default.

An additional helper is the osgUtil::Optimizer class. This can traverse the scene graph before the simulation loop starts and perform different kinds of optimizations in order to improve efficiency, including removing redundant nodes, sharing duplicated states, checking and merging geometries, optimizing texture settings, and so on. You may start the optimizing operation with the following code segment:

    osgUtil::Optimizer optimizer;
    optimizer.optimize( node );

Some parts of the optimizer are optional. You can see the header file include/osgUtil/Optimizer for details.
Time for action – sharing textures with a customized callback

We would like to explain the importance of scene optimization by providing an extreme situation where massive numbers of textures are allocated without sharing the same ones. We have a basic solution: collect and reuse loaded images in a file reading callback, and then share all textures that use the same image object and have the same parameters. The idea of sharing textures can be used to construct massive scene graphs, such as digital cities; otherwise, the video card memory will soon be eaten up, causing the whole application to slow down and crash.

Include the necessary headers:

    #include <osg/Texture2D>
    #include <osg/Geometry>
    #include <osg/Geode>
    #include <osg/Group>
    #include <osgDB/ReadFile>
    #include <osgViewer/Viewer>

The function for quickly producing massive data can be used in this example once more. This time we will apply a texture attribute to each quad. That means that we are going to have a huge number of geometries, and the same number of texture objects, which will be a heavy burden when it comes to rendering the scene smoothly:

    // Casting RAND_MAX to float before adding 1 avoids the integer
    // overflow that RAND_MAX+1 would produce.
    #define RAND(min, max) \
        ((min) + (float)rand()/((float)RAND_MAX+1.0f) * ((max)-(min)))

    osg::Geode* createMassiveQuads( unsigned int number,
                                    const std::string& imageFile )
    {
        osg::ref_ptr<osg::Geode> geode = new osg::Geode;
        for ( unsigned int i=0; i<number; ++i )
        {
            osg::Vec3 randomCenter;
            randomCenter.x() = RAND(-100.0f, 100.0f);
            randomCenter.y() = RAND(1.0f, 100.0f);
            randomCenter.z() = RAND(-100.0f, 100.0f);

            osg::ref_ptr<osg::Drawable> quad =
                osg::createTexturedQuadGeometry(
                    randomCenter,
                    osg::Vec3(1.0f, 0.0f, 0.0f),
                    osg::Vec3(0.0f, 0.0f, 1.0f) );

            osg::ref_ptr<osg::Texture2D> texture = new osg::Texture2D;
            texture->setImage( osgDB::readImageFile(imageFile) );
            quad->getOrCreateStateSet()->setTextureAttributeAndModes(
                0, texture.get() );

            geode->addDrawable( quad.get() );
        }
        return geode.release();
    }

The createMassiveQuads() function is, of course, awkward and ineffective here. However, it demonstrates a common situation: assuming that an application often needs to load image files and create texture objects on the fly, it is necessary to check whether an image has already been loaded, and then share the corresponding textures automatically. The memory occupancy will be reduced considerably if there are plenty of reusable textures. To achieve this, we should first record all loaded image filenames, and then create a map that saves the corresponding osg::Image objects. Whenever a new readImageFile() request arrives, the osgDB::Registry instance will try using a preset osgDB::ReadFileCallback to perform the actual loading work. If the callback doesn't exist, it will call readImageImplementation() to choose an appropriate plug-in that will load the image and return the resultant object.
Therefore, we can take over the image reading process by inheriting from the osgDB::ReadFileCallback class and implementing new functionality that compares filenames and re-uses existing image objects, with the customized getImageByName() function:

    class ReadAndShareImageCallback : public osgDB::ReadFileCallback
    {
    public:
        virtual osgDB::ReaderWriter::ReadResult readImage(
            const std::string& filename, const osgDB::Options* options );

    protected:
        osg::Image* getImageByName( const std::string& filename )
        {
            ImageMap::iterator itr = _imageMap.find(filename);
            if ( itr!=_imageMap.end() ) return itr->second.get();
            return NULL;
        }

        typedef std::map<std::string, osg::ref_ptr<osg::Image> > ImageMap;
        ImageMap _imageMap;
    };

The readImage() method should be overridden to replace the current reading implementation. It will return the previously-imported instance if the filename matches an element in the _imageMap, and will add any newly-loaded image object and its name to _imageMap, in order to ensure that the same file won't be imported again:

    osgDB::ReaderWriter::ReadResult ReadAndShareImageCallback::readImage(
        const std::string& filename, const osgDB::Options* options )
    {
        osg::Image* image = getImageByName( filename );
        if ( !image )
        {
            osgDB::ReaderWriter::ReadResult rr;
            rr = osgDB::Registry::instance()->readImageImplementation(
                filename, options );
            if ( rr.success() ) _imageMap[filename] = rr.getImage();
            return rr;
        }
        return image;
    }

Now we get to the main entry. The file-reading callback is set with the setReadFileCallback() method of the osgDB::Registry class, which is designed as a singleton. Meanwhile, we have to enable another important run-time optimizer, named osgDB::SharedStateManager, which can be set by setSharedStateManager() or getOrCreateSharedStateManager(). The latter will assign a default instance to the registry:

    osgDB::Registry::instance()->setReadFileCallback(
        new ReadAndShareImageCallback );
    osgDB::Registry::instance()->getOrCreateSharedStateManager();

Create the massive scene graph. It consists of two groups of quads, each of which uses a single image file to decorate the quad geometry. In total, 1,000 quads will be created, along with 1,000 newly-allocated textures. Certainly, there are too many redundant texture objects (because they are generated from only two image files) in this case:

    osg::ref_ptr<osg::Group> root = new osg::Group;
    root->addChild( createMassiveQuads(500, "Images/lz.rgb") );
    root->addChild( createMassiveQuads(500, "Images/osg64.png") );

The osgDB::SharedStateManager is used for maximizing the reuse of textures and state sets. It is actually a node visitor, traversing all child nodes' state sets and comparing them when the share() method is invoked. State sets and textures with the same attributes and data will be combined into one:

    osgDB::SharedStateManager* ssm =
        osgDB::Registry::instance()->getSharedStateManager();
    if ( ssm ) ssm->share( root.get() );

Finalize the viewer:

    osgViewer::Viewer viewer;
    viewer.setSceneData( root.get() );
    return viewer.run();

Now the application starts with a large number of textured quads. With the ReadAndShareImageCallback sharing image objects, and the osgDB::SharedStateManager sharing textures, the rendering process can work without a hitch. Try commenting out the setReadFileCallback() and getOrCreateSharedStateManager() lines and restarting the application to see what happens. The Windows Task Manager is helpful for displaying the amount of memory currently used.

What just happened?
You may be curious about the implementation of osgDB::SharedStateManager. It collects rendering states and textures that first appear in the scene graph, and then replaces duplicated states of successive nodes with the recorded ones. It compares two states' member attributes in order to decide whether the new state should be recorded (because it's not the same as any of the recorded ones) or replaced (because it is a duplicate of a previous one). For texture objects, the osgDB::SharedStateManager will determine if they are exactly the same by checking the data() pointer of the osg::Image object, rather than by comparing every pixel of the image. Thus, the customized ReadAndShareImageCallback class is used here to share image objects with the same filename first, and the osgDB::SharedStateManager then shares textures with the same image object and other attributes. The osgDB::DatabasePager also makes use of osgDB::SharedStateManager to share the states of external scene graphs when dynamically loading and unloading paged nodes. This is done automatically if getOrCreateSharedStateManager() is executed.

Have a go hero – sharing public models

Can we also share models with the same name in an application? The answer is absolutely yes. The osgDB::ReadFileCallback could be used again, this time overriding the virtual method readNode(). Other preparations include a member std::map for recording filename and node pointer pairs, and a user-defined getNodeByName() method, as we have just done in the last example.

Paging huge scene data

Are you still struggling with the optimization of huge scene data? Don't focus only on the rendering API itself. There is no "super" rendering engine in the world that can work with unlimited datasets. Consider using the scene paging mechanism instead, which can load and unload objects according to the current viewport and frustum. It is also important to design a better structure for indexing regions of spatial data, like the quad-tree, octree, R-tree, and the binary space partitioning (BSP) tree.

Making use of the quad-tree

A classic quad-tree structure decomposes the whole 2D region into four square children (we call them cells here), and recursively subdivides each cell into four regions, until a cell reaches its target capacity and stops splitting (becoming a so-called leaf). Each cell in the tree either has exactly four children, or has no children. It is mostly useful for representing terrains or scenes on 2D planes. The quad-tree structure is useful for view-frustum culling of terrain data. Because the terrain is divided into small pieces, we can easily render the small pieces of data that fall within the frustum, and discard those that are invisible. This can effectively unload a large number of chunks of a terrain from memory at a time, and load them back when necessary, which is the basic principle of dynamic data paging. This process can be progressive: when the terrain model is far enough from the viewer, we may only handle its root and first levels. But as it draws near, we can traverse down to the corresponding levels of the quad-tree, and cull and unload as many cells as possible, to keep the load balance of the scene.
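To make the subdivision rule concrete, here is a minimal, illustrative quad-tree insertion sketch in Python (not part of OSG; the class and parameter names are invented for illustration). Each cell stores points until it exceeds its capacity, then splits into exactly four children, as described above:

    class QuadTreeCell:
        def __init__(self, x, y, size, capacity=4):
            self.x, self.y, self.size = x, y, size  # square region: origin and edge length
            self.capacity = capacity                # points allowed before splitting
            self.points = []
            self.children = None                    # None for a leaf, else four child cells

        def insert(self, px, py):
            if not (self.x <= px < self.x + self.size and
                    self.y <= py < self.y + self.size):
                return False                        # point lies outside this cell
            if self.children is None:
                self.points.append((px, py))
                if len(self.points) > self.capacity:
                    self._split()
                return True
            return any(child.insert(px, py) for child in self.children)

        def _split(self):
            half = self.size / 2.0
            self.children = [
                QuadTreeCell(self.x,        self.y,        half, self.capacity),
                QuadTreeCell(self.x + half, self.y,        half, self.capacity),
                QuadTreeCell(self.x,        self.y + half, half, self.capacity),
                QuadTreeCell(self.x + half, self.y + half, half, self.capacity),
            ]
            for p in self.points:                   # push stored points down to the children
                any(child.insert(*p) for child in self.children)
            self.points = []

A frustum culler built on such a tree tests each cell's square against the view frustum and descends only into the children that intersect it, which is what allows large portions of a terrain to be culled or unloaded in one step.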

Creating Man-made Materials in Blender 2.5

Packt
28 Jan 2011
10 min read
Blender 2.5 Materials and Textures Cookbook: over 80 great recipes to create life-like Blender objects. Master techniques to create believable natural surface materials; take your models to the next level of realism or artistic development by using the material and texture settings within Blender 2.5; take the hassle out of material simulation by applying faster and more efficient material and texture strategies. Part of Packt's Cookbook series: each recipe is logically organized according to surface type, with clear instructions and explanations of how these recipes can be applied across a range of materials, including complex materials such as oceans, smoke, fire, and explosions.

Creating a slate roof node material that repeats but with ultimate variety

Man-made materials will often closely resemble their natural surface attributes. Slate is a natural material that is used in many building projects. Its tendency to shear into natural slices makes it an ideal candidate for roofing. However, in its man-made form it is much more regularly shaped and graded, to give a nice repeating pattern on a roof surface. That doesn't mean that there is no irregularity in either the edges or surface of manufactured slate tiles. In fact, architects often use this irregularity to add character and drama to a roof. Repeat patterns in a 3D suite like Blender can be extremely difficult to control. If repeats become too obvious, particularly when surfaces and edges are supposed to be random, they can ruin the illusion. Fortunately, we will be employing Blender controls to add randomness to a repeated image representing the tiled pattern of the roof.

Of course, slates, like any building material, need to be fixed to a roof. This is usually achieved with nails. Over time, these metal joiners will age and either rust or channel water, adding streaks and rust lines across the slate, often emphasizing the slope of the roof. All these secondary events will add character and dimension to a material simulation. However, such a material need not be complex to achieve a believable and stimulating result.

Getting ready

The preparation for this recipe could not be simpler. The modeling required to produce the mesh roof is no more than a simple plane created at the origin and rotated about its Y axis to 30°. The plane can be at the default size of one Blender unit and should have no more than four vertices. That's just about the simplest model you can have in a 3D graphics program. Position the camera so that you have a reasonably close-up view.

The default lights will be fine for this simulation, but you are welcome to place lights as you wish. Please bear in mind that a slate roof tends to be quite a dark material, so if test renders appear too dark, raise the light energy until a reasonable render can be produced. You can also turn off Raytrace rendering and Ambient Occlusion, if they have been previously set, as they are not required for this material. This will save considerable time when rendering test images. Save your blendfile as slate-roof-01.blend. You will also need to either create or download a small tileable image to represent the pattern of the slate roof. Instructions are given on how to create it within the recipe, but a downloadable version is available from the Packtpub website.

How to do it...

We need to create an image of the smallest repeatable pattern of our slates. This can act both as a bump map and as a way to mask and apply color variation to the material.
The image is very simple and is based on the shape and dimensions of a standard rectangular slate. You will see later how the shape can be changed to represent other slate patterns. This one was created in GIMP, although any reasonable paint package could be used. Here are the steps to create it yourself:

1. Create a new image with a size of 260 x 420 pixels. I will show later how you can scale an image to give better proportions for more efficient use within Blender.
2. Either place guides or create a grid to sub-divide the rectangle into four sections.
3. In the top half of the rectangle, create a blend fill from black at the top to white at the middle. Do the same for the bottom half of the rectangle.
4. Create a new layer and draw a black line, three pixels wide, from the middle of the top rectangle section, to divide the top rectangle into two.
5. Draw black lines of the same thickness down each side of the whole rectangle. If you used a grid, you should find that one of these verticals is two pixels wide and the other one pixel. Obviously, when this image is tiled, the black lines will all appear equal in thickness.
6. Finally, create another blend fill at the bottom of each rectangle, from black at the bottom to white about ten pixels up.
7. Save your completed image as slate-tile.png in your Blender textures directory. If you want to skip these steps, you can download a pre-created version from the Packtpub website.

How it works...

The image that you want to tile must be carefully designed to hide any seams that might appear when repeated. Most of the major paint packages, such as Photoshop and GIMP, have tools to aid you in that process. However, manual drawing, or editing of an image, will almost always be necessary to create accurately tileable images. Even tiny variations between seams will show up if repeated enough times across a surface. Fortunately, there are techniques available in Blender that will help mask these repeat-image shortcomings.

Using a tileable texture to add complexity to a surface

We will use the tileable texture created in the previous recipe and apply it to a slate roof material in Blender.

1. Reload the slate-roof-01.blend file saved earlier and select the roof mesh object.
2. From the Materials panel, create a new material and name it slate-roof.
3. In the Diffuse tab, set the color selector to R 0.250, G 0.260, and B 0.300.
4. Under the Specular tab, change the specular type to WardIso, with Intensity 0.400 and Slope 0.300. The color should stay at the default white. That sets the general color and specularity for the first material that we will use to start a node material solution for our slate roof shader.
5. Ensure you have a Node Editor window displayed. In the Node Editor, select the material node button and check the Use Nodes checkbox. A blank material node should be displayed, connected to an output node.
6. From the Browse ID Data button on the Material node, select the material previously created, named slate-roof. To confirm that the material is loaded into the node, re-select that node by left-clicking it.

Of course, at the moment, the material is no more than a single color with a soft specular shine. To start turning it into a proper representation of a slate roof, we have to add our tileable texture and set up some bump and color settings to make our simple plane look a little more like a roof of many slate tiles.

7. With the Material node still selected, go to the Texture panel and, in the first texture slot, create a new texture of type Image or Movie and name it slate-tile.
8. From the Image tab, open the slate-tile.png image you saved earlier.
9. Under Image Mapping | Extension, select Repeat, and set Repeat to X 9 and Y 6. That means the image will be repeated nine times in the X direction of the texture space and six in the Y.
10. In the Influence tab, select Diffuse/Color and set it to 0.831. Also select Geometry/Normal and set it to -5.000. Finally, set the Blend type to Darken.
11. Save your work at this point, incrementing your filename number to slate-roof-02.blend.

As you can see, a repeated pattern has been stamped on our flat surface, with darker colors representing the slate tile separations and a darker top that currently looks like a shadow. This will be corrected in the following recipes, along with the obvious clinical precision of each edge. A scripted sketch of the material and texture settings follows the explanation below.

How it works...

The surface properties of slate produce a spread of specular highlight when the slate is dry. The best way of simulating that in Blender is to employ a specular shader that can easily generate this type of specular property. The WardIso specular shader is ideal for this task, as it allows a wide range of slope settings, from very tight, below 0.100, to very widely spread, 0.400. This is different from the other specular shaders, which use a hardness setting to vary the highlight spread. However, you will notice that those other specular shader types produce a narrower range than the WardIso shader. In our slate example, this particular shader provides the ideal solution.

Man-made materials are often made from repeated patterns. This is often because it's easier to manufacture objects as components and bring them together when building, thus producing patterns. Utilizing simple tileable images to represent those shapes is an extremely efficient way of creating a Blender material simulation. Blender provides some really useful tools to ease the process of using repeats within a material, as well as techniques to add variety and drama to a material.

Repeat is a really useful way of tiling an image any number of times across the object's texture space. In our example, we were applying the image texture to the object's generated texture space. That's basically the physical dimensions of the object. You can find out what the texture space looks like for any object by selecting the Object panel, choosing the Display tab, and checking Texture Space. An orange dotted border, representing the texture space, will surround the object. The plane object used for this material simulation is square. If you were to scale the plane disproportionately, the texture would distort accordingly. If we were using this material for a roof simulation, where the scale may not be square, we might need to alter the repeat settings in the texture to match the proportions of the roof rectangle. In our recipe, we started with a one-Blender-unit square mesh, then set the repeat pattern to X 9 and Y 6. The repeat settings have to be integer numbers, so it may be necessary to calculate the nearest repeat numbers for the image you want to use. In our example, we didn't need to be absolutely accurate; slates, after all, often vary in size between buildings. If you want to be absolutely accurate, scale your original mesh in Object mode to match the image proportions. So, in our example, we could have scaled the plane to 2.60 (or 0.26) Blender units on its X axis and 4.20 (or 0.42) on its Y axis, and then designed our repeats from that point.
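For those who like to mirror the recipe steps in script form, here is the promised sketch of the slate-roof material and its repeated image texture, using Blender's Python API. It assumes the 2.5-series bpy property names (slot names such as diffuse_color_factor are quoted from memory and may need checking against your build), and the texture path is an assumption:

    import bpy

    # Base material: dark blue-grey diffuse with a soft WardIso highlight
    mat = bpy.data.materials.new("slate-roof")
    mat.diffuse_color = (0.25, 0.26, 0.30)
    mat.specular_color = (1.0, 1.0, 1.0)
    mat.specular_shader = 'WARDISO'
    mat.specular_intensity = 0.4
    mat.specular_slope = 0.3

    # Tileable slate image, repeated 9 x 6 across the texture space
    tex = bpy.data.textures.new("slate-tile", type='IMAGE')
    tex.image = bpy.data.images.load("//textures/slate-tile.png")  # assumed path
    tex.extension = 'REPEAT'
    tex.repeat_x = 9
    tex.repeat_y = 6

    # First texture slot: darken the diffuse color and bump the surface
    slot = mat.texture_slots.add()
    slot.texture = tex
    slot.blend_type = 'DARKEN'
    slot.use_map_color_diffuse = True
    slot.diffuse_color_factor = 0.831
    slot.use_map_normal = True
    slot.normal_factor = -5.0

    # Assign the material to the selected roof plane
    bpy.context.object.data.materials.append(mat)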

Models and Animations with Away3D 3.6

Packt
28 Jan 2011
7 min read
Away3D 3.6 Essentials: Take Flash to the next dimension by creating detailed, animated, and interactive 3D worlds with Away3D. Create stunning 3D environments with highly detailed textures; animate and transform all types of 3D objects, including 3D text; eliminate the need for expensive hardware with proven Away3D optimization techniques, without compromising on visual appeal. Written in a practical and illustrative style, which will appeal to Away3D beginners and Flash developers alike.

Models and Animations

It is possible to create a 3D object from the ground up using basic elements like vertices, triangle faces, Sprite3D objects, and segments. However, creating each element manually in code is not practical for more complex models. While the classes from the away3d.primitives package offer a solution, by providing a way to quickly create some standard shapes, advanced applications will need to display more complex shapes. For those situations where these standard primitive shapes do not provide enough flexibility, Away3D can load and display 3D models created by external 3D modeling applications. 3D modeling applications are specifically designed to provide a visual environment in which 3D models can be manipulated. It is certainly much more convenient to create or edit a 3D mesh in one of these applications than it is to build up a mesh in code using ActionScript. Away3D can directly load a wide range of 3D formats. The process of exporting a 3D mesh into a file that can be used with Away3D will be covered for the following 3D modeling applications:

- 3ds Max: A popular commercial modeling, animation, and rendering application, which runs on Windows.
- Blender: A free and open source modeling application, which is available on a number of platforms, including Windows, Linux, and MacOS.
- MilkShape: A commercial low-polygon modeler, which runs on Windows, that was originally designed for the game Half-Life.
- Sketch-Up: A free 3D modeling application provided by Google. A commercial version is also available that includes a number of additional features. Sketch-Up runs on Windows and MacOS.

Actually creating a model in these 3D modeling applications is outside the scope of this article. However, 3D models are provided that can be loaded and then exported from these applications, which will allow you to run through the procedure without having to know how to make a 3D model from scratch.

3D formats supported by Away3D

Away3D includes classes that can load a wide range of 3D model file formats. All the supported formats can be used to load a static 3D model, while a smaller number can be used to load animated models. The following table lists the 3D model formats supported by Away3D, their common extensions, whether they can load animated 3D models, and the Away3D class that is used to load and parse them.

Exporting 3D models

The following instructions show you how to export a Collada file from a number of different 3D modeling applications. Collada is an open, XML-based format that has been designed to provide a way to exchange data between 3D applications. Away3D supports loading both static and animated 3D models from the Collada format.

Exporting from 3ds Max

3ds Max is a commercial 3D modeling application. At the time of writing, the latest version of the ColladaMax plugin, which is the plugin that we will use to export the 3D model, was 3.05C. This version supports 3ds Max 2008, 3ds Max 9, 3ds Max 8 SP3, and 3ds Max 7 SP1. Note that this version does not support 3ds Max 2010 or 2011.
A trial version of 3ds Max 9 is available, although it can be difficult to find. You should be able to find a copy if you search the Internet for Autodesk3dsMax2009_ENU_TrialDownload.exe, which is the name of the file that will install the trial version of 3ds Max 9.

Download and install the ColladaMax plugin from http://sourceforge.net/projects/colladamaya/files/.
Open 3ds Max.
Click File | Open.
Select the MAX file you wish to open and click on the Open button.
Click File | Export from within 3ds Max.
Select COLLADA (*.DAE) from the Save as type drop-down list.
Select the same directory where the original MAX file was located.
Type a file name for the exported file in the File name textbox, and click on the Save button.
In the ColladaMax Export dialog box, make sure the Relative Paths, Normals, and Triangulate checkboxes are enabled.
If you want to export animations, enable the Enable export checkbox. If you want to export a specific range of frames, enable the Sample animation checkbox and enter the required values in the Start and End textboxes.
Click on the OK button to export the file.

Exporting from MilkShape

The Collada exporter supplied with MilkShape does not export animations, so even if the MilkShape MS3D file we are loading contains an animated model, the exported Collada DAE file will be a static mesh. A trial version of MilkShape can be downloaded and installed from its website at http://chumbalum.swissquake.ch/.

Click File | Open.
Select the MS3D file you wish to open and click on the Open button.
Click File | Export | COLLADA....
Select the same directory where the original MS3D file was located.
Type a filename for the exported file in the File name textbox and click the Save button.

Exporting from Sketch-Up

Like MilkShape, Sketch-Up does not support exporting animated Collada files. Sketch-Up can be downloaded for free from http://sketchup.google.com/.

Click File | Open.
Select the SKP file you wish to open and click on the Open button.
Click File | Export | 3D Model....
Select Collada File (*.dae) from the Export type combobox.
Select an appropriate directory, and type a filename for the exported file in the File name textbox.
Click on the Options... button.
Make sure the Triangulate All Faces checkbox is enabled. If the Export Texture Maps option is enabled, Sketch-Up will export the textures along with the DAE file.
Click on the OK button to save the options.
Click on the Export button to export the file.

Exporting from Blender

The latest version of the Collada exporter for Blender, which was version 0.3.162 at the time of writing, does support exporting animations. However, in most cases Away3D will not load these animations correctly, so it is recommended that only static meshes be exported from Blender to a Collada file.

Click File | Open....
Select the BLEND file you wish to open and click on the Open button.
Click File | Export | COLLADA 1.4 (*.dae)....
In the Export File textbox, type a filename for the exported file in the directory where the original BLEND file was located.
Make sure the Triangles and Use Relative Paths buttons are pressed.
Click on the Export and Close button.

A note about the Collada exporters

Despite being a free and open standard, exporting to a Collada file that can be correctly parsed by Away3D can be a hit-and-miss affair. The Collada exporters for 3ds Max are a good example.
During testing, neither the built-in Collada exporter included with 3ds Max, nor the third-party OpenCollada exporter from http://opencollada.org (version 1.2.5 was the latest version at the time of writing), would export an animated Collada file that Away3D could read. At best Away3D would display a static mesh, and at worst it would throw an exception when reading the DAE file. Likewise, neither of the Collada exporters that come with Blender (which was at version 2.49b at the time of writing) would consistently export an animated Collada mesh that was compatible with Away3D. It is important to be aware that just because a 3D modeling application says that it can export to a Collada file, this is no guarantee that the resulting file can be read correctly by Away3D.


3D Animation Techniques with XNA Game Studio 4.0

Packt
14 Jan 2011
3 min read
Object animation

We will first look at the animation of objects as a whole. The most common ways to animate an object are rotation and translation (movement). We will begin by creating a class that will interpolate a position and rotation value between two extremes over a given amount of time. We could also have it interpolate between two scaling values, but it is very uncommon for an object to change size in a smooth manner during gameplay, so we will leave it out for simplicity's sake.

The ObjectAnimation class has a number of parameters: starting and ending position and rotation values, a duration over which to interpolate between those values, and a Boolean indicating whether the animation should loop or just remain at the end value after the duration has passed:

public class ObjectAnimation
{
    Vector3 startPosition, endPosition, startRotation, endRotation;
    TimeSpan duration;
    bool loop;
}

We will also store the amount of time that has elapsed since the animation began, and the current position and rotation values:

TimeSpan elapsedTime = TimeSpan.FromSeconds(0);

public Vector3 Position { get; private set; }
public Vector3 Rotation { get; private set; }

The constructor will initialize these values:

public ObjectAnimation(Vector3 StartPosition, Vector3 EndPosition,
    Vector3 StartRotation, Vector3 EndRotation, TimeSpan Duration,
    bool Loop)
{
    this.startPosition = StartPosition;
    this.endPosition = EndPosition;
    this.startRotation = StartRotation;
    this.endRotation = EndRotation;
    this.duration = Duration;
    this.loop = Loop;

    Position = startPosition;
    Rotation = startRotation;
}

Finally, the Update() function takes the amount of time that has elapsed since the last update and updates the position and rotation values accordingly:

public void Update(TimeSpan Elapsed)
{
    // Update the time
    this.elapsedTime += Elapsed;

    // Determine how far along the duration value we are (0 to 1)
    float amt = (float)elapsedTime.TotalSeconds / (float)duration.TotalSeconds;

    if (loop)
        while (amt > 1) // Wrap the time if we are looping
            amt -= 1;
    else // Clamp to the end value if we are not
        amt = MathHelper.Clamp(amt, 0, 1);

    // Update the current position and rotation
    Position = Vector3.Lerp(startPosition, endPosition, amt);
    Rotation = Vector3.Lerp(startRotation, endRotation, amt);
}

As a simple example, we'll create an animation (in the Game1 class) that rotates our spaceship in a circle over a few seconds. We'll also have it move the model up and down for demonstration's sake:

ObjectAnimation anim;

We initialize it in the constructor:

models.Add(new CModel(Content.Load<Model>("ship"), Vector3.Zero,
    Vector3.Zero, new Vector3(0.25f), GraphicsDevice));

anim = new ObjectAnimation(new Vector3(0, -150, 0),
    new Vector3(0, 150, 0), Vector3.Zero,
    new Vector3(0, -MathHelper.TwoPi, 0),
    TimeSpan.FromSeconds(10), true);

We update it as follows:

anim.Update(gameTime.ElapsedGameTime);

models[0].Position = anim.Position;
models[0].Rotation = anim.Rotation;
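To make the looping arithmetic in Update() concrete, here is a minimal worked example using the same animation values as above. It assumes the ObjectAnimation class exactly as defined in this article; the 12.5-second elapsed time and the hand-calculated results in the comments are illustrative additions, not part of the original text:

// The article's animation: move from (0, -150, 0) to (0, 150, 0) and rotate
// from zero to (0, -TwoPi, 0) over 10 seconds, looping.
ObjectAnimation anim = new ObjectAnimation(
    new Vector3(0, -150, 0), new Vector3(0, 150, 0),
    Vector3.Zero, new Vector3(0, -MathHelper.TwoPi, 0),
    TimeSpan.FromSeconds(10), true);

// A single 12.5-second update gives amt = 12.5 / 10 = 1.25, which the looping
// branch wraps down to 0.25, a quarter of the way through the second cycle.
anim.Update(TimeSpan.FromSeconds(12.5));

// Position = Vector3.Lerp((0, -150, 0), (0, 150, 0), 0.25) = (0, -75, 0)
// Rotation = Vector3.Lerp((0, 0, 0), (0, -TwoPi, 0), 0.25) = (0, -Pi/2, 0)

If loop had been false, MathHelper.Clamp would instead pin amt at 1, leaving the ship at the animation's end position and orientation.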