
How-To Tutorials - Game Development

368 Articles

Flash 10 Multiplayer Game: The Lobby and New Game Screen Implementation

Packt
27 Jul 2010
5 min read
(For more resources on Flash and Games, see here.)

The lobby screen implementation

In this section, we will learn how to implement the room display within the lobby.

Lobby screen in Hello World

Upon login, the first thing the player needs to do is enter the lobby. Once the player has logged into the server successfully, the default behavior of PulseGame in PulseUI is to call the enterLobby API. The following is the implementation within PulseGame:

```actionscript
protected function postInit():void {
    m_netClient.enterLobby();
}
```

Once the player has successfully entered the lobby, the client starts listening to all the room updates that happen in the lobby. These updates include any newly created room and any changes to existing room objects, for example, a change in a game room's player count, a host change, and so on.

Customizing the lobby screen

In PulseUI, the lobby screen is the first screen displayed after a successful login. The lobby screen is drawn over whatever the outline object has drawn onto the screen. The following elements are added to the screen when the lobby screen is shown to the player:

- Search lobby UI
- Available game rooms
- Game room scroll buttons
- Buttons for creating a new game room
- Navigation buttons to the top-ten and register screens

When the lobby screen is hidden, the lobby UI elements are taken off the screen to make way for the incoming screen. For our initial game prototype, we don't need to make any changes: the PulseUI framework already offers all of the essential lobby functionality for any kind of multiplayer game. However, the one place you may want to add more detail is in what gets displayed for each room within the lobby.

Customizing the game room display

The room display is controlled by the RoomsDisplay class, an instance of which is contained in GameLobbyScreen. RoomsDisplay contains a number of RoomDisplay object instances, one for each room being displayed. To modify what gets displayed in each room display, we work inside a class subclassed from RoomDisplay. The following figure shows the containment of the Pulse layer classes and what we need to subclass in order to modify the room display:

In all cases, we subclass PulseGame (as MyGame). To have our own lobby screen, we first create a class (MyGameLobbyScreen) that inherits from GameLobbyScreen, and override the initLobbyScreen method as shown below:

```actionscript
protected override function initLobbyScreen():void {
    m_gameLobbyScreen = new MyGameLobbyScreen();
}
```

To provide our own RoomsDisplay, we create a subclass (MyRoomsDisplay) that inherits from RoomsDisplay, and override the method in GameLobbyScreen that creates the RoomsDisplay, as shown below:

```actionscript
protected function createRoomsDisplay():void {
    m_roomsDisplay = new MyRoomsDisplay();
}
```

Finally, we do similar subclassing for MyRoomDisplay, overriding the method in MyRoomsDisplay that creates each RoomDisplay:

```actionscript
protected override function createRoomDisplay(room:GameRoomClient):RoomDisplay {
    return new MyRoomDisplay(room);
}
```

Now that we have hooked up our own implementation of RoomDisplay, we are free to add any additional information we like. To add additional sprites, we simply override the init method of our room display subclass and provide the additional sprites.

Filtering rooms to display

The choice is up to the game developer to either display all the rooms currently created or just the ones that are available to join.
We may override the shouldShowRoom method in our RoomsDisplay subclass (MyRoomsDisplay) to change the default behavior, which is to show only rooms that are available to join, as well as rooms that allow players to join even after the game has started. The following is the default implementation:

```actionscript
protected function shouldShowRoom(room:GameRoomClient):Boolean {
    var show:Boolean;
    show = (room.getRoomType() == GameConstants.ROOM_ALLOW_POST_START);
    if (show == true)
        return true;
    else {
        return (room.getRoomStatus() == GameConstants.ROOM_STATE_WAITING);
    }
}
```

Lobby and room-related API

Upon successful login, every game implementation must call the enterLobby method:

```actionscript
public function enterLobby(gameLobbyId:String = "DefaultLobby"):void
```

You may pass a null string if you only wish to have one default lobby. The client then receives the following notification indicating whether the request to enter the lobby was successful; at this point, the game screen should switch to the lobby screen:

```actionscript
function onEnteredLobby(error:int):void
```

If entering the lobby was successful, the client will start to receive a series of onNewGameRoom notifications, one for each room currently active in the entered lobby. The implementation should draw the corresponding game room, with its details, on the lobby screen:

```actionscript
function onNewGameRoom(room:GameRoomClient):void
```

The client may also receive other lobby-related notifications, such as onUpdateGameRoom for any room updates and onRemoveGameRoom for room objects that no longer exist in the lobby:

```actionscript
function onUpdateGameRoom(room:GameRoomClient):void
function onRemoveGameRoom(room:GameRoomClient):void
```

If the player wishes to join an existing game room in the lobby, simply call joinGameRoom and pass the corresponding room object:

```actionscript
public function joinGameRoom(gameRoom:GameRoomClient):void
```

In response to a join request, the server notifies the requesting client whether the action succeeded or failed via the game client callback method:

```actionscript
function onJoinedGameRoom(gameRoomId:int, error:int):void
```

A player already in a game room may leave the room and go back to the lobby by calling the following API:

```actionscript
public function leaveGameRoom():void
```

If the player successfully left the room, the calling game client receives the notification via the following callback API:

```actionscript
function onLeaveGameRoom(error:int):void
```


Flash 10 Multiplayer Game: Introduction to Lobby and Room Management

Packt
14 Jul 2010
7 min read
(For more resources on Flash and Games, see here.)

A lobby, in a multiplayer game, is where people hang around before they go into a specific room to play. When players come out of a room, they are dropped into the lobby again. The main function of a lobby is to help players quickly find a game room that suits them, and join it.

While in the lobby, the player can browse the rooms that can be entered, and may see several attributes and the status of each game room. For example, the player can see how many players are already in a room, a hint that the game in that room is about to begin: if it is a four-player game and three players are already in, there is a good chance the game will start soon. Depending on the game, the room may also show other information, such as the avatar names of all the players already in the room and who the host is. In a car racing game, the player may be able to see which map was chosen to play, what difficulty level the room is set to, and so on. Most lobbies also offer a quick-join functionality, where the system chooses a room for the player and enters them into it.

Joining a room means the player leaves the lobby, which in turn means that the player is no longer aware of, or interested in, any updates that happen in the lobby. The player now only receives events that occur within the game room, such as another player entering or departing, or the host leaving and a new host being chosen by the server.

When a player is in the lobby, the player constantly receives updates of what happens within the lobby: events such as room creation and deletion, and room-related updates, which include players joining or leaving a room and the room status changing from waiting to playing.

A sophisticated lobby design lets a player delay switching to the room screen until the game starts. This is done so that players do not feel all alone once they create a room and enter it. In this design, the player can still view activities in the lobby, and players have the opportunity to change their mind and jump to another table (game room) instantaneously. The lobby screen may also provide a chatting interface, letting players view all the other players in the lobby and even make friends.

Note that the lobby for a popular game may include thousands of players, and the server may be bogged down by sending updates to all of them. As an advanced optimization, various pagination schemes may be adopted, where the player only receives updates from the set of rooms currently being viewed on the screen. In some cases, lobbies are organized into various categories to lessen the player traffic and thus the load on the server. Some of the ways you may want to break down the lobbies are by player level, game genre, geographic location, and so on. Lobbies are most often statically designed, meaning a player may not create a lobby on the fly. The server's responsibility is to keep track of all the players in the lobby and dispatch to them all events related to lobby and room activity.

The rooms managed within a lobby may be created dynamically, or sometimes statically. With statically created rooms, the players simply occupy them, play the game, and then leave. In this design, the game starts with a set of empty rooms, say, one hundred of them.
If all rooms are currently in the play state, then the player needs to wait to join a room that is in the wait state and is open to accepting a new player.

Modeling game room

The game room required for the game is also modeled via the schema file (Download here-chap3). Subclass the game room when you want to define additional properties to store within it. The properties you might want to add would be specific to your game; some commonly required properties, however, are already defined in the GameRoom class. You will only need to define one such subclass per game. The following are the properties defined on the GameRoom class:

| Property | Notes |
| --- | --- |
| Room name | Name of the game room, typically set during game room creation. |
| Host name | The server keeps track of this value; it is set to the current host of the room. The room host is typically the creator. If the host leaves the room while others are still in it, an arbitrary player in the room is made host. |
| Host ID | Maintained by the server, similar to host name. |
| Password | Should be set by the developer upon creating a new game room. |
| Player count | The server keeps track of this value as players enter or leave the room. |
| Max player count | Should be set by the developer upon creating a new game room. The server automatically rejects a joining player if the player count equals the max player count. |
| Room status | The possible values are GameConstants.ROOM_STATE_WAITING or GameConstants.ROOM_STATE_PLAYING. The server updates this value based on player actions such as the PulseClient.startGame API. |
| Room type | The possible values are combinations of GameConstants.ROOM_TURN_BASED and GameConstants.ROOM_DISALLOW_POST_START. The developer should set this value upon creating a new room; the server controls the callback API behavior based on this property. |
| Action | A server-reserved property; the developer should not use it for any purpose. |

The developer may inherit from the game room and specify an arbitrary number of properties. Note that the total size of these properties should not exceed 1K bytes.

Game room management

A room is where a group of players play a particular game. The player who joins or enters the room first is called the game (or room) host. The host has some special powers: for example, the host can set the difficulty level of the game, set the race track to play, limit the number of people that can join the game, and even set a password on the room. If there is more than one player in the room and the host decides to leave, the system usually chooses another player in the room as host automatically, and this event is notified to all players in the room.

Once in the room, the player starts receiving events for any player entering or leaving the room, and for any room-property changes, such as the host setting a different map to play. The players can also chat with each other when they are in a game room.

Seating order

Seating order is not required for all kinds of games. A racing game, for example, may not place much importance on the starting position of the players' cars, although the system may assign one automatically; the same is true of two-player games such as chess. But players in a four-player card game around a table may wish to sit at a specific position, for example, across from a friend who will be their partner during game play.
In these cases, a player entering a room also requests to sit at a certain position. In this kind of a lobby or room design, the GUI shows which seats are currently occupied and which seats are not. The server may reject the player request if another player is already seated at the requested seat. This happens when the server has already granted the position to another player just an instant before the player requested and the UI was probably not updated. In this case, the server will choose another vacant seat if one is available or else the server will reject the player entrance into the room.
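The grant-or-fallback rule just described is simple to state in code. The following is a minimal sketch in Python, with illustrative names (the Pulse client API is not shown in this excerpt and is not what a real server would use), of how a server might handle a seat request:

```python
# Illustrative sketch of the seat-granting rule described above: grant the
# requested seat if free, otherwise fall back to any vacant seat, otherwise
# reject the player's entrance into the room.
def request_seat(room_seats, player, requested):
    # room_seats: dict mapping seat number -> occupying player, or None if vacant
    if room_seats.get(requested) is None:
        room_seats[requested] = player
        return requested
    for seat in sorted(room_seats):
        if room_seats[seat] is None:
            room_seats[seat] = player      # server picks another vacant seat
            return seat
    return None                            # room full: entrance rejected
```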


Getting Started with Blender’s Particle System - A Sequel

Packt
19 Jun 2010
5 min read
Creating Fire

Starting from the same setup we had for creating the smoke, making fire is almost the same process, except for a few changes: the halo shader settings and the force field strength. Let's go ahead and change the halo shader: we change the color, hardness, and Add values, and disable the texture option. Then, we change the Force Field from Texture to Force with a Strength of -6.7.

Halo Settings for Fire

Force Field Settings

Fire Test Render

Furthermore, we can achieve even more believable results when we plug these image renders into our compositor for some contrast boosting and other cool 2D effects.

Creating Bubbles

Let's start a new Blender file, delete the default Cube, and replace it with a Plane primitive. Then let's position the camera such that our plane is just below our view.

Preparing the View

Next, let's add a new particle system to the Plane and name it "Bubble". Check the screenshots below for the settings.

Bubble Cache Settings

Bubble Emission Settings

Bubble Velocity Settings

Bubble Physics Settings

Bubble Display Settings

Bubble Field Weights Settings

Now that we've got those settings in (remember to play around, though, because your settings might be way better than mine), let's add a UV Sphere with the default divisions to our scene and name it "Bubble". Then place it somewhere the camera view won't see.

Adding, Moving, and Renaming the UV Sphere

What we'll do next is "instance" the UV Sphere ("Bubble") we just added onto the emitter plane, so that it obeys the particle settings we set a while back. To do this, select the emitter plane and edit the Render and Display settings under Particle Settings (as seen below).

Emitter Render and Display Settings

Now if we play the animation in our 3D Viewport, you'll notice that the UV Sphere is instanced to where the particle points were before, replacing them with a mesh object. Often, the instanced Bubble object will look small in our view; if this happens, simply scale the Bubble object and the change will propagate accordingly through our particle system.

Instanced Bubble Objects

And that's about it! Coupled with some nice shaders and compositing effects, you can definitely achieve impressive and seamless results.

Bubbles with Sample Shaders and Image Background

Bubble Particles Composited Over a Photograph

Bubbles and Butterflies

Creating Rockslides

Similar in concept to creating bubbles via particle systems, let's derive the steps and create something different. This time, we'll take advantage of Blender's physics systems to automate natural motion and collision interaction. We won't be using the Blender Game Engine for this (which could do almost the same thing); instead we're still going to use the particle system present in Blender.

As with the other topics, we'll start by refreshing Blender's view and starting a new session. Delete the default cube, add a plane mesh or a grid, and start modeling a mountain-like terrain. This will be our slope, from which our rock particles will fall and slide later on. You can use whichever technique you have at your disposal. Fast-forwarding a bit, here's roughly what we should have:

Terrain Model for Rock Sliding

The next step is to create the actual rocks that are going to be falling and sliding on our terrain mesh. It's best to start with an Icosphere and model from there.
Be sure to move the models out of the camera's view, since we don't want to see the original meshes, only the instances that are going to be generated. Model five variations of the rocks and create a group for them named "RockGroup".

Rock Group

Add an emitter plane across the top of the mountain terrain; this will be our particle rock emitter.

Rock Particle Emitter

Next, create a particle system on the emitter mesh and call it "RockSystem". This time, we'll use the default gravity settings to simulate falling rock. Check the screenshots below for the particle setup.

Additionally, we must set the terrain mesh as a collision object so that the particles react to it whenever they collide. Play around with the settings until you're satisfied with the behavior of your particles. Press ALT+A or click the play button in the Timeline Window to preview the animation. (For a scripted version of this setup, see the sketch at the end of this section.)

Setting Terrain as Collision

Single Frame from the Animation

Single Frame Rendered
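For those who prefer driving the setup from a script rather than the UI, here is a hedged sketch assuming the Blender 2.5-series bpy API (property names such as dupli_group changed in later releases); "Terrain" and "Emitter" are assumed names for the meshes modeled above:

```python
# Hedged sketch of the rockslide setup in Python, assuming the Blender
# 2.5-series bpy API. "Terrain" and "Emitter" are illustrative object names.
import bpy

terrain = bpy.data.objects["Terrain"]
emitter = bpy.data.objects["Emitter"]

# Make the terrain a collision object so the falling rocks react to it.
terrain.modifiers.new(name="Collision", type='COLLISION')

# Add the particle system and instance the rock group on the emitted points.
emitter.modifiers.new(name="RockSystem", type='PARTICLE_SYSTEM')
settings = emitter.particle_systems[-1].settings
settings.count = 100                  # keep the count low while testing
settings.render_type = 'GROUP'        # draw each particle as a group instance
settings.dupli_group = bpy.data.groups["RockGroup"]
```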


Getting Started with Blender’s Particle System

Packt
19 Jun 2010
10 min read
We’ll cover a general introduction to Blender’s particle system: how to use it, when to use it, how useful it is on both small- and large-scale projects, and an overview of its implications. In this article, I’ll also discuss how to use the particle system to instance objects. These are just a very few examples of what can really be achieved with the particle system; there’s a vast array of possibilities out there waiting to be discovered. But I’m hoping that with these examples and definitions, you’ll find your way through the wonderful world of visual effects with the particle system.

If you remember watching Blender Foundation’s Big Buck Bunny, it might not come as a surprise that almost the entire film is filled with particle goodness; thanks to that, great improvements have been made to Blender’s particle system. Unlike my previous articles, this one uses Blender 2.5, the latest version of Blender, which is currently in Alpha stage but fully workable. In this article, however, I won’t be telling you what each button does, since that would take another article in itself and deviate from an introductory approach. If you want more in-depth information about the whole particle system, you can check the official Blender documentation or the Blender wiki. There have been great improvements to the particle system, and now is the right time to start trying it out. So what are you waiting for? Hop on and ride with me on this journey! You can watch the video here.

Prerequisites

Before we begin with this article, it’s vital to first determine our requirements before proceeding to the actual application. We need the following:

- The latest Blender 2.5 (you can grab a copy at http://www.blender.org or http://www.graphicall.org)
- Adequate hard disk space
- A decent processor and graphics card

What is a Particle System and where can we find it in Blender?

A particle system is a technique in computer graphics used to simulate visual effects that would otherwise be very difficult and cumbersome, if not impossible, to achieve with traditional 3D techniques. These effects include crowd simulation, hair, fur, grass, fire, smoke, fluids, dust, snowflakes, and more. Particle systems are calculated inside your simulation program in several ways, one of the most common being the Newtonian physics calculation, which applies forces like wind, magnetism, friction, and gravity to generate the particles’ motion in a controllable environment. Properties or parameters of particle systems include mass, speed, velocity, size, life, color, transparency, softness, damping, and many more.

Particle behavior over a certain span of time is saved and written to your disk. This process is called caching, and it enables you to view and edit the particles’ behavior within the set timeframe. When you’re satisfied with the way your particles act in your simulation space, you can permanently write these settings to disk, which is called baking. Be aware that the longer your particle system’s life and the greater the particle count, the more hard disk space it uses; for testing purposes, it is best to keep the particle count low, then progressively increase it as needed.

Particle mass refers to the natural weight (in a gravitational field) of a single particle object/point: a higher mass means a heavier particle and a lower mass a lighter one.
A heavy particle is more resistant to external forces such as wind but more reactive to gravitational force, while a lighter particle behaves the opposite way. Particle speed/velocity refers to how fast or slowly the particle points are thrown or emitted from their source within a given time. They can be displaced and controlled in the x, y, or z directions. A higher velocity creates fast-shooting particles (as seen in muzzle flares/flashes), and a lower one creates slower particle shots.

Size is one of the most important settings when using a particle system to instance objects, since it controls the size of the objects on the emitting plane better than manually resizing the original object. With this option, you can have random sizes, which gives the scene a more believable and natural look and, of course, an easier setup compared to creating numerous objects by hand.

Particle life refers to the lifespan of the particles: how long they exist until they disappear and die from the system. In some cases a long particle life is useful, but it can slow down your machine. Imagine emitting one million particles within 5 seconds where only half of them are relevant within the first few seconds: that could mean 500,000 cached particles are a sheer waste, rendering your machine slower. Shorter particle lives are sometimes useful too, especially when you’re creating small-scale smoke and fire. It all depends on the way you set up your scenes; play around and you’ll find the settings that best suit your needs.

Note that in Blender, only Mesh objects can be particle emitters (which is simply rational for this purpose). You can modify the size and shape of the mesh object, and the particle system reacts accordingly. The direction in which particles are emitted is dictated by the mesh normals, which I will discuss along the way. In Blender 2.5, we can find the particle system by selecting any mesh object and clicking the Particles button in the Properties Editor, as seen in the screenshot. Later, we’ll add and name particle systems and go over their settings more closely.

Blender 2.5’s Particle System

What are the types of Blender particle system?

Currently, there are only two types of particle systems in Blender 2.5, namely Emitter and Hair. Though few in number, these two have already proved to be very handy tools for creating robust simulations.

Particle System Types

Let’s begin by learning what an Emitter type is. Aside from the fact that the term "emitter" is used to describe the source of the particles, the Emitter type is a distinct thing of its own. The Emitter, being the default particle type, is the most commonly used particle system type. With it you can create dust, smoke, fire, fluids, snowflakes, and the like. Inside Blender, make the default cube a particle emitter with the type set to Emitter. Leave the default settings as they are and press ALT+A in the 3D Viewport to play back the animation and observe the way the particles act in your observable space, that is, the 3D space your object is in. During playback, the particle system is already caching the positions of the particle points in your space and writing them to disk. Naturally, this is how the Emitter type acts on our space: a slight pulse from the emitter, then the particles drop down under the effect of gravity.
That pulse comes from the Normal value, which determines how strongly the points are spurted out before they are affected by external forces. There’s more to the Emitter type than there seems to be, so don’t hold back; keep changing the settings and see where they lead you. And if you can, please send me what you come up with, I’d be very pleased to see what fun things you made.

Emitter Type

Starting from the default settings we had a while back, change your current frame to 1 (if it is not set as such already), then change the type from Emitter to Hair. Instantly, you’ll notice strands come bursting out of our default mesh. This is the hair particles’ nature: whichever frame you are on, they stay the same, regardless of any explicit animation you add. Unlike the Emitter type, Hair doesn’t need to be cached to see the results; editing particle settings is an on-the-fly process. Another great advantage of the Hair type over the Emitter type is that it has a dedicated Particle Mode, which enables you to edit the strands as though you were actually touching real hair. With the Hair type, you can create systems like fur, grass, static particle instances/grouping, and more.

Hair Type

Basic and Practical Uses of the Particle System

Now let’s get down to real business. From here on, I’ll approach the steps in a practical, application-oriented manner. First, let’s check some of the default settings and see where they lead us. Fire up a fresh Blender session by opening Blender or by pressing CTRL+N (if you already have it open from our steps a while ago).

Fresh Blender 2.5 Session

Delete the default cube and add a Plane object.

Adding a Plane Primitive

Position the camera’s view such that the plane is just at the top of the viewing range; this will give us more space to see the particles falling.

Adjusting the View

Select the Plane mesh and, in the Particle Buttons window, add a new particle system and leave the default values as they are. In the 3D Viewport, press ALT+A to play and pause the animation, or use the play buttons in the Timeline Window. Pick a frame you’re satisfied with; any will do right now.

New Particle System

Next, add a new material to the Plane mesh and the particle system. Activate the Material Buttons, add a new material, then change the material type from Surface to Halo. Halo is a special material type for particles; it gives each point a unique shading system compared to the Surface shaders. With halos, you can control the size of the particles, their transparency, color, brightness, hardness, textures, flares, and other interesting effects. With just the default values, let’s render our scene and see what we get.

Halo Material

Halo Rendered

Right now, this doesn’t look too pleasing, but it will once we get the combinations right. Next, let’s try activating Rings under the halo menu, then Lines, Star, and finally a combination of the three. Below are rendered versions of each and their combination.

Additional Options for Halo

Halo with Rings

Halo with Lines

Halo with Star

Halo with Rings, Lines, and Star

Just by altering the halo settings alone, we can achieve particle effects like pollen dust, as seen in the image below, where the particle effect was composited over a photo. (A scripted version of this halo setup is sketched below.)

Particle Pollen Dust
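If you would rather script this setup than click through the UI, here is a hedged sketch assuming the Blender 2.5-series bpy API (halo option names such as use_ring shifted between builds); "Plane" is the emitter object added above:

```python
# Hedged sketch, assuming the Blender 2.5-series bpy API: the plane-plus-halo
# setup described above, driven from Python. "Plane" is an assumed object name.
import bpy

plane = bpy.data.objects["Plane"]
plane.modifiers.new(name="ParticleSystem", type='PARTICLE_SYSTEM')

mat = bpy.data.materials.new("HaloMat")
mat.type = 'HALO'                # halo shading instead of a surface shader
mat.halo.use_ring = True         # the Rings option tried in the renders above
mat.halo.use_lines = True        # Lines
mat.halo.use_star = True         # Star
plane.data.materials.append(mat)
```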


Blender 2.49 Scripting: Impression using Different Mesh on Each Frame of Object

Packt
07 May 2010
7 min read
(Read more interesting articles on Blender 2.49 Scripting here.)

Revisiting mesh: making an impression

The following illustration gives some impression of what is possible. The tracks are created by animating a rolling car tire on a subdivided plane:

In what follows, we will refer to the object whose mesh is being deformed as the source, and the object or objects doing the deforming as targets. In a sense, this is much like a constraint, and we might have implemented these deformations as pyconstraints. That wouldn't be feasible, however, because constraints get evaluated each time the source or targets move, which would grind the user interface to a halt: calculating the intersections and the resulting mesh deformation is computationally intensive. Therefore, we choose an approach where we calculate and cache the results each time the frame is changed.

Our script will have to serve several functions. It must:

- Calculate and cache the deformations on each frame change
- Change vertex coordinates when cached information is present

And when run standalone, the script should:

- Save or restore the original mesh
- Prompt the user for targets
- Associate itself as a script link with the source object
- Possibly remove itself as a script link

An important design consideration is how we will store or cache the original mesh and the intermediate, deformed meshes. Because we will not change the topology of the mesh (that is, the way vertices are connected to each other), only the vertex coordinates, it is sufficient to store just those coordinates. That leaves the question of where to store this information. If we do not want to write our own persistent storage solution, we have two options:

- Use Blender's registry
- Associate the data with the source object as a property

Blender's registry is easy to use, but we must have some way of associating the data with an object, because the user might want to associate more than one object with an impression calculation. We could use the name of the object as a key, but if the user changed that name, we would lose the link to the stored information while the script link functionality would still be there. This would leave the user responsible for removing the stored data whenever the object is renamed. Associating the data as a property would not suffer from renaming, and the data would be cleared when the object is deleted, but the types of data that may be stored in a property are limited to an integer, a floating point value, or a string. There are ways to convert arbitrary data to strings with Python's standard pickle module but, unfortunately, this scenario is thwarted by two problems:

- Vertex coordinates in Blender are Vector instances, and these do not support the pickle protocol
- String properties are limited to 127 characters, far too small to store even a single frame of vertex coordinates for a moderately sized mesh

Despite the drawbacks of using the registry, we will use it to devise two functions: one to store vertex coordinates for a given frame number, and one to retrieve that data and apply it to the vertices of the mesh. First, we define a utility function ckey() that returns a key to use with the registry functions, based on the name of the object whose mesh data we want to cache (download the full code from here):

```python
def ckey(ob):
    return meshcache + ob.name
```

Not all registries are the same: do not confuse Blender's registry with the Windows registry.
Both serve the similar purpose of providing persistent storage for all sorts of data, but they are distinct entities. The actual data for Blender registry items written to disk resides in .blender/scripts/bpydata/config/ by default; this location may be altered by setting the datadir property with Blender.Set().

Our storemesh() function takes an object and a frame number as arguments. Its first action is to extract just the vertex coordinates from the mesh data associated with the object. Next, it retrieves any data stored in Blender's registry for the object we are dealing with; we pass the extra True parameter to indicate that if there is no data present in memory, GetKey() should check for it on disk. If there is no data stored for our object whatsoever, GetKey() returns None, in which case we initialize our cache to an empty dictionary. Subsequently, we store our mesh coordinates in this dictionary, indexed by frame number (highlighted in the next code snippet). We convert the integer frame number to a string to be used as the actual key, because Blender's SetKey() function assumes all keys are strings when saving registry data to disk and raises an exception if it encounters an integer. The final line calls SetKey() again with an extra True argument to indicate that we want the data to be stored to disk as well.

```python
def storemesh(ob, frame):
    coords = [(v.co.x, v.co.y, v.co.z) for v in ob.getData().verts]
    d = Blender.Registry.GetKey(ckey(ob), True)
    if d == None:
        d = {}
    d[str(frame)] = coords
    Blender.Registry.SetKey(ckey(ob), d, True)
```

The retrievemesh() function takes an object and a frame number as arguments. If it finds cached data for the given object and frame, it assigns the stored vertex coordinates to the vertices in the mesh. We first define two new exceptions to indicate specific error conditions retrievemesh() may encounter:

```python
class NoSuchProperty(RuntimeError): pass
class NoFrameCached(RuntimeError): pass
```

retrievemesh() raises the NoSuchProperty exception if the object has no associated cached mesh data, and NoFrameCached if data is present but not for the indicated frame. The highlighted line in the next snippet deserves some attention: we fetch the associated mesh data of the object with mesh=True. This yields a wrapped mesh, not a copy, so any vertex data we access or alter refers to the actual data. We also encounter Python's built-in zip() function, which takes two lists and returns a list of two-element tuples, one element from each list; it effectively lets us traverse two lists in parallel. In our case, these lists are a list of vertices and a list of coordinates, and we simply convert the coordinates to vectors and assign them to the co attribute of each vertex:

```python
def retrievemesh(ob, frame):
    d = Blender.Registry.GetKey(ckey(ob), True)
    if d == None:
        raise NoSuchProperty("no property %s for object %s"
                             % (meshcache, ob.name))
    try:
        coords = d[str(frame)]
    except KeyError:
        raise NoFrameCached("frame %d not cached on object %s"
                            % (frame, ob.name))
    for v, c in zip(ob.getData(mesh=True).verts, coords):
        v.co = Blender.Mathutils.Vector(c)
```

To complete our set of cache functions, we define a function clearcache() that attempts to remove the registry data associated with our object. The try ... except clause ensures that the absence of stored data is silently ignored:

```python
def clearcache(ob):
    try:
        Blender.Registry.RemoveKey(ckey(ob))
    except:
        pass
```
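The excerpt does not show the frame-change entry point that ties these cache functions together. A plausible wiring, following the script-link conventions used elsewhere in this series (Blender.bylink, Blender.event), might look like this hedged sketch, with the actual deformation calculation elided:

```python
# Hedged sketch of the frame-change entry point; the deformation calculation
# itself is elided. The meshcache prefix is an assumption (the book's full
# script defines its own value for use by ckey()).
import Blender

meshcache = 'mc_'   # assumed key prefix used by ckey()

if Blender.bylink and Blender.event == 'FrameChanged':
    ob = Blender.link
    frame = Blender.Get('curframe')
    try:
        retrievemesh(ob, frame)            # reuse cached coordinates if present
    except (NoSuchProperty, NoFrameCached):
        # ... deform the mesh against the targets here ...
        storemesh(ob, frame)               # then cache the result for this frame
```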


Blender 2.49 Scripting: Animating the Visibility of Objects

Packt
07 May 2010
8 min read
(Read more interesting articles on Blender 2.49 Scripting here.)

Script links are scripts that may be associated with Blender objects (Meshes, Cameras, and so on, but also Scenes and World objects) and that can be set up to run automatically on the following occasions:

- Just before rendering a frame
- Just after rendering a frame
- When a frame is changed
- When an object is updated
- When the object data is updated

Scene objects may have script links associated with them that may be invoked on two additional occasions:

- On loading a .blend file
- On saving a .blend file

Space handlers are Python scripts that are invoked each time the 3D view window is redrawn or a key or mouse action is detected. Their primary use is to extend the capabilities of Blender's user interface.

Animating the visibility of objects

A recurring issue in making an animation is the wish to make an object disappear or fade away at a certain frame, either for the sake of the effect itself or to replace the object with another one for dramatic impact (such as an explosion, or a bunny rabbit changing into a ball). There are many ways to engineer these effects, and most of them are not specifically tied to script links reacting to a frame change (many can simply be keyed as well). Nevertheless, we will look at two techniques that may easily be adapted to all sorts of situations, even ones that are not easily keyed, for example, when we demand behavior of a parameter that is easy to formulate in an expression but awkward to capture in an IPO.

Fading a material

Our first example changes the diffuse color of a material. It would be just as simple to change the transparency, but changes in diffuse color are easier to see in illustrations. Our goal is to fade the diffuse color from black to white and back again, over a period of two seconds. We therefore define a function setcolor() that takes a material and changes its diffuse color (the rgbCol attribute). It assumes a frame rate of 25 frames per second, so the first line fetches the current frame number and performs a modulo operation to determine what fraction of the current second has elapsed. The highlighted line in the following code snippet determines whether we are in an odd or an even second. If we are in an even second, we ramp the diffuse color up to white, so we keep our computed fraction as-is. If we are in an odd second, we tone the diffuse color down to black, so we subtract the fraction from the maximum possible value (25). Finally, we scale our value to lie between 0 and 1 and assign it to all three color components to obtain a shade of gray:

```python
import Blender

def setcolor(mat):
    s = Blender.Get('curframe') % 25
    if int(Blender.Get('curframe') / 25.0) % 2 == 0:
        c = s
    else:
        c = 25 - s
    c /= 25.0
    mat.rgbCol = [c, c, c]

if Blender.bylink and Blender.event == 'FrameChanged':
    setcolor(Blender.link)
```

The script ends with an important check: Blender.bylink is True only if this script is called as a script handler, and in that case Blender.event holds the event type. We only want to act on frame changes, so that is what we check for here. If these conditions are satisfied, we pass Blender.link to our setcolor() function, as it holds the object our script link is associated with; in this case, that will be a Material object. (This script is available as MaterialScriptLink.py in scriptlinks.blend; download the full code from here.)

The next thing on our list is to associate the script with the object whose material we want to change.
We therefore select the object and, in the Buttons Window, select the Script panel. In the Scriptlinks tab, we enable script links and select the MaterialScriptLinks button. (If there is no MaterialScriptLinks button, the selected object has no material assigned to it; make sure it has one.) There should now be a label Select Script link visible, with a New button. Clicking on New shows a dropdown with the available script links (files in the text editor). In this case, we select MaterialScriptLink.py and we are done. We can now test our script link by changing the frame in the 3D view (with the arrow keys): the color of our object should change with the changing frame number. (If the color doesn't seem to change, check whether solid or shaded viewing is on in the 3D view.)

Changing layers

If we want to change the visibility of an object, changing the layer(s) it is assigned to is a more general and powerful technique than changing material properties. Changing the assigned layer has, for instance, the advantage that we can make the object completely invisible to lamps that are configured to illuminate only certain layers, and many aspects of an animation (for example, deflection of particles by force fields) may be limited to certain layers as well. Also, changing layers is not limited to objects with associated materials: you can just as easily change the layer of a Lamp or a Camera.

For our next example, we want to assign an object to layer 1 if the number of elapsed seconds is even, and to layer 2 if it is odd. The script to implement this is very similar to our material-changing script. The real work is done by the function setlayer(). The first line calculates the layer the object should be on in the current frame, and the next line (highlighted) assigns the list of layer indices (consisting of a single layer in this case) to the layers attribute of the object. The final two lines of the setlayer() function ensure that the layer change is actually visible in Blender:

```python
import Blender

def setlayer(ob):
    layer = 1 + int(Blender.Get('curframe') / 25.0) % 2
    ob.layers = [layer]
    ob.makeDisplayList()
    Blender.Window.RedrawAll()

if Blender.bylink and Blender.event == 'FrameChanged':
    setlayer(Blender.link)
```

As in our previous script, the final lines check whether we are called as a script link on a frame-change event and, if so, pass the associated object to the setlayer() function. (The script is available as OddEvenScriptlink.py in scriptlinks.blend.)

All that is left to do is to assign the script as a script link to a selected object. Again, this is accomplished in the Buttons Window | Script panel by clicking on Enable Script Links in the Scriptlinks tab (it might still be enabled from our previous example; it is a global choice, that is, it is enabled or disabled for all objects). This time, we select the object script links instead of the material script links and click on New to select OddEvenScriptlink.py from the dropdown.

Countdown: animating a timer with script links

One of the possibilities of a script link that acts on frame changes is the ability to modify the actual mesh, either by changing the vertices of a Mesh object or by associating a completely different mesh with a Blender object. This is not possible with IPOs, as these are limited to shape keys that interpolate between predefined shapes with the same mesh topology (the same number of vertices connected in the same way).
The same is true for curves and text objects. One application of this technique is a counter object that displays the number of seconds since the start of the animation. This is accomplished by changing the text of a Text3d object via its setText() method. The setcounter() function in the following code does exactly that, together with the necessary actions to update Blender's display. (The script is available as CounterScriptLink.py in scriptlinks.blend.)

```python
import Blender

objectname = 'Counter'
scriptname = 'CounterScriptLink.py'

def setcounter(counterob):
    seconds = int(Blender.Get('curframe') / 25.0) + 1
    counterob.getData().setText(str(seconds))
    counterob.makeDisplayList()
    Blender.Window.RedrawAll()

if Blender.bylink:
    setcounter(Blender.link)
else:
    countertxt = Blender.Text3d.New(objectname)
    scn = Blender.Scene.GetCurrent()
    counterob = scn.objects.new(countertxt)
    setcounter(counterob)
    counterob.clearScriptLinks([scriptname])
    counterob.addScriptLink(scriptname, 'FrameChanged')
```

This script may be associated as a script link with any Text3d object, as shown before. However, if run with Alt + P from the text editor, it creates a new Text3d object and associates itself with this object as a script link. The highlighted lines show how we check for this, just as in the previous scripts, but in this case we also take action when not called as a script link (the else clause). The final two highlighted lines show how we associate the script with the newly created object. First, we remove (clear) any script links with the same name that might have been associated earlier; this prevents associating the same script link more than once, which is valid but hardly useful. Next, we add the script as a script link that will be called on frame changes. The screenshot shows the 3D view with a frame from the animation, together with the Buttons window (top left) listing the association of the script link with the object. Note that although it is possible to associate a script link with a Blender object from within a Python script, script links must be enabled manually (in the ScriptLinks tab) for them to actually run! There is no functionality in the Blender Python API to do this from a script.
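As a small, hedged variant of the setcounter() function above (same 25 fps assumption; not from the book), the counter could just as easily display the elapsed time as minutes and seconds:

```python
# Hedged variant of setcounter(): format elapsed time as M:SS instead of
# raw seconds, under the same 25 fps assumption as the article's script.
import Blender

def setcounter_mmss(counterob):
    total = int(Blender.Get('curframe') / 25.0)
    counterob.getData().setText('%d:%02d' % (total // 60, total % 60))
    counterob.makeDisplayList()
    Blender.Window.RedrawAll()
```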

Blender 2.49 Scripting: Shape Keys, IPOs, and Poses

Packt
29 Apr 2010
6 min read
A touchy subject: defining an IPO from scratch

Many paths of motion are hard to model by hand, for example, when we want an object to follow a precise mathematical curve, or when we want to coordinate the movement of multiple objects in a way not easily accomplished by copying IPOs or defining IPO drivers.

Imagine the following scenario: we want to smoothly interchange the positions of two objects over some period of time, without the objects passing through each other in the middle and without them even touching. This might be doable by manually setting keys, but it would be fairly cumbersome, especially if we wanted to repeat it for several sets of objects. The script we will devise takes care of all of those details and can be applied to any two objects.

Code outline: orbit.py

The orbit.py script will take the following steps:

1. Determine the halfway point between the selected objects.
2. Determine the extent of the selected objects.
3. Define the IPO for object one.
4. Define the IPO for object two.

Determining the halfway point between the selected objects is easy enough: we just take the average location of both objects. Determining the extent of the selected objects is a little more challenging, though. An object may have an irregular shape, and determining the shortest distance for any rotation of the objects along their paths is difficult to calculate. Fortunately, we can make a reasonable approximation, as each object has an associated bounding box: a rectangular box that just encapsulates all of the points of an object. If we take half the body diagonal as the extent of an object, it is easy to see that this distance may exaggerate how close we can get to another object without touching, depending on the exact form of the object, but it ensures that we never get too close. The bounding box is readily available from an object's getBoundBox() method as a list of eight vectors, one for each corner of the bounding box. The concept is illustrated in the following figure, where the bounding boxes of two spheres are shown:

The length of the body diagonal of a bounding box can be calculated by determining the minimum and maximum values for each x, y, and z coordinate. The components of the vector representing this body diagonal are the differences between these maximums and minimums; the length of the diagonal is then the square root of the sum of the squares of the x, y, and z components. The diagonal() function is a rather terse implementation that uses many of Python's built-in functions. It takes a list of vectors as an argument and iterates over each component (highlighted; the x, y, and z components of a Blender Vector may be accessed as indices 0, 1, and 2 respectively):

```python
from math import sqrt  # imported at the top of orbit.py

def diagonal(bb):
    maxco = []
    minco = []
    for i in range(3):
        maxco.append(max(b[i] for b in bb))
        minco.append(min(b[i] for b in bb))
    return sqrt(sum((a - b)**2 for a, b in zip(maxco, minco)))
```

It determines the extremes of each component with the built-in max() and min() functions. Finally, it returns the length by pairing each minimum and maximum with the zip() function. The next step is to verify that we have exactly two objects selected, and to inform the user with a pop up if this isn't the case (highlighted in the next code snippet). If we do have two objects selected, we retrieve their locations and bounding boxes.
Then we calculate the maximum distance w each object has to veer from its path as half the minimum distance between them, which is equal to a quarter of the sum of the lengths of their body diagonals:

```python
obs = Blender.Scene.GetCurrent().objects.selected

if len(obs) != 2:
    Draw.PupMenu('Please select 2 objects%t|Ok')
else:
    loc0 = obs[0].getLocation()
    loc1 = obs[1].getLocation()
    bb0 = obs[0].getBoundBox()
    bb1 = obs[1].getBoundBox()
    w = (diagonal(bb0) + diagonal(bb1)) / 4.0
```

Before we can calculate the trajectories of both objects, we first create two new, empty Object IPOs:

```python
ipo0 = Ipo.New('Object', 'ObjectIpo0')
ipo1 = Ipo.New('Object', 'ObjectIpo1')
```

We arbitrarily choose the start and end frames of our swapping operation to be 1 and 30 respectively, but the script could easily be adapted to prompt the user for these values (see the sketch at the end of this article). We iterate over each separate IPO curve of the Location IPO and create the first point (or keyframe), and thereby the actual curve, by assigning a tuple (framenumber, value) to the curve (highlighted lines in the next snippet). Subsequent points may be added to these curves by indexing them by frame number when assigning a value, as is done for frame 30 in the following code:

```python
for i, icu in enumerate((Ipo.OB_LOCX, Ipo.OB_LOCY, Ipo.OB_LOCZ)):
    ipo0[icu] = (1, loc0[i])
    ipo0[icu][30] = loc1[i]
    ipo1[icu] = (1, loc1[i])
    ipo1[icu][30] = loc0[i]
    ipo0[icu].interpolation = IpoCurve.InterpTypes.BEZIER
    ipo1[icu].interpolation = IpoCurve.InterpTypes.BEZIER
```

Note that the location of the first object, keyframed at frame 1, is its current location, while the location keyframed at frame 30 is the location of the second object; for the other object it is just the other way around. We set the interpolation modes of these curves to Bezier to get a smooth motion. We now have two IPO curves that interchange the locations of the two objects, but as calculated they would move right through each other. Our next step, therefore, is to add a key at frame 15 with an adjusted z component. Earlier, we calculated w to hold half the distance needed to keep out of each other's way; here we add this distance to the z component of the halfway point for the first object, and subtract it for the other:

```python
mid_z = (loc0[2] + loc1[2]) / 2.0
ipo0[Ipo.OB_LOCZ][15] = mid_z + w
ipo1[Ipo.OB_LOCZ][15] = mid_z - w
```

Finally, we add the new IPOs to our objects:

```python
obs[0].setIpo(ipo0)
obs[1].setIpo(ipo1)
```

The full code is available as swap2.py in the file orbit.blend (download the full code from here). The resulting paths of the two objects are sketched in the next screenshot:
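As for the adaptation mentioned above, a hedged sketch of prompting for the start and end frames instead of hard-coding 1 and 30 might use Blender 2.49's Draw.PupIntInput (assumed signature: text, default, min, max); this is not part of the book's script:

```python
# Hedged sketch: ask the user for the start/end frames of the swap.
# Draw.PupIntInput(text, default, min, max) is assumed to return the
# entered integer.
from Blender import Draw

startframe = Draw.PupIntInput('Start frame:', 1, 1, 300)
endframe = Draw.PupIntInput('End frame:', 30, startframe + 1, 301)
```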


Video Editing in Blender using Video Sequence Editor: Part 1

Packt
19 Feb 2010
6 min read
Blender, the open-source 3D creation suite, has reached great heights lately and, with the astounding amount of work and dedication being put into its current development, there's no doubt it is already state-of-the-art software. From the time I was introduced to Blender, I was amazed at the number of features it has and the myriad possibilities they open up. Features like modeling, shading, texturing, and rendering are a given, but what's even more impressive about Blender are the "side features" that come along with it. One great example is the Video Sequence Editor, popularly called the VSE in the Blender universe. From the name alone, you can already figure out what it is used for, yup: video editing! Pretty cool, eh? With the right amount of knowledge, strategy, and workarounds, there's much more leeway than what it is typically used for. I'll share with you some tips and guidelines on how you can start using Blender itself for editing your videos, without jumping from application to application, which can become really troublesome at times. Without further ado, let's get on with it!

Requirements

Before we begin, there are a couple of things we need to have:

- Blender 2.49b
- Clips/Videos/Movies
- Skill level: Beginner to Intermediate
- A little bit of patience
- Lots of coffee to keep you awake (and a comfortable couch?)

Post-processing

This might sound odd, coming before anything else in the article. The reason is that we don't want to mess up a lot of things and create more trouble later (you'll see this shortly during the process). If you are already satisfied with the way your videos look and feel, you can skip this step and move on to the main one. In this part we will deal with enhancing your videos, making them look better than they were originally shot; this part can also dictate the mood of your videos (depending on the way you shot them). Just as we post-process and add more feel to our still images, the same goes for our videos: we use the Composite Nodes to achieve this, and later bring these processed videos into the sequence editor for final editing.

To begin, open up Blender; depending on your default layout, you will see something like this (see screenshot below):

Blender Window

Let's change our current 3D viewport into the Node Editor window. Under the Scene button (F10), set the render size to 100%, enable Do Composite under the Anim panel, set the size/resolution and the video format under the Format panel, and lastly set your output path under the Output panel in the same window. That was quite a lengthy instruction to pack into one paragraph, so check out the screenshot below for a clearer picture.

Settings

Now that we're in the Node Editor window, by default we see nothing but the grid and a new header (which gives us a clue as to what to do next). Since we're not dealing with Material Nodes or Texture Nodes, we can safely ignore some other parts of the node editor for now; instead, we'll use the Composite Nodes, represented by a face icon on the header. Click and enable that button. You'll notice that nothing has happened yet; that's because we still have to indicate to Blender that it's actually going to use the nodes.
So, go ahead and click Use Nodes, and you'll notice something come up in your node editor window: the Render Layer node and the Composite node, respectively. The render layer node is an input node that takes data from our current Blender scene, as specified through the render layer options under the Scene window. It is useful for general-purpose node compositing directly from our 3D scene, or if you want to layer your renders into passes. Since we are not doing that now, we won't need this node: select the Render Layer node by right-clicking it and press X or Delete on your keyboard, and, without any popup shown, the render layer node is gone from our composite window.

The next step is to load our videos into the node compositor and actually begin the process. To load the videos, we use the Image input node to call our videos from wherever they are stored. To do this, press spacebar while your mouse cursor is over the node editor window and choose Add > Input > Image. With our Image node loaded in the compositor, click the Load New button and browse to the directory where the file is stored.

Loading the Video via the Image Input Node

After successfully loading the video, you'll notice the Image input node change its appearance, now having a thumbnail preview and some buttons and inputs we can experiment with. The most important setting to specify here is the number of frames our video has; otherwise Blender wouldn't know which range to composite.

Specifying the Number of Frames in the Image Node

This can be a difficult task, and I've had much trouble with it before, as I wanted an exact frame count precisely matching my original video, without missing a single glitch. There are a couple of ways to do this. If you can calculate the number of frames from the time your video runs, that's fair enough (see the sketch below); or you could open a separate application to see how many frames it has; but if you're like me, and you like to keep it simple and within Blender's grasp, there's still hope. Right now, we're off to a tiny part of the main course, Blender's VSE, but this time we'll only use it to find out how many frames our video has. Don't worry, it's the main dish, and we'll get to that shortly.
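For the duration-based route, the arithmetic is simply duration times frame rate. Here is a small helper, not from the article, assuming a constant-frame-rate clip:

```python
# Hedged helper: frame count from clip duration and frame rate,
# assuming a constant-frame-rate video.
def frame_count(duration_seconds, fps=25.0):
    return int(round(duration_seconds * fps))

print(frame_count(10))  # a 10-second PAL clip at 25 fps -> 250 frames
```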
Video Editing in Blender using Video Sequence Editor: Part 2

Video Sequence Editor (The Main Dish)

Since we've always been using Blender's default screen for modeling, setting up materials, or node compositing, let's deviate for a moment and make use of Blender's screen features to jump from one preset to another — a very useful tool in my opinion. Moving your attention to Blender's main menu (located at the very top of the window), you'll notice a drop-down menu with the prefix SR:. This is Blender's screen system, which comes in handy anytime you want to switch quickly to a preset or customized view. You can click on the box itself to edit the name of the current screen, or you can use the drop-down button to either add a new screen or choose from the selection. Right now there's no need to create a new, customized screen, since the presets already serve us well. Clicking the drop-down button, you'll be presented with different screen names, of which we will select the fourth one, labeled 4-Sequence. Instantly after confirming your selection, your Blender screen will be warped into yet another spaceship-like interface. Don't worry, we'll get used to it soon enough.

Changing Screen Layouts

In the upper left-hand corner we have the IPO window, which is used to add refined, custom controls over the behavior of our strips/inputs; to its right is the preview window; in the middle is the VSE editor; below it is the Timeline; and at the bottom is the buttons window.

Sequence Screen Layout

For this part of the article, we'll only be delving into some of these parts, namely the Preview window, the VSE editor, the Timeline, and the Buttons window. I could have just said "everything except the IPO window", though.

Before we actually add our videos, there are a few things to do. First, hover your mouse pointer over the Timeline window and press SHIFT+T to bring up the Time Value option, then choose Seconds. This will help us later on to read our video lengths in seconds rather than frames, which will become clearer as we go on. Next, click the Sequencer button under the Scene (F10) menu. This will let us see options for our video later on.

Timeline and Button Options

The next thing to do is to add our videos (at long last) into the VSE editor and finally start editing. To do this, move your mouse pointer over the VSE editor and press spacebar > Movie or Movie + Audio (HD), or click Add > Movie or Movie + Audio (HD) in the menu header. This leads you into Blender's file browser, where you can locate your videos. If you want to load several videos at once, you can right-click, hold, and drag over them to select them, then click the load button. They will be automatically concatenated, each having its own individual video strip. Right now, though, I only want to load them one at a time so we can focus on editing them separately and not worry about other strips floating around and cluttering our view.

Once the videos are loaded into the VSE window and selected, you'll notice the Sequencer buttons window become populated with options. Normally, we will see four tabs: Edit, Input, Filter, and Proxy. Let's leave the default settings for now, as they work great as they are (but you can always play with the buttons and settings to see how they work — experiment!).
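For the curious, the Add > Movie + Audio step above can also be automated. A minimal sketch using the modern bpy API (strip names and the file path are hypothetical; 2.49's own scripting API differed):

    import bpy

    scene = bpy.context.scene
    seq = scene.sequence_editor_create()   # make sure a sequencer exists

    # One movie strip plus its sound, from frame 1 on channels 1 and 2.
    movie = seq.sequences.new_movie("clip01", "//footage/shot01.mov",
                                    channel=1, frame_start=1)
    sound = seq.sequences.new_sound("clip01", "//footage/shot01.mov",
                                    channel=2, frame_start=1)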
Basically, I will introduce you to some of Blender's video editing capabilities, such as cutting, duplicating, transition effects, artistic glows, and speed controls. Discussing the full extent of this editor's features would take a whole new article or two, so for now I'll only lead you through the basic concepts and let you, your imagination, and a lot of experimentation take you where you want to go.

First off, cutting video strips. Often you'll want to delete parts of your video or move a section of it to a certain time in your edit; this is where cutting comes in handy. In Blender's VSE world, you cut a video by scrubbing to the frame where you want the cut to start and pressing K for a hard cut or SHIFT+K for a soft cut. Once the operation succeeds, you'll notice your strip change appearance as a result of the cut, and depending on the number of cuts you made, that's how many sub-strips you'll have, each of which can individually be moved (G) or deleted (X or the Delete key).

Cutting, Moving, and Deleting Strips

Once you have made the necessary cuts, you can arrange your strips by moving them next to each other or with gaps, depending on what you want to achieve. Additionally, you can scrub through your videos by clicking and dragging your mouse over the VSE window (with the green vertical line as your current frame marker), or by clicking and dragging in the Timeline window. As you scrub, you'll see a live preview in the Preview window in the upper right-hand corner of your screen.

Another cool trick in the VSE is adding markers to label parts of your animation or videos. Markers are a way of identifying events in your timeline as they happen, so you won't lose track of what occurred at that frame in time. You can add a marker at the current frame marker (the vertical green line) by pressing CTRL+ALT+M or by clicking Marker > Add Marker in the menu, and add a label to it by pressing CTRL+M or by clicking Marker > (Re)Name Marker. These markers also appear in your Timeline window.

Adding and (Re)Naming Markers

Next is duplicating strips. Sometimes in your video editing endeavors you'll want to repeat parts of the video for emphasis or even just for artistic purposes. Luckily, duplicating strips in Blender's VSE is as easy as selecting the strip(s) you wish to duplicate and pressing SHIFT+D, or clicking Strip > Duplicate in the menu.

Duplicating Strips

Now we discuss transition effects, one of the nicest things video editing has to offer. In this part, we'll add some simple transition effects from within the VSE to add subtlety and variation to our strips. Like any other video editing application, it requires at least two video or image strips to create the transition. In Blender, we select two strips by right-clicking the first, then shift-right-clicking the second. This tells Blender from which video to which the transition will occur: say you selected video A first and then shift-selected video B; if we now add the transition effect, it will happen from video A to video B and not vice versa. The simplest transition we can add is the Gamma Cross, which simply fades the first strip into the second. Do this by selecting two strips and pressing spacebar, then clicking Gamma Cross, or by clicking Add > Effect > Gamma Cross.
With its default settings, when you now scrub through your strips or use the timeline, you'll notice that the overlap between the two strips is a blend of both. Moving either of the video strips automatically updates the length of the Gamma Cross between them.
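Scripted, the Gamma Cross and a labeled marker look roughly like this in the modern bpy API (a sketch — the strip names are hypothetical, and the effect spans the frames where the two strips overlap):

    import bpy

    scene = bpy.context.scene
    seq = scene.sequence_editor
    a = seq.sequences_all["clip01"]   # hypothetical strip names
    b = seq.sequences_all["clip02"]

    # Gamma Cross fading strip a into strip b over their overlap.
    seq.sequences.new_effect("fade_ab", 'GAMMA_CROSS', channel=3,
                             frame_start=int(b.frame_final_start),
                             frame_end=int(a.frame_final_end),
                             seq1=a, seq2=b)

    # A named marker at the cut point, visible in the Timeline too.
    scene.timeline_markers.new("cut_to_b", frame=int(b.frame_final_start))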
Modeling a Steampunk Spacecraft using Blender 3D 2.49

Steampunk concept

Before we actually begin working on the model, let's make clear the difference between a regular spacecraft and a steampunk spacecraft. Although both are based on science fiction, the steampunk spacecraft has a few important characteristics that set it apart from a regular hi-tech spacecraft.

Imagine a world where the advances of science and machinery were actually developed centuries ago. For example, imagine medieval knights using hi-tech armor and destroying castles with rockets. It may sound strange, since rockets have only been used by armies within the last century. What would a fighter jet look like in the Middle Ages? It would be a mix of steel, glass, and wood. The steampunk environment is made of these kinds of things: modern objects and vehicles produced and developed in a parallel universe where those discoveries were made long ago.

The secret of designing a good steampunk vehicle or object is to mix recent technology with the materials and methods available in past times, such as wood and bronze for a space suit. If you need some inspiration to design objects like these, watch some recent movies that use a steampunk environment to create interesting machines. But to really get to the source, I recommend reading some books by Jules Verne, who wrote about incredible environments and machines that dive deep into the ocean or travel to outer space.

The following image is an example of a steampunk weapon (Image credits: Halogen Gallery, licensed under Creative Commons):

Next is a steampunk historical character (Image credits: Sparr0, licensed under Creative Commons):

Here are a few resources to find out more about steampunk:

Steampunk at Wikipedia, with lots of resources: http://en.wikipedia.org/wiki/Steampunk
Guide to drawing and creating steampunk machinery: http://www.crabfu.com/steamtoys/diy_steampunk/
Showcase of steampunk technology: http://www.instructables.com/id/Steampunk/

Spacecraft concept

Now that we know how to design a good steampunk machine, let's discuss the concept of this spacecraft. For this project, we will design a machine that mixes in some elements of steel, but not the fancy industrial plates and welded parts; instead, our machine will have the look and feel of something built by a blacksmith. As it would be really strange to have wooden parts on a spacecraft, we will skip that material or use it only for the interior. Other aspects of the machine that will help give the impression of a steampunk spacecraft are as follows:

Riveted sheets of metal
Metal with the look of bronze
Valves and pipes

With that in mind, we can start with this concept image to create our spacecraft:

It's not a complete project, but we're off to a great start with Blender and our use of polygons to create the basis for this incredible machine.

Project workflow

This project will improve our modeling and creation skills with Blender to a great extent! To make the process more efficient, the workflow will be planned as it would be in a professional studio. This is the best way to optimize the time and quality of the project, and it will also help ensure that future projects are finished in the shortest timeframe.

The first step for any project is to find some reference images or photos for the pre-visualization stage. At this point, we should make all the important decisions about the project based only on our concept studies.
Most of the time on this type of project is spent on artistic decisions, like the framing of the camera, the type and color of materials, the shape of the object, and the environment setup. All of those decisions should be made before we open Blender and start modeling, because a simple detour from the main concept could result in a partial or total loss of work.

When all of the decisions are made, the next step is to start modeling with reference images we found on the Internet, or references we draw ourselves. The modeling stage involves the spacecraft and the related environment, which of course will be outer space. For this environment, Blender will help us design a space with nebulas, star fields, and even a blazing star.

Right after the environment is finished, we can begin working on materials and textures. As the object has a complex set of parts, and in some cases an organic topology, we will have to pay extra attention to the UV mapping process when adding textures. We'll use a few tips for working with those complex shapes and topologies to simplify the process.

What would a spacecraft be without special effects? Special effects make the project more realistic. Adding a particle system lets the spacecraft's engines run and simulates the firing of a plasma gun. With those two effects, we can give dynamism to the scene, showing some working parts of the object.

And to finish things up, there is the light setup for the scene. A light setup for a space scene is rather easy to accomplish, because we only need one strong light source and very little bounced light. The goal for this project is to end up with a great space scene. If you already know how to use Blender, get ready to put your knowledge to the test!
Learn Baking in Blender

Getting Started

Baking in Blender allows you to transfer different aspects of your rendered scene/model to a 2D planar projection, or UV map. This is primarily used for creating normal maps, but it can also be a very helpful aid in texturing, render-time optimization, and more. Before we dive into the specifics, here is a quick crash course in baking. In order to bake out the necessary maps, there are three requirements: you have at least one mesh, the UVs of that mesh have been unwrapped, and you have applied an image to the UVs of that mesh. Beyond this, it all depends on what you are doing. To bake out an object:

Select all vertices of the default cube (or any object of your choice) in Edit Mode, press U > Unwrap (smart projections) > OK
Switch your viewport to the UV/Image Editor, select all UVs with A, add a new image by going to Image > New > OK
Under the Render properties, in the Bake panel, click Bake.

If all is correct, you should now see the results of the Full Render bake in the UV/Image Editor. You should see something like this:

With the default Blender settings, you will get this result. As you can see, Blender is baking out all of the lighting details of our default cube and saving them to the UV-mapped image. These few steps are all it takes for most baking purposes — those are the basics of baking. Of course, we did not yet bother to adjust anything or to select the baking method we need, so the results are quite useless. However, if you were to adjust the lighting to fit a specific purpose, as demonstrated further in this tutorial, you would see very different results. Let us now examine each of the different kinds of bakes and how to use them.

Full Render

Baking the Full Render enables you to bake out everything you see at render time; this includes lighting, materials, textures, and ambient occlusion. When used correctly, this can be very helpful for transferring procedural materials to a 2D color map, or for baking the lighting details of textures to be used in ultra-lowpoly games that do not offer dynamic lighting. Here is an example of procedural materials that have been baked out to our cube using the default lighting setup:

Here is the same cube and same material with a basic 3-point lighting setup:

Ambient Occlusion

This baking method is very useful as an aid in the texturing process. When texturing, particularly for lowpoly models, creating realistic shading from light can be quite difficult. Baking ambient occlusion can ease this process by giving you a plain, shaded map of your object in 2D. This is best demonstrated with an image, as seen below:

Here is our same cube and same material with ambient occlusion:

As you can see, it is plain grey; this is due to our cube having no variation in its surface, and thus nothing to affect the light and shadows. Here is a modified cube with surface variation:

Due to the modified surface, I have also re-unwrapped the UVs using Smart Projections.
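As an aside, baking can also be driven from a script. This sketch uses the modern Cycles bake operator (bpy, Blender 2.8+), not the 2.49 Bake panel described above, and assumes the active object is an unwrapped mesh whose material already has an image selected as the bake target:

    import bpy

    scene = bpy.context.scene
    scene.render.engine = 'CYCLES'    # modern baking runs through Cycles

    # Assumes: active object is unwrapped with an image to bake into.
    bpy.ops.object.bake(type='AO')    # or 'COMBINED' for a full render bake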
Photo Compositing with The GIMP: Part 2

Adding Realism to the Image

At this point, our image is almost done. But we can still add more believability to it than our plain "2D object on hand" setup, right?

The first thing to consider is that photographed scenes aren't actually as clean-looking as typical CG-ish images. Just to break up this cleanliness, let's add a simple cloud noise to our heart. If that still doesn't work for you, you can go ahead and paint over some details like cracks, dirt, and so on. This simulates the wear and tear that is present everywhere we look.

To add this texture, let's first create a new transparent layer to work on and call it "texture", or something more meaningful and easier for you to remember. This will be the layer that holds the cloud texture for the heart. After adding this new layer, right-click on the image window and select Filters > Render > Clouds > Solid Noise (as seen in the screenshot below).

Creating the Texture

Again, a pop-up window will appear where you can input values for the noise; these will depend entirely on your preference. The filter will then fill up the entire layer with the cloud noise texture that we'll use as an overlay for the heart later on. Check the screenshot below for my settings.

Cloud Noise Options

You'll notice that what we see now is pure texture, which is not what we really wanted; instead, we'll use it as an overlay effect on top of our layer stack. Do this by changing the layer mode from Normal to Overlay, then adjust the opacity of the texture layer to something subtle.

Texture Overlay

However, we notice that the texture is affecting everything in our image, including the hand and the cloth, but we only want the heart to be affected. We can fix this in a couple of ways. The easiest would be to use the Eraser Tool to erase portions of the texture layer so only the part over the heart remains, but doing that adds more undo levels every time we stroke the eraser. What if we want to keep this single layer to work on, yet have the flexibility of switching between two layers (an original and a duplicate)? With this in mind, I think it's time we use layer masks for more flexibility in our layer management.

To apply our masking, let's first create a selection that excludes every part of the image other than the heart. Do this by right-clicking on the heart layer and selecting Alpha to Selection. This selects the regions of the layer where it is opaque — in this case, just the heart shape.

Creating the Heart Selection

Now, with the heart-shape selection active, let's go back and activate our texture layer, on which we'll create our layer mask (be sure that your selection is still active, or it will defeat the purpose of creating it in the first place). Right-click on the texture layer and select Add Layer Mask (see screenshot).

Creating a Layer Mask

In the pop-up window that appears, select Black (full transparency) and press Add. You'll notice that the texture's effect is gone now; that's because we filled the whole layer mask with black (which means fully masked), making everything in the layer invisible. Since we want the current heart selection to still show the layer's effect, we'll do the reverse there, filling the selection with white (#FFFFFF).
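If you ever want to script this in GIMP's Python-Fu, the texture layer and Solid Noise step translate roughly as below. Constants are spelled as in GIMP 2.10's Python-Fu (older releases call the overlay mode OVERLAY_MODE), and the noise values are just one possible taste:

    from gimpfu import *

    def add_cloud_overlay(image, opacity=40):
        # A new transparent layer, created directly in Overlay mode.
        tex = gimp.Layer(image, "texture", image.width, image.height,
                         RGBA_IMAGE, opacity, LAYER_MODE_OVERLAY)
        image.add_layer(tex, 0)
        # Filters > Render > Clouds > Solid Noise:
        # arguments are (tilable, turbulent, seed, detail, x size, y size).
        pdb.plug_in_solid_noise(image, tex, 0, 0, 42, 1, 4.0, 4.0)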
Do this by selecting the layer mask — not the layer itself — and then using the Bucket Fill Tool to fill the selection with white. Now we'll see the effect take place.

Applying the Layer Mask

We're only one step closer to finishing the compositing here (yes, finally!). If you've made it this far without getting bored out of your mind, there's one thing clearly hurting the believability of our composition: the way the two fingers seem to be blocked by the heart (which they shouldn't be). We should instead see the fingers embracing parts of the heart.

With all of our settings for the heart (highlights, shadows, and textures) done, we can now merge all of this into a single layer, so we work on one layer instead of applying the same effect over each of the others, which would eventually become a burden. To merge all of the heart layers, first turn off the visibility of the photograph layer, then right-click on any of the layers comprising the heart, choose Merge Visible Layers, and choose Expanded as Necessary. This compresses all of the heart layers into a single layer, which will be very handy for the steps that follow.

Merging Visible Layers
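The masking and merging steps have Python-Fu equivalents as well — again a sketch with GIMP 2.10 constant names (older builds use ADD_BLACK_MASK and FOREGROUND_FILL), where heart and tex stand for the layers from the steps above:

    from gimpfu import *

    def mask_texture_to_heart(image, heart, tex):
        # Alpha to Selection on the heart layer.
        pdb.gimp_image_select_item(image, CHANNEL_OP_REPLACE, heart)
        # Add a black (fully masking) layer mask to the texture layer...
        mask = tex.create_mask(ADD_MASK_BLACK)
        tex.add_mask(mask)
        # ...then fill the selected heart area of the mask with white.
        pdb.gimp_context_set_foreground((255, 255, 255))
        pdb.gimp_edit_fill(mask, FILL_FOREGROUND)
        pdb.gimp_selection_none(image)
        # Merge Visible Layers > Expanded as Necessary.
        pdb.gimp_image_merge_visible_layers(image, EXPAND_AS_NECESSARY)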
Photo Compositing with The GIMP: Part 1

Building on my previous GIMP article, Creating Pseudo-3D Imagery with GIMP, where you learned basic selection manipulation, gradient application, faking depth of field, and more, I'm following up with a new article closely related to the concepts discussed there. This time we'll raise the bar a bit with a glimpse of compositing, where we'll take an existing image or photograph and blend our 2-dimensional element seamlessly into it. So if you haven't yet read Creating Pseudo-3D Imagery with GIMP, I highly suggest you do, since almost all the major concepts we tackle here build on that article. But if you already have an idea of the concepts implied here, then you're good to go.

If you have been following my advice lately, this might feel cliché, but you can't blame me for saying "Always plan what you have to do!", right? There you go — another useful and slightly overused piece of advice.

Just to give you an overview, this article will teach you how to: 1) add 2-dimensional elements to photos or any other image you wish, 2) apply effects to better enhance the composition, and 3) plan out your scenes well. This guide won't teach you how to pick the right color combination, nor how to shoot great photographs; hopefully, though, by the end of your reading you'll be able to apply the concepts without hassle and get more comfortable each time you do.

Some of you might be a bit daunted by the title of this article, especially those most inclined toward specialized compositing software, but as much as I would like to use those applications, I'm much more comfortable exploring what GIMP is capable of — not only as a simple drawing application but as a minor compositing app as well. The concepts I present here are just basic representations of what compositing actually is, and in this context we'll focus on still images as reference and output throughout this article. If you want to composite image sequences, animation, or movies, I highly suggest GIMP's 3D partner – Blender. OK, promotion aside, let's head back to the topic at hand.

To give you an idea (because I believe — and I'm positive you do too — that pictures speak louder than words), here's what we should have by the end of this article. Yours probably won't match it exactly, but it should be fairly close, and I'll try my best to guide you along. So let's hop on!

Heart and Sphere Composited with GIMP

Compose, Compose, Compose!

Yup, you read it thrice; I did too, don't worry. So what's the fuss about composing anyway? The answer is pretty straightforward. Just as a song is written as a composition, a photo or image is much the same thing. Without proper composition, your image will never come to life. By composition, I mean a proper mix of colors, framing, lighting, and so on. This is one of the hardest obstacles any artist or photographer faces: it will either ruin a majestic idea, or turn your doodle into a creation so wonderful you can almost hear the melody of your lines rhythm through your senses (wow, that was almost a mouthful!).

Whichever tool you're comfortable using, what really matters is how easily you can interpret your ideas into something fruitful, rather than worrying about how to work your way around the software.
That's probably one reason I stuck with GIMP: not only am I confident it can deliver anything I can think of 2-dimensionally, but more importantly, I am comfortable using it, which in my opinion matters a great deal in design. Just as with how I wrote this article, composition comes into play (or had you already doubted me?). Without the drafts and planning I made, I don't believe I could have finished writing even a paragraph of this one.

To start the process, we'll use one photograph I shot just for this article (in an attempt to recreate the first image I showed you). If you don't want to follow this article exactly, you can grab a sample photo from Google Images or from Stock Exchange (www.sxc.hu); just be sure to credit the owner and respect whatever conditions or licenses the image has.

Photo to work on

Photo Enhancement

Honestly, the photo we have is already decent enough to work with, but let's try making it better so we won't have to adjust it later. First, let's open our image and do some primary color correction, just in case you're the type who thinks "something has got to be better, always". Go ahead and fire up our tool of choice (GIMP in this case) and open the image (as you can see below).

Opening the image in GIMP

With our photo active in the canvas and its layer selected (the only layer you see in the Layers window by default), right-click on the image, select Colors, then choose Levels. Adjusting the image's levels is a good way to fix color cast problems and edit the color range non-destructively (extreme cases excluded). Another great option is the Curves Tool, which manipulates your image in much the same way as Levels; but for the sake of this tutorial, we'll use the Levels Tool, since it's easier and faster to work with. You can see a screenshot below of the Levels Tool that we'll be using in a while.

Levels Tool

One nifty feature of the Levels Tool is the Auto function, which (you guessed it!) automates the color adjustment of our image based on GIMP's histogram reading and graph analysis. Often it makes the task easier for you, but it can also ruin your image; nothing beats your own visual judgment, so if you're not content with what Auto leveling gives you, go ahead and move the sliders in the window. Normally, I only adjust the Value channel of the image to correct its overall brightness and contrast without altering the overall color mood of the photo. But if you weren't lucky enough to set your camera's color balance the moment you shot the photo, or you feel the image in front of you has too strong a color cast, you can freely choose the other channels (Red, Green, and Blue respectively) from the drop-down menu. You can see a screenshot below of how I adjusted the photo we loaded.

Value Level Adjustment

RGB Color Level Choices

That's basically all we need to do to enhance our photo (or you can repeat the process a few more times to get the feel you want). If you'd like a safer way of editing (in case you run out of undo steps), duplicate the base layer that holds your image and work on the duplicate instead of the original; then you can just switch its visibility on and off to compare the changes you've made so far.
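For batch work, the same Levels adjustment is scriptable through Python-Fu. A sketch that nudges only the Value channel — the input/output numbers are placeholders, so trust your eyes over any preset:

    from gimpfu import *

    def quick_levels_fix(image):
        layer = pdb.gimp_image_get_active_drawable(image)
        # Value channel only: remap inputs 10..235 to 0..255, gamma 1.1.
        pdb.gimp_levels(layer, HISTOGRAM_VALUE, 10, 235, 1.1, 0, 255)
        pdb.gimp_displays_flush()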
Creating a Text Logo in Blender

Here is the final image we will be creating:

Let us begin! We are going to start with the default settings in Blender, as you can see below.

Creating the Background

To create the background, we are going to add a plane and then make a series of holes in it. This will act as the basis for our entire background when we replicate the plane with two array modifiers and a mirror modifier. Go ahead and:

Add a plane from top view by hitting spacebar > Add > Mesh > Plane
Subdivide that plane by hitting W > Subdivide Multi > 3 divisions

This gives us a grid that we can punch a few holes in with relative ease. Next, go ahead and select the vertices shown below:

Then:

Press X > Delete Faces to delete the selected faces

Next, select the inside edges of the upper-left hole by clicking on one of the edges with Alt + RMB. You may then hit E > Esc to extrude and cancel the transform that extruding activates. Next, hit Ctrl + Shift + S > 1 for the To Sphere command; this modifies the extruded vertices into a perfect circle. Check out the result below:

From here we can duplicate this circle and the surrounding faces into place over all the other holes, so that we have a mesh that will repeat without any gaps. Think of it as a tileable texture, but in mesh form! As you will surely notice, on the bottom left and bottom right you will only duplicate half of the circle. After duplicating each piece and moving it into place, it is necessary to remove all the duplicate vertices:

Select everything with A
Press W > Remove Doubles

Moving on: before we can replicate our pattern, we need to move it so that the bottom-left corner is at the center point of our grid. If you used the default size for the plane, you can simply select everything and hold down Ctrl while moving it to lock it to the grid.

For our final background we want the holes in our mesh to have some depth. To do this, all we need to do is select each of the inner circles and extrude them down along the Z-axis, as you can see in the image below:

Now is where things begin to get really fun! We are going to add two array modifiers to replicate our pattern: the first array will repeat the pattern along the X-axis to the right, and the second will replicate it down along the Y-axis. We will then use a mirror modifier along the X and Y axes to duplicate the whole pattern across the axes (a scripted version of this modifier stack appears after these steps).

First go to the Editing buttons and click Add Modifier > Array
Increase the count to 10
Click Merge
Add a second Array and change the count to 3
Click Merge
Change the X Offset to 0 and the Y Offset to 1.0

This will leave you with 1/4 of our final pattern. To complete it:

Add a Mirror Modifier
Click Y in addition to the default X; this will mirror it both up and across the central axis.
Add a Subsurf modifier to smooth out the mesh
Select everything with A and then press W > Set Smooth

Setting the mesh to smooth will likely cause some normal issues (black spots), in which case you need to hit Ctrl + N > Recalculate Normals while everything is selected.
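Here is that scripted version of the modifier stack, as a sketch in the modern bpy API (property names differ in 2.49, and the offset sign may need flipping depending on which way you want the rows to run):

    import bpy

    obj = bpy.context.active_object       # the tile with the holes in it

    arr_x = obj.modifiers.new("ArrayX", 'ARRAY')
    arr_x.count = 10
    arr_x.use_merge_vertices = True                   # the Merge button

    arr_y = obj.modifiers.new("ArrayY", 'ARRAY')
    arr_y.count = 3
    arr_y.use_merge_vertices = True
    arr_y.relative_offset_displace = (0.0, 1.0, 0.0)  # X offset 0, Y offset 1

    mirror = obj.modifiers.new("Mirror", 'MIRROR')
    mirror.use_axis[0] = True                         # X (the default)
    mirror.use_axis[1] = True                         # Y as well

    obj.modifiers.new("Subsurf", 'SUBSURF')           # smooth the result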
Polygon Modeling of a Handgun using Blender 3D 2.49: Part 1

With the base model created, we will be able to analyze the shape of our model and evaluate the next steps of the project. We may even decide to make changes to the project, because new ideas can appear when we see the object in 3D rather than in 2D.

Starting with a background image

The first step of the modeling is to add the reference image as the background of the Blender 3D view. To do that, go to the View menu in the 3D view and choose Background Image. The background image in Blender appears only when we are in an orthographic or camera view. The background image is a simple black-and-white drawing of the weapon, but it will be a great reference for modeling.

Before we go any further, it's important to point out a few things about the Background Image menu. We can make some adjustments if the image doesn't fit our Blender view:

Use: With this button turned on, we will use the image as a background. If you want to hide the image, just turn it off and the image will disappear.
Blend: The blend slider controls the transparency of the image. If you feel the image is blocking your view of the model, making it a bit transparent may help.
Size: As the name says, this controls the scale of the image.
X and Y offset: With this option, we can move the image along the X or Y axis to place it in a specific location.

After clicking the Use button, just hit the Load button and choose the image to be used as a reference. Since you don't have the image used in this example, visit the Packt website and download the project files from Support.

If you've never used a reference image in Blender, note that reference images appear in the 3D view only in the orthographic views (top, right, left, front, and so on) or in camera view mode. If you hit 5 and change the view to perspective, the image will disappear. Rotating the view with the middle mouse button or the scroll wheel also makes the image disappear; it's still there, though, and you can see it again by changing back to an orthographic or camera view.

Make the image more transparent using the Blend control; it will help you focus on the model instead of the image. A value of 0.25 is enough to help with the modeling without causing confusion.
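A quick note for readers following along in a current Blender: the Background Image panel is gone, and reference images are added as image empties instead. A rough Python sketch (the path is a placeholder, and the old Blend slider corresponds to the empty's Opacity option in the sidebar):

    import bpy

    img = bpy.data.images.load("//refs/handgun_blueprint.png")  # placeholder
    empty = bpy.data.objects.new("Reference", None)
    empty.empty_display_type = 'IMAGE'
    empty.data = img                          # image empties hold the image here
    empty.empty_image_offset = (-0.5, -0.5)   # center the image on the origin
    bpy.context.collection.objects.link(empty)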