
How-To Tutorials


Designing Site Layouts in Inkscape

Packt
09 Nov 2010
11 min read
Inkscape 0.48 Essentials for Web Designers

Use the fascinating Inkscape graphics editor to create attractive layout designs, images, and icons for your website:

- The first book on the newly released Inkscape version 0.48, with an exclusive focus on web design
- Comprehensive coverage of all aspects of Inkscape required for web design
- Incorporate eye-catching designs, patterns, and other visual elements to spice up your web pages
- Learn how to create your own Inkscape templates in addition to using the built-in ones
- Written in a simple illustrative manner, which will appeal to web designers and experienced Inkscape users alike

Architecting a web site

Although as a web designer you will usually be regarded as the "look and feel" person for a web site, you are also a key partner in determining the site architecture. As you design, you often define how the site will be navigated. Is this web site one page, or will an end user "click around" the site to other areas or sub-pages and explore? Where will each page inter-link or link out to other websites? What is the main navigational element: a navigation bar? Where will it reside on all the pages? Is this site content or search driven? All these questions (and many more) require the entire team's involvement (and a client's opinion) to make the appropriate decisions. As the web designer, it is your job to work with all of these elements at a visual level—navigation bars, search fields, buttons, title bars, footers, and more—and fit them into the overall web site design.

Web layout—principles and basics

Designing for the web and for print are similar in that they have the same principles and goals for an end viewer: appealing content that works together in the space. Although the principles are basic, they are guidelines that improve any web page design. Here are the techniques:

- Proximity, or grouping similar information together on a web page. You can get creative in how you group this information by using alignment, icons, and even just white space, but regardless, the technique and principle are the same: information that belongs together should be together.
- Alignment is the simple idea of making sure all of the elements line up on the screen. If you have everything left aligned, keep it that way on the page. Use natural alignments within the entire web space when you use more than one graphical element, such as photos, graphics, and/or text.
- Repetition can help unify a web page. Repeating elements such as buttons, shapes (graphical or just placement), or colors can really make an impact and keep the design simple and clean, and thus easier for a viewer to navigate.
- Contrast can have a huge and favorable impact in web design, as long as it is used effectively. Contrast is achieved with size, colors, direction, shapes, and fonts (mixing modern with old style). Even font weight can help create contrast. Just make sure that all of these elements still work well with the content on the web page itself, and are not there purely for the sake of contrast.

The basic design

Before designing the layout in Inkscape, it can help to plan the placement of the main areas of a web page—in essence, to help with the design's overall alignment of items and proximity of similar items. For our purposes, we'll create a simple main web page for a small business.
This main page will have these areas:

- Header
- Footer
- Sidebar
- Content
- Navigation

Each web page project is different, and you may have to add more page elements to this list or delete some so that the final list of elements fits the needs of the overall design. For the purposes of getting agreement from a team working on this project, a good practice is to create a basic layout showing where each of the areas will be placed on the screen—this is often referred to as the web page wireframe. Typically, wireframes are completed before any graphics are created. The following screenshot illustrates a basic layout or wireframe:

This high-level approach is a great start to actually building the page design. It gives us general placements of each area of the design, and then we can set up grids and guidelines.

Starting a new design project

When you open Inkscape, it opens to a new document. However, we want to ensure that we have a page size that is right for our web page, so we create a new document based on the predefined desktop sizes, which are common web page sizes, as follows:

1. From the main menu, select File and then New. A pop-up menu appears showing a number of default page sizes to choose from.
2. Choose a template size based on the browser standards or specifications. Most of the templates in this menu are based on web designs. If you want to view the print media template sizes, go to the main menu, select File and then Document Properties, and view the Format section.
3. We'll choose desktop_800x600. However, the dimensions should be chosen based on how your target viewer will view the site, whether via a computer or a mobile device. Also, sometimes your client or company will specify the exact size your web pages need to be.

A new document opens in Inkscape and you are ready to begin. Let's set a few more preferences and then save the file with your project name before we start designing.

Using grids and guidelines

When designing any space on the web, keep the page clean and—as stated in the design principles and techniques—aligned. So let's make the canvas grid viewable on the screen and set up some guidelines based on our wireframes.

Viewing the grid

With your new document still open, on the Inkscape main menu select View and then Grid. You'll see that a blue grid appears across the entire canvas area. We'll use this grid to create the basic areas of our layout and then create guides to begin creating our actual layout elements.

Making guides

Guides are lines on the screen that you use for aligning (that is, guiding) objects. These lines are only visible while you are working in Inkscape, and we can set objects to "snap to" them while we are designing. Both of these simple tools (guides and the Snap to feature) give you automatic alignment for the basic areas of your web page layout—which in turn helps your web page achieve the best design. To create a guide in any open document, make sure the Select tool is active and drag from the left side or top of the screen towards your page, as shown in the following screenshot. A red line represents the guide until you 'let go' of it and place it on the page; then the line turns blue. You can move guides after placing them on the page by using the Select tool and clicking and dragging the circle node on the guide. Now let's discuss how to use wireframes and create guides based on those web page layout elements.

Creating a new layer

When you create documents within Inkscape, you can have layers of objects.
This gives great flexibility when creating web layouts. You can move groups of objects on a layer, separate objects by layer, and stack, re-order, or hide layers. Settings can be applied per layer, so you can save drafts or different versions of mockups and keep all of this in one file. The layer you are currently using is called the drawing layer; it is selected in the Layers dialog and is shown in a darker color. In your open document with the grid visible, let's make the layers viewable and create a layer called Basic Layout:

1. To make the Layers dockable dialog viewable, from the main menu select Layer and then Layers. The Layers dialog is displayed on the right side of your screen. You can also press Shift + Ctrl + L on your keyboard to display it.
2. In the Layers dialog, press the + button to create a new layer.
3. In the Layer Name field, type the name Basic Layout and click Add. You will notice the new layer is added above the existing one in the Layers dialog.

Creating basic design areas in Inkscape

Here's where we begin to transfer our wireframes into Inkscape so we can start the design process. To start:

1. Use the Rectangle tool to draw rectangles for each of the layout areas in your basic design. For now, use different shades of gray for each area so you can easily distinguish between them at a glance. To change the fill color of a particular rectangle, left-click the rectangle and choose a gray shade for it, or drag the gray shade from the color palette onto the rectangle.
2. Use sharp-edged (not rounded) rectangles. If you need to change to sharp corners, click the Make Corners Sharp button in the Tool Controls bar.
3. Make sure your rectangle shapes do not have an outline or stroke. Use Shift + left-click to open the Stroke dialog and choose No Color (the icon with an X) to delete the stroke.
4. Position the rectangles so there are no white spaces in between them. From the main menu choose Object and then Align and Distribute. In the Remove Overlaps section, click the icon. This makes sure that the bounding boxes around each object don't overlap each other and places the objects tangent to each other.
5. Use the W (width) number field in the Tool Controls bar to apply a setting of 800.0 px. The X: 0.0 and Y: 0.0 fields reference the bottom-left corner of your page border.

Here's roughly what your canvas should look like:

Converting shapes to guides

Once all of your areas are blocked out on the canvas, we'll need to convert the current rectangles into guides so we can use them when creating our web page layout graphics. To keep the Basic Layout layer intact, we need to copy all of the rectangles in this layer:

1. On the main menu, select Edit and then Select All (or use the keyboard shortcut Ctrl + A). Then select Edit and Duplicate (or use the keyboard shortcut Ctrl + D) to duplicate all of the elements in this layer.
2. Now you are ready to convert these duplicated shapes into guides. First, select all the rectangles in the top (duplicate) layout. Do this by clicking a rectangle, holding the Shift key on your keyboard, and then clicking/selecting the next rectangle.
3. When you have all five rectangles selected, from the main menu select Object and then Objects to Guides. Your duplicate rectangles will be removed from the canvas and replaced with blue guides.

To better see the guides, turn off the grid (from the main menu choose View and then Grid). You'll also notice your originally created basic layout areas still defined on the screen.
We'll use these shapes later on to help export our design into workable graphics for the HTML programmers. Now it is time to save this new document before you start on the details of the design. From the main menu select File and then Save As. Choose an appropriate file name for your project and save it. Make sure you save this document in the native Inkscape SVG format to retain its editable features and layers.

Project file management

To keep all files for this project in an easy-to-find location, it makes sense to create a project folder on your computer and save this design file within that folder. As you export designs for use within web pages and HTML, you will accumulate a number of files; using a directory or folder to house all project files makes them easier to find.

Summary

In this article we took a look at architecting a web site and discussed design techniques that can take web pages from good to great. In the next article, Creating a Layout Example in Inkscape, we will work through an example of creating a web page layout.

Further resources on this subject:

- Creating a Layout Example in Inkscape [Article]
- Logos in Inkscape [Article]
- Web Design Principles in Inkscape [Article]
- Creating Web Templates in Inkscape [Article]


Creating a Brick Breaking Game

Packt
03 Mar 2015
32 min read
Have you ever thought about procedurally generated levels? Have you thought about how this could be done, how their logic works, and how their resources are managed? With our example bricks game, you will get to the core point of generating colors procedurally for each block, every time the level gets loaded. Physics has always been a huge and massively important topic in the process of developing a game. A brick breaking game can be made in many ways, using the many techniques that the engine provides, but I chose to make it a physics-based game to cover the usage of the new, unique, and amazing component that Epic has recently added to its engine. The Projectile component is a physics-based component for which you can tweak many attributes to get a huge variation of behaviors that you can use with any game genre.

By the end of this article by Muhammad A. Moniem, the author of Learning Unreal Engine iOS Game Development, you will be able to:

- Build your first multicomponent blueprints
- Understand more about the game modes
- Script a touch input
- Understand the Projectile component in depth
- Build a simple emissive material
- Use the dynamic material instances
- Start using the construction scripts
- Detect collisions
- Start adding sound effects to the game
- Restart a level
- Have a fully functional gameplay

(For more resources related to this topic, see here.)

The project structure

For this game sample, I made a blank project template and selected to use the starter content so that I could get some cubes, spheres, and all the other 3D basic meshes that will be used in the game. So, you will find the project in the same basic structure, and the most important folder, where you will find all the content, is called Blueprints.

Building the blueprints

The game, as you might see in the project files, contains only four blueprints. As I said earlier, a blueprint can be an object in your world or even a piece of logic without any physical representation inside the game view. The four blueprints responsible for the game are explained here:

- ball: This blueprint is responsible for the ball rendering and movement. You can consider it an entity in the game world, as it has its own representation, which is a 3D ball.
- platform: This one also has its visual representation in the game world. This is the platform that will receive the player input.
- levelLayout: This one represents the level itself and its layout, walls, blocks, and game camera.
- bricksBreakingMode: Every game or level made with Unreal Engine should have a game mode blueprint. This defines the main player, the controller used to control the gameplay, the pawn that works in the same way as the main player but has no input, the HUD for the main UI controller, and the game state that is useful in multiplayer games. Even if you are using the default settings, it is better to make a placeholder one!

Gameplay mechanics

I've always been a big fan of planning the code before writing or scripting it. So, I'll try to keep the same habit here as well; before making each game, I'll explain how the gameplay workflow should work. With such a habit, you can figure out the weak points of your logic even before you have built it. It helps you develop quickly and more efficiently. As I mentioned earlier, the game has only three working blueprints, and the fourth one is used to organize the level (which is not gameplay logic and has no logic at all).
Here are the steps that the game should follow, one by one:

1. At the start of the game, the levelLayout blueprint will start instantiating the bricks and set a different color for each one.
2. The levelLayout blueprint sets the rendering camera to the one we want.
3. The ball blueprint starts moving the ball with a proper velocity and sets a dynamic material for the ball mesh.
4. The platform blueprint starts accepting input events on a frame-by-frame basis from mouse or touch inputs, and sets a dynamic material for the platform mesh.
5. If the ball blueprint hits any other object, it should never speed up or slow down; it should keep the same speed.
6. If the ball blueprint crosses the bottom line, it should restart the level.
7. If the player presses the screen or clicks the mouse, the platform blueprint should move only on the y axis to follow the finger or the mouse cursor.
8. If the ball blueprint hits any brick from the levelLayout blueprint, it should destroy it.
9. The ball plays some sound effects. Depending on the surface it hits, it plays a different sound.

Starting a new level

The game will be based on one level only, and the pretty new level that the engine already gives us, with a sky dome, light effects, and some basic assets, is not necessary for our game. So, you need to go to the File menu, select New Level, add it somewhere inside your project files, and give it a special name. In my case, I made a new folder named gameScene to hold my level (or any other levels if my game were a multilevel game) and named it mainLevel.

Now, this level will never get loaded into the game without forcing the engine to do so. The Unreal Editor gives you a great set of options to define which map/level is loaded by default when the game starts or when the editor runs. When you ship the game, the Unreal Editor also lets you choose which levels should be shipped and which shouldn't, to save some space. Open the Edit menu and then open Project Settings. When the window pops up, select the Maps & Modes section and set Game Default Map to the newly created level. Editor Startup Map should also have the same level.

Building the game mode

Although a game mode is a blueprint, I prefer to always separate its creation from the creation of the game blueprints, as it contains zero work for logic or even graphs. A game mode is essential for each level, not only for each game. Right-click in an empty space inside your project directory and select Blueprint under the Basic Assets section. When the Pick Parent Class window pops up, select the last type of blueprint, which is called Game Mode, and give your newly created blueprint a name, which, in my case, is bricksBreakingMode.

Now we have a game mode for the game level; this mode will not work at all without being connected to the current level (the empty level I made in the previous section) somehow. Go to World Settings by clicking on the icon in the top shelf of the editor (you need to get used to accessing World Settings, as it has so many options that you will need to tweak to fit your games). The World Settings panel will be on the right-hand side of your screen. Scroll down to the Game Mode part and select the mode you made from the Game Mode Override drop-down menu. If you cannot find the one you've made, just type its name, and the smart menu will search the project to find it.
Building the game's main material

As this is an iOS game, we should work with caution when adding elements and code, to save the game from any performance overhead, glitches, or crashes. Although the engine can run a game with the Light option on an iOS device, I always prefer to stay as far away as possible from using lights/directional lights in an iOS game, as a directional light source evaluated in real time would mean recalculating all the vertices. So, if the level has 10k vertices with two directional lights, it will be calculated as 30k vertices.

The best way to avoid using a light source for a simple game like the brick breaking game is to build a special material that can emulate light emission; this material is called an emissive material. In your project panel, right-click in an empty space (perhaps inside the materials folder) and choose a material from the Basic Assets section. Give this material a name (which, in my case, is gameEmissiveMaterial) and then double-click it to open the material editor. As you can see, the material editor for a default new material is almost empty, apart from one big node that contains the material outputs with a black colored material. To start adding new nodes, you will need to right-click in an empty space of your editor grid and then either select a node or search for nodes by name; both ways work fine.

The emissive material is just a material with Color and Emissive Color; you can see these names in your output list, which means you will need to connect some sort of nodes or graphs to these two sockets of the material output. Now, add the following three new nodes:

- VectorParameter: This represents the color; you can pick a color by clicking on the color area in the left-hand panel of the screen or on the Default Value parameter.
- ScalarParameter: This represents a factor to scale the color of the material; you can set its Default Value to 2, which works fine for the game.
- Multiply: This will multiply two values (the color and the scalar) to give a value to be used for the emission.

With these three nodes in your graph, you can probably see how it works. The base color is connected to the base color output, and then the Multiply result of the base color and the scalar is connected to the emissive color output of the material. You can rename the nodes and give them special names, which will be useful later on. I named the VectorParameter node BaseColor and the Scalar node EmissiveScalar. You can check out the difference between the emissive material you made and a default material by applying both to two meshes in a level without any light. The default material will render the mesh black, as it expects a light source, but the emissive one will make it colored and shiny.

Building the blueprints and components

I prefer to call all the blueprints for this game actors, as all of them will be based on the Actor class in the engine core. This class usually represents any object, with or without logic, in the level. Although blueprints based on the Actor class do not accept input by default, you will learn a way to force any actor blueprint to get input events. In this section, you will build the different blueprints for the game and add components for each one of them. Later on, in another section, you will build the logic and graphs. As I always say, building and setting all the components and the default values should be the first thing you do in any game, and adding the logic should follow. Do not work on both simultaneously!
Building the layout blueprint

The layout blueprint should include the bricks that the players are going to break, the camera that renders the level, and the walls that the ball is going to collide with. Start making it by adding an Actor blueprint in your project directory. Name it levelLayout and double-click on it to open the blueprint editor. The blueprint editor, by default, contains the following three subeditors; you can navigate between them via the buttons in the top-right corner:

- Defaults: This is used to set the default values of the blueprint class type
- Components: This is used to add different components to build and structure the blueprint
- Graph: This is where we will add the scripting logic

The majority of the time, you will be working with the Components and Graph editors only, as the default values in the Defaults editor usually work best.

Open the Components editor and start adding these components:

Camera: This will be the component that renders the game. As you can see in the preceding screenshot, I added one component and left its name as Camera1. It was set as ROOT of the blueprint; it holds all the other components as children underneath its hierarchy.

Changed values: The only value you need to change in the camera component is Projection Mode. You need to set it to Orthographic, as the game will be rendered as a 2D game, and keep Ortho Width at 512, as this will make the screen show all the content at a good size. Feel free to use different values based on the content of your level design. Orthographic cameras work without depth, and they are recommended more for 2D games. On the other hand, the perspective camera has more depth, and it is better used with any game with 3D content.

Static Mesh: To be able to add meshes as boundaries or triggering areas to collide with the ball, you will need to add cubes to work as collision walls, perhaps hidden walls. The best way to do this is by adding four static meshes and aligning and moving them to build a scene stage. Renaming all of them is also a good way to go. To be able to distinguish between them, you can name them as I did: StaticMeshLeftMargin, StaticMeshRightMargin, StaticMeshTopMargin, and StaticMeshBottomMargin. The first three are the left, right, and top margins; they will work as collision walls to force the ball to bounce in different directions. However, the bottom one will work as a trigger area to restart the level when the ball passes through it.

Changed values: You need to set Static Mesh for each of them to the cube and then start to scale and move them to build the scene. For the walls, you need to add the Wall tag for the first three meshes in the Component Tags options area, and for the bottom trigger, you need to add another tag; something like deathTrigger works fine. These tags will be used by the gameplay logic to detect whether the ball hit a wall and you need to play a sound, or whether it hit a death area and you need to restart the level. In the Collision section for each static mesh, you need to set both SimulationGeneratesHitEvents and GenerateOverlapEvents to True. Also, for CollisionPreset, you can select BlockAll, as this will create solid walls that block any other object from passing. Finally, from the Rendering options section, you need to select the emissive material we made to be able to see those static meshes, and you need to mark Hidden in Game as True to hide those objects.
Keep in mind that you can keep those objects visible in the game for debugging purposes, and when you are sure that they are in the correct place, you can return to this option and set it to True again.

Billboard: For now, you can think about the billboard component as a point in space with a representation icon, and this is how it is mostly used inside UE4, as the engine does not support an independent transform component yet. Billboards have traditionally been used to show content that always faces the camera, such as particles, text, or anything else you need to always render from the same angle. As the game will be generating the blocks/bricks during the gameplay, you will need some points to define where to build, or to start building, those bricks. You can add five billboard points, rename them, and rearrange them to look like a column. You don't have to change any values for them, as you will be using their position-in-space values only! I named those five points firstRowPoint, SecondRowPoint, thirdRowPoint, fourthRowPoint, and fifthRowPoint.

Building the ball blueprint

Start making the ball blueprint by adding an Actor blueprint in your project directory. Name it Ball and double-click on it to open the blueprint editor. Then, navigate to the Components subeditor if you are not there already. Start adding the following components to the blueprint:

The sphere will work as the collision surface for the Ball blueprint. For this reason, you will need to set its Collision options SimulationGeneratesHitEvents and GenerateOverlapEvents to True. Also, set the CollisionPreset option to BlockAll to act in a manner similar to the walls from the layout blueprint. You need to set the SphereRadius option in the Shape section to 26.0 so that it is a good size that fits the screen's overall size.

The process for adding static meshes is the same as you did earlier, but this time, you will need to select a sphere mesh from the standard assets that came with the project. You will also need to set its material to the project default material you made earlier in this article. Also, after selecting it, you might need to adjust its Scale to 0.5 in all three axes to fit the collision sphere size. Feel free to move the static mesh component on the x, y, and z axes till it fits the collision surface.

The projectile movement component is the most important one for the Ball blueprint, or perhaps the most important one throughout this article, as it is the one responsible for the ball's movement, velocity, and physics behaviors. After adding the component, you will need to make some tweaks to it so that it gives the behavior that matches the game. Keep in mind that any small change in values or variables will lead to a completely different behavior, so feel free to play with the values and test them to get some crazy ideas about what you can achieve. For the changed values, you need to set Projectile Gravity Scale to 0.0 within the Projectile options; this will allow the ball to fly in the air without a gravity force to bring it down (or pull it in any other direction for a custom gravity). For Projectile Bounces, you will need to mark Should Bounce as True. In this case, the projectile physics will be forced to keep bouncing with the amount of bounciness you set.
As you want the ball to keep bouncing off the walls, you need to set the value to 1.0 to give it full bounciness power. From the Velocity section, you will need to enter a velocity for the ball to start with when the game runs; otherwise, the ball will never move. As you want the first bounce of the ball to be towards the blocks, you need to set the Z value to a high number, such as 300. To give it more level design sense, the ball shouldn't bounce in a vertical line, so it is better to give it some force on the horizontal Y axis as well, to move the ball in a diagonal direction. So, let's add 300 into Y as well.

Building the platform blueprint

Start making the platform blueprint by adding an Actor blueprint in your project directory. Name it platform and double-click on it to open the blueprint editor. Then, navigate to the Components subeditor if you are not there already. You will add only one component, and it will work for everything. You want to add a Static Mesh component, but this time, you will be selecting the Pipe mesh; you can select whatever you want, but the pipe works best. Don't forget to set its material to the same emissive material we used earlier, to be able to see it in the game view, and set its Collision options SimulationGeneratesHitEvents and GenerateOverlapEvents to True. Also, CollisionPreset should be set to BlockAll to act in the same manner as the walls from the layout blueprint.

Building the graphs and logic

Now that all the blueprints have been set up with their components, it's time to start adding the gameplay logic/scripting. However, to be able to see the result of what you are going to build, you first need to drag and drop the three blueprints inside your scene and organize them to look like an actual level. As the engine is a 3D engine and there is no support yet for 2D physics, you might notice that I added two extra objects to the scene (giant cubes), which I named depthPreservingCube and depthPreservingCube2. These objects are there basically to prevent the ball from moving on the depth axis, which is X in the Unreal Editor. This is how both of the new preserving cubes look from a top view:

One general step that you will perform for all blueprints is to set the dynamic material for them. As you know, you made only one material and applied it to the platform and to the ball. However, you also want both to look different during the gameplay. Changing the material color right now would change the appearance of both objects. Changing it during the gameplay via the construction script and the dynamic material instances feature, however, will allow you to have many colors for many different objects while they still share the same material. So, in this step, you will set this up for the platform blueprint and the ball blueprint. I'll explain how to do it for the ball, and you will perform the same steps for the platform.

Select the ball blueprint first and double-click to open the editor; then, this time, navigate to the Graph subeditor to start working with the nodes. You will see that there are two major tabs inside the graph; one of them is named Construction Script. This unique tab is responsible for the construction of the blueprint itself. Open the Construction Script tab, which always has a Construction Script node by default; then, drag and drop the StaticMesh component of the ball from the panel on the left-hand side. This will bring up a small context menu that has only two options: Get and Set.
Select Get, and this will add a reference to the static mesh. Now, drag a line from Construction Script, release it in an empty space, add a Create Dynamic Material Instance node from the context menu, and set its Source Material option to the material we want to instance (which is the emissive material). Keep in mind that if you are using a later version, Epic has introduced an easier way to access the Create Dynamic Material Instance node: just drag a line from Static Mesh-ball inside the Graph, rather than the Construction Script. Now, connect the static mesh to be the target and drag a line out of Return Value of the Create Dynamic Material Instance node. From the context menu, select the first option, which is Promote to a Variable; this will add a variable to the left-panel list. Feel free to give it a name you can recognize, which, in my case, is thisColor. Now, the whole thing should look like this:

Now that you've created the dynamic material instance, you need to set the new color for it. To do this, you need to go back to the event graph and start adding the logic for it. I'll add it to the ball here, and you need to apply it again in the Event Graph of the platform blueprint. Add an Event Begin Play node, which is responsible for executing some procedures when the game starts. Drag a wire out of it and select the Set Vector Parameter Value node, which is responsible for setting the value for the material. Now, add a reference for the thisColor variable and connect it to the Target of the Set Vector Parameter Value node. Last but not least, enter the Parameter Name that you used to build the material, which, in my case, is BaseColor. Finally, set Value to a color you like; I picked yellow for the ball. Which color would you like to pick?

The layout blueprint graph

Before you start working with this section, you need to make several copies of the material we made earlier and give each one its own color. I made six different ones to give a variation of six colors to the blocks. The scripts here will be responsible for creating the blocks, changing their colors, and finally, setting the game view to the correct camera. To serve this goal, you need to add several variables of different types:

- numberOfColumns: This is an integer variable with a default value of six, which is the total number of columns per row.
- currentProgressBlockPosition: This is a vector variable to hold the position of the last created block. It is very important because you are going to add blocks one after the other, so you want to know the position of the last block and then add spacing to it.
- aBlockMaterial: This is the material that will be applied to a specific block.
- materialRandomIndex: This is a random integer value to be used for procedurally selecting the color of each block.

To make things more organized, I made several custom events. You can think of them as a set of functions; each one has a block of procedures to execute:

Initialize The Blocks: This custom event node has a set of for loops that work one by one on initializing the target blocks when the game starts. Each loop cycles six times, from index 0 to the number-of-columns index. When it is finished, it runs the next loop. Each loop body is a custom function itself, and they all run the same set of procedures, except that they use a different row.

chooseRandomMaterial: This custom event handles the process of picking a random material to be applied during block creation.
It works by setting a random value between 1 and 6 to the materialRandomIndex variable, and depending on the selected value, the aBlockMaterial variable will be set to a different material. This aBlockMaterial variable is the one that will be used to set the material of each created block in each iteration of the loop for each row.

addRowX: I named this X here, but in fact, there are five functions to add the rows: addRow1, addRow2, addRow3, addRow4, and addRow5. All of them are responsible for adding rows; the main difference is the start point for adding the row; each one of them uses a different billboard transform, starting from firstRowPoint and ending with fifthRowPoint. You need to connect your first node as Add Static Mesh and set its properties as for any other static mesh. You need to set its material to the emissive one. Set Static Mesh to Shape_Pipe_180, give it a brickPiece tag, and set its Collision options Simulation Generates Hit Events and Generate Overlap Events to True. Also, Collision Preset has to be set to Block All to act in the same manner as the walls from the layout blueprint and receive the hit events, which will be the core of the ball detection. This created mesh will need a transform point to be instantiated at its coordinates. This is where you will need to pick the row point transform reference (depending on your row, you will select the point number), add it to a Make Transform node, and finally, set the new transform's Y rotation to -90 and its XYZ scale to 0.7, 0.7, 0.5 to fit the correct size and flip the block to have a better convex look. The second part of the addRow event should use the chooseRandomMaterial custom event that you already made to select one of the six materials at random. Then, you can execute SetMaterial, make its Target the same mesh that was created via Add Static Mesh, and set its Material to aBlockMaterial; the material changes every time the chooseRandomMaterial event gets called. Finally, you can use SetRelativeLocation to move the billboard point that is responsible for that row to another position on the y axis, using the Make Vector and Add Int(+) nodes to add 75 units every time as spacing between every two created blocks.

Now, if you check the project files, you will find that the only difference is that there are five functions called addRow, and each of them uses a different billboard as a starting point to add the blocks. If you run the version you made or the one in the project files, you will be able to see the generated blocks, and each time you stop and run the game, you will get a completely different color variation of the blocks.

There is one last thing needed to completely finish this blueprint. As you might have noticed, this blueprint contains the camera in its components. This means it should be the one that holds the functionality of setting this camera as the rendering camera. So, in Event Begin Play, this functionality will be fired when the level starts. You need to connect the Set View Target With Blend node, which will set the camera to the Target camera, and you need to connect Get Player Controller (player 0 is player number 1) to the Target socket. The blueprint itself is used as the New View Target. Finally, you need to call the initializeTheBlocks custom event, which will call all the other functions. Congratulations! Now you have built your first functional and complex blueprint, one that contains the main and important functionalities everyone must use in any game.
Also, you got the trick of how you can randomly generate or change things such as the color of the blocks to make the levels feel different every time.

The Ball blueprint graph

The main event node that will be used in the ball graph is Event Hit, which is fired automatically every time the ball collider hits another collider. If you remember, while creating the platform, walls, and blocks, we added tags to every static mesh to identify them. Those names are used now. Using a node called Component Has Tag, we can compare the object component that the ball has hit with the value of the Component Has Tag node, and we then get either a positive or a negative result. So, this is how it should work:

- Whenever the ball gets hit by another collider, check whether it is a brickPiece tagged component. If this is true, then disable the collision of the brick piece via the Set Collision Enabled node and set it to No Collision to stop it responding to any other collisions. Then, hide the brick mesh using the Set Visibility node and keep the New Visibility option unmarked, which means that it will be hidden. Then, play a sound effect for the hit to make the gameplay more dynamic. You can play sound in many different ways, but let's use the Play Sound at Location node for now, use the location of the ball itself, and use the hitBrick sound effect from the Audio folder by assigning it to the Sound slot of the Play Sound at Location node. Finally, reset the velocity of the ball using the Set Velocity node referenced by the Projectile Movement component and set it to XYZ 300, 0, 300.
- If it wasn't a brickPiece tag, then check whether the component has the Wall tag. If this is the case, then use Play Sound at Location, use the location of the ball itself, and use the hitBlockingWall sound effect from the Audio folder by assigning it to the Sound slot of the Play Sound at Location node.
- If it wasn't tagged with Wall, then check whether it is finally tagged with deathTrigger. If this is the case, then the player has missed it, and the ball is now below the platform. So, you can use the Open Level node to load the level again and assign the level name mainLevel (or any other level you want to load) to the Level Name slot.

The platform blueprint graph

The platform blueprint will be the one that receives the input from the player. You just need to define the player input to make the blueprint able to receive those events from the mouse, touch, or any other available input device. There are two ways to do this, and I always like to use both:

Enable Input node: I assume that you've already added the scripting nodes inside the Event Graph to set the dynamic material color via Set Vector Parameter Value. This means you already have an Event Begin Play node, so you need to connect its network to another node called Enable Input; this node is responsible for forcing the current blueprint to accept input events. Finally, you can set its Player Controller value to a Get Player Controller node and leave Player Index as 0 for player number 1.

Auto Receive Input option: By selecting the platform blueprint instance that you've dropped inside the scene from the Scene Outliner, you will see that it has many options in the Details panel on the right-hand side.
By changing the Auto Receive Input option to Player 0 under the Input section, you will get the same effect as with the previous solution.

Now, we can build the logic for the platform movement, and anything that is built can be tested directly in the editor or on the device. I prefer to break the logic into two pieces, which makes it easier than it looks:

Get the touch state: In this phase, you will use the Input Touch event, which is executed when a touch gets pressed or released. Based on the touch state, you will check via a Branch node whether the state is True or False. Your condition for this node should be the Touch 1 index, as the game will not need more than one touch. Based on the state, I set a custom Boolean variable named Touched so that its value matches the touch state. Then, you can add a Gate node to control the execution of the following procedures based on the touch state (Pressed or Released) by connecting the two cases to the Open gate and the Close gate execution sockets. Finally, you can set the actor location and set it to use the Self actor as its target (which is the platform actor/blueprint) to change the platform location based on touches. Defining the New Location value is the next chunk of the logic.

Actor location: Using a Make Vector node, you can construct a new point position in the world made of X, Y, and Z coordinates. As the y axis will be the horizontal position, which will be based on the player's touch, only this needs to change over time. However, the X and Z positions will stay the same all the time, as the platform will never move vertically or in depth. The new vector position will be based on the touch phase. If the player is pressing, then the position should match the touch input position. However, if the player is not pressing, then the position should be the same as the last point the player pressed. I made a float variable named horizontalAxis; this variable will hold the correct Y position to be added to the Make Vector node. If the player is pressing the screen, then you need to get the finger press position by returning Impact Point from Break Hit Result via a Get Hit Result Under Finger By Channel node for the currently active player. However, if the player is not touching the screen, then the horizontalAxis variable should stay the same as the last-known location of the Self actor. It is then set as the MakeVector Y position value.

Now, you can save and build all the blueprints. Don't hesitate, now or at any time during the process of building the game logic, to build or launch the game on a real device to check where you are. The best way to learn more about the nodes and the effect of minor changes is to build to the device all the time and change some values every time.

Summary

In this article, you went through the process of building your first Unreal iOS game. You got used to making blueprints by adding nodes in different ways, connecting nodes, and adding several component types into a blueprint and changing their values. You also learned how to enable input in an actor blueprint, get touch and mouse input, and fit them to your custom use. You also got your hands on one of the most famous and powerful rendering techniques in the editor, which is called dynamic material instancing. You learned how to make a custom material and change its parameters whenever you want.
Procedurally changing the look of the level is something interesting nowadays, and we barely scratched the surface of it by setting different materials every time we load the level.

Further resources on this subject:

- UnrealScript Game Programming Cookbook [article]
- Unreal Development Toolkit: Level Design HQ [article]
- The Unreal Engine [article]


Features of Sitecore

Packt
25 Apr 2016
17 min read
In this article by Yogesh Patel, the author of the book Sitecore Cookbook for Developers, we will discuss the importance of Sitecore and its key features.

(For more resources related to this topic, see here.)

Why Sitecore?

Sitecore Experience Platform (XP) is not only an enterprise-level content management system (CMS), but rather a web framework or web platform, and a global leader in experience management. It continues to be very popular because of its highly scalable and robust architecture, continuous innovation, and ease of implementation compared to other CMSs available. It also provides easier integration with many external platforms such as customer relationship management (CRM), e-commerce, and so on.

Sitecore's architecture is built on the Microsoft .NET framework and provides great depth of APIs, flexibility, scalability, performance, and power to developers. It has great out-of-the-box capabilities, but one of its great strengths is the ease of extending these capabilities; hence, developers love Sitecore! Sitecore provides many features and functionalities out of the box to help content owners and marketing teams. These features can be extended and highly customized to meet the needs of your unique business rules. Sitecore provides these features through different user-friendly interfaces for content owners that help them manage content and media easily and quickly. Sitecore user interfaces are supported on almost every modern browser. In addition, fully customized web applications can be layered in and integrated with other modules and tools using Sitecore as the core platform. It helps marketers to optimize the flow of content continuously for better results and more valuable outcomes. It also provides in-depth analytics, a personalized experience for end users, and marketing automation tools, which play a significant role for marketing teams. The following are a few of the many features of Sitecore.

CMS based on the .NET Framework

Sitecore provides building components on ASP.NET Web Forms as well as ASP.NET Model-View-Controller (MVC) frameworks, so developers can choose either approach to match the required architecture. Sitecore provides web controls and sublayouts when working with ASP.NET Web Forms, and view renderings, controller renderings, models, and item renderings when working with the ASP.NET MVC framework. Sitecore also provides two frameworks to prepare user interface (UI) applications for Sitecore clients—Sheer UI and SPEAK. Sheer UI applications are prepared using Extensible Application Markup Language (XAML), and most of the Sitecore applications are prepared using Sheer UI. Sitecore Process Enablement and Accelerator Kit (SPEAK) is the latest framework to develop Sitecore applications with a consistent interface quickly and easily. SPEAK gives you a predefined set of page layouts and components.
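As a rough illustration of the MVC approach mentioned above, a controller rendering is an ordinary ASP.NET MVC controller whose action returns a view; Sitecore invokes the action and injects the output into the placeholder the rendering is bound to. The controller, view model, and field names below are hypothetical sketches; only the Sitecore.Mvc types are part of the platform:

using System.Web.Mvc;
using Sitecore.Mvc.Controllers;
using Sitecore.Mvc.Presentation;

// A hypothetical controller rendering for a promo component.
public class PromoController : SitecoreController
{
    public ActionResult Promo()
    {
        // The rendering's data source item, falling back to the context item of the requested page.
        var item = RenderingContext.Current.Rendering.Item ?? Sitecore.Context.Item;

        var model = new PromoViewModel
        {
            Title = item["Title"],   // field access by field name
            Text = item["Text"]
        };
        return View(model);          // resolves to the view configured for this rendering
    }
}

public class PromoViewModel
{
    public string Title { get; set; }
    public string Text { get; set; }
}

The same component could equally be built as a view rendering (a plain .cshtml file) when no controller logic is needed.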
Component-based architecture

Sitecore is built on a component-based architecture, which provides us with loosely coupled, independent components. The main advantage of these components is their reusability and loosely coupled, independent behaviour. The architecture aims to provide reusability of components at the page level, site level, and Sitecore instance level to support multisite or multitenant solutions. Components in Sitecore are built with the normal layered approach, where the components are split into layers such as presentation, business logic, data layer, and so on. Sitecore provides different presentation components, including layouts, sublayouts, web control renderings, MVC renderings, and placeholders. Sitecore manages different components in logical groupings by their templates, layouts, sublayouts, renderings, devices, media, content items, and so on.

Layout engine

The Sitecore layout engine extends the ASP.NET web application server to merge content with presentation logic dynamically when web clients request resources. A layout can be a web form page (.aspx) or an MVC view (.cshtml) file. A layout can have multiple placeholders to place content at predefined positions, where the controls are placed. Controls can be HTML markup controls such as a sublayout (.ascx) file or MVC view (.cshtml) file, or other renderings such as web controls and controller renderings, which can contain business logic. Once the request criteria, such as item, language, and device, are resolved by the layout engine, it renders the different controls and assembles their output into the relevant placeholders on the layout. The layout engine provides both static and dynamic binding, and with dynamic binding we can have clean HTML markup and reusability of all the controls or components. Binding of controls, layouts, and devices can be applied on Sitecore content items themselves. Once the layout engine renders the page, you can see how the controls are bound to the layout. The layout engine in Sitecore is responsible for layout rendering, device detection, the rule engine, and personalization.

Multilingual support

In Sitecore, content can be maintained in any number of languages. It provides easier integration with external translation providers for seamless translation and also supports the dynamic creation of multilingual web pages. Sitecore also supports the language fallback feature on the field, item, and template level, which makes life easier for content owners and developers. It also supports chained fallback.
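As a small sketch of how languages surface in the API (the item path is only an example, and the exact fallback behaviour depends on how language fallback is configured), the same item can be read in different language versions:

using Sitecore.Data;
using Sitecore.Data.Items;
using Sitecore.Globalization;

public class LanguageExamples
{
    // A minimal sketch: read the same content item in two language versions.
    public void ReadItemInLanguages()
    {
        Database master = Sitecore.Configuration.Factory.GetDatabase("master");

        Item english = master.GetItem("/sitecore/content/Home", Language.Parse("en"));
        Item german = master.GetItem("/sitecore/content/Home", Language.Parse("de"));

        // With language fallback enabled, a field that has no value in the German version
        // falls back to the configured fallback language instead of returning an empty string.
        string germanTitle = german["Title"];
    }
}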
Multi-device support

Devices represent different types of web clients that connect to the Internet and place HTTP requests. Each device represents a different type of web client, and each device can have unique markup requirements. As we saw, the layout engine applies the presentation components specified for the context device to the layout details of the context item. In the same way, developers can use devices to format the context item output using different collections of presentation components for various types of web clients. Dynamically assembled content can be transformed to conform to virtually any output format, such as mobile, tablet, desktop, print, or RSS. Sitecore also supports the device fallback feature, so that any web page not supported for the requesting device can still be served through the fallback device. It also supports chained fallback for devices.

Multi-site capabilities

There are many ways to manage multiple sites on a single Sitecore installation. For example, you can host multiple regional domains with different regional languages as the default language for a single site: http://www.sitecorecookbook.com will serve English content, http://www.sitecorecookbook.de will serve German content of the same website, and so on. Another way is to create multiple websites for different subsidiaries or franchises of a company. In this approach, you can share some common resources across all the sites, such as templates, renderings, user interface elements, and other content or media items, but have unique content and pages, so that each website has a separate existence in Sitecore. Sitecore has security capabilities so that each franchise or subsidiary can manage its own website independently without affecting other websites. Developers have full flexibility to re-architect Sitecore's multisite setup as per business needs. Sitecore also supports a multitenant multisite architecture so that each website can work as an individual physical website.

Caching

Caching plays a very important role in website performance. Sitecore contains multiple levels of caching, such as the prefetch cache, data cache, item cache, and HTML cache. Apart from this, Sitecore creates other caches such as the standard values cache, filtered item cache, registry cache, media cache, user cache, proxy cache, AccessResult cache, and so on. This makes understanding all the Sitecore caches really important. Sitecore caching is a very vast topic to cover; you can read more about it at http://sitecoreblog.patelyogesh.in/2013/06/how-sitecore-caching-work.html.

Configuration factory

Sitecore is configured using IIS's configuration file, Web.config. The Sitecore configuration factory allows you to configure pipelines, events, scheduling agents, commands, settings, properties, and configuration nodes in Web.config files, which can be defined under the /configuration/sitecore path. Configurations inside this path can be spread out between multiple files to make them scalable. This process is often called config patching. Instead of touching the Web.config file, Sitecore provides the Sitecore.config file in the App_Config\Include directory, which contains all the important Sitecore configurations. Functionality-specific configurations are split into a number of .config files, which you can find in its subdirectories. These .config files are merged into a single configuration file at runtime, which you can evaluate using http://<domain>/sitecore/admin/showconfig.aspx. Thus, developers create custom .config files in the App_Config\Include directory to introduce, override, or delete settings, properties, configuration nodes, and attributes without touching Sitecore's default .config files. This makes managing .config files very easy from development to deployment. You can learn more about file patching from https://sdn.sitecore.net/upload/sitecore6/60/include_file_patching_facilities_sc6orlater-a4.pdf.

Dependency injection in .NET has become very common nowadays. If you want to build generic and reusable functionality, you will surely go for an inversion of control (IoC) framework. Fortunately, Sitecore provides a solution that allows you to easily use different IoC frameworks between projects. Using patch files, Sitecore allows you to define objects that will be available at runtime. These nodes are defined under /configuration/sitecore and can be retrieved using the Sitecore API. We can define types, constructors, methods, properties, and their input parameters in logical nodes inside the nodes of pipelines, events, scheduling agents, and so on. You can find more examples of this at http://sitecore-community.github.io/docs/documentation/Sitecore%20Fundamentals/Sitecore%20Configuration%20Factory/.
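As a small illustration of config patching (the file name, setting name, and value here are only examples), a patch file dropped into App_Config\Include can add or override settings without touching Sitecore.config or Web.config:

<!-- App_Config\Include\MyProject.Settings.config (hypothetical file name) -->
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <settings>
      <!-- a new, project-specific setting -->
      <setting name="MyProject.ApiTimeoutSeconds" value="30" />
      <!-- overriding the value of an existing setting uses patch:attribute, for example:
           <setting name="MailServer">
             <patch:attribute name="value">smtp.example.com</patch:attribute>
           </setting> -->
    </settings>
  </sitecore>
</configuration>

The merged result can be verified on the showconfig.aspx page mentioned above, and the value can be read in code with Sitecore.Configuration.Settings.GetSetting("MyProject.ApiTimeoutSeconds").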
Pipelines An operation to be performed in multiple steps can be carried out using the pipeline system, where each individual step is defined as a processor. Data processed from one processor is then carried to the next processor in arguments. The flow of the pipeline can be defined in XML format in the .config files. You can find default pipelines in the Sitecore.config file or patch file under the <pipelines> node (which are system processes) and the <processors> node (which are UI processes). The following image visualizes the pipeline and processors concept: Each processor in a pipeline contains a method named Process() that accepts a single argument, Sitecore.Pipelines.PipelineArgs, to get different argument values and returns void. A processor can abort the pipeline, preventing Sitecore from invoking subsequent processors. A page request traverses through different pipelines such as <preProcessRequest>, <httpRequestBegin>, <renderLayout>, <httpRequestEnd>, and so on. The <httpRequestBegin> pipeline is the heart of the Sitecore HTTP request execution process. It defines different processors to resolve the site, device, language, item, layout, and so on sequentially, which you can find in Sitecore.config as follows: <httpRequestBegin>   ...   <processor type="Sitecore.Pipelines.HttpRequest.SiteResolver,     Sitecore.Kernel"/>   <processor type="Sitecore.Pipelines.HttpRequest.UserResolver,     Sitecore.Kernel"/>   <processor type="     Sitecore.Pipelines.HttpRequest.DatabaseResolver,     Sitecore.Kernel"/>   <processor type="     Sitecore.Pipelines.HttpRequest.BeginDiagnostics,     Sitecore.Kernel"/>   <processor type="     Sitecore.Pipelines.HttpRequest.DeviceResolver,     Sitecore.Kernel"/>   <processor type="     Sitecore.Pipelines.HttpRequest.LanguageResolver,     Sitecore.Kernel"/>   ... </httpRequestBegin> There are more than a hundred pipelines, and the list goes on increasing after every new version release. Sitecore also allows us to create our own pipelines and processors. Background jobs When you need to do some long-running operations such as importing data from external services, sending e-mails to subscribers, resetting content item layout details, and so on, we can use Sitecore jobs, which are asynchronous operations in the backend that you can monitor in a foreground thread (Job Viewer) of Sitecore Rocks or by creating a custom Sitecore application. The jobs can be invoked from the user interface by users or can be scheduled. Sitecore provides APIs to invoke jobs with many different options available. You can simply create and start a job using the following code: public void Run() {   JobOptions options = new JobOptions("Job Name", "Job Category",     "Site Name", "current object", "Task Method to Invoke", new     object[] { rootItem })   {     EnableSecurity = true,     ContextUser = Sitecore.Context.User,     Priority = ThreadPriority.AboveNormal   };   JobManager.Start(options); } You can schedule tasks or jobs by creating scheduling agents in the Sitecore.config file. You can also set their execution frequency. 
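Before moving on to the scheduling example that follows, here is a minimal sketch of a custom processor for the <httpRequestBegin> pipeline; it is not taken from the Sitecore documentation, and the namespace and class name are hypothetical. Processors in this pipeline receive an HttpRequestArgs object, which derives from PipelineArgs:

using Sitecore.Pipelines.HttpRequest;

namespace MyProject.Pipelines
{
    // Hypothetical processor: logs the path of the item that the earlier
    // resolver processors assigned to the context.
    public class LogResolvedItem : HttpRequestProcessor
    {
        public override void Process(HttpRequestArgs args)
        {
            // Do nothing if no content item was resolved for this request
            if (Sitecore.Context.Item == null)
            {
                return;
            }

            Sitecore.Diagnostics.Log.Info(
                "Resolved item: " + Sitecore.Context.Item.Paths.FullPath, this);
        }
    }
}

Such a processor would then be registered with a <processor type="MyProject.Pipelines.LogResolvedItem, MyProject" /> element added to the pipeline via a patch file.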
The following example shows you how Sitecore has configured PublishAgent, which publishes a site every 12 hours and simply executes the Run() method of the Sitecore.Tasks.PublishAgent class:

<scheduling>
  <agent type="Sitecore.Tasks.PublishAgent" method="Run"
    interval="12:00:00">
    <param desc="source database">master</param>
    <param desc="target database">web</param>
    <param desc="mode (full or smart or
      incremental)">incremental</param>
    <param desc="languages">en, da</param>
  </agent>
</scheduling>

Apart from this, Sitecore also provides the facility to define scheduled tasks in the database, which has the advantage that we can control each task's start and end date and time. A task can be run once or made recurring as well.

Workflow and publishing

Workflows are essential to the content author experience. Workflows ensure that items move through a predefined set of states before they become publishable. It is necessary to ensure that content receives the appropriate reviews and approvals before publication to the live website. Apart from workflow, Sitecore provides highly configurable security features, access permissions, and versioning. Sitecore also provides a full workflow history, such as when and by whom the content was edited, reviewed, or approved. It also allows you to restrict publishing as well as identify when an item is ready to be published. Publishing is an essential part of working in Sitecore. Every time you edit or create new content, you have to publish it to see it on your live website. When publishing happens, the item is copied from the master database to the web database, so the content of the web database is what is shown on the website. When multiple users are working on different content pages or media items, publishing restrictions and workflows play a vital role in making releases, embargoes, or go-lives successful. There are three types of publishing available in Sitecore:

Republish: This publishes every item, even if the items are already published.
Smart Publish: Sitecore compares the internal revision identifier of the item in the master and web databases. If the identifiers differ, the item has changed in the master database, so Sitecore publishes it; if they are the same, the item is skipped.
Incremental Publish: Every modified item is added to the publish queue. Once incremental publishing is done, Sitecore will publish all the items found in the publish queue and clear it.

Sitecore also supports the publishing of subitems as well as related items (for example, publishing a content item will also publish its related media items).

Search

Sitecore comes with out-of-the-box Lucene support. You can also switch your Sitecore search to Solr, which only requires installing Solr and enabling the Solr configuration files that Sitecore already provides. By default, Sitecore indexes its content in Lucene index files. The Sitecore search engine lets you search through millions of items in the content tree quickly, using different types of queries against Lucene or Solr indexes. Sitecore provides you with the following functionalities for content search: We can search content items and documents such as PDF, Word, and so on. It allows you to search content items based on preconfigured fields. It provides APIs to create and search composite fields as per business needs. It provides content search APIs to sort, filter, and page search results. We can apply wildcards to search complex results and autosuggest.
We can apply boosting to influence search results or elevate results by giving them more priority. We can create custom dictionaries and index files, which we can use to offer did you mean style suggestions to users. We can apply facets to refine search results, as we can see on e-commerce sites. We can apply different analyzers to find MoreLikeThis or similar results. We can tag content or media items to categorize them so that we can use features such as a tag cloud. It provides a scalable user interface to search content items and apply filters and operations to selected search results. It provides different indexing strategies to create transparent and diverse models for index maintenance. In short, Sitecore allows us to implement the different searching techniques that are available in Google or other search engines. Content authors often find it difficult to work with a large number of items. You can read more about Sitecore search at https://doc.sitecore.net/sitecore_experience_platform/content_authoring/searching/searching.

Security model

Sitecore has a reputation for making it very easy to set up security for users, roles, access rights, and so on. Sitecore follows the .NET security model, so we get all the basic features of .NET membership in Sitecore, which offers several advantages:

A variety of plug-and-play features provided directly by Microsoft
The option to replace or extend the default configuration with custom providers
It is also possible to store the accounts in different storage areas using several providers simultaneously
Sitecore provides item-level and field-level rights and an option to create custom rights as well
Dynamic user profile structure and role management are possible just through the user interface, which is simpler and easier compared to pure ASP.NET solutions
It provides easier implementation for integration with external systems
Even after having an extended wrapper on the .NET solution, we get the same performance as a pure ASP.NET solution

Experience analytics and personalization

Sitecore contains the state-of-the-art Analysis, Insights, Decisions, Automation (AIDA) framework, which is the heart of its marketing programs. It provides comprehensive analytics data and reports, insights from every website interaction with rules, behavior-based personalization, and marketing automation. Sitecore collects all the visitor interactions in a real-time, big data repository, the Experience Database (xDB), to increase the availability, scalability, and performance of the website. The Sitecore Marketing Foundation provides the following features: Sitecore uses MongoDB, a big marketing data repository that collects all customer interactions. It provides real-time data to marketers to automate interactions across all channels. It provides a unified 360-degree view of individual website visitors and in-depth analytics reports. It provides fundamental analytics measurement components such as goals and events to evaluate the effectiveness of online business and marketing campaigns. It provides comprehensive conditions and actions to achieve conditional and behavioral or predictive personalization, which helps show customers what they are looking for instead of forcing them to see what we want to show. Sitecore collects, evaluates, and processes omnichannel visitor behavioral patterns, which helps in planning more effective marketing campaigns and improving the user experience. Sitecore provides an engagement plan to control how your website interacts with visitors.
It helps nurture relationships with your visitors by adapting personalized communication based on the state they fall into. Sitecore provides an in-depth geolocation service, helpful in optimizing campaigns through segmentation, personalization, and profiling strategies. The Sitecore Device Detection service is helpful in personalizing the user experience or promotions based on the device a visitor uses. It provides different dimensions and reports to reflect data across the full taxonomy provided in the Marketing Control Panel. It provides different charting controls to get smart reports. It gives developers full flexibility to customize or extend all these features.

High performance and scalability

Sitecore supports heavy content management and content delivery usage with a large volume of data. Sitecore is architected for high performance and unlimited scalability. The Sitecore cache engine provides caching of the raw data as well as the rendered output data, which gives a high-performance platform. Sitecore uses the event queue concept for scalability; theoretically, this makes Sitecore scalable to any number of instances under a load balancer.

Summary

In this article, we discussed the importance of Sitecore and its key features. We also saw that Sitecore XP is not only an enterprise-level CMS, but also a web platform, which is the global leader in experience management.

Resources for Article: Further resources on this subject: Building a Recommendation Engine with Spark [article] Configuring a MySQL linked server on SQL Server 2008 [article] Features and utilities in SQL Developer Data Modeler [article]

4 important business intelligence considerations for the rest of 2019

Richard Gall
16 Sep 2019
7 min read
Business intelligence occupies a strange position, often overshadowed by fields like data science and machine learning. But it remains a critical aspect of modern business - indeed, the less attention the world appears to pay to it, the more it is becoming embedded in modern businesses. Where analytics and dashboards once felt like a shiny and exciting interruption in our professional lives, today it is merely the norm. But with business intelligence almost baked into the day to day routines and activities of many individuals, teams, and organizations, what does this actually mean in practice. For as much as we’d like to think that we’re all data-driven now, the reality is that there’s much we can do to use data more effectively. Research confirms that data-driven initiatives often fail - so with that in mind here’s what’s important when it comes to business intelligence in 2019. Popular business intelligence eBooks and videos Oracle Business Intelligence Enterprise Edition 12c - Second Edition Microsoft Power BI Quick Start Guide Implementing Business Intelligence with SQL Server 2019 [Video] Hands-On Business Intelligence with Qlik Sense Hands-On Dashboard Development with QlikView Getting the balance between self-service business intelligence and centralization Self-service business intelligence is one of the biggest trends to emerge in the last two years. In practice, this means that a diverse range of stakeholders (marketers and product managers for example) have access to analytics tools. They’re no longer purely the preserve of data scientists and analysts. Self-service BI makes a lot of sense in the context of today’s data rich and data-driven environment. The best way to empower team members to actually use data is to remove any bottlenecks (like a centralized data team) and allow them to go directly to the data and tools they need to make decisions. In essence, self-service business intelligence solutions are a step towards the democratization of data. However, while the notion of democratizing data sounds like a noble cause, the reality is a little more complex. There are a number of different issues that make self-service BI a challenging thing to get right. One of the biggest pain points, for example, are the skill gaps of teams using these tools. Although self-service BI should make using data easy for team members, even the most user-friendly dashboards need a level of data literacy to be useful. Read next: What are the limits of self-service BI? Many analytics products are being developed with this problem in mind. But it’s still hard to get around - you don’t, after all, want to sacrifice the richness of data for simplicity and accessibility. Another problem is the messiness of data itself - and this ultimately points to one of the paradoxes of self-service BI. You need strong alignment - centralization even - if you’re to ensure true democratization. The answer to all this isn’t to get tied up in decentralization or centralization. Instead, what’s important is striking a balance between the two. Decentralization needs centralization - there needs to be strong governance and clarity over what data exists, how it’s used, how it’s accessed - someone needs to be accountable for that for decentralized, self-service BI to actually work. 
Read next: How Qlik Sense is driving self-service Business Intelligence Self-service business intelligence: recommended viewing Power BI Masterclass - Beginners to Advanced [Video] Data storytelling that makes an impact Data storytelling is a phrase that’s used too much without real consideration as to what it means or how it can be done. Indeed, all too often it’s used to refer to stylish graphs and visualizations. And yes, stylish graphs and data visualizations are part of data storytelling, but you can’t just expect some nice graphics to communicate in depth data insights to your colleagues and senior management. To do data storytelling well, you need to establish a clear sense of objectives and goals. By that I’m not referring only to your goals, but also those of the people around you. It goes without saying that data and insight needs context, but what that context should be, exactly, is often the hard part - objectives and aims are perhaps the straightforward way of establishing that context and ensuring your insights are able to establish the scope of a problem and propose a way forward. Data storytelling can only really make an impact if you are able to strike a balance between centralization and self-service. Stakeholders that use self-service need confidence that everything they need is both available and accurate - this can only really be ensured by a centralized team of data scientists, architects, and analysts. Data storytelling: recommend viewing Data Storytelling with Qlik Sense [Video] Data Storytelling with Power BI [Video] The impact of cloud It’s impossible to properly appreciate the extent to which cloud is changing the data landscape. Not only is it easier than ever to store and process data, it’s also easy to do different things with it. This means that it’s now possible to do machine learning, or artificial intelligence projects with relative ease (the word relative being important, of course). For business intelligence, this means there needs to be a clear strategy that joins together every piece of the puzzle, from data collection to analysis. This means there needs to be buy-in and input from stakeholders before a solution is purchased - or built - and then the solution needs to be developed with every individual use case properly understood and supported. Indeed, this requires a combination of business acumen, soft skills, and technical expertise. A large amount of this will rest on the shoulders of an organization’s technical leadership team, but it’s also worth pointing out that those in other departments still have a part to play. If stakeholders are unable to present a clear vision of what their needs and goals are it’s highly likely that the advantages of cloud will pass them by when it comes to business intelligence. Cloud and business intelligence: recommended viewing Going beyond Dashboards with IBM Cognos Analytics [Video] Business intelligence ethics Ethics has become a huge issue for organizations over the last couple of years. With the Cambridge Analytica scandal placing the spotlight on how companies use customer data, and GDPR forcing organizations to take a new approach to (European) user data, it’s undoubtedly the case that ethical considerations have added a new dimension to business intelligence. But what does this actually mean in practice? Ethics manifests itself in numerous ways in business intelligence. Perhaps the most obvious is data collection - do you have the right to use someone’s data in a certain way? 
Sometimes the law will make it clear. But other times it will require individuals to exercise judgment and be sensitive to the issues that could arise. But there are other ways in which individuals and organizations need to think about ethics. Being data-driven is great, especially if you can approach insight in a way that is actionable and proactive. But at the same time it’s vital that business intelligence isn’t just seen as a replacement for human intelligence. Indeed, this is true not just in an ethical sense, but also in terms of sound strategic thinking. Business intelligence without human insight and judgment is really just the opposite of intelligence. Conclusion: business intelligence needs organizational alignment and buy-in There are many issues that have been slowly emerging in the business intelligence world for the last half a decade. This might make things feel confusing, but in actual fact it underlines the very nature of the challenges organizations, leadership teams, and engineers face when it comes to business intelligence. Essentially, doing business intelligence well requires you - and those around you - to tie all these different elements. It's certainly not straightforward, but with focus and a clarity of thought, it's possible to build a really effective BI program that can fulfil organizational needs well into the future.

Introduction to XenConvert

Packt
04 Sep 2013
3 min read
(For more resources related to this topic, see here.) System requirements Since XenConvert can only convert Windows-based hosts and installs on the same host, the requirements are pretty much the same, as follows: Operating system: Windows XP, Windows Vista, Windows 7, Windows Server 2003 (SP1 or later), Windows Server 2008 (R2) .Net Framework 4.0 Disk Space: 40 MB free disk space XenServer version 6.0 or 6.1 Converting a physical machine to a virtual machine Let's take a quick look at how to convert a physical machine to a virtual machine. First we need to install XenConvert on the source physical machine. We can download XenConvert from this link: http://www.citrix.com/downloads/xenserver/tools/conversion.html. Once the standard Windows installation process is complete, launch the XenConvert tool; but before that we need to prepare the host machine for the conversion. To know more about XenConvert, refer to the XenConvert guide at http://support.citrix.com/article/CTX135017. Preparing the host machine For best results, prepare the host machine as follows: Enable Windows Automount on Windows Server operating systems. Disable Windows Autoplay. Remove any virtualization software before performing a conversion. Ensure that adequate free space exists at the destination, which is approximately 101 percent of used space of all source volumes. Remove any network interface teams; they are not applicable to a virtual machine. We need to run the XenConvert tool on the host machine to start the physical-to-virtual conversion. We can convert the physical machine directly to our XenServer if this host machine is accessible. The other options are to convert to VHD, OVF, or vDisk, which can be imported later on to XenServer using some methods. These options are more useful if we don't have enough disk space or connectivity with XenServer. I chose XenServer and clicked on Next . We can select multiple partitions to be included in the conversion, or select none from the drop-down menu in Source Volume and those disks won't be included in the conversion. We can also increase or decrease the size of the new virtual partition to be allocated for this virtual machine. Click on Next . We'll be asked to provide the details of the XenServer host. The hostname needs either an IP address or a FQDN of the XenServer; a username and password are standard login requirements. In the Workspace field, enter the path to the folder to store the intermediate OVF package that XenConvert will use during the conversion process. XenConvert will store the OVF package in the path we give. Click on Next to select the storage repositories found with XenServer and continue to the last step, in which we'll be provided with the summary of the conversion. Soon after the conversion is completed, we'll be able to have this new machine in our XenCenter. We'll need to have XenServer Tools installed on this new virtual machine. Summary In this article we covered an advanced topic that explained the process of converting a physical Windows server to a virtual machine using XenConvert. Resources for Article : Further resources on this subject: Citrix XenApp Performance Essentials [Article] Defining alerts [Article] Publishing applications [Article]

Build a foodie bot with JavaScript

Gebin George
03 May 2018
7 min read
Today, we are going to build a chatbot that can search for restaurants based on user goals and preferences. Let us begin by building Node.js modules to get data from Zomato based on user preferences. Create a file called zomato.js. Add a request module to the Node.js libraries using the following command in the console: This tutorial has been taken from Hands-On Chatbots and Conversational UI Development. > npm install request --save In zomato.js, add the following code to begin with: var request = require('request'); var baseURL = 'https://developers.zomato.com/api/v2.1/'; var apiKey = 'YOUR_API_KEY'; var catergories = null; var cuisines = null; getCategories(); getCuisines(76); Replace YOUR_API_KEY with your Zomato key. Let's build functions to get the list of categories and cuisines at startup. These queries need not be run when the user asks for a restaurant search because this information is pretty much static: function getCuisines(cityId){ var options = { uri: baseURL + 'cuisines', headers: { 'user-key': apiKey }, qs: {'city_id':cityId}, method: 'GET' } var callback = function(error, response, body) { if (error) { console.log('Error sending messages: ', error) } else if (response.body.error) { console.log('Error: ', response.body.error) } else { console.log(body); cuisines = JSON.parse(body).cuisines; } } request(options,callback); } The preceding code will fetch a list of cuisines available in a particular city (identified by a Zomato city ID). Let us add the code for identifying the list of categories: function getCategories(){ var options = { uri: baseURL + 'categories', headers: { 'user-key': apiKey }, qs: {}, method: 'GET' } var callback = function(error, response, body) { if (error) { console.log('Error sending messages: ', error) } else if (response.body.error) { console.log('Error: ', response.body.error) } else { categories = JSON.parse(body).categories; } } request(options,callback); } Now that we have the basic functions out of our way, let us code in the restaurant search code: function getRestaurant(cuisine, location, category){ var cuisineId = getCuisineId(cuisine); var categoryId = getCategoryId(category); var options = { uri: baseURL + 'locations', headers: { 'user-key': apiKey }, qs: {'query':location}, method: 'GET' } var callback = function(error, response, body) { if (error) { console.log('Error sending messages: ', error) } else if (response.body.error) { console.log('Error: ', response.body.error) } else { console.log(body); locationInfo = JSON.parse(body).location_suggestions; search(locationInfo[0], cuisineId, categoryId); } } request(options,callback); } function search(location, cuisineId, categoryId){ var options = { uri: baseURL + 'search', headers: { 'user-key': apiKey }, qs: {'entity_id': location.entity_id, 'entity_type': location.entity_type, 'cuisines': [cuisineId], 'categories': [categoryId]}, method: 'GET' } var callback = function(error, response, body) { if (error) { console.log('Error sending messages: ', error) } else if (response.body.error) { console.log('Error: ', response.body.error) } else { console.log('Found restaurants:') var results = JSON.parse(body).restaurants; console.log(results); } } request(options,callback); } The preceding code will look for restaurants in a given location, cuisine, and category. For instance, you can search for a list of Indian restaurants in Newington, Edinburgh that do delivery. We now need to integrate this with the chatbot code. Let us create a separate file called index.js. 
Let us begin with the basics: var restify = require('restify'); var builder = require('botbuilder'); var request = require('request'); var baseURL = 'https://developers.zomato.com/api/v2.1/'; var apiKey = 'YOUR_API_KEY'; var catergories = null; var cuisines = null; Chapter 6 [ 247 ] getCategories(); //setTimeout(function(){getCategoryId('Delivery')}, 10000); getCuisines(76); //setTimeout(function(){getCuisineId('European')}, 10000); // Setup Restify Server var server = restify.createServer(); server.listen(process.env.port || process.env.PORT || 3978, function () { console.log('%s listening to %s', server.name, server.url); }); // Create chat connector for communicating with // the Bot Framework Service var connector = new builder.ChatConnector({ appId: process.env.MICROSOFT_APP_ID, appPassword: process.env.MICROSOFT_APP_PASSWORD }); // Listen for messages from users server.post('/foodiebot', connector.listen()); Add the bot dialog code to carry out the restaurant search. Let us design the bot to ask for cuisine, category, and location before proceeding to the restaurant search: var bot = new builder.UniversalBot(connector, [ function (session) { session.send("Hi there! Hungry? Looking for a restaurant?"); session.send("Say 'search restaurant' to start searching."); session.endDialog(); } ]); // Search for a restaurant bot.dialog('searchRestaurant', [ function (session) { session.send('Ok. Searching for a restaurant!'); builder.Prompts.text(session, 'Where?'); }, function (session, results) { session.conversationData.searchLocation = results.response; builder.Prompts.text(session, 'Cuisine? Indian, Italian, or anything else?'); }, function (session, results) { session.conversationData.searchCuisine = results.response; builder.Prompts.text(session, 'Delivery or Dine-in?'); }, function (session, results) { session.conversationData.searchCategory = results.response; session.send('Ok. Looking for restaurants..'); getRestaurant(session.conversationData.searchCuisine, session.conversationData.searchLocation, session.conversationData.searchCategory, session); } ]) .triggerAction({ matches: /^search restaurant$/i, confirmPrompt: 'Your restaurant search task will be abandoned. Are you sure?' }); Notice that we are calling the getRestaurant() function with four parameters. Three of these are ones that we have already defined: cuisine, location, and category. To these, we have to add another: session. This passes the session pointer that can be used to send messages to the emulator when the data is ready. 
Notice how this changes the getRestaurant() and search() functions: function getRestaurant(cuisine, location, category, session){ var cuisineId = getCuisineId(cuisine); var categoryId = getCategoryId(category); var options = { uri: baseURL + 'locations', headers: { 'user-key': apiKey }, qs: {'query':location}, method: 'GET' } var callback = function(error, response, body) { if (error) { console.log('Error sending messages: ', error) } else if (response.body.error) { console.log('Error: ', response.body.error) } else { console.log(body); locationInfo = JSON.parse(body).location_suggestions; search(locationInfo[0], cuisineId, categoryId, session); } } request(options,callback); } function search(location, cuisineId, categoryId, session){ var options = { uri: baseURL + 'search', headers: { 'user-key': apiKey }, qs: {'entity_id': location.entity_id, 'entity_type': location.entity_type, 'cuisines': [cuisineId], 'category': categoryId}, method: 'GET' } var callback = function(error, response, body) { if (error) { console.log('Error sending messages: ', error) } else if (response.body.error) { console.log('Error: ', response.body.error) } else { console.log('Found restaurants:') console.log(body); //var results = JSON.parse(body).restaurants; //console.log(results); var resultsCount = JSON.parse(body).results_found; console.log('Found:' + resultsCount); session.send('I have found ' + resultsCount + ' restaurants for you!'); session.endDialog(); } } request(options,callback); } Once the results are obtained, the bot responds using session.send() and ends the dialog: Now that we have the results, let's present them in a more visually appealing way using cards. To do this, we need a function that can take the results of the search and turn them into an array of cards: function presentInCards(session, results){ var msg = new builder.Message(session); msg.attachmentLayout(builder.AttachmentLayout.carousel) var heroCardArray = []; var l = results.length; if (results.length > 10){ l = 10; } for (var i = 0; i < l; i++){ var r = results[i].restaurant; var herocard = new builder.HeroCard(session) .title(r.name) .subtitle(r.location.address) .text(r.user_rating.aggregate_rating) .images([builder.CardImage.create(session, r.thumb)]) .buttons([ builder.CardAction.imBack(session, "book_table:" + r.id, "Book a table") ]); heroCardArray.push(herocard); } msg.attachments(heroCardArray); return msg; } And we call this function from the search() function: function search(location, cuisineId, categoryId, session){ var options = { uri: baseURL + 'search', headers: { 'user-key': apiKey }, qs: {'entity_id': location.entity_id, 'entity_type': location.entity_type, 'cuisines': [cuisineId], 'category': categoryId}, method: 'GET' } var callback = function(error, response, body) { if (error) { console.log('Error sending messages: ', error) } else if (response.body.error) { console.log('Error: ', response.body.error) } else { console.log('Found restaurants:') console.log(body); var results = JSON.parse(body).restaurants; var msg = presentInCards(session, results); session.send(msg); session.endDialog(); } } request(options,callback); } Here is how it looks: We saw how to build a restaurant search bot, that gives you restaurant suggestions as per your preference. If you found our post useful check out Chatbots and Conversational UI Development. Top 4 chatbot development frameworks for developers How to create a conversational assistant using Python My friend, the robot: Artificial Intelligence needs Emotional Intelligence    

Build a Neural Network to recognize handwritten numbers in Keras and MNIST

Fatema Patrawala
20 Sep 2018
8 min read
A neural network is made up of many artificial neurons. Is it a representation of the brain or is it a mathematical representation of some knowledge? Here, we will simply try to understand how a neural network is used in practice. A convolutional neural network (CNN) is a very special kind of multi-layer neural network. CNN is designed to recognize visual patterns directly from images with minimal processing. A graphical representation of this network is produced in the following image. The field of neural networks was originally inspired by the goal of modeling biological neural systems, but since then it has branched in different directions and has become a matter of engineering and attaining good results in machine learning tasks. In this article we will look at building blocks of neural networks and build a neural network which will recognize handwritten numbers in Keras and MNIST from 0-9. This article is an excerpt taken from the book Practical Convolutional Neural Networks, written by Mohit Sewak, Md Rezaul Karim and Pradeep Pujari and published by Packt Publishing. An artificial neuron is a function that takes an input and produces an output. The number of neurons that are used depends on the task at hand. It could be as low as two or as many as several thousands. There are numerous ways of connecting artificial neurons together to create a CNN. One such topology that is commonly used is known as a feed-forward network: Each neuron receives inputs from other neurons. The effect of each input line on the neuron is controlled by the weight. The weight can be positive or negative. The entire neural network learns to perform useful computations for recognizing objects by understanding the language. Now, we can connect those neurons into a network known as a feed-forward network. This means that the neurons in each layer feed their output forward to the next layer until we get a final output. This can be written as follows: The preceding forward-propagating neuron can be implemented as follows: import numpy as np import math class Neuron(object):    def __init__(self):        self.weights = np.array([1.0, 2.0])        self.bias = 0.0    def forward(self, inputs):        """ Assuming that inputs and weights are 1-D numpy arrays and the bias is a number """        a_cell_sum = np.sum(inputs * self.weights) + self.bias        result = 1.0 / (1.0 + math.exp(-a_cell_sum)) # This is the sigmoid activation function        return result neuron = Neuron() output = neuron.forward(np.array([1,1])) print(output) Now that we have understood what are the building blocks of neural networks, let us get to building a neural network that will recognize handwritten numbers from 0 - 9. Handwritten number recognition with Keras and MNIST A typical neural network for a digit recognizer may have 784 input pixels connected to 1,000 neurons in the hidden layer, which in turn connects to 10 output targets — one for each digit. Each layer is fully connected to the layer above. A graphical representation of this network is shown as follows, where x are the inputs, h are the hidden neurons, and y are the output class variables: In this notebook, we will build a neural network that will recognize handwritten numbers from 0-9. The type of neural network that we are building is used in a number of real-world applications, such as recognizing phone numbers and sorting postal mail by address. To build this network, we will use the MNIST dataset. 
We will begin, as shown in the following code, by importing all the required modules, after which the data will be loaded, and then we will finally build the network:

# Import Numpy, keras and MNIST data
import numpy as np
import matplotlib.pyplot as plt
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.utils import np_utils

Retrieving training and test data

The MNIST dataset already comprises both training and test data. There are 60,000 data points of training data and 10,000 points of test data. If you do not have the data file locally under the ~/.keras/datasets/ path, it can be downloaded at this location. Each MNIST data point has:
An image of a handwritten digit
A corresponding label that is a number from 0-9 to help identify the image

The images will be the input to our neural network, X; their corresponding labels are y. We want our labels as one-hot vectors. One-hot vectors are vectors of many zeros and a single one. It's easiest to see this in an example. The number 0 is represented as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], and 4 is represented as [0, 0, 0, 0, 1, 0, 0, 0, 0, 0] as a one-hot vector.

Flattened data

We will use flattened data in this example, that is, a representation of MNIST images in one dimension rather than two. Thus, each 28 x 28 pixel image will be represented as a 784-pixel one-dimensional array. By flattening the data, information about the 2D structure of the image is thrown away; however, our data is simplified. With the help of this, all our training data can be contained in one array of shape (60,000, 784), wherein the first dimension represents the number of training images and the second depicts the number of pixels in each image. This kind of data is easy to analyze using a simple neural network, as follows:

# Retrieving the training and test data
(X_train, y_train), (X_test, y_test) = mnist.load_data()
print('X_train shape:', X_train.shape)
print('X_test shape: ', X_test.shape)
print('y_train shape:', y_train.shape)
print('y_test shape: ', y_test.shape)

Visualizing the training data

The following function will help you visualize the MNIST data.
By passing in the index of a training example, the display_digit function will display that training image along with its corresponding label in the title:

# Visualize the data
import matplotlib.pyplot as plt
%matplotlib inline

# Displaying a training image by its index in the MNIST set
def display_digit(index):
    label = y_train[index].argmax(axis=0)
    image = X_train[index]
    plt.title('Training data, index: %d,  Label: %d' % (index, label))
    plt.imshow(image, cmap='gray_r')
    plt.show()

# Displaying the first (index 0) training image
display_digit(0)

X_train = X_train.reshape(60000, 784)
X_test = X_test.reshape(10000, 784)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
print("Train the matrix shape", X_train.shape)
print("Test the matrix shape", X_test.shape)

# One-hot encoding of labels
from keras.utils.np_utils import to_categorical
print(y_train.shape)
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)
print(y_train.shape)

Building the network

For this example, you'll define the following: The input layer, which you should expect for each piece of MNIST data, as it tells the network the number of inputs. Hidden layers, as they recognize patterns in data and also connect the input layer to the output layer. The output layer, as it defines how the network learns and gives a label as the output for a given image. These are built as follows:

# Defining the neural network
def build_model():
    model = Sequential()
    model.add(Dense(512, input_shape=(784,)))
    model.add(Activation('relu'))
    # An "activation" is just a non-linear function that is applied to the output
    # of the above layer. In this case, with a "rectified linear unit",
    # we clamp all values below 0 to 0.
    model.add(Dropout(0.2))
    # Dropout helps protect the model from memorizing or "overfitting" the training data
    model.add(Dense(512))
    model.add(Activation('relu'))
    model.add(Dropout(0.2))
    model.add(Dense(10))
    model.add(Activation('softmax'))
    # This special "softmax" activation ensures that the output is a valid
    # probability distribution, meaning that the values obtained are all
    # non-negative and sum up to 1.
    return model

# Building the model
model = build_model()

model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])

Training the network

Now that we've constructed the network, we feed it with data and train it, as follows:

# Training
model.fit(X_train, y_train, batch_size=128, nb_epoch=4, verbose=1, validation_data=(X_test, y_test))

Testing

After you're satisfied with the training output and accuracy, you can run the network on the test dataset to measure its performance! A good result will obtain an accuracy higher than 95%. Some simple models have been known to achieve even up to 99.7% accuracy! We can test the model, as shown here:

# Comparing the labels predicted by our model with the actual labels
score = model.evaluate(X_test, y_test, batch_size=32, verbose=1, sample_weight=None)
# Printing the result
print('Test score:', score[0])
print('Test accuracy:', score[1])

To summarize, we got to know the building blocks of neural networks and successfully built a neural network that recognizes handwritten numbers using the MNIST dataset in Keras. To implement award winning and cutting edge CNN architectures, check out this one stop guide published by Packtpub, Practical Convolutional Neural Networks. Are Recurrent Neural Networks capable of warping time? Recurrent neural networks and the LSTM architecture Build a generative chatbot using recurrent neural networks (LSTM RNNs)

Setting Up Python Development Environment on Mac OS X

Packt
19 Mar 2010
12 min read
I decided to use the /usr/local directory as destination and compile everything from source. Some people favor binaries. However, binary distributions are often pre-packaged and end up in some sort of installer - they could contain certain things that we dislike and so on. This is also the case with Python on Mac OS X. You can download binary distribution from the official Python website (and you are in fact encouraged to do so, if using OS X), which suffers exactly from these kind of problems. It comes with an installer and installs some stuff out of the /usr/local directory which we don't need. It maybe useful to some of the Cocoa developers who deal with the Python code also, as it eases the installation of the PyObjC (a bridge between the Python and Objective-C) later on. But we don't need that either. We will end up with the pure, lean and mean installation of Python and some supportive applications. Setting Up Python Development Environment on Mac OS X Background First, a bit of a background. You run Mac OS X, one of the finest operating systems available to date. It's not just the looks, there is a fine machinery under the hood that makes it so good. OS X is essentially a UNIX system, more specifically BSD UNIX. It contains large chunks of FreeBSD code which is a good thing, because FreeBSD is a very robust and quality product. On top of that, there is a beautiful Aqua interface for you to enjoy. For some people, this combination is the best of both worlds. Because of the UNIX background, you get many benefits as well. For instance, you have quite a collection of useful UNIX tools available for you to use. In fact, Apple even ships Python out of the box! You may as well ask yourself why do I then need to install and set it up? - Good question. It depends. OS X is a commercial product and it takes some time from release to release and because of that Python version that is shipped with the current OS X (10.4) is a bit old. Maybe that is not an issue for you, but for some people it is. We will focus on getting and installing the newest Python version as that brings some other benefits we will mention later. Remember that /usr/local thing from the beginning? And what's up with that "safe heaven" talk? Well, there is a thing that is called Filesystem Hierarchy Standard, or FHS. The FHS sets up some basic conventions for use by UNIX and UNIX-like systems when mapping out a filesystem. Mac OS X breaks it at some places (as do many other UNIX variants) but most systems respect it. The FHS defines the /usr/local as the "tertiary hierarchy for local data installed by the system administrator", which basically means that it is the safe, standard place for you to put your own custom compiled programs there. Using the /usr/local directory for this purpose is important for many reasons but there is one that is most critical: System Updates. System Updates are automatic methods used by operating systems to deliver newer versions of the software to their users. This new software pieces are then installed at their usual location often with brute force, regardless of what was there before. So, for instance if you had modified or installed some newer version of some important system software, the Software Update process will overwrite it, thus rendering your changes lost. To overcome this problem, we will install all of our custom software in this safe place, the /usr/local directory. Getting to Work At last, the fun part (or so I hope). Requirements First, some prerequisites. 
You will need the following to get going: Mac OS X 10.4 XCode 2.4 or newer (this contains the necessary compilers) XCode is not installed by default on new Macs, but it can be obtained from the Mac OS X install DVD or from the Apple Developer Connection for free. Strategy As you might have guessed from the previous discussion, I decided to use the /usr/local directory as destination and compile everything from source. Some people favor binaries. However, binary distributions are often pre-packaged and end up in some sort of installer - they could contain certain things that we dislike and so on. This is also the case with Python on Mac OS X. You can download binary distribution from the official Python website (and you are in fact encouraged to do so, if using OS X), which suffers exactly from these kind of problems. It comes with an installer and installs some stuff out of the /usr/local directory which we don't need. It maybe useful to some of the Cocoa developers who deal with the Python code also, as it eases the installation of the PyObjC (a bridge between the Python and Objective-C) later on. But we don't need that either. We will end up with the pure, lean and mean installation of Python and some supportive applications. Additional benefit of compiling from source is that we can look through the actual source code and audit or modify it before we actually install it. I will focus on Python installation that is web development oriented. You will end up with a basic set of tools which you can use to build database-driven web sites powered by Python scripting language. Let's begin, shall we? Using /usr/local In order for all this to work, we will have to make some slight adjustments. For a system to see our custom Python installation, we will have to set the path to include /usr/local first. Mac OS X, like other UNIX systems, uses a "path" to determine where it should look for UNIX applications. The path is just an environment variable that is executed (if it's set) each time you open a new Terminal window. To set up a path, either create or edit a file .bash_login (notice the dot, it's hidden file) in your home directory using a text editor. I recommend the following native OS X text editors: TextMate, BBEdit or TextWrangler, and the following UNIX editors: Emacs or vi(m). To edit the file with TextMate for example, fire up the Terminal and type: mate ~/.bash_login This will open the file with TextMate. Now, add the following line at the end of the file: export PATH="/usr/local/bin:$PATH" After you save and close the file, apply the changes (from terminal) with the following command: . ~/.bash_login While we're at it, we could just as well (using the previous method) enter the following line to make the Terminal UTF-8 aware: export LC_CTYPE=en_US.UTF-8 In general, you should be using UTF-8 anyway, so this is just a bonus. And it is even required to do for some things to work, Subversion for example has some problems if this isn't set. Setting Up the Working Directory It's nice to have a working directory where you will download all the source files and possibly revert to it later. We'll create a directory called src in the /usr/local directory: sudo mkdir -p /usr/local/srcsudo chgrp admin /usr/local/srcsudo chmod -R 775 /usr/local/srccd /usr/local/src Notice the sudo command. It means "superuser do" or "substitute user and do". It will ask you for your password. Just enter it when asked. We are now in this new working directory and will download and compile everything here. 
Python Finally we are all set up for the actual work. Just enter all the following commands correctly and you should be good to go. We are starting off with Python, but to compile Python properly we will first install some prerequisites, like readline and sqlite. Technically, SQLite isn't required, but it is necessary to compile it first, so that later on Python picks up its libraries and makes use of them. One of the new things in the newest Python 2.5 is native SQLite database driver. So, we will kill two birds with one stone ;-). curl -O ftp://ftp.cwru.edu/pub/bash/readline-5.2.tar.gztar -xzvf readline-5.2.tar.gzcd readline-5.2./configure --prefix=/usr/localmakesudo make installcd .. If you get an error about no acceptable C compiler, then you haven't installed XCode. We can now proceed with SQLite installation. curl -O http://www.sqlite.org/sqlite-3.3.13.tar.gztar -xzvf sqlite-3.3.13.tar.gzcd sqlite-3.3.13./configure --prefix=/usr/local --with-readline-dir=/usr/localmakesudo make installcd .. Finally, we can download and install Python itself. curl -O http://www.python.org/ftp/python/2.5/Python-2.5.tgztar -xzvf Python-2.5.tgzcd Python-2.5./configure --prefix=/usr/local --with-readline-dir=/usr/local --with-sqlite3=/usr/localmakesudo make installcd .. This should leave us with the core Python and SQLite installation. We can verify this by issuing the following commands: python -Vsqlite3 -version Those commands should report new version numbers we just compiled (2.5 for Python and 3.3.13 for SQLite). Do the happy dance now! Before we get to excited, we should also verify are they properly linked together by entering interactive Python interpreter and issuing few commands (don't type ">>>", these are here for illustrative purposes, because you also get them in the interpreter): python>>> import sqlite3>>> sqlite3.sqlite_version'3.3.13' Press C-D (that's CTRL + D) to exit the interactive Python interpreter. If your session looks like the one above, we're all set. If you get some error about missing modules, that means something is not right. Did you follow all the steps as mentioned above? We now have the Python and SQLite installed. The rest is up to you. Do you want to program sites in Django, CherryPy, Pylons, TurboGears, web.py etc.? Just install the web framework you are interested with. Need any additional modules, like Beautiful Soup for parsing HTML? Just go ahead and install it… For development needs, all frameworks I tried come with a suitable development server, so you don't need to install any web server to get started. CherryPy in addition even comes with a great production-ready WSGI web server. Also, for all your database needs, I find SQLite more then adequate while in development mode. I even find it more then enough for some live sites also. It's great little zero-configuration database. If you have bigger needs, it's easy to switch to some other database on the production server (you are planning to use some database abstraction layer, do you?). For completeness sake, let's pretend you're going to develop sites with CherryPy as web framework, SQLite as database, SQLAlchemy as database abstraction layer (toolkit, ORM) and Mako for templates. So, we are missing CherryPy, SQLAlchemy and Mako. 
Let's get them while they're hot: cd /usr/local/srccurl -O http://download.cherrypy.org/cherrypy/3.0.1/CherryPy-3.0.1.tar.gztar -xzvf CherryPy-3.0.1.tar.gzcd CherryPy-3.0.1sudo python setup.py installcd ..curl -O http://cheeseshop.python.org/packages/source/S/SQLAlchemy/SQLAlchemy-0.3.5.tar.gztar -xzvf SQLAlchemy-0.3.5.tar.gzcd SQLAlchemy-0.3.5sudo python setup.py installcd ..curl -O http://www.makotemplates.org/downloads/Mako-0.1.4.tar.gztar -xzvf Mako-0.1.4.tar.gzcd Mako-0.1.4sudo python setup.py installcd .. Do the happy dance again! This same pattern applies for many other Python web frameworks and modules. What have we just achieved? Well, we now have "invisible" Python web development environment which is clean, fast, self-contained and in the safe place to rest on. Combine it with TextMate (or any other text editor you like) and you will have some serious good times. Again, for even more completeness sake, we will cover Subversion. Subversion is a version control system. Sounds exciting, eh? Actually, it's a very powerful and sane thing to learn and use. But, I'm not covering it because of actual version control, but because many software projects use it, so you will sometimes have the need to checkout (download your own local copy) some projects code. For example, Django project uses it, and their development version is often better than the actual released "stable" version. So, the only way of having (and keeping up with) the development version is to use Subversion to obtain it and keep it updated. All you usually need to do in order to obtain the latest revision of some software is to issue the following command (example for Django): svn co http://code.djangoproject.com/svn/django/trunk/ django_src Here are the steps to download and compile Subversion: curl -O http://subversion.tigris.org/downloads/subversion-1.4.3.tar.gzcurl -O http://subversion.tigris.org/downloads/subversion-deps-1.4.3.tar.gztar -xzvf subversion-1.4.3.tar.gztar -xzvf subversion-deps-1.4.3.tar.gzcd subversion-1.4.3./configure --prefix=/usr/local --with-openssl --with-ssl --with-zlibmakesudo make installcd .. However, even on some moderate recent computer hardware, Subversion can take a long time to compile. If that's the case you don't want to compile it, or you simply just use it for time to time to do some checkouts, you may prefer to download some pre-compiled binary. I know what I said about binaries before, but there is a very fine one over at Martin Ott. It's packaged as a standard Mac OS X installer, and it installs just where it should, in /usr/local directory. When speaking about version control, I'm more a decentralized version control person. I really like Mercurial — it's fast, small, lightweight, but it also scales fairly well for more demanding scenarios. And guess what, it's also written in Python. So, go ahead, install it too, and start writing those nice Python powered web sites! That would be all from me today. While I provided the exact steps for you to follow, that doesn't mean that you should pick the same components. These days (coming from the Django background), I'm learning Pylons, Mako, SQLAlchemy, Elixir and a couple of other components. It makes sense currently, as Pylons is strongly built around WSGI compliance and philosophy which makes the components more reusable and should make it easier to switch to or from any other Python WSGI-centric framework in the future. Good luck!

Distributed training in TensorFlow 2.x

Expert Network
30 Apr 2021
7 min read
TensorFlow 2 is a rich development ecosystem composed of two main parts: Training and Serving. Training consists of a set of libraries for dealing with datasets (tf.data), a set of libraries for building models, including high-level libraries (tf.Keras and Estimators), low-level libraries (tf.*), and a collection of pretrained models (tf.Hub). Training can happen on CPUs, GPUs, and TPUs via distribution strategies, and the result can be saved using the appropriate libraries.

This article is an excerpt from the book, Deep Learning with TensorFlow 2 and Keras, Second Edition by Antonio Gulli, Amita Kapoor, and Sujit Pal. This book teaches deep learning techniques alongside TensorFlow (TF) and Keras. In this article, we'll review the addition of the powerful new feature, distributed training, in TensorFlow 2.x.

One very useful addition to TensorFlow 2.x is the possibility to train models using distributed GPUs, multiple machines, and TPUs in a very simple way with very few additional lines of code. tf.distribute.Strategy is the TensorFlow API used in this case, and it supports both the tf.keras and tf.estimator APIs as well as eager execution. You can switch between GPUs, TPUs, and multiple machines by just changing the strategy instance. Strategies can be synchronous, where all workers train over different slices of input data in a form of synchronous data-parallel computation, or asynchronous, where updates from the optimizers are not happening in sync. All strategies require that data is loaded in batches via the tf.data.Dataset API.

Note that the distributed training support is still experimental. A roadmap is given in Figure 1:

Figure 1: Distributed training support for different strategies and APIs

Let's discuss in detail all the different strategies reported in Figure 1.

Multiple GPUs

TensorFlow 2.x can utilize multiple GPUs. If we want to have synchronous distributed training on multiple GPUs on one machine, there are two things that we need to do: (1) we need to load the data in a way that will be distributed into the GPUs, and (2) we need to distribute some computations into the GPUs too.

In order to load our data in a way that can be distributed into the GPUs, we simply need tf.data.Dataset (which has already been discussed in the previous paragraphs). If we do not have a tf.data.Dataset but we have a normal tensor, then we can easily convert the latter into the former using tf.data.Dataset.from_tensor_slices(). This will take a tensor in memory and return a source dataset, the elements of which are slices of the given tensor. In our toy example, we use NumPy to generate training data x and labels y, and we transform it into a tf.data.Dataset with tf.data.Dataset.from_tensor_slices(). Then we apply a shuffle to avoid bias in training across GPUs and then generate SIZE_BATCHES batches:

import tensorflow as tf
import numpy as np
from tensorflow import keras

N_TRAIN_EXAMPLES = 1024*1024
N_FEATURES = 10
SIZE_BATCHES = 256

# 10 random floats in the half-open interval [0.0, 1.0).
x = np.random.random((N_TRAIN_EXAMPLES, N_FEATURES))
y = np.random.randint(2, size=(N_TRAIN_EXAMPLES, 1))
x = tf.dtypes.cast(x, tf.float32)
print(x)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.shuffle(buffer_size=N_TRAIN_EXAMPLES).batch(SIZE_BATCHES)

In order to distribute some computations to GPUs, we instantiate a distribution = tf.distribute.MirroredStrategy() object, which supports synchronous distributed training on multiple GPUs on one machine.
Then, we move the creation and compilation of the Keras model inside the strategy.scope(). Note that each variable in the model is mirrored across all the replicas. Let's see it in our toy example:

# this is the distribution strategy
distribution = tf.distribute.MirroredStrategy()

# this piece of code is distributed to multiple GPUs
with distribution.scope():
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Dense(16, activation='relu', input_shape=(N_FEATURES,)))
    model.add(tf.keras.layers.Dense(1, activation='sigmoid'))
    optimizer = tf.keras.optimizers.SGD(0.2)
    model.compile(loss='binary_crossentropy', optimizer=optimizer)

model.summary()

# Optimize in the usual way but in reality you are using GPUs.
model.fit(dataset, epochs=5, steps_per_epoch=10)

Note that each batch of the given input is divided equally among the multiple GPUs. For instance, if using MirroredStrategy() with two GPUs, each batch of size 256 will be divided among the two GPUs, with each of them receiving 128 input examples for each step. In addition, note that each GPU will optimize on the received batches and the TensorFlow backend will combine all these independent optimizations on our behalf. In short, using multiple GPUs is very easy and requires minimal changes to the tf.Keras code used for a single server.

MultiWorkerMirroredStrategy

This strategy implements synchronous distributed training across multiple workers, each one with potentially multiple GPUs. As of September 2019 the strategy works only with Estimators and it has experimental support for tf.Keras. This strategy should be used if you are aiming at scaling beyond a single machine with high performance. Data must be loaded with tf.data.Dataset and shared across workers so that each worker can read a unique subset.

TPUStrategy

This strategy implements synchronous distributed training on TPUs. TPUs are Google's specialized ASIC chips designed to significantly accelerate machine learning workloads in a way that is often more efficient than GPUs. According to this public information (https://github.com/tensorflow/tensorflow/issues/24412):

"the gist is that we intend to announce support for TPUStrategy alongside Tensorflow 2.1. Tensorflow 2.0 will work under limited use-cases but has many improvements (bug fixes, performance improvements) that we're including in Tensorflow 2.1, so we don't consider it ready yet."

ParameterServerStrategy

This strategy implements either multi-GPU synchronous local training or asynchronous multi-machine training. For local training on one machine, the variables of the models are placed on the CPU and operations are replicated across all local GPUs. For multi-machine training, some machines are designated as workers and some as parameter servers, with the variables of the model placed on the parameter servers. Computation is replicated across all GPUs of all workers. Multiple workers can be set up with the environment variable TF_CONFIG, as in the following example:

os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {
        "worker": ["host1:port", "host2:port", "host3:port"],
        "ps": ["host4:port", "host5:port"]
    },
    "task": {"type": "worker", "index": 1}
})

In this article, we have seen how it is possible to train models using distributed GPUs, multiple machines, and TPUs in a very simple way with very few additional lines of code.
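To make the multi-worker case a little more concrete, the following sketch shows one way MultiWorkerMirroredStrategy could be wired up with tf.keras. It mirrors the single-machine toy example above; the host names and worker index in TF_CONFIG are placeholders, and the exact module path of the strategy has moved between TensorFlow 2.x releases, so treat this as an illustration rather than code from the book.

import json
import os

import numpy as np
import tensorflow as tf

# Each worker must set its own TF_CONFIG before starting; the hosts and the
# worker index below are placeholders for this sketch.
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {"worker": ["host1:12345", "host2:12345"]},
    "task": {"type": "worker", "index": 0}
})

strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()

# The same kind of toy data as in the single-machine example.
x = np.random.random((1024, 10)).astype("float32")
y = np.random.randint(2, size=(1024, 1))
dataset = tf.data.Dataset.from_tensor_slices((x, y)).shuffle(1024).batch(64)

# Model creation and compilation go inside the strategy scope, exactly as
# with MirroredStrategy; every worker runs this same script.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(loss="binary_crossentropy",
                  optimizer=tf.keras.optimizers.SGD(0.2))

model.fit(dataset, epochs=5, steps_per_epoch=10)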
Learn how to build machine and deep learning systems with the newly released TensorFlow 2 and Keras for the lab, production, and mobile devices with Deep Learning with TensorFlow 2 and Keras, Second Edition by Antonio Gulli, Amita Kapoor and Sujit Pal.  About the Authors  Antonio Gulli is a software executive and business leader with a passion for establishing and managing global technological talent, innovation, and execution. He is an expert in search engines, online services, machine learning, information retrieval, analytics, and cloud computing.   Amita Kapoor is an Associate Professor in the Department of Electronics, SRCASW, University of Delhi and has been actively teaching neural networks and artificial intelligence for the last 20 years. She is an active member of ACM, AAAI, IEEE, and INNS. She has co-authored two books.   Sujit Pal is a technology research director at Elsevier Labs, working on building intelligent systems around research content and metadata. His primary interests are information retrieval, ontologies, natural language processing, machine learning, and distributed processing. He is currently working on image classification and similarity using deep learning models. He writes about technology on his blog at Salmon Run. 

Using Google Maps APIs with Knockout.js

Packt
22 Sep 2015
7 min read
This article by Adnan Jaswal, the author of the book KnockoutJS by Example, will show how to render a map in the application and allow the users to place markers on it. The users will also be able to get directions between two addresses, both as a description and as a route on the map.

(For more resources related to this topic, see here.)

Placing a marker on the map

This feature is about placing markers on the map for the selected addresses. To implement this feature, we will:

Update the address model to hold the marker
Create a method to place a marker on the map
Create a method to remove an existing marker
Register subscribers to trigger the removal of the existing markers when an address changes
Update the module to add a marker to the map

Let's get started by updating the address model. Open the MapsApplication module and locate the AddressModel variable. Add an observable to this model to hold the marker like this:

/* generic model for address */
var AddressModel = function() {
    this.marker = ko.observable();
    this.location = ko.observable();
    this.streetNumber = ko.observable();
    this.streetName = ko.observable();
    this.city = ko.observable();
    this.state = ko.observable();
    this.postCode = ko.observable();
    this.country = ko.observable();
};

Next, we create a method that will create and place the marker on the map. This method should take the location and the address model as parameters. The method will also store the marker in the address model. Use the google.maps.Marker class to create and place the marker. Our implementation of this method looks similar to this:

/* method to place a marker on the map */
var placeMarker = function (location, value) {
    // create and place marker on the map
    var marker = new google.maps.Marker({
        position: location,
        map: map
    });
    // store the newly created marker in the address model
    value().marker(marker);
};

Now, create a method that checks for an existing marker in the address model and removes it from the map. Name this method removeMarker. It should look similar to this:

/* method to remove old marker from the map */
var removeMarker = function(address) {
    if(address != null) {
        address.marker().setMap(null);
    }
};

The next step is to register subscribers that will trigger when an address changes. We will use these subscribers to trigger the removal of the existing markers. We will use the beforeChange event of the subscribers so that we have access to the existing markers in the model. Add subscribers to the fromAddress and toAddress observables to trigger on the beforeChange event. Remove the existing markers on the trigger. To achieve this, I created a method called registerSubscribers. This method is called from the init method of the module. The method registers the two subscribers that trigger calls to removeMarker. Our implementation looks similar to this:

/* method to register subscriber */
var registerSubscribers = function () {
    // fire before from address is changed
    mapsModel.fromAddress.subscribe(function(oldValue) {
        removeMarker(oldValue);
    }, null, "beforeChange");

    // fire before to address is changed
    mapsModel.toAddress.subscribe(function(oldValue) {
        removeMarker(oldValue);
    }, null, "beforeChange");
};

We are now ready to bring the methods we created together and place a marker on the map. Create a method called updateAddress. This method should take two parameters: the place object and the value binding. The method should call populateAddress to extract and populate the address model, and placeMarker to place a new marker on the map.
Our implementation looks similar to this:

/* method to update the address model */
var updateAddress = function(place, value) {
    populateAddress(place, value);
    placeMarker(place.geometry.location, value);
};

Call the updateAddress method from the event listener in the addressAutoComplete custom binding:

google.maps.event.addListener(autocomplete, 'place_changed', function() {
    var place = autocomplete.getPlace();
    console.log(place);
    updateAddress(place, value);
});

Open the application in your browser. Select from and to addresses. You should now see markers appear for the two selected addresses. In our browser, the application looks similar to the following screenshot:

Displaying a route between the markers

The last feature of the application is to draw a route between the two address markers. To implement this feature, we will:

Create and initialize the directions service
Request routing information from the directions service and draw the route
Update the view to add a button to get directions

Let's get started by creating and initializing the directions service. We will use the google.maps.DirectionsService class to get the routing information and the google.maps.DirectionsRenderer to draw the route on the map. Create two attributes in the MapsApplication module: one for the directions service and the other for the directions renderer:

/* the directions service */
var directionsService;

/* the directions renderer */
var directionsRenderer;

Next, create a method to create and initialize the preceding attributes:

/* initialise the direction service and display */
var initDirectionService = function () {
    directionsService = new google.maps.DirectionsService();
    directionsRenderer = new google.maps.DirectionsRenderer({suppressMarkers: true});
    directionsRenderer.setMap(map);
};

Call this method from the mapPanel custom binding handler after the map has been created and centered. The updated mapPanel custom binding should look similar to this:

/* custom binding handler for maps panel */
ko.bindingHandlers.mapPanel = {
    init: function(element, valueAccessor){
        map = new google.maps.Map(element, {
            zoom: 10
        });
        centerMap(localLocation);
        initDirectionService();
    }
};

The next step is to create a method that will build and fire a request to the directions service to fetch the direction information. The direction information will then be used by the directions renderer to draw the route on the map. Our implementation of this method looks similar to this:

/* method to get directions and display route */
var getDirections = function () {
    // create request for directions
    var routeRequest = {
        origin: mapsModel.fromAddress().location(),
        destination: mapsModel.toAddress().location(),
        travelMode: google.maps.TravelMode.DRIVING
    };

    // fire request to route based on request
    directionsService.route(routeRequest, function(response, status) {
        if (status == google.maps.DirectionsStatus.OK) {
            directionsRenderer.setDirections(response);
        } else {
            console.log("No directions returned ...");
        }
    });
};

We create a routing request in the first part of the method. The request object consists of origin, destination, and travelMode. The origin and destination values are set to the locations of the from and to addresses. The travelMode is set to google.maps.TravelMode.DRIVING, which, as the name suggests, specifies that we require a driving route. Add the getDirections method to the return statement of the module as we will bind it to a button in the view.
One last step before we can work on the view is to clear the route on the map when the user selects a new address. This can be achieved by adding an instruction to clear the route information in the subscribers we registered earlier. Update the subscribers in the registerSubscribers method to clear the routes on the map:

/* method to register subscriber */
var registerSubscribers = function () {
    // fire before from address is changed
    mapsModel.fromAddress.subscribe(function(oldValue) {
        removeMarker(oldValue);
        directionsRenderer.set('directions', null);
    }, null, "beforeChange");

    // fire before to address is changed
    mapsModel.toAddress.subscribe(function(oldValue) {
        removeMarker(oldValue);
        directionsRenderer.set('directions', null);
    }, null, "beforeChange");
};

The last step is to update the view. Open the view and add a button under the address input components. Add a click binding to the button and bind it to the getDirections method of the module. Add an enable binding to make the button clickable only after the user has selected the two addresses. The button should look similar to this:

<button type="button" class="btn btn-default"
        data-bind="enable: MapsApplication.mapsModel.fromAddress && MapsApplication.mapsModel.toAddress,
                   click: MapsApplication.getDirections">
    Get Directions
</button>

Open the application in your browser and select the From address and To address option. The address details and markers should appear for the two selected addresses. Click on the Get Directions button. You should see the route drawn on the map between the two markers. In our browser, the application looks similar to the following screenshot:

Summary

In this article, we walked through placing markers on the map and displaying the route between the markers.

Resources for Article:

Further resources on this subject: KnockoutJS Templates [article], Components [article], Web Application Testing [article]

Setting Up Slick2D

Packt
23 Oct 2013
4 min read
(For more resources related to this topic, see here.)

What is Slick2D?

Slick2D is a multi-platform library for two-dimensional game development that sits upon the LWJGL (Lightweight Java Game Library). Slick2D simplifies the processes of game development such as the game loop, rendering, updating, frame setup, and state-based game creation. It also offers some features that LWJGL does not, such as particle emitters and integration with Tiled (a map editor). Developers of all skill levels can enjoy Slick2D, as it offers a degree of simplicity that you can't find in most libraries. This simplicity not only makes it a great library for programmers but for artists as well, who may not have the technical knowledge to create games in other libraries.

Downloading the Slick2D and LWJGL files

The Slick2D and LWJGL jar files, plus the LWJGL native files, are needed to create a Slick2D game project. The only system requirement for Slick2D is a Java JDK. To get the files, we perform the following steps:

Obtaining the LWJGL files:
Navigate to http://www.lwjgl.org/download.php.
Download the most recent stable build. The .zip file will include both the LWJGL jar file and the native files. (This .zip file will be referenced as lwjgl.zip.)

Obtaining the Slick2D files:
Due to hosting issues, the Slick2D files are being hosted by a community member at http://slick.ninjacave.com. If this site is not available, follow the alternative instructions in step 3.
Click on Download.

Alternative method of obtaining the Slick2D files:
Navigate to https://bitbucket.org/kevglass/slick.
Download the source.
Build the ant script located at slick/trunk/Slick/build.xml. Build it in Eclipse or on the command line using $ ant.

Setting up an Eclipse project

We will utilize the Eclipse IDE, which can be found at http://www.eclipse.org/, when working with Slick2D in this article. You may, however, utilize other options. Perform the following steps to set up a Slick2D project:

Navigate to File | New | Java Project.
Name your project and click on Finish.
Create a new folder in your project and name it lib. Add two subfolders named jars and native.
Place both lwjgl.jar and slick.jar in the jars subfolder inside our Eclipse project.
Take all the native files from lwjgl.zip and place them in the native subfolder. Copy the contents of the subfolders inside native from lwjgl.zip, not the subfolders themselves.
Right-click on the project, then click on Properties.
Click on Java Build Path and navigate to the Libraries tab. Add both the jars from the project.
Select and expand lwjgl.jar from the Libraries tab, click on Native library location: (None), then click on Edit and search the workspace for the native folder.

Native files

The native files included in lwjgl.zip are platform-specific libraries that allow the developers to make one game that will work on all of the different platforms.

What if I want my game to be platform-specific?

No real benefit exists to being platform-specific with Slick2D. In the foregoing tutorial, we established the game as a multi-platform game. However, if you want your game to be platform-specific, you can make it platform-specific. In the previous tutorial (step 6) we took the contents of each operating system's folder and put that content into our native folder.
If, instead, you desire to make your game platform-specific, then instead of copying the contents of these folders, you would copy the entire folder, as illustrated in the following screenshot:

When defining the natives for LWJGL (step 10 in the previous example), simply point towards the operating system of your choice.

Summary

In this article we covered the essentials needed to create a project in Slick2D. So far we covered:

Downloading the necessary library files
Setting up a project (platform-specific or multi-platform)
Native files

Resources for Article:

Further resources on this subject: HTML5 Games Development: Using Local Storage to Store Game Data [Article], Adding Sound, Music, and Video in 3D Game Development with Microsoft Silverlight 3: Part 2 [Article], Adding Finesse to Your Game [Article]

Build your first Raspberry Pi project

Gebin George
20 Apr 2018
7 min read
In today's tutorial, we will build a simple Raspberry Pi 3 project. Since our Raspberry Pi now runs Windows 10 IoT Core, .NET Core applications will run on it, including Universal Windows Platform (UWP) applications.

From a blank solution, let's create our first Raspberry Pi application. Choose Add and New Project. In the Visual C# category, select Blank App (Universal Windows). Let's call our project FirstApp. Visual Studio will ask us for target and minimum platform versions. Check the screenshot and make sure the version you select is lower than the version installed on your Raspberry Pi. In our case, the Raspberry Pi runs Build 15063. This is the March 2017 release. So, we accept Build 14393 (July 2016) as the target version and Build 10586 (November 2015) as the minimum version. If you want to target the Windows 10 Fall Creators Update, which supports .NET Core 2, you should select Build 16299 for both. In the Solution Explorer, we should now see the files of our new UWP project:

New project

Adding NuGet packages

We proceed by adding functionality to our app from downloadable packages, or NuGets. From the References node, right-click and select Manage NuGet Packages. First, go to the Updates tab and make sure the packages that you already have are updated. Next, go to the Browse tab, type Firmata in the search box, and press Enter. You should see the Windows-Remote-Arduino package. Make sure to install it in your project. In the same way, search for the Waher.Events package and install it.

Aggregating capabilities

Since we're going to communicate with our Arduino using a USB serial port, we must make a declaration in the Package.appxmanifest file stating this. If we don't do this, the runtime environment will not allow the app to do it. Since this option is not available in the GUI by default, you need to edit the file using the XML editor. Make sure the serialCommunication device capability is added, as follows:

<Capabilities>
    <Capability Name="internetClient" />
    <DeviceCapability Name="serialcommunication">
        <Device Id="any">
            <Function Type="name:serialPort" />
        </Device>
    </DeviceCapability>
</Capabilities>

Initializing the application

Before we do any communication with the Arduino, we need to initialize the application. We do this by finding the OnLaunched method in the App.xaml.cs file. After the Window.Current.Activate() call, we make a call to our Init() method where we set up the application.

Window.Current.Activate();
Task.Run((Action)this.Init);

We execute our initialization method from the thread pool, instead of the standard thread. This is done by calling Task.Run(), defined in the System.Threading.Tasks namespace. The reason for this is that we want to avoid locking the standard thread. Later, there will be a lot of asynchronous calls made during initialization. To avoid problems, we should execute all these from the thread pool, instead of from the standard thread. We'll make the method asynchronous:

private async void Init()
{
    try
    {
        Log.Informational("Starting application.");
        ...
    }
    catch (Exception ex)
    {
        Log.Emergency(ex);

        MessageDialog Dialog = new MessageDialog(ex.Message, "Error");
        await Dialog.ShowAsync();
    }
}

The static Log class is available in the Waher.Events namespace, belonging to the NuGet we included earlier. (MessageDialog is available in Windows.UI.Popups, which might be a new namespace if you're not familiar with UWP.)

Communicating with the Arduino

The Arduino is accessed using Firmata.
To do that, we use the Windows.Devices.Enumeration, Microsoft.Maker.RemoteWiring, and Microsoft.Maker.Serial namespaces, available in the Windows-Remote-Arduino NuGet. We begin by enumerating all the devices it finds:

DeviceInformationCollection Devices = await UsbSerial.listAvailableDevicesAsync();
foreach (DeviceInformation DeviceInfo in Devices)
{

If our Arduino device is found, we will have to connect to it using USB:

if (DeviceInfo.IsEnabled && DeviceInfo.Name.StartsWith("Arduino"))
{
    Log.Informational("Connecting to " + DeviceInfo.Name);

    this.arduinoUsb = new UsbSerial(DeviceInfo);
    this.arduinoUsb.ConnectionEstablished += () =>
        Log.Informational("USB connection established.");

Attach a remote device to the USB port class:

this.arduino = new RemoteDevice(this.arduinoUsb);

We need to initialize our hardware when the remote device is ready:

this.arduino.DeviceReady += () =>
{
    Log.Informational("Device ready.");

    this.arduino.pinMode(13, PinMode.OUTPUT);    // Onboard LED.
    this.arduino.digitalWrite(13, PinState.HIGH);

    this.arduino.pinMode(8, PinMode.INPUT);      // PIR sensor.
    MainPage.Instance.DigitalPinUpdated(8, this.arduino.digitalRead(8));

    this.arduino.pinMode(9, PinMode.OUTPUT);     // Relay.
    this.arduino.digitalWrite(9, 0);             // Relay set to 0

    this.arduino.pinMode("A0", PinMode.ANALOG);  // Light sensor.
    MainPage.Instance.AnalogPinUpdated("A0", this.arduino.analogRead("A0"));
};

Important: the analog input must be set to PinMode.ANALOG, not PinMode.INPUT. The latter is for digital pins. If used for analog pins, the Arduino board and Firmata firmware may become unpredictable.

Our inputs are then reported automatically by the Firmata firmware. All we need to do to read the corresponding values is to assign the appropriate event handlers. In our case, we forward the values to our main page, for display:

this.arduino.AnalogPinUpdated += (pin, value) =>
{
    MainPage.Instance.AnalogPinUpdated(pin, value);
};

this.arduino.DigitalPinUpdated += (pin, value) =>
{
    MainPage.Instance.DigitalPinUpdated(pin, value);
};

Communication is now set up. If you want, you can trap communication errors by providing event handlers for the ConnectionFailed and ConnectionLost events. All we need to do now is to initiate communication. We do this with a simple call:

this.arduinoUsb.begin(57600, SerialConfig.SERIAL_8N1);

Testing the app

Make sure the Arduino is still connected to your PC via USB. If you run the application now (by pressing F5), it will communicate with the Arduino and display any values read in the event log. In the GitHub project, I've added a couple of GUI components to our main window that display the most recently read pin values. It also displays any event messages logged. We leave the relay for later chapters. For a more generic example, see the Waher.Service.GPIO project at https://github.com/PeterWaher/IoTGateway/tree/master/Services/Waher.Service.GPIO. This project allows the user to read and control all pins on the Arduino, as well as the GPIO pins available on the Raspberry Pi directly.

Deploying the app

You are now ready to test the app on the Raspberry Pi. You now need to disconnect the Arduino board from your PC and install it on top of the Raspberry Pi. The power of the Raspberry Pi should be turned off when doing this. Also, make sure the serial cable is connected to one of the USB ports of the Raspberry Pi.
Begin by switching the target platform from Local Machine to Remote Machine, and from x86 to ARM:

Run on a remote machine with an ARM processor

Your Raspberry Pi should appear automatically in the following dialog. You should check the address with the IoT Dashboard used earlier, to make sure you're selecting the correct machine:

Select your Raspberry Pi

You can now run or debug your app directly on the Raspberry Pi, using your local PC. The first deployment might take a while since the target system needs to be properly prepared. Subsequent deployments will be much faster. Open the Device Portal from the IoT Dashboard, and take a Screenshot, to see the results. You can also go to the Apps Manager in the Device Portal, and configure the app to be started automatically at startup:

App running on the Raspberry Pi

To summarize, we saw how to practically build a simple application using Raspberry Pi 3 and C#.

You read an excerpt from the book, Mastering Internet of Things, written by Peter Waher. This book will help you design and implement scalable IoT solutions with ease.

Meet the Coolest Raspberry Pi Family Member: Raspberry Pi Zero W Wireless
AI and the Raspberry Pi: Machine Learning and IoT, What's the Impact?

An overview of Oracle Hyperion Interactive Reporting

Packt
09 Sep 2010
11 min read
(For more resources on Oracle, see here.) The Interactive Reporting document: The BQY When working with Interactive Reporting, it is hard to escape the term BQY. BQY, or BrioQuery, is the extension given to all Interactive Reporting document files. Developers create BQY files using the Workspace, Interactive Reporting Web Client, or Interactive Reporting Studio—a developer tool used to create and manage BQY documents outside of the EPM Workspace. The EPM Workspace The EPM Workspace is similar to a portal, where all Oracle Hyperion applications, reports, and other files can be accessed and integrated using a shared security model. The Workspace is accessible through the web browser and contains a filesystem and other gadgets including personal pages and content subscription. Interactive Reporting is one of the many products that integrate with the Workspace, providing users a central location to save, share, and publish content. Navigating the EPM Workspace To navigate the Workspace, a user account must be created and provisioned with the necessary privileges to the Workspace and the Interactive Reporting components. After the user account is created, users can access the Workspace using a web browser. Each environment may be configured differently and have different login methods and start pages configured. However, this article provides examples based on the default product configuration. The following screenshot shows the default login page for the EPM Workspace in version 11. Once the user enters the assigned username and password, the default home page appears. Other configurations may be configured to use external authentication methods, where the user will bypass the initial login screen and start at the default start page for their configuration. The default home page shown in the following screenshot is new to the Workspace in version 11. The new home page feature allows users to add Quick Links, browse recently opened documents, and view custom created Workspace pages. If the desired content is not listed on the main home page, the Explorer can be accessed by clicking on the Explore image on the toolbar at the top of the page, allowing users to browse for content in the Workspace filesystem similar to Windows Explorer. The Explore window, shown in the following screenshot, opens to a page with two frames showing a folder structure on the left for navigating the file structure and the contents of the current folder on the right for managing and executing items. The main parent folder in the file system is called the Root folder and other files and folders can be added under the Root folder as desired. If there is a need to return to the previous page, the user can click on the HomePage tab at the bottom-left of the page. As additional items are opened in the Workspace, additional tabs are created at the bottom of the screen. The user can navigate through the different items opened by using the tabs across the bottom of the Workspace window. If desired, these tabs can be closed by right-clicking on the tab and selecting Close. After navigating to the desired folder, the user can open the documents of interest. Files from different applications are designated with two unique identifiers in the Explore window. The first identifier is the image that is shown to the left of the name, and the second identifier is the object Type.
The documents shown in the previous screenshot are Interactive Reporting documents and can be opened using the HTML viewer or the Interactive Reporting Web Client. Selecting the HTML option will render the document in a new tab, while opening the document in the Interactive Reporting Web Client will open the document in a new browser window. Installing the Interactive Reporting Web Client The Interactive Reporting Web Client software must be installed to open documents in the Web Client. This installation is a plug-in to the browser, where the browser will activate the Web Client software when an Interactive Reporting file is initiated. The installation will automatically execute upon opening the Interactive Reporting document in a browser without the Web Client installed, or the installation can be manually executed by accessing the Tools | Install | Interactive Reporting Web Client item as shown in the following screenshot: Once the installation is initiated, a window appears with the ability to customize the installation by checking/unchecking options. The default installation will install all of the components of the tool and is recommended: The installation will commence after clicking on the Next button on the configuration menu and will continue through the completion of the installation, signified by the following window: Opening documents in the Workspace Interactive Reporting documents are opened by double-clicking on the document in the Workspace or by highlighting and right-clicking the document, highlighting Open As, and selecting either HTML or Interactive Reporting Web Client from the menu as shown in the following screenshot. When the document is double-clicked, the default configuration method for opening the document is invoked by Interactive Reporting. Initially the software is configured to use the HTML viewer as the default, but the default preference can easily be changed by modifying the Default Open Format of the document in the Interactive Reporting preferences of the Workspace. The main Preferences window is opened by accessing the File menu and selecting the Preferences menu item, as shown in the following screenshot. Once the Preferences window is open, the Default Open Format is found under the Interactive Reporting tab on the left menu of the window. To change the default format from HTML to the Interactive Reporting Web Client, click on the drop-down arrow, select the Interactive Reporting Web Client item, and then click on the OK button on the Preferences window. In addition to the file open format, other preferences can be modified in this window to address changing formats for date, time, and currency. Opening documents from the local machine Interactive Reporting documents can be saved and opened from the local machine by opening the document using the web browser with Web Client installed. To open the document from the local machine, highlight and right-click on the Interactive Reporting file and select Open With from the menu that appears. If the web browser of choice is not listed, select Choose Program from the list. Browse the window, select the web browser with the plug-in installed, and check the checkbox at the bottom to Always use the selected program to open this kind of file in order to always open the Interactive Reporting document in the selected web browser. Then select OK in the window to open the Interactive Reporting document with the web browser. The web browser will open and the Web Client will load the document into the viewing window. 
If offline mode is not turned on, the document will open a window to authenticate with the Workspace. If no connection can be established with the Workspace, only the data sections will be visible when the document is opened. If the connection can be established and the file saved to the desktop still exists in the Workspace, then the document will load with proper permissions to the file and the document can be processed as if it was opened from the Workspace. If the file is not located in the Workspace, then the file can be imported by the user if the user has import permissions. Instructions for importing are found in the importing section of this article. The Web Client interface Understanding the Web Client interface is crucial to being proficient in the product. The different sections of the software contain a variety of different options, but the location of where to find and utilize these options is the same across the tool. Knowledge of the interface and how to leverage the features of each section is key to unlock the full potential of the product. The sections of an Interactive Reporting document are the different objects in the software used to aid in querying, analyzing, or displaying information. There are seven types of unique sections. The specifics of each section are as follows: The Query section is the main section used to setup and execute a query from a relational or multi-dimensional database. Each Query section is accompanied by a Results section where the data returned from the Query is displayed and can be manipulated. The Table section is similar to the Results section and is used to manipulate and split a dataset into different subsets for analysis. The Pivot section is specific to a Results or Table section, and is used to graphically display data in pivot table format—similar to Microsoft Excel Pivots. The Chart section is also specific to a Results or Table section and is used to display data in a chart. The Report section provides the ability to present pivots, charts, and tables of data in a well formatted document. Dashboards are used to create custom interfaces or interactive displays of key metrics. The following screenshot displays the Interactive Reporting Web Client window open to the Query section. The arrows shown in the screenshot highlight each of the different features and toolbars of the product. These different features and toolbars can be toggled on and off using the View menu: The Section Catalogue The Section Catalogue, displayed on the left of the previous screenshot, contains two windows for navigating and editing sections. The Sections window displays the different sections in the document, and the Elements window is used to add content to a section. Both windows are used commonly when building documents and performing analysis. Menus The Interactive Reporting Web Client Menus are similar to a typical menu structure seen in most Windows-based applications. Interactive Reporting contains a standard set of menus and each section also contains menus specific to a section. The following menu items are consistent between all product sections: The File menu provides the features for managing the document, including the ability to save documents both to the local drive and to the Workspace, the ability to import external data, and the ability to export and print content. The Edit menu contains the general options for managing sections. 
These features include the standard copy and paste options, but also include the ability to delete, rename, and duplicate sections. The View menu contains the features for managing the different windows and views of the document, including showing/hiding windows and displaying query-specific information. The Insert menu is used to add a new section into the document. The Format menu is used to format the display of sections, including font, color, size, type, and other common formatting options. The Tools menu provides the ability to execute queries and manage default and program options. The Help menu contains the help contents, links, and information about the product version. These menu items are shown in the following screenshot: Toolbars The Interactive Reporting Web Client contains three standard toolbars used to manage views, content, and formatting. The toolbars are turned on and off through the View menu. To show or hide a toolbar from the viewing area, go to View Menu | Toolbars, and then click on the toolbar name to show or hide it. A checkbox next to the toolbar signifies the toolbar is shown in the viewing area. The following is a description of each of the three toolbars: The Standard toolbar contains shortcuts to the common standard features of the tool including saving, printing, inserting, and query processing. The Formatting toolbar contains controls to manage fonts, backgrounds, and number formats. The Section toolbar only applies to the Dashboard, Report, and Chart sections, and is used to modify and control object layouts and chart types. Section Title Bar The Section Title Bar has two different purposes. The bar contains a navigation dropdown on the left of the bar, and it contains section-specific controls on the right-side of the bar which is used to toggle options on and off. These options are specific to each section and are used to build queries, add content to a section, or to sort content. When these options are toggled on, the options will be displayed at the top or bottom of the main content window. Status Bar The Status Bar is shown at the bottom of the Web Client interface and contains information on a specific section. The information provided in the Status Bar includes the number of rows returned from a query, the number of rows shown in a results set, the number of rows and columns in a pivot table, the number of report pages, and the zoom settings on the dashboard.

How does OCS Inventory NG meet our needs?

Packt
12 May 2010
8 min read
OCS Inventory NG stands for Open Computer and Software Inventory Next Generation, and it is the name of an open source project that was started back in late 2005. The project matured into its first final release at the beginning of 2007. It's an undertaking that is still actively maintained, fully documented, and has support forums. It has all of the requirements that an open source application should have in order to be competitive. There is a tricky part when it comes to open source solutions. Proposing them and getting them accepted by the management requires quite a bit of research. One side of the coin is that it is always favorable: everyone appreciates cutting down licensing costs. The problem with such a solution is that you cannot always take its future support for granted. In order to take an educated guess on whether an open source solution could be beneficial for the company, we need to look at the following criteria: how frequently the project is updated, the download count, the feedback of the community, whether the application is thoroughly documented, and the existence of active community support. OCS-NG occupies a dominant position when it comes to open source projects in the area of inventorying computers and software.

Brief overview of OCS Inventory NG's architecture

The architecture of OCS-NG is based on the client-server model. The client program is called a network agent. These agents need to be deployed on the client computers that we want to include in our inventory. The management server is composed of four individual server roles: database server, communication server, deployment server, and the administration console server. More often than not, these can be run from the same machine. OCS Inventory NG is cross-platform and supports most Unices, BSD derivatives (including Mac OS X), and all kinds of Windows-based operating systems. The server can also be run on either platform. As it is an open source project, it's based on the popular LAMP or WAMP solution stack. This means that the main server-side prerequisites are the Apache web server, the MySQL database server, and PHP. These are also the vital components of a fully functional web server. The network agents communicate with the management server over standard HTTP protocols. The data that is exchanged is formatted under XML conventions. The screenshot below gives a general overview of the way clients communicate with the management server's sub-server components.

Rough performance evaluation of OCS-NG

The data that is collected in the case of a fully-inventoried computer sums up to something around 5 KB. That is a small amount and it will neither overload the server nor create network congestion. It is often said that around one million systems can be inventoried daily on a 3 GHz bi-Xeon processor based server with 4 GB of RAM without any issues. Any modest old-generation server should suffice for the inventory of a few thousand systems. When scalability is necessary, such as over 10,000-20,000 inventoried systems, it is recommended to split those four server-role components across two individual servers. Should this be the case, the database server needs to be installed on the same machine as the communication server, with the administration server and the deployment server on another system with a database replica. Any other combination is also possible. Although distributing the server components is possible, very rarely do we really need to do that.
In this day and age, we can seamlessly virtualize four or more servers on any dual or quad-core new generation computer. OCS-NG's management server can be one of those VMs. If necessary, distributing server components in the future is possible.

Meeting our inventory demands

First and foremost, OCS Inventory NG network agents are able to collect all of the must-have attributes of a client computer and many more. Let's do a quick checkup on these:

BIOS: system serial number, manufacturer, and model; BIOS manufacturer, version, and date
Processors: type, count (how many of them), manufacturer, speed, and cache
Memory: physical memory type, manufacturer, capacity, and slot number; total physical memory; total swap/paging memory
Video: video adapter (chipset/model, manufacturer, memory size, speed, and screen resolution); display monitor (manufacturer, description, refresh rate, type, serial number, and caption)
Storage/removable devices: manufacturer, model, size, type, speed (all when applicable); drive letter, filesystem type, partition/volume size, free space
Network adapters/telephony: manufacturer, model, type, speed, and description; MAC and IP address, mask and IP gateway, DHCP server used
Miscellaneous hardware: input devices (keyboard, mouse, and pointing device); sound devices (manufacturer name, type, and description); system slots (name, type, and designation); system ports (type, name, caption, and description)
Software information: operating system (name, version, comments, and registration info); installed software (name, publisher, and version, from the Add/Remove Programs or Programs and Features menu); custom-specified registry queries (applicable to Windows OS)

Not only computers but also networking components can be inventoried. OCS Inventory NG detects and collects network-specific information about these (such as MAC address, IP address, subnet mask, and so on). Later on we can set labels and organize them appropriately.

The place where OCS-NG comes as a surprise is its unique capability to make an inventory of hosts that are not on the network. The network agent can be run manually on these offline hosts, and the results are then imported into the centralized management server. Its features include intelligent auto-discovery functionality and the ability to detect hosts that have not been inventoried. It is based on popular network diagnostic and auditing tools such as nmap. The algorithm can decide whether a discovered device is an actual workstation computer or rather just a printer. If it's the former, the agent needs to be deployed. The network scanning is not done by the management server; it is delegated to network agents. This way the network is never overcrowded or congested. If the management server itself scanned populated networks spanning different subnets, the process would be disastrous. This way the process is seamless and simply practical. Another interesting part is the election mechanism, based on which the server is able to decide the most suited client to carry out the discovery. A rough sketch of this in action can be seen in the next figure.
We will also shed a bit of light on the set of features it supports out of the box without any plugins or other mods yet. There will be a time for those too. Taking a glance at the OCS-NG web interface The web interface of OCS Inventory NG is slightly old-fashioned. One direct advantage of this is that the interface is really snappy. Queries are displayed quickly, and the UI won't lag. The other side of the coin is that intuitiveness is not the interface's strongest point. Getting used to it might take a while. At least it does not make you feel that the interface is overcrowded. However, the location and naming of buttons leaves plenty of room for improvement. Some people might prefer to see captions below the shortcuts as the meaning of the icons is not always obvious. After the first few minutes, we will easily get used to them. A picture is worth thousands of words, so let's exemplify our claims. The buttons that appear in the previous screenshot from left to right are the following: All computers Tag/Number of PC repartition Groups All softwares Search with various criteria In the same fashion, in this case the buttons in the previous screenshot stand for the following features: Deployment Security Dictionary Agent Configuration (this one is intuitive!) Registry (self-explanatory) Admin Info Duplicates Users Local Import Help When you click on the name of the specific icon, the drop-down menu appears right below on the cursor All in all, the web interface is not that bad after all. We must accept that the strongestpoint lies in its snappiness, and the wealth of information that is presented in a fraction of a second rather than its design or intuitiveness. We appreciate its overall simplicity and its quick response time. We are often struggling with new generation Java-based and AJAX-based overcrowded interfaces of network equipment that seem slow as hell. So, we'll choose OCS Inventory NG's UI over those anytime!

Getting Started with VR Programming

Jake Rheude
04 Jul 2016
8 min read
This guide will go through some simple programming for VR apps using the Google VR SDK (software development kit) and the Unity3D game engine. This guide will assume that you already have a mobile device capable of running Google VR apps with a Google Cardboard, as well as a computer able to run Unity3D. Getting Started First and foremost, download the latest version of Unity3D from their website. Out of the four options, select “Personal” since it costs nothing to the user. Then download and run the installer. The installation process is straightforward. However, you must make sure that you select the “Android Build Support” component if you are planning on using an Android device or “iOS Build Support” for an iOS device. If you are unsure at this point, just select both, as neither of them requires a lot of space. Now that you have Unity3D installed, the next step is to set it up for the Google VR SDK which can be found here. After agreeing to the terms and conditions, you will be given a link to download the repository directly. After downloading and extracting the ZIP file, you will notice that it contains a Unity Package file. Double-click on the file, and Unity will automatically load up. You will then see a window similar to the pop up below on your screen. Click the “NEW” button on the top right corner to begin your first Google VR project. Give it any project name other than the default “New Unity Project” name. For this guide, I have chosen “VR Programming Tutorial” as the project name.   As soon as your new project loads up, so will the Google VR SDK Unity Package. The relevant files should all be selected by default, so simply click the “Import” button on the bottom right corner to include the SDK into your project.   In your project’s “Assets” folder, there should be a folder named “GoogleVR”. This is where all the necessary components are located in order to begin working with the SDK.   From the “Assets” folder, go into “GoogleVR”->”DemoScenes”->”HeadSetDemo”. Double-click on the Unity icon that is named “DemoScene”. You should see something similar to this upon opening the scene file. This is where you can preview the scene before playing it to get an idea of how the game objects will be laid out in the environment. So let’s try that by clicking on the “Play” button. The scene will start out from the user’s perspective, which would be the main camera.   There is a slight difference in how the left eye and right eye camera are displaying the environment. This is called distortion correction, which is intentionally designed that way in order to accustom the display to the Google Cardboard eye lenses. You may be wondering why you are unable to look around with your mouse. This design is also intentional to allow the developer to hover the mouse pointer in and out of the game window without disrupting the scene while it is playing. In order to look around in the environment, hold down the Ctrl key, and then the Alt key to enable head movement. Make sure to press the keys in this order, otherwise you will only be rotating the display along the Z-axis. You might also be wondering where the interactive menu on the floor canvas has gone. The menu is still there, it’s just that it does not appear in VR mode. Notice that the dot in the center of the display will turn into a halo when you move it over the hovering cube. This happens whenever the dot is placed over a game object in the environment that is interactive. 
So even if the menu is not visible, you are still able to select the menu items. If you happen to click on the “VR Mode” button, the left eye and right eye cameras will simply go away and the main camera will be the only camera that displays the world space. VR Mode can be enabled/disabled by clicking on the "VR Mode Enabled" checkbox in the project's inspector. Simply select "GvrMain" in the DemoScene hierarchy to have the inspector display its information. How the scene is displayed when VR mode is disabled. Note that as of the current implementation of Google VR, it is impossible to add UI components into the world space. This is due to the stereoscopic functionality of Google VR and the mathematics involved in calculating the distance of the game objects from the left eye and right eye cameras relative to the world environment. However, it is possible to add non-interactive UI elements (i.e. player HUD) as a child 3D element with the main camera being its parent. If you wish to create interactive UI components, they must be done strictly as game objects in the world space. This also implies that the interactive UI components must be selected by the user from a fixed position in the world space, as they would find it difficult to make their selections otherwise. Now that we have gone over the basics of the Google VR SDK, let’s move onto some programming. Applying Attributes to Game Objects When creating an interactive medium of any kind (in this case a VR app), some of the most basic functions can end up being more complicated than they initially seem to be. We will demonstrate that by incorporating, what seems to be, simple user movement. In the same DemoScene scene, we will add four more cubes to the environment. For the sake of cleanliness, first we will remove the existing cube as it will be an obstruction for our new cube. To delete a game object from a scene, simply right-click it in the hierarchy and select "Delete". Now that we have removed the existing cube, add a new one by clicking "Create" in the hierarchy, select "3D Object" and then "Cube".   Move the cube about 4-5 units along the X or Z axis away from the origin. You can do so by clicking and dragging the red or blue arrow. Now that we have added our cube, the next step is to add a script to the player’s perspective object. For this project, we can use the “GvrMain” game object to incorporate the player’s movement. In the inspector tab, click on the "Add Component" button, select "New Script" and create a new script titled "MoveToCube".   Once the script has been created, click on the cogwheel icon and select "Edit Script".   Copy and paste this code below into MoveToCube.cs Next, add an Event Trigger component to your cube.   Create a new script titled "CubeSelect". Then select the cogwheel icon and select "Edit Script" to open the script in the script editor.   Copy and paste the code below into your CubeSelect.cs script.     Click on the "Add New Event Type" button. Select "PointerClick". Click the + icon to add a method to refer to. In the left box, select the "Cube" game object. For the method, select "CubeSelect" and then click on "GetCubePosition". Finally, select "GvrMain" as the target game object for the method. When you are finished adding the necessary components, copy and paste the cube in the project hierarchy tab three times in order to get four cubes. They will seem as if they did not appear on the scene, only because they are overlapping each other. 
Change the positions of each cube so that they are separated from each other along the X and Z axis. Once completed, the scene should look something similar to this: Now you can run the VR app and see for yourself that we have now incorporated player movement in this basic implementation. Tips and General Advice Many developers recommend that you do not incorporate any acceleration and/or deceleration to the main camera. Doing so will cause nausea to the users and thus give them a negative experience with your VR application. Keep your VR app relatively simple! The user only has two modes of input: head tracking and the Cardboard trigger button. Trying to force functionality with multiple gestures (i.e. looking straight down and/or up) will not be intuitive to the user and will more than likely cause frustration. About the Author Jake Rheude is the Director of Business Development for Red Stag Fulfillment, a US-based e-commerce fulfillment provider focused primarily on serving ecommerce businesses shipping heavy, large, or valuable products to customers all around the world. Red Stag is so confident in its fulfillment software combined with their warehouse operations, that for any error, inaccuracy, or late shipment, not only will they reimburse you for that order, but they’ll write you a check for $50.