How-To Tutorials - Web Development

1797 Articles

Creating the maze and animating the cube

Packt
07 Jul 2014
9 min read
(For more resources related to this topic, see here.)

A maze is a rather simple shape that consists of a number of walls and a floor. So, what we need is a way to create these shapes. Three.js, not very surprisingly, doesn't have a standard geometry that will allow you to create a maze, so we need to create this maze by hand. To do this, we need to take two different steps: find a way to generate the layout of the maze so that not all the mazes look the same, and convert that layout to a set of cubes (THREE.BoxGeometry) that we can use to render the maze in 3D. There are many different algorithms that we can use to generate a maze, and luckily there are also a number of open source JavaScript libraries that implement such an algorithm. So, we don't have to start from scratch. For the example in this book, I've used the random-maze-generator project that you can find on GitHub at the following link: https://github.com/felipecsl/random-maze-generator

Generating a maze layout

Without going into too much detail, this library allows you to generate a maze and render it on an HTML5 canvas. The result of this library looks something like the following screenshot: You can generate this by just using the following JavaScript: var maze = new Maze(document, 'maze'); maze.generate(); maze.draw(); Even though this is a nice looking maze, we can't use it directly to create a 3D maze. What we need to do is change the code the library uses to draw on the canvas so that it creates Three.js objects instead. This library draws the lines on the canvas in a function called drawLine: drawLine: function(x1, y1, x2, y2) { self.ctx.beginPath(); self.ctx.moveTo(x1, y1); self.ctx.lineTo(x2, y2); self.ctx.stroke(); } If you're familiar with the HTML5 canvas, you can see that this function draws lines based on the input arguments. Now that we've got this maze, we need to convert it to a number of 3D shapes so that we can render them in Three.js.

Converting the layout to a 3D set of objects

To change this library to create Three.js objects, all we have to do is change the drawLine function to the following code snippet: drawLine: function(x1, y1, x2, y2) { var lengthX = Math.abs(x1 - x2); var lengthY = Math.abs(y1 - y2); // since there are only 90 degree angles, one of these is always 0 // to add a certain thickness to the wall, set it to 0.5 if (lengthX === 0) lengthX = 0.5; if (lengthY === 0) lengthY = 0.5; // create a cube to represent the wall segment var wallGeom = new THREE.BoxGeometry(lengthX, 3, lengthY); var wallMaterial = new THREE.MeshPhongMaterial({ color: 0xff0000, opacity: 0.8, transparent: true }); // and create the complete wall segment var wallMesh = new THREE.Mesh(wallGeom, wallMaterial); // finally position it correctly wallMesh.position = new THREE.Vector3( x1 - ((x1 - x2) / 2) - (self.height / 2), wallGeom.height / 2, y1 - ((y1 - y2)) / 2 - (self.width / 2)); self.elements.push(wallMesh); scene.add(wallMesh); } In this new drawLine function, instead of drawing on the canvas, we create a THREE.BoxGeometry object whose length and depth are based on the supplied arguments. Using this geometry, we create a THREE.Mesh object and use the position attribute to place the mesh at a specific point with the x, y, and z coordinates. Before we add the mesh to the scene, we add it to the self.elements array. Now we can just use the following code snippet to create a 3D maze: var maze = new Maze(scene, 17, 100, 100); maze.generate(); maze.draw(); As you can see, we've also changed the input arguments.
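Before trying the modified Maze call above, note that it assumes a basic Three.js scene is already set up and that a variable named scene is in scope. The following is only a minimal sketch of that surrounding setup; the variable names, camera position, and sizes are illustrative assumptions, not code from the random-maze-generator project or from the book:

var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.set(0, 120, 120);
camera.lookAt(scene.position);

var renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// MeshPhongMaterial needs a light to be visible
var light = new THREE.DirectionalLight(0xffffff);
light.position.set(40, 60, 20);
scene.add(light);

// build the maze into the scene using the modified library from above
var maze = new Maze(scene, 17, 100, 100);
maze.generate();
maze.draw();

(function render() {
  requestAnimationFrame(render);
  renderer.render(scene, camera);
})();

Any equivalent setup will do; the only hard requirement is that a scene object exists before maze.draw() is called.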
The new constructor arguments define the scene to which the maze should be added and the size of the maze. The result from these changes can be seen in the following screenshot: Every time you refresh, you'll see a newly generated random maze. Now that we've got our generated maze, the next step is to add the object that we'll move through the maze.

Animating the cube

Before we dive into the code, let's first look at the result as shown in the following screenshot: Using the controls at the top-right corner, you can move the cube around. What you'll see is that the cube rotates around its edges, not around its center. In this section, we'll show you how to create that effect. Let's first look at the default rotation, which is along an object's central axis, and the translation behavior of Three.js.

The standard Three.js rotation behavior

Let's first look at all the properties you can set on THREE.Mesh. They are as follows:

position: This property refers to the position of an object, which is relative to the position of its parent. In all our examples so far, the parent is THREE.Scene.
rotation: This property defines the rotation of THREE.Mesh around its own x, y, or z axis.
scale: With this property, you can scale the object along its own x, y, and z axes.
translateX(amount): This function moves the object by the specified amount over the x axis.
translateY(amount): This function moves the object by the specified amount over the y axis.
translateZ(amount): This function moves the object by the specified amount over the z axis.

If we want to rotate a mesh around one of its own axes, we can just call the following line of code: plane.rotation.x = -0.5 * Math.PI; We've used this to rotate the ground area from a vertical position to a horizontal one. It is important to know that this rotation is done around its own internal axis, not the x, y, or z axis of the scene. So, if you do a number of rotations one after another, you have to keep track of the orientation of your mesh to make sure you get the required effect. Another point to note is that rotation is done around the center of the object—in this case the center of the cube. If we look at the effect we want to accomplish, we run into the following two problems: First, we don't want to rotate around the center of the object; we want to rotate around one of its edges to create a walking-like animation. Second, if we use the default rotation behavior, we have to continuously keep track of our orientation since we're rotating around our own internal axis. In the next section, we'll explain how you can solve these problems by using matrix-based transformations.

Creating an edge rotation using matrix-based transformation

If we want to perform edge rotations, we have to take the following few steps: If we want to rotate around the edge, we have to change the center point of the object to the edge we want to rotate around. Since we don't want to keep track of all the rotations we've done, we'll need to make sure that after each rotation, the vertices of the cube represent the correct position. Finally, after we've rotated around the edge, we have to do the inverse of the first step. This is to make sure the center point of the object is back in the center of the cube so that it is ready for the next step. So, the first thing we need to do is change the center point of the cube. The approach we use is to offset the position of all individual vertices and then change the position of the cube in the opposite way.
The following example will allow us to make a step to the right-hand side: cubeGeometry.applyMatrix(new THREE.Matrix4().makeTranslation(0, width / 2, width / 2)); cube.position.y += -width / 2; cube.position.z += -width / 2; With the cubeGeometry.applyMatrix function, we can change the position of the individual vertices of our geometry. In this example, we will create a translation (using makeTranslation), which offsets all the y and z coordinates by half the width of the cube. The result is that it will look like the cube moved a bit to the right-hand side and then up, but the actual center of the cube is now positioned at one of its lower edges. Next, we use the cube.position property to position the cube back at the ground plane since the individual vertices were offset by the makeTranslation function. Now that the edge of the object is positioned correctly, we can rotate the object. For rotation, we could use the standard rotation property, but then we would have to constantly keep track of the orientation of our cube. So, for rotations, we once again use a matrix transformation on the vertices of our cube: cube.geometry.applyMatrix(new THREE.Matrix4().makeRotationX(amount)); As you can see, we use the makeRotationX function, which changes the position of our vertices. Now we can easily rotate our cube, without having to worry about its orientation. The final step we need to take is to reset the cube to its original position; taking into account that we've moved a step to the right, we can take the next step: cube.position.y += width/2; // the inverse of the earlier offset cube.position.z += -width/2; cubeGeometry.applyMatrix(new THREE.Matrix4().makeTranslation(0, -width / 2, width / 2)); As you can see, this is the inverse of the first step; we've added the width of the cube to position.y and subtracted the width from the second argument of the translation to compensate for the step to the right-hand side we've taken. If we use the preceding code snippet, we will only see the result of the step to the right.

Summary

In this article, we have seen how to create a maze and animate a cube.

Resources for Article: Further resources on this subject: Working with the Basic Components That Make Up a Three.js Scene [article] 3D Websites [article] Rich Internet Application (RIA) – Canvas [article]
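As a closing recap of the edge-rotation technique from this article, the three matrix-transformation steps can be wrapped into one helper function. This is a rough sketch rather than code from the book: it assumes the cube was created from THREE.BoxGeometry(width, width, width), that angle is the rotation step in radians, and that the function name is made up for illustration:

function rollStep(cube, width, angle) {
  var geom = cube.geometry;

  // Step 1: move the vertices so the rotation edge becomes the origin,
  // then shift the mesh back so the cube stays where it was.
  geom.applyMatrix(new THREE.Matrix4().makeTranslation(0, width / 2, width / 2));
  cube.position.y += -width / 2;
  cube.position.z += -width / 2;

  // Step 2: rotate the vertices themselves, so no orientation bookkeeping is needed.
  geom.applyMatrix(new THREE.Matrix4().makeRotationX(angle));

  // Step 3: undo step 1, compensating for the step the cube has just taken.
  cube.position.y += width / 2;
  cube.position.z += -width / 2;
  geom.applyMatrix(new THREE.Matrix4().makeTranslation(0, -width / 2, width / 2));

  geom.verticesNeedUpdate = true; // flag the change in case the geometry is already on the GPU
}

Calling such a helper once per key press or animation tick with a small angle should produce the stepping motion described above, without the caller ever touching cube.rotation.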


Using React.js without JSX

Richard Feldman
30 Jun 2014
6 min read
React.js was clearly designed with JSX in mind, however, there are plenty of good reasons to use React without it. Using React as a standalone library lets you evaluate the technology without having to spend time learning a new syntax. Some teams—including my own—prefer to have their entire frontend code base in one compile-to-JavaScript language, such as CoffeeScript or TypeScript. Others might find that adding another JavaScript library to their dependencies is no big deal, but adding a compilation step to the build chain is a deal-breaker. There are two primary drawbacks to eschewing JSX. One is that it makes using React significantly more verbose. The other is that the React docs use JSX everywhere; examples demonstrating vanilla JavaScript are few and far between. Fortunately, both drawbacks are easy to work around. Translating documentation The first code sample you see in the React Documentation includes this JSX snippet: /** @jsx React.DOM */ React.renderComponent( <h1>Hello, world!</h1>, document.getElementById('example') ); Suppose we want to see the vanilla JS equivalent. Although the code samples on the React homepage include a helpful Compiled JS tab, the samples in the docs—not to mention React examples you find elsewhere on the Web—will not. Fortunately, React’s Live JSX Compiler can help. To translate the above JSX into vanilla JS, simply copy and paste it into the left side of the Live JSX Compiler. The output on the right should look like this: /** @jsx React.DOM */ React.renderComponent( React.DOM.h1(null, "Hello, world!"), document.getElementById('example') ); Pretty similar, right? We can discard the comment, as it only represents a necessary directive in JSX. When writing React in vanilla JS, it’s just another comment that will be disregarded as usual. Take a look at the call to React.renderComponent. Here we have a plain old two-argument function, which takes a React DOM element (in this case, the one returned by React.DOM.h1) as its first argument, and a regular DOM element (in this case, the one returned by document.getElementById('example')) as its second. jQuery users should note that the second argument will not accept jQuery objects, so you will have to extract the underlying DOM element with $("#example")[0] or something similar. The React.DOM object has a method for every supported tag. In this case we’re using h1, but we could just as easily have used h2, div, span, input, a, p, or any other supported tag. The first argument to these methods is optional; it can either be null (as in this case), or an object specifying the element’s attributes. This argument is how you specify things like class, ID, and so on. The second argument is either a string, in which case it specifies the object’s text content, or a list of child React DOM elements. Let’s put this together with a more advanced example, starting with the vanilla JS: React.DOM.form({className:"commentForm"}, React.DOM.input({type:"text", placeholder:"Your name"}), React.DOM.input({type:"text", placeholder:"Say something..."}), React.DOM.input({type:"submit", value:"Post"}) ) For the most part, the attributes translate as you would expect: type, value, and placeholder do exactly what they would do if used in HTML. The one exception is className, which you use in place of the usual class. The above is equivalent to the following JSX: /** @jsx React.DOM */ <form className="commentForm"> <input type="text" placeholder="Your name" /> <input type="text" placeholder="Say something..." 
/> <input type="submit" value="Post" /> </form> This JSX is a snippet found elsewhere in the React docs, and again you can view its vanilla JS equivalent by pasting it into the Live JSX Compiler. Note that you can include pure JSX here without any surrounding JavaScript code (unlike the JSX playground), but you do need the /** @jsx React.DOM */ comment at the top of the JSX side. Without the comment, the compiler will simply output the JSX you put in. Simple DSLs to make things concise Although these two implementations are functionally identical, clearly the JSX version is more concise. How can we make the vanilla JS version less verbose? A very quick improvement is to alias the React.DOM object: var R = React.DOM; R.form({className:"commentForm"}, R.input({type:"text", placeholder:"Your name"}), R.input({type:"text", placeholder:"Say something..."}), R.input({type:"submit", value:"Post"})) You can take it even further with a tiny bit of DSL: var R = React.DOM; var form = R.form; var input = R.input; form({className:"commentForm"}, input({type:"text", placeholder:"Your name"}), input({type:"text", placeholder:"Say something..."}), input({type:"submit", value:"Post"}) ) This is more verbose in terms of lines of code, but if you have a large DOM to set up, the extra up-front declarations can make the rest of the file much nicer to read. In CoffeeScript, a DSL like this can tidy things up even further: {form, input} = React.DOM form {className:"commentForm"}, [ input type: "text", placeholder:"Your name" input type:"text", placeholder:"Say something..." input type:"submit", value:"Post" ] Note that in this example, the form’s children are passed as an array rather than as a list of extra arguments (which, in CoffeeScript, allows you to omit commas after each line). React DOM element constructors support either approach. (Also note that CoffeeScript coders who don’t mind mixing languages can use the coffee-react compiler or set up a custom build chain that allows for inline JSX in CoffeeScript sources instead.) Takeaways No matter your particular use case, there are plenty of ways to effectively use React without JSX. Thanks to the Live JSX Compiler ’s ability to quickly translate documentation code samples, and the ease with which you can set up a simple DSL to reduce verbosity, there really is very little overhead to using React as a JavaScript library like any other. About the author Richard Feldman is a functional programmer who specializes in pushing the limits of browser-based UIs. He’s built a framework that performantly renders hundreds of thousands of shapes in the HTML5 canvas, a writing web app that functions like a desktop app in the absence of an Internet connection, and much more in between


Component Communication in React.js

Richard Feldman
30 Jun 2014
5 min read
You can get a long way in React.js solely by having parent components create child components with varying props, and having each component deal only with its own state. But what happens when a child wants to affect its parent’s state or props? Or when a child wants to inspect that parent’s state or props? Or when a parent wants to inspect its child’s state? With the right techniques, you can handle communication between React components without introducing unnecessary coupling. Child Elements Altering Parents Suppose you have a list of buttons, and when you click one, a label elsewhere on the page updates to reflect which button was most recently clicked. Although any button’s click handler can alter that button’s state, the handler has no intrinsic knowledge of the label that we need to update. So how can we give it access to do what we need? The idiomatic approach is to pass a function through props. Like so: var ExampleParent = React.createClass({ getInitialState: function() { return {lastLabelClicked: "none"} }, render: function() { var me = this; var setLastLabel = function(label) { me.setState({lastLabelClicked: label}); }; return <div> <p>Last clicked: {this.state.lastLabelClicked}</p> <LabeledButton label="Alpha Button" setLastLabel={setLastLabel}/> <LabeledButton label="Beta Button" setLastLabel={setLastLabel}/> <LabeledButton label="Delta Button" setLastLabel={setLastLabel}/> </div>; } }); var LabeledButton = React.createClass({ handleClick: function() { this.props.setLastLabel(this.props.label); }, render: function() { return <button onClick={this.handleClick}>{this.props.label}</button>; } }); Note that this does not actually affect the label’s state directly; rather, it affects the parent component’s state, and doing so will cause the parent to re-render the label as appropriate. What if we wanted to avoid using state here, and instead modify the parent’s props? Since props are externally specified, this would be a lot of extra work. Rather than telling the parent to change, the child would necessarily have to tell its parent’s parent—its grandparent, in other words—to change that grandparent’s child. This is not a route worth pursuing; besides being less idiomatic, there is no real benefit to changing the parent’s props when you could change its state instead. Inspecting Props Once created, the only way for a child’s props to “change” is for the child to be recreated when the parent’s render method is called again. This helpfully guarantees that the parent’s render method has all the information needed to determine the child’s props—not only in the present, but for the indefinite future as well. Thus if another of the parent’s methods needs to know the child’s props, like for example a click handler, it’s simply a matter of making sure that data is available outside the parent’s render method. An easy way to do this is to record it in the parent’s state: var ExampleComponent = React.createClass({ handleClick: function() { var buttonStatus = this.state.buttonStatus; // ...do something based on buttonStatus }, render: function() { // Pretend it took some effort to determine this value var buttonStatus = "btn-disabled"; this.setState({buttonStatus: buttonStatus}); return <button className={buttonStatus} onClick={this.handleClick}> Click this button! </button>; } }); It’s even easier to let a child know about its parent’s props: simply have the parent pass along whatever information is necessary when it creates the child. 
It’s cleaner to pass along only what the child needs to know, but if all else fails you can go as far as to pass in the parent’s entire set of props: var ParentComponent = React.createClass({ render: function() { return <ChildComponent parentProps={this.props} />; } }); Inspecting State State is trickier to inspect, because it can change on the fly. But is it ever strictly necessary for components to inspect each other’s states, or might there be a universal workaround? Suppose you have a child whose click handler cares about its parent’s state. Is there any way we could refactor things such that the child could always know that value, without having to ask the parent directly? Absolutely! Simply have the parent pass the current value of its state to the child as a prop. Whenever the parent’s state changes, it will re-run its render method, so the child (including its click handler) will automatically be recreated with the new prop. Now the child’s click handler will always have an up-to-date knowledge of the parent’s state, just as we wanted. Suppose instead that we have a parent that cares about its child’s state. As we saw earlier with the buttons-and-labels example, children can affect their parent’s states, so we can use that technique again here to refactor our way into a solution. Simply include in the child’s props a function that updates the parent’s state, and have the child incorporate that function into its relevant state changes. With the child thus keeping the parent’s state up to speed on relevant changes to the child’s state, the parent can obtain whatever information it needed simply by inspecting its own state. Takeaways Idiomatic communication between parent and child components can be easily accomplished by passing state-altering functions through props. When it comes to inspecting props and state, a combination of passing props on a need-to-know basis and refactoring state changes can ensure the relevant parties have all the information they need, whenever they need it. About the Author Richard Feldman is a functional programmer who specializes in pushing the limits of browser-based UIs. He’s built a framework that performantly renders hundreds of thousands of shapes in HTML5 canvas, a writing web app that functions like a desktop app in the absence of an Internet connection, and much more in between.
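To make the state-inspection workaround described above concrete, here is a minimal sketch in the same style as the article's examples (JSX with the 2014-era React.createClass and React.renderComponent API). The component names, the onToggle prop, and the mount node are illustrative assumptions:

/** @jsx React.DOM */
var ToggleParent = React.createClass({
  getInitialState: function() {
    return {childIsOn: false}; // the parent's mirror of the child's state
  },
  handleChildToggle: function(isOn) {
    this.setState({childIsOn: isOn});
  },
  render: function() {
    return <div>
      <p>The switch is {this.state.childIsOn ? "on" : "off"}</p>
      <ToggleSwitch onToggle={this.handleChildToggle}/>
    </div>;
  }
});

var ToggleSwitch = React.createClass({
  getInitialState: function() {
    return {on: false};
  },
  handleClick: function() {
    var next = !this.state.on;
    this.setState({on: next});
    this.props.onToggle(next); // keep the parent's copy up to date
  },
  render: function() {
    return <button onClick={this.handleClick}>Toggle</button>;
  }
});

React.renderComponent(<ToggleParent/>, document.getElementById('example'));

The child owns its on flag, but every relevant change also flows up through the onToggle callback, so the parent can answer questions about the child's state simply by reading its own.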


Various subsystem configurations

Packt
25 Jun 2014
8 min read
(For more resources related to this topic, see here.)

In a high-performance environment, every costly resource instantiation needs to be minimized. This can be done effectively using pools. The different subsystems in WildFly often use various pools of resources to minimize the cost of creating new ones. These resources are often threads or various connection objects. Another benefit is that the pools work as a gatekeeper, hindering the underlying system from being overloaded. This is performed by preventing client calls from reaching their target if a limit has been reached. In the upcoming sections of this article, we will provide an overview of the different subsystems and their pools.

The thread pool executor subsystem

The thread pool executor subsystem was introduced in JBoss AS 7. Other subsystems can reference thread pools configured in this one. This makes it possible to normalize and manage the thread pools via native WildFly management mechanisms, and it allows you to share thread pools across subsystems. The following code is an example taken from the WildFly Administration Guide (https://docs.jboss.org/author/display/WFLY8/Admin+Guide) that describes how the Infinispan subsystem may use the thread pool executor subsystem, setting up four different pools: <subsystem> <thread-factory name="infinispan-factory" priority="1"/> <bounded-queue-thread-pool name="infinispan-transport"> <core-threads count="1"/> <queue-length count="100000"/> <max-threads count="25"/> <thread-factory name="infinispan-factory"/> </bounded-queue-thread-pool> <bounded-queue-thread-pool name="infinispan-listener"> <core-threads count="1"/> <queue-length count="100000"/> <max-threads count="1"/> <thread-factory name="infinispan-factory"/> </bounded-queue-thread-pool> <scheduled-thread-pool name="infinispan-eviction"> <max-threads count="1"/> <thread-factory name="infinispan-factory"/> </scheduled-thread-pool> <scheduled-thread-pool name="infinispan-repl-queue"> <max-threads count="1"/> <thread-factory name="infinispan-factory"/> </scheduled-thread-pool> </subsystem> ... <cache-container name="web" default-cache="repl" listener-executor="infinispan-listener" eviction-executor="infinispan-eviction" replication-queue-executor="infinispan-repl-queue"> <transport executor="infinispan-transport"/> <replicated-cache name="repl" mode="ASYNC" batching="true"> <locking isolation="REPEATABLE_READ"/> <file-store/> </replicated-cache> </cache-container>

The following thread pools are available:

unbounded-queue-thread-pool
bounded-queue-thread-pool
blocking-bounded-queue-thread-pool
queueless-thread-pool
blocking-queueless-thread-pool
scheduled-thread-pool

The details of these thread pools are described in the following sections.

unbounded-queue-thread-pool

The unbounded-queue-thread-pool thread pool executor has a maximum size and an unlimited queue. If the number of running threads is less than the maximum size when a task is submitted, a new thread will be created. Otherwise, the task is placed in a queue. This queue is allowed to grow infinitely. The configuration properties are shown in the following table:

max-threads: This specifies the maximum number of threads allowed to run simultaneously.
keepalive-time: This specifies the amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)
thread-factory: This specifies the thread factory to use to create worker threads.
bounded-queue-thread-pool

The bounded-queue-thread-pool thread pool executor has a core size, a maximum size, and a specified queue length. If the number of running threads is less than the core size when a task is submitted, a new thread will be created; otherwise, the task will be put in the queue. If the queue's maximum size has been reached and the maximum number of threads hasn't been reached, a new thread is also created. If max-threads is hit, the call will be sent to the handoff-executor. If no handoff-executor is configured, the call will be discarded. The configuration properties are shown in the following table:

core-threads: This is optional and should be less than max-threads.
queue-length: This specifies the maximum size of the queue.
max-threads: This specifies the maximum number of threads that are allowed to run simultaneously.
keepalive-time: This specifies the amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)
handoff-executor: This specifies an executor to which tasks will be delegated, in the event that a task cannot be accepted.
allow-core-timeout: This specifies whether core threads may time-out; if false, only threads above the core size will time-out.
thread-factory: This specifies the thread factory to use to create worker threads.

blocking-bounded-queue-thread-pool

The blocking-bounded-queue-thread-pool thread pool executor has a core size, a maximum size, and a specified queue length. If the number of running threads is less than the core size when a task is submitted, a new thread will be created; otherwise, the task will be put in the queue. If the queue's maximum size has been reached and the maximum number of threads hasn't been reached, a new thread is created; if max-threads has been reached, the call is blocked. The configuration properties are shown in the following table:

core-threads: This is optional and should be less than max-threads.
queue-length: This specifies the maximum size of the queue.
max-threads: This specifies the maximum number of simultaneous threads allowed to run.
keepalive-time: This specifies the amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)
allow-core-timeout: This specifies whether core threads may time-out; if false, only threads above the core size will time-out.
thread-factory: This specifies the thread factory to use to create worker threads.

queueless-thread-pool

The queueless-thread-pool thread pool is a thread pool executor without any queue. If the number of running threads is less than max-threads when a task is submitted, a new thread will be created; otherwise, the handoff-executor will be called. If no handoff-executor is configured, the call will be discarded. The configuration properties are shown in the following table:

max-threads: This specifies the maximum number of threads allowed to run simultaneously.
keepalive-time: This specifies the amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)
handoff-executor: This specifies an executor to which tasks will be delegated in the event that a task cannot be accepted.
thread-factory: This specifies the thread factory to use to create worker threads.

blocking-queueless-thread-pool

The blocking-queueless-thread-pool thread pool executor has no queue. If the number of running threads is less than max-threads when a task is submitted, a new thread will be created. Otherwise, the caller will be blocked.
The configuration properties are shown in the following table:

max-threads: This specifies the maximum number of threads allowed to run simultaneously.
keepalive-time: This specifies the amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)
thread-factory: This specifies the thread factory to use to create worker threads.

scheduled-thread-pool

The scheduled-thread-pool thread pool is used by tasks that are scheduled to trigger at a certain time. The configuration properties are shown in the following table:

max-threads: This specifies the maximum number of threads allowed to run simultaneously.
keepalive-time: This specifies the amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)
thread-factory: This specifies the thread factory to use to create worker threads.

Monitoring

All of the pools just mentioned can be administered and monitored using both CLI and JMX (actually, the Admin Console can be used to administer, but not see, any live data). The following example and screenshots show the access to an unbounded-queue-thread-pool called test. Using CLI, run the following command: /subsystem=threads/unbounded-queue-thread-pool=test:read-resource(include-runtime=true) The response to the preceding command is as follows: { "outcome" => "success", "result" => { "active-count" => 0, "completed-task-count" => 0L, "current-thread-count" => 0, "keepalive-time" => undefined, "largest-thread-count" => 0, "max-threads" => 100, "name" => "test", "queue-size" => 0, "rejected-count" => 0, "task-count" => 0L, "thread-factory" => undefined } } Using JMX (query and result in the JConsole UI), run the following code: jboss.as:subsystem=threads,unbounded-queue-thread-pool=test An example thread pool by JMX is shown in the following screenshot. The next screenshot shows the corresponding information in the Admin Console.

The future of the thread subsystem

According to the official JIRA case WFLY-462 (https://issues.jboss.org/browse/WFLY-462), the central thread pool configuration has been targeted for removal in future versions of the application server. It is, however, uncertain that all subprojects will adhere to this. The actual configuration will then be moved out to the subsystem itself. This seems to be the way the general architecture of WildFly is moving in terms of pools—moving away from generic ones and making them subsystem-specific. The different types of pools described here are still valid though. Note that, contrary to previous releases, Stateless EJB is no longer pooled by default. More information on this is available in the JIRA case WFLY-1383. It can be found at https://issues.jboss.org/browse/WFLY-1383.


Introduction to MapReduce

Packt
25 Jun 2014
10 min read
(For more resources related to this topic, see here.)

The Hadoop platform

Hadoop can be used for a lot of things. However, when you break it down to its core parts, the primary features of Hadoop are Hadoop Distributed File System (HDFS) and MapReduce. HDFS stores read-only files by splitting them into large blocks and distributing and replicating them across a Hadoop cluster. Two services are involved with the filesystem. The first service, the NameNode, acts as a master and keeps the directory tree of all file blocks that exist in the filesystem and tracks where the file data is kept across the cluster. The actual data of the files is stored in multiple DataNode nodes, the second service. MapReduce is a programming model for processing large datasets with a parallel, distributed algorithm in a cluster. The most prominent trait of Hadoop is that it brings processing to the data; so, MapReduce executes tasks closest to the data as opposed to the data travelling to where the processing is performed. Two services are involved in a job execution. A job is submitted to the JobTracker service, which first discovers the location of the data. It then orchestrates the execution of the map and reduce tasks. The actual tasks are executed in multiple TaskTracker nodes. Hadoop handles infrastructure failures such as network issues and node or disk failures automatically. Overall, it provides a framework for distributed storage within its distributed file system and execution of jobs. Moreover, it provides the ZooKeeper service to maintain configuration and distributed synchronization. Many projects surround Hadoop and complete the ecosystem of available Big Data processing tools, such as utilities to import and export data, NoSQL databases, and event/real-time processing systems. The technologies that move Hadoop beyond batch processing focus on in-memory execution models. Overall, multiple projects exist, from batch to hybrid and real-time execution.

MapReduce

Massive parallel processing of large datasets is a complex process. MapReduce simplifies this by providing a design pattern that instructs algorithms to be expressed in map and reduce phases. Map can be used to perform simple transformations on data, and reduce is used to group data together and perform aggregations. By chaining together a number of map and reduce phases, sophisticated algorithms can be achieved. The shared-nothing architecture of MapReduce prohibits communication between map tasks of the same phase or between reduce tasks of the same phase. Any communication that is required happens at the end of each phase. The simplicity of this model allows Hadoop to translate each phase, depending on the amount of data that needs to be processed, into tens or even hundreds of tasks being executed in parallel, thus achieving scalable performance. Internally, the map and reduce tasks follow a simplistic data representation. Everything is a key or a value. A map task receives key-value pairs and applies basic transformations, emitting new key-value pairs. Data is then partitioned, and different partitions are transmitted to different reduce tasks. A reduce task also receives key-value pairs, groups them based on the key, and applies basic transformations to those groups.

A MapReduce example

To illustrate how MapReduce works, let's look at an example of a log file of total size 1 GB with the following format: INFO MyApp - Entering application. WARNING com.foo.Bar - Timeout accessing DB - Retrying ERROR com.foo.Bar - Did it again!
INFO MyApp - Exiting application Once this file is stored in HDFS, it is split into eight 128 MB blocks and distributed in multiple Hadoop nodes. In order to build a MapReduce job to count the amount of INFO, WARNING, and ERROR log lines in the file, we need to think in terms of map and reduce phases. In one map phase, we can read local blocks of the file and map each line to a key and a value. We can use the log level as the key and the number 1 as the value. After it is completed, data is partitioned based on the key and transmitted to the reduce tasks. MapReduce guarantees that the input to every reducer is sorted by key. Shuffle is the process of sorting and copying the output of the map tasks to the reducers to be used as input. By setting the value to 1 on the map phase, we can easily calculate the total in the reduce phase. Reducers receive input sorted by key, aggregate counters, and store results. In the following diagram, every green block represents an INFO message, every yellow block a WARNING message, and every red block an ERROR message: Implementing the preceding MapReduce algorithm in Java requires the following three classes: A Map class to map lines into <key,value> pairs; for example, <"INFO",1> A Reduce class to aggregate counters A Job configuration class to define input and output types for all <key,value> pairs and the input and output files MapReduce abstractions This simple MapReduce example requires more than 50 lines of Java code (mostly because of infrastructure and boilerplate code). In SQL, a similar implementation would just require the following: SELECT level, count(*) FROM table GROUP BY level Hive is a technology originating from Facebook that translates SQL commands, such as the preceding one, into sets of map and reduce phases. SQL offers convenient ubiquity, and it is known by almost everyone. However, SQL is declarative and expresses the logic of a computation without describing its control flow. So, there are use cases that will be unusual to implement in SQL, and some problems are too complex to be expressed in relational algebra. For example, SQL handles joins naturally, but it has no built-in mechanism for splitting data into streams and applying different operations to each substream. Pig is a technology originating from Yahoo that offers a relational data-flow language. It is procedural, supports splits, and provides useful operators for joining and grouping data. Code can be inserted anywhere in the data flow and is appealing because it is easy to read and learn. However, Pig is a purpose-built language; it excels at simple data flows, but it is inefficient for implementing non-trivial algorithms. In Pig, the same example can be implemented as follows: LogLine = load 'file.logs' as (level, message); LevelGroup = group LogLine by level; Result = foreach LevelGroup generate group, COUNT(LogLine); store Result into 'Results.txt'; Both Pig and Hive support extra functionality through loadable user-defined functions (UDF) implemented in Java classes. Cascading is implemented in Java and designed to be expressive and extensible. It is based on the design pattern of pipelines that many other technologies follow. The pipeline is inspired from the original chain of responsibility design pattern and allows ordered lists of actions to be executed. It provides a Java-based API for data-processing flows. Developers with functional programming backgrounds quickly introduced new domain specific languages that leverage its capabilities. 
Scalding, Cascalog, and PyCascading are popular implementations on top of Cascading, which are implemented in programming languages such as Scala, Clojure, and Python. Introducing Cascading Cascading is an abstraction that empowers us to write efficient MapReduce applications. The API provides a framework for developers who want to think in higher levels and follow Behavior Driven Development (BDD) and Test Driven Development (TDD) to provide more value and quality to the business. Cascading is a mature library that was released as an open source project in early 2008. It is a paradigm shift and introduces new notions that are easier to understand and work with. In Cascading, we define reusable pipes where operations on data are performed. Pipes connect with other pipes to create a pipeline. At each end of a pipeline, a tap is used. Two types of taps exist: source, where input data comes from and sink, where the data gets stored. In the preceding image, three pipes are connected to a pipeline, and two input sources and one output sink complete the flow. A complete pipeline is called a flow, and multiple flows bind together to form a cascade. In the following diagram, three flows form a cascade: The Cascading framework translates the pipes, flows, and cascades into sets of map and reduce phases. The flow and cascade planner ensure that no flow or cascade is executed until all its dependencies are satisfied. The preceding abstraction makes it easy to use a whiteboard to design and discuss data processing logic. We can now work on a productive higher level abstraction and build complex applications for ad targeting, logfile analysis, bioinformatics, machine learning, predictive analytics, web content mining, and for extract, transform and load (ETL) jobs. By abstracting from the complexity of key-value pairs and map and reduce phases of MapReduce, Cascading provides an API that so many other technologies are built on. What happens inside a pipe Inside a pipe, data flows in small containers called tuples. A tuple is like a fixed size ordered list of elements and is a base element in Cascading. Unlike an array or list, a tuple can hold objects with different types. Tuples stream within pipes. Each specific stream is associated with a schema. The schema evolves over time, as at one point in a pipe, a tuple of size one can receive an operation and transform into a tuple of size three. To illustrate this concept, we will use a JSON transformation job. Each line is originally stored in tuples of size one with a schema: 'jsonLine. An operation transforms these tuples into new tuples of size three: 'time, 'user, and 'action. Finally, we extract the epoch, and then the pipe contains tuples of size four: 'epoch, 'time, 'user, and 'action. 
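Cascading itself is a Java API, but the idea of a tuple stream whose schema grows at each step can be sketched in a few lines of plain JavaScript. The following is only an analogy of the JSON-transformation job just described, not Cascading code, and the sample log lines are made up for illustration:

// Stage 1: each raw line enters the pipe as a one-field tuple: { jsonLine }
var lines = [
  '{"time": "2014-06-25T10:15:00Z", "user": "alice", "action": "login"}',
  '{"time": "2014-06-25T10:16:30Z", "user": "bob", "action": "logout"}'
];

// Stage 2: parse each line into a three-field tuple: { time, user, action }
var parsed = lines.map(function (jsonLine) {
  var obj = JSON.parse(jsonLine);
  return { time: obj.time, user: obj.user, action: obj.action };
});

// Stage 3: extract the epoch, yielding four-field tuples: { epoch, time, user, action }
var withEpoch = parsed.map(function (t) {
  return { epoch: Date.parse(t.time), time: t.time, user: t.user, action: t.action };
});

console.log(withEpoch);

Each map call plays the role of a pipe stage: the same stream of records flows through, but its schema has evolved from one field to four.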
Pipe assemblies Transformation of tuple streams occurs by applying one of the five types of operations, also called pipe assemblies: Each: To apply a function or a filter to each tuple GroupBy: To create a group of tuples by defining which element to use and to merge pipes that contain tuples with similar schemas Every: To perform aggregations (count, sum) and buffer operations to every group of tuples CoGroup: To apply SQL type joins, for example, Inner, Outer, Left, or Right joins SubAssembly: To chain multiple pipe assemblies into a pipe To implement the pipe for the logfile example with the INFO, WARNING, and ERROR levels, three assemblies are required: The Each assembly generates a tuple with two elements (level/message), the GroupBy assembly is used in the level, and then the Every assembly is applied to perform the count aggregation. We also need a source tap to read from a file and a sink tap to store the results in another file. Implementing this in Cascading requires 20 lines of code; in Scala/Scalding, the boilerplate is reduced to just the following: TextLine(inputFile) .mapTo('line->'level,'message) { line:String => tokenize(line) } .groupBy('level) { _.size } .write(Tsv(outputFile)) Cascading is the framework that provides the notions and abstractions of tuple streams and pipe assemblies. Scalding is a domain-specific language (DSL) that specializes in the particular domain of pipeline execution and further minimizes the amount of code that needs to be typed. Cascading extensions Cascading offers multiple extensions that can be used as taps to either read from or write data to, such as SQL, NoSQL, and several other distributed technologies that fit nicely with the MapReduce paradigm. A data processing application, for example, can use taps to collect data from a SQL database and some more from the Hadoop file system. Then, process the data, use a NoSQL database, and complete a machine learning stage. Finally, it can store some resulting data into another SQL database and update a mem-cache application. Summary This article explains the core technologies used in the distributed model of Hadoop Resources for Article: Further resources on this subject: Analytics – Drawing a Frequency Distribution with MapReduce (Intermediate) [article] Understanding MapReduce [article] Advanced Hadoop MapReduce Administration [article]


Serving and processing forms

Packt
24 Jun 2014
13 min read
(For more resources related to this topic, see here.)

Spring supports different view technologies, but if we are using JSP-based views, we can make use of the Spring tag library tags to make up our JSP pages. These tags provide many useful, common functionalities such as form binding, evaluating errors, outputting internationalized messages, and so on. In order to use these tags, we must add references to this tag library in our JSP pages as follows: <%@taglib prefix="form" uri="http://www.springframework.org/tags/form" %> <%@taglib prefix="spring" uri="http://www.springframework.org/tags" %> Data transfer from the model to the view takes place via the controller. The following line is a typical example of how we put data into the model from a controller: model.addAttribute("greeting", "Welcome"); Similarly, the next line shows how we retrieve that data in the view using the JSTL expression: <p> ${greeting} </p> JavaServer Pages Standard Tag Library (JSTL) is also a tag library, provided by Oracle; it is a collection of useful JSP tags that encapsulate the core functionality common to many JSP pages. We can add a reference to the JSTL tag library in our JSP pages as <%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core"%>. However, what if we want to put data into the model from the view? How do we retrieve that data in the controller? For example, consider a scenario where an admin of our store wants to add new product information to our store by filling in and submitting an HTML form. How can we collect the values filled in the HTML form elements and process them in the controller? This is where the Spring tag library tags help us to bind the HTML tag element's values to a form-backing bean in the model. Later, the controller can retrieve the form-backing bean from the model using the @ModelAttribute annotation (org.springframework.web.bind.annotation.ModelAttribute). Form-backing beans (sometimes called form beans) are used to store form data. We can even use our domain objects as form beans; this works well when there's a close match between the fields on the form and the properties on our domain object. Another approach is to create separate classes for form beans, which are sometimes called Data Transfer Objects (DTOs).

Time for action – serving and processing forms

The Spring tag library provides some special <form> and <input> tags that are more or less similar to the HTML form and input tags, but they have some special attributes to bind the form element values to the form-backing bean.
Let's create a Spring web form in our application to add new products to our product list by performing the following steps:

We open our ProductRepository interface and add one more method declaration in it as follows: void addProduct(Product product);

We then add an implementation for this method in the InMemoryProductRepository class as follows: public void addProduct(Product product) { listOfProducts.add(product); }

We open our ProductService interface and add one more method declaration in it as follows: void addProduct(Product product);

And we add an implementation for this method in the ProductServiceImpl class as follows: public void addProduct(Product product) { productRepository.addProduct(product); }

We open our ProductController class and add two more request mapping methods as follows: @RequestMapping(value = "/add", method = RequestMethod.GET) public String getAddNewProductForm(Model model) { Product newProduct = new Product(); model.addAttribute("newProduct", newProduct); return "addProduct"; } @RequestMapping(value = "/add", method = RequestMethod.POST) public String processAddNewProductForm(@ModelAttribute("newProduct") Product newProduct) { productService.addProduct(newProduct); return "redirect:/products"; }

Finally, we add one more JSP view file called addProduct.jsp under src/main/webapp/WEB-INF/views/ and add the following tag reference declarations in it as the very first lines: <%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core"%> <%@ taglib prefix="form" uri="http://www.springframework.org/tags/form" %>

Now, we add the following code snippet under the tag declaration lines and save addProduct.jsp (note that I have skipped the <form:input> binding tags for some of the fields of the product domain object, but I strongly encourage you to add binding tags for the skipped fields when you try out this exercise): <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"> <link rel="stylesheet" href="//netdna.bootstrapcdn.com/bootstrap/3.0.0/css/bootstrap.min.css"> <title>Products</title> </head> <body> <section> <div class="jumbotron"> <div class="container"> <h1>Products</h1> <p>Add products</p> </div> </div> </section> <section class="container"> <form:form modelAttribute="newProduct" class="form-horizontal"> <fieldset> <legend>Add new product</legend> <div class="form-group"> <label class="control-label col-lg-2" for="productId">Product Id</label> <div class="col-lg-10"> <form:input id="productId" path="productId" type="text" class="form:input-large"/> </div> </div> <!-- Similarly bind <form:input> tags for the name, unitPrice, manufacturer, category, unitsInStock and unitsInOrder fields --> <div class="form-group"> <label class="control-label col-lg-2" for="description">Description</label> <div class="col-lg-10"> <form:textarea id="description" path="description" rows="2"/> </div> </div> <div class="form-group"> <label class="control-label col-lg-2" for="discontinued">Discontinued</label> <div class="col-lg-10"> <form:checkbox id="discontinued" path="discontinued"/> </div> </div> <div class="form-group"> <label class="control-label col-lg-2" for="condition">Condition</label> <div class="col-lg-10"> <form:radiobutton path="condition" value="New" />New <form:radiobutton path="condition" value="Old" />Old <form:radiobutton path="condition" value="Refurbished" />Refurbished </div> </div> <div class="form-group"> <div class="col-lg-offset-2 col-lg-10"> <input type="submit" id="btnAdd" class="btn btn-primary" value="Add"/> </div> </div>
</fieldset> </form:form> </section> </body> </html> Now, we run our application and enter the URL http://localhost:8080/webstore/products/add. We will be able to see a web page that displays a web form where we can add the product information as shown in the following screenshot: Add the product's web form Now, we enter all the information related to the new product that we want to add and click on the Add button; we will see the new product added in the product listing page under the URL http://localhost:8080/webstore/products. What just happened? In the whole sequence, steps 5 and 6 are very important steps that need to be observed carefully. I will give you a brief note on what we have done in steps 1 to 4. In step 1, we created a method declaration addProduct in our ProductRepository interface to add new products. In step 2, we implemented the addProduct method in our InMemoryProductRepository class; the implementation is just to update the existing listOfProducts by adding a new product to the list. Steps 3 and 4 are just a service layer extension for ProductRepository. In step 3, we declared a similar method, addProduct, in our ProductService interface and implemented it in step 4 to add products to the repository via the productRepository reference. Okay, coming back to the important step; we have done nothing but added two request mapping methods, namely, getAddNewProductForm and processAddNewProductForm, in step 5 as follows: @RequestMapping(value = "/add", method = RequestMethod.GET) public String getAddNewProductForm(Model model) { Product newProduct = new Product(); model.addAttribute("newProduct", newProduct); return "addProduct"; } @RequestMapping(value = "/add", method = RequestMethod.POST) public String processAddNewProductForm(@ModelAttribute("newProduct") Product productToBeAdded) { productService.addProduct(productToBeAdded); return "redirect:/products"; } If you observe these methods carefully, you will notice a peculiar thing, which is that both the methods have the same URL mapping value in their @RequestMapping annotation (value = "/add"). So, if we enter the URL http://localhost:8080/webstore/products/add in the browser, which method will Spring MVC map that request to? The answer lies in the second attribute of the @RequestMapping annotation (method = RequestMethod.GET and method = RequestMethod.POST). If you will notice again, even though both methods have the same URL mapping, they differ in request method. So, what is happening behind the screen is that when we enter the URL http://localhost:8080/webstore/products/add in the browser, it is considered as a GET request. So, Spring MVC maps this request to the getAddNewProductForm method, and within this method, we simply attach a new empty Product domain object to the model under the attribute name, newProduct. Product newProduct = new Product(); model.addAttribute("newProduct", newProduct); So in the view addproduct.jsp, we can access this model object, newProduct. Before jumping into the processAddNewProductForm method, let's review the addproduct.jsp view file for some time so that we are able to understand the form processing flow without confusion. In addproduct.jsp, we have just added a <form:form> tag from the Spring tag library using the following line of code: <form:form modelAttribute="newProduct" class="form-horizontal"> Since this special <form:form> tag is acquired from the Spring tag library, we need to add a reference to this tag library in our JSP file. 
That's why we have added the following line at the top of the addProducts.jsp file in step 6: <%@ taglib prefix="form" uri="http://www.springframework.org/tags/form" %> In the Spring <form:form> tag, one of the important attributes is modelAttribute. In our case, we assigned the value newProduct as the value of modelAttribute in the <form:form> tag. If you recall correctly, you will notice that this value of modelAttribute and the attribute name we used to store the newProduct object in the model from our getAddNewProductForm method are the same. So, the newProduct object that we attached to the model in the controller method (getAddNewProductForm) is now bound to the form. This object is called the form-backing bean in Spring MVC. Okay, now notice each <form:input> tag inside the <form:form> tag shown in the following code. You will observe that there is a common attribute in every tag. This attribute name is path: <form:input id="productId" path="productId" type="text" class="form:input-large"/> The path attribute just indicates the field name that is relative to the form-backing bean. So, the value that is entered in this input box at runtime will be bound to the corresponding field of the form bean. Okay, now is the time to come back and review our processAddNewProductForm method. When will this method be invoked? This method will be invoked once we press the submit button of our form. Yes, since every form submission is considered as a POST request, this time the browser will send a POST request to the same URL, that is, http://localhost:8080/webstore/products/add. So, this time, the processAddNewProductForm method will get invoked since it is a POST request. Inside the processAddNewProductForm method, we simply call the service method addProduct to add the new product to the repository, as follows: productService.addProduct(productToBeAdded); However, the interesting question here is, how is the productToBeAdded object populated with the data that we entered in the form? The answer lies within the @ModelAttribute annotation (org.springframework.web.bind.annotation.ModelAttribute). Note the method signature of the processAddNewProductForm method shown in the following line of code: public String processAddNewProductForm(@ModelAttribute("newProduct") Product productToBeAdded) Here, if you notice the value attribute of the @ModelAttribute annotation, you will observe a pattern. The values of the @ModelAttribute annotation and modelAttribute from the <form:form> tag are the same. So, Spring MVC knows that it should assign the form-bound newProduct object to the productToBeAdded parameter of the processAddNewProductForm method. The @ModelAttribute annotation is not only used to retrieve an object from a model, but if we want to, we can even use it to add objects to the model. For instance, we rewrite our getAddNewProductForm method to something like the following code with the use of the @ModelAttribute annotation: @RequestMapping(value = "/add", method = RequestMethod.GET) public String getAddNewProductForm(@ModelAttribute("newProduct") Product newProduct) { return "addProduct"; } You can notice that we haven't created any new empty Product domain object and attached it to the model. All we have done was added a parameter of the type Product and annotated it with the @ModelAttribute annotation so that Spring MVC would know that it should create an object of Product and attach it to the model under the name newProduct. 
One more thing that needs to be observed in the processAddNewProductForm method is the logical view name, redirect:/products, that it returns. So, what are we trying to tell Spring MVC by returning a string redirect:/products? To get the answer, observe the logical view name string carefully. If we split this string with the : (colon) symbol, we will get two parts; the first part is the prefix redirect and the second part is something that looks like a request path, /products. So, instead of returning a view name, we simply instruct Spring to issue a redirect request to the request path, /products, which is the request path for the list method of our ProductController class. So, after submitting the form, we list the products using the list method of ProductController. As a matter of fact, when we return any request path with the redirect: prefix from a request mapping method, Spring uses a special view object, RedirectView (org.springframework.web.servlet.view.RedirectView), to issue a redirect command behind the screen. Instead of landing in a web page after the successful submission of a web form, we are spawning a new request to the request path /products with the help of RedirectView. This pattern is called Redirect After Post, which is a common pattern to use with web-based forms. We are using this pattern to avoid double submission of the same form; sometimes, if we press the browser's refresh button or back button after submitting the form, there are chances that the same form will be resubmitted. Summary This article introduced you to Spring and Spring form tag libraries in web form handling. You also learned how to bind domain objects with views and how to use message bundles to externalize label caption texts. Resources for Article: Further resources on this subject: Spring MVC - Configuring and Deploying the Application [article] Getting Started With Spring MVC - Developing the MVC components [article] So, what is Spring for Android? [article]
Kendo UI DataViz – Advance Charting

Packt
23 Jun 2014
10 min read
(For more resources related to this topic, see here.) Creating a chart to show stock history The Kendo UI library provides a specialized chart widget that can be used to display the stock price data for a particular stock over a period of time. In this recipe, we will take a look at creating a Stock chart and customizing it. Getting started Include the CSS files, kendo.dataviz.min.css and kendo.dataviz.default.min.css, in the head section. These files are used in styling some of the parts of a stock history chart. How to do it… A Stock chart is made up of two charts: a pane that shows you the stock history and another pane that is used to navigate through the chart by changing the date range. The stock price for a particular stock on a day can be denoted by the following five attributes: Open: This shows you the value of the stock when the trading starts for the day Close: This shows you the value of the stock when the trading closes for the day High: This shows you the highest value the stock was able to attain on the day Low: This shows you the lowest value the stock reached on the day Volume: This shows you the total number of shares of that stock traded on the day Let's assume that a service returns this data in the following format: [ { "Date" : "2013/01/01", "Open" : 40.11, "Close" : 42.34, "High" : 42.5, "Low" : 39.5, "Volume": 10000 } . . . ] We will use the preceding data to create a Stock chart. The kendoStockChart function is used to create a Stock chart, and it is configured with a set of options similar to the area chart or Column chart. In addition to the series data, you can specify the navigator option to show a navigation pane below the chart that contains the entire stock history: $("#chart").kendoStockChart({ title: { text: 'Stock history' }, dataSource: { transport: { read: '/services/stock?q=ADBE' } }, dateField: "Date", series: [{ type: "candlestick", openField: "Open", closeField: "Close", highField: "High", lowField: "Low" }], navigator: { series: { type: 'area', field: 'Volume' } } }); In the preceding code snippet, the DataSource object refers to the remote service that would return the stock data for a set of days. The series option specifies the series type as candlestick; a candlestick chart is used here to indicate the stock price for a particular day. The mappings for openField, closeField, highField, and lowField are specified; they will be used in plotting the chart and also to show a tooltip when the user hovers over it. The navigator option is specified to create an area chart, which uses volume data to plot the chart. The dateField option is used to specify the mapping between the date fields in the chart and the one in the response. How it works… When you load the page, you will see two panes being shown; the navigator is below the main chart. By default, the chart displays data for all the dates in the DataSource object, as shown in the following screenshot: In the preceding screenshot, a candlestick chart is created and it shows you the stock price over a period of time. Also, notice that in the navigator pane, all date ranges are selected by default, and hence, they are reflected in the chart (candlestick) as well. When you hover over the series, you will notice that the stock quote for the selected date is shown. This includes the date and other fields such as Open, High, Low, and Close. The area of the chart is adjusted to show you the stock price for various dates such that the dates are evenly distributed. 
In the previous case, the dates range from January 1, 2013 to January 31, 2013. However, when you hover over the series, you will notice that some of the dates are omitted. To overcome this, you can either increase the width of the chart area or use the navigator to reduce the date range. The former option is not advisable if the date range spans across several months and years. To reduce the date range in the navigator, move the two date range selectors towards each other to narrow down the dates, as shown in the following screenshot: When you try to narrow down the dates, you will see a tooltip in the chart, indicating the date range that you are trying to select. The candlestick chart is adjusted to show you the stock price for the selected date range. Also, notice that the opacity of the selected date range in the navigator remains the same while the rest of the area's opacity is reduced. Once the date range is selected, the selected pane can be moved in the navigator. There's more… There are several options available to you to customize the behavior and the look and feel of the Stock Chart widget. Specifying the date range in the navigator when initializing the chart By default, all date ranges in the chart are selected and the user will have to narrow them down in the navigator pane. When you work with a large dataset, you will want to show the stock data for a specific range of date when the chart is rendered. To do this, specify the select option in navigator: navigator: { series: { type: 'area', field: 'Volume' }, select: { from: '2013/01/07', to: '2013/01/14' } } In the previous code snippet, the from and to date ranges are specified. Now, when you render the page, you will see that the same dates are selected in the navigator pane. Customizing the look and feel of the Stock Chart widget There are various options available to customize the navigator pane in the Stock Chart widget. Let's increase the height of the pane and also include a title text for it: navigator: { . . pane: { height: '50px', title: { text: 'Stock Volume' } } } Now when you render the page, you will see that the title and height of the navigator pane have been increased. Using the Radial Gauge widget The Radial Gauge widget allows you to build a dashboard-like application wherein you want to indicate a value that lies in a specific range. For example, a car's dashboard can contain a couple of Radial Gauge widgets that can be used to indicate the current speed and RPM. How to do it… To create a Radial Gauge widget, invoke the kendoRadialGauge function on the selected DOM element. A Radial Gauge widget contains some components, and it can be configured by providing options, as shown in the following code snippet: $("#chart").kendoRadialGauge({ scale: { startAngle: 0, endAngle: 180, min: 0, max: 180 }, pointer: { value: 20 } }); Here the scale option is used to configure the range for the Radial Gauge widget. The startAngle and endAngle options are used to indicate the angle at which the Radial Gauge widget's range should start and end. By default, its values are 30 and 210, respectively. The other two options, that is, min and max, are used to indicate the range values over which the value can be plotted. The pointer option is used to indicate the current value in the Radial Gauge widget. There are several options available to configure the Radial Gauge widget; these include positioning of the labels and configuring the look and feel of the widget. 
How it works… When you render the page, you will see a Radial Gauge widget that shows you the scale from 0 to 180 and the pointer pointing to the value 20. Here, the values from 0 to 180 are evenly distributed, that is, the major ticks are in terms of 20. There are 10 minor ticks, that is, ticks between two major ticks. The widget shows values in the clockwise direction. Also, the pointer value 20 is selected in the scale. There's more… The Radial Gauge widget can be customized to a great extent by including various options when initializing the widget. Changing the major and minor unit values Specify the majorUnit and minorUnit options in the scale: scale: { startAngle: 0, endAngle: 180, min: 0, max: 180, majorUnit: 30, minorUnit: 10, } The scale option specifies the majorUnit value as 30 (instead of the default 20) and minorUnit as 10. This will now add labels at every 30 units and show you two minor ticks between the two major ticks, each at a distance of 10 units, as shown in the following screenshot: The ticks shown in the preceding screenshot can also be customized: scale: { . . minorTicks: { size: 30, width: 1, color: 'green' }, majorTicks: { size: 100, width: 2, color: 'red' } } Here, the size option is used to specify the length of the tick marker, width is used to specify the thickness of the tick, and the color option is used to change the color of the tick. Now when you render the page, you will see the changes for the major and minor ticks. Changing the color of the radial using the ranges option The scale attribute can include the ranges option to specify a radial color for the various ranges on the Radial Gauge widget: scale: { . . ranges: [ { from: 0, to: 60, color: '#00F' }, { from: 60, to: 130, color: '#0F0' }, { from: 130, to: 200, color: '#F00' } ] } In the preceding code snippet, the ranges array contains three objects that specify the color to be applied on the circumference of the widget. The from and to values are used to specify the range of tick values for which the color should be applied. Now when you render the page, you will see the Radial Gauge widget showing the colors for various ranges along the circumference of the widget, as shown in the following screenshot: In the preceding screenshot, the startAngle and endAngle fields are changed to 10 and 250, respectively. The widget can be further customized by moving the labels outside. This can be done by specifying the labels attribute with position as outside. In the preceding screenshot, the labels are positioned outside, hence, the radial appears inside. Updating the pointer value using a Slider widget The pointer value is set when the Radial Gauge widget is initialized. It is possible to change the pointer value of the widget at runtime using a Slider widget. The changes in the Slider widget can be observed, and the pointer value of the Radial Gauge can be updated accordingly. Let's use the Radial Gauge widget. A Slider widget is created using an input element: <input id="slider" value="0" /> The next step is to initialize the previously mentioned input element to a Slider widget: $('#slider').kendoSlider({ min: 0, max: 200, showButtons: false, smallStep: 10, tickPlacement: 'none', change: updateRadialGuage }); The min and max values specify the range of values that can be set for the slider. The smallStep attribute specifies the minimum increment value of the slider. The change attribute specifies the function that should be invoked when the slider value changes. 
The updateRadialGuage function should then update the value of the pointer in the Radial Gauge widget: function updateRadialGuage() { $('#chart').data('kendoRadialGauge') .value($('#slider').val()); } The function gets the instance of the widget and then sets its value to the value obtained from the Slider widget. Here, the slider value is changed to 100, and you will notice that it is reflected in the Radial Gauge widget.
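To put the slider and the gauge together on one page, a minimal self-contained sketch could look like the following; the script and stylesheet paths are assumptions that depend on where the Kendo UI files are installed:

    <!DOCTYPE html>
    <html>
    <head>
      <!-- adjust these paths to your local copy of Kendo UI DataViz -->
      <link rel="stylesheet" href="styles/kendo.dataviz.min.css"/>
      <link rel="stylesheet" href="styles/kendo.dataviz.default.min.css"/>
      <script src="js/jquery.min.js"></script>
      <script src="js/kendo.all.min.js"></script>
    </head>
    <body>
      <div id="chart"></div>
      <input id="slider" value="0"/>
      <script>
      $(function() {
        // the gauge accepts values between 0 and 200
        $("#chart").kendoRadialGauge({
          scale: { startAngle: 0, endAngle: 180, min: 0, max: 200 },
          pointer: { value: 20 }
        });
        // moving the slider pushes its current value into the gauge
        $("#slider").kendoSlider({
          min: 0,
          max: 200,
          showButtons: false,
          smallStep: 10,
          tickPlacement: "none",
          change: updateRadialGuage
        });
        function updateRadialGuage() {
          $("#chart").data("kendoRadialGauge").value($("#slider").val());
        }
      });
      </script>
    </body>
    </html>

Opening this page should show the gauge pointing at 20, and dragging the slider should move the pointer accordingly.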
Adding a developer with Django forms

Packt
18 Jun 2014
8 min read
(For more resources related to this topic, see here.) When displaying the form, it will generate the contents of the form template. We may change the type of field that the object sends to the template if needed. While receiving the data, the object will check the contents of each form element. If there is an error, the object will send a clear error to the client. If there is no error, we are certain that the form data is correct. CSRF protection Cross-Site Request Forgery (CSRF) is an attack that targets a user who is loading a page that contains a malicious request. The malicious script uses the authentication of the victim to perform unwanted actions, such as changing data or access to sensitive data. The following steps are executed during a CSRF attack: Script injection by the attacker. An HTTP query is performed to get a web page. Downloading the web page that contains the malicious script. Malicious script execution. In this kind of attack, the hacker can also modify information that may be critical for the users of the website. Therefore, it is important for a web developer to know how to protect their site from this kind of attack, and Django will help with this. To re-enable CSRF protection, we must edit the settings.py file and uncomment the following line: 'django.middleware.csrf.CsrfViewMiddleware', This protection ensures that the data that has been sent is really sent from a specific property page. You can check this in two easy steps: When creating an HTML or Django form, we insert a CSRF token that will store the server. When the form is sent, the CSRF token will be sent too. When the server receives the request from the client, it will check the CSRF token. If it is valid, it validates the request. Do not forget to add the CSRF token in all the forms of the site where protection is enabled. HTML forms are also involved, and the one we have just made does not include the token. For the previous form to work with CSRF protection, we need to add the following line in the form of tags and <form> </form>: {% csrf_token %} The view with a Django form We will first write the view that contains the form because the template will display the form defined in the view. Django forms can be stored in other files as forms.py at the root of the project file. We include them directly in our view because the form will only be used on this page. Depending on the project, you must choose which architecture suits you best. We will create our view in the views/create_developer.py file with the following lines: from django.shortcuts import render from django.http import HttpResponse from TasksManager.models import Supervisor, Developer from django import forms # This line imports the Django forms package class Form_inscription(forms.Form): # This line creates the form with four fields. It is an object that inherits from forms.Form. It contains attributes that define the form fields. name = forms.CharField(label="Name", max_length=30) login = forms.CharField(label="Login", max_length=30) password = forms.CharField(label="Password", widget=forms.PasswordInput) supervisor = forms.ModelChoiceField(label="Supervisor", queryset=Supervisor.objects.all()) # View for create_developer def page(request): if request.POST: form = Form_inscription(request.POST) # If the form has been posted, we create the variable that will contain our form filled with data sent by POST form. 
if form.is_valid(): # This line checks that the data sent by the user is consistent with the field that has been defined in the form. name = form.cleaned_data['name'] # This line is used to retrieve the value sent by the client. The collected data is filtered by the clean() method that we will see later. This way to recover data provides secure data. login = form.cleaned_data['login'] password = form.cleaned_data['password'] supervisor = form.cleaned_data['supervisor'] # In this line, the supervisor variable is of the Supervisor type, that is to say that the returned data by the cleaned_data dictionary will directly be a model. new_developer = Developer(name=name, login=login, password=password, email="", supervisor=supervisor) new_developer.save() return HttpResponse("Developer added") else: return render(request, 'en/public/create_developer.html', {'form' : form}) # To send forms to the template, just send it like any other variable. We send it in case the form is not valid in order to display user errors: else: form = Form_inscription() # In this case, the user does not yet display the form, it instantiates with no data inside. return render(request, 'en/public/create_developer.html', {'form' : form}) This screenshot shows the display of the form with the display of an error message: Template of a Django form We set the template for this view. The template will be much shorter: {% extends "base.html" %} {% block title_html %} Create Developer {% endblock %} {% block h1 %} Create Developer {% endblock %} {% block article_content %} <form method="post" action="{% url "create_developer" %}" > {% csrf_token %} <!-- This line inserts a CSRF token. --> <table> {{ form.as_table }} <!-- This line displays lines of the form.--> </table> <p><input type="submit" value="Create" /></p> </form> {% endblock %} As the complete form operation is in the view, the template simply executes the as_table() method to generate the HTML form. The previous code displays data in tabular form. The three methods to generate an HTML form structure are as follows: as_table: This displays fields in the <tr> <td> tags as_ul: This displays the form fields in the <li> tags as_p: This displays the form fields in the <p> tags So, we quickly wrote a secure form with error handling and CSRF protection through Django forms. The form based on a model ModelForms are Django forms based on models. The fields of these forms are automatically generated from the model that we have defined. Indeed, developers are often required to create forms with fields that correspond to those in the database to a non-MVC website. These particular forms have a save() method that will save the form data in a new record. The supervisor creation form To broach ModelForms, we will take, for example, the addition of a supervisor. For this, we will create a new page. For this, we will create the following URL: url(r'^create-supervisor$', 'TasksManager.views.create_supervisor.page', name="create_supervisor"), Our view will contain the following code: from django.shortcuts import render from TasksManager.models import Supervisor from django import forms from django.http import HttpResponseRedirect from django.core.urlresolvers import reverse def page(request): if len(request.POST) > 0: form = Form_supervisor(request.POST) if form.is_valid(): form.save(commit=True) # If the form is valid, we store the data in a model record in the form. return HttpResponseRedirect(reverse('public_index')) # This line is used to redirect to the specified URL. 
We use the reverse() function to get the URL from its name defines urls.py. else: return render(request, 'en/public/create_supervisor.html', {'form': form}) else: form = Form_supervisor() return render(request, 'en/public/create_supervisor.html', {'form': form}) class Form_supervisor(forms.ModelForm): # Here we create a class that inherits from ModelForm. class Meta: # We extend the Meta class of the ModelForm. It is this class that will allow us to define the properties of ModelForm. model = Supervisor # We define the model that should be based on the form. exclude = ('date_created', 'last_connexion', ) # We exclude certain fields of this form. It would also have been possible to do the opposite. That is to say with the fields property, we have defined the desired fields in the form. As seen in the line exclude = ('date_created', 'last_connexion', ), it is possible to restrict the form fields. Both the exclude and fields properties must be used correctly. Indeed, these properties receive a tuple of the fields to exclude or include as arguments. They can be described as follows: exclude: This is used in the case of an accessible form by the administrator. Because, if you add a field in the model, it will be included in the form. fields: This is used in cases in which the form is accessible to users. Indeed, if we add a field in the model, it will not be visible to the user. For example, we have a website selling royalty-free images with a registration form based on ModelForm. The administrator adds a credit field in the extended model of the user. If the developer has used an exclude property in some of the fields and did not add credits, the user will be able to take as many credits as he/she wants. We will resume our previous template, where we will change the URL present in the attribute action of the <form> tag: {% url "create_supervisor" %} This example shows us that ModelForms can save a lot of time in development by having a form that can be customized (by modifying the validation, for example). Summary This article discusses Django forms. It explains how to create forms with Django and how to treat them. Resources for Article: Further resources on this subject: So, what is Django? [article] Creating an Administration Interface in Django [article] Django Debugging Overview [article]
Working with Live Data and AngularJS

Packt
12 Jun 2014
14 min read
(For more resources related to this topic, see here.) Big Data is a new field that is growing every day. HTML5 and JavaScript applications are being used to showcase these large volumes of data in many new interesting ways. Some of the latest client implementations are being accomplished with libraries such as AngularJS. This is because of its ability to efficiently handle and organize data in many forms. Making business-level decisions off of real-time data is a revolutionary concept. Humans have only been able to fathom metrics based off of large-scale systems, in real time, for the last decade at most. During this time, the technology to collect large amounts of data has grown tremendously, but the high-level applications that use this data are only just catching up. Anyone can collect large amounts of data with today's complex distributed systems. Displaying this data in different formats that allow for any level of user to digest and understand its meaning is currently the main portion of what the leading-edge technology is trying to accomplish. There are so many different formats that raw data can be displayed in. The trick is to figure out the most efficient ways to showcase patterns and trends, which allow for more accurate business-level decisions to be made. We live in a fast paced world where everyone wants something done in real time. Load times must be in milliseconds, new features are requested daily, and deadlines get shorter and shorter. The Web gives companies the ability to generate revenue off a completely new market and AngularJS is on the leading edge. This new market creates many new requirements for HTML5 applications. JavaScript applications are becoming commonplace in major companies. These companies are using JavaScript to showcase many different types of data from inward to outward facing products. Working with live data sets in client-side applications is a common practice and is the real world standard. Most of the applications today use some type of live data to accomplish some given set of tasks. These tasks rely on this data to render views that the user can visualize and interact with. There are many advantages of working with the Web for data visualization, and we are going to showcase how these tie into an AngularJS application. AngularJS offers different methods to accomplish a view that is in charge of elegantly displaying large amounts of data in very flexible and snappy formats. Some of these different methods feed directives' data that has been requested and resolved, while others allow the directive to maintain control of the requests. We will go over these different techniques of how to efficiently get live data into the view layer by creating different real-world examples. We will also go over how to properly test directives that rely on live data to achieve their view successfully. Techniques that drive directives Most standard data requirements for a modern application involve an entire view that depends on a set of data. This data should be dependent on the current state of the application. The state can be determined in different ways. A common tactic is to build URLs that replicate a snapshot of the application's state. This can be done with a combination of URL paths and parameters. URL paths and parameters are what you will commonly see change when you visit a website and start clicking around. An AngularJS application is made up of different route configurations that use the URL to determine which action to take. 
Each configuration will have an associated controller, template, and other forms of options. These configurations work in unison to get data into the application in the most efficient ways. AngularUI also offers its own routing system. This UI-Router is a simple system built on complex concepts, which allows nested views to be controlled by different state options. This concept yields the same result as ngRoute, which is to get data into the controller; however, UI-Router does it in a more eloquent way, which creates more options. AngularJS 2.0 will contain a hybrid router that utilizes the best of each. Once the controller gets the data, it feeds the retrieved data to the template views. The template is what holds the directives that are created to perform the view layer functionality. The controller feeds directives' data, which forces the directives to rely on the controllers to be in charge of the said data. This data can either be fed immediately after the route configurations are executed or the application can wait for the data to be resolved. AngularJS offers you the ability to make sure that data requests have been successfully accomplished before any controller logic is executed. The method is called resolving data, and it is utilized by adding the resolve functions to the route configurations. This allows you to write the business logic in the controller in a synchronous manner, without having to write callbacks, which can be counter-intuitive. The XHR extensions of AngularJS are built using promise objects. These promise objects are basically a way to ensure that data has been successfully retrieved or to verify whether an error has occurred. Since JavaScript embraces callbacks at the core, there are many points of failure with respect to timing issues of when data is ready to be worked with. This is where libraries such as the Q library come into play. The promise object allows the execution thread to resemble a more synchronous flow, which reduces complexity and increases readability. The $q library The $q factory is a lite instantiation of the formally accepted Q library (https://github.com/kriskowal/q). This lite package contains only the functions that are needed to defer JavaScript callbacks asynchronously, based on the specifications provided by the Q library. The benefits of using this object are immense, when working with live data. Basically, the $q library allows a JavaScript application to mimic synchronous behavior when dealing with asynchronous data requests or methods that are not thread blocked by nature. This means that we can now successfully write our application's logic in a way that follows a synchronous flow. ES6 (ECMAScript6) incorporates promises at its core. This will eventually alleviate the need, for many functions inside the $q library or the entire library itself, in AngularJS 2.0. The core AngularJS service that is related to CRUD operations is called $http. This service uses the $q library internally to allow the powers of promises to be used anywhere a data request is made. Here is an example of a service that uses the $q object in order to create an easy way to resolve data in a controller. Refer to the following code: this.getPhones = function() { var request = $http.get('phones.json'), promise; promise = request.then(function(response) { return response.data; },function(errorResponse){ return errorResponse; }); return promise; } Here, we can see that the phoneService function uses the $http service, which can request for all the phones. 
The phoneService function creates a new request object, that calls a then function that returns a promise object. This promise object is returned synchronously. Once the data is ready, the then function is called and the correct data response is returned. This service is best showcased correctly when used in conjunction with a resolve function that feeds data into a controller. The resolve function will accept the promise object being returned and will only allow the controller to be executed once all of the phones have been resolved or rejected. The rest of the code that is needed for this example is the application's configuration code. The config process is executed on the initialization of the application. This is where the resolve function is supposed to be implemented. Refer to the following code: var app = angular.module('angularjs-promise-example',['ngRoute']); app.config(function($routeProvider){ $routeProvider.when('/', { controller: 'PhoneListCtrl', templateUrl: 'phoneList.tpl.html', resolve: { phones: function(phoneService){ return phoneService.getPhones(); } } }).otherwise({ redirectTo: '/' }); }) app.controller('PhoneListCtrl', function($scope, phones) { $scope.phones = phones; }); A live example of this basic application can be found at http://plnkr.co/edit/f4ZDCyOcud5WSEe9L0GO?p=preview. Directives take over once the controller executes its initial context. This is where the $compile function goes through all of its stages and links directives to the controller's template. The controller will still be in charge of driving the data that is sitting inside the template view. This is why it is important for directives to know what to do when their data changes. How should data be watched for changes? Most directives are on a need-to-know basis about the details of how they receive the data that is in charge of their view. This is a separation of logic that reduces cyclomatic complexity in an application. The controllers should be in charge of requesting data and passing this data to directives, through their associated $scope object. Directives should be in charge of creating DOM based on what data they receive and when the data changes. There are an infinite number of possibilities that a directive can try to achieve once it receives its data. Our goal is to showcase how to watch live data for changes and how to make sure that this works at scale so that our directives have the opportunity to fulfill their specific tasks. There are three built-in ways to watch data in AngularJS. Directives use the following methods to carry out specific tasks based on the different conditions set in the source of the program: Watching an object's identity for changes Recursively watching all of the object's properties for changes Watching just the top level of an object's properties for changes Each of these methods has its own specific purpose. The first method can be used if the variable that is being watched is a primitive type. The second type of method is used for deep comparisons between objects. The third type is used to do a shallow watch on an array of any type or just on a normal object. Let's look at an example that shows the last two watcher types. This example is going to use jsPerf to showcase our logic. We are leaving the first watcher out because it only watches primitive types and we will be watching many objects for different levels of equality. 
This example sets the $scope variable in the app's run function because we want to make sure that the jsPerf test resets each data set upon initialization. Refer to the following code: app.run(function($rootScope) { $rootScope.data = [ {'bob': true}, {'frank': false}, {'jerry': 'hey'}, {'bargle':false}, {'bob': true}, {'bob': true}, {'frank': false}, {'jerry':'hey'},{'bargle': false},{'bob': true},{'bob': true},{'frank': false}]; }); This run function sets up our data object that we will watch for changes. This will be constant throughout every test we run and will reset back to this form at the beginning of each test. Doing a deep watch on $rootScope.data This watch function will do a deep watch on the data object. The true flag is the key to setting off a deep watch. The purpose of a deep comparison is to go through every object property and compare it for changes on every digest. This is an expensive function and should be used only when necessary. Refer to the following code: app.service('Watch', function($rootScope) { return { run: function() { $rootScope.$watch('data', function(newVal, oldVal) { },true); //the digest is here because of the jsPerf test. We are using thisrun function to mimic a real environment. $rootScope.$digest(); } }; }); Doing a shallow watch on $rootScope.data The shallow watch is called whenever a top-level object is changed in the data object. This is less expensive because the application does not have to traverse n levels of data. Refer to the following code: app.service('WatchCollection', function($rootScope) { return { run: function() { $rootScope.$watchCollection('data', function(n, o) { }); $rootScope.$digest(); } }; }); During each individual test, we get each watcher service and call its run function. This fires the watcher on initialization, and then we push another test object to the data array, which fires the watch's trigger function again. That is the end of the test. We are using jsperf.com to show the results. Note that the watchCollection function is much faster and should be used in cases where it is acceptable to shallow watch an object. The example can be found at http://jsperf.com/watchcollection-vs-watch/5. Refer to the following screenshot: This test implies that the watchCollection function is a better choice to watch an array of objects that can be shallow watched for changes. This test is also true for an array of strings, integers, or floats. This brings up more interesting points, such as the following: Does our directive depend on a deep watch of the data? Do we want to use the $watch function, even though it is slow and memory taxing? Is it possible to use the $watch function if we are using large data objects? The directives that have been used in this book have used the watch function to watch data directly, but there are other methods to update the view if our directives depend on deep watchers and very large data sets. Directives can be in charge There are some libraries that believe that elements can be in charge of when they should request data. Polymer (http://www.polymer-project.org/) is a JavaScript library that allows DOM elements to control how data is requested, in a declarative format. This is a slight shift from the processes that have been covered so far in this article, when thinking about what directives are meant for and how they should receive data. Let's come up with an actual use case that could possibly allow this type of behavior. Let's consider a page that has many widgets on it. 
A widget is a directive that needs a set of large data objects to render its view. To be more specific, lets say we want to show a catalog of phones. Each phone has a very large amount of data associated with it, and we want to display this data in a very clean simple way. Since watching large data sets can be very expensive, what will allow directives to always have the data they require, depending on the state of the application? One option is to not use the controller to resolve the Big Data and inject it into a directive, but rather to use the controller to request for directive configurations that tell the directive to request certain data objects. Some people would say this goes against normal conventions, but I say it's necessary when dealing with many widgets in the same view, which individually deal with large amounts of data. This method of using directives to determine when data requests should be made is only suggested if many widgets on a page depend on large data sets. To create this in a real-life example, let's take the phoneService function, which was created earlier, and add a new method to it called getPhone. Refer to the following code: this.getPhone = function(config) { return $http.get(config.url); }; Now, instead of requesting for all the details on the initial call, the original getPhones method only needs to return phone objects with a name and id value. This will allow the application to request the details on demand. To do this, we do not need to alter the getPhones method that was created earlier. We only need to alter the data that is supplied when the request is made. It should be noted that any directive that is requesting data should be tested to prove that it is requesting the correct data at the right time. Testing directives that control data Since the controller is usually in charge of how data is incorporated into the view, many directives do not have to be coupled with logic related to how that data is retrieved. Keeping things separate is always good and is encouraged, but in some cases, it is necessary that directives and XHR logic be used together. When these use cases reveal themselves in production, it is important to test them properly. The tests in the book use two very generic steps to prove business logic. These steps are as follows: Create, compile, and link DOM to the AngularJS digest cycle Test scope variables and DOM interactions for correct outputs Now, we will add one more step to the process. This step will lie in the middle of the two steps. The new step is as follows: Make sure all data communication is fired correctly AngularJS makes it very simple to allow additional resource related logic. This is because they have a built-in backend service mock, which allows many different ways to create fake endpoints that return structured data. The service is called $httpBackend.
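As a rough illustration of that extra step, the following Jasmine-style test sketch uses $httpBackend to verify that a hypothetical phoneWidget directive, one that fetches its data from the URL passed in its config-url attribute, requests the expected endpoint when it is compiled and linked. The directive name, module name, URL, and response fields are all assumptions made for the sake of the example:

    describe('phoneWidget', function() {
      var $compile, $rootScope, $httpBackend;

      beforeEach(module('angularjs-promise-example'));

      beforeEach(inject(function(_$compile_, _$rootScope_, _$httpBackend_) {
        $compile = _$compile_;
        $rootScope = _$rootScope_;
        $httpBackend = _$httpBackend_;
      }));

      afterEach(function() {
        // fail the test if an expected request was never made or an
        // unexpected one slipped through
        $httpBackend.verifyNoOutstandingExpectation();
        $httpBackend.verifyNoOutstandingRequest();
      });

      it('requests the phone details it was configured with', function() {
        // step 1: create, compile, and link the DOM
        $httpBackend.expectGET('phone-details/nexus.json')
                    .respond({ name: 'Nexus', weight: '139g' });
        var element = $compile(
          '<phone-widget config-url="phone-details/nexus.json"></phone-widget>'
        )($rootScope);
        $rootScope.$digest();

        // step 2 (the new one): make sure the data communication fired correctly
        $httpBackend.flush();

        // step 3: test scope variables and DOM interactions for correct output
        expect(element.text()).toContain('Nexus');
      });
    });

The expectGET call is what asserts that the directive asked for the right resource at the right time; flush then hands the faked response back so that the usual DOM assertions can run.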
Building a Web Application with PHP and MariaDB - Introduction to caching

Packt
11 Jun 2014
4 min read
Let's begin with database caching. All the data for our application is stored on MariaDB. When a request is made for retrieving the list of available students, we run a query on our course_registry database. Running a single query at a time is simple but as the application gets popular, we will have more concurrent users. As the number of concurrent connections to the database increases, we will have to make sure that our database server is optimized to handle that load. In this section, we will look at the different types of caching that can be performed in the database. Let's start with query caching. Query caching is available by default on MariaDB; to verify if the installation has a query cache, we will use the have_query_cache global variable. Let's use the SHOW VARIABLES command to verify if the query cache is available on our MariaDB installation, as shown in the following screenshot: Now that we have a query cache, let's verify if it is active. To do this, we will use the query_cache_type global variable, shown as follows: From this query, we can verify that the query cache is turned on. Now, let's take a look at the memory that is allocated for the query cache by using the query_cache_size command, shown as follows: The query cache size is currently set to 64 MB; let's modify our query cache size to 128 MB. The following screenshot shows the usage of the SET GLOBAL syntax: We use the SET GLOBAL syntax to set the value for the query_cache_size command, and we verify this by reloading the value of the query_cache_size command. Now that we have the query cache turned on and working, let's look at a few statistics that would give us an idea of how often the queries are being cached. To retrieve this information, we will query the Qcache variable, as shown in the following screenshot: From this output, we can verify whether we are retrieving a lot of statistics about the query cache. One thing to verify is the Qcache_not_cached variable that is high for our database. This is due to the use of prepared statements. The prepared statements are not cached by MariaDB. Another important variable to keep an eye on is the Qcache_lowmem_prunes variable that will give us an idea of the number of queries that were deleted due to low memory. This will indicate that the query cache size has to be increased. From these stats, we understand that for as long as we use the prepared statements, our queries will not be cached on the database server. So, we should use a combination of prepared statements and raw SQL statements, depending on our use cases. Now that we understand a good bit about query caches, let's look at the other caches that MariaDB provides, such as the table open cache, the join cache, and the memory storage cache. The table open cache allows us to define the number of tables that can be left open by the server to allow faster look-ups. This will be very helpful where there is a huge number of requests for a table, and so the table need not be opened for every request. The join buffer cache is commonly used for queries that perform a full join, wherein there are no indexes to be used for finding rows for the next table. Normally, indexes help us avoid these problems. The memory storage cache, previously known as the heap cache, is commonly is used for read-only caches of data from other tables or for temporary work areas. 
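The screenshots referred to above are produced with ordinary SQL statements, and the same pattern applies to these additional caches. As a rough illustration, and with sizes that are arbitrary examples rather than tuning advice, we could inspect and resize them as follows:

    -- inspect the current settings for the table open cache and the join buffer
    SHOW VARIABLES LIKE 'table_open_cache';
    SHOW VARIABLES LIKE 'join_buffer_size';

    -- resize them for the running server (example values only)
    SET GLOBAL table_open_cache = 400;
    SET GLOBAL join_buffer_size = 524288;

    -- a rapidly growing Opened_tables counter suggests the table open
    -- cache is too small for the current workload
    SHOW GLOBAL STATUS LIKE 'Opened_tables';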
Let's look at the variables that are available in MariaDB, as shown in the following screenshot: Database caching is a very important step towards making our application scalable. However, it is important to understand when to cache, the correct caching techniques, and the size of each cache. Memory for caching has to be allocated very carefully, as the application can run out of memory if too much space is allocated. A good way to size the caches is to run benchmarks to see how the queries perform, and to keep a list of popular queries that run often so that we can begin by caching and optimizing the database for those queries. Now that we have a good understanding of database caching, let's proceed to application-level caching. Resources for Article: Introduction to Kohana PHP Framework Creating and Consuming Web Services in CakePHP 1.3 Installing MariaDB on Windows and Mac OS X
Selecting and initializing the database

Packt
10 Jun 2014
7 min read
(For more resources related to this topic, see here.) In other words, it's simpler than a SQL database, and very often stores information in the key value type. Usually, such solutions are used when handling and storing large amounts of data. It is also a very popular approach when we need flexible schema or when we want to use JSON. It really depends on what kind of system we are building. In some cases, MySQL could be a better choice, while in some other cases, MongoDB. In our example blog, we're going to use both. In order to do this, we will need a layer that connects to the database server and accepts queries. To make things a bit more interesting, we will create a module that has only one API, but can switch between the two database models. Using NoSQL with MongoDB Let's start with MongoDB. Before we start storing information, we need a MongoDB server running. It can be downloaded from the official page of the database https://www.mongodb.org/downloads. We are not going to handle the communication with the database manually. There is a driver specifically developed for Node.js. It's called mongodb and we should include it in our package.json file. After successful installation via npm install, the driver will be available in our scripts. We can check this as follows: "dependencies": { "mongodb": "1.3.20" } We will stick to the Model-View-Controller architecture and the database-related operations in a model called Articles. We can see this as follows: var crypto = require("crypto"), type = "mongodb", client = require('mongodb').MongoClient, mongodb_host = "127.0.0.1", mongodb_port = "27017", collection; module.exports = function() { if(type == "mongodb") { return { add: function(data, callback) { ... }, update: function(data, callback) { ... }, get: function(callback) { ... }, remove: function(id, callback) { ... } } } else { return { add: function(data, callback) { ... }, update: function(data, callback) { ... }, get: function(callback) { ... }, remove: function(id, callback) { ... } } } } It starts with defining a few dependencies and settings for the MongoDB connection. Line number one requires the crypto module. We will use it to generate unique IDs for every article. The type variable defines which database is currently accessed. The third line initializes the MongoDB driver. We will use it to communicate with the database server. After that, we set the host and port for the connection and at the end a global collection variable, which will keep a reference to the collection with the articles. In MongoDB, the collections are similar to the tables in MySQL. The next logical step is to establish a database connection and perform the needed operations, as follows: connection = 'mongodb://'; connection += mongodb_host + ':' + mongodb_port; connection += '/blog-application'; client.connect(connection, function(err, database) { if(err) { throw new Error("Can't connect"); } else { console.log("Connection to MongoDB server successful."); collection = database.collection('articles'); } }); We pass the host and the port, and the driver is doing everything else. Of course, it is a good practice to handle the error (if any) and throw an exception. In our case, this is especially needed because without the information in the database, the frontend has nothing to show. 
The rest of the module contains methods to add, edit, retrieve, and delete records: return { add: function(data, callback) { var date = new Date(); data.id = crypto.randomBytes(20).toString('hex'); data.date = date.getFullYear() + "-" + date.getMonth() + "-" + date.getDate(); collection.insert(data, {}, callback || function() {}); }, update: function(data, callback) { collection.update( {ID: data.id}, data, {}, callback || function(){ } ); }, get: function(callback) { collection.find({}).toArray(callback); }, remove: function(id, callback) { collection.findAndModify( {ID: id}, [], {}, {remove: true}, callback ); } } The add and update methods accept the data parameter. That's a simple JavaScript object. For example, see the following code: { title: "Blog post title", text: "Article's text here ..." } The records are identified by an automatically generated unique id. The update method needs it in order to find out which record to edit. All the methods also have a callback. That's important, because the module is meant to be used as a black box, that is, we should be able to create an instance of it, operate with the data, and at the end continue with the rest of the application's logic. Using MySQL We're going to use an SQL type of database with MySQL. We will add a few more lines of code to the already working Articles.js model. The idea is to have a class that supports the two databases like two different options. At the end, we should be able to switch from one to the other, by simply changing the value of a variable. Similar to MongoDB, we need to first install the database to be able use it. The official download page is http://www.mysql.com/downloads. MySQL requires another Node.js module. It should be added again to the package.json file. We can see the module as follows: "dependencies": { "mongodb": "1.3.20", "mysql": "2.0.0" } Similar to the MongoDB solution, we need to firstly connect to the server. To do so, we need to know the values of the host, username, and password fields. And because the data is organized in databases, a name of the database. In MySQL, we put our data into different databases. So, the following code defines the needed variables: var mysql = require('mysql'), mysql_host = "127.0.0.1", mysql_user = "root", mysql_password = "", mysql_database = "blog_application", connection; The previous example leaves the password field empty but we should set the proper value of our system. The MySQL database requires us to define a table and its fields before we start saving data. So, consider the following code: CREATE TABLE IF NOT EXISTS `articles` ( `id` int(11) NOT NULL AUTO_INCREMENT, `title` longtext NOT NULL, `text` longtext NOT NULL, `date` varchar(100) NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1 ; Once we have a database and its table set, we can continue with the database connection, as follows: connection = mysql.createConnection({ host: mysql_host, user: mysql_user, password: mysql_password }); connection.connect(function(err) { if(err) { throw new Error("Can't connect to MySQL."); } else { connection.query("USE " + mysql_database, function(err, rows, fields) { if(err) { throw new Error("Missing database."); } else { console.log("Successfully selected database."); } }) } }); The driver provides a method to connect to the server and execute queries. The first executed query selects the database. If everything is ok, you should see Successfully selected database as an output in your console. Half of the job is done. 
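Before writing the MySQL variants, it is worth keeping in mind how the rest of the application consumes this module: whichever database type is selected, the controller only ever sees the same four methods. A rough usage sketch, in which the relative path, the sample values, and the callback shape follow the MongoDB implementation above, could look like this:

    // sketch of a controller using the Articles model as a black box
    var Articles = require('../models/Articles')();

    Articles.add({
        title: "Blog post title",
        text: "Article's text here ..."
    }, function(err) {
        if(err) { return console.log("Could not save the article."); }
        // fetch everything back for the view layer; note that the exact
        // arguments passed to this callback depend on the driver, so both
        // implementations should agree on a single shape
        Articles.get(function(err, articles) {
            console.log(articles);
        });
    });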
What we should do now is replicate the methods returned in the first MongoDB implementation. We need to do this because when we switch to the MySQL usage, the code using the class will not work. And by replicating them we mean that they should have the same names and should accept the same arguments. If we do everything correctly, at the end our application will support two types of databases. And all we have to do is change the value of the type variable: return { add: function(data, callback) { var date = new Date(); var query = ""; query += "INSERT INTO articles (title, text, date) VALUES ("; query += connection.escape(data.title) + ", "; query += connection.escape(data.text) + ", "; query += "'" + date.getFullYear() + "-" + date.getMonth() + "-" + date.getDate() + "'"; query += ")"; connection.query(query, callback); }, update: function(data, callback) { var query = "UPDATE articles SET "; query += "title=" + connection.escape(data.title) + ", "; query += "text=" + connection.escape(data.text) + " "; query += "WHERE id='" + data.id + "'"; connection.query(query, callback); }, get: function(callback) { var query = "SELECT * FROM articles ORDER BY id DESC"; connection.query(query, function(err, rows, fields) { if(err) { throw new Error("Error getting."); } else { callback(rows); } }); }, remove: function(id, callback) { var query = "DELETE FROM articles WHERE id='" + id + "'"; connection.query(query, callback); } } The code is a little longer than the one generated in the first MongoDB variant. That's because we needed to construct MySQL queries from the passed data. Keep in mind that we have to escape the information, which comes to the module. That's why we use connection.escape(). With these lines of code, our model is completed. Now we can add, edit, remove, or get data. Summary In this article, we saw how to select and initialize database using NoSQL with MongoDB and using MySQL required for writing a blog application with Node.js and AngularJS. Resources for Article: Further resources on this subject: So, what is Node.js? [Article] Understanding and Developing Node Modules [Article] An Overview of the Node Package Manager [Article]
Automating performance analysis with YSlow and PhantomJS

Packt
10 Jun 2014
12 min read
(For more resources related to this topic, see here.) Getting ready To run this article, the phantomjs binary will need to be accessible to the continuous integration server, which may not necessarily share the same permissions or PATH as our user. We will also need a target URL. We will use the PhantomJS port of the YSlow library to execute the performance analysis on our target web page. The YSlow library must be installed somewhere on the filesystem that is accessible to the continuous integration server. For our example, we have placed the yslow.js script in the tmp directory of the jenkins user's home directory. To find the jenkins user's home directory on a POSIX-compatible system, first switch to that user using the following command: sudo su - jenkins Then print the home directory to the console using the following command: echo $HOME We will need to have a continuous integration server set up where we can configure the jobs that will execute our automated performance analyses. The example that follows will use the open source Jenkins CI server. Jenkins CI is too large a subject to introduce here, but this article does not assume any working knowledge of it. For information about Jenkins CI, including basic installation or usage instructions, or to obtain a copy for your platform, visit the project website at http://jenkins-ci.org/. Our article uses version 1.552. The combination of PhantomJS and YSlow is in no way unique to Jenkins CI. The example aims to provide a clear illustration of automated performance testing that can easily be adapted to any number of continuous integration server environments. The article also uses several plugins on Jenkins CI to help facilitate our automated testing. These plugins include: Environment Injector Plugin JUnit Attachments Plugin TAP Plugin xUnit Plugin To run that demo site, we must have Node.js installed. In a separate terminal, change to the phantomjs-sandbox directory (in the sample code's directory), and start the app with the following command: node app.js How to do it… To execute our automated performance analyses in Jenkins CI, the first thing that we need to do is set up the job as follows: Select the New Item link in Jenkins CI. Give the new job a name (for example, YSlow Performance Analysis), select Build a free-style software project, and then click on OK. To ensure that the performance analyses are automated, we enter a Build Trigger for the job. Check off the appropriate Build Trigger and enter details about it. For example, to run the tests every two hours, during business hours, Monday through Friday, check Build periodically and enter the Schedule as H 9-16/2 * * 1-5. In the Build block, click on Add build step and then click on Execute shell. In the Command text area of the Execute Shell block, enter the shell commands that we would normally type at the command line, for example: phantomjs ${HOME}/tmp/yslow.js -i grade -threshold "B" -f junit http ://localhost:3000/css-demo > yslow.xml In the Post-build Actions block, click on Add post-build action and then click on Publish JUnit test result report. In the Test report XMLs field of the Publish JUnit Test Result Report block, enter *.xml. Lastly, click on Save to persist the changes to this job. Our performance analysis job should now run automatically according to the specified schedule; however, we can always trigger it manually by navigating to the job in Jenkins CI and clicking on Build Now. 
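Before relying on the scheduled runs, it can save time to execute the same command once by hand, ideally as the jenkins user, to confirm that the phantomjs binary, the yslow.js path, and the target URL all resolve correctly. A quick sanity check, typed at the prompt and using an arbitrary output path, could look like this:

    # switch to the Jenkins CI user so that we test with its PATH and permissions
    sudo su - jenkins

    # run the analysis once and keep the JUnit output for inspection
    phantomjs ${HOME}/tmp/yslow.js -i grade -threshold "B" -f junit \
        http://localhost:3000/css-demo > /tmp/yslow-check.xml

    # a populated JUnit file means the analysis ran end to end
    cat /tmp/yslow-check.xml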
After a few of the performance analyses have completed, we can navigate to those jobs in Jenkins CI and see the results shown in the following screenshots: The landing page for a performance analysis project in Jenkins CI Note the Test Result Trend graph with the successes and failures. The Test Result report page for a specific build Note that the failed tests in the overall analysis are called out and that we can expand specific items to view their details. The All Tests view of the Test Result report page for a specific build Note that all tests in the performance analysis are listed here, regardless of whether they passed or failed, and that we can click into a specific test to view its details. How it works… The driving principle behind this article is that we want our continuous integration server to periodically and automatically execute the YSlow analyses for us so that we can monitor our website's performance over time. This way, we can see whether our changes are having an effect on overall site performance, receive alerts when performance declines, or even fail builds if we fall below our performance threshold. The first thing that we do in this article is set up the build job. In our example, we set up a new job that was dedicated to the YSlow performance analysis task. However, these steps could be adapted such that the performance analysis task is added onto an existing multipurpose job. Next, we configured when our job will run, adding Build Trigger to run the analyses according to a schedule. For our schedule, we selected H 9-16/2 * * 1-5, which runs the analyses every two hours, during business hours, on weekdays. While the schedule that we used is fine for demonstration purposes, we should carefully consider the needs of our project—chances are that a different Build Trigger will be more appropriate. For example, it may make more sense to select Build after other projects are built, and to have the performance analyses run only after the new code has been committed, built, and deployed to the appropriate QA or staging environment. Another alternative would be to select Poll SCM and to have the performance analyses run only after Jenkins CI detects new changes in source control. With the schedule configured, we can apply the shell commands necessary for the performance analyses. As noted earlier, the Command text area accepts the text that we would normally type on the command line. Here we type the following: phantomjs: This is for the PhantomJS executable binary ${HOME}/tmp/yslow.js: This is to refer to the copy of the YSlow library accessible to the Jenkins CI user -i grade: This is to indicate that we want the "Grade" level of report detail -threshold "B": This is to indicate that we want to fail builds with an overall grade of "B" or below -f junit: This is to indicate that we want the results output in the JUnit format http://localhost:3000/css-demo: This is typed in as our target URL > yslow.xml: This is to redirect the JUnit-formatted output to that file on the disk What if PhantomJS isn't on the PATH for the Jenkins CI user? A relatively common problem that we may experience is that, although we have permission on Jenkins CI to set up new build jobs, we are not the server administrator. It is likely that PhantomJS is available on the same machine where Jenkins CI is running, but the jenkins user simply does not have the phantomjs binary on its PATH. In these cases, we should work with the person administering the Jenkins CI server to learn its path. 
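If we do have shell access, a quick check of our own can narrow things down before that conversation. This is only a sketch and assumes a POSIX-compatible system:
# Switch to the jenkins user and see whether the binary resolves
sudo su - jenkins
which phantomjs || echo "phantomjs is not on this user's PATH"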
Once we have the PhantomJS path, we can do the following: click on Add build step and then on Inject environment variables; drag-and-drop the Inject environment variables block to ensure that it is above our Execute shell block; in the Properties Content text area, apply the PhantomJS binary's path to the PATH variable, as we would in any other script as follows: PATH=/path/to/phantomjs/bin:${PATH} After setting the shell commands to execute, we jump into the Post-build Actions block and instruct Jenkins CI where it can find the JUnit XML reports. As our shell command is redirecting the output into a file that is directly in the workspace, it is sufficient to enter an unqualified *.xml here. Once we have saved our build job in Jenkins CI, the performance analyses can begin right away! If we are impatient for our first round of results, we can click on Build Now for our job and watch as it executes the initial performance analysis. As the performance analyses are run, Jenkins CI will accumulate the results on the filesystem, keeping them until they are either manually removed or until a discard policy removes old build information. We can browse these accumulated jobs in the web UI for Jenkins CI, clicking on the Test Result link to drill into them. There's more… The first thing that bears expanding upon is that we should be thoughtful about what we use as the target URL for our performance analysis job. The YSlow library expects a single target URL, and as such, it is not prepared to handle a performance analysis job that is otherwise configured to target two or more URLs. As such, we must select a strategy to compensate for this, for example: Pick a representative page: We could manually go through our site and select the single page that we feel best represents the site as a whole. For example, we could pick the page that is "most average" compared to the other pages ("most will perform at about this level"), or the page that is most likely to be the "worst performing" page ("most pages will perform better than this"). With our representative page selected, we can then extrapolate performance for other pages from this specimen. Pick a critical page: We could manually select the single page that is most sensitive to performance. For example, we could pick our site's landing page (for example, "it is critical to optimize performance for first-time visitors"), or a product demo page (for example, "this is where conversions happen, so this is where performance needs to be best"). Again, with our performance-sensitive page selected, we can optimize the general cases around the specific one. Set up multiple performance analysis jobs: If we are not content to extrapolate site performance from a single specimen page, then we could set up multiple performance analysis jobs—one for each page on the site that we want to test. In this way, we could (conceivably) set up an exhaustive performance analysis suite. Unfortunately, the results will not roll up into one; however, once our site is properly tuned, we need to only look for the telltale red ball of a failed build in Jenkins CI. The second point worth considering is—where do we point PhantomJS and YSlow for the performance analysis? And how does the target URL's environment affect our interpretation of the results? If we are comfortable running our performance analysis against our production deploys, then there is not much else to discuss—we are assessing exactly what needs to be assessed. 
But if we are analyzing performance in production, then it's already too late—the slow code has already been deployed! If we have a QA or staging environment available to us, then this is potentially better; we can deploy new code to one of these environments for integration and performance testing before putting it in front of the customers. However, these environments are likely to be different from production despite our best efforts. For example, though we may be "doing everything else right", perhaps our staging server causes all traffic to come back from a single hostname, and thus, we cannot properly mimic a CDN, nor can we use cookie-free domains. Do we lower our threshold grade? Do we deactivate or ignore these rules? How can we tell apart the false negatives from the real warnings? We should put some careful thought into this—but don't be disheartened—better to have results that are slightly off than to have no results at all! Using TAP format If JUnit formatted results turn out to be unacceptable, there is also a TAP plugin for Jenkins CI. Test Anything Protocol (TAP) is a plain text-based report format that is relatively easy for both humans and machines to read. With the TAP plugin installed in Jenkins CI, we can easily configure our performance analysis job to use it. We would just make the following changes to our build job: In the Command text area of our Execute shell block, we would enter the following command: phantomjs ${HOME}/tmp/yslow.js -i grade -threshold "B" -f tap http://localhost:3000/css-demo > yslow.tap In the Post-build Actions block, we would select Publish TAP Results instead of Publish JUnit test result report and enter yslow.tap in the Test results text field. Everything else about using TAP instead of JUnit-formatted results here is basically the same. The job will still run on the schedule we specify, Jenkins CI will still accumulate test results for comparison, and we can still explore the details of an individual test's outcomes. The TAP plugin adds an additional link in the job for us, TAP Extended Test Results, as shown in the following screenshot: One thing worth pointing out about using TAP results is that it is much easier to set up a single job to test multiple target URLs within a single website. We can enter multiple tests in the Execute Shell block (separating them with the && operator) and then set our Test Results target to be *.tap. This will conveniently combine the results of all our performance analyses into one. Summary In this article, we saw how to set up an automated performance analysis task on a continuous integration server (for example, Jenkins CI) using PhantomJS and the YSlow library. Resources for Article: Further resources on this subject: Getting Started [article] Introducing a feature of IntroJs [article] So, what is Node.js? [article]


Building a Private App

Packt
23 May 2014
14 min read
(For more resources related to this topic, see here.) Even though the app will be simple and only take a few hours to build, we'll still use good development practices to ensure we create a solid foundation. There are many different approaches to software development and discussing even a fraction of them is beyond the scope of this book. Instead, we'll use a few common concepts, such as requirements gathering, milestones, Test-Driven Development (TDD), frequent code check-ins, and appropriate commenting/documentation. Personal discipline in following development procedures is one of the best things a developer can bring to a project; it is even more important than writing code. This article will cover the following topics: The structure of the app we'll be building The development process Working with the Shopify API Using source control Deploying to production Signing up for Shopify Before we dive back into code, it would be helpful to get the task of setting up a Shopify store out of the way. Sign up as a Shopify partner by going to http://partners.shopify.com. The benefit of this is that partners can provision stores that can be used for testing. Go ahead and make one now before reading further. Keep your login information close at hand; we'll need it in just a moment. Understanding our workflow The general workflow for developing our application is as follows: Pull down the latest version of the master branch. Pick a feature to implement from our requirements list. Create a topic branch to keep our changes isolated. Write tests that describe the behavior desired by our feature. Develop the code until it passes all the tests. Commit and push the code into the remote repository. Pull down the latest version of the master branch and merge it with our topic branch. Run the test suite to ensure that everything still works. Merge the code back with the master branch. Commit and push the code to the remote repository. The previous list should give you a rough idea of what is involved in a typical software project involving multiple developers. The use of topic branches ensures that our work in progress won't affect other developers (called breaking the build) until we've confirmed that our code has passed all the tests and resolved any conflicts by merging in the latest stable code from the master branch. The practical upside of this methodology is that it allows bug fixes or work from another developer to be added to the project at any time without us having to worry about incomplete code polluting the build. This also gives us the ability to deploy production from a stable code base. In practice, a lot of projects will also have a production branch (or tagged release) that contains a copy of the code currently running in production. This is primarily in case of a server failure so that the application can be restored without having to worry about new features being released ahead of schedule, and secondly so that if a new deploy introduces bugs, it can easily be rolled back. Building the application We'll be building an application that allows Shopify storeowners to organize contests for their shoppers and randomly select a winner. Contests can be configured based on purchase history and timeframe. For example, a contest could be organized for all the customers who bought the newest widget within the last three days, or anyone who has made an order for any product in the month of August. 
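To make the goal concrete, here is a rough sketch, not the final implementation, of the kind of call the finished app will eventually make through the Shopify API gem once credentials are wired up; the filter parameter and the use of sample are illustrative only:
# Hypothetical sketch: pull orders from the last three days and pick a random winner
orders = ShopifyAPI::Order.find(:all, params: { created_at_min: 3.days.ago.iso8601 })
winner = orders.sample.customer
The rest of the article builds up the pieces (credentials, connection, data retrieval, and the winner picker) that make a call like this possible.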
To accomplish this, we'll need to be able to pull down order information from the Shopify store, generate a random winner, and show the storeowner the results. Let's start out by creating a list of requirements for our application. We'll use this list to break our development into discrete pieces so we can easily measure our progress and also keep our focus on the important features. Of course, it's difficult to make a complete list of all the requirements and have it stick throughout the development process, which is why a common strategy is to develop in iterations (or sprints). The result of an iteration is a working app that can be reviewed by the client so that the remaining features can be reprioritized if necessary. High-level requirements The requirements list comprises all the tasks we're going to accomplish in this article. The end result will be an application that we can use to run a contest for a single Shopify store. Included in the following list are any related database, business logic, and user interface coding necessary. Install a few necessary gems. Store Shopify API credentials. Connect to Shopify. Retrieve order information from Shopify. Retrieve product information from Shopify. Clean up the UI. Pick a winner from a list. Create contests. Now that we have a list of requirements, we can treat each one as a sprint. We will work in a topic branch and merge our code to the master branch at the end of the sprint. Installing a few necessary gems The first item on our list is to add a few code libraries (gems) to our application. Let's create a topic branch and do just that. To avoid confusion over which branch contains code for which feature, we can start the branch name with the requirement number. We'll additionally prepend the chapter number for clarity, so our format will be <chapter #>_<requirement #>_<branch name>. Execute the following command line in the root folder of the app: git checkout -b ch03_01_gem_updates This command will create a local branch called ch03_01_gem_updates that we will use to isolate our code for this feature. Once we've installed all the gems and verified that the application runs correctly, we'll merge our code back with the master branch. At a minimum we need to install the gems we want to use for testing. For this app we'll use RSpec. We'll need to use the development and test group to make sure the testing gems aren't loaded in production. Add the following code in bold to the block present in the Gemfile: group :development, :test do gem "sqlite3" # Helpful gems gem "better_errors" # improves error handling gem "binding_of_caller" # used by better errors # Testing frameworks gem 'rspec-rails' # testing framework gem "factory_girl_rails" # use factories, not fixtures gem "capybara" # simulate browser activity gem "fakeweb" # Automated testing gem 'guard' # automated execution of test suite upon change gem "guard-rspec" # guard integration with rspec # Only install the rb-fsevent gem if on Max OSX gem 'rb-fsevent' # used for Growl notifications end Now we need to head over to the terminal and install the gems via Bundler with the following command: bundle install The next step is to install RSpec: rails generate rspec:install The final step is to initialize Guard: guard init rspec This will create a Guard file, and fill it with the default code needed to detect the file changes. We can now restart our Rails server and verify that everything works properly. 
We have to do a full restart to ensure that any initialization files are properly picked up. Once we've ensured that our page loads without issue, we can commit our code and merge it back with the master branch: git add --all git commit -am "Added gems for testing" git checkout master git merge ch03_01_gem_updates git push Great! We've completed our first requirement. Storing Shopify API credentials In order to access our test store's API, we'll need to create a Private App and store the provided credentials there for future use. Fortunately, Shopify makes this easy for us via the Admin UI: Go to the Apps page. At the bottom of the page, click on the Create a private API key… link. Click on the Generate new Private App button. We'll now be provided with three important pieces of information: the API Key, password, and shared secret. In addition, we can see from the example URL field that we need to track our Shopify URL as well. Now that we have credentials to programmatically access our Shopify store, we can save this in our application. Let's create a topic branch and get to work: git checkout -b ch03_02_shopify_credentials Rails offers a generator called a scaffold that will create the database migration, model, controller, view files, and test stubs for us. Run the following from the command line to create the scaffold for the Account vertical (make sure it is all on one line): rails g scaffold Account shopify_account_url:string shopify_api_key:string shopify_password:string shopify_shared_secret:string We'll need to run the database migration to create the database table using the following commands: bundle exec rake db:migrate bundle exec rake db:migrate RAILS_ENV=test Use the following command to update the generated view files to make them bootstrap compatible: rails g bootstrap:themed Accounts -f Head over to http://localhost:3000/accounts and create a new account in our app that uses the Shopify information from the Private App page. It's worth getting Guard to run our test suite every time we make a change so we can ensure that we don't break anything. Open up a new terminal in the root folder of the app and start up Guard: bundle exec guard After booting up, Guard will automatically run all our tests. They should all pass because we haven't made any changes to the generated code. If they don't, you'll need to spend time sorting out any failures before continuing. The next step is to make the app more user friendly. We'll make a few changes now and leave the rest for you to do later. Update the layout file so it has accurate navigation. Bootstrap created several dummy links in the header navigation and sidebar. Update the navigation list in /app/views/layouts/application.html.erb to include the following code: <a class="brand" href="/">Contestapp</a> <div class="container-fluid nav-collapse"> <ul class="nav"> <li><%= link_to "Accounts", accounts_path%></li> </ul> </div><!--/.nav-collapse --> Add validations to the account model to ensure that all fields are required when creating/updating an account. Add the following lines to /app/models/account.rb: validates_presence_of :shopify_account_url validates_presence_of :shopify_api_key validates_presence_of :shopify_password validates_presence_of :shopify_shared_secret This will immediately cause the controller tests to fail due to the fact that it is not passing in all the required fields when attempting to submit the created form. If you look at the top of the file, you'll see some code that creates the :valid_attributes hash.
If you read the comment above it, you'll see that we need to update the hash to contain the following minimally required fields: # This should return the minimal set of attributes required # to create a valid Account. As you add validations to # Account, be sure to adjust the attributes here as well. let(:valid_attributes) { { "shopify_account_url" => "MyString", "shopify_password" => "MyString", "shopify_api_key" => "MyString", "shopify_shared_secret" => "MyString" } } This is a prime example of why having a testing suite is important. It keeps us from writing code that breaks other parts of the application, or in this case, helps us discover a weakness we might not have known we had: the ability to create a new account record without filling in any fields! Now that we have satisfied this requirement and all our tests pass, we can commit the code and merge it with the master branch: git add --all git commit -am "Account model and related files" git checkout master git merge ch03_02_shopify_credentials git push Excellent! We've now completed another critical piece! Connecting to Shopify Now that we have a test store to work with, we're ready to implement the code necessary to connect our app to Shopify. First, we need to create a topic branch: git checkout -b ch03_03_shopify_connection We are going to use the official Shopify gem to connect our app to our test store, as well as interact with the API. Add this to the Gemfile under the gem 'bootstrap-sass' line: gem 'shopify_api' Update the bundle from the command line: bundle install We'll also need to restart Guard in order for it to pick up the new gem. This is typically done by using a key combination like Ctrl + Z, or by typing exit and pressing the Enter key, and then starting Guard again. I've written a class that encapsulates the Shopify connection logic and initializes the global ShopifyAPI class that we can then use to interact with the API. You can find the code for this class in ch03_shopify_integration.rb. You'll need to copy the contents of this file to your app in a new file located at /app/services/shopify_integration.rb. The contents of the spec file ch03_shopify_integration_spec.rb need to be pasted in a new file located at /spec/services/shopify_integration_spec.rb. Using this class will allow us to execute something like ShopifyAPI::Order.find(:all) to get a list of orders, or ShopifyAPI::Product.find(1234) to retrieve the product with the ID 1234. The spec file contains tests for functionality that we haven't built yet and will initially fail. We'll fix this soon! We are going to add a Test Connection button to the account page that will give the user instant feedback as to whether or not the credentials are valid. Because we will be adding a new action to our application, we will need to first update controller, request, routing, and view tests before proceeding. Given the nature of this article, and because in this case we're connecting to an external service, topics such as mocking, test writing, and so on will have to be reviewed as homework. I recommend watching the excellent screencasts created by Ryan Bates at http://railscasts.com as a primer on testing in Rails. The first step is to update the resources :accounts route in the /config/routes.rb file with the following block: resources :accounts do member do get 'test_connection' end end Copy the controller code from ch03_accounts_controller.rb and replace the code in the /app/controllers/accounts_controller.rb file.
This new code adds the test_connection method as well as ensuring the account is loaded properly. Finally, we need to add a button to /app/views/account/show.html.erb that will call this action in div.form-actions: <%= link_to "Test Connection",test_connection_account_path(@account), :class => 'btn' %> If we view the account page in our browser, we can now test our Shopify integration. Assuming that everything was copied correctly, we should see a success message after clicking on the Test Connection button. If everything was not copied correctly, we'll see the message that the Shopify API returned to us as a clue to what isn't working. Once all the tests pass, we can commit the code and merge it with the master branch: git add --all git commit -am "Shopify connection and related UI" git checkout master git merge ch03_03_shopify_connection git push Having fun? Good, because things are about to get heavy. Summary: As you can see and understand this article explains briefly about, the integration with Shopify's API in order to retrieve product and order information from the shop. The UI is then streamlined a bit before the logic to create a contest is created. Resources for Article: Further resources on this subject: Integrating typeahead.js into WordPress and Ruby on Rails [Article] Xen Virtualization: Work with MySQL Server, Ruby on Rails, and Subversion [Article] Designing and Creating Database Tables in Ruby on Rails [Article]

3D Websites

Packt
23 May 2014
10 min read
(For more resources related to this topic, see here.) Creating engaging scenes There is no adopted style for a 3D website. No metaphor can best describe the process of designing the 3D web. Perhaps what we know the most is what does not work. Often, our initial concept is to model the real world. An early design that was used years ago involved a university that wanted to use its campus map to navigate through its website. One found oneself dragging the mouse repeatedly, as fast as one could, just to get to the other side of campus. A better design would've been a book shelf where everything was in front of you. To view the chemistry department, just grab the chemistry book, and click on the virtual pages to view the faculty, curriculum, and other department information. Also, if you needed to cross-reference this with the math department's upcoming schedule, you could just grab the math book. Each attempt adds to our knowledge and gets us closer to something better. What we know is what most other applications of computer graphics learned—that reality might be a starting point, but we should not let it interfere with creativity. 3D for the sake of recreating the real world limits our innovative potential. Following this starting point, strip out the parts bound by physics, such as support beams or poles that serve no purpose in a virtual world. Such items make the rendering slower by just existing. Once we break these bounds, the creative process takes over—perhaps a whimsical version, a parody, something dark and scary, or a world-emphasizing story. Characters in video games and animated movies take on stylized features. The characters are purposely unrealistic or exaggerated. One of the best animations to exhibit this is Chris Landreth's The Spine, Ryan (Academy Award for best-animated short film in 2004), and his earlier work in Psychological Driven Animation, where the characters break apart by the ravages of personal failure (https://www.nfb.ca/film/ryan). This demonstration will describe some of the more difficult technical issues involved with lighting, normal maps, and the efficient sharing of 3D models. The following scene uses 3D models and textures maps from previous demonstrations but with techniques that are more complex. Engage thrusters This scene has two lampposts and three brick walls, yet we only read in the texture map and 3D mesh for one of each and then reuse the same models several times. This has the obvious advantage that we do not need to read in the same 3D models several times, thus saving download time and using less memory. A new function, copyObject(), was created that currently sits inside the main WebGL file, although it can be moved to mesh3dObject.js. In webGLStart(), after the original objects were created, we call copyObject(), passing along the original object with the unique name, location, rotation, and scale. 
In the following code, we copy the original streetLight0Object into a new streetLight1Object: streetLight1Object = copyObject( streetLight0Object, "streetLight1", streetLight1Location, [1, 1, 1], [0, 0, 0] ); Inside copyObject(), we first create the new mesh and then set the unique name, location (translation), rotation, and scale: function copyObject(original, name, translation, scale, rotation) { meshObjectArray[ totalMeshObjects ] = new meshObject(); newObject = meshObjectArray[ totalMeshObjects ]; newObject.name = name; newObject.translation = translation; newObject.scale = scale; newObject.rotation = rotation; The object to be copied is named original. We will not need to set up new buffers since the new 3D mesh can point to the same buffers as the original object: newObject.vertexBuffer = original.vertexBuffer; newObject.indexedFaceSetBuffer = original.indexedFaceSetBuffer; newObject.normalsBuffer = original.normalsBuffer; newObject.textureCoordBuffer = original.textureCoordBuffer; newObject.boundingBoxBuffer = original.boundingBoxBuffer; newObject.boundingBoxIndexBuffer = original.boundingBoxIndexBuffer; newObject.vertices = original.vertices; newObject.textureMap = original.textureMap; We do need to create a new bounding box matrix since it is based on the new object's unique location, rotation, and scale. In addition, meshLoaded is set to false. At this stage, we cannot determine if the original mesh and texture map have been loaded since that is done in the background: newObject.boundingBoxMatrix = mat4.create(); newObject.meshLoaded = false; totalMeshObjects++; return newObject; } There is just one more inclusion to inform us that the original 3D mesh and texture map(s) have been loaded inside drawScene(): streetLightCover1Object.meshLoaded = streetLightCover0Object.meshLoaded; streetLightCover1Object.textureMap = streetLightCover0Object.textureMap; This is set each time a frame is drawn, and thus, is redundant once the mesh and texture map have been loaded, but the additional code is a very small hit in performance. Similar steps are performed for the original brick wall and its two copies. Most of the scene is programmed using fragment shaders. There are four lights: the two streetlights, the neon Products sign, and the moon, which sets and rises. The brick wall uses normal maps. However, it is more complex here; the use of spotlights and light attenuation, where the light fades over a distance. The faint moon light, however, does not fade over a distance. Opening scene with four light sources: two streetlights, the Products neon sign, and the moon This program has only three shaders: LightsTextureMap, used by the brick wall with a texture normal map; Lights, used for any object that is illuminated by one or more lights; and Illuminated, used by the light sources such as the moon, neon sign, and streetlight covers. The simplest out of these fragment shaders is Illuminated. It consists of a texture map and the illuminated color, uLightColor. For many objects, the texture map would simply be a white placeholder. However, the moon uses a texture map, available for free from NASA that must be merged with its color: vec4 fragmentColor = texture2D(uSampler, vec2(vTextureCoord.s, vTextureCoord.t)); gl_FragColor = vec4(fragmentColor.rgb * uLightColor, 1.0); The light color also serves another purpose, as it will be passed on to the other two fragment shaders since each adds its own individual color: off-white for the streetlights, gray for the moon, and pink for the neon sign. 
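On the JavaScript side, the only per-object work for this shader is to bind the light's color before the draw call. The following is a sketch only; the program handle, the uniform lookup, and the drawObject() helper are assumed rather than taken from the demo's source:
// Hypothetical helper: draw a light source (moon, sign, or light cover) with the Illuminated shader
function drawIlluminated(obj, lightColor) {
    gl.useProgram(illuminatedProgram); // assumed handle to the compiled Illuminated program
    gl.uniform3fv(gl.getUniformLocation(illuminatedProgram, "uLightColor"), lightColor);
    drawObject(obj); // existing routine that binds the mesh buffers and issues the draw call
}
For example, the neon sign might pass a pink such as [1.0, 0.4, 0.7], while the streetlight covers pass an off-white.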
The next step is to use the shaderLights fragment shader. We begin by setting the ambient light, which is a dim light added to every pixel, usually about 0.1, so nothing is pitch black. Then, we make a call for each of our four light sources (two streetlights, the moon, and the neon sign) to the calculateLightContribution() function: void main(void) { vec3 lightWeighting = vec3(uAmbientLight, uAmbientLight, uAmbientLight); lightWeighting += uStreetLightColor * calculateLightContribution(uSpotLight0Loc, uSpotLightDir, false); lightWeighting += uStreetLightColor * calculateLightContribution(uSpotLight1Loc, uSpotLightDir, false); lightWeighting += uMoonLightColor * calculateLightContribution(uMoonLightPos, vec3(0.0, 0.0, 0.0), true); lightWeighting += uProductTextColor * calculateLightContribution(uProductTextLoc, vec3(0.0, 0.0, 0.0), true); All four calls to calculateLightContribution() are multiplied by the light's color (white for the streetlights, gray for the moon, and pink for the neon sign). The parameters in the call to calculateLightContribution(vec3, vec3, vec3, bool) are: location of the light, its direction, the pixel's normal, and the point light. This parameter is true for a point light that illuminates in all directions, or false if it is a spotlight that points in a specific direction. Since point lights such as the moon or neon sign have no direction, their direction parameter is not used. Therefore, their direction parameter is set to a default, vec3(0.0, 0.0, 0.0). The vec3 lightWeighting value accumulates the red, green, and blue light colors at each pixel. However, these values cannot exceed the maximum of 1.0 for red, green, and blue. Colors greater than 1.0 are unpredictable based on the graphics card. So, the red, green, and blue light colors must be capped at 1.0: if ( lightWeighting.r > 1.0 ) lightWeighting.r = 1.0; if ( lightWeighting.g > 1.0 ) lightWeighting.g = 1.0; if ( lightWeighting.b > 1.0 ) lightWeighting.b = 1.0; Finally, we calculate the pixels based on the texture map. Only the street and streetlight posts use this shader, and neither have any tiling, but the multiplication by uTextureMapTiling was included in case there was tiling. The fragmentColor based on the texture map is multiplied by lightWeighting—the accumulation of our four light sources for the final color of each pixel: vec4 fragmentColor = texture2D(uSampler, vec2(vTextureCoord.s*uTextureMapTiling.s, vTextureCoord.t*uTextureMapTiling.t)); gl_FragColor = vec4(fragmentColor.rgb * lightWeighting.rgb, 1.0); } In the calculateLightContribution() function, we begin by determining the angle between the light's direction and point's normal. The dot product is the cosine between the light's direction to the pixel and the pixel's normal, which is also known as Lambert's cosine law (http://en.wikipedia.org/wiki/Lambertian_reflectance): vec3 distanceLightToPixel = vec3(vPosition.xyz - lightLoc); vec3 vectorLightPosToPixel = normalize(distanceLightToPixel); vec3 lightDirNormalized = normalize(lightDir); float angleBetweenLightNormal = dot( -vectorLightPosToPixel, vTransformedNormal ); A point light shines in all directions, but a spotlight has a direction and an expanding cone of light surrounding this direction. For a pixel to be lit by a spotlight, that pixel must be in this cone of light. 
This is the beam width area where the pixel receives the full amount of light, which fades out towards the cut-off angle that is the angle where there is no more light coming from this spotlight: With texture maps removed, we reveal the value of the dot product between the pixel normal and direction of the light if ( pointLight) { lightAmt = 1.0; } else { // spotlight float angleLightToPixel = dot( vectorLightPosToPixel, lightDirNormalized ); // note, uStreetLightBeamWidth and uStreetLightCutOffAngle // are the cosines of the angles, not actual angles if ( angleLightToPixel >= uStreetLightBeamWidth ) { lightAmt = 1.0; } if ( angleLightToPixel > uStreetLightCutOffAngle ) { lightAmt = (angleLightToPixel - uStreetLightCutOffAngle) / (uStreetLightBeamWidth - uStreetLightCutOffAngle); } } After determining the amount of light at that pixel, we calculate attenuation, which is the fall-off of light over a distance. Without attenuation, the light is constant. The moon has no light attenuation since it's dim already, but the other three lights fade out at the maximum distance. The float maxDist = 15.0; code snippet says that after 15 units, there is no more contribution from this light. If we are less than 15 units away from the light, reduce the amount of light proportionately. For example, a pixel 10 units away from the light source receives (15-10)/15 or 1/3 the amount of light: attenuation = 1.0; if ( uUseAttenuation ) { if ( length(distanceLightToPixel) < maxDist ) { attenuation = (maxDist - length(distanceLightToPixel))/maxDist; } else attenuation = 0.0; } Finally, we multiply the values that make the light contribution and we are done: lightAmt *= angleBetweenLightNormal * attenuation; return lightAmt; Next, we must account for the brick wall's normal map using the shaderLightsNormalMap-fs fragment shader. The normal is equal to rgb * 2 – 1. For example, rgb (1.0, 0.5, 0.0), which is orange, would become a normal (1.0, 0.0, -1.0). This normal is converted to a unit value or normalized to (0.707, 0, -0.707): vec4 textureMapNormal = vec4( (texture2D(uSamplerNormalMap, vec2(vTextureCoord.s*uTextureMapTiling.s, vTextureCoord.t*uTextureMapTiling.t)) * 2.0) - 1.0 ); vec3 pixelNormal = normalize(uNMatrix * normalize(textureMapNormal.rgb) ); A normal mapped brick (without red brick texture image) reveals how changing the pixel normal altersthe shading with various light sources We call the same calculateLightContribution() function, but we now pass along pixelNormal calculated using the normal texture map: calculateLightContribution(uSpotLight0Loc, uSpotLightDir, pixelNormal, false); From here, much of the code is the same, except we use pixelNormal in the dot product to determine the angle between the normal and the light sources: float angleLightToTextureMap = dot( -vectorLightPosToPixel, pixelNormal ); Now, angleLightToTextureMap replaces angleBetweenLightNormal because we are no longer using the vertex normal embedded in the 3D mesh's .obj file, but instead we use the pixel normal derived from the normal texture map file, brickNormalMap.png. A normal mapped brick wall with various light sources Objective complete – mini debriefing This comprehensive demonstration combined multiple spot and point lights, shared 3D meshes instead of loading the same 3D meshes, and deployed normal texture maps for a real 3D brick wall appearance. The next step is to build upon this demonstration, inserting links to web pages found on a typical website. 
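Whether the normal comes from the mesh or from the normal map, each light's contribution is the same Lambert term scaled by attenuation. The following plain JavaScript mirror of that arithmetic is included only as an illustration; the small vector helpers are assumed and nothing here comes from the demo's source:
// Illustrative CPU-side version of one point light's contribution to a pixel
function lightContribution(lightPos, pixelPos, pixelNormal, maxDist) {
    var toPixel = normalize(subtract(pixelPos, lightPos)); // direction from light to pixel (assumed helpers)
    var lambert = Math.max(dot(negate(toPixel), pixelNormal), 0.0); // Lambert's cosine law
    var dist = length(subtract(pixelPos, lightPos));
    var attenuation = dist < maxDist ? (maxDist - dist) / maxDist : 0.0; // linear falloff, as in the shader
    return lambert * attenuation;
}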
In this example, we just identified a location for Products using a neon sign to catch the users' attention. As a 3D website is built, we will need better ways to navigate this virtual space and this is covered in the following section.


Running our first web application

Packt
21 May 2014
8 min read
(For more resources related to this topic, see here.) The standalone/deployments directory, as in the previous releases of JBoss Application Server, is the location used by end users to perform their deployments and applications are automatically deployed into the server at runtime. The artifacts that can be used to deploy are as follows: WAR (Web application Archive): This is a JAR file used to distribute a collection of JSP (Java Server Pages), servlets, Java classes, XML files, libraries, static web pages, and several other features that make up a web application. EAR (Enterprise Archive): This type of file is used by Java EE for packaging one or more modules within a single file. JAR (Java Archive): This is used to package multiple Java classes. RAR (Resource Adapter Archive): This is an archive file that is defined in the JCA specification as the valid format for deployment of resource adapters on application servers. You can deploy a RAR file on the AS Java as a standalone component or as part of a larger application. In both cases, the adapter is available to all applications using a lookup procedure. The deployment in WildFly has some deployment file markers that can be identified quickly, both by us and by WildFly, to understand what is the status of the artifact, whether it was deployed or not. The file markers always have the same name as the artifact that will deploy. A basic example is the marker used to indicate that my-first-app.war, a deployed application, will be the dodeploy suffix. Then in the directory to deploy, there will be a file created with the name my-first-app.war.dodeploy. Among these markers, there are others, explained as follows: dodeploy: This suffix is inserted by the user, which indicates that the deployment scanner will deploy the artifact indicated. This marker is mostly important for exploded deployments. skipdeploy: This marker disables the autodeploy mode while this file is present in the deploy directory, only for the artifact indicated. isdeploying: This marker is placed by the deployment scanner service to indicate that it has noticed a .dodeploy file or a new or updated autodeploy mode and is in the process of deploying the content. This file will be erased by the deployment scanner so the deployment process finishes. deployed: This marker is created by the deployment scanner to indicate that the content was deployed in the runtime. failed: This marker is created by the deployment scanner to indicate that the deployment process failed. isundeploying: This marker is created by the deployment scanner and indicates the file suffix .deployed was deleted and its contents will be undeployed. This marker will be deleted when the process is completely undeployed. undeployed: This marker is created by the deployment scanner to indicate that the content was undeployed from the runtime. pending: This marker is placed by the deployment scanner service to indicate that it has noticed the need to deploy content but has not yet instructed the server to deploy it. When we deploy our first application, we'll see some of these marker files, making it easier to understand their functions. To support learning, the small applications that I made will be available on GitHub (https://github.com) and packaged using Maven (for further details about Maven, you can visit http://maven.apache.org/). To begin the deployment process, we perform a checkout of the first application. First of all you need to install the Git client for Linux. 
To do this, use the following command: [root@wfly_book ~]# yum install git -y Git is also necessary to perform the Maven installation so that it is possible to perform the packaging process of our first application. Maven can be downloaded from http://maven.apache.org/download.cgi. Once the download is complete, create a directory that will be used to perform the installation of Maven and unzip it into this directory. In my case, I chose the folder /opt as follows: [root@wfly_book ~]# mkdir /opt/maven Unzip the file into the newly created directory as follows: [root@wfly_book maven]# tar -xzvf /root/apache-maven-3.2.1-bin.tar.gz [root@wfly_book maven]# cd apache-maven-3.2.1/ Run the mvn command and, if any errors are returned, we must set the environment variable M3_HOME, described as follows: [root@wfly_book ~]# mvn -bash: mvn: command not found If the error indicated previously occurs again, it is because the Maven binary was not found by the operating system; in this scenario, we must create and configure the environment variable that is responsible for this. First, we need two settings: populate the M3_HOME environment variable with the Maven installation directory, and add the directory containing the Maven binaries to the PATH environment variable. Access and edit the /etc/profile file, taking advantage of the configuration that we did earlier with the Java environment variable, and see how it will look with the Maven configuration as well: #Java and Maven configuration export JAVA_HOME="/usr/java/jdk1.7.0_45" export M3_HOME="/opt/maven/apache-maven-3.2.1" export PATH="$PATH:$JAVA_HOME/bin:$M3_HOME/bin" Save and close the file, and then run the following command to apply the settings: [root@wfly_book ~]# source /etc/profile To verify the configuration performed, run the following command: [root@wfly_book ~]# mvn -version Well, now that we have the necessary tools to check out the application, let's begin. First, set up a directory where the application's source code will be saved, as shown in the following commands: [root@wfly_book opt]# mkdir book_apps [root@wfly_book opt]# cd book_apps/ Let's check out the project using the git clone command; the repository is available at https://github.com/spolti/wfly_book.git. Perform the checkout using the following command: [root@wfly_book book_apps]# git clone https://github.com/spolti/wfly_book.git Access the newly created directory using the following command: [root@wfly_book book_apps]# cd wfly_book/ For the first example, we will use the application called app1-v01, so access this directory and build and deploy the project by issuing the following commands. Make sure that the WildFly server is already running. The first build is always very time-consuming, because Maven will download all the necessary libs to compile the project, project dependencies, and Maven libraries. [root@wfly_book wfly_book]# cd app1-v01/ [root@wfly_book app1-v01]# mvn wildfly:deploy For more details about the WildFly Maven plugin, please take a look at https://docs.jboss.org/wildfly/plugins/maven/latest/index.html. The artifact will be generated and automatically deployed on WildFly Server.
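The plugin is a convenience; the same result can be had by letting the deployment scanner pick the artifact up from standalone/deployments, which is also where the marker files described earlier come into play. A sketch, with paths assumed from the earlier setup:
# Copy the packaged WAR into the scanner's directory; zipped archives are auto-deployed by default
cp target/app1-v01.war $JBOSS_HOME/standalone/deployments/
# An exploded copy of the same application would additionally need the marker created by hand:
# touch $JBOSS_HOME/standalone/deployments/app1-v01.war.dodeploy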
Note that a message similar to the following is displayed stating that the application was successfully deployed: INFO [org.jboss.as.server] (ServerService Thread Pool -- 29) JBAS018559: Deployed "app1-v01.war" (runtime-name : "app1-v01.war") When we perform the deployment of some artifact, and if we have not configured the virtual host or context root address, then in order to access the application we always need to use the application name without the suffix, because our application's address will be used for accessing it. The structure to access the application is http://<your-ip-address>:<port-number>/app1-v01/. In my case, it would be http://192.168.11.109:8080/app1-v01/. See the following screenshot of the application running. This application is very simple and is made using JSP and rescuing some system properties. Note that in the deployments directory we have a marker file that indicates that the application was successfully deployed, as follows: [root@wfly_book deployments]# ls -l total 20 -rw-r--r--. 1 wildfly wildfly 2544 Jan 21 07:33 app1-v01.war -rw-r--r--. 1 wildfly wildfly 12 Jan 21 07:33 app1-v01.war.deployed -rw-r--r--. 1 wildfly wildfly 8870 Dec 22 04:12 README.txt To undeploy the application without having to remove the artifact, we need only remove the app1-v01.war.deployed file. This is done using the following command: [root@wfly_book ~]# cd $JBOSS_HOME/standalone/deployments [root@wfly_book deployments]# rm app1-v01.war.deployed rm: remove regular file `app1-v01.war.deployed'? y In the previous option, you will also need to press Y to remove the file. You can also use the WildFly Maven plugin for undeployment, using the following command: [root@wfly_book deployments]# mvn wildfly:undeploy Notice that the log is reporting that the application was undeployed and also note that a new marker, .undeployed, has been added indicating that the artifact is no longer active in the runtime server as follows: INFO [org.jboss.as.server] (DeploymentScanner-threads - 1) JBAS018558: Undeployed "app1-v01.war" (runtime-name: "app1-v01.war") And run the following command: [root@wfly_book deployments]# ls -l total 20 -rw-r--r--. 1 wildfly wildfly 2544 Jan 21 07:33 app1-v01.war -rw-r--r--. 1 wildfly wildfly 12 Jan 21 09:44 app1-v01.war.undeployed -rw-r--r--. 1 wildfly wildfly 8870 Dec 22 04:12 README.txt [root@wfly_book deployments]# If you make undeploy using the WildFly Maven plugin, the artifact will be deleted from the deployments directory. Summary In this article, we learn how to configure an application using a virtual host, the context root, and also how to use the logging tools that we now have available to use Java in some of our test applications, among several other very interesting settings. Resources for Article: Further resources on this subject: JBoss AS Perspective [Article] JBoss EAP6 Overview [Article] JBoss RichFaces 3.3 Supplemental Installation [Article]