How-To Tutorials - Web Development

1797 Articles
Creating Dynamic UI with Android Fragments

Packt
26 Sep 2013
2 min read
Many applications involve several screens of data that a user might want to browse or flip through to view each screen. As an example, think of an application where we list a catalogue of books, with each book in the catalogue appearing on a single screen. A book's screen contains an image, title, and description, like the following screenshot:

To view each book's information, the user needs to move to each screen. We could put a next button and a previous button on the screen, but a more natural action is for the user to use their thumb or finger to swipe the screen from one edge of the display to the other and have the screen with the next book's information slide into place, as represented in the following screenshot:

This creates a very natural navigation experience and, honestly, is a more fun way to navigate through an application than using buttons.

Summary

Fragments are the foundation of modern Android app development, allowing us to display multiple application screens within a single activity. Thanks to the flexibility provided by fragments, we can now incorporate rich navigation into our apps with relative ease. Using these rich navigation capabilities, we're able to create a more dynamic user interface experience that makes our apps more compelling and that users find more fun to work with.

Resources for Article:

Further resources on this subject:
- So, what is Spring for Android? [Article]
- Android Native Application API [Article]
- Animating Properties and Tweening Pages in Android 3.0 [Article]


Preparing Your First jQuery Mobile Project

Packt
25 Sep 2013
10 min read
Building an HTML page

Let's begin with a simple web page that is not mobile optimized. To be clear, we aren't saying it can't be viewed on a mobile device. Not at all! But it may not be usable on a mobile device. It may be hard to read (text too small). It may be too wide. It may use forms that don't work well on a touch screen. We don't know what kinds of problems we will have at all until we start testing. (And we've all tested our websites on mobile devices to see how well they work, right?) Let's have a look at the following code snippet:

    <h1>Welcome</h1>
    <p>
    Welcome to our first mobile web site. It's going to be the best
    site you've ever seen. Once we get some content. And a business
    plan. But the hard part is done!
    </p>
    <p>
    <i>Copyright Megacorp&copy; 2013</i>
    </p>
    </body>
    </html>

As we said, there is nothing too complex, right? Let's take a quick look at this in the browser:

Not so bad, right? But let's take a look at the same page in a mobile simulator:

Wow, that's pretty tiny. You've probably seen web pages like this before on your mobile device. You can, of course, typically use pinch and zoom or double-click actions to increase the size of the text. But it would be preferable to have the page render immediately in a mobile-friendly view. This is where jQuery Mobile comes in.

Getting jQuery Mobile

In the preface we talked about how jQuery Mobile is just a set of files. That isn't said to minimize the amount of work done to create those files, or how powerful they are, but to emphasize that using jQuery Mobile means you don't have to install any special tools or server. You can download the files and simply include them in your page. And if that's too much work, you have an even simpler solution. jQuery Mobile's files are hosted on a Content Delivery Network (CDN). This is a resource hosted by them and guaranteed (as much as anything like this can be) to be online and available. Multiple sites are already using these CDN-hosted files. That means when your users hit your site, they may already have the resources in their cache.

For this article, we will be making use of the CDN-hosted files, but just for this first example we'll download and extract the files we need. I recommend doing this anyway for those times when you're on an airplane and want to whip up a quick mobile site. To grab the files, visit http://jquerymobile.com/download. There are a few options here, but you want the ZIP file option. Go ahead and download that ZIP file and extract it. (The ZIP file you downloaded earlier from GitHub has a copy already.) The following screenshot demonstrates what you should see after extracting the files from the ZIP file:

Notice the ZIP file contains a CSS and JavaScript file for jQuery Mobile, as well as a minified version of both. You will typically want to use the minified version in your production apps and the regular version while developing. The images folder has five images used by the CSS when generating mobile-optimized pages. You will also see demos for the framework as well as theme and structure files. So, to be clear, the entire framework and all the features we will be talking about over the rest of the article consist of a framework of six files. Of course, you also need to include the jQuery library. You can download that separately at www.jquery.com. At the time this article was written, the recommended version was 1.9.1.
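Once both files are actually included in a page (as in the next listing), a quick check in the browser console confirms which versions were picked up. This is just a convenience sketch of mine; it relies on the version properties that jQuery ($.fn.jquery) and jQuery Mobile ($.mobile.version) expose:

    // Run in the browser console after a page that includes both libraries has loaded.
    console.log('jQuery version:', $.fn.jquery);              // for example, "1.9.1"
    console.log('jQuery Mobile version:', $.mobile.version);  // for example, "1.3.2"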
Customized downloads

As a final option for downloading jQuery Mobile, you can also use the customized Download Builder tool at http://jquerymobile.com/download-builder. Currently in Alpha (that is, not certified to be bug-free!), this web-based tool lets you download a jQuery Mobile build minus the features your website doesn't need. This creates smaller files, which reduces the total amount of time your application needs to display to the end user.

Implementing jQuery Mobile

Ok, we've got the bits, but how do we use them? Adding jQuery Mobile support to a site requires the following three steps at a minimum:

1. First, add the HTML5 DOCTYPE to the page: <!DOCTYPE html>. This is used to help inform the browser about the type of content it will be dealing with.
2. Add a viewport meta tag: <meta name="viewport" content="width=device-width, initial-scale=1">. This helps set better defaults for pages when viewed on a mobile device.
3. Finally, the CSS, JavaScript library, and jQuery itself need to be included in the file.

Let's look at a modified version of our previous HTML file that adds all of the above:

code 1-2: test2.html

    <!DOCTYPE html>
    <html>
    <head>
    <title>First Mobile Example</title>
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link rel="stylesheet" href="jquery.mobile-1.3.2.min.css" />
    <script type="text/javascript" src="http://code.jquery.com/jquery-1.9.1.min.js"></script>
    <script type="text/javascript" src="jquery.mobile-1.3.2.min.js"></script>
    </head>
    <body>
    <h1>Welcome</h1>
    <p>
    Welcome to our first mobile web site. It's going to be the best
    site you've ever seen. Once we get some content. And a business
    plan. But the hard part is done!
    </p>
    <p>
    <i>Copyright Megacorp&copy; 2013</i>
    </p>
    </body>
    </html>

For the most part, this version is exactly the same as code 1-1, except for the addition of the DOCTYPE, the CSS link, and our two JavaScript libraries. Notice we point to the hosted version of the jQuery library. It's perfectly fine to mix local JavaScript files and remote ones. If you wanted to ensure you could work offline, you could simply download the jQuery library as well.

So while nothing changed in the code between the body tags, there is going to be a radically different view now in the browser. The following screenshot shows how the iOS mobile browser renders the page now:

Right away, you see a couple of differences. The biggest difference is the relative size of the text. Notice how much bigger it is and easier to read. As we said, the user could have zoomed in on the previous version, but many mobile users aren't aware of this technique. This page loads up immediately in a manner that is much more usable on a mobile device.

Working with data attributes

As we saw in the previous example, just adding in jQuery Mobile goes a long way towards updating our page for mobile support. But there's a lot more involved in really preparing our pages for mobile devices. As we work with jQuery Mobile over the course of the article, we're going to use various data attributes to mark up our pages in a way that jQuery Mobile understands. But what are data attributes?

HTML5 introduced the concept of data attributes as a way to add ad hoc values to the DOM (Document Object Model). As an example, this is perfectly valid HTML:

    <div id="mainDiv" data-ray="moo">Some content</div>

In the previous HTML, the data-ray attribute is completely made up. However, because our attribute begins with data-, it is also completely legal. So what happens when you view this in your browser? Nothing!
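Nothing happens because nothing is reading the attribute yet. The next paragraph describes exactly that trick; as a rough sketch of it (assuming jQuery is already loaded on the page, and that the attribute's value is a usable CSS color rather than "moo"), a few lines of script could act on the attribute like this:

    // Find every element carrying the made-up data-ray attribute and use the
    // attribute's value as that element's background color.
    $(function () {
      $('[data-ray]').each(function () {
        $(this).css('background-color', $(this).attr('data-ray'));
      });
    });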
The point of these data attributes is to integrate with other code, like JavaScript, that does whatever it wants with them. So, for example, you could write JavaScript that finds every item in the DOM with the data-ray attribute and changes the background color to whatever was specified in the value. This is where jQuery Mobile comes in, making extensive use of data attributes, both for markup (to create widgets) and behavior (to control what happens when links are clicked). Let's look at one of the main uses of data attributes within jQuery Mobile—defining pages, headers, content, and footers:

code 1-3: test3.html

    <!DOCTYPE html>
    <html>
    <head>
    <title>First Mobile Example</title>
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link rel="stylesheet" href="jquery.mobile-1.3.2.min.css" />
    <script type="text/javascript" src="http://code.jquery.com/jquery-1.9.1.min.js"></script>
    <script type="text/javascript" src="jquery.mobile-1.3.2.min.js"></script>
    </head>
    <body>
    <div data-role="page">
      <div data-role="header"><h1>Welcome</h1></div>
      <div data-role="content">
        <p>
        Welcome to our first mobile web site. It's going to be the best
        site you've ever seen. Once we get some content. And a business
        plan. But the hard part is done!
        </p>
      </div>
      <div data-role="footer">
        <h4>Copyright Megacorp&copy; 2013</h4>
      </div>
    </div>
    </body>
    </html>

Compare the previous code snippet to code 1-2, and you can see that the main difference is the addition of the div blocks. One div block defines the page. Notice it wraps all of the content inside the body tags. Inside the body tag, there are three separate div blocks. One has a role of header, another a role of content, and the final one is marked as footer. All the blocks use data-role, which should give you a clue that we're defining a role for each of the blocks.

As we stated previously, these data attributes mean nothing to the browser itself. But let's look at what jQuery Mobile does when it encounters these tags:

Notice right away that both the header and footer now have a black background applied to them. This makes them stand out even more from the rest of the content. Speaking of the content, the page text now has a bit of space between it and the sides. All of this was automatic once the div tags with the recognized data-roles were applied. This is a theme you're going to see repeated again and again as we go through this article. A vast majority of the work you'll be doing will involve the use of data attributes.

Summary

In this article, we talked a bit about how web pages may not always render well in a mobile browser, and about how the simple use of jQuery Mobile can go a long way towards improving the mobile experience for a website. Specifically, we discussed how you can download jQuery Mobile and add it to an existing HTML page, what data attributes mean in terms of HTML, and how jQuery Mobile makes use of data attributes to enhance your pages.

Resources for Article:

Further resources on this subject:
- jQuery Mobile: Collapsible Blocks and Theming Content [Article]
- Using different jQuery event listeners for responsive interaction [Article]
- Creating mobile friendly themes [Article]


Vaadin and its Context

Packt
25 Sep 2013
24 min read
Developing Java applications and, more specifically, developing Java web applications should be fun. Instead, most projects are a mess of sweat and toil, pressure and delays, costs and cost cutting. Web development has lost its appeal. Yet, among the many frameworks available, there is one in particular that draws our attention because of its ease of use and its original stance. It has been around since the past decade and has begun to grow in importance. The name of this framework is Vaadin. The goal of this article is to see, step by step, how to develop web applications with Vaadin.

Vaadin is the Finnish word for a female reindeer (as well as a Finnish goddess). This piece of information will do marvels for your social life, as you are now one of the few people on Earth who know this (outside Finland).

Before diving right into Vaadin, it is important to understand what led to its creation. Readers who already have this information (or who don't care) should go directly to Environment Setup.

Rich applications

Vaadin is often referred to as a Rich Internet Application (RIA) framework. Before explaining why, we need to first define some terms which will help us describe the framework. In particular, we will have a look at application tiers, the different kinds of clients, and their history.

Application tiers

Some software runs locally, that is, on the client machine, and some runs remotely, such as on a server machine. Some applications also run on both the client and the server. For example, when requesting an article from a website, we interact with a browser on the client side, but the order itself is passed to a server in the form of a request. Traditionally, all applications can be logically separated into tiers, each having different responsibilities, as follows:

- Presentation: The presentation tier is responsible for displaying the end-user information and interaction. It is the realm of the user interface.
- Business Logic: The logic tier is responsible for controlling the application logic and functionality. It is also known as the application tier, or the middle tier, as it is the glue between the other two surrounding tiers, thus leading to the term middleware.
- Data: The data tier is responsible for storing and retrieving data. This backend may be a file system. In most cases, it is a database, whether relational, flat, or even an object-oriented one.

This categorization not only naturally corresponds to specialized features, but also allows you to physically separate your system into different parts, so that you can change a tier with reduced impact on adjacent tiers and no impact on non-adjacent tiers.

Tier migration

In the history of computers and computer software, these three tiers have moved back and forth between the server and the client.

Mainframes

When computers were mainframes, all tiers were handled by the server. Mainframes stored data, processed it, and were also responsible for the layout of the presentation. Clients were dumb terminals, suited only for displaying characters on the screen and accepting the user input.

Client-server

Not many companies could afford the acquisition of a mainframe (and many still cannot). Yet, those same companies could not do without computers at all, because the growing complexity of business processes needed automation. This development in personal computers led to a decrease in their cost. With the need to share data between them, network traffic rose. This period in history saw the rise of the personal computer, as well as of the client-server term, as there was now a true client. The presentation and logic tiers moved locally, while shared databases were remotely accessible, as shown in the following diagram:

Thin clients

Big companies migrating from mainframes to client-server architectures thought that deploying software on ten client machines on the same site was relatively easy and could be done in a few hours. However, they quickly became aware of the fact that with the number of machines growing in a multi-site business, it could quickly become a nightmare. Enterprises also found that it was not only the development phase that had to be managed like a project, but also the installation phase. When upgrading either the client or the server, you most likely found that the installation time was high, which in turn led to downtime, and that led to additional business costs.

Around 1991, Sir Tim Berners-Lee invented the Hyper Text Markup Language, better known as HTML. Some time after that, people changed its original use, which was to navigate between documents, to make HTML-based web applications. This solved the deployment problem, as the logic tier was run on a single server node (or a cluster), and each client connected to this server. A deployment could be done in a matter of minutes, at worst overnight, which was a huge improvement. The presentation layer was still hosted on the client, with the browser responsible for displaying the user interface and handling user interaction. This new approach brought new terms, which are as follows:

- The old client-server architecture was now referred to as fat client.
- The new architecture was coined as thin client, as shown in the following diagram:

Limitations of the thin-client applications approach

Unfortunately, this evolution was made for financial reasons and did not take into account some very important drawbacks of the thin client.

Poor choice of controls

HTML does not support many controls, and what is available is not on par with fat-client technologies. Consider, for example, the list box: in any fat client, choices displayed to the user can be filtered according to what is typed in the control. In legacy HTML, there's no such feature and all lines are displayed in all cases. Even with HTML5, which is supposed to add this feature, it is sadly not implemented in all browsers. This is a usability disaster if you need to display the list of countries (more than 200 entries!). As such, the ergonomics of true thin clients have nothing to do with their fat-client ancestors.

Many unrelated technologies

Developers of fat-client applications have to learn only two languages: SQL and the technology's language, such as Visual Basic, Java, and so on. Web developers, on the contrary, have to learn an entire stack of technologies, both on the client side and on the server side.

On the client side, the following are the requirements:

- First, of course, is HTML. It is the basis of all web applications, and although some do not consider it a programming language per se, every web developer must learn it so that they can create content to be displayed by browsers.
- In order to apply some common styling to your application, one will probably have to learn the Cascading Style Sheets (CSS) technology. CSS is available in three main versions, each version being more or less supported by browser version combinations (see Browser compatibility).
- Most of the time, it is nice to have some interactivity on the client side, like pop-up windows or others. In this case, we will need a scripting technology such as ECMAScript. ECMAScript is the specification of which JavaScript is an implementation (along with ActionScript). It is standardized by the ECMA organization. See http://www.ecma-international.org/publications/standards/Ecma-262.htm for more information on the subject.
- Finally, since one will probably need to update the structure of the HTML page, a healthy dose of knowledge of the Document Object Model (DOM) is necessary.

As a side note, consider that HTML, CSS, and DOM are W3C specifications, while ECMAScript is an ECMA standard.

From a Java point of view and on the server side, the following are the requirements:

- As servlets are the most common form of request-response user interactions in Java EE, every web developer worth his salt has to know both the Servlet specification and the Servlet API.
- Moreover, most web applications tend to enforce the Model-View-Controller paradigm. As such, the Java EE specification enforces the use of servlets for controllers and JavaServer Pages (JSP) for views. As JSP are intended to be templates, developers who create JSP have an additional syntax to learn, even though they offer the same features as servlets.
- JSP accept scriptlets, that is, Java code snippets, but good coding practices tend to frown upon this, as Java code can contain any feature, including some that should not be part of views—for example, database access code. Therefore, a completely new technology stack is proposed in order to limit the code included in JSP: the tag libraries. These tag libraries also have a specification and API, and that is another stack to learn.

However, these are only the standard requirements that you should know in order to develop web applications in Java. Most of the time, in order to boost developer productivity, one has to use frameworks. These frameworks are available for most of the previously cited technologies. Some of them are supported by Oracle, such as Java Server Faces, others are open source, such as Struts. Java EE 6 seems to favor the replacement of JSP and Servlet by Java Server Faces (JSF). Although JSF aims to provide a component-based MVC framework, it is plagued by a relative complexity regarding its component lifecycle.

Having to know so much has negative effects, a few of which are as follows:

- On the technical side, as web developers have to manage so many different technologies, web development is more complex than fat-client development, potentially leading to more bugs
- On the human resources side, different technologies meant either different profiles were required or more resources; either way, it added to the complexity of human resource management
- On the project management side, increased complexity caused lengthier projects: developing a web application was potentially taking longer than developing a fat-client application

All of these factors tend to make thin-client development cost much more than fat-client development, albeit with a deployment cost close to zero.

Browser compatibility

The Web has standards, most of them upheld by the World Wide Web Consortium. Browsers more or less implement these standards, depending on the vendor and the version. The ACID test, in version 3, is a test for browser compatibility with web standards. Fortunately, most browsers pass the test with 100 percent success, which was not the case two years ago.

Some browsers even make the standards evolve, such as Microsoft, which implemented the XmlHttpRequest object in Internet Explorer and thus formed the basis for Ajax. One should be aware of the combination of the platform, browser, and version. As some browsers cannot be installed with different versions on the same platform, testing can quickly become a mess (which can fortunately be mitigated with virtual machines and custom tools like http://browsershots.org). Applications should be developed with browser combinations in mind, and then tested on them, in order to ensure application compatibility. For intranet applications, the number of supported browsers is normally limited. For Internet applications, however, the most common combinations must be supported in order to increase availability. If this wasn't enough, the same browser in the same version may run differently on different operating systems. In all cases, each combination has an exponential impact on the application's complexity, and therefore, on cost.

Page flow paradigm

Fat-client applications manage windows. Most of the time, there's a main window. Actions are mainly performed in this main window, even if sometimes managed windows or pop-up windows are used. As web applications are browser-based and use HTML over HTTP, things are managed differently. In this case, the presentation unit is not the window but the page. This is a big difference that entails a performance problem: indeed, each time the user clicks on a submit button, the request is sent to the server, processed by it, and the HTML response is sent back to the client. For example, when a client submits a complex registration form, the entire page is recreated on the server side and sent back to the browser even if there is a minor validation error, even though the required changes to the registration form would have been minimal.

Beyond the limits

Over the last few years, users have been applying some pressure in order to have user interfaces that offer the same richness as good old fat-client applications. IT managers, however, are unwilling to go back to the old deploy-as-a-project routine and its associated costs and complexity. They push towards the same deployment process as thin-client applications. It is no surprise that there are different solutions to this dilemma.

What are rich clients?

All the following solutions are globally called rich clients, even if the approach differs. They have something in common though: all of them want to retain the ease of deployment of the thin client and solve some or all of the problems mentioned previously. Rich clients fulfill the fourth quadrant of the following schema, which is like a dream come true, as shown in the following diagram:

Some rich client approaches

The following solutions are strategies that deserve the rich client label.

Ajax

Ajax was one of the first successful rich-client solutions. The term means Asynchronous JavaScript with XML. In effect, this browser technology enables sending asynchronous requests, meaning there is no need to reload the full page. Developers can provide client scripts implementing custom callbacks: those are executed when a response is sent from the server. Most of the time, such scripts use data provided in the response payload to dynamically update the relevant part of the page DOM. Ajax addresses the richness of controls and the page flow paradigm. Unfortunately:

- It aggravates browser-compatibility problems, as Ajax is not handled in the same way by all browsers.
- It has problems unrelated directly to the technologies, which are as follows: either one learns all the necessary technologies to do Ajax on one's own, that is, JavaScript, the Document Object Model, and JSON/XML, to communicate with the server, and writes all common features such as error handling from scratch; alternatively, one uses an Ajax framework, and thus one has to learn another technology stack.

Richness through a plugin

The oldest way to bring richness to the user's experience is to execute the code on the client side and, more specifically, as a plugin in the browser. Sun—now Oracle—proposed the applet technology, whereas Microsoft proposed ActiveX. The latest technology using this strategy is Flash. All three were failures due to technical problems, including performance lags, security holes, and plain client incompatibility, or just plain rejection by the market. There is an interesting way to revive the applet with the Apache Pivot project, as shown in the following screenshot (http://pivot.apache.org/), but it hasn't made a huge impact yet.

A more recent and successful attempt at executing code on the client side through a plugin is Adobe's Flex. A similar path was taken by Microsoft's Silverlight technology. Flex is a technology where static views are described in XML and dynamic behavior in ActionScript. Both are transformed at compile time into Flash format. Unfortunately, Apple refused to have anything to do with the Flash plugin on iOS platforms. This move, coupled with the growing rise of HTML5, resulted in Adobe donating Flex to the Apache foundation. Also, Microsoft officially renounced plugin technology and shifted Silverlight development to HTML5.

Deploying and updating fat clients from the web

The most direct way toward rich-client applications is to deploy (and update) a fat-client application from the web.

Java Web Start

Java Web Start (JWS), available at http://download.oracle.com/javase/1.5.0/docs/guide/javaws/, is a proprietary technology invented by Sun. It uses a deployment descriptor in Java Network Launching Protocol (JNLP) that takes the place of the manifest inside a JAR file and supplements it. For example, it describes the main class to launch and the classpath, and also additional information such as the minimum Java version, icons to display on the user desktop, and so on. This descriptor file is used by the javaws executable, which is bundled in the Java Runtime Environment. It is the javaws executable's responsibility to read the JNLP file and do the right thing according to it. In particular, when launched, javaws will download the updated JAR.

The detailed process goes something like the following:

- The user clicks on a JNLP file.
- The JNLP file is downloaded to the user machine, and interpreted by the local javaws application.
- The file references JARs that javaws can download.
- Once downloaded, JWS reassembles the different parts, creates the classpath, and launches the main class described in the JNLP.

JWS correctly tackles all the problems posed by the thin-client approach. Yet it never reached critical mass, for a number of reasons:

- First-time installations are time-consuming because typically lots of megabytes need to be transferred over the wire before the user can even start using the app. This is a mere annoyance for intranet applications, but a complete no-go for Internet apps.
- Some persistent bugs weren't fixed across major versions.
- Finally, the lack of commercial commitment by Sun was the last straw.

A good example of a successful JWS application is JDiskReport (http://www.jgoodies.com/download/jdiskreport/jdiskreport.jnlp), a disk space analysis tool by Karsten Lentzsch, which is available on the Web for free.

Update sites

Updating software through update sites is a path taken by both Integrated Development Environment (IDE) leaders, NetBeans and Eclipse. In short, once the software is initially installed, updates and new features can be downloaded from the application itself. Both IDEs also propose an API to build applications. This approach also handles all the problems posed by the thin-client approach. However, like JWS, there's no strong trend to build applications based on these IDEs. This can probably be attributed to both IDEs using the OSGi standard, whose goal is to address some of Java's shortcomings, but at the price of complexity.

Google Web Toolkit

Google Web Toolkit (GWT) is the framework used by Google to create some of its own applications. Its point of view is very unique among the technologies presented here. It lets you develop in Java, and then the GWT compiler transforms your code to JavaScript, which in turn manipulates the DOM tree to update the HTML. It's GWT's responsibility to handle browser compatibility. This approach also solves the other problems of the pure thin-client approach. Yet, GWT does not shield developers from all the dirty details. In particular, the developer still has to write part of the code handling server-client communication, and he has to take care of the segregation between Java server code, which will be compiled into byte code, and Java client code, which will be compiled into JavaScript. Also, note that the compilation process may be slow, even though there are a number of optimization features available during development. Finally, developers need a good understanding of the DOM, as well as the JavaScript/DOM event model.

Why Vaadin?

Vaadin is a solution that evolved from a decade-long problem-solving approach, provided by a Finnish company named Vaadin Ltd, formerly IT Mill. With so many solutions available, why use Vaadin instead of Flex or GWT? Let's first have a look at the state of the market for web application frameworks in Java, then detail what makes Vaadin so unique in this market.

State of the market

Despite all the cons of the thin-client approach, an important share of applications developed today uses this paradigm, most of the time with a touch of Ajax augmentation. Unfortunately, there is no clear leader for web applications. Some reasons include the following:

- Most developers know how to develop plain old web applications, with enough Ajax added in order to make them usable by users.
- GWT, although new and original, is still complex and needs seasoned developers in order to be effective.
- From a Technical Lead's or an IT Manager's point of view, this is a very fragmented market where it is hard to choose a solution that will meet users' requirements, as well as offering guarantees of being maintained in the years to come.

Importance of Vaadin

Vaadin is a unique framework in the current ecosystem; its differentiating features include the following:

- There is no need to learn different technology stacks, as the coding is solely in Java. The only thing to know beside Java is Vaadin's own API, which is easy to learn. This means: the UI code is fully object-oriented, there's no spaghetti JavaScript to maintain, and it is executed on the server side. Furthermore, the IDE's full power is in our hands, with refactoring and code completion.
- There is no plugin to install on the client's browser, ensuring that all users who browse our application will be able to use it as-is.
- As Vaadin uses GWT under the hood, it supports all browsers that the version of GWT also supports. Therefore, we can develop a Vaadin application without paying attention to the browsers and let GWT handle the differences. Our users will interact with our application in the same way, whether they use an outdated version (such as Firefox 3.5) or a niche browser (like Opera). Moreover, Vaadin uses an abstraction over GWT so that the API is easier to use for developers. Also, note that Vaadin Ltd (the company) is part of the GWT steering committee, which is a good sign for the future.
- Finally, Vaadin conforms to standards such as HTML and CSS, making the technology future-proof. For example, many applications created with Vaadin run seamlessly on mobile devices, although they were not initially designed to do so.

Vaadin integration

In today's environment, the integration features of a framework are very important, as normally every enterprise has rules about which framework is to be used in a given context. Vaadin is about the presentation layer and runs in any servlet-container-capable environment.

Integrated frameworks

There are three integration levels possible, which are as follows:

- Level 1: out-of-the-box or available through an add-on; no effort required save reading the documentation
- Level 2: more or less documented
- Level 3: possible with effort

The following are examples of such frameworks and tools with their respective estimated integration effort:

- Level 1:
  - Java Persistence API (JPA): JPA is the Java EE 5 standard for all things related to persistence. An add-on exists that lets us wire existing components to a JPA backend. Other persistence add-ons are available in the Vaadin directory, such as a container for Hibernate, one of the leading persistence frameworks available in the Java ecosystem.
  - A bunch of widget add-ons, such as tree tables, popup buttons, contextual menus, and many more.
- Level 2:
  - Spring: a framework based on Inversion of Control (IoC) that is the de facto standard for Dependency Injection. Spring can easily be integrated with Vaadin, and different strategies are available for this.
  - Contexts and Dependency Injection (CDI): CDI is an attempt at making IoC a standard on the Java EE platform. Whatever can be done with Spring can be done with CDI.
  - Any GWT extensions, such as Ext GWT or Smart GWT, can easily be integrated into Vaadin, as Vaadin is built upon GWT's own widgets.
- Level 3:
  - We can use another entirely new framework or language and integrate it with Vaadin, as long as it runs on the JVM: Apache iBatis, MongoDB, OSGi, Groovy, Scala, anything you can dream of!

Integration platforms

Vaadin provides out-of-the-box integration with an important third-party platform: Liferay, an open source enterprise portal backed by Liferay Inc. Vaadin provides a specialized portlet that enables us to develop Vaadin applications as portlets that can be run on Liferay. Also, there is a widgetset management portlet provided by Vaadin, which deploys nicely into Liferay's Control Panel.

Using Vaadin in the real world

If you embrace Vaadin, then chances are that you will want to go beyond toying with the Vaadin framework and develop real-world applications.

Concerns about using a new technology

Although it is okay to use the latest technology for a personal or academic project, projects that have business objectives should just run and not be riddled with problems from third-party products. In particular, most managers may be wary when confronted with a new product (or even a new version), and developers should be too. The following are some of the reasons to choose Vaadin:

- Product is of the highest quality: The Vaadin team does rigorous testing throughout its automated build process. Currently, it consists of more than 8,000 unit tests. Moreover, in order to guarantee full compatibility between versions, many (many!) tests execute pixel-level regression testing.
- Support:
  - Commercial: Although completely committed to open source, Vaadin Ltd offers commercial support for its product. Check their Pro Account offering.
  - User forums: A Vaadin user forum is available. Anyone registered can post questions and see them answered by a member of the team or of the community. Note that Vaadin registration is free, as well as hassle-free: you will just be sent the newsletter once a month (and you can opt out, of course).
- Retro-compatibility:
  - API: The server-side API is very stable, version after version, and has survived major client-engine rewrites. Some parts of the API changed from v6 to v7, but it is still very easy to migrate.
  - Architecture: Vaadin's architecture favors abstraction and is at the root of it all.
- Full-blown documentation available:
  - Product documentation: Vaadin's site provides three levels of documentation regarding Vaadin: a five-minute tutorial, a one-hour tutorial, and the famed Book of Vaadin.
  - Tutorials
  - API documentation: The Javadocs are available online; there is no need to build the project locally.
  - Course/webinar offerings: Vaadin Ltd currently provides four different courses, which cover all the skills a developer needs to be proficient in the framework.
- Huge community around the product: There is a community gathering, which is ever-growing and actively using the product. There are plenty of blogs and articles online on Vaadin. Furthermore, there are already many enterprises using Vaadin for their applications.
- Available competent resources: There are more and more people learning Vaadin. Moreover, if no developer is available, the framework can be learned in a few days.
- Integration with existing products/platforms: Vaadin is built to be easily integrated with other products and platforms. The Book of Vaadin describes how to integrate with Liferay and Google App Engine.

Others already use Vaadin

Upon reading this, managers and developers alike should realize that Vaadin is mature and is used in real-world applications around the world. If you still have any doubts, you should check http://vaadin.com/who-is-using-vaadin and be assured that big businesses trusted Vaadin before you, and benefited from its advantages as well.

Summary

In this article, we saw the migration of application tiers in software architecture between the client and the server. We saw that each step resolved the problems of the previous architecture:

- Client-server used the power of personal computers in order to decrease mainframe costs
- Thin clients resolved the deployment costs and delays

Thin clients have numerous drawbacks: for the user, a lack of usability due to a poor choice of controls, browser compatibility issues, and navigation based on page flow; for the developer, many technologies to know.

As we are at a crossroads, there is no clear winner among all the solutions available: some only address a few of the problems, some aggravate them. Vaadin is an original solution that tries to resolve many problems at once:

- It provides rich controls
- It uses GWT under the covers, which addresses most browser compatibility issues
- It has abstractions over the request-response model, so that the model used is application-based and not page-based
- The developer only needs to know one programming language: Java, and Vaadin generates all the HTML, JavaScript, and CSS code for you

Now we can go on and create our first Vaadin application!

Resources for Article:

Further resources on this subject:
- Vaadin Portlets in Liferay User Interface Development [Article]
- Creating a Basic Vaadin Project [Article]
- Vaadin – Using Input Components and Forms [Article]


Deploying a Vert.x application

Packt
24 Sep 2013
12 min read
Setting up an Ubuntu box

We are going to set up an Ubuntu virtual machine using the Vagrant tool. This virtual machine will simulate a real production server. If you already have an Ubuntu box (or a similar Linux box) handy, you can skip this step and move on to setting up a user.

Vagrant (http://www.vagrantup.com/) is a tool for managing virtual machines. Many people use it to manage their development environments so that they can easily share them and test their software on different operating systems. For us, it is a perfect tool to practice Vert.x deployment into a Linux environment.

Install Vagrant by heading to the Downloads area at http://vagrantup.com and selecting the latest version. Select a package for your operating system and run the installer. Once it is done you should have a vagrant command available on the command line, as follows:

    vagrant -v

Navigate to the root directory of our project and run the following command:

    vagrant init precise64 http://files.vagrantup.com/precise64.box

This will generate a file called Vagrantfile in the project folder. It contains the configuration for the virtual machine we're about to create. We initialized a precise64 box, which is shorthand for the 64-bit version of Ubuntu 12.04 Precise Pangolin. Open the file in an editor and find the following line:

    # config.vm.network :private_network, ip: "192.168.33.10"

Uncomment the line by removing the # character. This will enable private networking for the box. We will be able to conveniently access it with the IP address 192.168.33.10 locally.

Run the following command to download, install, and launch the virtual machine:

    vagrant up

This command launches the virtual machine configured in the Vagrantfile. On first launch it will also download it. Because of this, running the command may take a while. Once the command is finished you can check the status of the virtual machine by running vagrant status, suspend it by running vagrant suspend, bring it back up by running vagrant up, and remove it by running vagrant destroy.

Setting up a user

For any application deployment, it's a good idea to have an application-specific user configured. The sole purpose of the user is to run the application. This gives you a nice way to control permissions and make sure the application can only do what it's supposed to.

Open a shell connection to our Linux box. If you followed the steps to set up a Vagrant box, you can do this by running the following command in the project root directory:

    vagrant ssh

Add a new user called mindmaps using the following command:

    sudo useradd -d /home/mindmaps -m mindmaps

Also specify a password for the new user using the following command (and make a note of the password you choose; you'll need it):

    sudo passwd mindmaps

Installing Java on the server

Install Java for the Linux box, as described in Getting Started with Vert.x. As a quick reminder, Java can be installed on Ubuntu with the following command:

    sudo apt-get install openjdk-7-jdk

On fresh Ubuntu installations, it is a good idea to always make sure the package manager index is up to date before installing any packages. This is also the case for our Ubuntu virtual machine. Run the following command if the Java installation fails:

    sudo apt-get update

Installing MongoDB on the server

We also need MongoDB to be installed on the server, for persisting the mind maps.

Setting up privileged ports

Our application is configured to serve requests on port 8080. When we deploy to the Internet, we don't want users to have to know anything about ports, which means we should deploy our app to the default HTTP port 80 instead.

On Unix systems (such as Linux) port 80 can only be bound to by the root user. Because it is not a good idea to run applications as the root user, we should set up a special privilege for the mindmaps user to bind to port 80. We can do this with the authbind utility. authbind is a Linux utility that can be used to bind processes to privileged ports without requiring root access.

Install authbind using the package manager with the following command:

    sudo apt-get install authbind

Set up a privilege for the mindmaps user to bind to port 80 by creating a file in the authbind configuration directory with the following commands:

    cd /etc/authbind/byport/
    sudo touch 80
    sudo chown mindmaps:mindmaps 80
    sudo chmod 700 80

When authbind is run, it checks this directory to see whether there is a file corresponding to the used port and whether the current user has access to it. Here we have created such a file.

Many people prefer to have a web server such as Nginx or Apache as a frontend and not expose backend services to the Internet directly. This can also be done with Vert.x. In that case, you could just deploy Vert.x to port 8080 and skip the authbind configuration. Then, you would need to configure reverse proxying for the Vert.x application in your web server. Note that we are using the event bus bridge in our application, and that uses HTTP WebSockets as the transport mechanism. This means the front-end web server must also be able to proxy WebSocket traffic. Nginx is able to do this starting from version 1.3 and Apache from version 2.4.5.

Installing Vert.x on the server

Switch to the mindmaps user in the shell on the virtual machine using the following command:

    sudo su - mindmaps

Install Vert.x for this user, as described in Getting Started with Vert.x. As a quick reminder, it can be done by downloading and unpacking the latest distribution from http://vertx.io.

Making the application port configurable

Let's move back to our application code for a moment. During development we have been running the application on port 8080, but on the server we will want to run it on port 80. To support both of these scenarios we can make the port configurable through an environment variable. Vert.x makes environment variables available to verticles through the container API. In JavaScript, the variables can be found in the container.env object. Let's use it to give our application a port at runtime.

Find the following line in the deployment verticle app.js:

    port: 8080,

Change it to the following line:

    port: parseInt(container.env.get('MINDMAPS_PORT')) || 8080,

This gets the MINDMAPS_PORT environment variable and parses it from a string to an integer using the standard JavaScript parseInt function. If no port has been given, the default value 8080 is used.

We also need to change the host configuration of the web server. So far, we have been binding to localhost, but now we also want the application to be accessible from outside the server. Find the following line in app.js:

    host: "localhost",

Change it to the following line:

    host: "0.0.0.0",

Using the host 0.0.0.0 will make the server bind to all IPv4 network interfaces the server has.
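For orientation, here is how those two edited settings might sit together in the deployment verticle's configuration. This is only a sketch: the surrounding object name and shape are assumptions, not the exact contents of the book's app.js.

    // Sketch: the web server part of the deployment configuration in app.js.
    var webServerConfig = {
        host: "0.0.0.0",                                             // bind to every IPv4 interface
        port: parseInt(container.env.get('MINDMAPS_PORT')) || 8080   // env override, default 8080
        // ...the rest of the web server / event bus bridge configuration stays unchanged
    };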
Setting up the application on the server

We are going to need some way of transferring the application code itself to the server, as well as delivering incremental updates as new versions of the application are developed. One of the simplest ways to accomplish this is to just transfer the application files over using the rsync tool, which is what we will do. rsync is a widely used Unix tool for transferring files between machines. It has some useful features over plain file copying, such as only copying the deltas of what has changed, and two-way synchronization of files.

Create a directory for the application in the home directory of the mindmaps user using the following command:

    mkdir ~/app

Go back to the application root directory and transfer the files from it to the new remote directory:

    rsync -rtzv . mindmaps@192.168.33.10:~/app

Testing the setup

At this point, the project working tree should already be in the application directory on the remote server, because we have transferred it over using rsync. You should also be able to run it on the virtual machine, provided that you have the JDK, Vert.x, and MongoDB installed, and that you have authbind installed and configured. You can run the app with the following commands:

    cd ~/app
    JAVA_OPTS="-Djava.net.preferIPv4Stack=true" MINDMAPS_PORT=80 authbind ~/vert.x-2.0.1-final/bin/vertx run app.js

Let's go through this command bit by bit, as follows:

- We pass a Java system parameter called java.net.preferIPv4Stack to Java via the JAVA_OPTS environment variable. This will have Java use IPv4 networking only. We need it because the authbind utility only supports IPv4.
- We also explicitly set the application to use port 80 using the MINDMAPS_PORT environment variable.
- We wrap the Vert.x command with the authbind command.
- Finally, there's the call to Vert.x. Substitute the path to the Vert.x executable with the path you installed Vert.x to.

After starting the application, you should be able to see it by navigating to http://192.168.33.10 in a browser.

Setting up an upstart service

We have our application fully operational, but it isn't very convenient or reliable to have to start it manually. What we'll do next is set up an Ubuntu upstart job that will make sure the application is always running and survives things like server restarts. Upstart is an Ubuntu utility that handles task supervision and the automated starting and stopping of tasks when the machine starts up, shuts down, or when some other events occur. It is similar to the /sbin/init daemon, but is arguably easier to configure, which is the reason we'll be using it.

The first thing we need to do is set up an upstart configuration file. Open an editor with root access (using sudo) for a new file /etc/init/mindmaps.conf, and set its contents as follows:

    start on runlevel [2345]
    stop on runlevel [016]

    setuid mindmaps
    setgid mindmaps

    env JAVA_OPTS="-Djava.net.preferIPv4Stack=true"
    env MINDMAPS_PORT=80

    chdir /home/mindmaps/app

    exec authbind /home/mindmaps/vert.x-2.0.1-final/bin/vertx run app.js

Let's go through the file bit by bit, as follows:

- On the first two lines, we configure when this service will start and stop. This is defined using runlevels, which are numeric identifiers of different states of the operating system (http://en.wikipedia.org/wiki/Runlevel). 2, 3, 4, and 5 designate runlevels where the system is operational; 0, 1, and 6 designate runlevels where the system is stopping or restarting.
- We set the user and group the service will run as to the mindmaps user and its group.
- We set the two environment variables we also used earlier when testing the service: JAVA_OPTS for letting Java know it should only use the IPv4 stack, and MINDMAPS_PORT to let our application know that it should use port 80.
- We change the working directory of the service to where our application resides, using the chdir directive.
- Finally, we define the command that starts the service. It is the vertx command wrapped in the authbind command. Be sure to change the directory for the vertx binary to match the directory you installed Vert.x to.

Let's give the mindmaps user the permission to manage this job so that we won't have to always run it as root. Open up the /etc/sudoers file in an editor with the following command:

    sudo /usr/sbin/visudo

At the end of the file, add the following line:

    mindmaps ALL = (root) NOPASSWD: /sbin/start mindmaps, /sbin/stop mindmaps, /sbin/restart mindmaps, /sbin/status mindmaps

The visudo command is used to configure the privileges of different users to use the sudo command. With the line we added, we enabled the mindmaps user to run a few specific commands without having to supply a password.

At this point you should be able to start and stop the application as the mindmaps user:

    sudo start mindmaps

You also have the following additional commands available for managing the service:

    sudo status mindmaps
    sudo restart mindmaps
    sudo stop mindmaps

If there is a problem with the commands, there might be some configuration error. The upstart service will log errors to the file /var/log/upstart/mindmaps.log. You will need to open it using the sudo command.

Deploying new versions

Deploying a new version of the application consists of the following two steps:

1. Transferring the new files over using rsync
2. Restarting the mind maps service

We can make this even easier by creating a shell script that executes both steps. Create a file called deploy.sh in the root directory of the project and set its contents as follows:

    #!/bin/sh
    rsync -rtzv . mindmaps@192.168.33.10:~/app/
    ssh mindmaps@192.168.33.10 sudo restart mindmaps

Make the script executable using the following command:

    chmod +x deploy.sh

After this, just run the following command whenever you want a new version on the server:

    ./deploy.sh

To make deployment even more streamlined, you can set up SSH public key authentication so that you won't need to supply the password of the mindmaps user as you deploy. See https://help.ubuntu.com/community/SSH/OpenSSH/Keys for more information.

Summary

In this article, we have learned the following things:

- How to set up a Linux server for Vert.x production deployment
- How to set up deployment for a Vert.x application using rsync
- How to start and supervise a Vert.x process using upstart

Resources for Article:

Further resources on this subject:
- IRC-style chat with TCP server and event bus [Article]
- Coding for the Real-time Web [Article]
- Integrating Storm and Hadoop [Article]


Mobiles First – How and Why

Packt
24 Sep 2013
7 min read
(For more resources related to this topic, see here.) What is Responsive Web Design? Responsive Web Design (RWD) is a set of strategies used to display web pages on screens of varying sizes. These strategies leverage, among other things, features available in modern browsers as well as a strategy of progressive enhancement (rather than graceful degradation). What's with all the buzzwords? Well, again, once we dig into the procedures and the code, it will all get a lot more meaningful. But here is a quick example to illustrate a two-way progressive enhancement that is used in RWD. Let's say you want to make a nice button that is a large target and can be reliably pressed with big, fat clumsy thumbs on a wide array of mobile devices. In fact, you want that button to pretty much run the full spectrum of every mobile device known to humans. This is not a problem. The following code is how your (greatly simplified) HTML will look: <!DOCTYPE html> <head> <link rel="stylesheet" href="css/main.css"> </head> <body> <button class="big-button">Click Me!</button> </body> </html> The following code is how your CSS will look: .big-button { width: 100%; padding: 8px 0; background: hotPink; border: 3px dotted purple; font-size: 18px; color: #fff; border-radius: 20px; box-shadow: #111 3px 4px 0px; } So this gets you a button that stretches the width of the document's body. It's also hot pink with a dotted purple border and thick black drop shadow (don't judge my design choices). Here is what is nice about this code. Let's break down the CSS with some imaginary devices/browsers to illustrate some of the buzzwords in the first paragraph of this section: Device one (code name: Goldilocks): This device has a modern browser, with screen dimensions of 320 x 480 px. It is regularly updated, so is highly likely to have all the cool browser features you read about in your favorite blogs. Device two (code name: Baby Bear): This device has a browser that partially supports CSS2 and is poorly documented, so much so that you can only figure out which styles are supported through trial and error or forums. The screen is 320 x 240 px. This describes a device that predated the modern adoption levels of browsing the web on a mobile but your use case may require you to support it anyway. Device three (code name: Papa Bear): This is a laptop computer with a modern browser but you will never know the screen dimensions since the viewport size is controlled by the user. Thus, Goldilocks gets the following display: Because it is all tricked out with full CSS3 feature, it will render the rounded corners and drop shadow. Baby Bear, on the other hand, will only get square corners and no drop shadow (as seen in the previous screenshot) because its browser can't make sense of those style declarations and will just do nothing with them. It's not a huge deal, though, as you still get the important features of the button; it stretches the full width of the screen, making it a big target for all the thumbs in the world (also, it's still pink). Papa Bear gets the button with all the CSS3 goodies too. That said, it stretches the full width of the browser no matter how absurdly wide a user makes his/her browser. We only need it to be about 480 px wide to make it big enough for a user to click and look reasonable within whatever design we are imagining. So in order to make that happen, we will take advantage of a nifty CSS3 feature called @media queries. 
We will use these extensively throughout this article and make your stylesheet look like this:

.big-button {
    width: 100%;
    padding: 8px 0;
    background: hotPink;
    border: 3px dotted purple;
    font-size: 18px;
    color: #fff;
    border-radius: 20px;
    box-shadow: #111 3px 3px 0px;
}

@media only screen and (min-width: 768px) {
    .big-button {
        width: 480px;
    }
}

Now if you were coding along with me and have a modern browser (meaning a browser that supports most, if not all, features in the HTML5 specification, more on this later), you could do something fun. You can resize the width of your browser to see the button respond to the @media queries. Start off with the browser really narrow and the button will get wider until the screen is 768 px wide; beyond that the button will snap to being only 480 px. If you start off with your browser wider than 768 px, the button will stay 480 px wide until your browser width is under 768 px. Once it is under this threshold, the button snaps to being full width. This happens because of the media query.

This query essentially asks the browser a couple of questions. The first part of the query is about what type of medium it is (print or screen). The second part of the query asks what the screen's minimum width is. When the browser replies yes to both screen and min-width: 768px, the conditions are met for applying the styles within that media query. To say these styles are applied is a little misleading. In fact, the approach actually takes advantage of the fact that the styles provided in the media query can override other styles set previously in the stylesheet. In our case, the only style applied is an explicit width for the button that overrides the percentage width that was set previously.

So, the nice thing about this is, we can make one website that will display appropriately for lots of screen sizes. This approach re-uses a lot of code, only applying styles as needed for various screen widths. Other approaches for getting usable sites to mobile devices require maintaining multiple codebases and having to resort to device detection, which only works if you can actually detect what device is requesting your website. These other approaches can be fragile and also break the Don't Repeat Yourself (DRY) commandment of programming.

This article is going to go over a specific way of approaching RWD, though. We will use the 320 and Up framework to facilitate a mobile first strategy. In short, this strategy assumes that a device requesting the site has a small screen and doesn't necessarily have a lot of processing power. 320 and Up also has a lot of great helpers to make it fast and easy to produce features that many clients require on their sites. But we will get into these details as we build a simple site together. Take note, there are lots of frameworks out there that will help you build responsive sites, and there are even some that will help you build a responsive, mobile first site. One thing that distinguishes 320 and Up is that it is a tad less opinionated than most frameworks. I like it because it is simple and eliminates the busy work of setting up things one is likely to use for many sites. I also like that it is open source and can be used with static sites as well as any server-side language.
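To see how this plays out at the level of a whole stylesheet, here is a minimal sketch of the mobile first organization we will be working towards: the base rules are written for small screens, and wider layouts are layered on with min-width media queries. The selectors and breakpoints below are purely illustrative and are not part of 320 and Up itself.

/* Base styles: written for small screens first, no media query needed */
.page {
    width: 100%;
    padding: 10px;
}

/* Tablets and up: only widen the layout when there is room for it */
@media only screen and (min-width: 768px) {
    .page {
        width: 700px;
        margin: 0 auto;
    }
}

/* Desktops and up */
@media only screen and (min-width: 992px) {
    .page {
        width: 940px;
    }
}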
Prerequisites

Before we can start building, you need to download the code associated with this article. It will have all the components that you will need and is structured properly for you. If you want 320 and Up for your own projects, you can get it from the website of Andy Clarke (he's the fellow responsible for 320 and Up) or his GitHub account. I also maintain a fork in my own GitHub repo.

Andy Clarke's site: http://stuffandnonsense.co.uk/projects/320andup/
GitHub: https://github.com/malarkey/320andup
My GitHub fork: https://github.com/jasongonzales23/320andup

That said, the simplest route to follow along with this article is to get the code I've wrapped up for you from: https://github.com/jasongonzales23/mobilefirst_book

Summary

In this article, we looked at a simple example of how responsive web design strategies can serve up the same content to screens of many sizes and have the layout adjust to the screen it is displayed on. We wrote a simple example of that for a pink button and got a link to 320 and Up, so we can get started building an entire mobile-first responsive website.

Resources for Article: Further resources on this subject: HTML5 Canvas [Article] HTML5 Presentations - creating our initial presentation [Article] Creating mobile friendly themes [Article]


Quick start – using Haml

Packt
24 Sep 2013
8 min read
(For more resources related to this topic, see here.)

Step 1 – integrating with Rails and creating a simple view file

Let's create a simple Rails application and change one of the view files to Haml from ERB:

Create a new Rails application named blog: rails new blog
Add the Haml gem to this application's Gemfile and run: bundle install
When the gem has been added, generate some views that we can convert to Haml and learn about its basic features. Run the Rails scaffold generator to create a scaffold named post: rails g scaffold post
After this, you need to run the migrations to create the database tables: rake db:migrate

You should get an output as shown in the following screenshot:

Our application is not yet generating Haml views automatically. We will switch it to this mode in the next steps. The index.html.erb file that has been generated and is located in app/views/posts/index.html.erb looks as follows:

<h1>Listing posts</h1>
<table>
  <tr>
    <th></th>
    <th></th>
    <th></th>
  </tr>
<% @posts.each do |post| %>
  <tr>
    <td><%= link_to 'Show', post %></td>
    <td><%= link_to 'Edit', edit_post_path(post) %></td>
    <td><%= link_to 'Destroy', post, method: :delete, data: { confirm: 'Are you sure?' } %></td>
  </tr>
<% end %>
</table>
<br>
<%= link_to 'New Post', new_post_path %>

Let's convert it to an Haml view step-by-step. First, let's understand the basic features of Haml:

Any HTML tags are written with a percent sign and then the name of the tag
Whitespace (tabs) is being used to create nested tags
Any part of an Haml line that is not interpreted as something else is taken to be plain text
Closing tags as well as end statements are omitted for Ruby blocks

Knowing the previous features we can write the first lines of our example view in Haml. Open the index.html.erb file in an editor and replace <h1>, <table>, <th>, and <tr> as follows:

<h1>Listing posts</h1> can be written as %h1 Listing posts
<table> can be written as %table
<tr> becomes %tr
<th> becomes %th

After those first replacements our view file should look like:

%h1 Listing posts
%table
  %tr
    %th
    %th
    %th
<% @posts.each do |post| %>
  <tr>
    <td><%= link_to 'Show', post %></td>
    <td><%= link_to 'Edit', edit_post_path(post) %></td>
    <td><%= link_to 'Destroy', post, method: :delete, data: { confirm: 'Are you sure?' } %></td>
  </tr>
<% end %>
<br>
<%= link_to 'New Post', new_post_path %>

Please notice how %tr is nested within the %table tag using a tab and also how %th is nested within %tr using a tab.

Next, let's convert the Ruby parts of this view. Ruby is evaluated and its output is inserted into the view when using the equals character. In ERB we had to use <%=, whereas in Haml, this is shortened to just a =. The following examples illustrate this:

<%= link_to 'Show', post %> becomes = link_to 'Show', post and all the other <%= parts are changed accordingly
The equals sign can be used at the end of the tag to insert the Ruby code within that tag
Empty (void) tags, such as <br>, are created by adding a forward slash at the end of the tag

Please note that you have to leave a space after the equals sign. After the changes are incorporated, our view file will look like:

%h1 Listing posts
%table
  %tr
    %th
    %th
    %th
<% @posts.each do |post| %>
  %tr
    %td= link_to 'Show', post
    %td= link_to 'Edit', edit_post_path(post)
    %td= link_to 'Destroy', post, method: :delete, data: { confirm: 'Are you sure?' }
%br/
= link_to 'New Post', new_post_path

The only thing left to do now is to convert the Ruby block part: <% @posts.each do |post| %>.
Code that needs to be run, but does not generate any output, is written using a hyphen character. Here is how this conversion works:

Ruby blocks do not need to be closed, they end when the indentation decreases
HTML tags and the Ruby code that is nested within the block are indented by one tab more than the block
<% @posts.each do |post| %> becomes - @posts.each do |post|
Remember about a space after the hyphen character

After we replace the remaining part in our view file according to the previous rules, it should look as follows:

%h1 Listing posts
%table
  %tr
    %th
    %th
    %th
  - @posts.each do |post|
    %tr
      %td= link_to 'Show', post
      %td= link_to 'Edit', edit_post_path(post)
      %td= link_to 'Destroy', post, method: :delete, data: { confirm: 'Are you sure?' }
%br/
= link_to 'New Post', new_post_path

Save the view file and change its name to index.html.haml. This is now an Haml-based template. Start our example Rails application and visit http://localhost:3000/posts to see the view being rendered by Rails, as shown in the following screenshot:

Step 2 – switching Rails application to use Haml as the templating engine

In the previous step, we have enabled Haml in the test application. However, if you generate new view files using any of the Rails built-in generators, it will still use ERB. Let's switch the application to use Haml as the templating engine. Edit the blog application Gemfile and add a gem named haml-rails to it. You can add it to the :development group because the generators are only used during development and this functionality is not needed in production or test environments. Our application Gemfile now looks as shown in the following code:

source 'https://rubygems.org'

gem 'rails', '3.2.13'
gem 'sqlite3'
gem 'haml'
gem 'haml-rails', :group => :development

group :assets do
  gem 'sass-rails', '~> 3.2.3'
  gem 'coffee-rails', '~> 3.2.1'
  gem 'uglifier', '>= 1.0.3'
end

gem 'jquery-rails'

Then run the following bundle command to install the gem: bundle install

Let's say the posts in our application need to have categories. Run the scaffold generator to create some views for categories. This generator will create views using Haml, as shown in the following screenshot:

Please note that new views have a .html.haml extension and are using Haml. For example, the _form.html.haml view for the form looks as follows:

= form_for @category do |f|
  - if @category.errors.any?
    #error_explanation
      %h2= "#{pluralize(@category.errors.count, "error")} prohibited this category from being saved:"
      %ul
        - @category.errors.full_messages.each do |msg|
          %li= msg
  .actions
    = f.submit 'Save'

There are two very useful shorthand notations for creating a <div> tag with a class or a <div> tag with an ID.

To create a <div> tag with an ID, use the hash symbol followed by the name of the ID. For example, #error_explanation will result in <div id="error_explanation">
To create a <div> tag with a class attribute, use a dot followed by the name of the class. For example, .actions will create <div class="actions">

Step 3 – converting existing view templates to Haml

Our example blog app still has some leftover templates which are using ERB as well as an application.html.erb layout file. We would like to convert those to Haml.
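As a quick illustration of what a manual conversion involves, take a small ERB fragment along the following lines (a hypothetical snippet, not one generated by our blog application):

<div class="container">
  <h1><%= yield :title %></h1>
  <%= yield %>
</div>

In Haml, the same fragment would be written as:

.container
  %h1= yield :title
  = yield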
There is no need to do it all individually, because there is a handy gem which will automatically convert all the ERB files to Haml, shown as follows:

Let's install the html2haml gem: gem install html2haml

Using the cd command, change the current working directory to the app directory of our example application and run the following bash command to convert all the ERB files to Haml (to run this command you need a bash shell. On Windows, you can use the embedded bash shell which ships with GitHub for Windows, Cygwin bash, MinGW bash, or the MSYS bash shell which is bundled with Git for Windows):

for file in $(find . -type f -name \*.html.erb); do
  html2haml -e ${file} "$(dirname ${file})/$(basename ${file} .erb).haml"
done

Then, to remove the ERB files, run this command:

find ./ -name *erb | while read line; do rm $line; done

Those two Bash snippets will first convert all the ERB files recursively in the app directory of our application and then remove the remaining ERB view templates.

Summary

This article covered integrating with Rails and creating a simple view file, switching a Rails application to use Haml as the templating engine, and converting existing view templates to Haml.

Resources for Article: Further resources on this subject: Building HTML5 Pages from Scratch [Article] URL Shorteners – Designing the TinyURL Clone with Ruby [Article] Building tiny Web-applications in Ruby using Sinatra [Article]

Self-service Business Intelligence, Creating Value from Data

Packt
20 Sep 2013
15 min read
(For more resources related to this topic, see here.) Over the years most businesses have spent considerable amount of time, money, and effort in building databases, reporting systems, and Business Intelligence (BI) systems. IT often thinks that they are providing the necessary information to the business users for them to make the right decisions. However, when I meet the users they tell me a different story. Most often they say that they do not have the information they need to do their job. Or they have to spend a lot of time getting the relevant information. Many users state that they spend more time getting access to the data than understanding the information. This divide between IT and business is very common, it causes a lot of frustration and can cost a lot of money, which is a real issue for companies that needs to be solved for them to be profitable in the future. Research shows that by 2015 companies that build a good information management system will be 20 percent more profitable compared to their peers. You can read the entire research publication from http://download.microsoft.com/download/7/B/8/7B8AC938-2928-4B65-B1B3-0B523DDFCDC7/Big%20Data%20Gartner%20 information_management_in_the_21st%20Century.pdf. So how can an organization avoid the pitfalls in business intelligence systems and create an effective way of working with information? This article will cover the following topics concerning it: Common user requirements related to BI Understanding how these requirements can be solved by Analysis Services An introduction to self-service reporting Identifying common user requirements for a business intelligence system In many cases, companies that struggle with information delivery do not have a dedicated reporting system or data warehouse. Instead the users have access only to the operational reports provided by each line of business application. This is extremely troublesome for the users that want to compare information from different systems. As an example, think of a sales person that wants to have a report that shows the sales pipeline, from the Customer Relationship Management (CRM) system together with the actual sales figures from the Enterprise Resource Planning (ERP) system. Without a common reporting system the users have to combine the information themselves with whatever tools are available to them. Most often this tool is Microsoft Excel. While Microsoft Excel is an application that can be used to effectively display information to the users, it is not the best system for data integration. To perform the steps of extracting, transforming, and loading data (ETL), from the source system, the users have to write tedious formulas and macros to clean data, before they can start comparing the numbers and taking actual decisions based on the information. Lack of a dedicated reporting system can also cause trouble with the performance of the Online Transaction Processing (OLTP) system. When I worked in the SQL Server support group at Microsoft, we often had customers contacting us on performance issues that they had due to the users running the heavy reports directly on the production system. To solve this problem, many companies invest in a dedicated reporting system or a data warehouse. The purpose of this system is to contain a database customized for reporting, where the data can be transformed and combined once and for all from all source systems. The data warehouse also serves another purpose and that is to serve as the storage of historic data. 
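To make the idea of a dedicated reporting structure a little more concrete, here is a deliberately simplified sketch of a dimensionally modeled sales area. The table and column names are illustrative only; they are not the AdventureWorks tables used later in this article:

-- One row per calendar day, loaded once by the ETL process
CREATE TABLE DimDate (
    DateKey       int          NOT NULL PRIMARY KEY,
    CalendarYear  smallint     NOT NULL,
    MonthName     nvarchar(20) NOT NULL
);

-- One row per product
CREATE TABLE DimProduct (
    ProductKey    int          NOT NULL PRIMARY KEY,
    ProductName   nvarchar(50) NOT NULL,
    Category      nvarchar(50) NOT NULL
);

-- One row per sales transaction line, pointing at the dimensions
CREATE TABLE FactSales (
    OrderDateKey  int   NOT NULL REFERENCES DimDate (DateKey),
    ProductKey    int   NOT NULL REFERENCES DimProduct (ProductKey),
    OrderQuantity int   NOT NULL,
    SalesAmount   money NOT NULL
);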
Many companies that have invested in a common reporting database or data warehouse still require a person with IT skills to create a report. The main reason for this is that the organizations that have invested in a reporting system have had the expert users define the requirements for the system. Expert users will have totally different requirements than the majority of the users in the organization and an expert tool is often very hard to learn. An expert tool that is too hard for the normal users will put a strain on the IT department that will have to produce all the reports. This will result in the end users waiting for their reports for weeks and even months. One large corporation that I worked with had invested millions of dollars in a reporting solution, but to get a new report the users had to wait between nine and 12 months, before they got the report in their hand. Imagine the frustration and the grief that waiting this long before getting the right information causes the end users. To many users, business intelligence means simple reports with only the ability to filter data in a limited way. While simple reports such as the one in the preceding screenshot can provide valuable information, it does not give the users the possibility to examine the data in detail. The users cannot slice-and-dice the information and they cannot drill down to the details, if the aggregated level that the report shows is insufficient for decision making. If a user would like to have these capabilities, they would need to export the information into a tool that enables them to easily do so. In general, this means that the users bring the information into Excel to be able to pivot the information and add their own measures. This often results in a situation where there are thousands of Excel spreadsheets floating around in the organization, all with their own data, and with different formulas calculating the same measures. When analyzing data, the data itself is the most important thing. But if you cannot understand the values, the data is of no benefit to you. Many users find that it is easier to understand information, if it is presented in a way that they can consume efficiently. This means different things to different users, if you are a CEO, you probably want to consume aggregated information in a dashboard such as the one you can see in the following screenshot: On the other hand, if you are a controller, you want to see the numbers on a very detailed level that would enable you to analyze the information. A controller needs to be able to find the root cause, which in most cases includes analyzing information on a transaction level. A sales representative probably does not want to analyze the information. Instead, he or she would like to have a pre-canned report filtered on customers and time to see what goods the customers have bought in the past, and maybe some suggested products that could be recommended to the customers. Creating a flexible reporting solution What the companies need is a way for the end users to access information in a user-friendly interface, where they can create their own analytical reports. Analytical reporting gives the user the ability to see trends, look at information on an aggregated level, and drill down to the detailed information with a single-click. In most cases this will involve building a data warehouse of some kind, especially if you are going to reuse the information in several reports. 
The reason for creating a data warehouse is mainly the ability to combine different sources into one infrastructure once. If you build reports that do the integration and cleaning of the data in the reporting layer, then you will end up doing the same tasks of data modification in every report. This is both tedious and could cause unwanted errors as the developer would have to repeat all the integration efforts in all the reports that need to access the data. If you do it in the data warehouse you can create an ETL program that will move the data, and prepare it for the reports once, and all the reports can access this data.

A data warehouse is also beneficial from many other angles. With a data warehouse, you have the ability to offload the burden of running the reports from the transactional system, a system that is built mainly for high transaction rates at high speed, and not for providing summarized data in a report to the users. From a report authoring perspective a data warehouse is also easier to work with. Consider the simple static report shown in the first screenshot. This report is built against a data warehouse that has been modeled using dimensional modeling. This means that the query used in the report is very simple compared to getting the information from a transactional system. In this case, the query is a join between six tables containing all the information that is available about dates, products, sales territories, and sales.

select
    f.SalesOrderNumber,
    s.EnglishProductSubcategoryName,
    SUM(f.OrderQuantity) as OrderQuantity,
    SUM(f.SalesAmount) as SalesAmount,
    SUM(f.TaxAmt) as TaxAmt
from FactInternetSales f
join DimProduct p on f.ProductKey = p.ProductKey
join DimProductSubcategory s on p.ProductSubcategoryKey = s.ProductSubcategoryKey
join DimProductCategory c on s.ProductCategoryKey = c.ProductCategoryKey
join DimDate d on f.OrderDateKey = d.DateKey
join DimSalesTerritory t on f.SalesTerritoryKey = t.SalesTerritoryKey
where c.EnglishProductCategoryName = @ProductCategory
  and d.CalendarYear = @Year
  and d.EnglishMonthName = @MonthName
  and t.SalesTerritoryCountry = @Country
group by f.SalesOrderNumber, s.EnglishProductSubcategoryName

You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

The preceding query is included for illustrative purposes. As you can see it is very simple to write for someone that is well versed in Transact-SQL. Compare this to getting all the information from the operational system necessary to produce this report, and all the information stored in the six tables. It would be a daunting task. Even though the sample database for AdventureWorks is very simple, we still need to query a lot of tables to get to the information. The following figure shows the necessary tables from the OLTP system you would need to query, to get the information available in the six tables in the data warehouse. Now imagine creating the same query against a real system, it could easily be hundreds of tables involved to extract the data that are stored in a simple data model used for sales reporting. As you can see clearly now, working against a model that has been optimized for reporting is much simpler when creating the reports. Even with a well-structured data warehouse, many users would struggle with writing the select query driving the report shown earlier.
The users, in general, do not know SQL. They typically do not understand the database schema since the table and column names usually consists of abbreviations that can be cryptic to the casual user. What if a user would like to change the report, so that it would show data in a matrix with the ability to drill down to lower levels? Then they most probably would need to contact IT. IT would need to rewrite the query and change the entire report layout, causing a delay between the need of the data and the availability. What is needed is a tool that enables the users to work with the business attributes instead of the tables and columns, with simple understandable objects instead of a complex database engine. Fortunately for us SQL Server contains this functionality; it is just for us database professionals to learn how to bring these capabilities to the business. That is what this article is all about, creating a flexible reporting solution allowing the end users to create their own reports. I have assumed that you as the reader have knowledge of databases and are well-versed with your data. What you will learn in this article is, how to use a component of SQL Server 2012 called SQL Server Analysis Services to create a cube or semantic model, exposing data in the simple business attributes allowing the users to use different tools to create their own ad hoc reports. Think of the cube as a PivotTable spreadsheet in Microsoft Excel. From the users perspective, they have full flexibility when analyzing the data. You can drag-and-drop whichever column you want to, into either the rows, columns, or filter boxes. The PivotTable spreadsheet also summarizes the information depending on the different attributes added to the PivotTable spreadsheet. The same capabilities are provided through the semantic model or the cube. When you are using the semantic model the data is not stored locally within the PivotTable spreadsheet, as it is when you are using the normal PivotTable functionality in Microsoft Excel. This means that you are not limited to the number of rows that Microsoft Excel is able to handle. Since the semantic model sits in a layer between the database and the end user reporting tool, you have the ability to rename fields, add calculations, and enhance your data. It also means that whenever new data is available in the database and you have processed your semantic model, then all the reports accessing the model will be updated. The semantic model is available in SQL Server Analysis Services. It has been part of the SQL Server package since Version 7.0 and has had major revisions in the SQL Server 2005, 2008 R2, and 2012 versions. This article will focus on how to create semantic models or cubes through practical step-by-step instructions. Getting user value through self-service reporting SQL Server Analysis Services is an application that allows you to create a semantic model that can be used to analyze very large amounts of data with great speed. The models can either be user created, or created and maintained by IT. If the user wants to create it, they can do so, by using a component in Microsoft Excel 2010 and upwards called PowerPivot. If you run Microsoft Excel 2013, it is included in the installed product, and you just need to enable it. In Microsoft Excel 2010, you have to download it as a separate add-in that you either can find on the Microsoft homepage or on the site called http://www.powerpivot.com. 
PowerPivot creates and uses a client-side semantic model that runs in the context of the Microsoft Excel process; you can only use Microsoft Excel as a way of analyzing the data. If you just would like to run a user created model, you do not need SQL Server at all, you just need Microsoft Excel. On the other hand, if you would like to maintain user created models centrally then you need, both SQL Server 2012 and SharePoint. Instead, if you would like IT to create and maintain a central semantic model, then IT need to install SQL Server Analysis Services. IT will, in most cases, not use Microsoft Excel to create the semantic models. Instead, IT will use Visual Studio as their tool. Visual Studio is much more suitable for IT compared to Microsoft Excel. Not only will they use it to create and maintain SQL Server Analysis Services semantic models, they will also use it for other database related tasks. It is a tool that can be connected to a source control system allowing several developers to work on the same project. The semantic models that they create from Visual Studio will run on a server that several clients can connect to simultaneously. The benefit of running a server-side model is that they can use the computational power of the server, this means that you can access more data. It also means that you can use a variety of tools to display the information. Both approaches enable users to do their own self-service reporting. In the case where PowerPivot is used they have complete freedom; but they also need the necessary knowledge to extract the data from the source systems and build the model themselves. In the case where IT maintains the semantic model, the users only need the knowledge to connect an end user tool such as Microsoft Excel to query the model. The users are, in this case, limited to the data that is available in the predefined model, but on the other hand, it is much simpler to do their own reporting. This is something that can be seen in the preceding figure that shows Microsoft Excel 2013 connected to a semantic model. SQL Server Analysis Services is available in the Standard edition with limited functionality, and in the BI and Enterprise edition with full functionality. For smaller departmental solutions the Standard edition can be used, but in many cases you will find that you need either the BI or the Enterprise edition of SQL Server. If you would like to create in-memory models, you definitely cannot run the Standard edition of the software since this functionality is not available in the Standard edition of SQL Server. Summary In this article, you learned about the requirements that most organizations have when it comes to an information management platform. You were introduced to SQL Server Analysis Services that provides the capabilities needed to create a self-service platform that can serve as the central place for all the information handling. SQL Server Analysis Services allows users to work with the data in the form of business entities, instead of through accessing a databases schema. It allows users to use easy to learn query tools such as Microsoft Excel to analyze the large amounts of data with subsecond response times. The users can easily create different kinds of reports and dashboards with the semantic model as the data source. 
Resources for Article : Further resources on this subject: MySQL Linked Server on SQL Server 2008 [Article] Connecting to Microsoft SQL Server Compact 3.5 with Visual Studio [Article] FAQs on Microsoft SQL Server 2008 High Availability [Article]


Oracle GoldenGate- Advanced Administration Tasks - I

Packt
20 Sep 2013
19 min read
(For more resources related to this topic, see here.) Upgrading Oracle GoldenGate binaries In this recipe you will learn how to upgrade GoldenGate binaries. You will also learn about GoldenGate patches and how to apply them. Getting ready For this recipe, we will upgrade the GoldenGate binaries from version 11.2.1.0.1 to 11.2.1.0.3 on the source system, that is prim1-ol6-112 in our case. Both of these binaries are available from the Oracle Edelivery website under the part number V32400-01 and V34339-01 respectively. 11.2.1.0.1 binaries are installed under /u01/app/ggate/112101. How to do it... The steps to upgrade the Oracle GoldenGate binaries are: Make a new directory for 11.2.1.0.3 binaries: mkdir /u01/app/ggate/112103 Copy the binaries ZIP file to the server in the new directory. Unzip the binaries file: [ggate@prim1-ol6-112 112103]$ cd /u01/app/ggate/112103 [ggate@prim1-ol6-112 112103]$ unzip V34339-01.zip Archive: V34339-01.zip inflating: fbo_ggs_Linux_x64_ora11g_64bit.tar inflating: Oracle_GoldenGate_11.2.1.0.3_README.doc inflating: Oracle GoldenGate_11.2.1.0.3_README.txt inflating: OGG_WinUnix_Rel_Notes_11.2.1.0.3.pdf Install the new binaries in /u01/app/ggate/112103: [ggate@prim1-ol6-112 112103]$ tar -pxvf fbo_ggs_Linux_x64_ora11g_64bit.tar Stop the processes in the existing installation: [ggate@prim1-ol6-112 112103]$ cd /u01/app/ggate/112101 [ggate@prim1-ol6-112 112101]$ ./ggsci Oracle GoldenGate Command Interpreter for Oracle Version 11.2.1.0.1 OGGCORE_11.2.1.0.1_PLATFORMS_120423.0230_FBO Linux, x64, 64bit (optimized), Oracle 11g on Apr 23 2012 08:32:14 Copyright (C) 1995, 2012, Oracle and/or its affiliates. All rights reserved. GGSCI (prim1-ol6-112.localdomain) 1> stop * Sending STOP request to EXTRACT EGGTEST1 ... Request processed. Sending STOP request to EXTRACT PGGTEST1 ... Request processed. Stop the manager process: GGSCI (prim1-ol6-112.localdomain) 2> STOP MGRManager process is required by other GGS processes.Are you sure you want to stop it (y/n)? ySending STOP request to MANAGER ...Request processed.Manager stopped. Copy the subdirectories to the new binaries: [ggate@prim1-ol6-112 112101]$ cp -R dirprm /u01/app/ggate/112103/[ggate@prim1-ol6-112 112101]$ cp -R dirrpt /u01/app/ggate/112103/[ggate@prim1-ol6-112 112101]$ cp -R dirchk /u01/app/ggate/112103/[ggate@prim1-ol6-112 112101]$ cp -R BR /u01/app/ggate/112103/[ggate@prim1-ol6-112 112101]$ cp -R dirpcs /u01/app/ggate/112103/ [ggate@prim1-ol6-112 112101]$ cp -R dirdef /u01/app/ggate/112103/[ggate@prim1-ol6-112 112101]$ cp -R dirout /u01/app/ggate/112103/[ggate@prim1-ol6-112 112101]$ cp -R dirdat /u01/app/ggate/112103/[ggate@prim1-ol6-112 112101]$ cp -R dirtmp /u01/app/ggate/112103/ Modify any parameter files under dirprm if you have hardcoded old binaries path in them. Edit the ggate user profile and update the value of the GoldenGate binaries home: vi .profile export GG_HOME=/u01/app/ggate/112103 Start the manager process from the new binaries: [ggate@prim1-ol6-112 ~]$ cd /u01/app/ggate/112103/ [ggate@prim1-ol6-112 112103]$ ./ggsci Oracle GoldenGate Command Interpreter for Oracle Version 11.2.1.0.3 14400833 OGGCORE_11.2.1.0.3_PLATFORMS_120823.1258_FBO Linux, x64, 64bit (optimized), Oracle 11g on Aug 23 2012 20:20:21 Copyright (C) 1995, 2012, Oracle and/or its affiliates. All rights reserved. GGSCI (prim1-ol6-112.localdomain) 1> START MGR Manager started. Start the processes: GGSCI (prim1-ol6-112.localdomain) 18> START EXTRACT * Sending START request to MANAGER ... 
EXTRACT EGGTEST1 starting Sending START request to MANAGER ... EXTRACT PGGTEST1 starting How it works... The method to upgrade the GoldenGate binaries is quite straightforward. As seen in the preceding section, you need to download and install the binaries on the server in a new directory. After this, you would stop the all GoldenGate processes that are running from the existing binaries. Then you would copy all the important GoldenGate directories with parameter files, trail files, report files, checkpoint files, and recovery files to the new binaries. If your trail files are kept on a separate filesystem which is linked to the dirdat directory using a softlink, then you would just need to create a new softlink under the new GoldenGate binaries home. Once all the files are copied, you would need to modify the parameter files if you have the path of the existing binaries hardcoded in them. The same would also need to be done in the OS profile of the ggate user. After this, you just start the manager process and rest of the processes from the new home. GoldenGate patches are all delivered as full binaries sets. This makes the procedure to patch the binaries exactly the same as performing major release upgrades. Table structure changes in GoldenGate environments with similar table definitions Almost all of the applications systems in IT undergo some change over a period of time. This change might include a fix of an identified bug, an enhancement or some configuration change required due to change in any other part of the system. The data that you would replicate using GoldenGate will most likely be part of some application schema. These schemas, just like the application software, sometimes require some changes which are driven by the application vendor. If you are replicating DDL along with DML in your environment then these schema changes will most likely be replicated by GoldenGate itself. However, if you are only replicating only DML and there are any DDL changes in the schema particularly around the tables that you are replicating, then these will affect the replication and might even break it. In this recipe, you will learn how to update the GoldenGate configuration to accommodate the schema changes that are done to the source system. This recipe assumes that the definitions of the tables that are replicated are similar in both the source and target databases. Getting ready For this recipe we are making the following assumptions: GoldenGate is set up to replicate only DML changes between the source and target environments. The application will be stopped for making schema changes in the source environment. The table structures in the source and target database are similar. The replication is configured for all objects owned by a SCOTT user using a SCOTT.* clause. The GoldenGate Admin user has been granted SELECT ANY TABLE in the source database and INSERT ANY TABLE, DELETE ANY TABLE, UPDATE ANY TABLE, SELECT ANY TABLE in the target database. The schema changes performed in this recipe are as follows: Add a new column called DOB (DATE) to the EMP table. Modify the DNAME column in the DEPT table to VARCHAR(20). Add a new table called ITEMS to the SCOTT schema: ITEMS ITEMNO NUMBER(5) PRIMARY KEY NAME VARCHAR(20) Add a new table called SALES to the SCOTT schema: SALES INVOICENO NUMBER(9) PRIMARY KEY ITEMNO NUMBER(5) FOREIGN KEY ITEMS(ITEMNO) EMPNO NUMBER(4) FOREIGN KEY EMP(EMPNO) Load the values for the DOB column in the EMP table. Load a few records in the ITEMS table. 
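Before running through the steps, it can be useful to confirm which of the SCOTT tables already have supplemental log groups defined. A simple dictionary query such as the following (an illustrative check, not part of the original recipe) shows what has previously been added, for example via ADD TRANDATA:

SELECT owner, table_name, log_group_name, log_group_type
FROM   dba_log_groups
WHERE  owner = 'SCOTT'
ORDER  BY table_name;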
How to do it… Here are the steps that you can follow to implement the preceding schema changes in the source environment: Ensure that the application accessing the source database is stopped. There should not be any process modifying the data in the database. Once you have stopped the application, wait for 2 to 3 minutes so that all pending redo is processed by the GoldenGate extract. Check the latest timestamp read by the Extract and Datapump processes and ensure it is the current timestamp: GGSCI (prim1-ol6-112.localdomain) 9> INFO EXTRACT EGGTEST1 GGSCI (prim1-ol6-112.localdomain) 10> INFO EXTRACT * EXTRACT EGGTEST1 Last Started 2013-03-25 22:24 Status RUNNING Checkpoint Lag 00:00:00 (updated 00:00:07 ago) Log Read Checkpoint Oracle Redo Logs 2013-03-25 22:35:06 Seqno 350, RBA 11778560 SCN 0.11806849 (11806849) EXTRACT PGGTEST1 Last Started 2013-03-25 22:24 Status RUNNING Checkpoint Lag 00:00:00 (updated 00:00:04 ago) Log Read Checkpoint File /u01/app/ggate/dirdat/st000010 2013-03-25 22:35:05.000000 RBA 7631 Stop the Extract and Datapump processes in the source environment: GGSCI (prim1-ol6-112.localdomain) 1> STOP EXTRACT * Sending STOP request to EXTRACT EGGTEST1 ... Request processed. Sending STOP request to EXTRACT PGGTEST1 ... Request processed. Check the status of the Replicat process in the target environment and ensure that it has processed the timestamp noted in step 3: GGSCI (stdby1-ol6-112.localdomain) 54> INFO REPLICAT * REPLICAT RGGTEST1 Last Started 2013-03-25 22:25 Status RUNNING Checkpoint Lag 00:00:00 (updated 00:00:04 ago) Log Read Checkpoint File ./dirdat/rt000061 2013-03-25 22:37:04.950188 RBA 10039 Stop the Replicat process in the target environment: GGSCI (stdby1-ol6-112.localdomain) 48> STOP REPLICAT * Sending STOP request to REPLICAT RGGTEST1 ... Request processed. Apply the schema changes to the source database: SQL> ALTER TABLE SCOTT.EMP ADD DOB DATE; Table altered. SQL> ALTER TABLE SCOTT.DEPT MODIFY DNAME VARCHAR(20); Table altered. SQL> CREATE TABLE SCOTT.ITEMS ( ITEMNO NUMBER(5) PRIMARY KEY, NAME VARCHAR(20)); Table created. SQL> CREATE TABLE SCOTT.SALES ( INVOICENO NUMBER(9) PRIMARY KEY, ITEMNO NUMBER(5) REFERENCES SCOTT.ITEMS(ITEMNO), EMPNO NUMBER(4) REFERENCES SCOTT.EMP(EMPNO)); Table created. SQL> UPDATE SCOTT.EMP SET DOB=TO_DATE('01-01-1980','DD-MM-YYYY'); 14 rows updated. SQL> INSERT INTO SCOTT.ITEMS VALUES (1,'IRON'); 1 row created. SQL> INSERT INTO SCOTT.ITEMS VALUES (2,'COPPER'); 1 row created. SQL> INSERT INTO SCOTT.ITEMS VALUES (3,'GOLD'); 1 row created. SQL> INSERT INTO SCOTT.ITEMS VALUES (4,'SILVER'); 1 row created. SQL> COMMIT; Commit complete. Apply the schema changes to the target database: SQL> ALTER TABLE SCOTT.EMP ADD DOB DATE; Table altered. SQL> ALTER TABLE SCOTT.DEPT MODIFY DNAME VARCHAR(20); Table altered. SQL> CREATE TABLE SCOTT.ITEMS ( ITEMNO NUMBER(5) PRIMARY KEY, NAME VARCHAR(20)); Table created. SQL> CREATE TABLE SCOTT.SALES ( INVOICENO NUMBER(9) PRIMARY KEY, ITEMNO NUMBER(5) REFERENCES SCOTT.ITEMS(ITEMNO), EMPNO NUMBER(4) REFERENCES SCOTT.EMP(EMPNO)); Table created. SQL> UPDATE SCOTT.EMP SET DOB=TO_DATE('01-01-1980','DD-MM-YYYY'); 14 rows updated. SQL> INSERT INTO SCOTT.ITEMS VALUES (1,'IRON'); 1 row created. SQL> INSERT INTO SCOTT.ITEMS VALUES (2,'COPPER'); 1 row created. SQL> INSERT INTO SCOTT.ITEMS VALUES (3,'GOLD'); 1 row created. SQL> INSERT INTO SCOTT.ITEMS VALUES (4,'SILVER'); 1 row created. SQL> COMMIT; Commit complete. 
Add supplemental logging for the newly added tables: GGSCI (prim1-ol6-112.localdomain) 4> DBLOGIN USERID GGATE_ADMIN@ DBORATEST Password: Successfully logged into database. GGSCI (prim1-ol6-112.localdomain) 5> ADD TRANDATA SCOTT.ITEMS Logging of supplemental redo data enabled for table SCOTT.ITEMS. GGSCI (prim1-ol6-112.localdomain) 6> ADD TRANDATA SCOTT.SALES Logging of supplemental redo data enabled for table SCOTT.SALES. Alter the Extract and Datapump processes to skip the changes generated by the Application Schema Patch: GGSCI (prim1-ol6-112.localdomain) 7> ALTER EXTRACT EGGTEST1 BEGIN NOW EXTRACT altered. GGSCI (prim1-ol6-112.localdomain) 8> ALTER EXTRACT PGGTEST1 BEGIN NOW EXTRACT altered. Start the Extract and Datapump in the source environment: GGSCI (prim1-ol6-112.localdomain) 9> START EXTRACT * Sending START request to MANAGER ... EXTRACT EGGTEST1 starting Sending START request to MANAGER ... EXTRACT PGGTEST1 starting Start the Replicat process in the target environment: GGSCI (stdby1-ol6-112.localdomain) 56> START REPLICAT RGGTEST1 Sending START request to MANAGER ... REPLICAT RGGTEST1 starting How it works... The preceding steps cover a high level procedure that you can follow to modify the structure of the replicated tables in your GoldenGate configuration. Before you start to alter any processes or parameter file, you need to ensure that the applications are stopped and no user sessions in the database are modifying the data in the tables that you are replicating. Once the application is stopped, we check that all the redo data has been processed by GoldenGate processes and then stop. At this point we run the scripts that need to be run to make DDL changes to the database. This step needs to be run on both the source and target database as we will not be replicating these changes using GoldenGate. Once this is done, we alter the GoldenGate processes to start from the current time and start them. There's more... Some of the assumptions made in the earlier procedure might not hold true for all environments. Let's see what needs to be done in such cases where the environment does not satisfy these conditions: Specific tables defined in GoldenGate parameter files Unlike the earlier example, where the tables are defined in the parameter files using a schema qualifier for example SCOTT.*, if you have individual tables defined in the GoldenGateparameterfiles, you would need to modify the GoldenGate parameter files to add these newly created tables to include them in replication. Individual table permissions granted to the GoldenGate Admin user If you have granted table-specific permissions to the GoldenGate Admin user in the source and target environments, you would need to grant them on the newly added tables to allow the GoldenGate user to read their data in the source environment and also to apply the changes to these tables in the target environment. Supplemental logging for modified tables without any keys If you are adding or deleting any columns from the tables in the source database which do not have any primary/unique keys, you would then need to drop the existing supplemental log group and read them. This is because when there are no primary/unique keys in a table, GoldenGate adds all columns to the supplemental log group. This supplemental log group will have to be modified when the structure of the underlying table is modified. 
Supplemental log groups with all columns for modified tables In some cases, you would need to enable supplemental logging on all columns of the source tables that you are replicating. This is mostly applicable for consolidation replication topologies where all changes are captured and converted into INSERTs in the target environment, which usually is a Data warehouse. In such cases, you need to drop and read the supplemental logging on the tables in which you are adding or removing any columns. Table structure changes in GoldenGate environments with different table definitions In this recipe you will learn how to perform table structure changes in a replication environment where the table structures in the source and target environments are not similar. Getting ready For this recipe we are making the following assumptions: GoldenGate is set up to replicate only DML changes between the source and target environments. The application will be stopped for making schema changes in the source environment. The table structures in the source and target databases are not similar. The GoldenGate Admin user has been granted SELECT ANY TABLE in the source database and INSERT ANY TABLE, DELETE ANY TABLE, UPDATE ANY TABLE, SELECT ANY TABLE in the target database. The definition file was generated for the source schema and is configured in the replicat parameter file. The schema changes performed in this recipe are as follows: Add a new column called DOB (DATE) to the EMP table. Modify the DNAME column in the DEPT table to VARCHAR(20). Add a new table called ITEMS to the SCOTT schema: ITEMS ITEMNO NUMBER(5) PRIMARY KEY NAME VARCHAR(20) Add a new table called SALES to the SCOTT schema: SALES INVOICENO NUMBER(9) PRIMARY KEY ITEMNO NUMBER(5) FOREIGN KEY ITEMS(ITEMNO) EMPNO NUMBER(4) FOREIGN KEY EMP(EMPNO) Load the values for the DOB column in the EMP table. Load a few records in the ITEMS table. How to do it... Here are the steps that you can follow to implement the previous schema changes in the source environment: Ensure that the application accessing the source database is stopped. There should not be any process modifying the data in the database. Once you have stopped the application, wait for 2 to 3 minutes so that all pending redo is processed by the GoldenGate extract. Check the latest timestamp read by the Extract and Datapump process, and ensure it is the current timestamp: GGSCI (prim1-ol6-112.localdomain) 9> INFO EXTRACT EGGTEST1 GGSCI (prim1-ol6-112.localdomain) 10> INFO EXTRACT * EXTRACT EGGTEST1 Last Started 2013-03-28 10:12 Status RUNNING Checkpoint Lag 00:00:00 (updated 00:00:07 ago) Log Read Checkpoint Oracle Redo Logs 2013-03-28 10:16:06 Seqno 352, RBA 12574320 SCN 0.11973456 (11973456) EXTRACT PGGTEST1 Last Started 2013-03-28 10:12 Status RUNNING Checkpoint Lag 00:00:00 (updated 00:00:04 ago) Log Read Checkpoint File /u01/app/ggate/dirdat/st000010 2013-03-28 10:15:43.000000 RBA 8450 Stop the Extract and Datapump processes in the source environment: GGSCI (prim1-ol6-112.localdomain) 1> STOP EXTRACT * Sending STOP request to EXTRACT EGGTEST1 ... Request processed. Sending STOP request to EXTRACT PGGTEST1 ... Request processed. 
Check the status of the Replicat process in the target environment and ensure that it has processed the timestamp noted in step 3: GGSCI (stdby1-ol6-112.localdomain) 54> INFO REPLICAT * REPLICAT RGGTEST1 Last Started 2013-03-28 10:15 Status RUNNING Checkpoint Lag 00:00:00 (updated 00:00:04 ago) Log Read Checkpoint File ./dirdat/rt000062 2013-03-28 10:15:04.950188 RBA 10039 Stop the Replicat process in the target environment: GGSCI (stdby1-ol6-112.localdomain) 48> STOP REPLICAT * Sending STOP request to REPLICAT RGGTEST1 ... Request processed. Apply the schema changes to the source database: SQL> ALTER TABLE SCOTT.EMP ADD DOB DATE; Table altered. SQL> ALTER TABLE SCOTT.DEPT MODIFY DNAME VARCHAR(20); Table altered. SQL> CREATE TABLE SCOTT.ITEMS ( ITEMNO NUMBER(5) PRIMARY KEY, NAME VARCHAR(20)); Table created. SQL> CREATE TABLE SCOTT.SALES ( INVOICENO NUMBER(9) PRIMARY KEY, ITEMNO NUMBER(5) REFERENCES SCOTT.ITEMS(ITEMNO), EMPNO NUMBER(4) REFERENCES SCOTT.EMP(EMPNO)); Table created. SQL> UPDATE SCOTT.EMP SET DOB=TO_DATE('01-01-1980','DD-MM-YYYY'); 14 rows updated. SQL> INSERT INTO SCOTT.ITEMS VALUES (1,'IRON'); 1 row created. SQL> INSERT INTO SCOTT.ITEMS VALUES (2,'COPPER'); 1 row created. SQL> INSERT INTO SCOTT.ITEMS VALUES (3,'GOLD'); 1 row created. SQL> INSERT INTO SCOTT.ITEMS VALUES (4,'SILVER'); 1 row created. SQL> COMMIT; Commit complete. Apply the schema changes to the target database: SQL> ALTER TABLE SCOTT.EMP ADD DOB DATE; Table altered. SQL> ALTER TABLE SCOTT.DEPT MODIFY DNAME VARCHAR(20); Table altered. SQL> CREATE TABLE SCOTT.ITEMS ( ITEMNO NUMBER(5) PRIMARY KEY, NAME VARCHAR(20)); Table created. SQL> CREATE TABLE SCOTT.SALES ( INVOICENO NUMBER(9) PRIMARY KEY, ITEMNO NUMBER(5) REFERENCES SCOTT.ITEMS(ITEMNO), EMPNO NUMBER(4) REFERENCES SCOTT.EMP(EMPNO)); Table created. SQL> UPDATE SCOTT.EMP SET DOB=TO_DATE('01-01-1980','DD-MM-YYYY'); 14 rows updated. SQL> INSERT INTO SCOTT.ITEMS VALUES (1,'IRON'); 1 row created. SQL> INSERT INTO SCOTT.ITEMS VALUES (2,'COPPER'); 1 row created. SQL> INSERT INTO SCOTT.ITEMS VALUES (3,'GOLD'); 1 row created. SQL> INSERT INTO SCOTT.ITEMS VALUES (4,'SILVER'); 1 row created. SQL> COMMIT; Commit complete. Add supplemental logging for the newly added tables: GGSCI (prim1-ol6-112.localdomain) 4> DBLOGIN USERID GGATE_ADMIN@DBORATEST Password: Successfully logged into database. GGSCI (prim1-ol6-112.localdomain) 5> ADD TRANDATA SCOTT.ITEMS Logging of supplemental redo data enabled for table SCOTT.ITEMS. GGSCI (prim1-ol6-112.localdomain) 6> ADD TRANDATA SCOTT.SALES Logging of supplemental redo data enabled for table SCOTT.SALES. Update the parameter file for generating definitions as follows: vi $GG_HOME/dirprm/defs.prm DEFSFILE ./dirdef/defs.def USERID ggate_admin@dboratest, PASSWORD XXXX TABLE SCOTT.EMP; TABLE SCOTT.DEPT; TABLE SCOTT.BONUS; TABLE SCOTT.DUMMY; TABLE SCOTT.SALGRADE; TABLE SCOTT.ITEMS; TABLE SCOTT.SALES; Generate the definitions in the source environment: ./defgen paramfile ./dirprm/defs.prm Push the definitions file to the target server using scp: scp ./dirdef/defs.def stdby1-ol6-112:/u01/app/ggate/dirdef/ Edit the Extract and Datapump process parameter to include the newly created tables if you have specified individual table names in them. Alter the Extract and Datapump processes to skip the changes generated by the Application Schema Patch: GGSCI (prim1-ol6-112.localdomain) 7> ALTER EXTRACT EGGTEST1 BEGIN NOW EXTRACT altered. GGSCI (prim1-ol6-112.localdomain) 8> ALTER EXTRACT PGGTEST1 BEGIN NOW EXTRACT altered. 
Start the Extract and Datapump in the source environment: GGSCI (prim1-ol6-112.localdomain) 9> START EXTRACT * Sending START request to MANAGER ... EXTRACT EGGTEST1 starting Sending START request to MANAGER ... EXTRACT PGGTEST1 starting Edit the Replicat process parameter file to include the tables: ./ggsci EDIT PARAMS RGGTEST1 REPLICAT RGGTEST1 USERID GGATE_ADMIN@TGORTEST, PASSWORD GGATE_ADMIN DISCARDFILE /u01/app/ggate/dirrpt/RGGTEST1.dsc,append,MEGABYTES 500 SOURCEDEFS ./dirdef/defs.def MAP SCOTT.BONUS, TARGET SCOTT.BONUS; MAP SCOTT.SALGRADE, TARGET SCOTT.SALGRADE; MAP SCOTT.DEPT, TARGET SCOTT.DEPT; MAP SCOTT.DUMMY, TARGET SCOTT.DUMMY; MAP SCOTT.EMP, TARGET SCOTT.EMP; MAP SCOTT.EMP,TARGET SCOTT.EMP_DIFFCOL_ORDER; MAP SCOTT.EMP, TARGET SCOTT.EMP_EXTRACOL, COLMAP(USEDEFAULTS, LAST_UPDATE_TIME = @DATENOW ()); MAP SCOTT.SALES, TARGET SCOTT.SALES; MAP SCOTT.ITEMS, TARGET SCOTT.ITEMS; Start the Replicat process in the target environment: GGSCI (stdby1-ol6-112.localdomain) 56> START REPLICAT RGGTEST1 Sending START request to MANAGER ... REPLICAT RGGTEST1 starting How it works... You can follow the previously mentioned procedure to apply any DDL changes to the tables in the source database. This procedure is valid for environments where existing table structures between the source and the target databases are not similar. The key things to note in this method are: The changes should only be made when all the changes extracted by GoldenGate are applied to the target database, and the replication processes are stopped. Once the DDL changes have been performed in the source database, the definitions file needs to be regenerated. The changes that you are making to the table structures needs to be performed on both sides. There's more… Some of the assumptions made in the earlier procedure might not hold true for all environments. Let's see what needs to be done in cases where the environment does not satisfy these conditions: Individual table permissions granted to the GoldenGate Admin user If you have granted table-specific permissions to the GoldenGate Admin user in the source and target environments, you would need to grant them on the newly added tables to allow the GoldenGate user to read their data in the source environment and also to apply the changes to these tables in the target environment. Supplemental logging for modified tables without any keys If you are adding or deleting any columns from the tables in the source database which do not have any primary/unique keys, you would then need to drop the existing supplemental log group and read them. This is because when there are no primary/unique keys in a table, GoldenGate adds all columns to the supplemental log group. This supplemental log group will need to be modified when the structure of the underlying table is modified. Supplemental log groups with all columns for modified tables In some cases, you would need to enable supplemental logging on all columns of the source tables that you are replicating. This is mostly applicable for consolidation replication topologies where all changes are captured and converted into INSERTs in the target environment, which usually is a Data warehouse. In such cases, you need to drop and read the supplemental logging on the tables in which you are adding or removing any columns.
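When the supplemental log group on such a table does need to be recreated, the GGSCI session is typically a short drop-and-re-add sequence along the following lines. DBLOGIN, DELETE TRANDATA, ADD TRANDATA, and INFO TRANDATA are standard GGSCI commands; the table name comes from this article's example schema, and you should verify the exact output against your own GoldenGate version:

GGSCI> DBLOGIN USERID GGATE_ADMIN@DBORATEST
GGSCI> DELETE TRANDATA SCOTT.EMP
GGSCI> ADD TRANDATA SCOTT.EMP
GGSCI> INFO TRANDATA SCOTT.EMP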


Angular Zen

Packt
19 Sep 2013
5 min read
(For more resources related to this topic, see here.) Meet AngularJS AngularJS is a client-side MVC framework written in JavaScript. It runs in a web browser and greatly helps us (developers) to write modern, single-page, AJAX-style web applications. It is a general purpose framework, but it shines when used to write CRUD (Create Read Update Delete) type web applications. Getting familiar with the framework AngularJS is a recent addition to the client-side MVC frameworks list, yet it has managed to attract a lot of attention, mostly due to its innovative templating system, ease of development, and very solid engineering practices. Indeed, its templating system is unique in many respects: It uses HTML as the templating language It doesn't require an explicit DOM refresh, as AngularJS is capable of tracking user actions, browser events, and model changes to figure out when and which templates to refresh It has a very interesting and extensible components subsystem, and it is possible to teach a browser how to interpret new HTML tags and attributes The templating subsystem might be the most visible part of AngularJS, but don't be mistaken that AngularJS is a complete framework packed with several utilities and services typically needed in single-page web applications. AngularJS also has some hidden treasures, dependency injection (DI) and strong focus on testability. The built-in support for DI makes it easy to assemble a web application from smaller, thoroughly tested services. The design of the framework and the tooling around it promote testing practices at each stage of the development process. Finding your way in the project AngularJS is a relatively new actor on the client-side MVC frameworks scene; its 1.0 version was released only in June 2012. In reality, the work on this framework started in 2009 as a personal project of Miško Hevery, a Google employee. The initial idea turned out to be so good that, at the time of writing, the project was officially backed by Google Inc., and there is a whole team at Google working full-time on the framework. AngularJS is an open source project hosted on GitHub (https://github.com/angular/angular.js) and licensed by Google, Inc. under the terms of the MIT license. The community At the end of the day, no project would survive without people standing behind it. Fortunately, AngularJS has a great, supportive community. The following are some of the communication channels where one can discuss design issues and request help: [email protected] mailing list (Google group) Google + community at https://plus.google.com/u/0/communities/115368820700870330756 #angularjs IRC channel [angularjs] tag at http://stackoverflow.com AngularJS teams stay in touch with the community by maintaining a blog (http://blog.angularjs.org/) and being present in the social media, Google + (+ AngularJS), and Twitter (@angularjs). There are also community meet ups being organized around the world; if one happens to be hosted near a place you live, it is definitely worth attending! Online learning resources AngularJS has its own dedicated website (http://www.angularjs.org) where we can find everything that one would expect from a respectable framework: conceptual overview, tutorials, developer's guide, API reference, and so on. Source code for all released AngularJS versions can be downloaded from http://code.angularjs.org. People looking for code examples won't be disappointed, as AngularJS documentation itself has plenty of code snippets. 
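As a taste of what those snippets look like, here is a minimal page in the spirit of the documentation examples; the CDN path and version number are illustrative only. It shows the two-way binding described earlier: the heading updates as you type, with no explicit DOM refresh code anywhere:

<!DOCTYPE html>
<html ng-app>
<head>
  <script src="http://code.angularjs.org/1.0.8/angular.min.js"></script>
</head>
<body>
  <!-- ng-model keeps the input field and the model value in sync -->
  <input type="text" ng-model="name" placeholder="Enter a name">
  <!-- The expression below is re-rendered automatically as the model changes -->
  <h1>Hello {{name}}!</h1>
</body>
</html>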
On top of this, we can browse a gallery of applications built with AngularJS (http://builtwith.angularjs.org). A dedicated YouTube channel (http://www.youtube.com/user/angularjs) has recordings from many past events as well as some very useful video tutorials. Libraries and extensions While AngularJS core is packed with functionality, the active community keeps adding new extensions almost every day. Many of those are listed on a dedicated website: http://ngmodules.org. Tools AngularJS is built on top of HTML and JavaScript, two technologies that we've been using in web development for years. Thanks to this, we can continue using our favorite editors and IDEs, browser extensions, and so on without any issues. Additionally, the AngularJS community has contributed several interesting additions to the existing HTML/JavaScript toolbox. Batarang Batarang is a Chrome developer tool extension for inspecting the AngularJS web applications. Batarang is very handy for visualizing and examining the runtime characteristics of AngularJS applications. We are going to use it extensively in this article to peek under the hood of a running application. Batarang can be installed from the Chrome's Web Store (AngularJS Batarang) as any other Chrome extension. Plunker and jsFiddle Both Plunker (http://plnkr.co) and jsFiddle (http://jsfiddle.net) make it very easy to share live-code snippets (JavaScript, CSS, and HTML). While those tools are not strictly reserved for usage with AngularJS, they were quickly adopted by the AngularJS community to share the small-code examples, scenarios to reproduce bugs, and so on. Plunker deserves special mentioning as it was written in AngularJS, and is a very popular tool in the community. IDE extensions and plugins Each one of us has a favorite IDE or an editor. The good news is that there are existing plugins/extensions for several popular IDEs such as Sublime Text 2 (https://github.com/angular-ui/AngularJS-sublime-package), Jet Brains' products (http://plugins.jetbrains.com/plugin?pr=idea&pluginId=6971), and so on.
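To make the templating ideas above a little more concrete, here is a minimal sketch of the declarative binding style the framework promotes; the file name, the CDN version number, and the name property are illustrative assumptions rather than code taken from this article:

<!-- index.html (assumed name): a tiny AngularJS 1.x page -->
<!DOCTYPE html>
<html ng-app>
<head>
  <script src="http://code.angularjs.org/1.0.8/angular.min.js"></script>
</head>
<body>
  <!-- ng-model binds the input to a scope property; the {{ }} expression below
       is re-rendered automatically, with no explicit DOM refresh in our code -->
  <input type="text" ng-model="name" placeholder="Your name">
  <p>Hello, {{ name || 'world' }}!</p>
</body>
</html>

Typing into the input updates the paragraph immediately, which is exactly the "no explicit DOM refresh" behaviour described earlier.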

RabbitMQ Acknowledgements

Packt
16 Sep 2013
3 min read
(For more resources related to this topic, see here.) Acknowledgements (Intermediate) This task will examine reliable message delivery from the RabbitMQ server to a consumer. Getting ready If a consumer takes a message/order from our queue and the consumer dies, our unprocessed message/order will die with it. In order to make sure a message is never lost, RabbitMQ supports message acknowledgments, or acks. When a consumer has received and processed a message, an acknowledgment is sent to RabbitMQ from the consumer, informing RabbitMQ that it is free to delete the message. If a consumer dies without sending an acknowledgment, RabbitMQ will redeliver the message to another consumer. How to do it... Let's navigate to our source code examples folder and locate the folder Message-Acknowledgement. Take a look at the consumer.js script and examine the changes we have made to support acks. We pass the {ack:true} option to the q.subscribe function, which tells the queue that messages should be acknowledged before being removed: q.subscribe({ack:true}, function(message) { When our message has been processed, we call q.shift, which informs RabbitMQ that the message has been processed and it can now be removed from the queue: q.shift(); You can also use the prefetchCount option to increase the window of how many messages the server will send you before you need to send an acknowledgement. {ack:true, prefetchCount:1} is the default and will only send you one message before you acknowledge. Setting prefetchCount to 0 will make that window unlimited. A low value will impact performance, so it may be worth considering a higher value. Let's demonstrate this concept. Edit the consumer.js script located in the folder Message-Acknowledgement. Simply comment out the line q.shift(), which will stop the consumer from acknowledging the messages. Open a command-line console and start RabbitMQ: rabbitmq-server Now open a command-line console, navigate to our source code examples folder, and locate the folder Message-Acknowledgement. Execute the following command: Message-Acknowledgement> node producer Let the producer create several messages/orders; press Ctrl + C while on the command-line console to stop the producer creating orders. Now execute the following to begin consuming messages: Message-Acknowledgement> node consumer Let's open another command-line console and run list_queues: rabbitmqctl list_queues messages_ready messages_unacknowledged The response should display our shop queue; details include the name, the number of messages ready to be processed, and one message which has not been acknowledged. Listing queues ...shop.queue 9 1...done. If you press Ctrl + C while on the command-line console to stop the consumer script, and then list the queues again, you will notice the message has returned to the queue. Listing queues ...shop.queue 10 0...done. If you undo the change we made to the consumer.js script and re-run these steps, the application will work correctly, consuming messages one at a time and sending an acknowledgment to RabbitMQ when each message has been processed. Summary This article explained a reliable message delivery process in RabbitMQ using acknowledgements. It also listed the steps that will give you acknowledgements for a messaging application using scripts in RabbitMQ. 
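Before moving on, here is a rough sketch of what the acknowledging consumer described above looks like when its fragments are put together; the connection options, the durable flag, and the queue name shop.queue are assumptions inferred from the rabbitmqctl output, and the amqp module is the node-amqp client the recipe's scripts appear to rely on:

// consumer.js (sketch): acknowledge each message only after it has been processed
var amqp = require('amqp');
var connection = amqp.createConnection({ host: 'localhost' }); // assumed local broker

connection.on('ready', function() {
  // Queue name assumed from the list_queues output shown above
  connection.queue('shop.queue', { durable: true }, function(q) {
    // ack:true defers deletion until q.shift() is called;
    // prefetchCount:1 keeps at most one unacknowledged message in flight
    q.subscribe({ ack: true, prefetchCount: 1 }, function(message) {
      console.log('Processing order:', message);
      // ... process the order here ...
      q.shift(); // tell RabbitMQ the message can now be removed from the queue
    });
  });
});

Commenting out the q.shift() call reproduces the unacknowledged-message behaviour demonstrated with rabbitmqctl above.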
Resources for Article : Further resources on this subject: Getting Started with Oracle Information Integration [Article] Messaging with WebSphere Application Server 7.0 (Part 1) [Article] Using Virtual Destinations (Advanced) [Article]

Using different jQuery event listeners for responsive interaction

Packt
16 Sep 2013
9 min read
(For more resources related to this topic, see here.) Getting Started First we want to create JavaScript that transforms a select form element into a button widget that changes the value of the form element when a button is pressed. So the first part of that task is to build a form with a select element. How to do it This part is simple; start by creating a new web page. Inside it, create a form with a select element. Give the select element some options. Wrap the form in a div element with the class select. See the following example. I have added a title just for placement. <h2>Super awesome form element</h2><div class="select"> <form> <select> <option value="1">1</option> <option value="Bar">Bar</option> <option value="3">3</option> </select> </form></div> Next, create a new CSS file called desktop.css and add a link to it in your header. After that, add a media query to the link for screen media and min-device-width:321px. The media query causes the browser to load the new CSS file only on devices with a screen larger than 320 pixels. Copy and paste the link to the CSS, but change the media query to screen and min-width:321px. This will help you test and demonstrate the mobile version of the widget on your desktop. <link rel="stylesheet" media="screen and (min-device-width:321px)" href="desktop.css" /><link rel="stylesheet" media="screen and (min-width:321px)" href="desktop.css" /> Next, create a script tag with a link to a new JavaScript file called uiFunctions.js and then, of course, create the new JavaScript file. Also, create another script element with a link to the recent jQuery library. <script src = "http://code.jquery.com/jquery-1.8.2.min.js"></script><script src = "uiFunctions.js"></script> Now open the new JavaScript file uiFunctions.js in your editor and add instructions to do something on a document load. $(document).ready(function(){ //Do something}); The first thing your JavaScript should do when it loads is determine what kind of device it is on—a mobile device or a desktop. There are a few logical tests you can utilize to determine whether the device is mobile. You can test navigator.userAgent; specifically, you can use the .test() method, which in essence tests a string to see whether an expression of characters is in it, checks the window width, and checks whether the screen width is smaller than 600 pixels. For this article, let's use all three of them. Ultimately, you might just want to test navigator.userAgent. Write this inside the $(document).ready() function. if( /Android|webOS|iPhone|iPad|iPod|BlackBerry/i.test(navigator.userAgent) || $(window).width()<600 ||window.screen.width<600) { //Do something for mobile} else { //Do something for the desktop} Inside, you will have to create a listener for the desktop device interaction event and the mouse click event, but that is later. First, let's write the JavaScript to create the UI widget for the select element. Create a function that iterates for each select option and appends a button with the same text as the option to the select div element. This belongs inside the $(document).ready() function, but outside and before the if condition. The order of these is important. $('select option').each(function(){ $('div.select').append('<button>'+$(this).html()+'</button>');}); Now, if you load the page on your desktop computer, you will see that it generates new buttons below the select element, one for each select option. You can click on them, but nothing happens. 
What we want them to do is change the value of the select form element. To do so, we need to add an event listener to the buttons inside the else condition. For the desktop version, you need to add a .click() event listener with a function. Inside the function, create two new variables, element and itemClicked. Make element equal the string button, and itemClicked, the jQuery object event target, or $(event.target). The next line is tricky; we're going to use the .addClass() method to add a selected class to the element variable :nth-child(n). Also, the n of the :nth-child(n) should be a call to a function named eventAction(), to which we will add the integer 2. We will create the function next. $('button').click(function(){ var element = 'button'; var itemClicked = $(event.target); $(element+':nth-child(' + (eventAction(itemClicked,element) + 2) + ')').addClass('selected');}); Next, outside the $(document).ready() function, create the eventAction() function. It will receive the variables itemClicked and element. The reason we make this function is that it performs the same work for both the desktop click event and the mobile tap or long-tap events. function eventAction(itemClicked,element){ //Do something!}; Inside the eventAction() function, create a new variable called choiceAction. Make choiceAction equal to the index of the element object in itemClicked, or just take a look at the following code: var choiceAction = $(element).index(itemClicked); Next, use the .removeClass() method to remove the selected class from the element object. $(element).removeClass('selected'); There are only two more steps to complete the function. First, add the selected attribute to the select field option using the .eq() method and the choiceAction variable. Finally, remember that when the function was called in the click event, it was expecting something to replace the n in :nth-child(n); so end the function by returning the value of the choiceAction variable. $('select option').eq(choiceAction).attr('selected','selected');return choiceAction; That takes care of everything but the mobile event listeners. The button style will be added at the end of the article. See how it looks in the following screenshot: This will be simple. First, using jQuery's $.getScript() method, add a line to retrieve the jQuery Mobile library in the first if condition, where we tested navigator.userAgent and the screen sizes to see whether the page was loaded into the viewport of a mobile device. The jQuery Mobile library will transform the HTML into a mobile, native-looking app. $.getScript("http://code.jquery.com/mobile/1.2.0/jquery.mobile-1.2.0.min.js"); The next step is to copy the desktop's click event listener, paste it below the $.getScript line, and change some values. Replace the .click() listener with a jQuery Mobile event listener, .tap() or .taphold(), change the value of the element variable to the string .ui-btn, and append the daisy-chained .parent().prev() methods to the itemClicked variable value, $(event.target). Replace the line that calls the eventAction() function in the :nth-child(n) selector with a simpler call to eventAction(), with the variables itemClicked and element. $('button').taphold(function(){ var element = '.ui-btn'; var itemClicked = $(event.target).parent().prev(); eventAction(itemClicked,element);}); When you tap the buttons to update the select form element on the mobile device, you will need to instruct jQuery Mobile to refresh its select menu. 
jQuery Mobile has a method to refresh its select element. $('select').selectmenu("refresh",true); That is all you need for the JavaScript file. Now open the HTML file and add a few things to the header. First, add a style tag to make the select form element and .ui-select hidden with the CSS display:none;. Next, add links to the jQuery Mobile stylesheets and desktop.css with a media query for media screen and max-width: 600px; or max-device-width:320px;. <style> select,.ui-select{display:none;}</style><link rel="stylesheet" media="screen and (max-width:600px)" href="http://code.jquery.com/mobile/1.2.0/jquery.mobile-1.2.0.min.css"><link rel="stylesheet" media="screen and (min-width:600px)" href="desktop.css" /> When launched on a mobile device, the widget will look like this: Then, open the desktop.css file and create some style for the widget buttons. For the button element, add an inline display, padding, margins, border radius, background gradient, box shadow, font color, text shadow, and a cursor style. button { display:inline; padding:8px 15px; margin:2px; border-top:1px solid #666; border-left:1px solid #666; border-bottom:1px solid #333; border-right:1px solid #333; border-radius:5px; background: #7db9e8; /* Old browsers */ background:-moz-linear-gradient(top, #7db9e8 0%,#207cca 49%,#2989d8 50%,#1e5799 100%); /* FF3.6+ */ background:-webkit-gradient(linear,left top,left bottom, color-stop(0%,#7db9e8), color-stop(49%,#207cca), color-stop(50%,#2989d8), color-stop(100%,#1e5799)); /* Chrome,Safari4+ */ background:-webkit-linear-gradient(top,#7db9e8 0%,#207cca 49%, #2989d8 50%,#1e5799 100%); /* Chrome10+,Safari5.1+ */ background:-o-linear-gradient(top,#7db9e8 0%,#207cca 49%,#2989d8 50%,#1e5799 100%); /* Opera 11.10+ */ background:-ms-linear-gradient(top,#7db9e8 0%,#207cca 49%,#2989d8 50%,#1e5799 100%); /* IE10+ */ background:linear-gradient(to bottom,#7db9e8 0%,#207cca 49%,#2989d8 50%,#1e5799 100%); /* W3C */ filter:progid:DXImageTransform.Microsoft.gradient ( startColorstr='#7db9e8', endColorstr='#1e5799',GradientType=0 ); /* IE6-9 */ color:white; text-shadow: -1px -1px 1px #333; box-shadow: 1px 1px 4px 2px #999; cursor:pointer;} Finally, add CSS for the .selected class that was added by the JavaScript. This CSS will change the button to look as if the button has been pressed in. .selected{ border-top:1px solid #333; border-left:1px solid #333; border-bottom:1px solid #666; border-right:1px solid #666; color:#ffff00; box-shadow:inset 2px 2px 2px 2px #333; background: #1e5799; /* Old browsers */ background:-moz-linear-gradient(top,#1e5799 0%,#2989d8 50%, #207cca 51%, #7db9e8 100%); /* FF3.6+ */ background:-webkit-gradient(linear,left top,left bottom, color-stop(0%,#1e5799),color-stop(50%,#2989d8), color-stop(51%,#207cca),color-stop(100%,#7db9e8)); /* Chrome,Safari4+ */ background:-webkit-linear-gradient(top, #1e5799 0%,#2989d8 50%, #207cca 51%,#7db9e8 100%); /* Chrome10+,Safari5.1+ */ background:-o-linear-gradient(top, #1e5799 0%,#2989d8 50%,#207cca 51%,#7db9e8 100%); /* Opera 11.10+ */ background:-ms-linear-gradient(top, #1e5799 0%,#2989d8 50%,#207cca 51%,#7db9e8 100%); /* IE10+ */ background: linear-gradient(to bottom, #1e5799 0%,#2989d8 50%,#207cca 51%,#7db9e8 100%); /* W3C */ filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='#1e5799', endColorstr='#7db9e8',GradientType=0 ); /* IE6-9 */} How it works This uses a combination of JavaScript and media queries to build a dynamic HTML form widget. 
The JavaScript tests the user agent and screen size to see if it is a mobile device and responsively delivers a different event listener for the different device types. In addition to that, the look of the widget will be different for different devices. Summary In this article we learned how to create an interactive widget that uses unobtrusive JavaScript, which uses different event listeners for desktop versus mobile devices. This article also helped you build your own web app that can transition between the desktop and mobile versions without needing you to rewrite your entire JavaScript code. Resources for Article : Further resources on this subject: Video conversion into the required HTML5 Video playback [Article] LESS CSS Preprocessor [Article] HTML5 Presentations - creating our initial presentation [Article]
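For readers who would like to see the pieces in one place, the following is a condensed sketch of uiFunctions.js as built in this article; the only liberties taken are passing the event object into the handlers and binding taphold inside the $.getScript() callback so that jQuery Mobile's custom events are guaranteed to exist before they are used:

// uiFunctions.js (condensed sketch of the code developed above)
$(document).ready(function() {
  // One button per option of the select element
  $('select option').each(function() {
    $('div.select').append('<button>' + $(this).html() + '</button>');
  });

  var isMobile = /Android|webOS|iPhone|iPad|iPod|BlackBerry/i.test(navigator.userAgent) ||
    $(window).width() < 600 || window.screen.width < 600;

  if (isMobile) {
    $.getScript("http://code.jquery.com/mobile/1.2.0/jquery.mobile-1.2.0.min.js", function() {
      $('button').taphold(function(event) {
        var element = '.ui-btn';
        var itemClicked = $(event.target).parent().prev();
        eventAction(itemClicked, element);
        $('select').selectmenu("refresh", true); // let jQuery Mobile redraw its menu
      });
    });
  } else {
    $('button').click(function(event) {
      var element = 'button';
      var itemClicked = $(event.target);
      $(element + ':nth-child(' + (eventAction(itemClicked, element) + 2) + ')').addClass('selected');
    });
  }
});

function eventAction(itemClicked, element) {
  var choiceAction = $(element).index(itemClicked);
  $(element).removeClass('selected');
  $('select option').eq(choiceAction).attr('selected', 'selected');
  return choiceAction;
}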

Linux Shell Scripting – various recipes to help you

Packt
16 Sep 2013
16 min read
(For more resources related to this topic, see here.) The shell scripting language is packed with all the essential problem-solving components for Unix/Linux systems. Text processing is one of the key areas where shell scripting is used, and there are beautiful utilities such as sed, awk, grep, and cut, which can be combined to solve problems related to text processing. Various utilities help to process a file in fine detail of a character, line, word, column, row, and so on, allowing us to manipulate a text file in many ways. Regular expressions are the core of pattern-matching techniques, and most of the text-processing utilities come with support for it. By using suitable regular expression strings, we can produce the desired output, such as filtering, stripping, replacing, and searching. Using regular expressions Regular expressions are the heart of text-processing techniques based on pattern matching. For fluency in writing text-processing tools, one must have a basic understanding of regular expressions. Using wild card techniques, the scope of matching text with patterns is very limited. Regular expressions are a form of tiny, highly-specialized programming language used to match text. A typical regular expression for matching an e-mail address might look like [a-z0-9_]+@[a-z0-9]+\.[a-z]+. If this looks weird, don't worry, it is really simple once you understand the concepts through this recipe. How to do it... Regular expressions are composed of text fragments and symbols, which have special meanings. Using these, we can construct any suitable regular expression string to match any text according to the context. As regex is a generic language to match texts, we are not introducing any tools in this recipe. Let's see a few examples of text matching: To match all words in a given text, we can write the regex as follows: ( ?[a-zA-Z]+ ?) ? is the notation for zero or one occurrence of the previous expression, which in this case is the space character. The [a-zA-Z]+ notation represents one or more alphabet characters (a-z and A-Z). To match an IP address, we can write the regex as follows: [0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3} Or: [[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3} We know that an IP address is in the form 192.168.0.2. It is in the form of four integers (each from 0 to 255), separated by dots (for example, 192.168.0.2). [0-9] or [:digit:] represents a match for digits from 0 to 9. {1,3} matches one to three digits and \. matches the dot character (.). This regex will match an IP address in the text being processed. However, it doesn't check for the validity of the address. For example, an IP address of the form 123.300.1.1 will be matched by the regex despite being an invalid IP. This is because when parsing text streams, usually the aim is to only detect IPs. How it works... Let's first go through the basic components of regular expressions (regex): regex Description Example ^ This specifies the start of the line marker. ^tux matches a line that starts with tux. $ This specifies the end of the line marker. tux$ matches a line that ends with tux. . This matches any one character. Hack. matches Hack1, Hacki, but not Hack12 or Hackil; only one additional character matches. [] This matches any one of the characters enclosed in [chars]. coo[kl] matches cook or cool. [^] This matches any one of the characters except those that are enclosed in [^chars]. 9[^01] matches 92 and 93, but not 91 and 90. 
[-] This matches any character within the range specified in []. [1-5] matches any digits from 1 to 5. ? This means that the preceding item must match one or zero times. colou?r matches color or colour, but not colouur. + This means that the preceding item must match one or more times. Rollno-9+ matches Rollno-99 and Rollno-9, but not Rollno-. * This means that the preceding item must match zero or more times. co*l matches cl, col, and coool. () This treats the terms enclosed as one entity ma(tri)?x matches max or matrix. {n} This means that the preceding item must match n times. [0-9]{3} matches any three-digit number. [0-9]{3} can be expanded as [0-9][0-9][0-9]. {n,} This specifies the minimum number of times the preceding item should match. [0-9]{2,} matches any number that is two digits or longer. {n, m} This specifies the minimum and maximum number of times the preceding item should match. [0-9]{2,5} matches any number has two digits to five digits. | This specifies the alternation-one of the items on either of sides of | should match. Oct (1st | 2nd) matches Oct 1st or Oct 2nd. \ This is the escape character for escaping any of the special characters mentioned previously. a\.b matches a.b, but not ajb. It ignores the special meaning of . because of \. For more details on the regular expression components available, you can refer to the following URL: http://www.linuxforu.com/2011/04/sed-explained-part-1/ There's more... Let's see how the special meanings of certain characters are specified in the regular expressions. Treatment of special characters Regular expressions use some characters, such as $, ^, ., *, +, {, and }, as special characters. But, what if we want to use these characters as normal text characters? Let's see an example of a regex, a.txt. This will match the character a, followed by any character (due to the '.' character), which is then followed by the string txt . However, we want '.' to match a literal '.' instead of any character. In order to achieve this, we precede the character with a backward slash \ (doing this is called escaping the character). This indicates that the regex wants to match the literal character rather than its special meaning. Hence, the final regex becomes a\.txt. Visualizing regular expressions Regular expressions can be tough to understand at times, but for people who are good at understanding things with diagrams, there are utilities available to help in visualizing regex. Here is one such tool that you can use by browsing to http://www.regexper.com; it basically lets you enter a regular expression and creates a nice graph to help understand it. Here is a screenshot showing the regular expression we saw in the previous section: Searching and mining a text inside a file with grep Searching inside a file is an important use case in text processing. We may need to search through thousands of lines in a file to find out some required data, by using certain specifications. This recipe will help you learn how to locate data items of a given specification from a pool of data. How to do it... The grep command is the magic Unix utility for searching in text. It accepts regular expressions, and can produce output in various formats. Additionally, it has numerous interesting options. 
Let's see how to use them: To search for lines of text that contain the given pattern: $ grep pattern filenamethis is the line containing pattern Or: $ grep "pattern" filenamethis is the line containing pattern We can also read from stdin as follows: $ echo -e "this is a word\nnext line" | grep wordthis is a word Perform a search in multiple files by using a single grep invocation, as follows: $ grep "match_text" file1 file2 file3 ... We can highlight the word in the line by using the --color option as follows: $ grep word filename --color=autothis is the line containing word Usually, the grep command only interprets some of the special characters in match_text. To use the full set of regular expressions as input arguments, the -E option should be added, which means an extended regular expression. Or, we can use an extended regular expression enabled grep command, egrep. For example: $ grep -E "[a-z]+" filename Or: $ egrep "[a-z]+" filename In order to output only the matching portion of a text in a file, use the -o option as follows: $ echo this is a line. | egrep -o "[a-z]+\." line. In order to print all of the lines, except the line containing match_pattern, use: $ grep -v match_pattern file The -v option added to grep inverts the match results. Count the number of lines in which a matching string or regex match appears in a file or text, as follows: $ grep -c "text" filename 10 It should be noted that -c counts only the number of matching lines, not the number of times a match is made. For example: $ echo -e "1 2 3 4\nhello\n5 6" | egrep -c "[0-9]" 2 Even though there are six matching items, it prints 2, since there are only two matching lines. Multiple matches in a single line are counted only once. To count the number of matching items in a file, use the following trick: $ echo -e "1 2 3 4\nhello\n5 6" | egrep -o "[0-9]" | wc -l 6 Print the line number of the match string as follows: $ cat sample1.txt gnu is not unix linux is fun bash is art $ cat sample2.txt planetlinux $ grep linux -n sample1.txt 2:linux is fun or $ cat sample1.txt | grep linux -n If multiple files are used, it will also print the filename with the result as follows: $ grep linux -n sample1.txt sample2.txt sample1.txt:2:linux is fun sample2.txt:2:planetlinux Print the character or byte offset at which a pattern matches, as follows: $ echo gnu is not unix | grep -b -o "not" 7:not The character offset for a string in a line is a counter from 0, starting with the first character. In the preceding example, not is at the seventh offset position (that is, not starts from the seventh character in the line; that is, gnu is not unix). The -b option is always used with -o. To search over multiple files, and list which files contain the pattern, we use the following: $ grep -l linux sample1.txt sample2.txt sample1.txt sample2.txt The inverse of the -l argument is -L. The -L argument returns a list of non-matching files. There's more... We have seen the basic usages of the grep command, but that's not it; the grep command comes with even more features. Let's go through those. Recursively search many files To recursively search for a text over many directories of descendants, use the following command: $ grep "text" . -R -n In this command, "." specifies the current directory. The options -R and -r mean the same thing when used with grep. For example: $ cd src_dir $ grep "test_function()" . -R -n ./miscutils/test.c:16:test_function(); test_function() exists in line number 16 of miscutils/test.c. 
This is one of the most frequently used commands by developers. It is used to find files in the source code where a certain text exists. Ignoring case of pattern The -i argument helps match patterns to be evaluated, without considering the uppercase or lowercase. For example: $ echo hello world | grep -i "HELLO" hello grep by matching multiple patterns Usually, we specify single patterns for matching. However, we can use an argument -e to specify multiple patterns for matching, as follows: $ grep -e "pattern1" -e "pattern" This will print the lines that contain either of the patterns and output one line for each match. For example: $ echo this is a line of text | grep -e "this" -e "line" -o this line There is also another way to specify multiple patterns. We can use a pattern file for reading patterns. Write patterns to match line-by-line, and execute grep with a -f argument as follows: $ grep -f pattern_filesource_filename For example: $ cat pat_file hello cool $ echo hello this is cool | grep -f pat_file hello this is cool Including and excluding files in a grep search grep can include or exclude files in which to search. We can specify include files or exclude files by using wild card patterns. To search only for .c and .cpp files recursively in a directory by excluding all other file types, use the following command: $ grep "main()" . -r --include *.{c,cpp} Note, that some{string1,string2,string3} expands as somestring1 somestring2 somestring3. Exclude all README files in the search, as follows: $ grep "main()" . -r --exclude "README" To exclude directories, use the --exclude-dir option. To read a list of files to exclude from a file, use --exclude-from FILE. Using grep with xargs with zero-byte suffix The xargs command is often used to provide a list of file names as a command-line argument to another command. When filenames are used as command-line arguments, it is recommended to use a zero-byte terminator for the filenames instead of a space terminator. Some of the filenames can contain a space character, and it will be misinterpreted as a terminator, and a single filename may be broken into two file names (for example, New file.txt can be interpreted as two filenames New and file.txt). This problem can be avoided by using a zero-byte suffix. We use xargs so as to accept a stdin text from commands such as grep and find. Such commands can output text to stdout with a zero-byte suffix. In order to specify that the input terminator for filenames is zero byte (\0), we should use -0 with xargs. Create some test files as follows: $ echo "test" > file1 $ echo "cool" > file2 $ echo "test" > file3 In the following command sequence, grep outputs filenames with a zero-byte terminator (\0), because of the -Z option with grep. xargs -0 reads the input and separates filenames with a zero-byte terminator: $ grep "test" file* -lZ | xargs -0 rm Usually, -Z is used along with -l. Silent output for grep Sometimes, instead of actually looking at the matched strings, we are only interested in whether there was a match or not. For this, we can use the quiet option (-q), where the grep command does not write any output to the standard output. Instead, it runs the command and returns an exit status based on success or failure. We know that a command returns 0 on success, and non-zero on failure. Let's go through a script that makes use of grep in a quiet mode, for testing whether a match text appears in a file or not. 
#!/bin/bash #Filename: silent_grep.sh #Desc: Testing whether a file contain a text or not if [ $# -ne 2 ]; then echo "Usage: $0 match_text filename" exit 1 fi match_text=$1 filename=$2 grep -q "$match_text" $filename if [ $? -eq 0 ]; then echo "The text exists in the file" else echo "Text does not exist in the file" fi The silent_grep.sh script can be run as follows, by providing a match word (Student) and a file name (student_data.txt) as the command argument: $ ./silent_grep.sh Student student_data.txt The text exists in the file Printing lines before and after text matches Context-based printing is one of the nice features of grep. Suppose a matching line for a given match text is found, grep usually prints only the matching lines. But, we may need "n" lines after the matching line, or "n" lines before the matching line, or both. This can be performed by using context-line control in grep. Let's see how to do it. In order to print three lines after a match, use the -A option: $ seq 10 | grep 5 -A 3 5 6 7 8 In order to print three lines before the match, use the -B option: $ seq 10 | grep 5 -B 3 2 3 4 5 Print three lines after and before the match, and use the -C option as follows: $ seq 10 | grep 5 -C 3 2 3 4 5 6 7 8 If there are multiple matches, then each section is delimited by a line "--": $ echo -e "a\nb\nc\na\nb\nc" | grep a -A 1 a b -- a b Cutting a file column-wise with cut We may need to cut the text by a column rather than a row. Let's assume that we have a text file containing student reports with columns, such as Roll, Name, Mark, and Percentage. We need to extract only the name of the students to another file or any nth column in the file, or extract two or more columns. This recipe will illustrate how to perform this task. How to do it... cut is a small utility that often comes to our help for cutting in column fashion. It can also specify the delimiter that separates each column. In cut terminology, each column is known as a field . To extract particular fields or columns, use the following syntax: cut -f FIELD_LIST filename FIELD_LIST is a list of columns that are to be displayed. The list consists of column numbers delimited by commas. For example: $ cut -f 2,3 filename Here, the second and the third columns are displayed. cut can also read input text from stdin. Tab is the default delimiter for fields or columns. If lines without delimiters are found, they are also printed. To avoid printing lines that do not have delimiter characters, attach the -s option along with cut. An example of using the cut command for columns is as follows: $ cat student_data.txt No Name Mark Percent 1 Sarath 45 90 2 Alex 49 98 3 Anu 45 90 $ cut -f1 student_data.txt No 1 2 3 Extract multiple fields as follows: $ cut -f2,4 student_data.txt Name Percent Sarath 90 Alex 98 Anu 90 To print multiple columns, provide a list of column numbers separated by commas as arguments to -f. We can also complement the extracted fields by using the --complement option. Suppose you have many fields and you want to print all the columns except the third column, then use the following command: $ cut -f3 --complement student_data.txt No Name Percent 1 Sarath 90 2 Alex 98 3 Anu 90 To specify the delimiter character for the fields, use the -d option as follows: $ cat delimited_data.txt No;Name;Mark;Percent 1;Sarath;45;90 2;Alex;49;98 3;Anu;45;90 $ cut -f2 -d";" delimited_data.txt Name Sarath Alex Anu There's more The cut command has more options to specify the character sequences to be displayed as columns. 
Let's go through the additional options available with cut. Specifying the range of characters or bytes as fields Suppose that we don't rely on delimiters, but we need to extract fields in such a way that we need to define a range of characters (counting from 0 as the start of line) as a field. Such extractions are possible with cut. Let's see what notations are possible: N- from the Nth byte, character, or field, to the end of line N-M from the Nth to Mth (included) byte, character, or field -M from first to Mth (included) byte, character, or field We use the preceding notations to specify fields as a range of bytes or characters with the following options: -b for bytes -c for characters -f for defining fields For example: $ cat range_fields.txt abcdefghijklmnopqrstuvwxyz abcdefghijklmnopqrstuvwxyz abcdefghijklmnopqrstuvwxyz abcdefghijklmnopqrstuvwxy You can print the first to fifth characters as follows: $ cut -c1-5 range_fields.txt abcde abcde abcde abcde The first two characters can be printed as follows: $ cut range_fields.txt -c -2 ab ab ab ab Replace -c with -b to count in bytes. We can specify the output delimiter while using with -c, -f, and -b, as follows: --output-delimiter "delimiter string" When multiple fields are extracted with -b or -c, the --output-delimiter is a must. Otherwise, you cannot distinguish between fields if it is not provided. For example: $ cut range_fields.txt -c1-3,6-9 --output-delimiter "," abc,fghi abc,fghi abc,fghi abc,fghi

Creating different font files and using web fonts

Packt
16 Sep 2013
12 min read
(For more resources related to this topic, see here.) Creating different font files In this recipe, we will learn how to create or get these fonts and how to generate the different formats for use in different browsers (Embedded Open Type, Open Type, True Type Font, Web Open Font Format, and SVG font). Getting ready To get the original file of the font created during this recipe, in addition to the generated font formats and the full source code of the FontCreation project, please refer to the receipe2 project folder. How to do it... The following steps are performed for creating different font files: Firstly, we will get an original TTF font file. There are two different ways to get fonts: The first method is by downloading one from specialized websites. Both free and commercial solutions can be found with a wide variety of beautiful fonts. The following are a few sites for downloading free fonts: Google fonts, Font squirrel, Dafont, ffonts, Jokal, fontzone, STIX, Fontex, and so on. Here are a few sites for downloading commercial fonts: Typekit, Font Deck, Font Spring, and so on. We will consider the example of Fontex, as shown in the following screenshot. There are a variety of free fonts. You can visit the website at http://www.fontex.org/. The second method is by creating your own font and then generating a TTF file from it. There are a lot of font generators on the Web. We can find online generators or follow the professionals by scanning handwritten typography and finally importing it into Adobe Illustrator to change it into vector-based letters or symbols. For newbies, I recommend trying Fontstruct (http://fontstruct.com). It is a WYSIWYG flash editor that will help you create your first font file, as shown in the following screenshot: As you can see, we were trying to create the S letter using a grid and some different forms. After completing the font creation, we can preview it and then download the TTF file. The file is in the receipe2 project folder. The following screenshot is an example of a font we have created on the run: Now we have to generate the rest of the file formats in order to ensure maximum compatibility with common browsers. We highly recommend the use of the Font squirrel web font generator (http://www.fontsquirrel.com/tools/webfont-generator). This online tool helps to create fonts for @font-face by generating different font formats. All we need to do is to upload the original file (optionally adding some font variants: bold, italic, or bold-italic), select the output formats, add some optimizations, and finally download the package. 
It is shown in the following screenshot: The following code explains how to use this font: <!DOCTYPE html><html><head><title>My first @font-face demo</title><style type="text/css">@font-face {font-family: 'font_testregular';src: url('font_test-webfont.eot');src: url('font_test-webfont.eot?#iefix')format('embedded-opentype'),url('font_test-webfont.woff') format('woff'),url('font_test-webfont.ttf') format('truetype'),url('font_test-webfont.svg#font_testregular')format('svg');font-weight: normal;font-style: normal;} Normal font usage: h1 , p{font-family: 'font_testregular', Helvetica, Arial,sans-serif;}h1 {font-size: 45px;}p:first-letter {font-size: 100px;text-decoration: wave;}p {font-size: 18px;line-height: 27px;}</style> Font usage in canvas: <script src = "http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js" ></script><script language="javascript" type="text/javascript">var x = 30, y = 60;function generate(){var canvas = $('canvas')[0],ctx = canvas.getContext('2d');var t = 'font_testregular'; var c = 'red';var v =' sample text via canvas';ctx.font = '52px "'+t+'"';ctx.fillStyle = c;ctx.fillText(v, x, y);}</script></head><body onload="generate();"><h1>Header sample</h1><p>Sample text with lettrine effect</p><canvas height="800px" width="500px">Your browser does not support the CANVAS element.Try the latest Firefox, Google Chrome, Safari or Opera.</canvas></body></html> How it works... This recipe takes us through getting an original TTF file: Font download: When downloading a font (either free or commercial), we have to pay close attention to the terms of use. Sometimes, you are not allowed to use these fonts on the web and are only allowed to use them locally. Font creation: During this process, we have to pay attention to some directives. We have to create glyphs for all the needed alphabets (upper case and lower case), numbers, and symbols to avoid font incompatibility. We have to take care of the spacing between glyphs and, eventually, variations and ligatures. A special creation process is reserved for right-to-left written languages. Font formats generation: Font squirrel is a very good online tool to generate the most common formats and handle cross-browser compatibility. It is recommended that we optimize the font ourselves via expert mode. We have the possibility of fixing some issues during the font creation, such as missing glyphs, x-height matching, and glyph spacing. Font usage: We will go through the following font usage: Normal font usage: We used the same method as already adopted via font-family; web-safe fonts are also applied: h1 , p{font-family: 'font_testregular', Helvetica, Arial, sans-serif;} Font usage in canvas: The canvas is an HTML5 tag that dynamically renders bitmap images via scripts, creating 2D shapes. In order to generate this image based on fonts, we will create the canvas tag first. An alternative text will be displayed if canvas is not supported by the browser. <canvas height="800px" width="500px">Your browser does not support the CANVAS element.Try the latest Firefox, Google Chrome, Safari or Opera.</canvas> We will now use the jQuery library in order to generate the canvas output. An onload function will be initiated to create the content of this tag: <script src = "http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js" ></script> In the following function, we create a variable ctx, which is an occurrence of the canvas 2D context obtained via canvas.getContext('2d'). 
We also define font-family using t as a variable, font-size, text to display using v as a variable, and color using c as a variable. These properties will be used as follows: <script language="javascript" type="text/javascript">var x = 30, y = 60;function generate(){var canvas = $('canvas')[0],ctx = canvas.getContext('2d');var t = 'font_testregular';var c = 'red' ;var v =' sample text via canvas'; This is for font-size and family. Here the font-size is 52px and the font-family is font_testregular: ctx.font = '52px "'+t+'"'; This is for color by fillstyle: ctx.fillStyle = c; Here we establish both text to display and axis coordinates where x is the horizontal position and y is vertical one. ctx.fillText(v, x, y); Using Web fonts In this recipe, you will learn how to use fonts hosted in distant servers for many reasons such as support services and special loading scripts. A lot of solutions are widely available on the web such as Typekit, Google fonts, Ascender, Fonts.com web fonts, and Fontdeck. In this task, we will be using Google fonts and its special JavaScript open source library, WebFont loader. Getting ready Please refer to the project WebFonts to get the full source code. How to do it... We will get through four steps: Let us configure the link tag: <link rel="stylesheet" id="linker" type="text/css"href="http://fonts.googleapis.com/css?family=Mr+De+Haviland"> Then we will set up the WebFont loader: <script type="text/javascript">WebFontConfig = {google: {families: [ 'Tangerine' ]}};(function() {var wf = document.createElement('script');wf.src = ('https:' == document.location.protocol ?'https' : 'http') +'://ajax.googleapis.com/ajax/libs/webfont/1/webfont.js';wf.type = 'text/javascript';wf.async = 'true';var s = document.getElementsByTagName('script')[0];s.parentNode.insertBefore(wf, s);})();</script><style type="text/css">.wf-loading p#firstp {font-family: serif}.wf-inactive p#firstp {font-family: serif}.wf-active p#firstp {font-family: 'Tangerine', serif} Next we will write the import command: @import url(http://fonts.googleapis.com/css?family=Bigelow+Rules); Then we will cover font usage: h1 {font-size: 45px;font-family: "Bigelow Rules";}p {font-family: "Mr De Haviland";font-size: 40px;text-align: justify;color: blue;padding: 0 5px;}</style></head><body><div id="container"><h1>This H1 tag's font was used via @import command </h1><p>This font was imported via a Stylesheet link</p><p id="firstp">This font was created via WebFont loaderand managed by wf a script generated from webfonts.js.<br />loading time will be managed by CSS properties :<i>.wf-loading , .wf-inactive and .wf-active</i> </p></div></body></html> How it works... In this recipe and for educational purpose, we used following ways to embed the font in the source code (the link tag, the WebFont loader, and the import command). The link tag: A simple link tag to a style sheet is used referring to the address already created: <link rel="stylesheet" type="text/css"href="http://fonts.googleapis.com/css?family=Mr+De+Haviland"> The WebFont loader: It is a JavaScript library developed by Google and Typekit. It grants advanced control options over the font loading process and exceptions. It lets you use multiple web font providers. In the following script, we can identify the font we used, Tangerine, and the link to predefined address of Google APIs with the world google: WebFontConfig = {google: { families: [ 'Inconsolata:bold' ] }}; We now will create wf which is an instance of an asynchronous JavaScript element. 
This instance is issued from Ajax Google API: var wf = document.createElement('script');wf.src = ('https:' == document.location.protocol ?'https' : 'http') +'://ajax.googleapis.com/ajax/libs/webfont/1/webfont.js';wf.type = 'text/javascript';wf.async = 'true';var s = document.getElementsByTagName('script')[0];s.parentNode.insertBefore(wf, s);})(); We can have control over fonts during and after loading by using specific class names. In this particular case, only the p tag with the ID firstp will be processed during and after font loading. During loading, we use the class .wf-loading. We can use a safe font (for example, Serif) and not the browser's default page until loading is complete as follows: .wf-loading p#firstp {font-family: serif;} After loading is complete, we will usually use the font that we were importing earlier. We can also add a safe font for older browsers: .wf-active p#firstp {font-family: 'Tangerine', serif;} Loading failure: In case we failed to load the font, we can specify a safe font to avoid falling in default browser's font: .wf-inactive p#firstp {font-family: serif;} The import command: It is the easiest way to link to the fonts: @import url(http://fonts.googleapis.com/css?family=Bigelow+Rules); Font usage: We will use the fonts as we did already via font-family property: h1 {font-family: "Bigelow Rules";}p {font-family: "Mr De Haviland";} There's more... The WebFont loader has the ability to embed fonts from mutiple WebFont providers. It has some predefined providers in the script such as Google, Typekit, Ascender, Fonts.com web fonts, and Fontdeck. For example, the following is the specific source code for Typekit and Ascender: WebFontConfig ={typekit: {id: 'TypekitId'}};WebFontConfig ={ascender: {key: 'AscenderKey',families: ['AscenderSans:bold,bolditalic,italic,regular']}}; For the font providers that are not listed above, a custom module can handle the loading of the specific style sheet: WebFontConfig = {custom: {families: ['OneFont', 'AnotherFont'],urls: ['http://myotherwebfontprovider.com/stylesheet1.css','http://yetanotherwebfontprovider.com/stylesheet2.css' ]}}; For more details and options of the WebFont loader script, you can visit the following link: https://developers.google.com/fonts/docs/webfont_loader To download this API you may access the following URL: https://github.com/typekit/webfontloader How to generate the link to the font? The URL used in every method to import the font in every method (the link tag, the WebFont loader, and the import command) is composed of the Google fonts API base url (http://fonts.googleapis.com/css) and the family parameter including one or more font names, ?family=Tangerine. Multiple fonts are separated with a pipe character (|) as follows: ?family=Tangerine|Inconsolata|Droid+Sans Optionally, we can add subsets or also specify a style for each font: Cantarell:italic|Droid+Serif:bold&subset=latin Browser-dependent output The Google fonts API serves a generated style sheet specific to the client, via the browser's request. The response is relative to the browser. For example, the output for Firefox will be: @font-face {font-family: 'Inconsolata';src: local('Inconsolata'),url('http://themes.googleusercontent.com/fonts/font?kit=J_eeEGgHN8Gk3Eud0dz8jw') format('truetype');} This method lowers the loading time because the generated style sheet is relative to client's browser. No multiformat font files are needed because Google API will generate it, automatically. 
Summary In this way, we have learned how to create different font formats, such as Embedded Open Type, Open Type, True Type Font, Web Open Font Format, and SVG font, and how to use the different Web fonts such as Typekit, Google fonts, Ascender, Fonts.com web fonts, and Fontdeck. Resources for Article: Further resources on this subject: So, what is Markdown? [Article] Building HTML5 Pages from Scratch [Article] HTML5: Generic Containers [Article]
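In addition to the .wf-loading, .wf-active, and .wf-inactive CSS hooks used above, the WebFont loader exposes JavaScript callbacks for the same life cycle; the following short sketch assumes the Tangerine family from the earlier example, and the callback names are those documented by the webfontloader project:

<script type="text/javascript">
  WebFontConfig = {
    google: { families: ['Tangerine'] },
    loading: function() { console.log('Web fonts are being requested'); },
    active: function() { console.log('All requested fonts have loaded'); },
    inactive: function() { console.log('Fonts failed to load or the browser is not supported'); },
    fontactive: function(familyName, fvd) { console.log(familyName + ' is ready to use'); }
  };
  (function() {
    var wf = document.createElement('script');
    wf.src = ('https:' == document.location.protocol ? 'https' : 'http') +
      '://ajax.googleapis.com/ajax/libs/webfont/1/webfont.js';
    wf.type = 'text/javascript';
    wf.async = 'true';
    var s = document.getElementsByTagName('script')[0];
    s.parentNode.insertBefore(wf, s);
  })();
</script>

These callbacks are handy when you want to delay rendering logic, such as the canvas drawing from the first recipe, until the font is actually available.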

Selecting by attributes (Should know)

Packt
16 Sep 2013
9 min read
(For more resources related to this topic, see here.) Getting ready These selectors are easily recognizable because they are wrapped by square brackets (for example, [selector]). This type of selector is always used coupled with other, like those seen so far, although this can be implicit as we'll see in few moments. In my experience, you'll often use them with the Element selector, but this can vary based on your needs. How many and what are the selectors of this type? Glad you asked! Here is a table that gives you an overview: Name Syntax Description Contains [attribute*="value"] (for example input[name*="cod"]) Selects the elements that have the value specified as a substring of the given attribute. Contains Prefix [attribute|="value"] (for example, a[class|="audero-"]) Selects nodes with the given value equal or equal followed by a hyphen inside the specified attribute. Contains Word [attribute~="value"] (for example, span[data-level~="hard"]) Selects elements that have the specified attribute with a value equal to or containing the given value delimited by spaces. Ends With [attribute$="value"] (for example, div[class$="wrapper"]) Selects nodes having the value specified at the end of the given attribute's value. Equals [attribute="value"] (for example, p[draggable="true"]) Selects elements that have the specified attribute with a value equal to the given value delimited by spaces. This selector performs an exact match. Not Equal [attribute!="value"] (for example, a[target!="_blank"]) Selects elements that don't have the specified attribute or have it but with a value not equal to the given value delimited by spaces. Starts With [attribute^="value"] (for example, img[alt^="photo"]) Selects nodes having the value specified at the start of the given attribute's value. Has [attribute] (for example, input[placeholder]) Selects elements that have the attribute specified, regardless of its value. As you've seen in the several examples in the table, we've used all of these selectors with other ones. Recalling what I said few moments ago, sometimes you can have used them with an implicit selector. In fact, take the following example: $('[placeholder]') What's the "hidden" selector? If you guessed All, you can pat yourself on the back. You're really smart! In fact, it's equivalent to write: $('*[placeholder]') How to do it... There are quite a lot of Attribute selectors, therefore, we won't build an example for each of them, and I'm going to show you two demos. The first will teach you the use of the Attribute Contains Word selector to print on the console the value of the collected elements. The second will explain the use of the Attribute Has selector to print the value of the placeholder's attribute of the retrieved nodes. Let's write some code! To build the first example, follow these steps: Create a copy of the template.html file and rename it as contain-word-selector.html. 
Inside the <body> tag, add the following HTML markup: <h1>Rank</h1> <table> <thead> <th>Name</th> <th>Surname</th> <th>Points</th> </thead> <tbody> <tr> <td class="name">Aurelio</td> <td>De Rosa</td> <td class="highlight green">100</td> </tr> <tr> <td class="name">Nikhil</td> <td>Chinnari</td> <td class="highlight">200</td> </tr> <tr> <td class="name">Esha</td> <td>Thakker</td> <td class="red highlight void">50</td> </tr> </tbody> </table> Edit the <head> section adding the following lines just after the <title>: <style> .highlight { background-color: #FF0A27; } </style> Edit the <head> section of the page adding this code: <script> $(document).ready(function() { var $elements = $('table td[class~="highlight"]'); console.log($elements.length); }); </script> Save the file and open it with your favorite browser. To create the second example, performs the following steps instead: Create a copy of the template.html file and rename it as has-selector.html. Inside the <body> tag, add the following HTML markup: <form name="registration-form" id="registration-form" action="registration.php" method="post"> <input type="text" name="name" placeholder="Name" /> <input type="text" name="surname" placeholder="Surname" /> <input type="email" name="email" placeholder="Email" /> <input type="tel" name="phone-number" placeholder="Phone number" /> <input type="submit" value="Register" /> <input type="reset" value="Reset" /> </form> Edit the <head> section of the page adding this code: <script> $(document).ready(function() { var $elements = $('input[placeholder]'); for(var i = 0; i < $elements.length; i++) { console.log($elements[i].placeholder); } }); </script> Save the file and open it with your favorite browser. How it works... In the first example, we created a table with four rows, one for the header and three for the data, and three columns. We put some classes to several columns, and in particular, we used the class highlight. Then, we set the definition of this class so that an element having it assigned, will have a red background color. In the next step, we created our usual script (hey, this is still a article on jQuery, isn't it?) where we selected all of the <td> having the class highlight assigned that are descendants (in this case we could use the Child selector as well) of a <table>. Once done, we simply print the number of the collected elements. The console will confirm that, as you can see by yourself loading the page, that the matched elements are three. Well done! In the second step, we created a little registration form. It won't really work since the backend is totally missing, but it's good for our discussion. As you can see, our form takes advantage of some of the new features of HTML5, like the new <input> types e-mail and tel and the placeholder attribute. In our usual handler, we're picking up all of the <input>instance's in the page having the placeholder attribute specified and assigning them to a variable called $elements. We're prepending a dollar sign to the variable name to highlight that it stores a jQuery object. With the next block of code, we iterate over the object to access the elements by their index position. Then we log on the console the placeholder's value accessing it using the dot operator. As you can see, we accessed the property directly, without using a method. This happens because the collection's elements are plain DOM elements and not jQuery objects. 
If you replicated the demo correctly, you should see this output in your console: In this article, we chose all of the page's <input> instances, not just those inside the form, since we haven't specified it. A better selection would be to restrict the selection using the form's id, which is very fast, as we've already discussed. Thus, our selection will turn into: $('#registration-form input[placeholder]') We can have an even better selection using jQuery's find() method, which retrieves the descendants that match the given selector: $('#registration-form').find('input[placeholder]') There's more... You can also use more than one attribute selector at once. Multiple attribute selector In case you need to select nodes that match two or more criteria, you can use the Multiple Attribute selector. You can chain as many selectors as you like; it isn't limited to two. Let's say that you want to select all of the <input> instances of type email that have the placeholder attribute specified; you would need to write: $('input[type="email"][placeholder]') Not equal selector This selector isn't part of the CSS specifications, so it can't take advantage of the native querySelectorAll() method. The official documentation has a good hint to avoid the problem and have better performance: For better performance in modern browsers, use $("your-pure-css-selector").not('[name="value"]') instead. Using filter() and attr() jQuery really has a lot of methods to help you in your work and, thanks to this, you can achieve the same task in a multitude of ways. While the attribute selectors are important, it is worth seeing how you could achieve the same result seen before using filter() and attr(). filter() is a function that accepts only one argument that can be of different types, but you'll usually see code that passes a selector or a function. Its aim is to reduce a collection, iterating over it and keeping only the elements that match the given parameter. The attr() method, instead, accepts up to two arguments and the first is usually an attribute name. We'll use it simply as a getter to retrieve the value of the elements' placeholder. To achieve our goal, replace the selection instruction with these lines: var $elements = $('#registration-form input').filter(function() { return ($(this).attr("placeholder") !== undefined) }); The main difference here is the anonymous function we passed to the filter() method. Inside the function, this refers to the current DOM element processed, so to be able to use jQuery's methods we need to wrap the element in a jQuery object. Some of you may wonder why we haven't used the plain DOM elements, accessing the placeholder attribute directly. The reason is that the result won't be the one expected. In fact, by doing so, you'll have an empty string as a value even if the placeholder attribute wasn't set for that element, making the strict test against undefined useless. Summary Thus, in this article we have learned how to select elements using attributes. Resources for Article : Further resources on this subject: jQuery refresher [Article] Tips and Tricks for Working with jQuery and WordPress [Article] jQuery User Interface Plugins: Tooltip Plugins [Article]
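As a quick recap of the selector table at the start of this article, here is a small sketch that runs a few of those selectors against the registration form markup shown earlier and logs how many elements each one matches; the particular combinations are our own examples, not taken from the original recipe:

$(document).ready(function() {
  // Starts With: the input whose name begins with "phone" (phone-number)
  console.log($('#registration-form input[name^="phone"]').length); // 1
  // Ends With: inputs whose name ends with "name" (name, surname)
  console.log($('#registration-form input[name$="name"]').length); // 2
  // Equals: the submit button only
  console.log($('#registration-form input[type="submit"]').length); // 1
  // Multiple attribute selector: email inputs that also carry a placeholder
  console.log($('#registration-form input[type="email"][placeholder]').length); // 1
  // Not Equal (jQuery extension): every input except the reset button
  console.log($('#registration-form input[type!="reset"]').length); // 5
});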

Designing, Sizing, Building, and Configuring Citrix VDI-in-a-Box

Packt
13 Sep 2013
7 min read
(For more resources related to this topic, see here.)

Sizing the servers

There are a number of tools and guidelines to help you size Citrix VIAB appliances. Essentially, the guides cover the following topics:

CPU
Memory
Disk IO
Storage

In their sizing guides, Citrix classifies users into the following two groups:

Task workers
Knowledge workers

Therefore, the first thing to determine is how many of your proposed VIAB users are task workers, and how many are knowledge workers.

Task workers

Citrix would define task workers as users who run a small set of simple applications that are not very graphical in nature or CPU- or memory-intensive, for example, Microsoft Office and a simple line-of-business application.

Knowledge workers

Citrix would define knowledge workers as users who run multimedia and CPU- and memory-intensive applications. These may include large spreadsheet files, graphics packages, video playback, and so on.

CPU

Citrix offers recommendations based on CPU cores, such as the following:

3 x desktops per core for knowledge workers
6 x desktops per core for task workers
1 x core for the hypervisor

These figures can be increased slightly if the CPUs have hyper-threading. You should also add another 15 percent if delivering personal desktops. The sizing information has been gathered from the Citrix VIAB sizing guide PDF.

Example 1

If you wanted to size a server appliance to support 50 x task-based users running pooled desktops, you would require 50 / 6 = 8.3 + 1 (for the hypervisor) = 9.3 cores, rounded up to 10 cores. Therefore, a dual CPU with six cores would provide 12 x CPU cores for this requirement.

Example 2

If you wanted to size a server appliance to support 15 x task and 10 x knowledge workers, you would require (15 / 6 = 2.5) + (10 / 3 = 3.3) + 1 (for the hypervisor) = 6.8 cores, rounded up to 7 cores. Therefore, a dual CPU with four cores would provide 8 x CPU cores for this requirement.

Memory

The memory required depends on the desktop OS that you are running and also on the amount of optimization that you have done to the image. Citrix recommends the following guidelines:

Task worker on Windows 7: 1.5 GB
Task worker on Windows XP: 0.5 GB
Knowledge worker on Windows 7: 2 GB
Knowledge worker on Windows XP: 1 GB

It is also important to allocate memory for the hypervisor and the VIAB virtual appliance. This can vary depending on the number of users, so we would recommend using the sizing spreadsheet calculator available in the Resources section of the VIAB website. However, as a guide, we would allocate 3 GB of memory (based on 50 users) for the hypervisor and 1 GB for VIAB. The amount of memory required by the hypervisor will grow as the number of users on the server grows. Citrix also recommends adding 10 percent more memory for server operations.

Example 1

If you wanted to size a server appliance to support 50 x task-based users with Windows 7, you would require 50 x 1.5 GB + 4 GB (for VIAB and the hypervisor) = 79 GB + 10% = 87 GB. Therefore, you would typically round this up to 96 GB of memory, providing an ideal configuration for this requirement.

Example 2

If you wanted to size a server appliance to support 15 x task and 10 x knowledge workers with Windows 7, you would require (15 x 1.5 GB) + (10 x 2 GB) + 4 GB (for VIAB and the hypervisor) = 46.5 GB + 10% = 51.2 GB. Therefore, 64 GB of memory would be an ideal configuration for this requirement.
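The arithmetic above is easy to capture in a small helper function. The following JavaScript sketch is purely illustrative (the function name is our own, it hard-codes the Windows 7 per-user figures, and it uses the 3 GB + 1 GB hypervisor/VIAB allowance quoted above), but it reproduces both examples:

// Rough VIAB sizing helper based on the per-core and per-user guidelines above
function sizeAppliance(taskUsers, knowledgeUsers) {
  // CPU: 6 task desktops or 3 knowledge desktops per core, plus 1 core for the hypervisor
  var cores = Math.ceil(taskUsers / 6 + knowledgeUsers / 3 + 1);
  // Memory (Windows 7): 1.5 GB per task user, 2 GB per knowledge user,
  // plus ~3 GB for the hypervisor and 1 GB for VIAB, plus 10% for server operations
  var memoryGB = ((taskUsers * 1.5) + (knowledgeUsers * 2) + 4) * 1.1;
  return { cores: cores, memoryGB: Math.round(memoryGB * 10) / 10 };
}

console.log(sizeAppliance(50, 0));  // Example 1: { cores: 10, memoryGB: 86.9 }
console.log(sizeAppliance(15, 10)); // Example 2: { cores: 7, memoryGB: 51.2 }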
Disk IO

As multiple Windows images run on the appliances, disk IO becomes very important and can often become the first bottleneck for VIAB. Citrix calculates IOPS with a 40-60 split between read and write operations during end user desktop access. Citrix doesn't recommend using slow disks for VIAB and has statistical information for 10K and 15K SAS and SSD disks.

The following figures show the IOPS delivered by each type of disk:

SSD: 6,000 IOPS
15K RPM SAS: 175 IOPS (RAID 0), 122.5 IOPS (RAID 1)
10K RPM SAS: 125 IOPS (RAID 0), 87.7 IOPS (RAID 1)

The following figures show the IOPS required for task and knowledge workers on Windows XP and Windows 7:

Task user: 5 IOPS (Windows XP), 10 IOPS (Windows 7)
Knowledge user: 10 IOPS (Windows XP), 20 IOPS (Windows 7)

Some organizations decide to implement RAID 1 or 10 on the appliances to reduce the chance of an appliance failure. This does, however, require many more disks and significantly increases the cost of the solution.

SSD

SSD is becoming an attractive proposition for organizations that want to run a larger number of users on each appliance. SSD is roughly 30 times faster than 15K SAS drives, so it will eliminate desktop IO bottlenecks completely. SSD continues to come down in price, so it can be well worth considering at the start of a VIAB project. SSDs have no moving mechanical components. Compared with electromechanical disks, SSDs are typically less susceptible to physical shock, run more quietly, and have lower access times and less latency. However, while the price of SSDs has continued to decline, SSDs are still about 7 to 8 times more expensive per unit of storage than HDDs. A further option to consider would be Fusion-io, which is based on NAND flash memory technology and can deliver an exceptional number of IOPS.

Example 1

If you wanted to size a server appliance to support 50 x task workers with Windows 7 using 15K SAS drives, you would require 175 / 10 = 17.5 users on each disk; therefore, 50 / 17.5 = 2.9, rounded up to 3 x 15K SAS disks.

Example 2

If you wanted to size a server appliance to support 15 x task workers and 10 x knowledge workers with Windows 7, you would require the following:

175 / 10 = 17.5 task users on each disk; therefore, 15 / 17.5 = 0.86 x 15K SAS disks
175 / 20 = 8.75 knowledge users on each disk; therefore, 10 / 8.75 = 1.14 x 15K SAS disks

Therefore, 2 x 15K SAS drives would be required.

Storage

Storage capacity is determined by the number of images, the number of desktops, and the types of desktop. It is best practice to store user profile information and data elsewhere. Citrix uses the following formula to determine the storage capacity requirement:

2 x golden image size x number of images (assume 20 GB for an image)
70 GB for VDI-in-a-Box
15 percent of the size of the image per desktop (achieved with linked clone technology)

Example 1

If you wanted to size a server appliance to support 50 x task-based users with two golden Windows 7 images, you would require the following:

Space for the golden images: 2 x 20 GB x 2 = 80 GB
VIAB appliance space: 70 GB
Image space per desktop: 15% x 20 GB x 50 = 150 GB
Extra room for swap and transient activity: 100 GB
Total: 400 GB
Recommended: 500 GB to 1 TB per server

We have already specified 3 x 15K SAS drives for our IO requirements. If those were 300-GB disks, they should provide enough storage.
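As before, these disk and storage calculations can be expressed as a small helper. This JavaScript sketch is illustrative only: it assumes Windows 7 desktops on 15K SAS drives in RAID 0 (175 IOPS per disk), 20 GB golden images, and the fixed 70 GB and 100 GB allowances used in the example above, and the function name is our own:

// Rough disk and storage sizing for a VIAB appliance (Windows 7, 15K SAS, RAID 0)
function sizeDisks(taskUsers, knowledgeUsers, imageCount) {
  var IOPS_PER_DISK = 175;                             // 15K SAS drive in RAID 0
  var iopsNeeded = (taskUsers * 10) + (knowledgeUsers * 20);
  var disks = Math.ceil(iopsNeeded / IOPS_PER_DISK);
  // Storage: 2 x 20 GB per golden image + 70 GB for VIAB
  // + 15% of the image size per desktop + ~100 GB for swap and transient activity
  var desktops = taskUsers + knowledgeUsers;
  var storageGB = (2 * 20 * imageCount) + 70 + (0.15 * 20 * desktops) + 100;
  return { disks: disks, storageGB: storageGB };
}

console.log(sizeDisks(50, 0, 2));  // Example 1: { disks: 3, storageGB: 400 }
console.log(sizeDisks(15, 10, 1)); // Example 2: { disks: 2, storageGB: 285 }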
This section of the article provides you with a step-by-step guide to help you build and configure a VIAB solution, starting with the hypervisor install. It then goes on to cover adding an SSL certificate, the benefits of using the GRID IP Address feature, and how you can use the Kiosk mode to deliver a standard desktop to public access areas. It then covers adding a license file and provides details on the useful features contained within Citrix profile management. It then highlights how VIAB can integrate with other Citrix products, such as NetScaler VPX, to enable secure connections across the Internet, and GoToAssist, a support and monitoring package which is very useful if you are supporting a number of VIAB appliances across multiple sites. ShareFile can again be a very useful tool to enable data files to follow the user, whether they are connecting to a local device or a virtual desktop. This can avoid the problems of files being copied across the network, delaying users. We then move on to a discussion of the options available for connecting to VIAB, including existing PCs, thin clients, and other devices, including mobile devices. The section finishes with some useful information on support for VIAB, including the support services included with subscription and the knowledge forums.

Installing the hypervisor

All the hypervisors have two elements: the bare metal hypervisor that installs on the server, and its management tool that you would typically install on the IT administrators' workstations.

Citrix XenServer: managed with XenCenter
Microsoft Hyper-V: managed with Hyper-V Manager
VMware ESXi: managed with vSphere Client

It is relatively straightforward to install the hypervisor. Make sure you enable linked clones in XenServer, because this is required for the linked clone technology. Give the hypervisor a static IP address and make a note of the administrator's username and password. You will need to download ISO images for the installation media; if you don't already have them, they can be found on the Internet.