
How-To Tutorials - Web Development


Creating subtle UI details using Midnight.js, Wow.js, and Animate.css

Roberto González
10 Jul 2015
9 min read
Creating animations in CSS or JavaScript is often annoying and/or time-consuming, so most people tend to pay little attention to the content that's below "the fold" ("the fold" is quickly becoming an outdated concept, but you know what I mean). I'll be covering a few techniques to help you add some nice touches to your landing pages that only take a few minutes to implement and require pretty much no development work at all.

To create a base for this project, I put together a bunch of photographs from https://unsplash.com/ with some text on top so we have something to work with. Download the files from http://aerolab.github.io/subtle-animations/assets/basics.zip and put them in a new folder. You can also check out the final result at http://aerolab.github.io/subtle-animations.

Dynamically change your fixed headers using Midnight.js

If you took a look at the demo site, you probably noticed that the minimalistic header we are using for "A How To Guide" becomes illegible on very light backgrounds. When this happens, most sites end up putting a background on the header, which usually improves legibility at the cost of making the design worse. Midnight.js is a jQuery plugin that changes your headers as you scroll, so the header always has a design that matches the content below it. This is particularly useful for minimalistic websites, as they often use transparent headers.

Implementation is quite simple, as the setup is pretty much automatic. Start by adding a fixed header to the site. The example has one ready to go:

<nav class="fixed">
  <div class="container">
    <span class="logo">A How To Guide</span>
  </div>
</nav>

Most of the setup consists of specifying which header corresponds to which section. This is done by adding data-midnight="your-class" to any section or piece of content that requires a different design for the header. For the first section, we'll be using a white header, so we'll add data-midnight="white" to it (it doesn't have to be a section; any large element works well):

<section class="fjords" data-midnight="white">
  <article>
    <h1>Adding Subtle UI Details</h1>
    <p>Using Midnight.js, Wow.js and Animate.css</p>
  </article>
</section>

In the next section, which is a photo of ships in very thick white fog, we'll use a darker header to help improve contrast. Let's use data-midnight="gray" for the second one and data-midnight="pink" for the last one, so it feels more in line with the content:

<section class="ships" data-midnight="gray">
  <article>
    <h1>Be quiet</h1>
    <p>I'm hunting wabbits</p>
  </article>
</section>

<section class="puppy" data-midnight="pink">
  <article>
    <h1>OMG A PUPPY &lt;3</h1>
  </article>
</section>

Now we just need to add some CSS rules to change the look of the header in those cases. We'll just be changing the color of the text for the moment, so open up css/styles.css and add the following rules:

/* Styles for White, Gray and Pink headers */
.midnightHeader.white { color: #fff; }
.midnightHeader.gray { color: #999; }
.midnightHeader.pink { color: #ffc0cb; }
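Each of these header variants is styled with ordinary CSS, so you are not limited to changing the text color. As a small illustration (the background and shadow values below are made up for this example and are not part of the demo files), you could give the gray header a translucent backdrop so it stays readable over busy photographs:

/* Optional extras for one header variant (illustrative values only) */
.midnightHeader.gray {
  color: #999;
  background: rgba(255, 255, 255, 0.85); /* subtle backdrop behind the text */
  text-shadow: 0 1px 2px rgba(0, 0, 0, 0.2); /* lift the text off the photo */
}

Anything you can express as a style on the header can be switched per section this way.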
Last but not least, we need to include the necessary libraries. We'll add two libraries right before the end of the body: jQuery and Midnight.js (they are included in the project files inside the js folder):

<script src="js/jquery-1.11.1.min.js"></script>
<script src="js/midnight.jquery.min.js"></script>

Right after that, we start Midnight.js on document.ready, using $('nav.fixed').midnight() (you can change the selector to whatever you are using on your site):

<script>
  $(document).ready(function(){
    $('nav.fixed').midnight();
  });
</script>

If you check the site now, you'll notice that the fixed header gracefully changes color when you start scrolling into the ships section. It's a very subtle effect, but it helps keep your designs clean.

Bonus feature! It's possible to completely change the markup of your header just for a specific section. It's mostly used to add some visual details that require extra markup, but it can be used to completely alter your headers as necessary. In this case, we'll be changing the "logo" from "A How To Guide" to "Shhhhhhhhh" on the ships section, and to a bunch of hearts on the puppy section, for some additional bad comedy.

To do this, we need to alter our fixed header a bit. First, we need to identify the "default" header (all headers that don't have custom markup will be based on this one), and then add the markup we need for any custom headers, like the gray one. This is done by creating multiple copies of the header and wrapping them in .midnightHeader.default, .midnightHeader.gray and .midnightHeader.pink respectively:

<nav class="fixed">
  <div class="midnightHeader default">
    <div class="container">
      <span class="logo">A How To Guide</span>
    </div>
  </div>
  <div class="midnightHeader gray">
    <div class="container">
      <span class="logo">Shhhhhhhhh</span>
    </div>
  </div>
  <div class="midnightHeader pink">
    <div class="container">
      <span class="logo">❤❤❤ OMG PUPPIES ❤❤❤</span>
    </div>
  </div>
</nav>

If you test the site now, you'll notice that the header not only changes color, but also changes the "name" of the site to match the section, which gives you more freedom in terms of navigation and design.

Simple animations with Wow.js and Animate.css

Wow.js looks more like a toy than a serious plugin, but it's actually a very powerful library that's extremely easy to implement. Wow.js lets you animate things as they come into view. For instance, you can fade something in when you scroll to that section, letting users enjoy some extra UI candy. You can choose from a large set of animations from Animate.css, so you don't even have to touch the CSS (but you can still do that if you want).

To get Wow.js to work, we have to include just two things:

Animate.css, which contains all the animations we need. Of course, you can create your own, or even tweak those to match your tastes. Just add a link to animate.css in the head of the document:

<link rel="stylesheet" href="css/animate.css" />

Wow.js itself. This simply means including the script and initializing it, which is done by adding the following just before the end of the document:

<script src="js/wow.min.js"></script>
<script>new WOW().init()</script>
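If you want to tweak the defaults, the WOW constructor also accepts a settings object. The option names below are taken from the Wow.js documentation at the time of writing, so double-check them against the version you ship; this is only a sketch:

<script>
  // Initialize Wow.js with explicit settings instead of the defaults.
  new WOW({
    boxClass: 'wow',          // class that marks the elements to animate
    animateClass: 'animated', // class that Animate.css expects
    offset: 100,              // start the animation 100px before the element enters the view
    mobile: true,             // also animate on mobile devices
    live: true                // keep watching for elements added to the DOM later
  }).init();
</script>

Either way, the default new WOW().init() is all the setup you strictly need.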
That's it! To animate an element as soon as it gets into view, you just need to add the .wow class to that element, and then any animation from Animate.css (like .fadeInUp, .slideInLeft, or one of the many options available at http://daneden.github.io/animate.css/). For example, to make something fade in from the bottom of the screen, you just have to add wow fadeInUp. Let's try this on the h1 of our first section:

<section class="fjords" data-midnight="white">
  <article>
    <h1 class="wow fadeInUp">Adding Subtle UI Details</h1>
    <p>Using Midnight.js, Wow.js and Animate.css</p>
  </article>
</section>

If you feel like altering the animation slightly, you have quite a bit of control over how it behaves. For instance, let's fade in the subtitle, but do it a few milliseconds after the title so it follows a sequence. We can use data-wow-delay="0.5s" to make the subtitle wait for half a second before making its appearance:

<section class="fjords" data-midnight="white">
  <article>
    <h1 class="wow fadeInUp">Adding Subtle UI Details</h1>
    <p class="wow fadeInUp" data-wow-delay="0.5s">Using Midnight.js, Wow.js and Animate.css</p>
  </article>
</section>

We can even tweak how long the animation takes by using data-wow-duration="1.5s", so it lasts a second and a half. This is particularly useful in the second section, combined with another delay:

<section class="ships" data-midnight="gray">
  <article>
    <h1 class="wow fadeIn" data-wow-duration="1.5s">Be quiet</h1>
    <p class="wow fadeIn" data-wow-delay="0.5s" data-wow-duration="1.5s">I'm hunting wabbits</p>
  </article>
</section>

We can even repeat an animation a few times. Let's make the last title shake a few times as soon as it gets into view with data-wow-iteration="5". We'll take this opportunity to use all the properties, like data-wow-duration="0.5s" to make each shake last half a second, and we'll also add a large delay for the last piece so it appears after the main animation has finished:

<section class="puppy">
  <article>
    <h1 class="wow shake" data-wow-iteration="5" data-wow-duration="0.5s">OMG A PUPPY &lt;3</h1>
    <p class="wow fadeIn" data-wow-delay="2.5s">Ok, this one wasn't subtle at all</p>
  </article>
</section>

Summary

That's pretty much all there is to know about using Midnight.js, Wow.js and Animate.css! All you need to do now is find a project and experiment a bit with different animations. It's a great tool to add some last-minute eye candy and, as long as you don't overdo it, looks fantastic on most sites. I hope you enjoyed the article!

About the author

Roberto González is the co-founder of Aerolab, "an awesome place where we really push the barriers to create amazing, well coded design for the best digital products." He can be reached at @robertcode.

Responsive Web Design with WordPress

Packt
09 Jul 2015
13 min read
Welcome to the world of Responsive Web Design! This article is written by Dejan Markovic, author of the book WordPress Responsive Theme Design, and it will introduce you to Responsive Web Design and its concepts and techniques. It will also present crisp notes from WordPress Responsive Theme Design. (For more resources related to this topic, see here.)

"Responsive web design (RWD) is a web design approach aimed at crafting sites to provide an optimal viewing experience—easy reading and navigation with a minimum of resizing, panning, and scrolling—across a wide range of devices (from mobile phones to desktop computer monitors)." Reference: http://en.wikipedia.org/wiki/Responsive_web_design.

To put it simply, responsive web design (RWD) means that a responsive website should adapt to the screen size of the device it is being viewed on.

When I began my web development journey in 2002, we didn't have to consider as many factors as we do today. We just had to create the website for a 17-inch screen (which was the standard at that time), and that was it. Yes, we also had to consider 15, 19, and 21-inch monitors, but since the 17-inch screen was the standard, that was the target screen size for us. In pixels, these sizes were usually 800 or 1024. We also had to consider a smaller number of browsers (Internet Explorer, Netscape, and Opera) and the styling for print, and that was it.

Since then, a lot of things have changed, and today, in 2015, for a website design we have to consider multiple factors, such as:

- A lot of different web browsers (Internet Explorer, Firefox, Opera, Chrome, and Safari)
- A number of different operating systems (Windows (XP, 7, and 8), Mac OS X, Linux, Unix, iOS, Android, and Windows phones)
- Device screen sizes (desktop, mobile, and tablet)
- Is the content accessible and readable with screen readers?
- How will the content look when it's printed?

Today, creating a different design for each of these factors and devices would take years. This is where responsive web design comes to the rescue.

The concepts of RWD

I have to point out that the mobile environment is becoming a more important factor than the desktop environment. Mobile browsing is becoming bigger than desktop-based access, which makes the mobile environment a very important factor to consider when developing a website. Simply put, the main point of RWD is that the layout changes based on the size and capabilities of the device it's being viewed on. The concepts of RWD that we will learn next are: Viewport, scaling, and screen density.

Controlling Viewport

On the desktop, the Viewport is the screen size of the window in a browser. For example, when we resize the browser window, we are actually changing the Viewport size.

On mobile devices, the Viewport size is independent of the device screen size. For example, the Viewport is 850 px for mobile Opera and 980 px for mobile Safari, while the screen size of an iPhone is 320 px. If we compare the Viewport size of 980 px and the iPhone screen size of 320 px, we can see that the Viewport is bigger than the screen size. This is because mobile browsers function differently: they first load the page into the Viewport, and then they resize it to the device's screen size. This is why we are able to see the whole page on the mobile device. If mobile browsers had a Viewport the same as the screen size (320 px), we would only be able to see a part of the page on the mobile device.
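If you want to see this difference for yourself, the browser exposes both values to JavaScript. The following is just a quick diagnostic sketch (it is not part of the book's theme code); paste it into the browser console or a test page:

<script>
  // Compare the layout viewport with the physical screen.
  console.log('Viewport width: ' + window.innerWidth + ' px');
  console.log('Screen width: ' + screen.width + ' px');
  console.log('Device pixel ratio: ' + window.devicePixelRatio);
</script>

On a desktop browser, the first value changes as you resize the window; on a phone, you will typically see a Viewport that is wider than the screen unless the viewport meta tag described in the next section is present.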
In the following screenshot, we can see the table with the list of Viewport sizes for some iPhone models.

We can control the Viewport with CSS:

@viewport { width: device-width; }

Or, we can control it with the meta tag:

<meta name="viewport" content="width=device-width">

In the preceding code, we are matching the Viewport width with the device width. Because the Viewport meta tag approach is more widely adopted (it was first used on iOS, and the @viewport approach is not supported by some browsers), we will use the meta tag approach.

We are setting the Viewport width in order to match our web content with our mobile content, as we want to make sure that our web content looks good on a mobile device as well. We could set Viewports in the code for each device separately, for example, 320 px for the iPhone, but the better approach is to use content="width=device-width".

Scaling

Scaling is extremely important, as the initial scale controls the zoom aspect of the content for the initial look of the page. For example, if the initial scale is set to 3, the content will be loaded at 3 times the Viewport size, which means 3 times zoom. Here is how the page looks for initial-scale=1 and initial-scale=3.

As we can see from the preceding screenshots, at the initial scale of 3 (three times zoom), the logo image takes up the bigger part of the screen. It is important to note that this is just the initial scale, which means that the user can still zoom in and zoom out later if they want to. Here is an example of the code with the initial scale:

<meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1">

In this example, we have used the maximum-scale=1 option, which means that the user will not be able to zoom here. We should avoid using the maximum-scale property because of accessibility issues: if we forbid zooming on our pages, users with visual problems will not be able to see the content properly.

The screen density

As screen technology moves forward every year, or even faster than that, we have to consider the screen density aspect as well. Screen density is the number of pixels contained within a screen area. This means that if the screen density is higher, we can have more details, in this case pixels, in the same area. There are two measurements that are usually used for this: dots per inch (DPI) and pixels per inch (PPI). DPI means how many dots a printer can place in an inch of space. PPI is the number of pixels we can have in one inch of the screen.

If we go back to the preceding screenshot with the table where we are showing Viewports and densities and compare the values of the iPhone 3G and iPhone 4S, we will see that the screen size stayed the same at 3.5 inches, the Viewport stayed the same at 320 px, but the screen density doubled, from 163 dpi to 326 dpi, which means that the screen resolution also doubled, from 320x480 to 640x960. Screen density is very relevant to RWD, as newer devices have bigger densities and we should do our best to cover as many densities as we can in order to provide a better experience for end users. Pixel density matters more than the resolution or screen size, because more pixels means a sharper display. There are topics that need to be taken into consideration here too, such as hardware, reference pixels, and the device-pixel-ratio.
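One practical way to react to screen density from JavaScript is to read window.devicePixelRatio and pick an appropriately sized image. This is only a sketch of the idea (the element id and file names are invented for the example), and it corresponds to the second approach listed in the next section:

<script>
  // Serve a larger image to high-density screens.
  // 'hero-image', 'photo.jpg' and 'photo@2x.jpg' are placeholder names.
  var img = document.getElementById('hero-image');

  if (window.devicePixelRatio && window.devicePixelRatio >= 2) {
    img.src = 'images/photo@2x.jpg'; // double-resolution asset
  } else {
    img.src = 'images/photo.jpg';    // standard asset
  }
</script>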
Problems and solutions with the screen density

Scalable vector graphics and CSS graphics will scale to the resolution. This is why I recommend using Font Awesome icons in your projects. Font Awesome icons are available for download at http://fortawesome.github.io/Font-Awesome/icons/. A font icon is a font that is made up of symbols, icons, or pictograms (whatever you prefer to call them) that you can use in a webpage just like a font. They can be instantly customized with properties like size and drop shadow, or anything else that can be done with the power of CSS.

The real problem triggered by the change in screen density is images, as for high-density screens we should provide higher resolution images. There are several ways in which we can approach this problem:

- By targeting high-density screens (providing high-resolution images to all screens)
- By providing high-resolution images where appropriate (loading high-resolution images only on devices with high-resolution screens)
- By not using high-resolution images

For beginner developers, I recommend the second approach: providing high-resolution images where appropriate.

Techniques in RWD

RWD consists of three coding techniques:

- Media queries (adapt content to specific screen sizes)
- Fluid grids (for flexible layouts)
- Flexible images and media (that respond to changes in screen sizes)

More detailed information about RWD techniques by Ethan Marcotte, who coined the term Responsive Web Design, is available at http://alistapart.com/article/responsive-web-design.

Media queries

Media queries are CSS modules or, as some people like to say, just conditional statements that tell the browser to use a specific type of style depending on the size of the screen and other factors, such as print (specific styles for print). They have been around for a long time already; I was using different styles for print back in 2002. If you wish to know more about media queries, refer to the W3C Candidate Recommendation of 8 July 2002 at http://www.w3.org/TR/2002/CR-css3-mediaqueries-20020708/.

Here is an example of a media query declaration:

@media only screen and (min-width:500px) {
  font-family: sans-serif;
}

Let's explain the preceding code. The @media code means that it is a media type declaration. The "screen and" part of the query is an expression or condition (in this case, it means only screen and not print). The following conditional statement means that everything above 500 px will have the sans-serif font family:

(min-width:500px) {
  font-family: sans-serif;
}

Here is another example of a media query declaration:

@media only screen and (min-width: 500px), screen and (orientation: portrait) {
  font-family: sans-serif;
}

In this case, we have two statements, and if either of them is true, the entire declaration is applied (the style is applied either to everything above 500 px or to screens in portrait orientation). The only keyword hides the styles from older browsers.

As some older browsers don't support media queries, I recommend using the respond.js script, which will "patch" support for them. A polyfill (or polyfiller) is code that provides features that are not built into or supported by some web browsers. For example, a number of HTML5 features are not supported by older versions of IE (older than 8 or 9), but these features can be used if a polyfill is installed on the web page. This means that if developers want to use these features, they can just include that polyfill library and the features will work in older browsers.
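How you include such a polyfill depends on the script, but a common pattern for respond.js is to load it only for old versions of Internet Explorer by using conditional comments, which other browsers ignore. This is only a sketch; the exact file name and path depend on where you put the script in your theme:

<!--[if lt IE 9]>
<script src="js/respond.min.js"></script>
<![endif]-->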
Breakpoints

A breakpoint is the moment when the layout switches from one layout to another because some condition has been fulfilled, for example, the screen has been resized. Almost all responsive designs cover the changes of the screen between desktops, tablets, and smartphones. Here is an example with comments inside:

@media only screen and (max-width: 480px) {
  /* mobile styles: up to 480px */
}

The media query in the preceding code will only be used if the width of the screen is 480 px or less.

@media only screen and (min-width:481px) and (max-width: 768px) {
  /* tablet styles: between 481px and 768px */
}

The media query in the preceding code will only be used if the width of the screen is between 481 px and 768 px.

@media only screen and (min-width:769px) {
  /* desktop styles: from 769px and up */
}

The media query in the preceding code will only be used when the width of the screen is 769 px or more.

The minimum width value in the desktop styles is 1 pixel over the maximum width value in the tablet styles, and the same difference exists between the tablet and mobile values. We do this in order to avoid overlapping, as that could cause problems with our styles.

There is also an approach of setting the maximum and minimum widths with em values. Setting the maximum width in em means that the width of the screen is set relative to the device's font size. If the font size of the device is 16 px (which is the usual size), the maximum width for the mobile styles would be 480/16 = 30 em. Why do we use em values? With pixel sizes, everything is fixed; for example, h1 is 24 px (1.5 em of the default size of 16 px), and that's it. With em sizes, everything is relative, so if we change the default value in the browser from, for example, 16 px to 18 px, everything relative to that will change. Therefore, all h1 values will change from 24 px to 27 px and make our layout "zoomable". Here is the example with the sizes changed to em:

@media only screen and (max-width: 30em) {
  /* mobile styles: up to 480px */
}

@media only screen and (min-width:30em) and (max-width: 48em) {
  /* tablet styles: between 481px and 768px */
}

@media only screen and (min-width:48em) {
  /* desktop styles: from 769px and up */
}
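The same breakpoints can also be checked from JavaScript when you need to change behavior rather than styles. This is a small sketch using the standard window.matchMedia API; it is not code from the book:

<script>
  // React to the "mobile" breakpoint from JavaScript.
  var mobileQuery = window.matchMedia('(max-width: 480px)');

  function handleBreakpoint(query) {
    if (query.matches) {
      console.log('Mobile layout is active');
    } else {
      console.log('Tablet or desktop layout is active');
    }
  }

  handleBreakpoint(mobileQuery);             // check once on load
  mobileQuery.addListener(handleBreakpoint); // and again whenever the breakpoint changes
</script>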
Fluid grids

The major point in RWD is that the content should adapt to any screen it's viewed on. One of the best solutions to do this is to use fluid layouts, where our content can be resized at each breakpoint.

"In fluid grids, we define a maximum layout size for the design. The grid is divided into a specific number of columns to keep the layout clean and easy to handle. Then we design each element with proportional widths and heights instead of pixel based dimensions. So whenever the device or screen size is changed, elements will adjust their widths and heights by the specified proportions to its parent container." Reference: http://www.1stwebdesigner.com/tutorials/fluid-grids-in-responsive-design/.

To make the grid flexible (or elastic), we can use percentages or em values, whichever suits us better. We can make our own fluid grids, or we can use grid frameworks. As there are so many frameworks available, I would recommend that you use an existing framework rather than building your own. Grid frameworks can use a single grid that covers various screen sizes, or we can have multiple grids for each of the breakpoints or screen size categories, such as mobiles, tablets, and desktops. Some of the notable frameworks are Twitter's Bootstrap, Foundation, and SemanticUI. I prefer Twitter's Bootstrap, as it really helps me speed up the process and it is currently the most used framework.

Flexible images and media

Last, but not least important, are images and media (videos). The problem with them is that they are elements that come with fixed sizes. There are several approaches to fix this:

- Replacing dimensions with percentage values
- Using maximum widths
- Using background images, but only for some cases, as these are not good for accessibility
- Using some libraries, such as Scott Jehl's picturefill (https://github.com/scottjehl/picturefill)
- Taking the width and height parameters out of the image tag and dealing with dimensions in CSS

Summary

In this article, you learned about the RWD concepts of Viewport, scaling, and screen density. We also covered the RWD techniques: media queries, fluid grids, and flexible media.

Resources for Article:

Further resources on this subject:
- Deployment Preparations [article]
- Why Meteor Rocks! [article]
- Clustering and Other Unsupervised Learning Methods [article]

Materials, and why they are essential

Packt
08 Jul 2015
11 min read
In this article by Ciro Cardoso, author of the book Lumion 3D Best Practices, we will look at materials and why they are essential. In the 3D world, materials and textures are nearly as important as the 3D geometry that composes the scene. A material defines the optical properties of an object when hit by a ray of light. In other words, a material defines how the light interacts with the surface, and textures help to control not only the color (diffuse), but also the reflections and glossiness. (For more resources related to this topic, see here.)

It's not difficult to understand that textures are another essential part of a good material, and if your goal is to achieve believable results, you need textures or images of real elements like stone, wood, brick, and other natural elements. Textures can bring detail to your surface that would otherwise require geometry to look good. In that case, how can Lumion help you and, most importantly, what are the best practices for working with materials? Let's have a look at the following section, which will provide the answer.

A quick overview of Lumion's materials

Lumion has always had a good library of materials to assign to your 3D model. The reality is that Physically-Based Rendering (PBR) is more of a concept than a set of rules, and each render engine implements it slightly differently. The good news for us as users is that these materials follow realistic shading and lighting systems to accurately represent real-world materials. You can find excellent information regarding PBR on the following sites:

- http://www.marmoset.co/toolbag/learn/pbr-theory
- http://www.marmoset.co/toolbag/learn/pbr-practice
- https://www.allegorithmic.com/pbr-guide

More than 600 materials are already prepared to be assigned directly to your 3D model and, by default, they should provide a realistic and appropriate material. The Lumion team has also made an effort to create a better and simpler interface, as you can see in the following screenshot.

The interface was simplified, showing only the most common and essential settings. If you need more control over the material, click on the More… button to access extra functionality. One word of caution: the material preview, which in this case is the sphere, will not reflect the changes you make using the available settings. For example, if you change the main texture, the sphere will continue to show the previous material. A good practice when tweaking materials is to assign the material to the surface, use the viewport to check how the settings are affecting the material, and then do a quick render. The viewport will try to show the final result, but there's nothing like a quick render to see how the material really looks when Lumion does all the lighting and shading calculations.

Working with materials in Lumion – three options

There are three options for working with materials in Lumion:

- Using Lumion's materials
- Using the imported materials that you create in your favorite modeling application
- Creating materials using Lumion's Standard material

Let's have a look at each one of these options and see how they can help you and when they best suit your project.

Using Lumion's materials

The first option is obvious; you are using Lumion and it makes sense to use Lumion's materials, but you may feel constrained by what is available in Lumion's material library.
However, instead of thinking, "I only have 600 materials and I cannot find what I need!", you need to look at the materials library as a set of templates for creating other materials. For example, if none of the brick materials is similar to what you need, nothing stops you from using a brick material, changing the Gloss and Reflection values, and loading a new texture, creating an entirely new material. This is made possible by using the Choose Color Map button, as shown in the following screenshot.

When you click on the Choose Color Map button, a new window appears where you can navigate to the folder where the texture is saved. What about the second square? The one with a purple color? Let's see the answer in the following section.

Normal maps and their functionality

The purple square you just saw is where you can load the normal map. And what is a normal map? Firstly, a normal map is not a bump map. A bump map uses a color range from black to white and is in some ways more limited than a normal map. The following screenshots show the clear difference between these two maps.

The map on the left is a bump map, and you can see that the level of detail is not the same as what we can get with a normal map. A normal map consists of red, green, and blue colors that represent the x, y, and z coordinates. This allows a 2D image to represent depth, and Lumion uses this depth information to fake lighting details based on the color associated with the 3D coordinate.

The perks of using a normal map

Why should you use normal maps? Keep in mind that Lumion is a real-time rendering engine and, as you saw previously, there is a need to keep a balance between detail and geometry. If you add too much detail, the 3D model will look gorgeous, but Lumion's performance will suffer drastically. On the other hand, you can have a low-poly 3D model and fake detail with a normal map. Using a normal map for each material has a massive impact on the final quality you can get with Lumion. Since these maps are so important, how can you create one?

Tips to create normal maps

As you will understand, we cannot cover all the different techniques to create normal maps. However, you may find something to suit your workflow in the following list:

Photoshop, using an action script called nDo: Teddy Bergsman is the author of this fantastic script. It is a free script that creates a very accurate normal map of any texture you load in Photoshop, in seconds. To download the script and see how to install it, visit http://www.philipk.net/ndo.html. A more detailed tutorial on how to use the nDo script is available at http://www.philipk.net/tutorials/ndo/ndo.html. The script has three options to create normal maps. The default option is Smooth, which gives you a blurry normal map. Then you have the Chisel Hard option, which generates a very sharp and subtle normal map, but you don't have much control over the final result. The Chisel Soft option is similar to Chisel Hard, except that you have full control over the intensity and bevel radius. This script also allows you to sculpt and combine several normal maps.

Using the Quixel NDO application: From the same creator, we have a more capable and optimized application called Quixel NDO. With this application, you can sculpt normal maps in real time, build your own normal maps without using textures, and preview everything with the 3DO material preview. This is quite useful because you don't have to save the normal map to see how it looks in Lumion.
3DO (which comes free with NDO) has a physically based renderer and lets you load a 3D model to see how the texture looks. You can find more information, including a free trial, at http://quixel.se/dev/ndo.

GIMP with the normalmap plugin: If you want to use free software, a good alternative is GIMP. There is a great plugin called normalmap, which does a good job not only of creating a normal map but also of providing a preview window to see the tweaks you are making. To download this plugin, visit https://code.google.com/p/gimp-normalmap/.

Do it online with NormalMap-Online: In case you don't want to install another application, the best option is doing it online. In that case, you may want to have a look at NormalMap-Online, as shown in the following screenshot. The process is extremely simple: you load the image and automatically get a normal map, and on the right-hand side there is a preview showing how the normal map and the texture work together. Christian Petry is the man behind this tool, which will help you create sharp and accurate normal maps. He is a great guy, and if you like this online application, please consider supporting an application that will save you time and money. You can find this online tool at http://cpetry.github.io/NormalMap-Online/. Don't forget to use a good combination of Strength and Blur/Sharp to create a well-balanced map. You need the correct amount of detail; otherwise, your normal map will be too noisy.

However, Lumion, being a user-friendly application, gives you a hand on this topic by providing a tool to create a normal map automatically from a texture you import.

Creating a normal map with Lumion's relief feature

By now you know that creating a normal map from a texture is not something too technical or even complex, but it can be time-consuming if you need to create a normal map for each texture. Doing so is still a wise move, because it removes the need for extra geometry detail for the model to look good. With this in mind, Lumion's team created a new feature that allows you to create a normal map for any texture you import. After loading the new texture, the next step is to click on the Create Normal Map button, as highlighted in the following screenshot.

Lumion then creates a normal map based on the imported texture, and you have the ability to invert the map by clicking on the Flip Normal Map direction button, as highlighted in the preceding screenshot. Once Lumion creates the normal map, you need a way to control how the normal map affects the material and the light. For that, you need to use the Relief slider, as shown in the following screenshot.

Using this slider is very intuitive; you only need to move the slider and see the adjustments in the viewport, since the material preview will not be updated. The previous screenshot is a good example of that, because even though we loaded a wood texture, the preview still shows a concrete material. Again, this means you can easily use the settings from one material as a base to create something completely new. But how good is the normal map that Lumion creates for you? Have a look for yourself in the following screenshot.

On the left-hand side, we have a wood floor material with a normal map that Lumion created. The right-hand side image is the same material, but the normal map was created using the free nDo script for Photoshop.
There is a big difference between the image on the left and the image on the right, and that is related to the normal maps used in this case. You can see clearly that the normal map used for the image on the right achieves the goal of bringing more detail to the surface. The difference is that the normal map that Lumion creates in some situations is too blurry, and for that reason we end up losing detail. Before we explore a few more things regarding creating custom materials in Lumion, let's have a look at another useful feature in Lumion.

Summary

Physically based rendering materials aren't that scary, don't you agree? In reality, Lumion makes this feature almost unnoticeable by making it so simple. You learned what this feature involves and how you can take full advantage of materials that make your render more believable. You learned the importance of using normal maps and how to create them using a variety of tools for all flavors. You also saw how we can easily improve material reflections without compromising the speed and quality of the render. You also learned another key aspect of Lumion: flexibility to create your own materials using the Standard material. The Standard material, although slightly different from the other materials available in Lumion, lets you play with the reflections, glossiness, and other settings that are essential. On top of all of this, you learned how to create textures.

Resources for Article:

Further resources on this subject:
- Unleashing the powers of Lumion [article]
- Mastering Lumion 3D [article]
- What is Lumion? [article]

File Sharing

Packt
08 Jul 2015
14 min read
In this article by Dan Ristic, author of the book Learning WebRTC, we will cover the following topics:

- Getting a file with the File API
- Setting up our page
- Getting a reference to a file

The real power of a data channel comes when combining it with other powerful technologies from the browser. By opening up the power to send data peer-to-peer and combining it with the File API, we open up whole new possibilities in the browser. This means you could add file sharing functionality that is available to any user with an Internet connection.

The application that we will build will be a simple one with the ability to share files between two peers. The basics of our application will be real-time, meaning that the two users have to be on the page at the same time to share a file. There is a finite number of steps that both users will go through to transfer an entire file between them:

1. User A will open the page and type a unique ID.
2. User B will open the same page and type the same unique ID.
3. The two users can then connect to each other using RTCPeerConnection.
4. Once the connection is established, one user can select a file to share.
5. The other user will be notified of the file that is being shared, it will be transferred to their computer over the connection, and they will download the file.

The main thing we will focus on throughout the article is how to work with the data channel in new and exciting ways. We will be able to take the file data from the browser, break it down into pieces, and send it to the other user using only the RTCPeerConnection API. The interactivity that the API promotes will stand out in this article and can be used in a simple project.

Getting a file with the File API

One of the first things that we will cover is how to use the File API to get a file from the user's computer. There is a good chance you have interacted with the File API on a web page and have not even realized it! The API is usually denoted by the Browse or Choose File text located on an input field in an HTML page.

Although the API has been around for quite a while, the one you are probably familiar with is the original specification, dating back as far as 1995. This was the Form-based File Upload in HTML specification, which focused on allowing a user to upload a file to a server using an HTML form. Before the days of the file input, application developers had to rely on third-party tools to request files of data from the user. This specification was proposed in order to make a standard way to upload files for a server to download, save, and interact with. The original standard focused entirely on interacting with a file via an HTML form, however, and did not detail any way to interact with a file via JavaScript. This was the origin of the File API.

Fast-forward to the groundbreaking days of HTML5 and we now have a fully-fledged File API. The goal of the new specification was to open the doors to file manipulation for web applications, allowing them to interact with files similar to how a natively installed application would. This means providing access to not only a way for the user to upload a file, but also ways to read the file in different formats, manipulate the data of the file, and then ultimately do something with this data. Although there are many great features in the API, we are going to focus on only one small aspect of it: the ability to get binary file data from the user by asking them to upload a file.
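Before we wire this into WebRTC, here is a small, self-contained sketch of that one aspect. It is not code from the book, but it uses the same #files input id that our page will use later in the article:

<script>
  // Read the binary contents of a user-selected file with the File API.
  document.querySelector('#files').addEventListener('change', function (event) {
    var file = event.target.files[0];

    if (!file) {
      return;
    }

    var reader = new FileReader();
    reader.onload = function () {
      // reader.result is an ArrayBuffer holding the raw bytes of the file
      console.log('Read ' + reader.result.byteLength + ' bytes from ' + file.name);
    };
    reader.readAsArrayBuffer(file);
  });
</script>

That is really all there is to it at the API level.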
A typical application that works with files, such as Notepad on Windows, will work with file data in pretty much the same way. It asks the user to open a file, reads the binary data from the file, and displays the characters on the screen. The File API gives us access to the same binary data in the browser that any other application would use. This is the great thing about working with the File API: it works in most browsers from an HTML page similar to the ones we have been building for our WebRTC demos.

To start building our application, we will put together another simple web page. This will look similar to the last ones, and should be hosted with a static file server as done in the previous examples. By the end of the article, you will be a professional single page application builder! Now let's take a look at the following HTML code that demonstrates file sharing:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8" />

  <title>Learning WebRTC - Article: File Sharing</title>

  <style>
    body {
      background-color: #404040;
      margin-top: 15px;
      font-family: sans-serif;
      color: white;
    }

    .thumb {
      height: 75px;
      border: 1px solid #000;
      margin: 10px 5px 0 0;
    }

    .page {
      position: relative;
      display: block;
      margin: 0 auto;
      width: 500px;
      height: 500px;
    }

    #byte_content {
      margin: 5px 0;
      max-height: 100px;
      overflow-y: auto;
      overflow-x: hidden;
    }

    #byte_range {
      margin-top: 5px;
    }
  </style>
</head>
<body>
  <div id="login-page" class="page">
    <h2>Login As</h2>
    <input type="text" id="username" />
    <button id="login">Login</button>
  </div>

  <div id="share-page" class="page">
    <h2>File Sharing</h2>

    <input type="text" id="their-username" />
    <button id="connect">Connect</button>
    <div id="ready">Ready!</div>

    <br />
    <br />

    <input type="file" id="files" name="file" /> Read bytes:
    <button id="send">Send</button>
  </div>

  <script src="client.js"></script>
</body>
</html>

The page should be fairly recognizable at this point. We will use the same page showing and hiding via CSS as done earlier. One of the main differences is the appearance of the file input, which we will utilize to have the user upload a file to the page. I even picked a different background color this time to spice things up.

Setting up our page

Create a new folder for our file sharing application and add the HTML code shown in the preceding section. You will also need all the steps from our JavaScript file to log in two users, create a WebRTC peer connection, and create a data channel between them.
Copy the following code into your JavaScript file to get the page set up:

var name, connectedUser;

var connection = new WebSocket('ws://localhost:8888');

connection.onopen = function () {
  console.log("Connected");
};

// Handle all messages through this callback
connection.onmessage = function (message) {
  console.log("Got message", message.data);

  var data = JSON.parse(message.data);

  switch (data.type) {
    case "login":
      onLogin(data.success);
      break;
    case "offer":
      onOffer(data.offer, data.name);
      break;
    case "answer":
      onAnswer(data.answer);
      break;
    case "candidate":
      onCandidate(data.candidate);
      break;
    case "leave":
      onLeave();
      break;
    default:
      break;
  }
};

connection.onerror = function (err) {
  console.log("Got error", err);
};

// Alias for sending messages in JSON format
function send(message) {
  if (connectedUser) {
    message.name = connectedUser;
  }

  connection.send(JSON.stringify(message));
};

var loginPage = document.querySelector('#login-page'),
    usernameInput = document.querySelector('#username'),
    loginButton = document.querySelector('#login'),
    theirUsernameInput = document.querySelector('#their-username'),
    connectButton = document.querySelector('#connect'),
    sharePage = document.querySelector('#share-page'),
    sendButton = document.querySelector('#send'),
    readyText = document.querySelector('#ready'),
    statusText = document.querySelector('#status');

sharePage.style.display = "none";
readyText.style.display = "none";

// Login when the user clicks the button
loginButton.addEventListener("click", function (event) {
  name = usernameInput.value;

  if (name.length > 0) {
    send({
      type: "login",
      name: name
    });
  }
});

function onLogin(success) {
  if (success === false) {
    alert("Login unsuccessful, please try a different name.");
  } else {
    loginPage.style.display = "none";
    sharePage.style.display = "block";

    // Get the plumbing ready for a call
    startConnection();
  }
};

var yourConnection, connectedUser, dataChannel, currentFile, currentFileSize, currentFileMeta;

function startConnection() {
  if (hasRTCPeerConnection()) {
    setupPeerConnection();
  } else {
    alert("Sorry, your browser does not support WebRTC.");
  }
}

function setupPeerConnection() {
  var configuration = {
    // Google's public STUN server
    "iceServers": [{ "url": "stun:stun.l.google.com:19302" }]
  };
  yourConnection = new RTCPeerConnection(configuration, {optional: []});

  // Setup ice handling
  yourConnection.onicecandidate = function (event) {
    if (event.candidate) {
      send({
        type: "candidate",
        candidate: event.candidate
      });
    }
  };

  openDataChannel();
}

function openDataChannel() {
  var dataChannelOptions = {
    ordered: true,
    reliable: true,
    negotiated: true,
    id: "myChannel"
  };
  dataChannel = yourConnection.createDataChannel("myLabel", dataChannelOptions);

  dataChannel.onerror = function (error) {
    console.log("Data Channel Error:", error);
  };

  dataChannel.onmessage = function (event) {
    // File receive code will go here
  };

  dataChannel.onopen = function () {
    readyText.style.display = "inline-block";
  };

  dataChannel.onclose = function () {
    readyText.style.display = "none";
  };
}

function hasUserMedia() {
  navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia || navigator.msGetUserMedia;
  return !!navigator.getUserMedia;
}

function hasRTCPeerConnection() {
  window.RTCPeerConnection = window.RTCPeerConnection || window.webkitRTCPeerConnection || window.mozRTCPeerConnection;
  window.RTCSessionDescription = window.RTCSessionDescription || window.webkitRTCSessionDescription || window.mozRTCSessionDescription;
  window.RTCIceCandidate = window.RTCIceCandidate || window.webkitRTCIceCandidate || window.mozRTCIceCandidate;
  return !!window.RTCPeerConnection;
}

function hasFileApi() {
  return window.File && window.FileReader && window.FileList && window.Blob;
}

connectButton.addEventListener("click", function () {
  var theirUsername = theirUsernameInput.value;

  if (theirUsername.length > 0) {
    startPeerConnection(theirUsername);
  }
});

function startPeerConnection(user) {
  connectedUser = user;

  // Begin the offer
  yourConnection.createOffer(function (offer) {
    send({
      type: "offer",
      offer: offer
    });
    yourConnection.setLocalDescription(offer);
  }, function (error) {
    alert("An error has occurred.");
  });
};

function onOffer(offer, name) {
  connectedUser = name;
  yourConnection.setRemoteDescription(new RTCSessionDescription(offer));

  yourConnection.createAnswer(function (answer) {
    yourConnection.setLocalDescription(answer);

    send({
      type: "answer",
      answer: answer
    });
  }, function (error) {
    alert("An error has occurred");
  });
};

function onAnswer(answer) {
  yourConnection.setRemoteDescription(new RTCSessionDescription(answer));
};

function onCandidate(candidate) {
  yourConnection.addIceCandidate(new RTCIceCandidate(candidate));
};

function onLeave() {
  connectedUser = null;
  yourConnection.close();
  yourConnection.onicecandidate = null;
  setupPeerConnection();
};
We set up references to our elements on the screen, as well as get the peer connection ready to be processed. When the user decides to log in, we send a login message to the server. The server will return with a success message telling the user they are logged in. From here, we allow the user to connect to another WebRTC user, given their username. This sends the offer and answer, connecting the two users together through the peer connection. Once the peer connection is created, we connect the users through a data channel so that we can send arbitrary data across. Hopefully, this is pretty straightforward and you are able to get this code up and running in no time. It should all be familiar to you by now. This is the last time we are going to refer to this code, so get comfortable with it before moving on!

Getting a reference to a file

Now that we have a simple page up and running, we can start working on the file sharing part of the application. The first thing the user needs to do is select a file from their computer's filesystem. This is easily taken care of already by the input element on the page. The browser will allow the user to select a file from their computer and then save a reference to that file in the browser for later use. When the user presses the Send button, we want to get a reference to the file that the user has selected. To do this, you need to add an event listener, as shown in the following code:

sendButton.addEventListener("click", function (event) {
  var files = document.querySelector('#files').files;

  if (files.length > 0) {
    dataChannelSend({
      type: "start",
      data: files[0]
    });

    sendFile(files[0]);
  }
});

You might be surprised at how simple the code is to get this far! This is the amazing thing about working within a browser. Much of the hard work has already been done for you. Here, we get a reference to our input element and the files that it has selected. The input element supports both multiple and single selection of files, but in this example we will only work with one file at a time. We then make sure we have a file to work with, tell the other user that we want to start sending data, and then call our sendFile function, which we will implement later in this article.
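Note that the listener above also uses a dataChannelSend helper that is not shown in this excerpt; its implementation lives elsewhere in the book. A minimal stand-in, assuming it simply serializes a message and pushes it through our data channel, might look like this:

function dataChannelSend(message) {
  // Assumed helper: serialize the message object and send it over the RTCDataChannel.
  // The real implementation in the book may differ.
  dataChannel.send(JSON.stringify(message));
}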
Now, you might think that the object we get back will contain the entire data inside our file. What we actually get back from the input element is an object representing metadata about the file itself. Let's take a look at this metadata:

{
  lastModified: 1364868324000,
  lastModifiedDate: "2013-04-02T02:05:24.000Z",
  name: "example.gif",
  size: 1745559,
  type: "image/gif"
}

This gives us the information we need to tell the other user that we want to start sending a file with the name example.gif. It also gives a few other important details, such as the type of file we are sending and when it was last modified. The next step is to read the file's data and send it through the data channel. This is no easy task, however, and we will require some special logic to do so.

Summary

In this article, we covered the basics of using the File API and retrieving a file from a user's computer. The article also discussed the page setup for the application using JavaScript and getting a reference to a file.

Resources for Article:

Further resources on this subject:
- WebRTC with SIP and IMS [article]
- Using the WebRTC Data API [article]
- Applications of WebRTC [article]

Deployment Preparations

Packt
08 Jul 2015
23 min read
In this article by Jurie-Jan Botha, author of the book Grunt Cookbook, we will cover the following recipes:

- Minifying HTML
- Minifying CSS
- Optimizing images
- Linting JavaScript code
- Uglifying JavaScript code
- Setting up RequireJS

(For more resources related to this topic, see here.)

Once our web application is built and its stability ensured, we can start preparing it for deployment to its intended market. This will mainly involve the optimization of the assets that make up the application. Optimization in this context mostly refers to compression of one kind or another, some of which might lead to performance increases too. The focus on compression is primarily due to the fact that the smaller the asset, the faster it can be transferred from where it is hosted to a user's web browser. This leads to a much better user experience and can sometimes be essential to the functioning of an application.

Minifying HTML

In this recipe, we make use of the contrib-htmlmin (0.3.0) plugin to decrease the size of some HTML documents by minifying them.

Getting ready

In this example, we'll work with a basic project structure.

How to do it...

The following steps take us through creating a sample HTML document and configuring a task that minifies it.

We'll start by installing the package that contains the contrib-htmlmin plugin.

Next, we'll create a simple HTML document called index.html in the src directory, which we'd like to minify, and add the following content to it:

<html>
<head>
  <title>Test Page</title>
</head>
<body>
  <!-- This is a comment! -->
  <h1>This is a test page.</h1>
</body>
</html>

Now, we'll add the following htmlmin task to our configuration, which indicates that we'd like to have the white space and comments removed from the src/index.html file, and that we'd like the result to be saved in the dist/index.html file:

htmlmin: {
  dist: {
    src: 'src/index.html',
    dest: 'dist/index.html',
    options: {
      removeComments: true,
      collapseWhitespace: true
    }
  }
}

The removeComments and collapseWhitespace options are used as examples here, as using the default htmlmin task will have no effect. Other minification options can be found at the following URL: https://github.com/kangax/html-minifier#options-quick-reference

We can now run the task using the grunt htmlmin command, which should produce output similar to the following:

Running "htmlmin:dist" (htmlmin) task
Minified dist/index.html 147 B → 92 B

If we now take a look at the dist/index.html file, we will see that all white space and comments have been removed:

<html><head><title>Test Page</title></head><body><h1>This is a test page.</h1></body></html>
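If you have more than one page, you don't have to list every file by hand. Grunt's standard files mapping with the expand option lets a single target minify every HTML file in a folder; this is a sketch that uses Grunt's built-in file globbing rather than anything specific to htmlmin, so adjust the paths to match your project:

htmlmin: {
  dist: {
    options: {
      removeComments: true,
      collapseWhitespace: true
    },
    files: [{
      expand: true,      // enable dynamic src-dest mapping
      cwd: 'src',        // sources are resolved relative to this folder
      src: '**/*.html',  // every HTML file, in any subfolder
      dest: 'dist'       // results are mirrored into the dist folder
    }]
  }
}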
Minifying CSS

In this recipe, we'll make use of the contrib-cssmin (0.10.0) plugin to decrease the size of some CSS documents by minifying them.

Getting ready

In this example, we'll work with a basic project structure.

How to do it...

The following steps take us through creating a sample CSS document and configuring a task that minifies it.

We'll start by installing the package that contains the contrib-cssmin plugin.

Then, we'll create a simple CSS document called style.css in the src directory, which we'd like to minify, and provide it with the following contents:

body {
  /* Average body style */
  background-color: #ffffff;
  color: #000000; /*! Black (Special) */
}

Now, we'll add the following cssmin task to our configuration, which indicates that we'd like to have the src/style.css file compressed, and have the result saved to the dist/style.min.css file:

cssmin: {
  dist: {
    src: 'src/style.css',
    dest: 'dist/style.min.css'
  }
}

We can now run the task using the grunt cssmin command, which should produce the following output:

Running "cssmin:dist" (cssmin) task
File dist/style.min.css created: 55 B → 38 B

If we take a look at the dist/style.min.css file that was produced, we will see that it has the compressed contents of the original src/style.css file:

body{background-color:#fff;color:#000;/*! Black (Special) */}

There's more...

The cssmin task provides us with several useful options that can be used in conjunction with its basic compression feature. We'll look at prefixing a banner, removing special comments, and reporting gzipped results.

Prefixing a banner

In the case that we'd like to automatically include some information about the compressed result in the resulting CSS file, we can do so in a banner. A banner can be prepended to the result by supplying the desired banner content to the banner option, as shown in the following example:

cssmin: {
  dist: {
    src: 'src/style.css',
    dest: 'dist/style.min.css',
    options: {
      banner: '/* Minified version of style.css */'
    }
  }
}

Removing special comments

Comments that should not be removed by the minification process are called special comments and can be indicated using the "/*! comment */" markers. By default, the cssmin task will leave all special comments untouched, but we can alter this behavior by making use of the keepSpecialComments option. The keepSpecialComments option can be set to either the *, 1, or 0 value. The * value is the default and indicates that all special comments should be kept, 1 indicates that only the first comment found should be kept, and 0 indicates that none of them should be kept. The following configuration will ensure that all comments are removed from our minified result:

cssmin: {
  dist: {
    src: 'src/style.css',
    dest: 'dist/style.min.css',
    options: {
      keepSpecialComments: 0
    }
  }
}

Reporting on gzipped results

Reporting is useful to see exactly how well the cssmin task has compressed our CSS files. By default, the size of the targeted file and the minified result will be displayed, but if we'd also like to see the gzipped size of the result, we can set the report option to gzip, as shown in the following example:

cssmin: {
  dist: {
    src: 'src/main.css',
    dest: 'dist/main.css',
    options: {
      report: 'gzip'
    }
  }
}
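Once you have more than one of these minification tasks configured, it is convenient to run them together. The following is not one of the book's recipes, but Grunt's standard grunt.registerTask call can alias several tasks under a single name in Gruntfile.js; extend the list as you add more optimization tasks:

// Run the asset-optimization tasks with a single "grunt optimize" command.
// The task names must match the ones configured above.
grunt.registerTask('optimize', ['htmlmin', 'cssmin']);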
Now, we'll add the following imagemin task to our configuration and indicate that we'd like to have the src/image.jpg file optimized, and have the result saved to the dist/image.jpg file: imagemin: { dist: {    src: 'src/image.jpg',    dest: 'dist/image.jpg' } } We can then run the task using the grunt imagemin command, which should produce the following output: Running "imagemin:dist" (imagemin) task Minified 1 image (saved 13.36 kB) If we now take a look at the dist/image.jpg file, we will see that its size has decreased without any impact on the quality. There's more... The imagemin task provides us with several options that allow us to tweak its optimization features. We'll look at how to adjust the PNG compression level, disable the progressive JPEG generation, disable the interlaced GIF generation, specify SVGO plugins to be used, and use the imagemin plugin framework. Adjusting the PNG compression level The compression of a PNG image can be increased by running the compression algorithm on it multiple times. By default, the compression algorithm is run 16 times. This number can be changed by providing a number from 0 to 7 to the optimizationLevel option. The 0 value means that the compression is effectively disabled and 7 indicates that the algorithm should run 240 times. In the following configuration we set the compression level to its maximum: imagemin: { dist: {    src: 'src/image.png',    dest: 'dist/image.png',    options: {      optimizationLevel: 7    } } } Disabling the progressive JPEG generation Progressive JPEGs are compressed in multiple passes, which allows a low-quality version of them to quickly become visible and increase in quality as the rest of the image is received. This is especially helpful when displaying images over a slower connection. By default, the imagemin plugin will generate JPEG images in the progressive format, but this behavior can be disabled by setting the progressive option to false, as shown in the following example: imagemin: { dist: {    src: 'src/image.jpg',    dest: 'dist/image.jpg',    options: {      progressive: false    } } } Disabling the interlaced GIF generation An interlaced GIF is the equivalent of a progressive JPEG in that it allows the contained image to be displayed at a lower resolution before it has been fully downloaded, and increases in quality as the rest of the image is received. By default, the imagemin plugin will generate GIF images in the interlaced format, but this behavior can be disabled by setting the interlaced option to false, as shown in the following example: imagemin: { dist: {    src: 'src/image.gif',    dest: 'dist/image.gif',    options: {      interlaced: false    } } } Specifying SVGO plugins to be used When optimizing SVG images, the SVGO library is used by default. This allows us to specify the use of various plugins provided by the SVGO library that each performs a specific function on the targeted files. Refer to the following URL for more detailed instructions on how to use the svgo plugins options and the SVGO library: https://github.com/sindresorhus/grunt-svgmin#available-optionsplugins Most of the plugins in the library are enabled by default, but if we'd like to specifically indicate which of these should be used, we can do so using the svgoPlugins option. Here, we can provide an array of objects, where each contain a property with the name of the plugin to be affected, followed by a true or false value to indicate whether it should be activated. 
The following configuration disables three of the default plugins: imagemin: { dist: {    src: 'src/image.svg',    dest: 'dist/image.svg',    options: {      svgoPlugins: [        {removeViewBox:false},        {removeUselessStrokeAndFill:false},        {removeEmptyAttrs:false}      ]    } } } Using the 'imagemin' plugin framework In order to provide support for the various image optimization projects, the imagemin plugin has a plugin framework of its own that allows developers to easily create an extension that makes use of the tool they require. You can get a list of the available plugin modules for the imagemin plugin's framework at the following URL: https://www.npmjs.com/browse/keyword/imageminplugin The following steps will take us through installing and making use of the mozjpeg plugin to compress an image in our project. These steps start where the main recipe takes off. We'll start by installing the imagemin-mozjpeg package using the npm install imagemin-mozjpeg command, which should produce the following output: [email protected] node_modules/imagemin-mozjpeg With the package installed, we need to import it into our configuration file, so that we can make use of it in our task configuration. We do this by adding the following line at the top of our Gruntfile.js file: var mozjpeg = require('imagemin-mozjpeg'); With the plugin installed and imported, we can now change the configuration of our imagemin task by adding the use option and providing it with the initialized plugin: imagemin: { dist: {    src: 'src/image.jpg',    dest: 'dist/image.jpg',    options: {      use: [mozjpeg()]    } } } Finally, we can test our setup by running the task using the grunt imagemin command. This should produce an output similar to the following: Running "imagemin:dist" (imagemin) task Minified 1 image (saved 9.88 kB) Linting JavaScript code In this recipe, we'll make use of the contrib-jshint (0.11.1) plugin to detect errors and potential problems in our JavaScript code. It is also commonly used to enforce code conventions within a team or project. As can be derived from its name, it's basically a Grunt adaptation for the JSHint tool. Getting ready In this example, we'll work with the basic project structure. How to do it... The following steps take us through creating a sample JavaScript file and configuring a task that will scan and analyze it using the JSHint tool. We'll start by installing the package that contains the contrib-jshint plugin. Next, we'll create a sample JavaScript file called main.js in the src directory, and add the following content in it: sample = 'abc'; console.log(sample); With our sample file ready, we can now add the following jshint task to our configuration. We'll configure this task to target the sample file and also add a basic option that we require for this example: jshint: { main: {    options: {      undef: true    },    src: ['src/main.js'] } } The undef option is a standard JSHint option used specifically for this example and is not required for this plugin to function. Specifying this option indicates that we'd like to have errors raised for variables that are used without being explicitly defined. We can now run the task using the grunt jshint command, which should produce output informing us of the problems found in our sample file: Running "jshint:main" (jshint) task      src/main.js      1 |sample = 'abc';          ^ 'sample' is not defined.      2 |console.log(sample);          ^ 'console' is not defined.      
2 |console.log(sample);                      ^ 'sample' is not defined.   >> 3 errors in 1 file There's more... The jshint task provides us with several options that allow us to change its general behavior, in addition to how it analyzes the targeted code. We'll look at how to specify standard JSHint options, specify globally defined variables, send reported output to a file, and prevent task failure on JSHint errors. Specifying standard JSHint options The contrib-jshint plugin provides a simple way to pass all the standard JSHint options from the task's options object to the underlying JSHint tool. A list of all the options provided by the JSHint tool can be found at the following URL: http://jshint.com/docs/options/ The following example adds the curly option to the task we created in our main recipe to enforce the use of curly braces wherever they are appropriate: jshint: { main: {    options: {      undef: true,      curly: true    },    src: ['src/main.js'] } } Specifying globally defined variables Making use of globally defined variables is quite common when working with JavaScript, which is where the globals option comes in handy. Using this option, we can define a set of global values that we'll use in the targeted code, so that errors aren't raised when JSHint encounters them. In the following example, we indicate that the console variable should be treated as a global, and not raise errors when encountered: jshint: { main: {    options: {      undef: true,      globals: {        console: true      }    },    src: ['src/main.js'] } } Sending reported output to a file If we'd like to store the resulting output from our JSHint analysis, we can do so by specifying a path to a file that should receive it using the reporterOutput option, as shown in the following example: jshint: { main: {    options: {      undef: true,      reporterOutput: 'report.dat'    },    src: ['src/main.js'] } } Preventing task failure on JSHint errors The default behavior for the jshint task is to exit the running Grunt process once a JSHint error is encountered in any of the targeted files. This behavior becomes especially undesirable if you'd like to keep watching files for changes, even when an error has been raised. In the following example, we indicate that we'd like to keep the process running when errors are encountered by giving the force option a true value: jshint: { main: {    options: {      undef: true,      force: true    },    src: ['src/main.js'] } } Uglifying JavaScript Code In this recipe, we'll make use of the contrib-uglify (0.8.0) plugin to compress and mangle some files containing JavaScript code. For the most part, the process of uglifying just removes all the unnecessary characters and shortens variable names in a source code file. This has the potential to dramatically reduce the size of the file, slightly increase performance, and make the inner workings of your publicly available code a little more obscure. Getting ready In this example, we'll work with the basic project structure. How to do it... The following steps take us through creating a sample JavaScript file and configuring a task that will uglify it. We'll start by installing the package that contains the contrib-uglify plugin. 
Then, we can create a sample JavaScript file called main.js in the src directory, which we'd like to uglify, and provide it with the following contents: var main = function () { var one = 'Hello' + ' '; var two = 'World';   var result = one + two;   console.log(result); }; With our sample file ready, we can now add the following uglify task to our configuration, indicating the sample file as the target and providing a destination output file: uglify: { main: {    src: 'src/main.js',    dest: 'dist/main.js' } } We can now run the task using the grunt uglify command, which should produce output similar to the following: Running "uglify:main" (uglify) task >> 1 file created. If we now take a look at the resulting dist/main.js file, we should see that it contains the uglified contents of the original src/main.js file. There's more... The uglify task provides us with several options that allow us to change its general behavior and see how it uglifies the targeted code. We'll look at specifying standard UglifyJS options, generating source maps, and wrapping generated code in an enclosure. Specifying standard UglifyJS options The underlying UglifyJS tool can provide a set of options for each of its separate functional parts. These parts are the mangler, compressor, and beautifier. The contrib-plugin allows passing options to each of these parts using the mangle, compress, and beautify options. The available options for each of the mangler, compressor, and beautifier parts can be found at each of following URLs (listed in the order mentioned): https://github.com/mishoo/UglifyJS2#mangler-options https://github.com/mishoo/UglifyJS2#compressor-options https://github.com/mishoo/UglifyJS2#beautifier-options The following example alters the configuration of the main recipe to provide a single option to each of these parts: uglify: { main: {    src: 'src/main.js',    dest: 'dist/main.js',    options: {      mangle: {        toplevel: true      },      compress: {        evaluate: false      },      beautify: {        semicolons: false      }    } } } Generating source maps As code gets mangled and compressed, it becomes effectively unreadable to humans, and therefore, nearly impossible to debug. For this reason, we are provided with the option of generating a source map when uglifying our code. The following example makes use of the sourceMap option to indicate that we'd like to have a source map generated along with our uglified code: uglify: { main: {    src: 'src/main.js',    dest: 'dist/main.js',    options: {      sourceMap: true    } } } Running the altered task will now, in addition to the dist/main.js file with our uglified source, generate a source map file called main.js.map in the same directory as the uglified file. Wrapping generated code in an enclosure When building your own JavaScript code modules, it's usually a good idea to have them wrapped in a wrapper function to ensure that you don't pollute the global scope with variables that you won't be using outside of the module itself. For this purpose, we can use the wrap option to indicate that we'd like to have the resulting uglified code wrapped in a wrapper function, as shown in the following example: uglify: { main: {    src: 'src/main.js',    dest: 'dist/main.js',    options: {      wrap: true    } } } If we now take a look at the result dist/main.js file, we should see that all the uglified contents of the original file are now contained within a wrapper function. 
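Before moving on to RequireJS, it may help to see how the recipes covered so far can be tied together in a single Gruntfile. The sketch below is not taken from the recipes themselves; it assumes the standard grunt-contrib-* package names and the usual loadNpmTasks/registerTask conventions, with a few of the task configurations from the previous sections collected into one initConfig call:

module.exports = function (grunt) {
  grunt.initConfig({
    // Task configurations from the recipes above (htmlmin, cssmin, imagemin,
    // jshint, uglify) would be gathered here, for example:
    htmlmin: {
      dist: {
        src: 'src/index.html',
        dest: 'dist/index.html',
        options: { removeComments: true, collapseWhitespace: true }
      }
    },
    cssmin: {
      dist: { src: 'src/style.css', dest: 'dist/style.min.css' }
    },
    uglify: {
      main: { src: 'src/main.js', dest: 'dist/main.js' }
    }
  });

  // Each plugin package needs to be installed first (for example,
  // npm install grunt-contrib-uglify --save-dev) and then loaded here.
  grunt.loadNpmTasks('grunt-contrib-htmlmin');
  grunt.loadNpmTasks('grunt-contrib-cssmin');
  grunt.loadNpmTasks('grunt-contrib-uglify');

  // A single alias task runs the whole optimization chain with "grunt dist".
  grunt.registerTask('dist', ['htmlmin', 'cssmin', 'uglify']);
};

With a setup like this, running grunt dist executes each optimization task in order, which is usually more convenient than invoking grunt htmlmin, grunt cssmin, and grunt uglify separately.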
Setting up RequireJS In this recipe, we'll make use of the contrib-requirejs (0.4.4) plugin to package the modularized source code of our web application into a single file. For the most part, this plugin just provides a wrapper for the RequireJS tool. RequireJS provides a framework to modularize JavaScript source code and consume those modules in an orderly fashion. It also allows packaging an entire application into one file and importing only the modules that are required while keeping the module structure intact. Getting ready In this example, we'll work with the basic project structure. How to do it... The following steps take us through creating some files for a sample application and setting up a task that bundles them into one file. We'll start by installing the package that contains the contrib-requirejs plugin. First, we'll need a file that will contain our RequireJS configuration. Let's create a file called config.js in the src directory and add the following content in it: require.config({ baseUrl: 'app' }); Secondly, we'll create a sample module that we'd like to use in our application. Let's create a file called sample.js in the src/app directory and add the following content in it: define(function (require) { return function () {    console.log('Sample Module'); } }); Lastly, we'll need a file that will contain the main entry point for our application, and also makes use of our sample module. Let's create a file called main.js in the src/app directory and add the following content in it: require(['sample'], function (sample) { sample(); }); Now that we've got all the necessary files required for our sample application, we can setup a requirejs task that will bundle it all into one file: requirejs: { app: {    options: {      mainConfigFile: 'src/config.js',      name: 'main',      out: 'www/js/app.js'    } } } The mainConfigFile option points out the configuration file that will determine the behavior of RequireJS. The name option indicates the name of the module that contains the application entry point. In the case of this example, our application entry point is contained in the app/main.js file, and app is the base directory of our application in the src/config.js file. This translates the app/main.js filename into the main module name. The out option is used to indicate the file that should receive the result of the bundled application. We can now run the task using the grunt requirejs command, which should produce output similar to the following: Running "requirejs:app" (requirejs) task We should now have a file named app.js in the www/js directory that contains our entire sample application. There's more... The requirejs task provides us with all the underlying options provided by the RequireJS tool. We'll look at how to use these exposed options and generate a source map. Using RequireJS optimizer options The RequireJS optimizer is quite an intricate tool, and therefore, provides a large number of options to tweak its behavior. The contrib-requirejs plugin allows us to easily set any of these options by just specifying them as options of the plugin itself. 
A list of all the available configuration options for the RequireJS build system can be found in the example configuration file at the following URL: https://github.com/jrburke/r.js/blob/master/build/example.build.js The following example indicates that the UglifyJS2 optimizer should be used instead of the default UglifyJS optimizer by using the optimize option: requirejs: { app: {    options: {      mainConfigFile: 'src/config.js',      name: 'main',      out: 'www/js/app.js',      optimize: 'uglify2'    } } } Generating a source map When the source code is bundled into one file, it becomes somewhat harder to debug, as you now have to trawl through miles of code to get to the point you're actually interested in. A source map can help us with this issue by relating the resulting bundled file to the modularized structure it is derived from. Simply put, with a source map, our debugger will display the separate files we had before, even though we're actually using the bundled file. The following example makes use of the generateSourceMap option to indicate that we'd like to generate a source map along with the resulting file: requirejs: { app: {    options: {      mainConfigFile: 'src/config.js',      name: 'main',      out: 'www/js/app.js',      optimize: 'uglify2',      preserveLicenseComments: false,      generateSourceMaps: true    } } } In order to use the generateSourceMap option, we have to indicate that UglifyJS2 is to be used for optimization, by setting the optimize option to uglify2, and that license comments should not be preserved, by setting the preserveLicenseComments option to false. Summary This article covers the optimization of images, minifying of CSS, ensuring the quality of our JavaScript code, compressing it, and packaging it all together into one source file. Resources for Article: Further resources on this subject: Grunt in Action [article] So, what is Node.js? [article] Exploring streams [article]

Why Meteor Rocks!

Packt
08 Jul 2015
23 min read
In this article by Isaac Strack, the author of the book, Getting Started with Meteor.js JavaScript Framework - Second Edition, has discussed some really amazing features of Meteor that has contributed a lot to the success of Meteor. Meteor is a disruptive (in a good way!) technology. It enables a new type of web application that is faster, easier to build, and takes advantage of modern techniques, such as Full Stack Reactivity, Latency Compensation, and Data On The Wire. (For more resources related to this topic, see here.) This article explains how web applications have changed over time, why that matters, and how Meteor specifically enables modern web apps through the above-mentioned techniques. By the end of this article, you will have learned: What a modern web application is What Data On The Wire means and how it's different How Latency Compensation can improve your app experience Templates and Reactivity—programming the reactive way! Modern web applications Our world is changing. With continual advancements in displays, computing, and storage capacities, things that weren't even possible a few years ago are now not only possible but are critical to the success of a good application. The Web in particular has undergone significant change. The origin of the web app (client/server) From the beginning, web servers and clients have mimicked the dumb terminal approach to computing where a server with significantly more processing power than a client will perform operations on data (writing records to a database, math calculations, text searches, and so on), transform the data and render it (turn a database record into HTML and so on), and then serve the result to the client, where it is displayed for the user. In other words, the server does all the work, and the client acts as more of a display, or a dumb terminal. This design pattern for this is called…wait for it…the client/server design pattern. The diagrammatic representation of the client-server architecture is shown in the following diagram: This design pattern, borrowed from the dumb terminals and mainframes of the 60s and 70s, was the beginning of the Web as we know it and has continued to be the design pattern that we think of when we think of the Internet. The rise of the machines (MVC) Before the Web (and ever since), desktops were able to run a program such as a spreadsheet or a word processor without needing to talk to a server. This type of application could do everything it needed to, right there on the big and beefy desktop machine. During the early 90s, desktop computers got even more beefy. At the same time, the Web was coming alive, and people started having the idea that a hybrid between the beefy desktop application (a fat app) and the connected client/server application (a thin app) would produce the best of both worlds. This kind of hybrid app—quite the opposite of a dumb terminal—was called a smart app. Many business-oriented smart apps were created, but the easiest examples can be found in computer games. Massively Multiplayer Online games (MMOs), first-person shooters, and real-time strategies are smart apps where information (the data model) is passed between machines through a server. The client in this case does a lot more than just display the information. It performs most of the processing (or acts as a controller) and transforms the data into something to be displayed (the view). This design pattern is simple but very effective. It's called the Model View Controller (MVC) pattern. 
The model is essentially the data for an application. In the context of a smart app, the model is provided by a server. The client makes requests to the server for data and stores that data as the model. Once the client has a model, it performs actions/logic on that data and then prepares it to be displayed on the screen. This part of the application (talking to the server, modifying the data model, and preparing data for display) is called the controller. The controller sends commands to the view, which displays the information. The view also reports back to the controller when something happens on the screen (a button click, for example). The controller receives the feedback, performs the logic, and updates the model. Lather, rinse, repeat! Since web browsers were built to be "dumb clients", the idea of using a browser as a smart app back then was out of question. Instead, smart apps were built on frameworks such as Microsoft .NET, Java, or Macromedia (now Adobe) Flash. As long as you had the framework installed, you could visit a web page to download/run a smart app. Sometimes, you could run the app inside the browser, and sometimes, you would download it first, but either way, you were running a new type of web app where the client application could talk to the server and share the processing workload. The browser grows up Beginning in the early 2000s, a new twist on the MVC pattern started to emerge. Developers started to realize that, for connected/enterprise "smart apps", there was actually a nested MVC pattern. The server code (controller) was performing business logic against the database (model) through the use of business objects and then sending processed/rendered data to the client application (a "view"). The client was receiving this data from the server and treating it as its own personal "model". The client would then act as a proper controller, perform logic, and send the information to the view to be displayed on the screen. So, the "view" for the server MVC was the "model" for the client MVC. As browser technologies (HTML and JavaScript) matured, it became possible to create smart apps that used the Nested MVC design pattern directly inside an HTML web page. This pattern makes it possible to run a full-sized application using only JavaScript. There is no longer any need to download multiple frameworks or separate apps. You can now get the same functionality from visiting a URL as you could previously by buying a packaged product. A giant Meteor appears! Meteor takes modern web apps to the next level. It enhances and builds upon the nested MVC design pattern by implementing three key features: Data On The Wire through the Distributed Data Protocol (DDP) Latency Compensation with Mini Databases Full Stack Reactivity with Blaze and Tracker Let's walk through these concepts to see why they're valuable, and then, we'll apply them to our Lending Library application. Data On The Wire The concept of Data On The Wire is very simple and in tune with the nested MVC pattern; instead of having a server process everything, render content, and then send HTML across the wire, why not just send the data across the wire and let the client decide what to do with it? This concept is implemented in Meteor using the Distributed Data Protocol, or DDP. DDP has a JSON-based syntax and sends messages similar to the REST protocol. Additions, deletions, and changes are all sent across the wire and handled by the receiving service/client/device. 
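To make this a little more concrete, here is a rough illustration of what such messages can look like. The field names follow the publicly documented DDP specification rather than anything shown in this article, and the collection and values are borrowed from the lists example used later on, so treat this as an approximation of the real wire format rather than a verbatim capture:

// Server announcing a new document to subscribed clients
{ "msg": "added", "collection": "lists", "id": "X9cA3", "fields": { "Category": "Games" } }

// Server announcing a change to an existing document
{ "msg": "changed", "collection": "lists", "id": "X9cA3", "fields": { "Category": "Board Games" } }

// Server announcing that a document was removed
{ "msg": "removed", "collection": "lists", "id": "X9cA3" }

Each message simply states what happened and to which document; it is entirely up to the receiving end to decide how to react.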
Since DDP uses WebSockets rather than HTTP, the data can be pushed whenever changes occur. But the true beauty of DDP lies in the generic nature of the communication. It doesn't matter what kind of system sends or receives data over DDP—it can be a server, a web service, or a client app—they all use the same protocol to communicate. This means that none of the systems know (or care) whether the other systems are clients or servers. With the exception of the browser, any system can be a server, and without exception, any server can act as a client. All the traffic looks the same and can be treated in a similar manner. In other words, the traditional concept of having a single server for a single client goes away. You can hook multiple servers together, each serving a discreet purpose, or you can have a client connect to multiple servers, interacting with each one differently. Think about what you can do with a system like that: Imagine multiple systems all coming together to create, for example, a health monitoring system. Some systems are built with C++, some with Arduino, some with…well, we don't really care. They all speak DDP. They send and receive data on the wire and decide individually what to do with that data. Suddenly, very difficult and complex problems become much easier to solve. DDP has been implemented in pretty much every major programming language, allowing you true freedom to architect an enterprise application. Latency Compensation Meteor employs a very clever technique called Mini Databases. A mini database is a "lite" version of a normal database that lives in the memory on the client side. Instead of the client sending requests to a server, it can make changes directly to the mini database on the client. This mini database then automatically syncs with the server (using DDP of course), which has the actual database. Out of the box, Meteor uses MongoDB and Minimongo: When the client notices a change, it first executes that change against the client-side Minimongo instance. The client then goes on its merry way and lets the Minimongo handlers communicate with the server over DDP. If the server accepts the change, it then sends out a "changed" message to all connected clients, including the one that made the change. If the server rejects the change, or if a newer change has come in from a different client, the Minimongo instance on the client is corrected, and any affected UI elements are updated as a result. All of this doesn't seem very groundbreaking, but here's the thing—it's all asynchronous, and it's done using DDP. This means that the client doesn't have to wait until it gets a response back from the server. It can immediately update the UI based on what is in the Minimongo instance. What if the change was illegal or other changes have come in from the server? This is not a problem as the client is updated as soon as it gets word from the server. Now, what if you have a slow internet connection or your connection goes down temporarily? In a normal client/server environment, you couldn't make any changes, or the screen would take a while to refresh while the client waits for permission from the server. However, Meteor compensates for this. Since the changes are immediately sent to Minimongo, the UI gets updated immediately. So, if your connection is down, it won't cause a problem: All the changes you make are reflected in your UI, based on the data in Minimongo. 
When your connection comes back, all the queued changes are sent to the server, and the server will send authorized changes to the client. Basically, Meteor lets the client take things on faith. If there's a problem, the data coming in from the server will fix it, but for the most part, the changes you make will be ratified and broadcast by the server immediately. Coding this type of behavior in Meteor is crazy easy (although you can make it more complex and therefore more controlled if you like): lists = new Mongo.Collection("lists"); This one line declares that there is a lists data model. Both the client and server will have a version of it, but they treat their versions differently. The client will subscribe to changes announced by the server and update its model accordingly. The server will publish changes, listen to change requests from the client, and update its model (its master copy) based on these change requests. Wow, one line of code that does all that! Of course, there is more to it, but that's beyond the scope of this article, so we'll move on. To better understand Meteor data synchronization, see the Publish and subscribe section of the meteor documentation at http://docs.meteor.com/#/full/meteor_publish. Full Stack Reactivity Reactivity is integral to every part of Meteor. On the client side, Meteor has the Blaze library, which uses HTML templates and JavaScript helpers to detect changes and render the data in your UI. Whenever there is a change, the helpers re-run themselves and add, delete, and change UI elements, as appropriate, based on the structure found in the templates. These functions that re-run themselves are called reactive computations. On both the client and the server, Meteor also offers reactive computations without having to use a UI. Called the Tracker library, these helpers also detect any data changes and rerun themselves accordingly. Because both the client and the server are JavaScript-based, you can use the Tracker library anywhere. This is defined as isomorphic or full stack reactivity because you're using the same language (and in some cases the same code!) on both the client and the server. Re-running functions on data changes has a really amazing benefit for you, the programmer: you get to write code declaratively, and Meteor takes care of the reactive part automatically. Just tell Meteor how you want the data displayed, and Meteor will manage any and all data changes. This declarative style is usually accomplished through the use of templates. Templates work their magic through the use of view data bindings. Without getting too deep, a view data binding is a shared piece of data that will be displayed differently if the data changes. Let's look at a very simple data binding—one for which you don't technically need Meteor—to illustrate the point. Let's perform the following set of steps to understand the concept in detail: In LendLib.html, you will see an HTML-based template expression: <div id="categories-container">      {{> categories}}   </div> This expression is a placeholder for an HTML template that is found just below it: <template name="categories">    <h2 class="title">my stuff</h2>.. So, {{> categories}} is basically saying, "put whatever is in the template categories right here." And the HTML template with the matching name is providing that. 
If you want to see how data changes will affect the display, change the h2 tag to an h4 tag and save the change: <template name="categories">    <h4 class="title">my stuff</h4> You'll see the effect in your browser. (my stuff will become itsy bitsy.) That's view data binding at work. Change the h4 tag back to an h2 tag and save the change, unless you like the change. No judgment here...okay, maybe a little bit of judgment. It's ugly, and tiny, and hard to read. Seriously, you should change it back before someone sees it and makes fun of you! Alright, now that we know what a view data binding is, let's see how Meteor uses it. Inside the categories template in LendLib.html, you'll find even more templates: <template name="categories"> <h4 class="title">my stuff</h4> <div id="categories" class="btn-group">    {{#each lists}}      <div class="category btn btn-primary">        {{Category}}      </div>    {{/each}} </div> </template> Meteor uses a template language called Spacebars to provide instructions inside templates. These instructions are called expressions, and they let us do things like add HTML for every record in a collection, insert the values of properties, and control layouts with conditional statements. The first Spacebars expression is part of a pair and is a for-each statement. {{#each lists}} tells the interpreter to perform the action below it (in this case, it tells it to make a new div element) for each item in the lists collection. lists is the piece of data, and {{#each lists}} is the placeholder. Now, inside the {{#each lists}} expression, there is one more Spacebars expression: {{Category}} Since the expression is found inside the #each expression, it is considered a property. That is to say that {{Category}} is the same as saying this.Category, where this is the current item in the for-each loop. So, the placeholder is saying, "add the value of the Category property for the current record." Now, if we look in LendLib.js, we will see the reactive values (called reactive contexts) behind the templates: lists : function () { return lists.find(... Here, Meteor is declaring a template helper named lists. The helper, lists, is found inside the template helpers belonging to categories. The lists helper happens to be a function that returns all the data in the lists collection, which we defined previously. Remember this line? lists = new Mongo.Collection("lists"); This lists collection is returned by the above-mentioned helper. When there is a change to the lists collection, the helper gets updated and the template's placeholder is changed as well. Let's see this in action. On your web page pointing to http://localhost:3000, open the browser console and enter the following line: > lists.insert({Category:"Games"}); This will update the lists data collection. The template will see this change and update the HTML code/placeholder. Each of the placeholders will run one additional time for the new entry in lists, and you'll see the following screen: When the lists collection was updated, the Template.categories.lists helper detected the change and reran itself (recomputed). This changed the contents of the code meant to be displayed in the {{> categories}} placeholder. Since the contents were changed, the affected part of the template was re-run. Now, take a minute here and think about how little we had to do to get this reactive computation to run: we simply created a template, instructing Blaze how we want the lists data collection to be displayed, and we put in a placeholder. 
This is simple, declarative programming at its finest! Let's create some templates We'll now see a real-life example of reactive computations and work on our Lending Library at the same time. Adding categories through the console has been a fun exercise, but it's not a long-term solution. Let's make it so that we can do that on the page instead as follows: Open LendLib.html and add a new button just before the {{#each lists}} expression: <div id="categories" class="btn-group"> <div class="category btn btn-primary" id="btnNewCat">    <span class="glyphicon glyphicon-plus"></span> </div> {{#each lists}} This will add a plus button on the page, as follows: Now, we want to change the button into a text field when we click on it. So let's build that functionality by using the reactive pattern. We will make it based on the value of a variable in the template. Add the following {{#if…else}} conditionals around our new button: <div id="categories" class="btn-group"> {{#if new_cat}} {{else}}    <div class="category btn btn-primary" id="btnNewCat">      <span class="glyphicon glyphicon-plus"></span>    </div> {{/if}} {{#each lists}} The first line, {{#if new_cat}}, checks to see whether new_cat is true or false. If it's false, the {{else}} section is triggered, and it means that we haven't yet indicated that we want to add a new category, so we should be displaying the button with the plus sign. In this case, since we haven't defined it yet, new_cat will always be false, and so the display won't change. Now, let's add the HTML code to display when we want to add a new category: {{#if new_cat}} <div class="category form-group" id="newCat">      <input type="text" id="add-category" class="form-control" value="" />    </div> {{else}} ... {{/if}} There's the smallest bit of CSS we need to take care of as well. Open ~/Documents/Meteor/LendLib/LendLib.css and add the following declaration: #newCat { max-width: 250px; } Okay, so now we've added an input field, which will show up when new_cat is true. The input field won't show up unless it is set to true; so, for now, it's hidden. So, how do we make new_cat equal to true? Save your changes if you haven't already done so, and open LendLib.js. First, we'll declare a Session variable, just below our Meteor.isClient check function, at the top of the file: if (Meteor.isClient) { // We are declaring the 'adding_category' flag Session.set('adding_category', false); Now, we'll declare the new template helper new_cat, which will be a function returning the value of adding_category. We need to place the new helper in the Template.categories.helpers() method, just below the declaration for lists: Template.categories.helpers({ lists: function () {    ... }, new_cat: function(){    //returns true if adding_category has been assigned    //a value of true    return Session.equals('adding_category',true); } }); Note the comma (,) on the line above new_cat. It's important that you add that comma, or your code will not execute. Save these changes, and you'll see that nothing has changed. Ta-da! In reality, this is exactly as it should be because we haven't done anything to change the value of adding_category yet. Let's do this now: First, we'll declare our click event handler, which will change the value in our Session variable. To do this, add the following highlighted code just below the Template.categories.helpers() block: Template.categories.helpers({ ... 
}); Template.categories.events({ 'click #btnNewCat': function (e, t) {    Session.set('adding_category', true);    Tracker.flush();    focusText(t.find("#add-category")); } }); Now, let's take a look at the following line of code: Template.categories.events({ This line declares that events will be found in the category template. Now, let's take a look at the next line: 'click #btnNewCat': function (e, t) { This tells us that we're looking for a click event on the HTML element with an id="btnNewCat" statement (which we already created in LendLib.html). Session.set('adding_category', true); Tracker.flush(); focusText(t.find("#add-category")); Next, we set the Session variable, adding_category = true, flush the DOM (to clear up anything wonky), and then set the focus onto the input box with the id="add-category" expression. There is one last thing to do, and that is to quickly add the focusText(). helper function. To do this, just before the closing tag for the if (Meteor.isClient) function, add the following code: /////Generic Helper Functions///// //this function puts our cursor where it needs to be. function focusText(i) { i.focus(); i.select(); }; } //<------closing bracket for if(Meteor.isClient){} Now, when you save the changes and click on the plus button, you will see the input box: Fancy! However, it's still not useful, and we want to pause for a second and reflect on what just happened; we created a conditional template in the HTML page that will either show an input box or a plus button, depending on the value of a variable. This variable is a reactive variable, called a reactive context. This means that if we change the value of the variable (like we do with the click event handler), then the view automatically updates because the new_cat helpers function (a reactive computation) will rerun. Congratulations, you've just used Meteor's reactive programming model! To really bring this home, let's add a change to the lists collection (which is also a reactive context, remember?) and figure out a way to hide the input field when we're done. First, we need to add a listener for the keyup event. Or, to put it another way, we want to listen when the user types something in the box and hits Enter. When this happens, we want to add a category based on what the user typed. To do this, let's first declare the event handler. Just after the click handler for #btnNewCat, let's add another event handler: 'click #btnNewCat': function (e, t) {    ... }, 'keyup #add-category': function (e,t){    if (e.which === 13)    {      var catVal = String(e.target.value || "");      if (catVal)      {        lists.insert({Category:catVal});        Session.set('adding_category', false);      }    } } We add a "," character at the end of the first click handler, and then add the keyup event handler. Now, let's check each of the lines in the preceding code: This line checks to see whether we hit the Enter/Return key. if (e.which === 13) This line of code checks to see whether the input field has any value in it: var catVal = String(e.target.value || ""); if (catVal) If it does, we want to add an entry to the lists collection: lists.insert({Category:catVal}); Then, we want to hide the input box, which we can do by simply modifying the value of adding_category: Session.set('adding_category', false); There is one more thing to add and then we'll be done. When we click away from the input box, we want to hide it and bring back the plus button. 
We already know how to do this reactively, so let's add a quick function that changes the value of adding_category. To do this, add one more comma after the keyup event handler and insert the following event handler: 'keyup #add-category': function (e,t){ ... }, 'focusout #add-category': function(e,t){    Session.set('adding_category',false); } Save your changes, and let's see this in action! In your web browser on http://localhost:3000, click on the plus sign, add the word Clothes, and hit Enter. Your screen should now resemble the following screenshot: Feel free to add more categories if you like. Also, experiment by clicking on the plus button, typing something in, and then clicking away from the input field. Summary In this article, you learned about the history of web applications and saw how we've moved from a traditional client/server model to a nested MVC design pattern. You learned what smart apps are, and you also saw how Meteor has taken smart apps to the next level with Data On The Wire, Latency Compensation, and Full Stack Reactivity. You saw how Meteor uses templates and helpers to automatically update content, using reactive variables and reactive computations. Lastly, you added more functionality to the Lending Library. You made a button and an input field to add categories, and you did it all using reactive programming rather than directly editing the HTML code. Resources for Article: Further resources on this subject: Building the next generation Web with Meteor [article] Quick start - creating your first application [article] Meteor.js JavaScript Framework: Why Meteor Rocks! [article]
How to build a Remote-controlled TV with Node-Webkit

Roberto González
08 Jul 2015
14 min read
Node-webkit is one of the most promising technologies to come out in the last few years. It lets you ship a native desktop app for Windows, Mac, and Linux just using HTML, CSS, and some JavaScript. These are the exact same languages you use to build any web app. You basically get your very own Frameless Webkit to build your app, which is then supercharged with NodeJS, giving you access to some powerful libraries that are not available in a typical browser. As a demo, we are going to build a remote-controlled Youtube app. This involves creating a native app that displays YouTube videos on your computer, as well as a mobile client that will let you search for and select the videos you want to watch straight from your couch. You can download the finished project from https://github.com/Aerolab/youtube-tv. You need to follow the first part of this guide (Getting started) to set up the environment and then run run.sh (on Mac) or run.bat (on Windows) to start the app. Getting started First of all, you need to install Node.JS (a JavaScript platform), which you can download from http://nodejs.org/download/. The installer comes bundled with NPM (Node.JS Package Manager), which lets you install everything you need for this project. Since we are going to be building two apps (a desktop app and a mobile app), it’s better if we get the boring HTML+CSS part out of the way, so we can concentrate on the JavaScript part of the equation. Download the project files from https://github.com/Aerolab/youtube-tv/blob/master/assets/basics.zip and put them in a new folder. You can name the project’s folder youtube-tv  or whatever you want. The folder should look like this: - index.html // This is the starting point for our desktop app - css // Our desktop app styles - js // This is where the magic happens - remote // This is where the magic happens (Part 2) - libraries // FFMPEG libraries, which give you H.264 video support in Node-Webkit - player // Our youtube player - Gruntfile.js // Build scripts - run.bat // run.bat runs the app on Windows - run.sh // sh run.sh runs the app on Mac Now open the Terminal (on Mac or Linux) or a new command prompt (on Windows) right in that folder. Now we’ll install a couple of dependencies we need for this project, so type these commands to install node-gyp and grunt-cli. Each one will take a few seconds to download and install: On Mac or Linux: sudo npm install node-gyp -g sudo npm install grunt-cli -g  On Windows: npm install node-gyp -g npm install grunt-cli -g Leave the Terminal open. We’ll be using it again in a bit. All Node.JS apps start with a package.json file (our manifest), which holds most of the settings for your project, including which dependencies you are using. Go ahead and create your own package.json file (right inside the project folder) with the following contents. Feel free to change anything you like, such as the project name, the icon, or anything else. Check out the documentation at https://github.com/rogerwang/node-webkit/wiki/Manifest-format: { "//": "The // keys in package.json are comments.", "//": "Your project’s name. Go ahead and change it!", "name": "Remote", "//": "A simple description of what the app does.", "description": "An example of node-webkit", "//": "This is the first html the app will load. Just leave this this way", "main": "app://host/index.html", "//": "The version number. 
0.0.1 is a good start :D", "version": "0.0.1", "//": "This is used by Node-Webkit to set up your app.", "window": { "//": "The Window Title for the app", "title": "Remote", "//": "The Icon for the app", "icon": "css/images/icon.png", "//": "Do you want the File/Edit/Whatever toolbar?", "toolbar": false, "//": "Do you want a standard window around your app (a title bar and some borders)?", "frame": true, "//": "Can you resize the window?", "resizable": true}, "webkit": { "plugin": false, "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/537.36 (KHTML, like Gecko) Safari/537.36" }, "//": "These are the libraries we’ll be using:", "//": "Express is a web server, which will handle the files for the remote", "//": "Socket.io lets you handle events in real time, which we'll use with the remote as well.", "dependencies": { "express": "^4.9.5", "socket.io": "^1.1.0" }, "//": "And these are just task handlers to make things easier", "devDependencies": { "grunt": "^0.4.5", "grunt-contrib-copy": "^0.6.0", "grunt-node-webkit-builder": "^0.1.21" } } You’ll also find Gruntfile.js, which takes care of downloading all of the node-webkit assets and building the app once we are ready to ship. Feel free to take a look into it, but it’s mostly boilerplate code. Once you’ve set everything up, go back to the Terminal and install everything you need by typing: npm install grunt nodewebkitbuild You may run into some issues when doing this on Mac or Linux. In that case, try using sudo npm install and sudo grunt nodewebkitbuild. npm install installs all of the dependencies you mentioned in package.json, both the regular dependencies and the development ones, like grunt and grunt-nodewebkitbuild, which downloads the Windows and Mac version of node-webkit, setting them up so they can play videos, and building the app. Wait a bit for everything to install properly and we’re ready to get started. Note that if you are using Windows, you might get a scary error related to Visual C++ when running npm install. Just ignore it. Building the desktop app All web apps (or websites for that matter) start with an index.html file. We are going to be creating just that to get our app to run: <!DOCTYPE html><html> <head> <metacharset="utf-8"/> <title>Youtube TV</title> <linkhref='http://fonts.googleapis.com/css?family=Roboto:500,400'rel='stylesheet'type='text/css'/> <linkhref="css/normalize.css"rel="stylesheet"type="text/css"/> <linkhref="css/styles.css"rel="stylesheet"type="text/css"/> </head> <body> <divid="serverInfo"> <h1>Youtube TV</h1> </div> <divid="videoPlayer"> </div> <script src="js/jquery-1.11.1.min.js"></script> <script src="js/youtube.js"></script> <script src="js/app.js"></script> </body> </html> As you may have noticed, we are using three scripts for our app: jQuery (pretty well known at this point), a Youtube video player, and finally app.js, which contains our app's logic. Let’s dive into that! First of all, we need to create the basic elements for our remote control. The easiest way of doing this is to create a basic web server and serve a small web app that can search Youtube, select a video, and have some play/pause controls so we don’t have any good reasons to get up from the couch. Open js/app.js and type the following: // Show the Developer Tools. And yes, Node-Webkit has developer tools built in! 
Uncomment it to open it automatically//require('nw.gui').Window.get().showDevTools(); // Express is a web server, will will allow us to create a small web app with which to control the playervar express = require('express'); var app = express(); var server = require('http').Server(app); var io = require('socket.io')(server); // We'll be opening up our web server on Port 8080 (which doesn't require root privileges)// You can access this server at http://127.0.0.1:8080var serverPort =8080; server.listen(serverPort); // All the static files (css, js, html) for the remote will be served using Express.// These assets are in the /remote folderapp.use('/', express.static('remote')); With those 7 lines of code (not counting comments) we just got a neat web server working on port 8080. If you were paying attention to the code, you may have noticed that we required something called socket.io. This lets us use websockets with minimal effort, which means we can communicate with, from, and to our remote instantly. You can learn more about socket.io at http://socket.io/. Let’s set that up next in app.js: // Socket.io handles the communication between the remote and our app in real time, // so we can instantly send commands from a computer to our remote and backio.on('connection', function (socket) { // When a remote connects to the app, let it know immediately the current status of the video (play/pause)socket.emit('statusChange', Youtube.status); // This is what happens when we receive the watchVideo command (picking a video from the list)socket.on('watchVideo', function (video) { // video contains a bit of info about our video (id, title, thumbnail)// Order our Youtube Player to watch that video Youtube.watchVideo(video); }); // These are playback controls. They receive the “play” and “pause” events from the remotesocket.on('play', function () { Youtube.playVideo(); }); socket.on('pause', function () { Youtube.pauseVideo(); }); }); // Notify all the remotes when the playback status changes (play/pause)// This is done with io.emit, which sends the same message to all the remotesYoutube.onStatusChange =function(status) { io.emit('statusChange', status); }; That’s the desktop part done! In a few dozen lines of code we got a web server running at http://127.0.0.1:8080 that can receive commands from a remote to watch a specific video, as well as handling some basic playback controls (play and pause). We are also notifying the remotes of the status of the player as soon as they connect so they can update their UI with the correct buttons (if it’s playing, show the pause button and vice versa). Now we just need to build the remote. Building the remote control The server is just half of the equation. We also need to add the corresponding logic on the remote control, so it’s able to communicate with our app. 
In remote/index.html, add the following HTML: <!DOCTYPE html><html> <head> <metacharset=“utf-8”/> <title>TV Remote</title> <metaname="viewport"content="width=device-width, initial-scale=1, maximum-scale=1"/> <linkrel="stylesheet"href="/css/normalize.css"/> <linkrel="stylesheet"href="/css/styles.css"/> </head> <body> <divclass="controls"> <divclass="search"> <inputid="searchQuery"type="search"value=""placeholder="Search on Youtube..."/> </div> <divclass="playback"> <buttonclass="play">&gt;</button> <buttonclass="pause">||</button> </div> </div> <divid="results"class="video-list"> </div> <divclass="__templates"style="display:none;"> <articleclass="video"> <figure><imgsrc=""alt=""/></figure> <divclass="info"> <h2></h2> </div> </article> </div> <script src="/socket.io/socket.io.js"></script> <script src="/js/jquery-1.11.1.min.js"></script> <script src="/js/search.js"></script> <script src="/js/remote.js"></script> </body> </html> Again, we have a few libraries: Socket.io is served automatically by our desktop app at /socket.io/socket.io.js, and it manages the communication with the server. jQuery is somehow always there, search.js manages the integration with the Youtube API (you can take a look if you want), and remote.js handles the logic for the remote. The remote itself is pretty simple. It can look for videos on Youtube, and when we click on a video it connects with the app, telling it to play the video with socket.emit. Let’s dive into remote/js/remote.js to make this thing work: // First of all, connect to the server (our desktop app)var socket = io.connect(); // Search youtube when the user stops typing. This gives us an automatic search.var searchTimeout =null; $('#searchQuery').on('keyup', function(event){ clearTimeout(searchTimeout); searchTimeout = setTimeout(function(){ searchYoutube($('#searchQuery').val()); }, 500); }); // When we click on a video, watch it on the App$('#results').on('click', '.video', function(event){ // Send an event to notify the server we want to watch this videosocket.emit('watchVideo', $(this).data()); }); // When the server tells us that the player changed status (play/pause), alter the playback controlssocket.on('statusChange', function(status){ if( status ==='play' ) { $('.playback .pause').show(); $('.playback .play').hide(); } elseif( status ==='pause'|| status ==='stop' ) { $('.playback .pause').hide(); $('.playback .play').show(); } }); // Notify the app when we hit the play button$('.playback .play').on('click', function(event){ socket.emit('play'); }); // Notify the app when we hit the pause button$('.playback .pause').on('click', function(event){ socket.emit('pause'); }); This is very similar to our server, except we are using socket.emit a lot more often to send commands back to our desktop app, telling it which videos to play and handle our basic play/pause controls. The only thing left to do is make the app run. Ready? Go to the terminal again and type: If you are on a Mac: sh run.sh If you are on Windows: run.bat If everything worked properly, you should be both seeing the app and if you open a web browser to http://127.0.0.1:8080 the remote client will open up. Search for a video, pick anything you like, and it’ll play in the app. This also works if you point any other device on the same network to your computer’s IP, which brings me to the next (and last) point. 
Finishing touches

There is one small improvement we can make: print out the computer's IP to make it easier to connect to the app from any other device on the same Wi-Fi network (like a smartphone). In js/app.js, add the following code to find out the IP and update our UI so it's the first thing we see when we open the app:

// Find the local IP
function getLocalIP(callback) {
  require('dns').lookup(require('os').hostname(), function (err, add, fam) {
    typeof callback == 'function' ? callback(add) : null;
  });
}

// To make things easier, find out the machine's ip and communicate it
getLocalIP(function (ip) {
  $('#serverInfo h1').html('Go to<br/><strong>http://' + ip + ':' + serverPort + '</strong><br/>to open the remote');
});

The next time you run the app, the first thing you'll see is the IP for your computer, so you just need to type that URL in your smartphone to open the remote and control the player from any computer, tablet, or smartphone (as long as they are on the same Wi-Fi network). That's it! You can start expanding on this to improve the app: Why not open the app fullscreen by default? Why not get rid of the horrible default frame and create your own? You can actually designate any div as a window handle with CSS (using -webkit-app-region: drag), so you can drag the window by that div and create your own custom title bar.

Summary While the app has a lot of interlocking parts, it's a good first project to find out what you can achieve with node-webkit in just a few minutes. I hope you enjoyed this post!

About the author Roberto González is the co-founder of Aerolab, “an awesome place where we really push the barriers to create amazing, well-coded designs for the best digital products”. He can be reached at @robertcode.
Man, Do I Like Templates!

Packt
07 Jul 2015
22 min read
In this article by Italo Maia, author of the book Building Web Applications with Flask, we will discuss what Jinja2 is, and how Flask uses Jinja2 to implement the View layer and awe you. Be prepared! (For more resources related to this topic, see here.) What is Jinja2 and how is it coupled with Flask? Jinja2 is a library found at http://jinja.pocoo.org/; you can use it to produce formatted text with bundled logic. Unlike the Python format function, which only allows you to replace markup with variable content, you can have a control structure, such as a for loop, inside a template string and use Jinja2 to parse it. Let's consider this example: from jinja2 import Template x = """ <p>Uncle Scrooge nephews</p> <ul> {% for i in my_list %} <li>{{ i }}</li> {% endfor %} </ul> """ template = Template(x) # output is an unicode string print template.render(my_list=['Huey', 'Dewey', 'Louie']) In the preceding code, we have a very simple example where we create a template string with a for loop control structure ("for tag", for short) that iterates over a list variable called my_list and prints the element inside a "li HTML tag" using curly braces {{ }} notation. Notice that you could call render in the template instance as many times as needed with different key-value arguments, also called the template context. A context variable may have any valid Python variable name—that is, anything in the format given by the regular expression [a-zA-Z_][a-zA-Z0-9_]*. For a full overview on regular expressions (Regex for short) with Python, visit https://docs.python.org/2/library/re.html. Also, take a look at this nice online tool for Regex testing http://pythex.org/. A more elaborate example would make use of an environment class instance, which is a central, configurable, extensible class that may be used to load templates in a more organized way. Do you follow where we are going here? This is the basic principle behind Jinja2 and Flask: it prepares an environment for you, with a few responsive defaults, and gets your wheels in motion. What can you do with Jinja2? Jinja2 is pretty slick. You can use it with template files or strings; you can use it to create formatted text, such as HTML, XML, Markdown, and e-mail content; you can put together templates, reuse templates, and extend templates; you can even use extensions with it. The possibilities are countless, and combined with nice debugging features, auto-escaping, and full unicode support. Auto-escaping is a Jinja2 configuration where everything you print in a template is interpreted as plain text, if not explicitly requested otherwise. Imagine a variable x has its value set to <b>b</b>. If auto-escaping is enabled, {{ x }} in a template would print the string as given. If auto-escaping is off, which is the Jinja2 default (Flask's default is on), the resulting text would be b. Let's understand a few concepts before covering how Jinja2 allows us to do our coding. First, we have the previously mentioned curly braces. 
Double curly braces are a delimiter that allows you to evaluate a variable or function from the provided context and print it into the template: from jinja2 import Template # create the template t = Template("{{ variable }}") # – Built-in Types – t.render(variable='hello you') >> u"hello you" t.render(variable=100) >> u"100" # you can evaluate custom classes instances class A(object): def __str__(self):    return "__str__" def __unicode__(self):    return u"__unicode__" def __repr__(self):    return u"__repr__" # – Custom Objects Evaluation – # __unicode__ has the highest precedence in evaluation # followed by __str__ and __repr__ t.render(variable=A()) >> u"__unicode__" In the preceding example, we see how to use curly braces to evaluate variables in your template. First, we evaluate a string and then an integer. Both result in a unicode string. If we evaluate a class of our own, we must make sure there is a __unicode__ method defined, as it is called during the evaluation. If a __unicode__ method is not defined, the evaluation falls back to __str__ and __repr__, sequentially. This is easy. Furthermore, what if we want to evaluate a function? Well, just call it: from jinja2 import Template # create the template t = Template("{{ fnc() }}") t.render(fnc=lambda: 10) >> u"10" # evaluating a function with argument t = Template("{{ fnc(x) }}") t.render(fnc=lambda v: v, x='20') >> u"20" t = Template("{{ fnc(v=30) }}") t.render(fnc=lambda v: v) >> u"30" To output the result of a function in a template, just call the function as any regular Python function. The function return value will be evaluated normally. If you're familiar with Django, you might notice a slight difference here. In Django, you do not need the parentheses to call a function, or even pass arguments to it. In Flask, the parentheses are always needed if you want the function return evaluated. The following two examples show the difference between Jinja2 and Django function call in a template: {# flask syntax #} {{ some_function() }}   {# django syntax #} {{ some_function }} You can also evaluate Python math operations. Take a look: from jinja2 import Template # no context provided / needed Template("{{ 3 + 3 }}").render() >> u"6" Template("{{ 3 - 3 }}").render() >> u"0" Template("{{ 3 * 3 }}").render() >> u"9" Template("{{ 3 / 3 }}").render() >> u"1" Other math operators will also work. You may use the curly braces delimiter to access and evaluate lists and dictionaries: from jinja2 import Template Template("{{ my_list[0] }}").render(my_list=[1, 2, 3]) >> u'1' Template("{{ my_list['foo'] }}").render(my_list={'foo': 'bar'}) >> u'bar' # and here's some magic Template("{{ my_list.foo }}").render(my_list={'foo': 'bar'}) >> u'bar' To access a list or dictionary value, just use normal plain Python notation. With dictionaries, you can also access a key value using variable access notation, which is pretty neat. Besides the curly braces delimiter, Jinja2 also has the curly braces/percentage delimiter, which uses the notation {% stmt %} and is used to execute statements, which may be a control statement or not. Its usage depends on the statement, where control statements have the following notation: {% stmt %} {% endstmt %} The first tag has the statement name, while the second is the closing tag, which has the name of the statement appended with end in the beginning. You must be aware that a non-control statement may not have a closing tag. 
Let's look at some examples: {% block content %} {% for i in items %} {{ i }} - {{ i.price }} {% endfor %} {% endblock %} The preceding example is a little more complex than what we have been seeing. It uses a control statement for loop inside a block statement (you can have a statement inside another), which is not a control statement, as it does not control execution flow in the template. Inside the for loop you see that the i variable is being printed together with the associated price (defined elsewhere). A last delimiter you should know is {# comments go here #}. It is a multi-line delimiter used to declare comments. Let's see two examples that have the same result: {# first example #} {# second example #} Both comment delimiters hide the content between {# and #}. As can been seen, this delimiter works for one-line comments and multi-line comments, what makes it very convenient. Control structures We have a nice set of built-in control structures defined by default in Jinja2. Let's begin our studies on it with the if statement. {% if true %}Too easy{% endif %} {% if true == true == True %}True and true are the same{% endif %} {% if false == false == False %}False and false also are the same{% endif %} {% if none == none == None %}There's also a lowercase None{% endif %} {% if 1 >= 1 %}Compare objects like in plain python{% endif %} {% if 1 == 2 %}This won't be printed{% else %}This will{% endif %} {% if "apples" != "oranges" %}All comparison operators work = ]{% endif %} {% if something %}elif is also supported{% elif something_else %}^_^{% endif %} The if control statement is beautiful! It behaves just like a python if statement. As seen in the preceding code, you can use it to compare objects in a very easy fashion. "else" and "elif" are also fully supported. You may also have noticed that true and false, non-capitalized, were used together with plain Python Booleans, True and False. As a design decision to avoid confusion, all Jinja2 templates have a lowercase alias for True, False, and None. By the way, lowercase syntax is the preferred way to go. If needed, and you should avoid this scenario, you may group comparisons together in order to change precedence evaluation. See the following example: {% if 5 < 10 < 15 %}true{%else%}false{% endif %} {% if (5 < 10) < 15 %}true{%else%}false{% endif %} {% if 5 < (10 < 15) %}true{%else%}false{% endif %} The expected output for the preceding example is true, true, and false. The first two lines are pretty straightforward. In the third line, first, (10<15) is evaluated to True, which is a subclass of int, where True == 1. Then 5 < True is evaluated, which is certainly false. The for statement is pretty important. One can hardly think of a serious Web application that does not have to show a list of some kind at some point. The for statement can iterate over any iterable instance and has a very simple, Python-like syntax: {% for item in my_list %} {{ item }}{# print evaluate item #} {% endfor %} {# or #} {% for key, value in my_dictionary.items() %} {{ key }}: {{ value }} {% endfor %} In the first statement, we have the opening tag indicating that we will iterate over my_list items and each item will be referenced by the name item. The name item will be available inside the for loop context only. In the second statement, we have an iteration over the key value tuples that form my_dictionary, which should be a dictionary (if the variable name wasn't suggestive enough). Pretty simple, right? The for loop also has a few tricks in store for you. 
When building HTML lists, it's a common requirement to mark each list item in alternating colors in order to improve readability or mark the first or/and last item with some special markup. Those behaviors can be achieved in a Jinja2 for-loop through access to a loop variable available inside the block context. Let's see some examples: {% for i in ['a', 'b', 'c', 'd'] %} {% if loop.first %}This is the first iteration{% endif %} {% if loop.last %}This is the last iteration{% endif %} {{ loop.cycle('red', 'blue') }}{# print red or blue alternating #} {{ loop.index }} - {{ loop.index0 }} {# 1 indexed index – 0 indexed index #} {# reverse 1 indexed index – reverse 0 indexed index #} {{ loop.revindex }} - {{ loop.revindex0 }} {% endfor %} The for loop statement, as in Python, also allow the use of else, but with a slightly different meaning. In Python, when you use else with for, the else block is only executed if it was not reached through a break command like this: for i in [1, 2, 3]: pass else: print "this will be printed" for i in [1, 2, 3]: if i == 3:    break else: print "this will never not be printed" As seen in the preceding code snippet, the else block will only be executed in a for loop if the execution was never broken by a break command. With Jinja2, the else block is executed when the for iterable is empty. For example: {% for i in [] %} {{ i }} {% else %}I'll be printed{% endfor %} {% for i in ['a'] %} {{ i }} {% else %}I won't{% endfor %} As we are talking about loops and breaks, there are two important things to know: the Jinja2 for loop does not support break or continue. Instead, to achieve the expected behavior, you should use loop filtering as follows: {% for i in [1, 2, 3, 4, 5] if i > 2 %} value: {{ i }}; loop.index: {{ loop.index }} {%- endfor %} In the first tag you see a normal for loop together with an if condition. You should consider that condition as a real list filter, as the index itself is only counted per iteration. Run the preceding example and the output will be the following: value:3; index: 1 value:4; index: 2 value:5; index: 3 Look at the last observation in the preceding example—in the second tag, do you see the dash in {%-? It tells the renderer that there should be no empty new lines before the tag at each iteration. Try our previous example without the dash and compare the results to see what changes. We'll now look at three very important statements used to build templates from different files: block, extends, and include. block and extends always work together. The first is used to define "overwritable" blocks in a template, while the second defines a parent template that has blocks, for the current template. Let's see an example: # coding:utf-8 with open('parent.txt', 'w') as file:    file.write(""" {% block template %}parent.txt{% endblock %} =========== I am a powerful psychic and will tell you your past   {#- "past" is the block identifier #} {% block past %} You had pimples by the age of 12. {%- endblock %}   Tremble before my power!!!""".strip())   with open('child.txt', 'w') as file:    file.write(""" {% extends "parent.txt" %}   {# overwriting the block called template from parent.txt #} {% block template %}child.txt{% endblock %}   {#- overwriting the block called past from parent.txt #} {% block past %} You've bought an ebook recently. 
{%- endblock %}""".strip()) with open('other.txt', 'w') as file:    file.write(""" {% extends "child.txt" %} {% block template %}other.txt{% endblock %}""".strip())   from jinja2 import Environment, FileSystemLoader   env = Environment() # tell the environment how to load templates env.loader = FileSystemLoader('.') # look up our template tmpl = env.get_template('parent.txt') # render it to default output print tmpl.render() print "" # loads child.html and its parent tmpl = env.get_template('child.txt') print tmpl.render() # loads other.html and its parent env.get_template('other.txt').render() Do you see the inheritance happening, between child.txt and parent.txt? parent.txt is a simple template with two block statements, called template and past. When you render parent.txt directly, its blocks are printed "as is", because they were not overwritten. In child.txt, we extend the parent.txt template and overwrite all its blocks. By doing that, we can have different information in specific parts of a template without having to rewrite the whole thing. With other.txt, for example, we extend the child.txt template and overwrite only the block-named template. You can overwrite blocks from a direct parent template or from any of its parents. If you were defining an index.txt page, you could have default blocks in it that would be overwritten when needed, saving lots of typing. Explaining the last example, Python-wise, is pretty simple. First, we create a Jinja2 environment (we talked about this earlier) and tell it how to load our templates, then we load the desired template directly. We do not have to bother telling the environment how to find parent templates, nor do we need to preload them. The include statement is probably the easiest statement so far. It allows you to render a template inside another in a very easy fashion. Let's look at an example: with open('base.txt', 'w') as file: file.write(""" {{ myvar }} You wanna hear a dirty joke? {% include 'joke.txt' %} """.strip()) with open('joke.txt', 'w') as file: file.write(""" A boy fell in a mud puddle. {{ myvar }} """.strip())   from jinja2 import Environment, FileSystemLoader   env = Environment() # tell the environment how to load templates env.loader = FileSystemLoader('.') print env.get_template('base.txt').render(myvar='Ha ha!') In the preceding example, we render the joke.txt template inside base.txt. As joke.txt is rendered inside base.txt, it also has full access to the base.txt context, so myvar is printed normally. Finally, we have the set statement. It allows you to define variables for inside the template context. Its use is pretty simple: {% set x = 10 %} {{ x }} {% set x, y, z = 10, 5+5, "home" %} {{ x }} - {{ y }} - {{ z }} In the preceding example, if x was given by a complex calculation or a database query, it would make much more sense to have it cached in a variable, if it is to be reused across the template. As seen in the example, you can also assign a value to multiple variables at once. Macros Macros are the closest to coding you'll get inside Jinja2 templates. The macro definition and usage are similar to plain Python functions, so it is pretty easy. 
Let's try an example: with open('formfield.html', 'w') as file: file.write(''' {% macro input(name, value='', label='') %} {% if label %} <label for='{{ name }}'>{{ label }}</label> {% endif %} <input id='{{ name }}' name='{{ name }}' value='{{ value }}'></input> {% endmacro %}'''.strip()) with open('index.html', 'w') as file: file.write(''' {% from 'formfield.html' import input %} <form method='get' action='.'> {{ input('name', label='Name:') }} <input type='submit' value='Send'></input> </form> '''.strip())   from jinja2 import Environment, FileSystemLoader   env = Environment() env.loader = FileSystemLoader('.') print env.get_template('index.html').render() In the preceding example, we create a macro that accepts a name argument and two optional arguments: value and label. Inside the macro block, we define what should be output. Notice we can use other statements inside a macro, just like a template. In index.html we import the input macro from inside formfield.html, as if formfield was a module and input was a Python function using the import statement. If needed, we could even rename our input macro like this: {% from 'formfield.html' import input as field_input %} You can also import formfield as a module and use it as follows: {% import 'formfield.html' as formfield %} When using macros, there is a special case where you want to allow any named argument to be passed into the macro, as you would in a Python function (for example, **kwargs). With Jinja2 macros, these values are, by default, available in a kwargs dictionary that does not need to be explicitly defined in the macro signature. For example: # coding:utf-8 with open('formfield.html', 'w') as file:    file.write(''' {% macro input(name) -%} <input id='{{ name }}' name='{{ name }}' {% for k,v in kwargs.items() -%}{{ k }}='{{ v }}' {% endfor %}></input> {%- endmacro %} '''.strip())with open('index.html', 'w') as file:    file.write(''' {% from 'formfield.html' import input %} {# use method='post' whenever sending sensitive data over HTTP #} <form method='post' action='.'> {{ input('name', type='text') }} {{ input('passwd', type='password') }} <input type='submit' value='Send'></input> </form> '''.strip())   from jinja2 import Environment, FileSystemLoader   env = Environment() env.loader = FileSystemLoader('.') print env.get_template('index.html').render() As you can see, kwargs is available even though you did not define a kwargs argument in the macro signature. Macros have a few clear advantages over plain templates, that you notice with the include statement: You do not have to worry about variable names in the template using macros You can define the exact required context for a macro block through the macro signature You can define a macro library inside a template and import only what is needed Commonly used macros in a Web application include a macro to render pagination, another to render fields, and another to render forms. You could have others, but these are pretty common use cases. Regarding our previous example, it is good practice to use HTTPS (also known as, Secure HTTP) to send sensitive information, such as passwords, over the Internet. Be careful about that! Extensions Extensions are the way Jinja2 allows you to extend its vocabulary. 
Extensions are not enabled by default, so you can enable an extension only when and if you need, and start using it without much trouble: env = Environment(extensions=['jinja2.ext.do',   'jinja2.ext.with_']) In the preceding code, we have an example where you create an environment with two extensions enabled: do and with. Those are the extensions we will study in this article. As the name suggests, the do extension allows you to "do stuff". Inside a do tag, you're allowed to execute Python expressions with full access to the template context. Flask-Empty, a popular flask boilerplate available at https://github.com/italomaia/flask-empty uses the do extension to update a dictionary in one of its macros, for example. Let's see how we could do the same: {% set x = {1:'home', '2':'boat'} %} {% do x.update({3: 'bar'}) %} {%- for key,value in x.items() %} {{ key }} - {{ value }} {%- endfor %} In the preceding example, we create the x variable with a dictionary, then we update it with {3: 'bar'}. You don't usually need to use the do extension but, when you do, a lot of coding is saved. The with extension is also very simple. You use it whenever you need to create block scoped variables. Imagine you have a value you need cached in a variable for a brief moment; this would be a good use case. Let's see an example: {% with age = user.get_age() %} My age: {{ age }} {% endwith %} My age: {{ age }}{# no value here #} As seen in the example, age exists only inside the with block. Also, variables set inside a with block will only exist inside it. For example: {% with %} {% set count = query.count() %} Current Stock: {{ count }} Diff: {{ prev_count - count }} {% endwith %} {{ count }} {# empty value #} Filters Filters are a marvelous thing about Jinja2! This tool allows you to process a constant or variable before printing it to the template. The goal is to implement the formatting you want, strictly in the template. To use a filter, just call it using the pipe operator like this: {% set name = 'junior' %} {{ name|capitalize }} {# output is Junior #} Its name is passed to the capitalize filter that processes it and returns the capitalized value. To inform arguments to the filter, just call it like a function, like this: {{ ['Adam', 'West']|join(' ') }} {# output is Adam West #} The join filter will join all values from the passed iterable, putting the provided argument between them. Jinja2 has an enormous quantity of available filters by default. That means we can't cover them all here, but we can certainly cover a few. capitalize and lower were seen already. 
Let's look at some further examples: {# prints default value if input is undefined #} {{ x|default('no opinion') }} {# prints default value if input evaluates to false #} {{ none|default('no opinion', true) }} {# prints input as it was provided #} {{ 'some opinion'|default('no opinion') }}   {# you can use a filter inside a control statement #} {# sort by key case-insensitive #} {% for key in {'A':3, 'b':2, 'C':1}|dictsort %}{{ key }}{% endfor %} {# sort by key case-sensitive #} {% for key in {'A':3, 'b':2, 'C':1}|dictsort(true) %}{{ key }}{% endfor %} {# sort by value #} {% for key in {'A':3, 'b':2, 'C':1}|dictsort(false, 'value') %}{{ key }}{% endfor %} {{ [3, 2, 1]|first }} - {{ [3, 2, 1]|last }} {{ [3, 2, 1]|length }} {# prints input length #} {# same as in python #} {{ '%s, =D'|format("I'm John") }} {{ "He has two daughters"|replace('two', 'three') }} {# safe prints the input without escaping it first#} {{ '<input name="stuff" />'|safe }} {{ "there are five words here"|wordcount }} Try the preceding example to see exactly what each filter does. After reading this much about Jinja2, you're probably thinking: "Jinja2 is cool but this is a book about Flask. Show me the Flask stuff!". Ok, ok, I can do that! Of what we have seen so far, almost everything can be used with Flask with no modifications. As Flask manages the Jinja2 environment for you, you don't have to worry about creating file loaders and stuff like that. One thing you should be aware of, though, is that, because you don't instantiate the Jinja2 environment yourself, you can't really pass to the class constructor, the extensions you want to activate. To activate an extension, add it to Flask during the application setup as follows: from flask import Flask app = Flask(__name__) app.jinja_env.add_extension('jinja2.ext.do') # or jinja2.ext.with_ if __name__ == '__main__': app.run() Messing with the template context You can use the render_template method to load a template from the templates folder and then render it as a response. from flask import Flask, render_template app = Flask(__name__)   @app.route("/") def hello():    return render_template("index.html") If you want to add values to the template context, as seen in some of the examples in this article, you would have to add non-positional arguments to render_template: from flask import Flask, render_template app = Flask(__name__)   @app.route("/") def hello():    return render_template("index.html", my_age=28) In the preceding example, my_age would be available in the index.html context, where {{ my_age }} would be translated to 28. my_age could have virtually any value you want to exhibit, actually. Now, what if you want all your views to have a specific value in their context, like a version value—some special code or function; how would you do it? Flask offers you the context_processor decorator to accomplish that. You just have to annotate a function that returns a dictionary and you're ready to go. 
For example:

from flask import Flask, render_template
app = Flask(__name__)

@app.context_processor
def luck_processor():
    from random import randint
    def lucky_number():
        return randint(1, 10)
    return dict(lucky_number=lucky_number)

@app.route("/")
def hello():
    # lucky_number will be available in the index.html context by default
    return render_template("index.html")

Summary In this article, we saw how to render templates using only Jinja2, how control statements look and how to use them, how to write a comment, how to print variables in a template, how to write and use macros, how to load and use extensions, and how to register context processors. I don't know about you, but this article felt like a lot of information! I strongly advise you to experiment with the examples. Knowing your way around Jinja2 will save you a lot of headaches.

Resources for Article: Further resources on this subject:
Recommender systems dissected
Deployment and Post Deployment [article]
Handling sessions and users [article]
Introduction to Custom Template Filters and Tags [article]
Groups and Cohorts in Moodle

Packt
06 Jul 2015
20 min read
In this article by William Rice, author of the book, Moodle E-Learning Course Development - Third Edition shows you how to use groups to separate students in a course into teams. You will also learn how to use cohorts to mass enroll students into courses. Groups versus cohorts Groups and cohorts are both collections of students. There are several differences between them. We can sum up these differences in one sentence, that is; cohorts enable administrators to enroll and unenroll students en masse, whereas groups enable teachers to manage students during a class. Think of a cohort as a group of students working together through the same academic curriculum. For example, a group of students all enrolled in the same course. Think of a group as a subset of students enrolled in a course. Groups are used to manage various activities within a course. Cohort is a system-wide or course category-wide set of students. There is a small amount of overlap between what you can do with a cohort and a group. However, the differences are large enough that you would not want to substitute one for the other. Cohorts In this article, we'll look at how to create and use cohorts. You can perform many operations with cohorts in bulk, affecting many students at once. Creating a cohort To create a cohort, perform the following steps: From the main menu, select Site administration | Users | Accounts | Cohorts. On the Cohorts page, click on the Add button. The Add New Cohort page is displayed. Enter a Name for the cohort. This is the name that you will see when you work with the cohort. Enter a Cohort ID for the cohort. If you upload students in bulk to this cohort, you will specify the cohort using this identifier. You can use any characters you want in the Cohort ID; however, keep in mind that the file you upload to the cohort can come from a different computer system. To be safe, consider using only ASCII characters; such as letters, numbers, some special characters, and no spaces in the Cohort ID option. For example, Spring_2012_Freshmen. Enter a Description that will help you and other administrators remember the purpose of the cohort. Click on Save changes. Now that the cohort is created, you can begin adding users to this cohort. Adding students to a cohort Students can be added to a cohort manually by searching and selecting them. They can also be added in bulk by uploading a file to Moodle. Manually adding and removing students to a cohort If you add a student to a cohort, that student is enrolled in all the courses to which the cohort is synchronized. If you remove a student from a cohort, that student will be unenrolled from all the courses to which the cohort is synchronized. We will look at how to synchronize cohorts and course enrollments later. For now, here is how to manually add and remove students from a cohort: From the main menu, select Site administration | Users | Accounts | Cohorts. On the Cohorts page, for the cohort to which you want to add students, click on the people icon: The Cohort Assign page is displayed. The left-hand side panel displays users that are already in the cohort, if any. The right-hand side panel displays users that can be added to the cohort. Use the Search field to search for users in each panel. You can search for text that is in the user name and e-mail address fields. Use the Add and Remove button to move users from one panel to another. Adding students to a cohort in bulk – upload When you upload students to Moodle, you can add them to a cohort. 
After you have all the students in a cohort, you can quickly enroll and unenroll them in courses just by synchronizing the cohort to the course. If you are going to upload students in bulk, consider putting them in a cohort. This makes it easier to manipulate them later. Here is an example of a cohort. Note that there are 1,204 students enrolled in the cohort: These students were uploaded to the cohort under Administration | Site Administration | Users | Upload users: The file that was uploaded contained information about each student in the cohort. In a spreadsheet, this is how the file looks: username,email,firstname,lastname,cohort1 moodler_1,[email protected],Bill,Binky,open-enrollmentmoodlers moodler_2,[email protected],Rose,Krial,open-enrollmentmoodlers moodler_3,[email protected],Jeff,Marco,open-enrollmentmoodlers moodler_4,[email protected],Dave,Gallo,open-enrollmentmoodlers In this example, we have the minimum required information to create new students. These are as follows: The username The e-mail address The first name The last name We also have the cohort ID (the short name of the cohort) in which we want to place a student. During the upload process, you can see a preview of the file that you will upload: Further down on the Upload users preview page, you can choose the Settings option to handle the upload: Usually, when we upload users to Moodle, we will create new users. However, we can also use the upload option to quickly enroll existing users in the cohort. You saw previously (Manually adding and removing students to a cohort) how to search for and then enroll users in a cohort. However, when you want to enroll hundreds of users in the cohort, it's often faster to create a text file and upload it, than to search your existing users. This is because when you create a text file, you can use powerful tools—such as spreadsheets and databases—to quickly create this file. If you want to perform this, you will find options to Update existing users under the Upload type field. In most Moodle systems, a user's profile must include a city and country. When you upload a user to a system, you can specify the city and country in the upload file or omit them from the upload file and assign the city and country to the system while the file is uploaded. This is performed under Default values on the Upload users page: Now that we have examined some of the capabilities and limitations of this process, let's list the steps to upload a cohort to Moodle: Prepare a plain file that has, at minimum, the username, email, firstname, lastname, and cohort1 information. If you were to create this in a spreadsheet, it may look similar to the following screenshot: Under Administration | Site Administration | Users | Upload users, select the text file that you will upload. On this page, choose Settings to describe the text file, such as delimiter (separator) and encoding. Click on the Upload users button. You will see the first few rows of the text file displayed. Also, additional settings become available on this page. In the Settings section, there are settings that affect what happens when you upload information about existing users. You can choose to have the system overwrite information for existing users, ignore information that conflicts with existing users, create passwords, and so on. In the Default values section, you can enter values to be entered into the user profiles. For example, you can select a city, country, and department for all the users. 
Click on the Upload users button to begin the upload. Cohort sync Using the cohort sync enrolment method, you can enroll and un-enroll large collections of students at once. Using cohort sync involves several steps: Creating a cohort. Enrolling students in the cohort. Enabling the cohort sync enrollment method. Adding the cohort sync enrollment method to a course. You saw the first two steps: how to create a cohort and how to enroll students in the cohort. We will cover the last two steps: enabling the cohort sync method and adding the cohort sync to a course. Enabling the cohort sync enrollment method To enable the cohort sync enrollment method, you will need to log in as an administrator. This cannot be done by someone who has only teacher rights: Select Site administration | Plugins | Enrolments | Manage enrol plugins. Click on the Enable icon located next to Cohort sync. Then, click on the Settings button located next to Cohort sync. On the Settings page, choose the default role for people when you enroll them in a course using Cohort sync. You can change this setting for each course. You will also choose the External unenrol action. This is what happens to a student when they are removed from the cohort. If you choose Unenrol user from course, the user and all his/her grades are removed from the course. The user's grades are purged from Moodle. If you were to read this user to the cohort, all the user's activity in this course will be blank, as if the user was never in the course. If you choose Disable course enrolment and remove roles, the user and all his/her grades are hidden. You will not see this user in the course's grade book. However, if you were to read this user to the cohort or to the course, this user's course records will be restored. After enabling the cohort sync method, it's time to actually add this method to a course. Adding the cohort sync enrollment method to a course To perform this, you will need to log in as an administrator or a teacher in the course: Log in and enter the course to which you want to add the enrolment method. Select Course administration | Users | Enrolment methods. From the Add method drop-down menu, select Cohort sync. In Custom instance name, enter a name for this enrolment method. This will enable you to recognize this method in a list of cohort syncs. For Active, select Yes. This will enroll the users. Select the Cohort option. Select the role that the members of the cohort will be given. Click on the Save changes button. All the users in the cohort will be given a selected role in the course. Un-enroll a cohort from a course There are two ways to un-enroll a cohort from a course. First, you can go to the course's enrollment methods page and delete the enrollment method. Just click on the X button located next to the cohort sync field that you added to the course. However, this will not just remove users from the course, but also delete all their course records. The second method preserves the student records. Once again, go to the course's enrollment methods page located next to the Cohort sync method that you added and click on the Settings icon. On the Settings page, select No for Active. This will remove the role that the cohort was given. However, the members of the cohort will still be listed as course participants. So, as the members of the cohort do not have a role in the course, they can no longer access this course. However, their grades and activity reports are preserved. 
Differences between cohort sync and enrolling a cohort Cohort sync and enrolling a cohort are two different methods. Each has advantages and limitations. If you follow the preceding instructions, you can synchronize a cohort's membership to a course's enrollment. As people are added to and removed from the cohort, they are enrolled and un-enrolled from the course. When working with a large group of users, this can be a great time saver. However, using cohort sync, you cannot un-enroll or change the role of just one person. Consider a scenario where you have a large group of students who want to enroll in several courses, all at once. You put these students in a cohort, enable the cohort sync enrollment method, and add the cohort sync enrollment method to each of these courses. In a few minutes, you have accomplished your goal. Now, if you want to un-enroll some users from some courses, but not from all courses, you remove them from the cohort. So, these users are removed from all the courses. This is how cohort sync works. Cohort sync is everyone or no one When a person is added to or removed from the cohort, this person is added to or removed from all the courses to which the cohort is synced. If that's what you want, great. If not, An alternative to cohort sync is to enroll a cohort. That is, you can select all the members of a cohort and enroll them in a course, all at once. However, this is a one-way journey. You cannot un-enroll them all at once. You will need to un-enroll them one at a time. If you enroll a cohort all at once, after enrollment, users are independent entities. You can un-enroll them and change their role (for example, from student to teacher) whenever you wish. To enroll a cohort in a course, perform the following steps: Enter the course as an administrator or teacher. Select Administration | Course administration | Users | Enrolled users. Click on the Enrol cohort button. A popup window appears. This window lists the cohorts on the site. Click on Enrol users next to the cohort that you want to enroll. The system displays a confirmation message. Now, click on the OK button. You will be taken back to the Enrolled users page. Note that although you can enroll all users in a cohort (all at once), there is no button to un-enroll them all at once. You will need to remove them one at a time from your course. Managing students with groups A group is a collection of students in a course. Outside of a course, a group has no meaning. Groups are useful when you want to separate students studying the same course. For example, if your organization is using the same course for several different classes or groups, you can use the group feature to separate students so that each group can see only their peers in the course. For example, you can create a new group every month for employees hired that month. Then, you can monitor and mentor them together. After you have run a group of people through a course, you may want to reuse this course for another group. You can use the group feature to separate groups so that the current group doesn't see the work done by the previous group. This will be like a new course for the current group. You may want an activity or resource to be open to just one group of people. You don't want others in the class to be able to use that activity or resource. Course versus activity You can apply the groups setting to an entire course. If you do this, every activity and resource in the course will be segregated into groups. 
You can also apply the groups setting to an individual activity or resource. If you do this, it will override the groups setting for the course. Also, it will segregate just this activity, or resource between groups. The three group modes For a course or activity, there are several ways to apply groups. Here are the three group modes: No groups: There are no groups for a course or activity. If students have been placed in groups, ignore it. Also, give everyone the same access to the course or activity. Separate groups: If students have been placed in groups, allow them to see other students and only the work of other students from their own group. Students and work from other groups are invisible. Visible groups: If students have been placed in groups, allow them to see other students and the work of other students from all groups. However, the work from other groups is read only. You can use the No groups setting on an activity in your course. Here, you want every student who ever took the course to be able to interact with each other. For example, you may use the No groups setting in the news forum so that all students who have ever taken the course can see the latest news. Also, you can use the Separate groups setting in a course. Here, you will run different groups at different times. For each group that runs through the course, it will be like a brand new course. You can use the Visible groups setting in a course. Here, students are part of a large and in-person class; you want them to collaborate in small groups online. Also, be aware that some things will not be affected by the groups setting. For example, no matter what the group setting, students will never see each other's assignment submissions. Creating a group There are three ways to create groups in a course. You can: Manually create and populate each group Automatically create and populate groups based on the characteristics of students Import groups using a text file We'll cover these methods in the following subsections. Manually creating and populating a group Don't be discouraged by the idea of manually populating a group with students. It takes only a few clicks to place a student in a group. To create and populate a group, perform the following steps: Select Course administration | Users | Groups. This takes you to the Groups page. Click on the Create group button. The Create group page is displayed. You must enter a Name for the group. This will be the name that teachers and administrators see when they manage a group. The Group ID number is used to match up this group with a group identifier in another system. If your organization uses a system outside Moodle to manage students and this system categorizes students in groups, you can enter the group ID from the other system in this field. It does not need to be a number. This field is optional. The Group description field is optional. It's good practice to use this to explain the purpose and criteria for belonging to a group. The Enrolment key is a code that you can give to students who self enroll in a course. When the student enrolls, he/she is prompted to enter the enrollment key. On entering this key, the student is enrolled in the course and made a member of the group. If you add a picture to this group, then when members are listed (as in a forum), the member will have the group picture shown next to them. Here is an example of a contributor to a forum on http://www.moodle.org with her group memberships: Click on the Save changes button to save the group. 
On the Groups page, the group appears in the left-hand side column. Select this group. In the right-hand side column, search for and select the students that you want to add to this group: Note the Search fields. These enable you to search for students that meet a specific criteria. You can search the first name, last name, and e-mail address. The other part of the user's profile information is not available in this search box. Automatically creating and populating a group When you automatically create groups, Moodle creates a number of groups that you specify and then takes all the students enrolled in the course and allocates them to these groups. Moodle will put the currently enrolled students in these groups even if they already belong to another group in the course. To automatically create a group, use the following steps: Click on the Auto-create groups button. The Auto-create groups page is displayed. In the Naming scheme field, enter a name for all the groups that will be created. You can enter any characters. If you enter @, it will be converted to sequential letters. If you enter #, it will be converted to sequential numbers. For example, if you enter Group @, Moodle will create Group A, Group B, Group C, and so on. In the Auto-create based on field, you will tell the system to choose either of the following options:     Create a specific number of groups and then fill each group with as many students as needed (Number of groups)     Create as many groups as needed so that each group has a specific number of students (Members per group). In the Group/member count field, you will tell the system to choose either of the following options:     How many groups to create (if you choose the preceding Number of groups option)     How many members to put in each group (if you choose the preceding Members per group option) Under Group members, select who will be put in these groups. You can select everyone with a specific role or everyone in a specific cohort. The setting for Prevent last small group is available if you choose Members per group. It prevents Moodle from creating a group with fewer than the number of students that you specify. For example, if your class has 12 students and you choose to create groups with five members per group, Moodle would normally create two groups of five. Then, it would create another group for the last two members. However, with Prevent last small group selected, it will distribute the remaining two members between the first two groups. Click on the Preview button to preview the results. The preview will not show you the names of the members in groups, but it will show you how many groups and members will be in each group. Importing groups The term importing groups may give you the impression that you will import students into a group. The import groups button does not import students into groups. It imports a text file that you can use to create groups. So, if you need to create a lot of groups at once, you can use this feature to do this. This needs to be done by a site administrator. If you need to import students and put them into groups, use the upload students feature. However, instead of adding students to the cohort, you will add them to a course and group. 
You perform this by specifying the course and group fields in the upload file, as shown in the following code: username,email,firstname,lastname,course1,group1,course2 moodler_1,[email protected],Bill,Binky,history101,odds,science101 moodler_2,[email protected],Rose,Krial,history101,even,science101 moodler_3,[email protected],Jeff,Marco,history101,odds,science101 moodler_4,[email protected],Dave,Gallo,history101,even,science101 In this example, we have the minimum needed information to create new students. These are as follows: The username The e-mail address The first name The last name We have also enrolled all the students in two courses: history101 and science101. In the history101 course, Bill Binky, and Jeff Marco are placed in a group called odds. Rose Krial and Dave Gallo are placed in a group called even. In the science101 course, the students are not placed in any group. Remember that this student upload doesn't happen on the Groups page. It happens under Administration | Site Administration | Users | Upload users. Summary Cohorts and groups give you powerful tools to manage your students. Cohorts are a useful tool to quickly enroll and un-enroll large numbers of students. Groups enable you to separate students who are in the same course and give teachers the ability to quickly see only those students that they are responsible for. Useful Links: What's New in Moodle 2.0 Moodle for Online Communities Understanding Web-based Applications and Other Multimedia Forms
JSON with JSON.Net

Packt
25 Jun 2015
16 min read
In this article by Ray Rischpater, author of the book JavaScript JSON Cookbook, we show you how you can use strong typing in your applications with JSON using C#, Java, and TypeScript. You'll find the following recipes:

How to deserialize an object using Json.NET
How to handle date and time objects using Json.NET
How to deserialize an object using gson for Java
How to use TypeScript with Node.js
How to annotate simple types using TypeScript
How to declare interfaces using TypeScript
How to declare classes with interfaces using TypeScript
Using json2ts to generate TypeScript interfaces from your JSON

(For more resources related to this topic, see here.) While some say that strong types are for weak minds, the truth is that strong typing in programming languages can help you avoid whole classes of errors in which you mistakenly assume that an object of one type is really of a different type. Languages such as C# and Java provide strong types for exactly this reason. Fortunately, the JSON serializers for C# and Java support strong typing, which is especially handy once you've figured out your object representation and simply want to map JSON to instances of classes you've already defined. We use Json.NET for C# and gson for Java to convert from JSON to instances of classes you define in your application. Finally, we take a look at TypeScript, an extension of JavaScript that provides compile-time checking of types, compiling to plain JavaScript for use with Node.js and browsers. We'll look at how to install the TypeScript compiler for Node.js, how to use TypeScript to annotate types and interfaces, and how to use a web page by Timmy Kokke to automatically generate TypeScript interfaces from JSON objects.

How to deserialize an object using Json.NET In this recipe, we show you how to use Newtonsoft's Json.NET to deserialize JSON to an object that's an instance of a class. We'll use Json.NET because although this works with the existing .NET JSON serializer, there are other things that I want you to know about Json.NET, which we'll discuss in the next two recipes. Getting ready To begin, you need to be sure you have a reference to Json.NET in your project. The easiest way to do this is to use NuGet; launch NuGet, search for Json.NET, and click on Install. You'll also need a reference to the Newtonsoft.Json namespace in any file that needs those classes, with a using directive at the top of your file:

using Newtonsoft.Json;

How to do it… Here's an example that provides the implementation of a simple class, converts a JSON string to an instance of that class, and then converts the instance back into JSON:

using System;
using Newtonsoft.Json;

namespace JSONExample
{
    public class Record
    {
        public string call;
        public double lat;
        public double lng;
    }

    class Program
    {
        static void Main(string[] args)
        {
            String json = @"{ 'call': 'kf6gpe-9', 'lat': 21.9749, 'lng': 159.3686 }";

            var result = JsonConvert.DeserializeObject<Record>(
                json, new JsonSerializerSettings
                {
                    MissingMemberHandling = MissingMemberHandling.Error
                });
            Console.Write(JsonConvert.SerializeObject(result));
            return;
        }
    }
}

How it works… In order to deserialize the JSON in a type-safe manner, we need to have a class that has the same fields as our JSON. The Record class, defined in the first few lines, does this, defining fields for call, lat, and lng.
The Newtonsoft.Json namespace provides the JsonConvert class with static methods SerializeObject and DeserializeObject. DeserializeObject is a generic method, taking the type of the object that should be returned as a type argument, and as arguments the JSON to parse and an optional argument indicating options for the JSON parsing. We pass the MissingMemberHandling property as a setting, indicating with the value of the enumeration Error that in the event that a field is missing, the parser should throw an exception. After parsing the class, we convert it again to JSON and write the resulting JSON to the console. There's more… If you skip passing the MissingMemberHandling option or pass Ignore (the default), you can have mismatches between field names in your JSON and your class, which probably isn't what you want for type-safe conversion. You can also pass the NullValueHandling field with a value of Include or Ignore. If Include, fields with null values are included; if Ignore, fields with null values are ignored. See also The full documentation for Json.NET is at http://www.newtonsoft.com/json/help/html/Introduction.htm. Type-safe deserialization is also possible with JSON support using the .NET serializer; the syntax is similar. For an example, see the documentation for the JavaScriptSerializer class at https://msdn.microsoft.com/en-us/library/system.web.script.serialization.javascriptserializer(v=vs.110).aspx.

How to handle date and time objects using Json.NET Dates in JSON are problematic because JavaScript's dates are expressed in milliseconds from the epoch, which are generally unreadable to people. Different JSON parsers handle this differently; Json.NET has a nice IsoDateTimeConverter that formats the date and time in ISO format, making it human-readable for debugging or parsing on platforms other than JavaScript. You can extend this approach to convert any kind of formatted data in JSON attributes by creating new converter objects and using the converter object to convert from one value type to another (a rough sketch of such a converter appears after the See also section of this recipe). How to do it… Simply include a new IsoDateTimeConverter object when you call JsonConvert.SerializeObject, like this:

string json = JsonConvert.SerializeObject(p, new IsoDateTimeConverter());

How it works… This causes the serializer to invoke the IsoDateTimeConverter instance with any instance of date and time objects, returning ISO strings like this in your JSON:

2015-07-29T08:00:00

There's more… Note that this can be parsed by Json.NET, but not JavaScript; in JavaScript, you'll want to use a function like this:

function isoDateReviver(value) {
  if (typeof value === 'string') {
    var a = /^(\d{4})-(\d{2})-(\d{2})T(\d{2}):(\d{2}):(\d{2}(?:\.\d*)?)(?:([+-])(\d{2}):(\d{2}))?Z?$/.exec(value);
    if (a) {
      var utcMilliseconds = Date.UTC(+a[1],
        +a[2] - 1,
        +a[3],
        +a[4],
        +a[5],
        +a[6]);
      return new Date(utcMilliseconds);
    }
  }
  return value;
}

The rather hairy regular expression on the third line matches dates in the ISO format, extracting each of the fields. If the regular expression finds a match, it extracts each of the date fields, which are then used by the Date class's UTC method to create a new date. Note that the entire regular expression (everything between the / characters) should be on one line with no whitespace. It's a little long for this page, however! See also For more information on how Json.NET handles dates and times, see the documentation and example at http://www.newtonsoft.com/json/help/html/SerializeDateFormatHandling.htm.
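To make the idea of writing your own converter more concrete, here is a minimal sketch of a custom Json.NET converter. It is a hypothetical example (the UpperCaseStringConverter name and its upper-casing behavior are made up for illustration), but it shows the three members you override when you derive from JsonConverter: CanConvert, WriteJson, and ReadJson.

using System;
using Newtonsoft.Json;

public class UpperCaseStringConverter : JsonConverter
{
    // Tell Json.NET which types this converter handles; here, plain strings only
    public override bool CanConvert(Type objectType)
    {
        return objectType == typeof(string);
    }

    // Called when serializing: write the string out in upper case
    public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
    {
        writer.WriteValue(((string)value).ToUpperInvariant());
    }

    // Called when deserializing: read the raw string back and lower-case it again
    public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
    {
        return reader.Value == null ? null : ((string)reader.Value).ToLowerInvariant();
    }
}

You would pass an instance of it to SerializeObject or DeserializeObject exactly as the IsoDateTimeConverter is passed in the recipe above, for example JsonConvert.SerializeObject(p, new UpperCaseStringConverter()).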
How to deserialize an object using gson for Java

Like Json.NET, gson provides a way to specify the destination class to which you're deserializing a JSON object.

Getting ready

You'll need to include the gson JAR file in your application, just as you would for any other external API.

How to do it…

You use the same method as you use for type-unsafe JSON parsing with gson, fromJson, except you pass the class object to gson as the second argument, like this:

// Assuming we have a class Record that looks like this:
/*
class Record {
  private String call;
  private float lat;
  private float lng;
  // public API would access these fields
}
*/

Gson gson = new com.google.gson.Gson();
String json = "{ \"call\": \"kf6gpe-9\", \"lat\": 21.9749, \"lng\": 159.3686 }";
Record result = gson.fromJson(json, Record.class);

How it works…

The fromJson method always takes a Java class. In the example in this recipe, we convert directly to a plain old Java object that our application can use without needing to use the dereferencing and type conversion interface of JsonElement that gson provides.

There's more…

The gson library can also deal with nested types and arrays as well. You can also hide fields from being serialized or deserialized by declaring them transient, which makes sense because transient fields aren't serialized.

See also

The documentation for gson and its support for deserializing instances of classes is at https://sites.google.com/site/gson/gson-user-guide#TOC-Object-Examples.

How to use TypeScript with Node.js

Using TypeScript with Visual Studio is easy; it's just part of the installation of Visual Studio for any version after Visual Studio 2013 Update 2. Getting the TypeScript compiler for Node.js is almost as easy; it's an npm install away.

How to do it…

On a command line with npm in your path, run the following command:

npm install -g typescript

The npm option -g tells npm to install the TypeScript compiler globally, so it's available to every Node.js application you write. Once you run it, npm downloads and installs the TypeScript compiler binary for your platform.

There's more…

Once you run this command to install the compiler, you'll have the TypeScript compiler tsc available on the command line. Compiling a file with tsc is as easy as writing the source code, saving it in a file with the .ts extension, and running tsc on it. For example, given the following TypeScript saved in the file hello.ts:

function greeter(person: string) {
  return "Hello, " + person;
}

var user: string = "Ray";

console.log(greeter(user));

Running tsc hello.ts at the command line creates the following JavaScript:

function greeter(person) {
  return "Hello, " + person;
}

var user = "Ray";

console.log(greeter(user));

Try it! As we'll see in the next section, the function declaration for greeter contains a single TypeScript annotation; it declares the argument person to be string. Add the following line to the bottom of hello.ts:

console.log(greeter(2));

Now, run the tsc hello.ts command again; you'll get an error like this one:

C:\Users\rarischp\Documents\node.js\typescript\hello.ts(8,13): error TS2082: Supplied parameters do not match any signature of call target:
        Could not apply type 'string' to argument 1 which is of type 'number'.
C:\Users\rarischp\Documents\node.js\typescript\hello.ts(8,13): error TS2087: Could not select overload for 'call' expression.

This error indicates that I'm attempting to call greeter with a value of the wrong type, passing a number where greeter expects a string.
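If you did want a greeting that works for both kinds of argument, there are a couple of simple ways to satisfy the compiler. The snippet below is a sketch rather than part of the original example; it assumes the greeter function from hello.ts above, and the union-type form requires a reasonably recent TypeScript compiler (1.4 or later):

// Option 1: convert the argument at the call site so it really is a string
console.log(greeter(String(2)));

// Option 2: declare a variant that accepts either a string or a number
function flexibleGreeter(person: string | number): string {
  return "Hello, " + String(person);
}

console.log(flexibleGreeter(2));
console.log(flexibleGreeter("Ray"));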
In the next recipe, we'll look at the kinds of type annotations TypeScript supports for simple types. See also The TypeScript home page, with tutorials and reference documentation, is at http://www.typescriptlang.org/. How to annotate simple types using TypeScript Type annotations with TypeScript are simple decorators appended to the variable or function after a colon. There's support for the same primitive types as in JavaScript, and to declare interfaces and classes, which we will discuss next. How to do it… Here's a simple example of some variable declarations and two function declarations: function greeter(person: string): string { return "Hello, " + person; }   function circumference(radius: number) : number { var pi: number = 3.141592654; return 2 * pi * radius; }   var user: string = "Ray";   console.log(greeter(user)); console.log("You need " + circumference(2) + " meters of fence for your dog."); This example shows how to annotate functions and variables. How it works… Variables—either standalone or as arguments to a function—are decorated using a colon and then the type. For example, the first function, greeter, takes a single argument, person, which must be a string. The second function, circumference, takes a radius, which must be a number, and declares a single variable in its scope, pi, which must be a number and has the value 3.141592654. You declare functions in the normal way as in JavaScript, and then add the type annotation after the function name, again using a colon and the type. So, greeter returns a string, and circumference returns a number. There's more… TypeScript defines the following fundamental type decorators, which map to their underlying JavaScript types: array: This is a composite type. For example, you can write a list of strings as follows: var list:string[] = [ "one", "two", "three"]; boolean: This type decorator can contain the values true and false. number: This type decorator is like JavaScript itself, can be any floating-point number. string: This type decorator is a character string. enum: An enumeration, written with the enum keyword, like this: enumColor { Red = 1, Green, Blue }; var c : Color = Color.Blue; any: This type indicates that the variable may be of any type. void: This type indicates that the value has no type. You'll use void to indicate a function that returns nothing. See also For a list of the TypeScript types, see the TypeScript handbook at http://www.typescriptlang.org/Handbook. How to declare interfaces using TypeScript An interface defines how something behaves, without defining the implementation. In TypeScript, an interface names a complex type by describing the fields it has. This is known as structural subtyping. How to do it… Declaring an interface is a little like declaring a structure or class; you define the fields in the interface, each with its own type, like this: interface Record { call: string; lat: number; lng: number; }   Function printLocation(r: Record) { console.log(r.call + ': ' + r.lat + ', ' + r.lng); }   var myObj = {call: 'kf6gpe-7', lat: 21.9749, lng: 159.3686};   printLocation(myObj); How it works… The interface keyword in TypeScript defines an interface; as I already noted, an interface consists of the fields it declares with their types. In this listing, I defined a plain JavaScript object, myObj and then called the function printLocation, that I previously defined, which takes a Record. 
When calling printLocation with myObj, the TypeScript compiler checks the fields and types each field and only permits a call to printLocation if the object matches the interface. There's more… Beware! TypeScript can only provide compile-type checking. What do you think the following code does? interface Record { call: string; lat: number; lng: number; }   Function printLocation(r: Record) { console.log(r.call + ': ' + r.lat + ', ' + r.lng); }   var myObj = {call: 'kf6gpe-7', lat: 21.9749, lng: 159.3686}; printLocation(myObj);   var json = '{"call":"kf6gpe-7","lat":21.9749}'; var myOtherObj = JSON.parse(json); printLocation(myOtherObj); First, this compiles with tsc just fine. When you run it with node, you'll see the following: kf6gpe-7: 21.9749, 159.3686 kf6gpe-7: 21.9749, undefined What happened? The TypeScript compiler does not add run-time type checking to your code, so you can't impose an interface on a run-time created object that's not a literal. In this example, because the lng field is missing from the JSON, the function can't print it, and prints the value undefined instead. This doesn't mean that you shouldn't use TypeScript with JSON, however. Type annotations serve a purpose for all readers of the code, be they compilers or people. You can use type annotations to indicate your intent as a developer, and readers of the code can better understand the design and limitation of the code you write. See also For more information about interfaces, see the TypeScript documentation at http://www.typescriptlang.org/Handbook#interfaces. How to declare classes with interfaces using TypeScript Interfaces let you specify behavior without specifying implementation; classes let you encapsulate implementation details behind an interface. TypeScript classes can encapsulate fields or methods, just as classes in other languages. How to do it… Here's an example of our Record structure, this time as a class with an interface: class RecordInterface { call: string; lat: number; lng: number;   constructor(c: string, la: number, lo: number) {} printLocation() {}   }   class Record implements RecordInterface { call: string; lat: number; lng: number; constructor(c: string, la: number, lo: number) {    this.call = c;    this.lat = la;    this.lng = lo; }   printLocation() {    console.log(this.call + ': ' + this.lat + ', ' + this.lng); } }   var myObj : Record = new Record('kf6gpe-7', 21.9749, 159.3686);   myObj.printLocation(); How it works… The interface keyword, again, defines an interface just as the previous section shows. The class keyword, which you haven't seen before, implements a class; the optional implements keyword indicates that this class implements the interface RecordInterface. Note that the class implementing the interface must have all of the same fields and methods that the interface prescribes; otherwise, it doesn't meet the requirements of the interface. As a result, our Record class includes fields for call, lat, and lng, with the same types as in the interface, as well as the methods constructor and printLocation. The constructor method is a special method called when you create a new instance of the class using new. Note that with classes, unlike regular objects, the correct way to create them is by using a constructor, rather than just building them up as a collection of fields and values. We do that on the second to the last line of the listing, passing the constructor arguments as function arguments to the class constructor. 
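Given the run-time caveat from the previous recipe, it can be worth guarding the constructor call when the data comes from JSON.parse rather than from a literal. The following sketch is not from the book; the isRecord helper and its field checks are illustrative, and it assumes the Record class defined above:

function isRecord(obj: any): boolean {
  if (!obj) {
    return false;
  }
  // Check that every field the Record constructor needs is present and correctly typed
  return typeof obj.call === 'string' &&
    typeof obj.lat === 'number' &&
    typeof obj.lng === 'number';
}

var rawJson = '{"call":"kf6gpe-7","lat":21.9749,"lng":159.3686}';
var parsedObj = JSON.parse(rawJson);

if (isRecord(parsedObj)) {
  var checkedRecord: Record = new Record(parsedObj.call, parsedObj.lat, parsedObj.lng);
  checkedRecord.printLocation();
} else {
  console.log('The JSON is missing one or more Record fields');
}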
See also There's a lot more you can do with classes, including defining inheritance and creating public and private fields and methods. For more information about classes in TypeScript, see the documentation at http://www.typescriptlang.org/Handbook#classes. Using json2ts to generate TypeScript interfaces from your JSON This last recipe is more of a tip than a recipe; if you've got some JSON you developed using another programming language or by hand, you can easily create a TypeScript interface for objects to contain the JSON by using Timmy Kokke's json2ts website. How to do it… Simply go to http://json2ts.com and paste your JSON in the box that appears, and click on the generate TypeScript button. You'll be rewarded with a second text-box that appears and shows you the definition of the TypeScript interface, which you can save as its own file and include in your TypeScript applications. How it works… The following figure shows a simple example: You can save this typescript as its own file, a definition file, with the suffix .d.ts, and then include the module with your TypeScript using the import keyword, like this: import module = require('module'); Summary In this article we looked at how you can adapt the type-free nature of JSON with the type safety provided by languages such as C#, Java, and TypeScript to reduce programming errors in your application. Resources for Article: Further resources on this subject: Playing with Swift [article] Getting Started with JSON [article] Top two features of GSON [article]

Code Style in Django

Packt
17 Jun 2015
16 min read
In this article written by Sanjeev Jaiswal and Ratan Kumar, authors of the book Learning Django Web Development, this article will cover all the basic topics which you would require to follow, such as coding practices for better Django web development, which IDE to use, version control, and so on. We will learn the following topics in this article: Django coding style Using IDE for Django web development Django project structure This article is based on the important fact that code is read much more often than it is written. Thus, before you actually start building your projects, we suggest that you familiarize yourself with all the standard practices adopted by the Django community for web development. Django coding style Most of Django's important practices are based on Python. Though chances are you already know them, we will still take a break and write all the documented practices so that you know these concepts even before you begin. To mainstream standard practices, Python enhancement proposals are made, and one such widely adopted standard practice for development is PEP8, the style guide for Python code–the best way to style the Python code authored by Guido van Rossum. The documentation says, "PEP8 deals with semantics and conventions associated with Python docstrings." For further reading, please visit http://legacy.python.org/dev/peps/pep-0008/. Understanding indentation in Python When you are writing Python code, indentation plays a very important role. It acts as a block like in other languages, such as C or Perl. But it's always a matter of discussion amongst programmers whether we should use tabs or spaces, and, if space, how many–two or four or eight. Using four spaces for indentation is better than eight, and if there are a few more nested blocks, using eight spaces for each indentation may take up more characters than can be shown in single line. But, again, this is the programmer's choice. The following is what incorrect indentation practices lead to: >>> def a(): ...   print "foo" ...     print "bar" IndentationError: unexpected indent So, which one we should use: tabs or spaces? Choose any one of them, but never mix up tabs and spaces in the same project or else it will be a nightmare for maintenance. The most popular way of indention in Python is with spaces; tabs come in second. If any code you have encountered has a mixture of tabs and spaces, you should convert it to using spaces exclusively. Doing indentation right – do we need four spaces per indentation level? There has been a lot of confusion about it, as of course, Python's syntax is all about indentation. Let's be honest: in most cases, it is. So, what is highly recommended is to use four spaces per indentation level, and if you have been following the two-space method, stop using it. There is nothing wrong with it, but when you deal with multiple third party libraries, you might end up having a spaghetti of different versions, which will ultimately become hard to debug. Now for indentation. When your code is in a continuation line, you should wrap it vertically aligned, or you can go in for a hanging indent. When you are using a hanging indent, the first line should not contain any argument and further indentation should be used to clearly distinguish it as a continuation line. A hanging indent (also known as a negative indent) is a style of indentation in which all lines are indented except for the first line of the paragraph. The preceding paragraph is the example of hanging indent. 
The following example illustrates how you should use a proper indentation method while writing the code: bar = some_function_name(var_first, var_second,                                            var_third, var_fourth) # Here indentation of arguments makes them grouped, and stand clear from others. def some_function_name(        var_first, var_second, var_third,        var_fourth):    print(var_first) # This example shows the hanging intent. We do not encourage the following coding style, and it will not work in Python anyway: # When vertical alignment is not used, Arguments on the first line are forbidden foo = some_function_name(var_first, var_second,    var_third, var_fourth) # Further indentation is required as indentation is not distinguishable between arguments and source code. def some_function_name(    var_first, var_second, var_third,    var_fourth):    print(var_first) Although extra indentation is not required, if you want to use extra indentation to ensure that the code will work, you can use the following coding style: # Extra indentation is not necessary. if (this    and that):    do_something() Ideally, you should limit each line to a maximum of 79 characters. It allows for a + or – character used for viewing difference using version control. It is even better to limit lines to 79 characters for uniformity across editors. You can use the rest of the space for other purposes. The importance of blank lines The importance of two blank lines and single blank lines are as follows: Two blank lines: A double blank lines can be used to separate top-level functions and the class definition, which enhances code readability. Single blank lines: A single blank line can be used in the use cases–for example, each function inside a class can be separated by a single line, and related functions can be grouped together with a single line. You can also separate the logical section of source code with a single line. Importing a package Importing a package is a direct implication of code reusability. Therefore, always place imports at the top of your source file, just after any module comments and document strings, and before the module's global and constants as variables. Each import should usually be on separate lines. The best way to import packages is as follows: import os import sys It is not advisable to import more than one package in the same line, for example: import sys, os You may import packages in the following fashion, although it is optional: from django.http import Http404, HttpResponse If your import gets longer, you can use the following method to declare them: from django.http import ( Http404, HttpResponse, HttpResponsePermanentRedirect ) Grouping imported packages Package imports can be grouped in the following ways: Standard library imports: Such as sys, os, subprocess, and so on. import reimport simplejson Related third party imports: These are usually downloaded from the Python cheese shop, that is, PyPy (using pip install). Here is an example: from decimal import * Local application / library-specific imports: This included the local modules of your projects, such as models, views, and so on. from models import ModelFoofrom models import ModelBar Naming conventions in Python/Django Every programming language and framework has its own naming convention. The naming convention in Python/Django is more or less the same, but it is worth mentioning it here. 
You will need to follow this while creating a variable name or global variable name and when naming a class, package, modules, and so on. This is the common naming convention that we should follow: Name the variables properly: Never use single characters, for example, 'x' or 'X' as variable names. It might be okay for your normal Python scripts, but when you are building a web application, you must name the variable properly as it determines the readability of the whole project. Naming of packages and modules: Lowercase and short names are recommended for modules. Underscores can be used if their use would improve readability. Python packages should also have short, all-lowercase names, although the use of underscores is discouraged. Since module names are mapped to file names (models.py, urls.py, and so on), it is important that module names be chosen to be fairly short as some file systems are case insensitive and truncate long names. Naming a class: Class names should follow the CamelCase naming convention, and classes for internal use can have a leading underscore in their name. Global variable names: First of all, you should avoid using global variables, but if you need to use them, prevention of global variables from getting exported can be done via __all__, or by defining them with a prefixed underscore (the old, conventional way). Function names and method argument: Names of functions should be in lowercase and separated by an underscore and self as the first argument to instantiate methods. For classes or methods, use CLS or the objects for initialization. Method names and instance variables: Use the function naming rules—lowercase with words separated by underscores as necessary to improve readability. Use one leading underscore only for non-public methods and instance variables. Using IDE for faster development There are many options on the market when it comes to source code editors. Some people prefer full-fledged IDEs, whereas others like simple text editors. The choice is totally yours; pick up whatever feels more comfortable. If you already use a certain program to work with Python source files, I suggest that you stick to it as it will work just fine with Django. Otherwise, I can make a couple of recommendations, such as these: SublimeText: This editor is lightweight and very powerful. It is available for all major platforms, supports syntax highlighting and code completion, and works well with Python. The editor is open source and you can find it at http://www.sublimetext.com/ PyCharm: This, I would say, is most intelligent code editor of all and has advanced features, such as code refactoring and code analysis, which makes development cleaner. Features for Django include template debugging (which is a winner) and also quick documentation, so this look-up is a must for beginners. The community edition is free and you can sample a 30-day trial version before buying the professional edition. Setting up your project with the Sublime text editor Most of the examples that we will show you in this book will be written using Sublime text editor. In this section, we will show how to install and set up the Django project. Download and installation: You can download Sublime from the download tab of the site www.sublimetext.com. Click on the downloaded file option to install. Setting up for Django: Sublime has a very extensive plug-in ecosystem, which means that once you have downloaded the editor, you can install plug-ins for adding more features to it. 
After successful installation, it will look like this: Most important of all is Package Control, which is the manager for installing additional plugins directly from within Sublime. This will be your only manual installation of the package. It will take care of the rest of the package installation ahead. Some of the recommendations for Python development using Sublime are as follows: Sublime Linter: This gives instant feedback about the Python code as you write it. It also has PEP8 support; this plugin will highlight in real time the things we discussed about better coding in the previous section so that you can fix them.   Sublime CodeIntel: This is maintained by the developer of SublimeLint. Sublime CodeIntel have some of advanced functionalities, such as directly go-to definition, intelligent code completion, and import suggestions.   You can also explore other plugins for Sublime to increase your productivity. Setting up the pycharm IDE You can use any of your favorite IDEs for Django project development. We will use pycharm IDE for this book. This IDE is recommended as it will help you at the time of debugging, using breakpoints that will save you a lot of time figuring out what actually went wrong. Here is how to install and set up pycharm IDE for Django: Download and installation: You can check the features and download the pycharm IDE from the following link: http://www.jetbrains.com/pycharm/ Setting up for Django: Setting up pycharm for Django is very easy. You just have to import the project folder and give the manage.py path, as shown in the following figure: The Django project structure The Django project structure has been changed in the 1.6 release version. Django (django-admin.py) also has a startapp command to create an application, so it is high time to tell you the difference between an application and a project in Django. A project is a complete website or application, whereas an application is a small, self-contained Django application. An application is based on the principle that it should do one thing and do it right. To ease out the pain of building a Django project right from scratch, Django gives you an advantage by auto-generating the basic project structure files from which any project can be taken forward for its development and feature addition. Thus, to conclude, we can say that a project is a collection of applications, and an application can be written as a separate entity and can be easily exported to other applications for reusability. To create your first Django project, open a terminal (or Command Prompt for Windows users), type the following command, and hit Enter: $ django-admin.py startproject django_mytweets This command will make a folder named django_mytweets in the current directory and create the initial directory structure inside it. Let's see what kind of files are created. The new structure is as follows: django_mytweets/// django_mytweets/ manage.py This is the content of django_mytweets/: django_mytweets/ __init__.py settings.py urls.py wsgi.py Here is a quick explanation of what these files are: django_mytweets (the outer folder): This folder is the project folder. Contrary to the earlier project structure in which the whole project was kept in a single folder, the new Django project structure somehow hints that every project is an application inside Django. This means that you can import other third party applications on the same level as the Django project. 
This folder also contains the manage.py file, which includes all the project management settings.

manage.py: This utility script is used to manage our project. You can think of it as your project's version of django-admin.py. Actually, both django-admin.py and manage.py share the same backend code. Further clarification about the settings will be provided when we tweak them later. Let's have a look at the manage.py file:

#!/usr/bin/env python
import os
import sys

if __name__ == "__main__":
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "django_mytweets.settings")
    from django.core.management import execute_from_command_line
    execute_from_command_line(sys.argv)

The source code of the manage.py file will be self-explanatory once you read the following code explanation.

#!/usr/bin/env python

The first line is just the declaration that the following file is a Python file, followed by the import section in which the os and sys modules are imported. These modules mainly contain system-related operations.

import os
import sys

The next piece of code checks whether the file is being executed directly as the main module, and then loads the Django settings module onto the current path. As you are already running a virtual environment, this will set the path for all the modules to the path of the currently running virtual environment.

if __name__ == "__main__":
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "django_mytweets.settings")

django_mytweets/ (the inner folder)

__init__.py: Django projects are Python packages, and this file is required to tell Python that this folder is to be treated as a package. A package in Python's terminology is a collection of modules, and they are used to group similar files together and prevent naming conflicts.

settings.py: This is the main configuration file for your Django project. In it, you can specify a variety of options, including database settings, site language(s), which Django features need to be enabled, and so on. By default, the database is configured to use SQLite, which is advisable for testing purposes. Here, we will only see how to enter the database in the settings file; it also contains the basic setting configuration, and with a slight modification in the manage.py file, it can be moved to another folder, such as config or conf.

To make every other third-party application a part of the project, we need to register it in the settings.py file. INSTALLED_APPS is a variable that contains all the entries about the installed applications. As the project grows, it becomes difficult to manage; therefore, there are three logical partitions for the INSTALLED_APPS variable, as follows:

DEFAULT_APPS: This parameter contains the default Django installed applications (such as the admin)

THIRD_PARTY_APPS: This parameter contains other applications, such as SocialAuth, used for social authentication

LOCAL_APPS: This parameter contains the applications that are created by you

urls.py: This is another configuration file. You can think of it as a mapping between URLs and the Django view functions that handle them. This file is one of Django's more powerful features.

When we start writing code for our application, we will create new files inside the project's folder. So, the folder also serves as a container for our code. Now that you have a general idea of the structure of a Django project, let's configure our database system.
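To illustrate how these pieces usually fit together, here is a sketch of the relevant parts of settings.py. It is not the exact file that django-admin.py generates (which defines a single INSTALLED_APPS tuple), and the application names in the THIRD_PARTY_APPS and LOCAL_APPS groups are hypothetical; only the SQLite backend shown is the stock default:

# settings.py (excerpt): grouping INSTALLED_APPS and pointing Django
# at a local SQLite database for development.
import os

BASE_DIR = os.path.dirname(os.path.dirname(__file__))

DEFAULT_APPS = (
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
)

THIRD_PARTY_APPS = (
    # 'social_auth',  # hypothetical third-party application
)

LOCAL_APPS = (
    # 'tweets',       # hypothetical application you create yourself
)

INSTALLED_APPS = DEFAULT_APPS + THIRD_PARTY_APPS + LOCAL_APPS

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
    }
}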
Summary

In this article, we looked at the coding style practices followed by the Django community, set up an editor or IDE for Django web development, created our first project with django-admin.py, and walked through the default Django project structure. Resources for Article: Further resources on this subject: Tinkering Around in Django JavaScript Integration [article] Adding a developer with Django forms [article] So, what is Django? [article]

Digging Deep into Requests

Packt
16 Jun 2015
17 min read
In this article by Rakesh Vidya Chandra and Bala Subrahmanyam Varanasi, authors of the book Python Requests Essentials, we are going to deal with advanced topics in the Requests module. There are many more features in the Requests module that makes the interaction with the web a cakewalk. Let us get to know more about different ways to use Requests module which helps us to understand the ease of using it. (For more resources related to this topic, see here.) In a nutshell, we will cover the following topics: Persisting parameters across requests using Session objects Revealing the structure of request and response Using prepared requests Verifying SSL certificate with Requests Body Content Workflow Using generator for sending chunk encoded requests Getting the request method arguments with event hooks Iterating over streaming API Self-describing the APIs with link headers Transport Adapter Persisting parameters across Requests using Session objects The Requests module contains a session object, which has the capability to persist settings across the requests. Using this session object, we can persist cookies, we can create prepared requests, we can use the keep-alive feature and do many more things. The Session object contains all the methods of Requests API such as GET, POST, PUT, DELETE and so on. Before using all the capabilities of the Session object, let us get to know how to use sessions and persist cookies across requests. Let us use the session method to get the resource. >>> import requests >>> session = requests.Session() >>> response = requests.get("https://google.co.in", cookies={"new-cookie-identifier": "1234abcd"}) In the preceding example, we created a session object with requests and its get method is used to access a web resource. The cookie value which we had set in the previous example will be accessible using response.request.headers. >>> response.request.headers CaseInsensitiveDict({'Cookie': 'new-cookie-identifier=1234abcd', 'Accept-Encoding': 'gzip, deflate, compress', 'Accept': '*/*', 'User-Agent': 'python-requests/2.2.1 CPython/2.7.5+ Linux/3.13.0-43-generic'}) >>> response.request.headers['Cookie'] 'new-cookie-identifier=1234abcd' With session object, we can specify some default values of the properties, which needs to be sent to the server using GET, POST, PUT and so on. We can achieve this by specifying the values to the properties like headers, auth and so on, on a Session object. >>> session.params = {"key1": "value", "key2": "value2"} >>> session.auth = ('username', 'password') >>> session.headers.update({'foo': 'bar'}) In the preceding example, we have set some default values to the properties—params, auth, and headers using the session object. We can override them in the subsequent request, as shown in the following example, if we want to: >>> session.get('http://mysite.com/new/url', headers={'foo': 'new-bar'}) Revealing the structure of request and response A Requests object is the one which is created by the user when he/she tries to interact with a web resource. It will be sent as a prepared request to the server and does contain some parameters which are optional. Let us have an eagle eye view on the parameters: Method: This is the HTTP method to be used to interact with the web service. For example: GET, POST, PUT. URL: The web address to which the request needs to be sent. headers: A dictionary of headers to be sent in the request. files: This can be used while dealing with the multipart upload. 
It's the dictionary of files, with key as file name and value as file object. data: This is the body to be attached to the request.json. There are two cases that come in to the picture here: If json is provided, content-type in the header is changed to application/json and at this point, json acts as a body to the request. In the second case, if both json and data are provided together, data is silently ignored. params: A dictionary of URL parameters to append to the URL. auth: This is used when we need to specify the authentication to the request. It's a tuple containing username and password. cookies: A dictionary or a cookie jar of cookies which can be added to the request. hooks: A dictionary of callback hooks. A Response object contains the response of the server to a HTTP request. It is generated once Requests gets a response back from the server. It contains all of the information returned by the server and also stores the Request object we created originally. Whenever we make a call to a server using the requests, two major transactions are taking place in this context which are listed as follows: We are constructing a Request object which will be sent out to the server to request a resource A Response object is generated by the requests module Now, let us look at an example of getting a resource from Python's official site. >>> response = requests.get('https://python.org') In the preceding line of code, a requests object gets constructed and will be sent to 'https://python.org'. Thus obtained Requests object will be stored in the response.request variable. We can access the headers of the Request object which was sent off to the server in the following way: >>> response.request.headers CaseInsensitiveDict({'Accept-Encoding': 'gzip, deflate, compress', 'Accept': '*/*', 'User-Agent': 'python-requests/2.2.1 CPython/2.7.5+ Linux/3.13.0-43-generic'}) The headers returned by the server can be accessed with its 'headers' attribute as shown in the following example: >>> response.headers CaseInsensitiveDict({'content-length': '45950', 'via': '1.1 varnish', 'x-cache': 'HIT', 'accept-ranges': 'bytes', 'strict-transport-security': 'max-age=63072000; includeSubDomains', 'vary': 'Cookie', 'server': 'nginx', 'age': '557','content-type': 'text/html; charset=utf-8', 'public-key-pins': 'max-age=600; includeSubDomains; ..) The response object contains different attributes like _content, status_code, headers, url, history, encoding, reason, cookies, elapsed, request. >>> response.status_code 200 >>> response.url u'https://www.python.org/' >>> response.elapsed datetime.timedelta(0, 1, 904954) >>> response.reason 'OK' Using prepared Requests Every request we send to the server turns to be a PreparedRequest by default. The request attribute of the Response object which is received from an API call or a session call is actually the PreparedRequest that was used. There might be cases in which we ought to send a request which would incur an extra step of adding a different parameter. Parameters can be cookies, files, auth, timeout and so on. We can handle this extra step efficiently by using the combination of sessions and prepared requests. Let us look at an example: >>> from requests import Request, Session >>> header = {} >>> request = Request('get', 'some_url', headers=header) We are trying to send a get request with a header in the previous example. Now, take an instance where we are planning to send the request with the same method, URL, and headers, but we want to add some more parameters to it. 
In this condition, we can use the session method to receive complete session level state to access the parameters of the initial sent request. This can be done by using the session object. >>> from requests import Request, Session >>> session = Session() >>> request1 = Request('GET', 'some_url', headers=header) Now, let us prepare a request using the session object to get the values of the session level state: >>> prepare = session.prepare_request(request1) We can send the request object request with more parameters now, as follows: >>> response = session.send(prepare, stream=True, verify=True) 200 Voila! Huge time saving! The prepare method prepares the complete request with the supplied parameters. In the previous example, the prepare_request method was used. There are also some other methods like prepare_auth, prepare_body, prepare_cookies, prepare_headers, prepare_hooks, prepare_method, prepare_url which are used to create individual properties. Verifying an SSL certificate with Requests Requests provides the facility to verify an SSL certificate for HTTPS requests. We can use the verify argument to check whether the host's SSL certificate is verified or not. Let us consider a website which has got no SSL certificate. We shall send a GET request with the argument verify to it. The syntax to send the request is as follows: requests.get('no ssl certificate site', verify=True) As the website doesn't have an SSL certificate, it will result an error similar to the following: requests.exceptions.ConnectionError: ('Connection aborted.', error(111, 'Connection refused')) Let us verify the SSL certificate for a website which is certified. Consider the following example: >>> requests.get('https://python.org', verify=True) <Response [200]> In the preceding example, the result was 200, as the mentioned website is SSL certified one. If we do not want to verify the SSL certificate with a request, then we can put the argument verify=False. By default, the value of verify will turn to True. Body content workflow Take an instance where a continuous stream of data is being downloaded when we make a request. In this situation, the client has to listen to the server continuously until it receives the complete data. Consider the case of accessing the content from the response first and the worry about the body next. In the above two situations, we can use the parameter stream. Let us look at an example: >>> requests.get("https://pypi.python.org/packages/source/F/Flask/Flask-0.10.1.tar.gz", stream=True) If we make a request with the parameter stream=True, the connection remains open and only the headers of the response will be downloaded. This gives us the capability to fetch the content whenever we need by specifying the conditions like the number of bytes of data. The syntax is as follows: if int(request.headers['content_length']) < TOO_LONG: content = r.content By setting the parameter stream=True and by accessing the response as a file-like object that is response.raw, if we use the method iter_content, we can iterate over response.data. This will avoid reading of larger responses at once. The syntax is as follows: iter_content(chunk_size=size in bytes, decode_unicode=False) In the same way, we can iterate through the content using iter_lines method which will iterate over the response data one line at a time. 
The syntax is as follows: iter_lines(chunk_size = size in bytes, decode_unicode=None, delimitter=None) The important thing that should be noted while using the stream parameter is it doesn't release the connection when it is set as True, unless all the data is consumed or response.close is executed. Keep-alive facility As the urllib3 supports the reuse of the same socket connection for multiple requests, we can send many requests with one socket and receive the responses using the keep-alive feature in the Requests library. Within a session, it turns to be automatic. Every request made within a session automatically uses the appropriate connection by default. The connection that is being used will be released after all the data from the body is read. Streaming uploads A file-like object which is of massive size can be streamed and uploaded using the Requests library. All we need to do is to supply the contents of the stream as a value to the data attribute in the request call as shown in the following lines. The syntax is as follows: with open('massive-body', 'rb') as file:    requests.post('http://example.com/some/stream/url',                  data=file) Using generator for sending chunk encoded Requests Chunked transfer encoding is a mechanism for transferring data in an HTTP request. With this mechanism, the data is sent in a series of chunks. Requests supports chunked transfer encoding, for both outgoing and incoming requests. In order to send a chunk encoded request, we need to supply a generator for your body. The usage is shown in the following example: >>> def generator(): ...     yield "Hello " ...     yield "World!" ... >>> requests.post('http://example.com/some/chunked/url/path',                  data=generator()) Getting the request method arguments with event hooks We can alter the portions of the request process signal event handling using hooks. For example, there is hook named response which contains the response generated from a request. It is a dictionary which can be passed as a parameter to the request. The syntax is as follows: hooks = {hook_name: callback_function, … } The callback_function parameter may or may not return a value. When it returns a value, it is assumed that it is to replace the data that was passed in. If the callback function doesn't return any value, there won't be any effect on the data. Here is an example of a callback function: >>> def print_attributes(request, *args, **kwargs): ...     print(request.url) ...     print(request .status_code) ...     print(request .headers) If there is an error in the execution of callback_function, you'll receive a warning message in the standard output. Now let us print some of the attributes of the request, using the preceding callback_function: >>> requests.get('https://www.python.org/',                  hooks=dict(response=print_attributes)) https://www.python.org/ 200 CaseInsensitiveDict({'content-type': 'text/html; ...}) <Response [200]> Iterating over streaming API Streaming API tends to keep the request open allowing us to collect the stream data in real time. While dealing with a continuous stream of data, to ensure that none of the messages being missed from it we can take the help of iter_lines() in Requests. The iter_lines() iterates over the response data line by line. This can be achieved by setting the parameter stream as True while sending the request. It's better to keep in mind that it's not always safe to call the iter_lines() function as it may result in loss of received data. 
Consider the following example taken from http://docs.python-requests.org/en/latest/user/advanced/#streaming-requests: >>> import json >>> import requests >>> r = requests.get('http://httpbin.org/stream/4', stream=True) >>> for line in r.iter_lines(): ...     if line: ...         print(json.loads(line) ) In the preceding example, the response contains a stream of data. With the help of iter_lines(), we tried to print the data by iterating through every line. Encodings As specified in the HTTP protocol (RFC 7230), applications can request the server to return the HTTP responses in an encoded format. The process of encoding turns the response content into an understandable format which makes it easy to access it. When the HTTP header fails to return the type of encoding, Requests will try to assume the encoding with the help of chardet. If we access the response headers of a request, it does contain the keys of content-type. Let us look at a response header's content-type: >>> re = requests.get('http://google.com') >>> re.headers['content-type'] 'text/html; charset=ISO-8859-1' In the preceding example the content type contains 'text/html; charset=ISO-8859-1'. This happens when the Requests finds the charset value to be None and the 'content-type' value to be 'Text'. It follows the protocol RFC 7230 to change the value of charset to ISO-8859-1 in this type of a situation. In case we are dealing with different types of encodings like 'utf-8', we can explicitly specify the encoding by setting the property to Response.encoding. HTTP verbs Requests support the usage of the full range of HTTP verbs which are defined in the following table. To most of the supported verbs, 'url' is the only argument that must be passed while using them. Method Description GET GET method requests a representation of the specified resource. Apart from retrieving the data, there will be no other effect of using this method. Definition is given as requests.get(url, **kwargs) POST The POST verb is used for the creation of new resources. The submitted data will be handled by the server to a specified resource. Definition is given as requests.post(url, data=None, json=None, **kwargs) PUT This method uploads a representation of the specified URI. If the URI is not pointing to any resource, the server can create a new object with the given data or it will modify the existing resource. Definition is given as requests.put(url, data=None, **kwargs) DELETE This is pretty easy to understand. It is used to delete the specified resource. Definition is given as requests.delete(url, **kwargs) HEAD This verb is useful for retrieving meta-information written in response headers without having to fetch the response body. Definition is given as requests.head(url, **kwargs) OPTIONS OPTIONS is a HTTP method which returns the HTTP methods that the server supports for a specified URL. Definition is given as requests.options(url, **kwargs) PATCH This method is used to apply partial modifications to a resource. Definition is given as requests.patch(url, data=None, **kwargs) Self-describing the APIs with link headers Take a case of accessing a resource in which the information is accommodated in different pages. If we need to approach the next page of the resource, we can make use of the link headers. The link headers contain the meta data of the requested resource, that is the next page information in our case. 
>>> url = "https://api.github.com/search/code?q=addClass+user:mozilla&page=1&per_page=4" >>> response = requests.head(url=url) >>> response.headers['link'] '<https://api.github.com/search/code?q=addClass+user%3Amozilla&page=2&per_page=4>; rel="next", <https://api.github.com/search/code?q=addClass+user%3Amozilla&page=250&per_page=4>; rel="last" In the preceding example, we have specified in the URL that we want to access page number one and it should contain four records. The Requests automatically parses the link headers and updates the information about the next page. When we try to access the link header, it showed the output with the values of the page and the number of records per page. Transport Adapter It is used to provide an interface for Requests sessions to connect with HTTP and HTTPS. This will help us to mimic the web service to fit our needs. With the help of Transport Adapters, we can configure the request according to the HTTP service we opt to use. Requests contains a Transport Adapter called HTTPAdapter included in it. Consider the following example: >>> session = requests.Session() >>> adapter = requests.adapters.HTTPAdapter(max_retries=6) >>> session.mount("http://google.co.in", adapter) In this example, we created a request session in which every request we make retries only six times, when the connection fails. Summary In this article, we learnt about creating sessions and using the session with different criteria. We also looked deeply into HTTP verbs and using proxies. We learnt about streaming requests, dealing with SSL certificate verifications and streaming responses. We also got to know how to use prepared requests, link headers and chunk encoded requests. Resources for Article: Further resources on this subject: Machine Learning [article] Solving problems – closest good restaurant [article] Installing NumPy, SciPy, matplotlib, and IPython [article]

Deploying a Play application on CoreOS and Docker

Packt
11 Jun 2015
8 min read
In this article by Giancarlo Inductivo, author of the book Play Framework Cookbook Second Edition, we will see deploy a Play 2 web application using CoreOS and Docker. CoreOS is a new, lightweight operating system ideal for modern application stacks. Together with Docker, a software container management system, this forms a formidable deployment environment for Play 2 web applications that boasts of simplified deployments, isolation of processes, ease in scalability, and so on. (For more resources related to this topic, see here.) For this recipe, we will utilize the popular cloud IaaS, Digital Ocean. Ensure that you sign up for an account here: https://cloud.digitalocean.com/registrations/new This recipe also requires Docker to be installed in the developer's machine. Refer to the official Docker documentation regarding installation: https://docs.docker.com/installation/ How to do it... Create a new Digital Ocean droplet using CoreOS as the base operating system. Ensure that you use a droplet with at least 1 GB of RAM for the recipe to work. note that Digital Ocean does not have a free tier and are all paid instances: Ensure that you select the appropriate droplet region: Select CoreOS 607.0.0 and specify a SSH key to use. Visit the following link if you need more information regarding SSH key generation:https://www.digitalocean.com/community/tutorials/how-to-set-up-ssh-keys--2: Once the Droplet is created, make a special note of the Droplet's IP address which we will use to log in to the Droplet: Next, create a Docker.com account at https://hub.docker.com/account/signup/ Create a new repository to house the play2-deploy-73 docker image that we will use for deployment: Create a new Play 2 webapp using the activator template, computer-database-scala, and change into the project root:    activator new play2-deploy-73 computer-database-scala && cd play2-deploy-73 Edit conf/application.conf to enable automatic database evolutions:    applyEvolutions.default=true Edit build.sbt to specify Docker settings for the web app:    import NativePackagerKeys._    import com.typesafe.sbt.SbtNativePackager._      name := """play2-deploy-73"""      version := "0.0.1-SNAPSHOT"      scalaVersion := "2.11.4"      maintainer := "<YOUR_DOCKERHUB_USERNAME HERE>"      dockerExposedPorts in Docker := Seq(9000)      dockerRepository := Some("YOUR_DOCKERHUB_USERNAME HERE ")      libraryDependencies ++= Seq(      jdbc,      anorm,      "org.webjars" % "jquery" % "2.1.1",      "org.webjars" % "bootstrap" % "3.3.1"    )          lazy val root = (project in file(".")).enablePlugins(PlayScala) Next, we build the Docker image and publish it to Docker Hub:    $ activator clean docker:stage docker:publish    ..    [info] Step 0 : FROM dockerfile/java    [info] ---> 68987d7b6df0    [info] Step 1 : MAINTAINER ginduc    [info] ---> Using cache    [info] ---> 9f856752af9e    [info] Step 2 : EXPOSE 9000    [info] ---> Using cache    [info] ---> 834eb5a7daec    [info] Step 3 : ADD files /    [info] ---> c3c67f0db512    [info] Removing intermediate container 3b8d9c18545e    [info] Step 4 : WORKDIR /opt/docker    [info] ---> Running in 1b150e98f4db    [info] ---> ae6716cd4643    [info] Removing intermediate container 1b150e98f4db  [info] Step 5 : RUN chown -R daemon .    
[info] ---> Running in 9299421b321e    [info] ---> 8e15664b6012    [info] Removing intermediate container 9299421b321e    [info] Step 6 : USER daemon    [info] ---> Running in ea44f3cc8e11    [info] ---> 5fd0c8a22cc7    [info] Removing intermediate container ea44f3cc8e11    [info] Step 7 : ENTRYPOINT bin/play2-deploy-73    [info] ---> Running in 7905c6e2d155    [info] ---> 47fded583dd7    [info] Removing intermediate container 7905c6e2d155    [info] Step 8 : CMD    [info] ---> Running in b807e6360631    [info] ---> c3e1999cfbfd    [info] Removing intermediate container b807e6360631    [info] Successfully built c3e1999cfbfd    [info] Built image ginduc/play2-deploy-73:0.0.2-SNAPSHOT    [info] The push refers to a repository [ginduc/play2-deploy-73] (len: 1)    [info] Sending image list    [info] Pushing repository ginduc/play2-deploy-73 (1 tags)    [info] Pushing tag for rev [c3e1999cfbfd] on {https://cdn-registry-1.docker.io/v1/repositories/ginduc/play2-deploy-73/tags/0.0.2-SNAPSHOT}    [info] Published image ginduc/play2-deploy-73:0.0.2-SNAPSHOT Once the Docker image has been published, log in to the Digital Ocean droplet using SSH to pull the uploaded docker image. You will need to use the core user for your CoreOS Droplet:    ssh core@<DROPLET_IP_ADDRESS HERE>    core@play2-deploy-73 ~ $ docker pull <YOUR_DOCKERHUB_USERNAME HERE>/play2-deploy-73:0.0.1-SNAPSHOT    Pulling repository ginduc/play2-deploy-73    6045dfea237d: Download complete    511136ea3c5a: Download complete    f3c84ac3a053: Download complete    a1a958a24818: Download complete    709d157e1738: Download complete    d68e2305f8ed: Download complete    b87155bee962: Download complete    2097f889870b: Download complete    5d2fb9a140e9: Download complete    c5bdb4623fac: Download complete    68987d7b6df0: Download complete    9f856752af9e: Download complete    834eb5a7daec: Download complete    fae5f7dab7bb: Download complete    ee5ccc9a9477: Download complete    74b51b6dcfe7: Download complete    41791a2546ab: Download complete    8096c6beaae7: Download complete    Status: Downloaded newer image for <YOUR_DOCKERHUB_USERNAME HERE>/play2-deploy-73:0.0.2-SNAPSHOT We are now ready to run our Docker image using the following docker command:    core@play2-deploy-73 ~ $ docker run -p 9000:9000 <YOUR_DOCKERHUB_USERNAME_HERE>/play2-deploy-73:0.0.1-SNAPSHOT Using a web browser, access the computer-database webapp using the IP address we made note of in an earlier step of this recipe (http://192.241.239.43:9000/computers):   How it works... In this recipe, we deployed a Play 2 web application by packaging it as a Docker image and then installing and running the same Docker image in a Digital Ocean Droplet. Firstly, we will need an account on DigitalOcean.com and Docker.com. Once our accounts are ready and verified, we create a CoreOS-based droplet. CoreOS has Docker installed by default, so all we need to install in the droplet is the Play 2 web app Docker image. The Play 2 web app Docker image is based on the activator template, computer-database-scala, which we named play2-deploy-73. We make two modifications to the boilerplate code. The first modification in conf/application.conf:    applyEvolutions.default=true This setting enables database evolutions by default. The other modification is to be made in build.sbt. 
We import the required packages that contain the Docker-specific settings:    import NativePackagerKeys._    import com.typesafe.sbt.SbtNativePackager._ The next settings are to specify the repository maintainer, the exposed Docker ports, and the Docker repository in Docker.com; in this case, supply your own Docker Hub username as the maintainer and Docker repository values:    maintainer := "<YOUR DOCKERHUB_USERNAME>"      dockerExposedPorts in Docker := Seq(9000)      dockerRepository := Some("<YOUR_DOCKERHUB_USERNAME>") We can now build Docker images using the activator command, which will generate all the necessary files for building a Docker image:    activator clean docker:stage Now, we will use the activator docker command to upload and publish to your specified Docker.com repository:    activator clean docker:publish To install the Docker image in our Digital Ocean Droplet, we first log in to the droplet using the core user:    ssh core@<DROPLET_IP_ADDRESS> We then use the docker command, docker pull, to download the play2-deploy-73 image from Docker.com, specifying the tag:    docker pull <YOUR_DOCKERHUB_USERNAME>/play2-deploy-73:0.0.1-SNAPSHOT Finally, we can run the Docker image using the docker run command, exposing the container port 9000:    docker run -p 9000:9000 <YOUR_DOCKERHUB_USERNAME>/play2-deploy-73:0.0.1-SNAPSHOT There's more... Refer to the following links for more information on Docker and Digital Ocean: https://www.docker.com/whatisdocker/ https://www.digitalocean.com/community/tags/docker Summary In this recipe, we deployed a Play 2 web application by packaging it as a Docker image and then installing and running the same Docker image in a Digital Ocean Droplet. Resources for Article: Further resources on this subject: Less with External Applications and Frameworks [article] SpriteKit Framework and Physics Simulation [article] Speeding Vagrant Development With Docker [article]

edX E-Learning Course Marketing

Packt
05 Jun 2015
9 min read
In this article by Matthew A. Gilbert, the author of edX E-Learning Course Development, we are going to learn various ways of marketing an edX course.

edX's marketing options

If you don't market your course, you might not get any new students to teach. Fortunately, edX provides you with an array of tools for this purpose, as follows:

- Creative Submission Tool: Submit the assets required for creating a page in your edX course using the Creative Submission Tool. You can also use those very materials in promoting the course. Access the Creative Submission Tool at https://edx.projectrequest.net/index.php/request.
- Logo and the Media Kit: Although these are intended for members of the media, you can also use the edX Media Kit for your promotional purposes: you can download high-resolution photos, edX logo visual guidelines (in Adobe Illustrator and EPS versions), key facts about edX, and answers to frequently asked questions. You can also contact the press office for additional information. You can find the edX Media Kit online at https://www.edx.org/media-kit.
- edX Learner Stories: Using stories of students who have succeeded with other edX courses is a compelling way to market the potential of your course. Using Tumblr, edX Learner Stories offers more than a dozen student profiles. You might want to use their stories directly or use them as a template for marketing materials of your own. Read edX Learner Stories at http://edxstories.tumblr.com.

Social media marketing

Traditional marketing tools and the options available in the edX Marketing Portal are a fitting first step in promoting your course. However, social media gives you a tremendously enhanced toolkit you can use to attract, convert, and transform spectators into students.

When marketing your course with social media, you will also simultaneously create a digital footprint for yourself. This in turn helps establish your subject matter expertise far beyond one edX course. What's more, you won't be alone; there is already a large community of edX instructors and students online, including those from other MOOC platforms.

Take, for example, edX's Twitter account (@edxonline). edX has embraced social media both as a means of marketing and as a way to create a virtual community of practice for those creating and taking its courses. Likewise, edX actively maintains a Facebook page, and its YouTube channel is very active, with both educational and promotional videos.

To get you started in social media, if you're not already there, take a look at the following list of 12 social media tools. Not all of these tools might be relevant to your needs, but consider the suggestions to decide how you might best use them, and give them a try:

- Facebook (https://www.facebook.com): Create a fan page for your edX course; you can re-use content from your course's About page, such as your course intro video, course description, course image, and any other relevant materials. Be sure to include a link from the Facebook page for your course to its About page. Look for ways to share other content from your course (or related to your course) in a way that engages members of your fan page. Use your Facebook page to generate interest and answer questions from potential students. You might also consider creating a Facebook group. This can be more useful for current students to share knowledge during the class and to network once it's complete. Visit edX on Facebook at https://www.facebook.com/edX.
- Google+ (https://plus.google.com): Take the same approach as you did with your Facebook fan page. While this is not as engaging as Facebook, you might find that posting content on Google+ increases traffic to your course's About page due to the increased referrals you are likely to experience via Google search results. Add edX to your circles on Google+ at https://plus.google.com/+edXOnline/posts.
- Instagram (https://instagram.com): Share behind-the-scenes pictures of you and your staff for your course. Show your students what a day in your life is like, making sure to use a unique hashtag for your course. Picture the possibilities with edX on Instagram at https://instagram.com/edxonline/.
- LinkedIn (https://www.linkedin.com): Share information about your course in relevant LinkedIn groups, and post public updates about it in your personal account. Again, make sure you include a unique hashtag for your course and a link to the About page. Connect with edX on LinkedIn at https://www.linkedin.com/company/edx.
- Pinterest (https://www.pinterest.com): Share photos as with Instagram, but also consider sharing infographics about your course's subject matter, or share infographics or images you use in your actual course as well. You might consider creating pin boards for each course, or one pin board per module in a course. Pin edX onto your Pinterest pin board at https://www.pinterest.com/edxonline/.
- SlideShare (http://www.slideshare.net): If you want to share your subject matter expertise and thought leadership with a wider audience, SlideShare is a great platform to use. You can easily post your PowerPoint presentations, class documents or scholarly papers, infographics, and videos from your course or another topic. All of these can then be shared across other social media platforms. Review presentations from or about edX courses on SlideShare at http://www.slideshare.net/search/slideshow?searchfrom=header&q=edx.
- SoundCloud (https://soundcloud.com): With SoundCloud, you can share MP3 files of your course lectures or create podcasts related to your areas of expertise. Your work can be shared on Twitter, Tumblr, Facebook, and Foursquare, expanding your influence and audience exponentially. Listen to some audio content from Harvard University at https://soundcloud.com/harvard.
- Tumblr (https://www.tumblr.com): Resembling what the child of WordPress and Twitter might be like, Tumblr provides a platform to share behind-the-scenes text, photos, quotes, links, chats, audio, and videos of your edX course and the people who make it possible. Share a "day in the life" or document, in real time, an interactive history of each edX course you teach. Read edX's learner stories at http://edxstories.tumblr.com.
- Twitter (https://twitter.com): Although messages on Twitter are limited to 140 characters, one tweet can have a big impact. For a faculty wanting to promote its edX course, it is an efficient and cost-effective option. Tweet course videos, samples of content, links to other curriculum, or promotional material. Engage with other educators who teach courses and retweet posts from academic institutions. Follow edX on Twitter at https://twitter.com/edxonline. You might also consider subscribing to edX's Twitter list of edX instructors at https://twitter.com/edXOnline/lists/edx-professors-teachers, and explore the Twitter accounts of edX courses by subscribing to that list at https://twitter.com/edXOnline/lists/edx-course-handles.
- Vine (https://vine.co): A short-format video service owned by Twitter, Vine provides you with 6 seconds to share your creativity, either in a continuous stream or in smaller segments linked together like stop motion. You might create a vine showing the inner workings of the course faculty and staff, or maybe even ask short questions related to the course content and invite people to reply with answers. Watch vines about MOOCs at https://vine.co.
- WordPress: WordPress gives you two options to manage and share content with students. With WordPress.com (https://wordpress.com), you're given a selection of standardized templates to use on a hosted platform; you have limited control but reasonable flexibility and limited, if any, expenses. With WordPress.org (https://wordpress.org), you have more control, but you need to host it on your own web server, which requires some technical know-how. The choice is yours. Read posts about edX on the MIT Open Matters blog on WordPress.com at https://mitopencourseware.wordpress.com/category/edx/.
- YouTube (https://www.youtube.com): YouTube is the heart of your edX course. It's the core of your curriculum and the anchor of engagement for your students. When promoting your course, use existing videos from your curriculum in your social media campaigns, but identify opportunities to record short videos specifically for promoting your course. Watch course videos and promotional content on the edX YouTube channel at https://www.youtube.com/user/EdXOnline.

Personal branding basics

Additionally, whether the impact of your effort is immediately evident or not, your social media presence powers your personal brand as a professor. Why is that important? Read on to find out.

With the possible exception of marketing professors, most educators tend to think more about creating and teaching their course than promoting it, or themselves. Traditionally, that made sense, but it isn't practical in today's digitally connected world. Social media opens an area of influence where all educators, especially those teaching an edX course, should be participating.

Unfortunately, many professors don't know where or how to start with social media. If you're teaching a course on edX, or even edX Edge, you will likely have some kind of marketing support from your university or edX. But if you are in an organization using edX Code, or simply want to promote yourself and your edX course, you might be on your own.

One option to get you started with social media is the Babb Group, a provider of resources and consulting for online professors, business owners, and real-estate investors. Its founder and CEO, Dani Babb (PhD), says this: "Social media helps you show that you are an expert in a given field. It is an important tool today to help you get hired, earn promotions, and increase your visibility."

The Babb Group offers five packages focused on different social media platforms: Twitter, LinkedIn, Facebook, Twitter and Facebook, or Twitter with Facebook and LinkedIn. You can view the Babb Group's social media marketing packages at http://www.thebabbgroup.com/social-media-profiles-for-professors.html.
Connect with Dani Babb on LinkedIn at https://www.linkedin.com/in/drdanibabb or on Twitter at https://twitter.com/danibabb.

Summary

In this article, we tackled traditional marketing tools, identified options available from edX, discussed social media marketing, and explored personal branding basics.

Resources for Article:

Further resources on this subject:

- Constructing Common UI Widgets [article]
- Getting Started with Odoo Development [article]
- MODx Web Development: Creating Lists [article]


Regex in Practice

Packt
04 Jun 2015
24 min read
Knowing Regex's syntax allows you to model text patterns, but sometimes coming up with a good, reliable pattern can be more difficult, so taking a look at some actual use cases can really help you learn some common design patterns.

So, in this article by Loiane Groner and Gabriel Manricks, coauthors of the book JavaScript Regular Expressions, we will develop a form, and we will explore the following topics:

- Validating a name
- Validating e-mails
- Validating a Twitter username
- Validating passwords
- Validating URLs
- Manipulating text

Regular expressions and form validation

By far, one of the most common uses for regular expressions on the frontend is with user-submitted forms, so this is what we will be building. The form we will be building will have all the common fields, such as name, e-mail, website, and so on, but we will also experiment with some text processing besides all the validations.

In real-world applications, you usually are not going to implement the parsing and validation code manually. You can create a regular expression and rely on some JavaScript libraries, such as:

- jQuery Validation: Refer to http://jqueryvalidation.org/
- Parsley.js: Refer to http://parsleyjs.org/

Even the most popular frameworks support the usage of regular expressions with their native validation engines, such as AngularJS (refer to http://www.ng-newsletter.com/posts/validations.html).

Setting up the form

This demo will be for a site that allows users to create an online bio, and as such, it consists of different types of fields. However, before we get into this (since we won't be building a backend to handle the form), we are going to set up some HTML and JavaScript code to catch the form submission and extract/validate the data entered in it.

To keep the code neat, we will create an array with all the validation functions, and a data object where all the final data will be kept. Here is a basic outline of the HTML code, to which we will begin adding fields:

    <!DOCTYPE HTML>
    <html>
        <head>
            <title>Personal Bio Demo</title>
        </head>
        <body>
            <form id="main_form">
                <input type="submit" value="Process" />
            </form>

            <script>
                // js goes here
            </script>
        </body>
    </html>

Next, we need to write some JavaScript to catch the form and run through the list of functions that we will be writing. If a function returns false, it means that the verification did not pass and we will stop processing the form. In the event that we get through the entire list of functions without a problem, we will log the data object, which contains all the fields we extracted, to the console:

    <script>
        var fns = [];
        var data = {};

        var form = document.getElementById("main_form");

        form.onsubmit = function(e) {
            e.preventDefault();

            data = {};

            for (var i = 0; i < fns.length; i++) {
                if (fns[i]() == false) {
                    return;
                }
            }

            console.log("Verified Data: ", data);
        }
    </script>

The JavaScript starts by creating the two variables I mentioned previously; we then pull the form's object from the DOM and set the submit handler. The submit handler begins by preventing the page from actually submitting (as we don't have any backend code in this example), and then we go through the list of functions, running them one by one.
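Before adding any real validators, you could check that this harness behaves as expected by temporarily pushing a couple of stub functions into fns. This is just a throwaway sketch (the stub functions and the data.example property are made up for illustration, not part of the final form):

    // Hypothetical stubs, only to confirm the short-circuit behavior
    fns.push(function() {
        data.example = "ok";    // pretend we extracted and stored a field
        return true;            // processing continues to the next function
    });

    fns.push(function() {
        alert("This validator failed, so nothing after it runs");
        return false;           // the submit handler stops here
    });

Submitting the form with these stubs in place should show the alert and skip the final console.log, which is exactly what will happen later whenever a real field fails its validation. Remember to remove the stubs before adding the real validators below.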
Validating fields

In this section, we will explore how to validate different types of fields manually, such as name, e-mail, website URL, and so on.

Matching a complete name

To get our feet wet, let's begin with a simple name field. It's something we have gone through briefly in the past, so it should give you an idea of how our system will work. The following code goes inside the script tags, but only after everything we have written so far:

    function process_name() {
        var field = document.getElementById("name_field");
        var name = field.value;

        var name_pattern = /^(\S+) (\S*) ?\b(\S+)$/;

        if (name_pattern.test(name) === false) {
            alert("Name field is invalid");
            return false;
        }

        var res = name_pattern.exec(name);
        data.first_name = res[1];
        data.last_name = res[3];

        if (res[2].length > 0) {
            data.middle_name = res[2];
        }

        return true;
    }

    fns.push(process_name);

We get the name field in a similar way to how we got the form; then, we extract the value and test it against a pattern to match a full name. If the name doesn't match the pattern, we simply alert the user and return false to let the form handler know that the validation has failed. If the name field is in the correct format, we set the corresponding fields on the data object (remember, the middle name is optional here). The last line just adds this function to the array of functions, so it will be called when the form is submitted.

The last thing required to get this working is to add the HTML for this form field, so inside the form tags (right before the submit button), you can add this text input:

    Name: <input type="text" id="name_field" /><br />

Opening this page in your browser, you should be able to test it out by entering different values into the Name box. If you enter a valid name, you should get the data object printed out with the correct parameters; otherwise, you will see the "Name field is invalid" alert message.

Understanding the complete name Regex

Let's go back to the regular expression used to match the name entered by a user:

    /^(\S+) (\S*) ?\b(\S+)$/

The following is a brief explanation of the Regex:

- The ^ character asserts its position at the beginning of the string
- The first capturing group (\S+): \S matches a non-whitespace character [^ \r\n\t\f], and the + quantifier repeats it between one and unlimited times
- The second capturing group (\S*): \S matches a non-whitespace character [^ \r\n\t\f], and the * quantifier repeats it between zero and unlimited times
- " ?" matches the whitespace character, with the ? quantifier repeating it between zero and one time
- \b asserts its position at a word boundary (^\w|\w$|\W\w|\w\W)
- The third capturing group (\S+): \S matches a non-whitespace character [^ \r\n\t\f], and the + quantifier repeats it between one and unlimited times
- The $ character asserts its position at the end of the string
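If you want to see exactly what this pattern captures before wiring it into the form, you could run a quick check in the browser console. This is only an illustrative sketch; the sample names are arbitrary:

    var name_pattern = /^(\S+) (\S*) ?\b(\S+)$/;

    console.log(name_pattern.exec("Ada Lovelace"));
    // ["Ada Lovelace", "Ada", "", "Lovelace"] - the middle group is empty

    console.log(name_pattern.exec("John Paul Smith"));
    // ["John Paul Smith", "John", "Paul", "Smith"]

    console.log(name_pattern.test("Madonna"));
    // false - a single word does not satisfy the pattern

The empty second group for a two-word name is why process_name only copies res[2] into data.middle_name when its length is greater than zero.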
Matching an e-mail with Regex

The next type of field we may want to add is an e-mail field. E-mails may look pretty simple at first glance, but there is a large variety of e-mails out there. You may just think of creating a simple word@word.word pattern, but the first section can contain many additional characters besides just letters, the domain can be a subdomain, or the suffix could have multiple parts (such as .co.uk for the UK).

Our pattern will simply look for a group of characters that are not spaces or @ symbols in the first section. We will then want an @ symbol, followed by another set of characters that have at least one period, followed by the suffix, which in itself could contain another suffix. So, this can be accomplished in the following manner:

    /[^\s@]+@[^\s@.]+\.[^\s@]+/

The pattern in our example is very simple and will not match every valid e-mail address. There is an official standard for e-mail address regular expressions called RFC 5322. For more information, please read http://www.regular-expressions.info/email.html.

So, let's add the field to our page:

    Email: <input type="text" id="email_field" /><br />

We can then add this function to verify it:

    function process_email() {
        var field = document.getElementById("email_field");
        var email = field.value;

        var email_pattern = /^[^\s@]+@[^\s@.]+\.[^\s@]+$/;

        if (email_pattern.test(email) === false) {
            alert("Email is invalid");
            return false;
        }

        data.email = email;
        return true;
    }

    fns.push(process_email);

There is an HTML5 field type specifically designed for e-mails, but here we are verifying manually, as this is a Regex book. For more information, please refer to http://www.w3.org/TR/html-markup/input.email.html.

Understanding the e-mail Regex

Let's go back to the regular expression used to match the e-mail entered by the user:

    /^[^\s@]+@[^\s@.]+\.[^\s@]+$/

Following is a brief explanation of the Regex:

- ^ asserts its position at the beginning of the string
- [^\s@]+ matches a single character that is not a whitespace character [\r\n\t\f ] and not an @ symbol, with the + quantifier repeating it between one and unlimited times
- @ matches the @ character literally
- [^\s@.]+ matches a single character that is not a whitespace character, an @ symbol, or a period, with the + quantifier repeating it between one and unlimited times
- \. matches the . character literally
- [^\s@]+ matches a single character that is not a whitespace character and not an @ symbol, with the + quantifier repeating it between one and unlimited times
- $ asserts its position at the end of the string

Matching a Twitter name

The next field we are going to add is a field for a Twitter username. For the unfamiliar, a Twitter username is in the @username format, but when people enter this in, they sometimes include the preceding @ symbol and on other occasions they write only the username by itself. Obviously, internally we would like everything to be stored uniformly, so we will need to extract the username, regardless of the @ symbol, and then manually prepend it with one, so regardless of whether it was there or not, the end result will look the same.

So again, let's add a field for this:

    Twitter: <input type="text" id="twitter_field" /><br />

Now, let's write the function to handle it:

    function process_twitter() {
        var field = document.getElementById("twitter_field");
        var username = field.value;

        var twitter_pattern = /^@?(\w+)$/;

        if (twitter_pattern.test(username) === false) {
            alert("Twitter username is invalid");
            return false;
        }

        var res = twitter_pattern.exec(username);
        data.twitter = "@" + res[1];
        return true;
    }

    fns.push(process_twitter);

If a user inputs the @ symbol, it will be ignored, as we will add it manually after checking the username.

Understanding the Twitter username Regex

Let's go back to the regular expression used to match the Twitter username entered by the user:

    /^@?(\w+)$/

This is a brief explanation of the Regex:

- ^ asserts its position at the start of the string
- @? matches the @ character literally, with the ? quantifier repeating it between zero and one time
- The first capturing group (\w+): \w matches a word character [a-zA-Z0-9_], and the + quantifier repeats it between one and unlimited times
- $ asserts its position at the end of the string
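Before moving on, you could sanity-check both of these patterns in the console. This is just an illustrative sketch; the sample address and username are arbitrary:

    var email_pattern = /^[^\s@]+@[^\s@.]+\.[^\s@]+$/;
    var twitter_pattern = /^@?(\w+)$/;

    // E-mail checks
    console.log(email_pattern.test("student@school.edu"));     // true
    console.log(email_pattern.test("not an email"));            // false

    // Both spellings of a Twitter handle normalize to the same stored value
    console.log("@" + twitter_pattern.exec("@some_user")[1]);   // "@some_user"
    console.log("@" + twitter_pattern.exec("some_user")[1]);    // "@some_user"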
Matching passwords

Another popular field, which can have some unique constraints, is a password field. Now, not every password field is interesting; you may allow just about anything as a password, as long as the field isn't left blank. However, there are sites where you need to have at least one letter from each case, a number, and at least one other character. Considering all the ways these can be combined, creating a pattern that can validate this could be quite complex. A much better solution, and one that allows us to be a bit more verbose with our error messages, is to create four separate patterns and make sure the password matches each of them.

For the input, it's almost identical:

    Password: <input type="password" id="password_field" /><br />

The process_password function is not very different from the previous examples, as we can see in its code:

    function process_password() {
        var field = document.getElementById("password_field");
        var password = field.value;

        var contains_lowercase = /[a-z]/;
        var contains_uppercase = /[A-Z]/;
        var contains_number = /[0-9]/;
        var contains_other = /[^a-zA-Z0-9]/;

        if (contains_lowercase.test(password) === false) {
            alert("Password must include a lowercase letter");
            return false;
        }

        if (contains_uppercase.test(password) === false) {
            alert("Password must include an uppercase letter");
            return false;
        }

        if (contains_number.test(password) === false) {
            alert("Password must include a number");
            return false;
        }

        if (contains_other.test(password) === false) {
            alert("Password must include a non-alphanumeric character");
            return false;
        }

        data.password = password;
        return true;
    }

    fns.push(process_password);

All in all, you may say that this is a pretty basic validation and something we have already covered, but I think it's a great example of working smart as opposed to working hard. Sure, we probably could have created one long pattern that would check everything together, but it would be less clear and less flexible. So, by breaking it into smaller and more manageable validations, we were able to make clear patterns, and at the same time, improve their usability with more helpful alert messages.
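Because each rule lives in its own pattern, you can quickly see which rule a given password breaks by testing the patterns individually in the console. The sample passwords below are arbitrary; this is only a sketch:

    var contains_lowercase = /[a-z]/;
    var contains_uppercase = /[A-Z]/;
    var contains_number = /[0-9]/;
    var contains_other = /[^a-zA-Z0-9]/;

    console.log(contains_lowercase.test("secret"));   // true
    console.log(contains_uppercase.test("secret"));   // false - this is the rule that would fail
    console.log(contains_number.test("S3cret!"));     // true
    console.log(contains_other.test("S3cret!"));      // true - the "!" counts as non-alphanumeric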
Matching URLs

Next, let's create a field for the user's website; the HTML for this field is:

    Website: <input type="text" id="website_field" /><br />

A URL can have many different protocols, but for this example, let's restrict it to only http or https links. Next, we have the domain name with an optional subdomain, and we need to end it with a suffix. The suffix itself can be a single word, such as .com, or it can have multiple segments, such as .co.uk. All in all, our pattern looks similar to this:

    /^(?:https?:\/\/)?\w+(?:\.\w+)?(?:\.[A-Z]{2,3})+$/i

Here, we are using multiple non-capturing groups, both for when sections are optional and for when we want to repeat a segment. You may have also noticed that we are using the case-insensitive flag (/i) at the end of the regular expression, as links can be written in lowercase or uppercase.

Now, we'll implement the actual function:

    function process_website() {
        var field = document.getElementById("website_field");
        var website = field.value;

        var pattern = /^(?:https?:\/\/)?\w+(?:\.\w+)?(?:\.[A-Z]{2,3})+$/i;

        if (pattern.test(website) === false) {
            alert("Website is invalid");
            return false;
        }

        data.website = website;
        return true;
    }

    fns.push(process_website);

At this point, you should be pretty familiar with the process of adding fields to our form and adding a function to validate them. So, for our remaining examples, let's shift our focus a bit from validating inputs to manipulating data.

Understanding the URL Regex

Let's go back to the regular expression used to match the website entered by the user:

    /^(?:https?:\/\/)?\w+(?:\.\w+)?(?:\.[A-Z]{2,3})+$/i

This is a brief explanation of the Regex:

- ^ asserts its position at the start of the string
- (?:https?:\/\/)? is a non-capturing group, with the ? quantifier repeating it between zero and one time: http matches the characters http literally (case-insensitive), s? matches the character s with the ? quantifier, : matches the : character literally, and \/\/ matches the two / characters literally
- \w+ matches a word character [a-zA-Z0-9_], with the + quantifier repeating it between one and unlimited times
- (?:\.\w+)? is a non-capturing group, with the ? quantifier repeating it between zero and one time: \. matches the . character literally, and \w+ matches a word character between one and unlimited times
- (?:\.[A-Z]{2,3})+ is a non-capturing group, with the + quantifier repeating it between one and unlimited times: \. matches the . character literally, and [A-Z]{2,3} matches a single character in the range A-Z (case-insensitive here), with the {2,3} quantifier repeating it between 2 and 3 times
- $ asserts its position at the end of the string
- The i modifier makes the match case-insensitive, so letters match both a-z and A-Z
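A few quick checks in the console show how the optional groups behave; the sample URLs here are arbitrary, and this is only a sketch:

    var pattern = /^(?:https?:\/\/)?\w+(?:\.\w+)?(?:\.[A-Z]{2,3})+$/i;

    console.log(pattern.test("http://gabrielmanricks.com"));   // true
    console.log(pattern.test("www.example.co.uk"));            // true - the protocol is optional
    console.log(pattern.test("ftp://example.com"));            // false - only http(s) is allowed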
Manipulating data

We are going to add one more input to our form, which will be for the user's description. In the description, we will parse for things such as e-mails, and then create both a plain text and an HTML version of the user's description.

The HTML for this field is pretty straightforward; we will be using a standard textarea and give it an appropriate id:

    Description: <br />
    <textarea id="description_field"></textarea><br />

Next, let's start with the bare scaffold needed to begin processing the form data:

    function process_description() {
        var field = document.getElementById("description_field");
        var description = field.value;

        data.text_description = description;

        // More Processing Here

        data.html_description = "<p>" + description + "</p>";

        return true;
    }

    fns.push(process_description);

This code gets the text from the textbox on the page and then saves both a plain text version and an HTML version of it. At this stage, the HTML version is simply the plain text version wrapped between a pair of paragraph tags, but this is what we will be working on now. The first thing I want to do is split the text into paragraphs; in a textarea, the user may separate content with single line breaks or with blank lines between paragraphs. For our example, if the user entered a single new line character, we will add a <br /> tag, and if there is more than one, we will start a new paragraph using the <p> tag.

Using the String.replace method

We are going to use JavaScript's replace method on the string object. This function can accept a Regex pattern as its first parameter and a function as its second; each time it finds the pattern, it will call the function, and anything returned by the function will be inserted in place of the matched text. So, for our example, we will be looking for new line characters, and in the function, we will decide whether to replace the new line with a line break tag or an actual new paragraph, based on how many new line characters it was able to pick up:

    var line_pattern = /\n+/g;
    description = description.replace(line_pattern, function(match) {
        if (match == "\n") {
            return "<br />";
        } else {
            return "</p><p>";
        }
    });

The first thing you may notice is that we need to use the g flag in the pattern, so that it will look for all possible matches as opposed to only the first one. Besides this, the rest is pretty straightforward. If you fill in the description field with a few lines and paragraphs and look at the output in the console, you should see the converted HTML version of the text.
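To see the line-splitting behavior in isolation, you could run the replace on a hard-coded string; the sample text is arbitrary, and this is only a sketch:

    var line_pattern = /\n+/g;
    var sample = "First line\nSecond line\n\nNew paragraph";

    var converted = sample.replace(line_pattern, function(match) {
        return (match == "\n") ? "<br />" : "</p><p>";
    });

    console.log("<p>" + converted + "</p>");
    // <p>First line<br />Second line</p><p>New paragraph</p>

A single \n becomes a <br /> tag, while the run of two or more new lines closes the current paragraph and opens a new one, which is exactly what the final wrapping in process_description relies on.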
Matching a description field

The next thing we need to do is try to extract e-mails from the text and automatically wrap them in a link tag. We have already covered a Regex pattern to capture e-mails, but we will need to modify it slightly, as our previous pattern expects that an e-mail is the only thing present in the text. In this situation, we are interested in all the e-mails included in a large body of text.

If you were simply looking for a word, you would be able to use the \b matcher, which matches any word boundary (that can be the end of a word or the end of a sentence), so instead of the dollar sign, which we used before to denote the end of the string, we would place the boundary character to denote the end of a word. However, in our case it isn't quite good enough, as there are boundary characters that are valid e-mail characters; for example, the period character is valid. To get around this, we can use the boundary character in conjunction with a lookahead group and say we want it to end with a word boundary, but only if it is followed by a space or the end of a sentence/string. This will ensure we aren't cutting off a subdomain or a part of a domain if there is some invalid information midway through the address.

Now, we aren't creating something that will try to parse e-mails no matter how they are entered; the point of creating validators and patterns is to force the user to enter something logical. That said, we assume that if the user wrote an e-mail address and then a period, he/she didn't enter an invalid address; rather, he/she entered an address and then ended a sentence (the period is not part of the address). In our code, we assume that to end an address, the user is either going to have a space after it, perhaps with some kind of punctuation, or is ending the string/line. We no longer have to deal with lines, because we converted them to HTML, but we do have to make sure that our pattern doesn't pick up an HTML tag in the process.

At the end of this, our pattern will look similar to this:

    /\b[^\s<>@]+@[^\s<>@.]+\.[^\s<>@]+\b(?=\.?(?:\s|<|$))/g

We start off with a word boundary, then we look for the pattern we had before. I added both the greater-than (>) and the less-than (<) characters to the group of disallowed characters, so that it will not pick up any HTML tags. At the end of the pattern, you can see that we want to end on a word boundary, but only if it is followed by a space, an HTML tag, or the end of the string.

The complete function, which does all the matching, is as follows:

    function process_description() {
        var field = document.getElementById("description_field");
        var description = field.value;

        data.text_description = description;

        var line_pattern = /\n+/g;
        description = description.replace(line_pattern, function(match) {
            if (match == "\n") {
                return "<br />";
            } else {
                return "</p><p>";
            }
        });

        var email_pattern = /\b[^\s<>@]+@[^\s<>@.]+\.[^\s<>@]+\b(?=\.?(?:\s|<|$))/g;
        description = description.replace(email_pattern, function(match){
            return "<a href='mailto:" + match + "'>" + match + "</a>";
        });

        data.html_description = "<p>" + description + "</p>";

        return true;
    }

We can continue to add fields, but I think the point has been understood. You have a pattern that matches what you want, and with the extracted data, you are able to manipulate it into any format you may need.

Understanding the description Regex

Let's go back to the regular expression used to match e-mails inside the description entered by the user:

    /\b[^\s<>@]+@[^\s<>@.]+\.[^\s<>@]+\b(?=\.?(?:\s|<|$))/g

This is a brief explanation of the Regex:

- \b asserts its position at a word boundary (^\w|\w$|\W\w|\w\W)
- [^\s<>@]+ matches a single character that is not a whitespace character [\r\n\t\f ], <, >, or @, with the + quantifier repeating it between one and unlimited times
- @ matches the @ character literally
- [^\s<>@.]+ matches a single character that is not a whitespace character, <, >, @, or ., with the + quantifier repeating it between one and unlimited times
- \. matches the . character literally
- [^\s<>@]+ matches a single character that is not a whitespace character, <, >, or @, with the + quantifier repeating it between one and unlimited times
- \b asserts its position at a word boundary
- (?=\.?(?:\s|<|$)) is a positive lookahead, asserting that what follows can be matched: \. matches the . character literally, with the ? quantifier repeating it between zero and one time, and (?:\s|<|$) is a non-capturing group with three alternatives: \s matches any whitespace character, < matches the < character literally, or $ asserts its position at the end of the string
- The g modifier performs a global match, returning all matches of the regular expression, not only the first one
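To confirm that the lookahead keeps a sentence-ending period out of the link, you could run the e-mail replacement on a hard-coded string; the sample sentence and address are arbitrary, and this is only a sketch:

    var email_pattern = /\b[^\s<>@]+@[^\s<>@.]+\.[^\s<>@]+\b(?=\.?(?:\s|<|$))/g;
    var sample = "Contact me at gabriel@example.com.";

    console.log(sample.replace(email_pattern, function(match) {
        return "<a href='mailto:" + match + "'>" + match + "</a>";
    }));
    // Contact me at <a href='mailto:gabriel@example.com'>gabriel@example.com</a>.

The trailing period stays outside the anchor tag, because the word boundary plus the lookahead stop the match right after the .com part.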
Explaining a Markdown example

More examples of regular expressions can be seen with the popular Markdown syntax (refer to http://en.wikipedia.org/wiki/Markdown). This is a situation where a user is forced to write things in a custom format, although it's still a format that saves typing and is easier to understand. For example, to create a link in Markdown, you would type something similar to this:

    [Click Me](http://gabrielmanricks.com)

This would then be converted to:

    <a href="http://gabrielmanricks.com">Click Me</a>

Disregarding any validation on the URL itself, this can easily be achieved using this pattern:

    /\[([^\]]*)\]\(([^(]*)\)/g

It looks a little complex, because the square brackets and the parentheses are both special characters that need to be escaped. Basically, what we are saying is that we want an open square bracket, anything up to the closing square bracket, then an open parenthesis, and again, anything until the closing parenthesis.

A good website to write Markdown documents is http://dillinger.io/.

Since we wrapped each section into its own capture group, we can write this function:

    text.replace(/\[([^\]]*)\]\(([^(]*)\)/g, function(match, text, link){
        return "<a href='" + link + "'>" + text + "</a>";
    });

We haven't been using capture groups in our manipulation examples, but if you use them, then the first parameter to the callback is the entire match (similar to the ones we have been working with), and then all the individual groups are passed as subsequent parameters, in the order that they appear in the pattern.

Summary

In this article, we covered a couple of examples that showed us how to both validate user inputs as well as manipulate them. We also took a look at some common design patterns and saw how it's sometimes better to simplify the problem instead of using brute force in one pattern for the purpose of creating validations.

Resources for Article:

Further resources on this subject:

- Getting Started with JSON [article]
- Function passing [article]
- YUI Test [article]