
How-To Tutorials - Web Development

Building a Grid System with Susy

Packt
09 Aug 2016
14 min read
In this article by Luke Watts, author of the book Mastering Sass, we will build a responsive grid system using the Susy library and a few custom mixins and functions. We will set a configuration map with our breakpoints, which we will then loop over to automatically create our entire grid, using interpolation to create our class names. (For more resources related to this topic, see here.) Detailing the project requirements For this example, we will need bower to download Susy. After Susy has been downloaded we will only need two files. We'll place both in the same directory for simplicity. These files will be style.scss and _helpers.scss. We'll place the majority of our SCSS code in style.scss. First, we'll import susy and our _helpers.scss at the beginning of this file. After that we will place our variables, and finally our code which will create our grid system. Bower and Susy To check if you have bower installed, open your command line (Terminal on Unix or CMD on Windows) and run: bower -v If you see a number like "1.7.9" you have bower. If not, you will need to install bower using npm, a package manager for NodeJS. If you don't already have NodeJS installed, you can download it from: https://nodejs.org/en/. To install bower from your command line using npm you will need to run: npm install -g bower Once bower is installed, cd into the root of your project and run: bower install susy This will create a directory called bower_components. Inside that you will find a folder called susy. The full path to the file we will be importing in style.scss is bower_components/susy/sass/_susy.scss. However, we can leave off the underscore (_) and also the extension (.scss). Sass will still import the file just fine. In style.scss add the following at the beginning of our file: // style.scss @import 'bower_components/susy/sass/susy'; Helpers (mixins and functions) Next, we'll need to import our _helpers.scss file in style.scss. Our _helpers.scss file will contain any custom mixins or functions we'll create to help us in building our grid. In style.scss import _helpers.scss just below where we imported Susy: // style.scss @import 'bower_components/susy/sass/susy'; @import 'helpers'; Mixin: bp (breakpoint) I don't know about you, but writing media queries always seems like a bit of a chore to me. I just don't like to write (min-width: 768px) all the time. So for that reason I'm going to include the bp mixin, which means instead of writing: @media(min-width: 768px) { // ... } We can simply use: @include bp(md) { // ... } First, we are going to create a map of our breakpoints. Add the $breakpoints map to style.scss just below our imports: // style.scss @import 'bower_components/susy/sass/susy'; @import 'helpers'; $breakpoints: ( sm: 480px, md: 768px, lg: 980px ); Then, inside _helpers.scss we're going to create our bp mixin which will handle creating our media queries from the $breakpoints map. Here's the breakpoint (bp) mixin: @mixin bp($size: md) { @media (min-width: map-get($breakpoints, $size)) { @content; } } Here we are setting the default breakpoint to be md (768px). We then use the built-in Sass function map-get to get the relevant value using the key ($size). Inside our @media rule we use the @content directive, which allows us to pass any Sass or CSS written inside the bp mixin straight through to the @media rule.
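As a quick illustration of how the bp mixin reads in practice (the selector and widths below are placeholders for illustration, not part of the book's project), we might write:

```scss
// Hypothetical usage of the bp mixin (illustrative only)
.sidebar {
  width: 100%; // mobile-first default

  @include bp(md) { // content is wrapped in @media (min-width: 768px)
    width: 30%;
  }
}
```

Because map-get($breakpoints, md) returns 768px, the nested rule compiles to an @media (min-width: 768px) block containing .sidebar { width: 30%; }.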
The container mixin The container mixin sets the max-width of the containing element, which will be the .container element for now. However, it is best to use the container mixin to semantically restrict certain parts of the design to your max width instead of using presentational classes like container or row. The container mixin takes a width argument, which will be the max-width. It also automatically applies the micro-clearfix hack. This prevents the container's height from collapsing when the elements inside it are floated. I prefer the overflow: hidden method myself, but they do essentially the same thing. By default, the container will be set to max-width: 100%. However, you can set it to be any valid unit of dimension, such as 60em, 1160px, 50%, 90vw, or whatever. As long as it's a valid CSS unit it will work. In style.scss let's create our .container element using the container mixin: // style.scss .container { @include container(1160px); } The preceding code will give the following CSS output: .container { max-width: 1160px; margin-left: auto; margin-right: auto; } .container:after { content: " "; display: block; clear: both; } Because the container uses a max-width, we don't need to specify different dimensions for various screen sizes. It will be 100% until the screen is above 1160px, and then the max-width value will kick in. The .container:after rule is the micro-clearfix hack. The span mixin To create columns in Susy we use the span mixin. The span mixin sets the width of that element and applies a padding or margin depending on how Susy is set up. By default, Susy will apply a margin to the right of each column, but you can set it to be on the left, or to be padding on the left or right, or padding or margin on both sides. Susy will do the necessary calculations behind the scenes. To create a half width column in a 12 column grid you would use: .col-6 { @include span(6 of 12); } The of 12 lets Susy know this is a 12 column grid. When we define our $susy map later we can tell Susy how many columns we are using via the columns property. This means we can drop the of 12 part and simply use span(6) instead. Susy will then know we are using 12 columns unless we explicitly pass another value. The preceding SCSS will output: .col-6 { width: 49.15254%; float: left; margin-right: 1.69492%; } Notice the width and margin together would actually be 50.84746%, not 50% as you might expect. Therefore, two of these columns would actually be 101.69492%. That will cause the last column to wrap to the next row. To prevent this, you would need to remove the margin from the last column. The last keyword To address this, Susy uses the last keyword. When you pass this to the span mixin it lets Susy know this is the last column in a row. This removes the right margin and also floats the element in question to the right to ensure it's at the very end of the row. Let's take the previous example where we would have two col-6 elements. We could create a class of col-6-last and apply the last keyword to that span mixin: .col-6 { @include span(6 of 12); &-last { @include span(last 6 of 12) } } The preceding SCSS will output: .col-6 { width: 49.15254%; float: left; margin-right: 1.69492%; } .col-6-last { width: 49.15254%; float: right; margin-right: 0; } You can also place the last keyword at the end. This will also work: .col-6 { @include span(6 of 12); &-last { @include span(6 of 12 last) } }
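To make that pairing concrete, here is a rough sketch of how the two classes might sit together in markup (the text content is placeholder only; .container is the element we created with the container mixin above):

```html
<!-- Illustrative markup: two half-width columns filling a single row -->
<div class="container">
  <div class="col-6">First half of the row</div>
  <div class="col-6-last">Second half, flush against the right edge</div>
</div>
```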
The $susy configuration map Susy allows for a lot of configuration through its configuration map, which is defined as $susy. The settings in the $susy map allow us to set how wide the container should be, how many columns our grid should have, how wide the gutters are, whether those gutters should be margins or padding, and whether the gutters should be on the left, right or both sides of each column. Actually, there are even more settings available depending on what type of grid you'd like to build. Let's define our $susy map with the container set to 1160px just after our $breakpoints map: // style.scss $susy: ( container: 1160px, columns: 12, gutters: 1/3 ); Here we've set our container's max-width to be 1160px. This is used when we use the container mixin without entering a value. We've also set our grid to be 12 columns, with the gutters (padding or margin) set to 1/3 the width of a column. That's about all we need to set for our purposes; however, Susy has a lot more to offer. In fact, to cover everything in Susy would need an entire book of its own. If you want to explore more of what Susy can do, you should read the documentation at http://susydocs.oddbird.net/en/latest/. Setting up a grid system We've all used a 12 column grid which has various sizes (small, medium, large) or a set breakpoint (or breakpoints). These are the most popular methods for two reasons: they work, and they're easy to understand. Furthermore, with the help of Susy we can achieve this with less than 30 lines of Sass! Don't believe me? Let's begin. The concept of our grid system Our grid system will be similar to that of Foundation and Bootstrap. It will have 3 breakpoints and will be mobile-first. It will have a container, which will act as both .container and .row, therefore removing the need for a .row class. The breakpoints Earlier we defined three sizes in our $breakpoints map. These were: $breakpoints: ( sm: 480px, md: 768px, lg: 980px ); So our grid will have small, medium and large breakpoints. The columns naming convention Our columns will use a similar naming convention to that of Bootstrap. There will be four available sets of columns. The first will apply from 0 up to 479px (example: .col-12) The next will start from 480px up to 767px (example: .col-12-sm) The medium will start from 768px up to 979px (example: .col-12-md) The large will start from 980px (example: .col-12-lg) Having four options will give us the most flexibility. Building the grid From here we can use an @for loop and our bp mixin to create our four sets of classes. Each will go from 1 through 12 (or whatever our Susy columns property is set to) and will use the breakpoints we defined for small (sm), medium (md) and large (lg). In style.scss add the following: // style.scss @for $i from 1 through map-get($susy, columns) { .col-#{$i} { @include span($i); &-last { @include span($i last); } } } These 9 lines of code are responsible for our mobile-first set of column classes. This loops from one through 12 (which is currently the value of the $susy columns property) and creates a class for each. It also adds a class which handles removing the final column's right margin so our last column doesn't wrap onto a new line. Having control over when this happens gives us the most flexibility. The preceding code would create: .col-1 { width: 6.38298%; float: left; margin-right: 2.12766%; } .col-1-last { width: 6.38298%; float: right; margin-right: 0; } /* 2, 3, 4, and so on up to col-12 */ That means our loop, which is only 9 lines of Sass, will generate 144 lines of CSS! Now let's create our 3 breakpoints.
We'll use an @each loop to get the sizes from our $breakpoints map. This means that if we add another breakpoint, such as extra-large (xl), the correct set of classes for that size will be created automatically. @each $size, $value in $breakpoints { // Breakpoint will go here and will use $size } Here we're looping over the $breakpoints map and setting a $size variable and a $value variable. The $value variable will not be used; however, the $size variable will be set to small, medium and large for each respective loop. We can then use that to set our bp mixin accordingly: @each $size, $value in $breakpoints { @include bp($size) { // The @for loop will go here similar to the above @for loop... } } Now, each loop will set a breakpoint for small, medium and large, and any additional sizes we might add in the future will be generated automatically. Now we can use the same @for loop inside the bp mixin with one small change: we'll add the size to the class name: @each $size, $value in $breakpoints { @include bp($size) { @for $i from 1 through map-get($susy, columns) { .col-#{$i}-#{$size} { @include span($i); &-last { @include span($i last); } } } } } That's everything we need for our grid system. Here's the full style.scss file: // style.scss @import 'bower_components/susy/sass/susy'; @import 'helpers'; $breakpoints: ( sm: 480px, md: 768px, lg: 980px ); $susy: ( container: 1160px, columns: 12, gutters: 1/3 ); .container { @include container; } @for $i from 1 through map-get($susy, columns) { .col-#{$i} { @include span($i); &-last { @include span($i last); } } } @each $size, $value in $breakpoints { @include bp($size) { @for $i from 1 through map-get($susy, columns) { .col-#{$i}-#{$size} { @include span($i); &-last { @include span($i last); } } } } } With our bp mixin that's 45 lines of SCSS. And how many lines of CSS does that generate? Nearly 600 lines of CSS! Also, like I've said, if we wanted to create another breakpoint it would only require a change to the $breakpoints map. Then, if we wanted to have 16 columns instead, we would only need to change the $susy columns property. The above code would then automatically loop over each and create the correct number of columns for each breakpoint. Testing our grid Next, we need to check that our grid works. We mainly want to check a few column sizes for each breakpoint, and we want to be sure our last keyword is doing what we expect. I've created a simple piece of HTML to do this. I've also added a small amount of CSS to the file to correct box-sizing issues which will happen because of the additional 1px border. I've also restricted the height so text which wraps to a second line won't affect the heights. This is simply so everything remains in line, making it easy to see that our widths are working. I don't recommend setting heights on elements. EVER. Instead, use padding or line-height if you can to give an element more height, and let the content dictate the size of the element.
Create a file called index.html in the root of the project and inside add the following: <!doctype html> <html lang="en-GB"> <head> <meta charset="UTF-8"> <title>Susy Grid Test</title> <link rel="stylesheet" type="text/css" href="style.css" /> <style type="text/css"> *, *::before, *::after { box-sizing: border-box; } [class^="col"] { height: 1.5em; background-color: grey; border: 1px solid black; } </style> </head> <body> <div class="container"> <h1>Grid</h1> <div class="col-12 col-10-sm col-2-md col-10-lg">.col-sm-10.col-2-md.col-10-lg</div> <div class="col-12 col-2-sm-last col-10-md-last col-2-lg-last">.col-sm-2-last.col-10-md-last.col-2-lg-last</div> <div class="col-12 col-9-sm col-3-md col-9-lg">.col-sm-9.col-3-md.col-9-lg</div> <div class="col-12 col-3-sm-last col-9-md-last col-3-lg-last">.col-sm-3-last.col-9-md-last.col-3-lg-last</div> <div class="col-12 col-8-sm col-4-md col-8-lg">.col-sm-8.col-4-md.col-8-lg</div> <div class="col-12 col-4-sm-last col-8-md-last col-4-lg-last">.col-sm-4-last.col-8-md-last.col-4-lg-last</div> <div class="col-12 col-7-sm col-md-5 col-7-lg">.col-sm-7.col-md-5.col-7-lg</div> <div class="col-12 col-5-sm-last col-7-md-last col-5-lg-last">.col-sm-5-last.col-7-md-last.col-5-lg-last</div> <div class="col-12 col-6-sm col-6-md col-6-lg">.col-sm-6.col-6-md.col-6-lg</div> <div class="col-12 col-6-sm-last col-6-md-last col-6-lg-last">.col-sm-6-last.col-6-md-last.col-6-lg-last</div> </div> </body> </html> Use your dev tools responsive tools or simply resize the browser from full size down to around 320px and you'll see our grid works as expected. Summary In this article we used Susy grids as well as a simple breakpoint mixin (bp) to create a solid, flexible grid system. With just under 50 lines of Sass we generated our grid system which consists of almost 600 lines of CSS.  Resources for Article: Further resources on this subject: Implementation of SASS [article] Use of Stylesheets for Report Designing using BIRT [article] CSS Grids for RWD [article]
What is Responsive Web Design

Packt
03 Aug 2016
32 min read
In this article by Alex Libby, Gaurav Gupta, and Asoj Talesra, the authors of the book Responsive Web Design with HTML5 and CSS3 Essentials, we will cover the basic elements of responsive web design (RWD). Getting started with Responsive Web Design If one had to describe responsive web design in a sentence, it is an approach in which a website or a webpage dynamically adjusts its layout according to the size or resolution of the screen, so that content displays well across various screens and devices, such as mobiles, tablets, phablets or desktops. To understand what this means, let's use water as an example: the property of water is that it takes the shape of the container into which it is poured. This ensures that users get the best experience while using the website. We develop a single website that uses a single code base. This will contain fluid, proportion-based grids, flexible images and videos, and CSS3 media queries to work across multiple devices and device resolutions—the key to making them work is the use of percentage values in place of fixed units, such as pixel or em-based sizes. The best part of this is that we can use this technique without the need for server-based/backend solutions—to see it in action, we can use Packt's website as an example. Go ahead and browse to https://www.packtpub.com/web-development/mastering-html5-forms; this is what we will see as a desktop view: The mobile view for the same website shows this if viewed on a smaller device: We can clearly see the same core content is being displayed (that is, an image of the book, the buy button, pricing details and information about the book), but elements such as the menu have been transformed into a single drop down located in the top left corner. This is what responsive web design is all about—producing a flexible design that adapts according to which device we choose to use, in a format that suits the device being used. Understanding the elements of RWD Now that we've been introduced to RWD, it's important to understand some of the elements that make up the philosophy of what we know as flexible design. A key part of this is understanding the viewport, or visible screen real estate available to us—in addition to the viewport, the key elements of RWD center around flexible grids, flexible media, responsive text, and media queries. We will cover each in more detail later in the book, but for now, let's have a quick overview of the elements that make up RWD. Controlling the viewport A key part of RWD is working with the viewport, or visible content area on a device. If we're working with desktops, then it is usually the resolution; this is not the case for mobile devices. There is a temptation to reach for JavaScript (or a library, such as jQuery) to set values, such as viewport width or height: there is no need, as we can do this with a simple meta tag in our HTML: <meta name="viewport" content="width=device-width"> Or by using this directive: <meta name="viewport" content="width=device-width, initial-scale=1"> This means that the browser should render the width of the page to the same width as the browser window—if, for example, the latter is 480px, then the width of the page will be 480px.
To see what a difference not setting a viewport can have, take a look at this example screenshot: This example was created from displaying some text in Chrome, in iPhone 6 Plus emulation mode, but without a viewport. Now, let's take a look at the same text, but this time with a viewport directive set: Even though this is a simple example, do you notice any difference? Yes, the title color has changed, but more importantly the width of our display has increased. This is all part of setting a viewport—browsers frequently assume we want to view content as if we're on a desktop PC. If we don't tell it that the viewport area has been shrunken in size, it will try to shoe horn all of the content into a smaller size, which doesn't work very well! It's critical therefore that we set the right viewport for our design and that we allow it to scale up or down in size, irrespective of the device—we will explore this in more detail. Creating flexible grids When designing responsive websites, we can either create our own layout or use a grid system already created for use, such as Bootstrap. The key here though is ensuring that the mechanics of our layout sizes and spacing are set according to the content we want to display for our users, and that when the browser is resized in width, it realigns itself correctly. For many developers, the standard unit of measure has been pixel values; a key part of responsive design is to make the switch to using percentage and em (or preferably rem) units. The latter scale better than standard pixels, although there is a certain leap of faith needed to get accustomed to working with the replacements! Making media responsive A key part of our layout is, of course, images and text—the former though can give designers a bit of a headache, as it is not enough to simply use large images and set overflow: hidden to hide the parts that are not visible! Images in a responsive website must be as flexible as the grid used to host them—for some, this may be a big issue if the website is very content-heavy; now is a good time to consider if some of that content is no longer needed, and can be removed from the website. We can, of course simply apply display: none to any image which shouldn't be displayed, according to the viewport set. This isn't a good idea though, as content still has to be downloaded before styles can be applied; it means we're downloading more than is necessary! Instead, we should assess the level of content, make sure it is fully optimized, and apply percentage values so it can be resized automatically to a suitable size when the browser viewport changes. Constructing suitable breakpoints With content and media in place, we must turn our attention to media queries—there is a temptation to create queries that suit specific devices, but this can become a maintenance headache. We can avoid the headache by designing queries based on where the content breaks, rather than for specific devices—the trick to this is to start small and gradually enhance the experience, with the use of media queries: <link rel="stylesheet" media="(max-device-width: 320px)" href="mobile.css" /> <link rel="stylesheet" media="(min-width: 1600px)" href="widescreen.css" /> We should aim for around 75 characters per line, to maintain an optimal length for our content. 
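As a rough sketch of what content-driven breakpoints can look like in practice (the selector and pixel values below are placeholders chosen purely for illustration, not recommendations from the book), the queries might be structured like this:

```css
/* Illustrative only: breakpoints placed where the content starts to break,
   not at the widths of specific devices */
.article-body {
  width: 100%;   /* small screens: use the full width */
}

@media (min-width: 600px) {
  .article-body {
    width: 75%;  /* lines are getting long; start constraining them */
  }
}

@media (min-width: 900px) {
  .article-body {
    width: 60%;  /* keep line length close to the ~75-character target */
  }
}
```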
Introducing flexible grid layouts For many years, designers have built layouts of different types—they may be as simple as a calling card website, right through to a theme for a content management system, such as WordPress or Joomla. The meteoric rise of accessing the Internet through different devices means that we can no longer create layouts that are tied to specific devices or sizes—we must be flexible! To achieve this flexibility requires us to embrace a number of changes in our design process – the first being the type of layout we should create. A key part of this is the use of percentage values to define our layouts; rather than create something from the ground up, we can make use of a predefined grid system that has been tried and tested, as a basis for future designs. The irony is that there are lots of grid systems vying for our attention, so without further ado, let's make a start by exploring the different types of layouts, and how they compare to responsive designs. Understanding the different layout types A problem that has been faced by web designers for some years is the type of layout their website should use—should it be fluid, fixed width, have the benefits of being elastic, or be a hybrid version that draws on the benefits of a mix of these layouts? The type of layout we choose to use will of course depend on client requirements—making it a fluid layout means we are effectively one step closer to making it responsive: the difference being that the latter uses media queries to allow resizing of content for different devices, not just normal desktops! To understand the differences, and how responsive layouts compare, let's take a quick look at each in turn: Fixed-width layouts: These are constrained to a fixed width; a good size is around 960px, as this can be split equally into columns, with no remainder. The downside is that the fixed width makes assumptions about the available viewport area, and if the screen is too small or too large, it results in scrolling or lots of empty space, both of which affect the user experience. Fluid layouts: Instead of using static values, we use percentage-based units; it means that no matter what the size of the browser window, our website will adjust accordingly. This removes the problems that surround fixed layouts at a stroke. Elastic layouts: They are similar to fluid layouts, but the constraints are measured by type or font size, using em or rem units; these are based on the defined font size, so with a 16px base, 16px is 1 rem, 32px is 2 rem, and so on. These layouts allow for decent readability, with lines of 45-70 characters; font sizes are resized automatically. We may still see scrollbars appear in some instances, or experience some odd effects, if we zoom our page content. Hybrid layouts: They combine a mix of two or more of these different layout types; this allows us to choose static widths for some elements whilst others remain elastic or fluid. In comparison, responsive layouts take fluid layouts a step further, by using media queries to not only make our designs resize automatically, but present different views of our content on multiple devices. Exploring the benefits of flexible grid layouts Now that we've been introduced to grid layouts as a tenet of responsive design, it's a good opportunity to explore why we should use them.
Creating a layout from scratch can be time-consuming, and need lots of testing—there are some real benefits from using a grid layout: Grids make for a simpler design: Instead of trying to develop the proverbial wheel, we can focus on providing the content instead; the infrastructure will have already been tested by the developer and other users. They provide for a visually appealing design: Many people prefer content to be displayed in columns, so grid layouts make good use of this concept, to help organize content on the page. Grids can of course adapt to different size viewports: The system they use makes it easier to display a single codebase on multiple devices, which reduces the effort required for developers to maintain and webmasters to manage. Grids help with the display of adverts: Google has been known to favor websites which display genuine content and not those where it believes the sole purpose of the website is for ad generation; we can use the grid to define specific area for adverts, without getting in the way of natural content. All in all, it makes sense to familiarize ourselves with grid layouts—the temptation is of course to use an existing library. There is nothing wrong with this, but to really get the benefit out of using them, it's good to understand some of the basics around the mechanics of grid layouts, and how this can help with the construction of our website. Making media responsive Our journey through the basics of adding responsive capabilities to a website has so far touched on how we make our layouts respond automatically to changes – it's time for us to do the same to media! If your first thought is that we need lots of additional functionality to make media responsive, then I am sorry to disappoint—it's much easier, and requires zero additional software to do it! Yes, all we need is just a text editor and a browser. I'll use my favorite editor, Sublime Text, but you can use whatever works for you. Over the course of this chapter, we will take a look in turn at images, video, audio and text, and we'll see how with some simple changes, we can make each of them responsive. Let's kick off our journey first, with a look at making image content responsive. Creating fluid images It is often said that images speak a thousand words. We can express a lot more with media than we can using words. This is particularly true for website selling products—a clear, crisp image clearly paints a better picture than a poor quality one! When constructing responsive websites, we need our images to adjust in size automatically—to see why this is important, go ahead and extract coffee.html from a copy of the code download that accompanies this book, and run it in a browser. Try resizing the window—we should see something akin to this: It doesn't look great, does it? Leaving aside my predilection for nature's finest bean drink (that is, coffee!), we can't have images that don't resize properly, so let's take a look at what is involved to make this happen: Go ahead and extract a copy of coffee.html and save it to our project area. We also need our image. This is in the img folder; save a copy to the img folder in our project area. In a new text file, add the following code, saving it as coffee.css: img { max-width: 100%; height: auto; } Revert back to coffee.html. You will see line 6 is currently commented out; remove the comment tags. Save the file, then preview it in a browser. If all is well, we will still see the same image as before, but this time try resizing it. 
This time around, our image grows or shrinks automatically, depending on the size of our browser window: Although our image does indeed fit better, there are a couple of points we should be aware of when using this method: Sometimes you might see !important set as a property against the height attribute when working with responsive images; this isn't necessary, unless you're setting sizes in a website where image sizes may be overridden at a later date. We've set max-width to 100% as a minimum. You may also need to set a width value too, to be sure that your images do not become too big and break your layout. This is an easy technique to use, although there is a downside that can trip us up—spot what it is? If we use a high quality image, its file size will be hefty. We can't expect users of mobile devices to download it, can we? Don't worry though—there is a great alternative that has quickly gained popularity amongst browsers; we can use the <picture> element to control what is displayed, depending on the size of the available window. Implementing the <picture> element In a nutshell, responsive images are images that are displayed in their optimal form on a page, depending on the device your website is being viewed from. This can mean several things: You want to show a separate image asset based on the user's physical screen size—this might be a 13.5-inch laptop, or a 5-inch mobile phone screen. You want to show a separate image based on the resolution of the device, or using the device-pixel ratio (which is the ratio of device pixels to CSS pixels). You want to show an image in a specified image format (WebP, for example) if the browser supports it. Traditionally, we might have used simple scripting to achieve this, but it is at the risk of potentially downloading multiple images or none at all, if the script loads after images have loaded, or if we don't specify any image in our HTML and want the script to take care of loading images. We clearly need a better way to manage responsive images! A relatively new tag for HTML5 is perfect for this job: <picture>. We can use this in one of three different ways, depending on whether we want to resize an existing image, display a larger one, or show a high-resolution version of the image.
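As a minimal sketch of the markup involved (the file names, widths and alt text below are placeholders, not assets from the book's code download), a <picture> element might look like this:

```html
<!-- Illustrative <picture> sketch: the browser uses the first matching source -->
<picture>
  <source media="(min-width: 768px)" srcset="coffee-large.jpg">
  <source media="(min-width: 480px)" srcset="coffee-medium.jpg">
  <img src="coffee-small.jpg" alt="A cup of coffee">
</picture>
```

The <img> element is still required as the fallback—both for browsers that don't understand <picture>, and as the element the chosen source is actually rendered into.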
Making video responsive Flexible videos are somewhat more complex than images. The HTML5 <video> element maintains its aspect ratio just like images, and therefore we can apply the same CSS principle to make it responsive: video { max-width: 100%; height: auto !important; } Until relatively recently, there have been issues with HTML5 video—this is due in the main to split support for the codecs required to run HTML5 video. The CSS required to make an HTML5 video responsive is very straightforward, but using it directly presents a few challenges: Hosting video is bandwidth intensive and expensive. Streaming requires complex hardware support in addition to the video itself. It is not easy to maintain a consistent look and feel across different formats and platforms. For many, a better alternative is to host the video through a third-party service such as YouTube—we can let them worry about bandwidth issues and providing a consistent look and feel; we just have to make it fit on the page! This requires a little more CSS styling to make it work, so let's dig in and find out what is involved. Making text fit on screen When building websites, it goes without saying that our designs must start somewhere—this is usually with adding text. It's therefore essential that we allow for this in our responsive designs at the same time. Now is a perfect opportunity to explore how to do this—although text is not media in the same way as images or video, it is still content that has to be added at some point to our pages! With this in mind, let's dive in and explore how we can make our text responsive. Sizing with em units When working on non-responsive websites, it's likely that sizes will be quoted in pixel values – it's a perfectly acceptable way of working. However, if we begin to make our websites responsive, then content won't resize well using pixel values—we have to use something else. There are two alternatives: em or rem units. The former is based on setting a base font size that in most browsers defaults to 16px; in this example, the equivalent pixel sizes are given in the comments that follow each rule: h1 { font-size: 2.4em; } /* 38px */ p { line-height: 1.4em; } /* 22px */ Unfortunately there is an inherent problem with using em units—if we nest elements, then font sizes will be compounded, as em units are calculated relative to the parent. For example, if the font size of a list element is set at 1.4em (22px), then the font size of a list within a list becomes 30.8px (1.4 x 22px). To work around these issues, we can use rem values as a replacement—these are calculated from the root element, in place of the parent element. If you look carefully throughout many of the demos created for this book, you will see rem units being used to define the sizes of elements in that demo. Using rem units as a replacement The rem (or root em) unit is set to be relative to the root, instead of the parent – it means that we eliminate any issue with compounding at a stroke, as our reference point remains constant, and is not affected by other elements on the page. The downside of this is support—rem units are not supported in IE7 or 8, so if we still have to support these browsers, then we must fall back to using pixel or em values instead. This of course raises the question—should we still support these browsers, or is their usage of our website so small, as to not be worth the effort required to update our code?
If the answer is that we must support IE8 or below, then we can take a hybrid approach—we can set both pixel/em and rem values at the same time in our code, thus: .article-body { font-size: 18px; font-size: 1.125rem; /* 18 / 16 */ } .caps, figure, footer { font-size: 14px; font-size: 0.875rem; /* 14 / 16 */ } Notice how we set the pixel values first and the rem values second? Browsers which support rem units will use the rem value, as the later declaration wins; any that don't will simply ignore it and fall back to the pixel value instead. Exploring some examples Open a browser—let's go and visit some websites. Now, you may think I've lost my marbles, but stay with me: I want to show you a few examples. Let's take a look at a couple of example websites at different screen widths—how about this example, from my favorite coffee company, Starbucks: Try resizing the browser window—if you get small enough, you will see something akin to this: Now, what was the point of all that, I hear you ask? Well, it's simple—all of them use media queries in some form or other; CSS Tricks uses the queries built into WordPress, Packt's website is hosted using Drupal, and Starbucks' website is based around the Handlebars template system. The key here is that all use media queries to determine what should be displayed—throughout the course of this chapter, we'll explore using them in more detail, and see how we can use them to better manage content in responsive websites. Let's make a start with exploring their makeup in more detail. Understanding media queries The martial artist Bruce Lee sums it up perfectly, if we liken the effects of media queries to how water acts in different containers: "Empty your mind, be formless, shapeless - like water. Now you put water in a cup, it becomes the cup; you put water into a bottle it becomes the bottle; you put it in a teapot it becomes the teapot. Now water can flow or it can crash. Be water, my friend." We can use media queries to apply different CSS styles, based on available screen real estate or specific device characteristics. These might include, but are not limited to, the type of display, screen resolution or display density. Media queries work on the basis of testing to see if certain conditions are true, using this format: @media [not|only] [mediatype] and ([media feature]) { // CSS code; } We can use a similar principle to determine if entire style sheets should be loaded, instead of individual queries: <link rel="stylesheet" media="mediatype and|only|not (media feature)" href="myStyle.css"> Seems pretty simple, right? The great thing about media queries is that we don't need to download or install any additional software to use or create them – we can build most of them in the browser directly. Removing the need for breakpoints Up until now, we've covered how we can use breakpoints to control what is displayed, and when, according to which device is being used. Let's assume you're working on a project for a client, and have created a series of queries that use values such as 320px, 480px, 768px, and 1024px to cover support for a good range of devices. No matter what our design looks like, we will always be faced with two issues, if we focus on using specific screen viewports as the basis for controlling our designs: Keeping up with the sheer number of devices that are available The inflexibility of limiting our screen width So hold on: we're creating breakpoints, yet this can end up causing us more problems?
If we're finding ourselves creating lots of media queries that address specific problems (in addition to standard ones), then we will start to lose the benefits of a responsive website—instead we should re-examine our website, to understand why the design isn't working and see if we can't tweak it so as to remove the need for the custom query. Ultimately our website and target devices will dictate what is required—a good rule of thumb is if we are creating more custom queries than a standard bunch of 4-6 breakpoints, then perhaps it is time to recheck our design! As an alternative to working with specific screen sizes, there is a different approach we can take, which is to follow the principle of adaptive design, and not responsive design. Instead of simply specifying a number of fixed screen sizes (such as for the iPhone 6 Plus or a Samsung Galaxy unit), we build our designs around the point at which the design begins to fail. Why? The answer is simple—the idea here is to come up with different bands, where designs will work between a lower and upper value, instead of simply specifying a query that checks for fixed screen sizes that are lower or above certain values. Understanding the importance of speed The advent of using different devices to access the internet means speed is critical – the time it takes to download content from hosting servers, and how quickly the user can interact with the website are key to the success of any website. Why it is important to focus on the performance of our website on the mobile devices or those devices with lesser screen resolution? There are several reasons for this—they include: 80 percent of internet users owns a smartphone Around 90 percent of users go online through a mobile device, with 48% of users using search engines to research new products Approximately 72 percent users abandon a website if the loading time is more than 5-6 seconds Mobile digital media time is now significantly higher than compared to desktop use If we do not consider statistics such as these, then we may go ahead and construct our website, but end up with a customer losing both income and market share, if we have not fully considered the extent of where our website should work. Coupled with this is the question of performance – if our website is slow, then this will put customers off, and contribute to lost sales. A study performed by San Francisco-based Kissmetrics shows that mobile users wait between 6 to 10 seconds before they close the website and lose faith in it. At the same time, tests performed by Guy Podjarny for the Mediaqueri.es website (http://mediaqueri.es) indicate that we're frequently downloading the same content for both large and small screens—this is entirely unnecessary, when with some simple changes, we can vary content to better suit desktop PCs or mobile devices! So what can we do? Well, before we start exploring where to make changes, let's take a look at some of the reasons why websites run slowly. Understanding why pages load slowly Although we may build a great website that works well across multiple devices, it's still no good if it is slow! 
Every website will of course operate differently, but there are a number of factors to allow for, which can affect page (and website) speed: Downloading data unnecessarily: On a responsive website, we may hide elements that are not displayed on smaller devices; the use of display: none in code means that we still download content, even though we're not showing it on screen, resulting in slower websites and higher bandwidth usage. Downloading images before shrinking them: If we have not optimized our website with properly sized images, then we may end up downloading images that are larger than necessary on a mobile device. We can of course make them fluid by using percentage-based size values, but this places extra demand on the server and browser to resize them. A complicated DOM in use on the website: When creating a responsive website, we have to add in a layer of extra code to manage different devices; this makes the DOM more complicated, and slow our website down. It is therefore imperative that we're not adding in any unnecessary elements that require additional parsing time by the browser. Downloading media or feeds from external sources: It goes without saying that these are not under our control; if our website is dependent on them, then the speed of our website will be affected if these external sources fail. Use of Flash: Websites that rely on heavy use of Flash will clearly be slower to access than those that don't use the technology. It is worth considering if our website really needs to use it; recent changes by Adobe mean that Flash as a technology is being retired in favor of animation using other means such as HTML5 Canvas or WebGL. There is one point to consider that we've not covered in this list—the average size of a page has significantly increased since the dawn of the Internet in the mid-nineties. Although these figures may not be 100% accurate, they still give a stark impression of how things have changed: 1995: At that time the average page size used to be around 14.1 KB in size. The reason for it can be that it contained around 2 or 3 objects. That means just 2 or 3 calls to server on which the website was hosted. 2008: The average page size increased to around 498 KB in size, with an average use of around 70 objects that includes changes to CSS, images and JavaScript. Although this is tempered with the increased use of broadband, not everyone can afford fast access, so we will lose customers if our website is slow to load. All is not lost though—there are some tricks we can use to help optimize the performance of our websites. Testing website compatibility At this stage, our website would be optimized, and tested for performance—but what about compatibility? Although the wide range of available browsers has remained relatively static (at least for the ones in mainstream use), the functionality they offer is constantly changing—it makes it difficult for developers and designers to handle all of the nuances required to support each browser. In addition, the wide range makes it costly to support—in an ideal world, we would support every device available, but this is impossible; instead, we must use analytical software to determine which devices are being used, and therefore worthy of support. Working out a solution If we test our website on a device such as an iPhone 6, then there is a good chance it will work as well on other Apple devices, such as iPads. 
The same can be said for testing on a mobile device such as a Samsung Galaxy S4—we can use this principle to help prioritize support for particular mobile devices, if they require more tweaks to be made than for other devices. Ultimately though, we must use analytical software to determine who visits our website; the information such as browser, source, OS and device used will help determine what our target audience should be. This does not mean we completely neglect other devices; we can ensure they work with our website, but this will not be a priority during development. A key point of note is that we should not attempt to support every device – this is too costly to manage, and we would never keep up with all of the devices available for sale! Instead, we can use our analytics software to determine which devices are being used by our visitors; we can then test a number of different properties: Screen size: This should encompass a variety of different resolutions for desktop and mobile devices. Connection speed: Testing across different connection speeds will help us understand how the website behaves, and identify opportunities or weaknesses where we may need to effect changes. Pixel density: Some devices will support higher a pixel density, which allows them to display higher resolution images or content; this will make it easier to view and fix any issues with displaying content. Interaction style: The ability to view the Internet across different devices means that we should consider how our visitors interact with the website: is it purely on a desktop, or do they use tablets, smartphones or gaming-based devices? It's highly likely that the former two will be used to an extent, but the latter is not likely to feature as highly. Once we've determined which devices we should be supporting, then there are a range of tools available for us to use, to test browser compatibility. These include physical devices (ideal, but expensive to maintain), emulators or online services (these can be commercial, or free). Let's take a look at a selection of what is available, to help us test compatibility. Exploring tools available for testing When we test a mobile or responsive website, there are factors which we need to consider before we start testing, to help deliver a website which looks consistent across all the devices and browsers. These factors include: Does the website look good? Are there any bugs or defects? Is our website really responsive? To help test our websites, we can use any one of several tools (either paid or free)—a key point to note though is that we can already get a good idea of how well our websites work, by simply using the Developer toolbar that is available in most browsers! Viewing with Chrome We can easily emulate a mobile device within Chrome, by pressing Ctrl + Shift + M; Chrome displays a toolbar at the top of the window, which allows us to select different devices: If we click on the menu entry (currently showing iPhone 6 Plus), and change it to Edit, we can add new devices; this allows us to set specific dimensions, user agent strings and whether the device supports high-resolution images: Although browsers can go some way to providing an indication of how well our website works, they can only provide a limited view – sometimes we need to take things a step further and use commercial solutions to test our websites across multiple browsers at the same time. Let's take a look at some of the options available commercially. 
Exploring our options If you've spent any time developing code, then there is a good chance you may already be aware of Browserstack (from https://www.browserstack.com)—other options include the following: GhostLab: https://www.vanamco.com/ghostlab/ Muir: http://labs.iqfoundry.com/ CrossBrowserTesting: http://www.crossbrowsertesting.com/ If however all we need to do is check our website for its level of responsiveness, then we don't need to use paid options – there are a number of websites that allow us to check, without needing to installing plugins or additional tools: Am I Responsive: http://ami.responsive.is ScreenQueries: http://screenqueri.es Cybercrab's screen check facility: http://cybercrab.com/screencheck Remy Sharp's check website: http://responsivepx.com We can also use bookmarklets to check to see how well our websites work on different devices – a couple of examples to try are at http://codebomber.com/jquery/resizer and http://responsive.victorcoulon.fr/; it is worth noting that current browsers already include this functionality, making the bookmarklets less attractive as an option. We have now reached the end of our journey through the essentials of creating responsive websites with nothing more than plain HTML and CSS code. We hope you have enjoyed it as much as we have with writing, and that it helps you make a start into the world of responsive design using little more than plain HTML and CSS. Summary This article covers the elements of RWD and introduces us to the different flexible grid layouts. Resources for Article: Responsive Web Design with WordPress Web Design Principles in Inkscape Top Features You Need to Know About – Responsive Web Design
Showing cached content first then networks

Packt
03 Aug 2016
9 min read
In this article by Sean Amarasinghe, the author of the book Service Worker Development Cookbook, we are going to look at the methods that enable us to control cached content by creating a performance art event viewer web app. If you are a regular visitor to a certain website, chances are that you may be loading most of the resources, like CSS and JavaScript files, from your cache, rather than from the server itself. This saves the server bandwidth, as well as requests over the network. Having control over which content we deliver from the cache and the server is a great advantage. Service workers provide us with this powerful feature by giving us programmatic control over the content. (For more resources related to this topic, see here.) Getting ready To get started with service workers, you will need to have the experimental service worker feature turned on in your browser settings. Service workers only run across HTTPS. How to do it... Follow these instructions to set up your file structure. Alternatively, you can download the files from the following location: https://github.com/szaranger/szaranger.github.io/tree/master/service-workers/03/02/ First, we must create an index.html file as follows: <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>Cache First, then Network</title> <link rel="stylesheet" href="style.css"> </head> <body> <section id="events"> <h1><span class="nyc">NYC</span> Events TONIGHT</h1> <aside> <img src="hypecal.png" /> <h2>Source</h2> <section> <h3>Network</h3> <input type="checkbox" name="network" id="network-disabled-checkbox"> <label for="network">Disabled</label><br /> <h3>Cache</h3> <input type="checkbox" name="cache" id="cache-disabled-checkbox"> <label for="cache">Disabled</label><br /> </section> <h2>Delay</h2> <section> <h3>Network</h3> <input type="text" name="network-delay" id="network-delay" value="400" /> ms <h3>Cache</h3> <input type="text" name="cache-delay" id="cache-delay" value="1000" /> ms </section> <input type="button" id="fetch-btn" value="FETCH" /> </aside> <section class="data connection"> <table> <tr> <td><strong>Network</strong></td> <td><output id='network-status'></output></td> </tr> <tr> <td><strong>Cache</strong></td> <td><output id='cache-status'></output></td> </tr> </table> </section> <section class="data detail"> <output id="data"></output> </section> <script src="index.js"></script> </body> </html> Create a CSS file called style.css in the same folder as the index.html file. You can find the source code in the following location on GitHub: https://github.com/szaranger/szaranger.github.io/blob/master/service-workers/03/02/style.css Create a JavaScript file called index.js in the same folder as the index.html file. You can find the source code in the following location on GitHub: https://github.com/szaranger/szaranger.github.io/blob/master/service-workers/03/02/index.js Open up a browser and go to index.html. First we are requesting data from the network with the cache enabled. Click on the Fetch button. If you click Fetch again, the data is retrieved first from the cache and then from the network, so you see duplicate data. (Note that the last line is the same as the first.) Now we are going to select the Disabled checkbox under the Network label, and click the Fetch button again, in order to fetch data only from the cache. Finally, select the Disabled checkbox under the Network label, as well as the Cache label, and click the Fetch button again.
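Before stepping through the book's index.js, here is a simplified, generic sketch of the cache-then-network pattern the recipe is built around; the URL and the updatePage function are placeholders, and this is not the project's actual code:

```js
// Generic cache-then-network sketch (illustrative only)
var networkDataReceived = false;

// Kick off the network request
var networkUpdate = fetch('/events.json')
  .then(function (response) { return response.json(); })
  .then(function (data) {
    networkDataReceived = true;
    updatePage(data); // network data always wins
  });

// In parallel, try the cache
caches.match('/events.json')
  .then(function (response) {
    if (!response) { throw Error('No cached copy available'); }
    return response.json();
  })
  .then(function (data) {
    // Only render cached data if the network hasn't answered yet
    if (!networkDataReceived) { updatePage(data); }
  })
  .catch(function () { return networkUpdate; });
```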
How it works... In the index.js file, we are setting a page-specific name for the cache, as caches are per-origin, and no other page should use the same cache name: var CACHE_NAME = 'cache-and-then-network'; If you inspect the Resources tab of the development tools, you will find the cache inside the Cache Storage tab. If we have already fetched network data, we don't want the cache fetch to complete and overwrite the data that we just got from the network. We use the networkDataReceived flag to let the cache fetch callbacks know whether a network fetch has already completed: var networkDataReceived = false; We are storing the elapsed time for both network and cache in two variables: var networkFetchStartTime; var cacheFetchStartTime; The source URL for this example points to a file location in GitHub via RawGit: var SOURCE_URL = 'https://cdn.rawgit.com/szaranger/szaranger.github.io/master/service-workers/03/02/events'; If you want to set up your own source URL, you can easily do so by creating a gist, or a repository, in GitHub, and creating a file with your data in JSON format (you don't need the .json extension). Once you've done that, copy the URL of the file, head over to https://rawgit.com, and paste the link there to obtain another link with the correct content type header, as shown in the following screenshot: Between the time we press the Fetch button and the completion of receiving data, we have to make sure the user doesn't change the criteria for the search, or press the Fetch button again. To handle this situation, we disable the controls: function clear() { outlet.textContent = ''; cacheStatus.textContent = ''; networkStatus.textContent = ''; networkDataReceived = false; } function disableEdit(enable) { fetchButton.disabled = enable; cacheDelayText.disabled = enable; cacheDisabledCheckbox.disabled = enable; networkDelayText.disabled = enable; networkDisabledCheckbox.disabled = enable; if(!enable) { clear(); } } The returned data will be rendered to the screen in rows: function displayEvents(events) { events.forEach(function(event) { var tickets = event.ticket ? '<a href="' + event.ticket + '" class="tickets">Tickets</a>' : ''; outlet.innerHTML = outlet.innerHTML + '<article>' + '<span class="date">' + formatDate(event.date) + '</span>' + ' <span class="title">' + event.title + '</span>' + ' <span class="venue"> - ' + event.venue + '</span> ' + tickets + '</article>'; }); } Each item of the events array will be printed to the screen as a row. The handleFetchComplete function is the callback for both the cache and the network. If the Disabled checkbox is checked, we simulate a network error by throwing an error: var shouldNetworkError = networkDisabledCheckbox.checked, cloned; if (shouldNetworkError) { throw new Error('Network error'); } Because request bodies can only be read once, we have to clone the response: cloned = response.clone(); We place the cloned response in the cache using cache.put as a key-value pair. This helps subsequent cache fetches find this updated data: caches.open(CACHE_NAME).then(function(cache) { cache.put(SOURCE_URL, cloned); // cache.put(URL, response) }); Now we read the response in JSON format.
We also make sure, using the networkDataReceived flag, that the data we have just received from the network will not be overwritten by any in-flight cache request:

response.json().then(function(data) {
  displayEvents(data);
  networkDataReceived = true;
});

To prevent overwriting the data we received from the network, we make sure only to update the page in case the network request has not yet returned:

result.json().then(function(data) {
  if (!networkDataReceived) {
    displayEvents(data);
  }
});

When the user presses the Fetch button, they make nearly simultaneous requests of the network and the cache for data. In a real-world application this happens on page load, instead of being the result of a user action:

fetchButton.addEventListener('click', function handleClick() {
  ...
});

We start by disabling any user input while the network fetch requests are initiated:

disableEdit(true);
networkStatus.textContent = 'Fetching events...';
networkFetchStartTime = Date.now();

We request data with the fetch API, with a cache-busting URL, as well as a no-cache option in order to support Firefox, which hasn't implemented the caching options yet:

networkFetch = fetch(SOURCE_URL + '?cacheBuster=' + now, {
  mode: 'cors',
  cache: 'no-cache',
  headers: headers
})

In order to simulate network delays, we wait before calling the network fetch callback. In situations where the callback errors out, we have to make sure that we reject the promise we received from the original fetch:

return new Promise(function(resolve, reject) {
  setTimeout(function() {
    try {
      handleFetchComplete(response);
      resolve();
    } catch (err) {
      reject(err);
    }
  }, networkDelay);
});

To simulate cache delays, we wait before calling the cache fetch callback. If the callback errors out, we make sure that we reject the promise we got from the original match call:

return new Promise(function(resolve, reject) {
  setTimeout(function() {
    try {
      handleCacheFetchComplete(response);
      resolve();
    } catch (err) {
      reject(err);
    }
  }, cacheDelay);
});

The formatDate function is a helper function for us to convert the date format we receive in the response into a much more readable format on the screen:

function formatDate(date) {
  var d = new Date(date),
    month = (d.getMonth() + 1).toString(),
    day = d.getDate().toString(),
    year = d.getFullYear();
  if (month.length < 2) month = '0' + month;
  if (day.length < 2) day = '0' + day;
  return [month, day, year].join('-');
}

If you prefer a different date format, you can shuffle the position of the array in the return statement to your preferred format.

Summary

In this article, we have learned how to control cached content by creating a performance art event viewer web app.

Resources for Article:

Further resources on this subject: AngularJS Web Application Development Cookbook [Article] Being Offline [Article] Consuming Web Services using Microsoft Dynamics AX [Article]

Basic Website using Node.js and MySQL database

Packt
14 Jul 2016
5 min read
In this article by Fernando Monteiro, author of the book Node.JS 6.x Blueprints, we will understand some basic concepts of a Node.js application using a relational database (MySQL) and also try to look at some differences between the Object Document Mapper (ODM) from MongoDB and the Object Relational Mapper (ORM) used by Sequelize and MySQL. For this we will create a simple application and use the resources we have available, as Sequelize is a powerful middleware for the creation of models and database mapping. We will also use another template engine called Swig and demonstrate how we can add the template engine manually. (For more resources related to this topic, see here.)

Creating the baseline applications

The first step is to create another directory; I'll use the root folder. Create a folder called chapter-02. Open your terminal/shell on this folder and type the express command:

express --git

Note that we are using only the --git flag this time; we will use another template engine, but we will install it manually.

Installing Swig template engine

The first step is to change the default Express template engine to Swig, a pretty simple template engine that is very flexible and stable, and also offers us a syntax very similar to Angular, denoting expressions just by using double curly brackets {{ variableName }}. More information about Swig can be found on the official website at: http://paularmstrong.github.io/swig/docs/

Open the package.json file and replace the jade line with the following:

"swig": "^1.4.2"

Open your terminal/shell on the project folder and type:

npm install

Before we proceed, let's make some adjustments to app.js; we need to add the swig module. Open app.js and add the following code, right after the var bodyParser = require('body-parser'); line:

var swig = require('swig');

Replace the default jade template engine line with the following code:

var swig = new swig.Swig();
app.engine('html', swig.renderFile);
app.set('view engine', 'html');

Refactoring the views folder

Let's change the views folder to the following new structure:

views
  pages/
  partials/

Remove the default jade files from views. Create a file called layout.html inside the pages folder and place the following code:

<!DOCTYPE html>
<html>
<head>
</head>
<body>
  {% block content %}
  {% endblock %}
</body>
</html>

Create an index.html inside the views/pages folder and place the following code:

{% extends 'layout.html' %}
{% block title %}{% endblock %}
{% block content %}
  <h1>{{ title }}</h1>
  Welcome to {{ title }}
{% endblock %}

Create an error.html page inside the views/pages folder and place the following code:

{% extends 'layout.html' %}
{% block title %}{% endblock %}
{% block content %}
  <div class="container">
    <h1>{{ message }}</h1>
    <h2>{{ error.status }}</h2>
    <pre>{{ error.stack }}</pre>
  </div>
{% endblock %}

We need to adjust the views path on app.js; replace the code on line 14 with the following code:

// view engine setup
app.set('views', path.join(__dirname, 'views/pages'));

At this point we have completed the first step to start our MVC application. In this example we will use the MVC pattern in its full meaning: Model, View, Controller.

Creating controllers folder

Create a folder called controllers inside the root project folder.
Create an index.js inside the controllers folder and place the following code:

// Index controller
exports.show = function(req, res) {
  // Show index content
  res.render('index', {
    title: 'Express'
  });
};

Edit the app.js file and replace the original index route app.use('/', routes); with the following code:

app.get('/', index.show);

Add the controller path to app.js on line 9, replacing the original code with the following code:

// Inject index controller
var index = require('./controllers/index');

Now it's time to check whether everything goes as expected: we run the application and check the result. Type the following command on your terminal/shell:

npm start

Check with the following URL: http://localhost:3000; you'll see the welcome message of the Express framework.

Removing the default routes folder

Remove the routes folder and its content. Remove the user route from app.js, after the index controller and on line 31.

Adding partials files for head and footer

Inside views/partials, create a new file called head.html and place the following code:

<meta charset="utf-8">
<title>{{ title }}</title>
<link rel='stylesheet' href='https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/4.0.0-alpha.2/css/bootstrap.min.css'>
<link rel="stylesheet" href="/stylesheets/style.css">

Inside views/partials, create a file called footer.html and place the following code:

<script src='https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.1/jquery.min.js'></script>
<script src='https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/4.0.0-alpha.2/js/bootstrap.min.js'></script>

Now it is time to add the partials files to the layout.html page using the include tag. Open layout.html and add the following highlighted code:

<!DOCTYPE html>
<html>
<head>
  {% include "../partials/head.html" %}
</head>
<body>
  {% block content %}
  {% endblock %}
  {% include "../partials/footer.html" %}
</body>
</html>

Finally, we are prepared to continue with our project; at this point, our directory structure looks like the following image:

Folder structure

Summary

In this article, we discussed the basic concepts of Node.js and the MySQL database, and we also saw how to replace the default Express template engine and use another resource, the Swig template library, to build a basic website.

Resources for Article:

Further resources on this subject: Exception Handling in MySQL for Python [article] Python Scripting Essentials [article] Splunk's Input Methods and Data Feeds [article]

Before You Begin

Packt
13 Jul 2016
14 min read
In this article, Ashley Chiasson, the author of the book Mastering Articulate Storyline, provides you with an introduction to the purpose of this book and to best practices related to e-learning product development. In this article, we will cover the following topics:

Pushing Articulate Storyline to the limit
Best practices
How to be mindful of reusability
Methods for organizing your project
The differences between storyboarding and rapid development
Ways of streamlining your development

(For more resources related to this topic, see here.)

Pushing Articulate Storyline to the limit

The purpose of this book is really to get you comfortable with pushing Articulate Storyline to its limits. Doing this may also broaden your imagination, allowing you to push your creativity to its limits. There are so many things you can do within Storyline, and a lot of those features, interactions, or functions are overlooked because they just aren't used all that often. Oftentimes, the basic functionality overshadows the more advanced functions because they're easier, they often address the need, and they take less time to learn. That's understandable, but this book is going to open your mind to many more things possible within this tool. You'll get excited, frustrated, excited again, and probably frustrated a few more times, but with all of the practical activities for you to follow along with (and/or reverse engineer), you'll be mastering Articulate Storyline and pushing it to its limits in no time! If you don't quite get one of the concepts explained, don't worry. You'll always have access to this book and the activity downloads as a handy reference or refresher.

Best practices

Before you get too far into your development, it's important to take some steps to streamline your approach by establishing best practices; doing this will help you become more organized and efficient. Everyone has their own process, so this is by no means a prescribed format for the proper way of doing things. These are just some recommendations, from personal experience, that have proven effective for an e-learning developer. Please note that these best practices are not necessarily Storyline-related, but are best practices to consider ahead of development within any e-learning project. Your best practices will likely be project-specific in terms of how your clients work or how your organization's internal processes work. Sometimes you'll be provided with a storyboard ahead of development and sometimes you'll be expected to rapidly develop. Sometimes you'll be provided with all multimedia ahead of development and sometimes you'll be provided with multimedia after an alpha review. You may want to do a content dump at the beginning of your development process or you may want to work through each slide from start to finish before moving on. Through experience and observation of what other developers are doing, you will learn how to define and adapt your best practices. When a new project comes along, it's always a good idea to employ some form of organization. There are many great reasons for this, some of which include being mindful of reusability, maintaining and organizing project and file structure, and streamlining your development process. This article aims to provide you with as much information as necessary to ensure that you are effectively organizing your projects for enhanced efficiency and an understanding of why these methods should always be considered best practices.
How to be mindful of reusability When I think about reusability in e-learning, I think about objects and content that can be reused in a variety of contexts. Developers often run into this when working on large projects or in industries that involve trade-specific content. When working on multiple projects within one sector, you may come across assets used previously in one course (for example, a 3D model of an aircraft) that may be reused in another course of the same content base. Being able to reuse content and/or assets can come in handy as it can save you resources in the long run. Reusing previously established assets (if permitted to do so, of course) would reduce the amount of development time various departments and/or individuals need to spend. Best practices for reusability might include creating your own content repository and defining a file naming convention that will make it easy for you to quickly find what you're looking for. If you're extra savvy, you can create a metadata-coded database, but that might require a lot more effort than you have available. While it does take extra time to either come up with a file naming convention or apply metadata tagging to all assets within your repository, the goal is to make your life easier in the long run. Much like the dreaded administrative tasks required of small business owners, it's not the most sought-after task, but it's a necessary one, especially if you truly want to optimize efficiency! Within Articulate Storyline, you may want to maintain a repository of themes and interactions as you can use elements of these assets for future development and they can save you a lot of time. Most projects, in the early stages, require an initial prototype for the client to sign off on the general look and feel. In this prototyping phase, having a repository of themes and interactions can really make the process a lot smoother because you can call on previous work in order to easily facilitate the elemental design of a new project. Storyline allows you to import content from many sources (for example, PowerPoint, Articulate Engage, Articulate Quizmaker, and more), so you don't feel limited to just reusing Storyline interactions and/or themes. Just structure your repository in an organized manner and you will be able to easily locate the files and file types that you're looking to use at a later date. Another great thing Articulate Storyline is good for when it comes to reusability is Question Banks! Most courses contain questions, knowledge checks, assessments, or whatever you want to call them, but all too seldom do people think about compiling these questions in one neat area for reuse later on. Instead, people often add new question slides, add the question, and go on their merry development way. If you're one of those people, you need to STOP. Your life will be entirely changed by the concept of question banks—if not entirely, at least a little bit, or at least the part of your life that dabbles in development will be changed in some small way. Question banks allow you to create a bank of questions (who would have thought) and call on these questions at any time for placement within your story—reusability at its finest, at least in Storyline. Methods for organizing your project Organizing your project is a necessary evil. Surely there is someone out there who loves this process, but for others who just want to develop all day and all night, there may be a smaller emphasis placed on organization. 
However, you can take some simple steps to organize your project that can be reused for future projects. Within Storyline, the organizational emphasis of this article will be placed on using Story View and optimizing the use of scenes. These are two elements of Storyline that, depending on the size of your project, can make a world of difference when it comes to making sense of all the content you've authored in terms of making the structure of your content more palatable. Using the Story View Story View is such a great feature of Storyline! It provides you with a bird's eye view of your project, or story, and essentially shows you a visual blueprint of all scenes and slides. This is particularly helpful in projects that involve a lot of branching. Instead of seeing the individual parts, you're seeing the parts as they represent the whole—the Gestalt psychology would be proud! You can also use Story View to plan out the movement of existing scenes or slides if content isn't lining up quite the way you want it to: Optimizing scene use Scenes play a very big role in maintaining organization within your story. They serve to group slides into smaller segments of the entire story and are typically defined using logical breaks. However, it's all up to you how you decide to group your slides. If the story you're working on consists of multiple topics or modules, each topic or module would logically become a new scene. Visually, scenes work in tandem with Story View in that while you're in Story View, you can clearly see the various scenes and move things around appropriately. Functionally, scenes serve to create submenus in the main Storyline menu, but you can change this if you don't want to see each scene delineated in the menu. From an organization and control perspective, scenes can help you reel in unwieldy and overwhelming content. This particularly comes in handy with large courses, where you can easily lose your place when trying to track down a specific slide of a scene, for example, in a sea of 150 slides. In this sense, scenes allow you to chunk content into more manageable scenes within your story and will likely allow you to save on development and revision time. Using scenes will also help when it comes to previewing your story. Instead of having to wait to load 150 slides each time you preview, you can choose to preview a scene and will only have to wait for the slides in that scene to load—perhaps 15 slides of the entire course instead of 150. Scenes really are a magical thing! Asset management Asset management is just what it sounds like—managing your assets. Now, your assets may come in many forms, for example, media assets (your draft and/or completed images/video/audio), customer furnished assets (files provided by the client, which could be raw images/video/audio/PowerPoint/Word documents, and so on.), or content output (outputs from whichever authoring tool you're using). If you've worked on large projects, you will likely relate to how unwieldy these assets can become if you don't have a system in place for keeping everything organized. This is where the management element comes into play. Structuring your folders Setting up a consistent folder structure is really important when it comes to managing your assets. Structuring your folders may seem like a daunting administrative task, but once you determine a structure that works well for you and your projects, you can copy the structure for each project. 
So yeah, there is a little bit of up front effort, but the headache it will save you in the long run when it comes to tracking down assets for reuse is worth the effort! Again, this folder structure is in no way prescribed, but it is a recommendation, and one that has worked well. It looks something like the following: It may look overwhelming, but it's really not that bad. There are likely more elements accounted here than you may need for your project, but all main elements are included, and you can customize it as you see fit. This is how the folder structure breaks down: Project Folder: 100 Project Management Depending on how large the project is, this folder may have subfolders, for example: Meeting Minutes Action Tracking Risk Management Contracts Invoices 200 Development This folder typically contains subfolders related to my development, for example: Client-Furnished Information (CFI) Scripts and Storyboards Scripts Audio Narration Storyboards Media Video Audio Draft Audio Final Audio Images Flash Output Quality Assurance 300 Client This folder will include anything sent to the client for review, for example: Delivered Review Comments Final Within these folders, there may be other subfolders, but this is the general structure that has proven effective for me. When it comes to filenames, you may wish to follow a file naming convention dictated by the client or follow an internal file naming convention, which indicates the project, type of media, asset number, and version number, for example, PROJECT_A_001_01. If there are multiple courses for one project, you may also want to add an arbitrary course number to keep tabs on which asset belongs to which course. Once a file naming convention has been determined, these filenames will be managed within a spreadsheet, housed within the main 200>Media folder. The basic goal of this recommended folder structure is to organize your course assets and break them into three groups to further help with the organization. If this folder structure sounds like it might be functional for your purposes, go ahead and download a ready-made version of the folder structure. Storyboarding and rapid prototyping Storyboarding and rapid prototyping will likely make their way into your development glossary, if they haven't already, so they're important concepts to discuss when it comes to streamlining your development. Through experience, you'll learn how each of these concepts can help you become more efficient, and this section will discuss some benefits and detriments of both. Storyboarding is a process wherein the sequence of an e-learning project is laid out visually or textually. This process allows instructional designers to layout the e-learning project to indicate screens, topics, teaching points, onscreen text, and media descriptions. However, storyboards may not be limited to just those elements. There are many variations. However, the previously mentioned elements are most commonly represented within a storyboard. Other elements may include audio narration script, assessment items, high-level learning objectives, filenames, source/reference images, or screenshots illustrating the anticipated media asset or screen to be developed. The good thing about storyboarding is that it allows you to organize the content and provides documentation that may be reviewed prior to entry into an authoring environment. 
Storyboarding provides subject matter experts with a great opportunity for ironing out textual content to ensure accuracy, and can help developers in terms of reducing small text changes once in the authoring environment. These small changes are just that, small, but they also add up quickly and can quickly throw a wrench into your well-oiled, efficient, development machine. Storyboarding also has its downsides. It is an extra step in the development process and may be perceived, by potential clients, as an additional and unnecessary expense. Because storyboards do not depict the final product, reviewers may have difficulty in reviewing content as they cannot contextualize without being able to see the final product. This can be especially true when it comes to reviewing a storyboard involving complex branching scenarios. Rapid prototyping on the other hand involves working within the authoring environment, in this case Articulate Storyline, to develop your e-learning project, slide by slide. This may occur in developing an initial prototype, but may also occur throughout the lifecycle of the project as a means for eliminating the step of storyboarding from the development process. With rapid prototyping, reviewers have added context of visuals and functionality. They are able to review a proposed version of the end product, and as such, their review comments may become more streamlined and their review may take less time to conduct. However, reviewers may also get overloaded by visual stimuli, which may hamper their ability to review for content accuracy. Additionally, rapid prototyping may become less rapid when it comes to revising complex interactions. In both situations, there are clear advantages and disadvantages, so a best practice should be to determine an appropriate way ahead with regard to development and understand which process may best suit the project for which you are authoring. Streamlining your development Storyline provides you with so many ways to streamline your development. A sampling of topics discussed includes the following: Setting up auto-save Setting up defaults Keyboard shortcuts Dockable panels Using the format painter Using the eyedropper Cue points Duplicating objects Naming objects Summary This article introduced you to the concept of pushing Articulate Storyline 2 to its limits, provided you with some tips and tricks when it comes to best practices and being mindful of reusability, identified a functional folder structure and explained the importance that organization will play in your Storyline development, explained the difference between storyboarding and rapid prototyping, and gave you a taste of some topics that may help you streamline your development process. You are now armed with all of my best advice for staying productive and organized, and you should be ready to start a new Storyline project! Resources for Article: Further resources on this subject: Data Science with R [article] Sizing and Configuring your Hadoop Cluster [article] Creating Your Own Theme—A Wordpress Tutorial [article]

Working with Spring Tag Libraries

Packt
13 Jul 2016
26 min read
In this article by Amuthan G, the author of the book Spring MVC Beginners Guide - Second Edition, you are going to learn more about the various tags that are available as part of the Spring tag libraries. (For more resources related to this topic, see here.) After reading this article, you will have a good idea about the following topics: JavaServer Pages Standard Tag Library (JSTL) Serving and processing web forms Form-binding and whitelisting Spring tag libraries JavaServer Pages Standard Tag Library JavaServer Pages (JSP) is a technology that lets you embed Java code inside HTML pages. This code can be inserted by means of <% %> blocks or by means of JSTL tags. To insert Java code into JSP, the JSTL tags are generally preferred, since tags adapt better to their own tag representation of HTML, so your JSP pages will look more readable. JSP even lets you  define your own tags; you must write the code that actually implements the logic of your own tags in Java. JSTL is just a standard tag library provided by Oracle. We can add a reference to the JSTL tag library in our JSP pages as follows: <%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core"%> Similarly, Spring MVC also provides its own tag library to develop Spring JSP views easily and effectively. These tags provide a lot of useful common functionality such as form binding, evaluating errors and outputting messages, and more when we work with Spring MVC. In order to use these, Spring MVC has provided tags in our JSP pages. We must add a reference to that tag library in our JSP pages as follows: <%@taglib prefix="form" uri="http://www.springframework.org/tags/form" %> <%@taglib prefix="spring" uri="http://www.springframework.org/tags" %> These taglib directives declare that our JSP page uses a set of custom tags related to Spring and identify the location of the library. It also provides a means to identify the custom tags in our JSP page. In the taglib directive, the uri attribute value resolves to a location that the servlet container understands and the prefix attribute informs which bits of markup are custom actions. Serving and processing forms In Spring MVC, the process of putting a HTML form element's values into model data is called form binding. The following line is a typical example of how we put data into the Model from the Controller: model.addAttribute(greeting,"Welcome") Similarly, the next line shows how we retrieve that data in the View using a JSTL expression: <p> ${greeting} </p> But what if we want to put data into the Model from the View? How do we retrieve that data in the Controller? For example, consider a scenario where an admin of our store wants to add new product information to our store by filling out and submitting a HTML form. How can we collect the values filled out in the HTML form elements and process them in the Controller? This is where the Spring tag library tags help us to bind the HTML tag element's values to a form backing bean in the Model. Later, the Controller can retrieve the formbacking bean from the Model using the @ModelAttribute (org.springframework.web.bind.annotation.ModelAttribute) annotation. The form backing bean (sometimes called the form bean) is used to store form data. We can even use our domain objects as form beans; this works well when there's a close match between the fields in the form and the properties in our domain object. Another approach is creating separate classes for form beans, which is sometimes called Data Transfer Objects (DTO). 
Time for action – serving and processing forms The Spring tag library provides some special <form> and <input> tags, which are more or less similar to HTML form and input tags, but have some special attributes to bind form elements’ data with the form backed bean. Let's create a Spring web form in our application to add new products to our product list: Open our ProductRepository interface and add one more method declaration to it as follows: void addProduct(Product product); Add an implementation for this method in the InMemoryProductRepository class as follows: @Override public void addProduct(Product product) { String SQL = "INSERT INTO PRODUCTS (ID, " + "NAME," + "DESCRIPTION," + "UNIT_PRICE," + "MANUFACTURER," + "CATEGORY," + "CONDITION," + "UNITS_IN_STOCK," + "UNITS_IN_ORDER," + "DISCONTINUED) " + "VALUES (:id, :name, :desc, :price, :manufacturer, :category, :condition, :inStock, :inOrder, :discontinued)"; Map<String, Object> params = new HashMap<>(); params.put("id", product.getProductId()); params.put("name", product.getName()); params.put("desc", product.getDescription()); params.put("price", product.getUnitPrice()); params.put("manufacturer", product.getManufacturer()); params.put("category", product.getCategory()); params.put("condition", product.getCondition()); params.put("inStock", product.getUnitsInStock()); params.put("inOrder", product.getUnitsInOrder()); params.put("discontinued", product.isDiscontinued()); jdbcTempleate.update(SQL, params); } Open our ProductService interface and add one more method declaration to it as follows: void addProduct(Product product); And add an implementation for this method in the ProductServiceImpl class as follows: @Override public void addProduct(Product product) { productRepository.addProduct(product); } Open our ProductController class and add two more request mapping methods as follows: @RequestMapping(value = "/products/add", method = RequestMethod.GET) public String getAddNewProductForm(Model model) { Product newProduct = new Product(); model.addAttribute("newProduct", newProduct); return "addProduct"; } @RequestMapping(value = "/products/add", method = RequestMethod.POST) public String processAddNewProductForm(@ModelAttribute("newProduct") Product newProduct) { productService.addProduct(newProduct); return "redirect:/market/products"; } Finally, add one more JSP View file called addProduct.jsp under the  src/main/webapp/WEB-INF/views/ directory and add the following tag reference declaration as the very first line in it: <%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core"%> <%@ taglib prefix="form" uri="http://www.springframework.org/tags/form" %> Now add the following code snippet under the tag declaration line and save addProduct.jsp. 
Note that I skipped some <form:input> binding tags for some of the fields of the product domain object, but I strongly encourage you to add binding tags for the skipped fields while you are trying out this exercise: <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"> <link rel="stylesheet" href="//netdna.bootstrapcdn.com/bootstrap/3.0.0/css/bootstrap.min.css"> <title>Products</title> </head> <body> <section> <div class="jumbotron"> <div class="container"> <h1>Products</h1> <p>Add products</p> </div> </div> </section> <section class="container"> <form:form method="POST" modelAttribute="newProduct" class="form-horizontal"> <fieldset> <legend>Add new product</legend> <div class="form-group"> <label class="control-label col-lg-2 col-lg-2" for="productId">Product Id</label> <div class="col-lg-10"> <form:input id="productId" path="productId" type="text" class="form:input-large"/> </div> </div> <!-- Similarly bind <form:input> tag for name,unitPrice,manufacturer,category,unitsInStock and unitsInOrder fields--> <div class="form-group"> <label class="control-label col-lg-2" for="description">Description</label> <div class="col-lg-10"> <form:textarea id="description" path="description" rows = "2"/> </div> </div> <div class="form-group"> <label class="control-label col-lg-2" for="discontinued">Discontinued</label> <div class="col-lg-10"> <form:checkbox id="discontinued" path="discontinued"/> </div> </div> <div class="form-group"> <label class="control-label col-lg-2" for="condition">Condition</label> <div class="col-lg-10"> <form:radiobutton path="condition" value="New" />New <form:radiobutton path="condition" value="Old" />Old <form:radiobutton path="condition" value="Refurbished" />Refurbished </div> </div> <div class="form-group"> <div class="col-lg-offset-2 col-lg-10"> <input type="submit" id="btnAdd" class="btn btn-primary" value ="Add"/> </div> </div> </fieldset> </form:form> </section> </body> </html> Now run our application and enter the URL: http://localhost:8080/webstore/market/products/add. You will be able to see a web page showing a web form to add product information as shown in the following screenshot:Add a products web form Now enter all the information related to the new product that you want to add and click on the Add button. You will see the new product added in the product listing page under the URL http://localhost:8080/webstore/market/products. What just happened? In the whole sequence, steps 5 and 6 are very important steps that need to be observed carefully. Whatever was mentioned prior to step 5 was very familiar to you I guess. Anyhow, I will give you a brief note on what we did in steps 1 to 4. In step 1, we just created an addProduct method declaration in our ProductRepository interface to add new products. And in step 2, we just implemented the addProduct method in our InMemoryProductRepository class. Steps 3 and 4 are just a Service layer extension for ProductRepository. In step 3, we declared a similar method addProduct in our ProductService and implemented it in step 4 to add products to the repository via the productRepository reference. 
Okay, coming back to the important step; what we did in step 5 was nothing but adding two request mapping methods, namely getAddNewProductForm and processAddNewProductForm: @RequestMapping(value = "/products/add", method = RequestMethod.GET) public String getAddNewProductForm(Model model) { Product newProduct = new Product(); model.addAttribute("newProduct", newProduct); return "addProduct"; } @RequestMapping(value = "/products/add", method = RequestMethod.POST) public String processAddNewProductForm(@ModelAttribute("newProduct") Product productToBeAdded) { productService.addProduct(productToBeAdded); return "redirect:/market/products"; } If you observe those methods carefully, you will notice a peculiar thing, that is, both the methods have the same URL mapping value in their @RequestMapping annotations (value = "/products/add"). So if we enter the URL http://localhost:8080/webstore/market/products/add in the browser, which method will Spring MVC  map that request to? The answer lies in the second attribute of the @RequestMapping annotation (method = RequestMethod.GET and method = RequestMethod.POST). Yes if you look again, even though both methods have the same URL mapping, they differ in the request method. So what is happening behind the screen is when we enter the URL http://localhost:8080/webstore/market/products/add in the browser, it is considered as a GET request, so Spring MVC will map that request to the getAddNewProductForm method. Within that method, we simply attach a new empty Product domain object with the model, under the attribute name newProduct. So in the  addproduct.jsp View, we can access that newProduct Model object: Product newProduct = new Product(); model.addAttribute("newProduct", newProduct); Before jumping into the processAddNewProductForm method, let's review the addproduct.jsp View file for some time, so that you understand the form processing flow without confusion. In addproduct.jsp, we just added a <form:form> tag from Spring's tag library: <form:form modelAttribute="newProduct" class="form-horizontal"> Since this special <form:form> tag is coming from a Spring tag library, we need to add a reference to that tag library in our JSP file; that's why we added the following line at the top of the addProducts.jsp file in step 6: <%@ taglib prefix="form" uri="http://www.springframework.org/tags/form" %> In the Spring <form:form> tag, one of the important attributes is modelAttribute. In our case, we assigned the value newProduct as the value of the modelAttribute in the <form:form> tag. If you remember correctly, you can see that this value of the modelAttribute and the attribute name we used to store the newProduct object in the Model from our getAddNewProductForm method are the same. So the newProduct object that we attached to the model from the Controller method (getAddNewProductForm) is now bound to the form. This object is called the form backing bean in Spring MVC. Okay now you should look at every <form:input> tag inside the <form:form>tag. You can observe a common attribute in every tag. That attribute is path: <form:input id="productId" path="productId" type="text" class="form:input-large"/> The path attribute just indicates the field name that is relative to form backing bean. So the value that is entered in this input box at runtime will be bound to the corresponding field of the form bean. Okay, now it’s time to come back and review our processAddNewProductForm method. When will this method be invoked? 
This method will be invoked once we press the submit button on our form. Yes, since every form submission is considered a POST request, this time the browser will send a POST request to the same URL http://localhost:8080/webstore/products/add. So this time the processAddNewProductForm method will get invoked since it is a POST request. Inside the processAddNewProductForm method, we simply are calling the addProduct service method to add the new product to the repository: productService.addProduct(productToBeAdded); But the interesting question here is how come the productToBeAdded object is populated with the data that we entered in the form? The answer lies in the @ModelAttribute (org.springframework.web.bind.annotation.ModelAttribute) annotation. Notice the method signature of the processAddNewProductForm method: public String processAddNewProductForm(@ModelAttribute("newProduct") Product productToBeAdded) Here if you look at the value attribute of the @ModelAttribute annotation, you can observe a pattern. Yes, the @ModelAttribute annotation's value and the value of the modelAttribute from the <form:form> tag are the same. So Spring MVC knows that it should assign the form bounded newProduct object to the processAddNewProductForm method's parameter productToBeAdded. The @ModelAttribute annotation is not only used to retrieve a object from the Model, but if we want we can even use the @ModelAttribute annotation to add objects to the Model. For instance, we can even rewrite our getAddNewProductForm method to something like the following with using the @ModelAttribute annotation: @RequestMapping(value = "/products/add", method = RequestMethod.GET) public String getAddNewProductForm(@ModelAttribute("newProduct") Product newProduct) { return "addProduct"; } You can see that we haven't created a new empty Product domain object and attached it to the model. All we did was added a parameter of the type Product and annotated it with the @ModelAttribute annotation, so Spring MVC will know that it should create an object of Product and attach it to the model under the name newProduct. One more thing that needs to be observed in the processAddNewProductForm method is the logical View name it is returning: redirect:/market/products. So what we are trying to tell Spring MVC by returning the string redirect:/market/products? To get the answer, observe the logical View name string carefully; if we split this string with the ":" (colon) symbol, we will get two parts. The first part is the prefix redirect and the second part is something that looks like a request path: /market/products. So, instead of returning a View name, we are simply instructing Spring to issue a redirect request to the request path /market/products, which is the request path for the list method of our ProductController. So after submitting the form, we list the products using the list method of ProductController. As a matter of fact, when we return any request path with the redirect: prefix from a request mapping method, Spring will use a special View object called RedirectView (org.springframework.web.servlet.view.RedirectView) to issue the redirect command behind the screen. Instead of landing on a web page after the successful submission of a web form, we are spawning a new request to the request path /market/products with the help of RedirectView. This pattern is called redirect-after-post, which is a common pattern to use with web-based forms. We are using this pattern to avoid double submission of the same form. 
Sometimes after submitting the form, if we press the browser's refresh button or back button, there are chances to resubmit the same form. This behavior is called double submission. Have a go hero – customer registration form It is great that we created a web form to add new products to our web application under the URL http://localhost:8080/webstore/market/products/add. Why don't you create a customer registration form in our application to register a new customer in our application? Try to create a customer registration form under the URL http://localhost:8080/webstore/customers/add. Customizing data binding In the last section, you saw how to bind data submitted by a HTML form to a form backing bean. In order to do the binding, Spring MVC internally uses a special binding object called WebDataBinder (org.springframework.web.bind.WebDataBinder). WebDataBinder extracts the data out of the HttpServletRequest object and converts it to a proper data format, loads it into a form backing bean, and validates it. To customize the behavior of data binding, we can initialize and configure the WebDataBinder object in our Controller. The @InitBinder (org.springframework.web.bind.annotation.InitBinder) annotation helps us to do that. The @InitBinder annotation designates a method to initialize WebDataBinder. Let's look at a practical use of customizing WebDataBinder. Since we are using the actual domain object itself as form backing bean, during the form submission there is a chance for security vulnerabilities. Because Spring automatically binds HTTP parameters to form bean properties, an attacker could bind a suitably-named HTTP parameter with form properties that weren't intended for binding. To address this problem, we can explicitly tell Spring which fields are allowed for form binding. Technically speaking, the process of explicitly telling which fields are allowed for binding is called whitelisting binding in Spring MVC; we can do whitelisting binding using WebDataBinder. Time for action – whitelisting form fields for binding In the previous exercise while adding a new product, we bound every field of the Product domain in the form, but it is meaningless to specify unitsInOrder and discontinued values during the addition of a new product because nobody can make an order before adding the product to the store and similarly discontinued products need not be added in our product list. So we should not allow these fields to be bounded with the form bean while adding a new product to our store. However all the other fields of the Product domain object to be bound. 
Let's see how to this with the following steps: Open our ProductController class and add a method as follows: @InitBinder public void initialiseBinder(WebDataBinder binder) { binder.setAllowedFields("productId", "name", "unitPrice", "description", "manufacturer", "category", "unitsInStock", "condition"); } Add an extra parameter of the type BindingResult (org.springframework.validation.BindingResult) to the processAddNewProductForm method as follows: public String processAddNewProductForm(@ModelAttribute("newProduct") Product productToBeAdded, BindingResult result) In the same processAddNewProductForm method, add the following condition just before the line saving the productToBeAdded object: String[] suppressedFields = result.getSuppressedFields(); if (suppressedFields.length > 0) { throw new RuntimeException("Attempting to bind disallowed fields: " + StringUtils.arrayToCommaDelimitedString(suppressedFields)); } Now run our application and enter the URL http://localhost:8080/webstore/market/products/add. You will be able to see a web page showing a web form to add new product information. Fill out all the fields, particularly Units in order and discontinued. Now press the Add button and you will see a HTTP status 500 error on the web page as shown in the following image: The add product page showing an error for disallowed fields Now open addProduct.jsp from /Webshop/src/main/webapp/WEB-INF/views/ in your project and remove the input tags that are related to the Units in order and discontinued fields. Basically, you need to remove the following block of code: <div class="form-group"> <label class="control-label col-lg-2" for="unitsInOrder">Units In Order</label> <div class="col-lg-10"> <form:input id="unitsInOrder" path="unitsInOrder" type="text" class="form:input-large"/> </div> </div> <div class="form-group"> <label class="control-label col-lg-2" for="discontinued">Discontinued</label> <div class="col-lg-10"> <form:checkbox id="discontinued" path="discontinued"/> </div> </div> Now run our application again and enter the URL http://localhost:8080/webstore/market/products/add. You will be able to see a web page showing a web form to add a new product, but this time without the Units in order and Discontinued fields. Now enter all information related to the new product and click on the Add button. You will see the new product added in the product listing page under the URL http://localhost:8080/webstore/market/products. What just happened? Our intention was to put some restrictions on binding HTTP parameters with the form baking bean. As we already discussed, the automatic binding feature of Spring could lead to a potential security vulnerability if we used a domain object itself as form bean. So we have to explicitly tell Spring MVC which are fields are allowed. That's what we are doing in step 1. The @InitBinder annotation designates a Controller method as a hook method to do some custom configuration regarding data binding on the WebDataBinder. And WebDataBinder is the thing that is doing the data binding at runtime, so we need to tell which fields are allowed to bind to WebDataBinder. If you observe our initialiseBinder method from ProductController, it has a parameter called binder, which is of the type WebDataBinder. We are simply calling the setAllowedFields method on the binder object and passing the field names that are allowed for binding. Spring MVC will call this method to initialize WebDataBinder before doing the binding since it has the @InitBinder annotation. 
WebDataBinder also has a method called setDisallowedFields to strictly specify which fields are disallowed for binding. If you use this method, Spring MVC allows any HTTP request parameters to be bound except those field names specified in the setDisallowedFields method. This is called blacklisting binding. Okay, we configured which fields are allowed for binding, but we need to verify whether any fields other than those allowed are bound with the form backing bean. That's what we are doing in steps 2 and 3. We changed processAddNewProductForm by adding one extra parameter called result, which is of the type BindingResult. Spring MVC will fill this object with the result of the binding. If any attempt is made to bind any fields other than the allowed fields, the BindingResult object will have a getSuppressedFields count greater than zero. That's why we were checking the suppressed field count and throwing a RuntimeException exception:

if (suppressedFields.length > 0) {
  throw new RuntimeException("Attempting to bind disallowed fields: " + StringUtils.arrayToCommaDelimitedString(suppressedFields));
}

Here the static class StringUtils comes from org.springframework.util.StringUtils. We want to ensure that our binding configuration is working; that's why we ran our application without changing the View file addProduct.jsp in step 4. And as expected, we got the HTTP status 500 error saying Attempting to bind disallowed fields when we submitted the Add products form with the unitsInOrder and discontinued fields filled out. Now that we know our binder configuration is working, we can change our View file so as not to bind the disallowed fields; that's what we did in step 6: just removing the input field elements that are related to the disallowed fields from the addProduct.jsp file. After that, our add new products page just works fine, as expected. If an outside attacker tries to tamper with the POST request and attach an HTTP parameter with the same field name as the form backing bean, they will get a RuntimeException. Whitelisting is just an example of how we can customize the binding with the help of WebDataBinder. By using WebDataBinder, we can perform many more types of binding customization as well. For example, WebDataBinder internally uses many PropertyEditor (java.beans.PropertyEditor) implementations to convert the HTTP request parameters to the target fields of the form backing bean. We can even register custom PropertyEditor objects with WebDataBinder to convert more complex data types. For instance, look at the following code snippet that shows how to register a custom PropertyEditor to convert a Date class:

@InitBinder
public void initialiseBinder(WebDataBinder binder) {
  DateFormat dateFormat = new SimpleDateFormat("MMM d, yyyy");
  CustomDateEditor orderDateEditor = new CustomDateEditor(dateFormat, true);
  binder.registerCustomEditor(Date.class, orderDateEditor);
}

There are many advanced configurations we can make with WebDataBinder in terms of data binding, but at a beginner level, we don't need to go that deep.
Pop quiz – data binding Considering the following data binding customization and identify the possible matching field bindings: @InitBinder public void initialiseBinder(WebDataBinder binder) { binder.setAllowedFields("unit*"); } NoOfUnit unitPrice priceUnit united Externalizing text messages So far in all our View files, we hardcoded text values for all the labels; for instance, take our addProduct.jsp file—for the productId input tag, we have a label tag with the hardcoded text value as Product id: <label class="control-label col-lg-2 col-lg-2" for="productId">Product Id</label> Externalizing these texts from a View file into a properties file will help us to have a single centralized control for all label messages. Moreover, it will help us to make our web pages ready for internationalization. But in order to perform internalization, we need to externalize the label messages first. So now you are going to see how to externalize locale-sensitive text messages from a web page to a property file. Time for action – externalizing messages Let's externalize the labels texts in our addProduct.jsp: Open our addProduct.jsp file and add the following tag lib reference at the top: <%@ taglib prefix="spring" uri="http://www.springframework.org/tags" %> Change the product ID <label> tag's value ID to <spring:message code="addProdcut.form.productId.label"/>. After changing your product ID <label> tag's value, it should look as follows: <label class="control-label col-lg-2 col-lg-2" for="productId"> <spring:message code="addProduct.form.productId.label"/> </label> Create a file called messages.properties under /src/main/resources in your project and add the following line to it: addProduct.form.productId.label = New Product ID Now open our web application context configuration file WebApplicationContextConfig.java and add the following bean definition to it: @Bean public MessageSource messageSource() { ResourceBundleMessageSource resource = new ResourceBundleMessageSource(); resource.setBasename("messages"); return resource; } Now run our application again and enter the URL http://localhost:8080/webstore/market/products/add. You will be able to see the added product page with the product ID label showing as New Product ID. What just happened? Spring MVC has a special a tag called <spring:message> to externalize texts from JSP files. In order to use this tag, we need to add a reference to a Spring tag library—that's what we did in step 1. We just added a reference to the Spring tag library in our addProduct.jsp file: <%@ taglib prefix="spring" uri="http://www.springframework.org/tags" %> In step 2, we just used that tag to externalize the label text of the product ID input tag: <label class="control-label col-lg-2 col-lg-2" for="productId"> <spring:message code="addProduct.form.productId.label"/> </label> Here, an important thing you need to remember is the code attribute of <spring:message> tag, we have assigned the value addProduct.form.productId.label as the code for this <spring:message> tag. This code attribute is a kind of key; at runtime Spring will try to read the corresponding value for the given key (code) from a message source property file. We said that Spring will read the message’s value from a message source property file, so we need to create that file property file. That's what we did in step 3. We just created a property file with the name messages.properties under the resource directory. 
Inside that file, we just assigned the label text value to the message tag code: addProduct.form.productId.label = New Product ID Remember for demonstration purposes I just externalized a single label, but a typical web application will have externalized messages  for almost all tags; in that case messages messages.properties file will have many code-value pair entries. Okay, we created a message source property file and added the <spring:message> tag in our JSP file, but to connect these two, we need to create one more Spring bean in our web application context for the org.springframework.context.support.ResourceBundleMessageSource class with the name messageSource—we did that in step 4: @Bean public MessageSource messageSource() { ResourceBundleMessageSource resource = new ResourceBundleMessageSource(); resource.setBasename("messages"); return resource; } One important property you need to notice here is the basename property; we assigned the value messages for that property. If you remember, this is the name of the property file that we created in step 3. That is all we did to enable the externalizing of messages in a JSP file. Now if we run the application and open up the Add products page, you can see that the product ID label will have the same text as we assigned to the  addProdcut.form.productId.label code in the messages.properties file. Have a go hero – externalize all the labels from all the pages I just showed you how to externalize the message for a single label; you can now do that for every single label available in all the pages. Summary At the start of this article, you saw how to serve and process forms, and you learned how to bind form data with a form backing bean. You also learned how to read a bean in the Controller. After that, we went a little deeper into the form bean binding and configured the binder in our Controller to whitelist some of the POST parameters from being bound to the form bean. Finally, you saw how to use one more Spring special tag <spring:message> to externalize the messages in a JSP file. Resources for Article: Further resources on this subject: Designing your very own ASP.NET MVC Application[article] Mixing ASP.NET Webforms and ASP.NET MVC[article] ASP.NET MVC Framework[article]

Web Typography

Packt
13 Jul 2016
14 min read
This article by Dario Calonaci, author of Practical Responsive Typography, teaches you about typography: its fascinating mysteries, its sensual shapes, and everything else you wanted to know about the subject. Every letter, every curve, and every shape in the written form conveys feelings, so it's important to understand them if you want to be a better designer. You also need to know how readable your text is, which means setting it up within the natural constraints our eyes and minds have built in, understanding how white space influences your message, and considering how every form shapes a piece of writing; this article will tell you exactly that, plus a little more. You will also learn how to approach all of the above in today's number one medium, the World Wide Web. Since 95 percent of the Web is made of typography, according to Oliver Reichenstein, it's only logical that if you want to approach the Web, you need to understand typography better.

Through this article, you'll learn the basics of typography and be introduced to its core features, such as:

Anatomy
Line height
Families
Kerning

(For more resources related to this topic, see here.)

Note that typography, the art of drawing with words, is truly ancient; it predates the mythological appearance of Christ by as much as 3,200 years, and the very first book on the matter is the splendid Manuale Tipografico by Giambattista Bodoni, published in 1818. Everything started back then, and every rule born in print is still valid today, even for a medium as different as the Web.

Typefaces classification

The most commonly used type classification is based on technical style, and as such it's the one we are going to analyze and use. The classes are as follows:

Serifs

Serifs are named after the small details that extend from the ending strokes of the characters; the origin of the word itself is obscure, and while various explanations have been given, none has been accepted as definitive. The serif itself can be traced back to the Latin alphabets of Roman times, probably originating in the flares of brush marks at corners, which were later chiseled in stone by the carvers. Serifs generally give better readability in print than on screen, probably because print has had hundreds of years of refinement, while screen technology is, on an evolutionary scale, a newborn. With the latest technologies and high-definition monitors that rival print, multiple scientific studies have been inconclusive, showing no discernible difference in readability between sans and serifs on screen, and today both are used on the Web. Within this general definition there are multiple sub-families, such as Old Style or Humanist.

Old Style or Humanist

The oldest ones, dating as far back as the mid-1400s, are recognized by the diagonal stress on which the characters are built; it is clearly visible, for example, on the e and o of Adobe Jenson.

Transitional Serifs

Neither antique nor modern, these date back to the 1700s and are generally numerous. They tend to abandon some of the diagonal stress, but not all of it, especially keeping it in the o. Georgia and Baskerville are some well-known examples.
Modern Serifs

Modern serifs tend to rely on the contrast between thick and thin strokes, abandon diagonal stress for vertical stress, and use straighter serifs. They appeared in the late 1700s. Bodoni and Didot are certainly the most famous typefaces in this family.

Slab Serifs

Slab serifs have little to no contrast between strokes, thick serifs, and sometimes fixed widths; the underlying skeleton more closely resembles that of a sans. American Typewriter is the most famous typeface in this family.

Sans Serifs

They are named for the loss of the decorative serifs; in French, "sans" stands for "without". The sans serif is a more recent invention, born in the late 18th century. They are divided into the following four sub-families:

Grotesque Sans

The earliest of the bunch; its appearance is similar to the serif, with contrasted strokes but without serifs and with angled terminals. Franklin Gothic is one of the most famous typefaces in this family.

Neo-Grotesque Sans

Plain looking, with little to no contrast, small apertures, and horizontal terminals. They are among the most common font styles, ranging from Arial and Helvetica to Univers.

Humanist Sans

They have a friendly tone thanks to their calligraphic style, a mixture of character widths and, most of the time, contrasted strokes. Gill Sans is the flag-carrier.

Geometric Sans

Based on rigorous geometric shapes, they are more modern and are used less for body copy; they have a general simplicity, but the readability of their characters suffers. Futura is certainly the most famous geometric font.

Script typefaces

These are usually classified into two sub-families based on handwriting, with a cursive aspect and connected letterforms: formal scripts and casual scripts.

Formal script

Reminiscent of the handwritten letterforms common in the 17th and 18th centuries, and sometimes based on the handwriting of famous people. They are commonly used for elevated, highly elegant designs and are certainly unusable for long body copy. Kunstler Script is a relatively recent formal script.

Casual script

Less precise, and tending to resemble more modern, faster handwriting. They are as recent as the mid-twentieth century. Mistral is certainly the most famous casual script.

Monospaced typefaces

Almost all the aforementioned families are proportional (each character takes up space proportional to its width). In this sub-family every character takes up the same width, so narrower ones, such as i, simply gain white space around them, sometimes resulting in odd appearances. Due to their nature and their spacing, they aren't advised as copy typefaces, since the mono spacing can bring unwanted visual imbalance to the text. Courier is certainly the best-known monospaced typeface.

Display typefaces

The broadest category; they are aimed at small amounts of copy meant to draw attention, rarely follow rules, borrow from every one of the above families, and can express every mood. Recently even Blackletters (the very first fonts designed for the very first physical printing machines) have been placed in this category. Danube and Val are just two examples of the multitude that are out there.

Expressing different moods

Alongside the classification of typography families, it's also really important for every project, both in print and on the web, to know what the families express and why.
It takes years of experience to understand those characteristics and the right way to use them; here we are only drawing a very basic distinction to help you get started. Remember that in typography and type design every curve conveys a different mood, so be patient while studying and designing.

Serifs vs Sans

Serifs, through their decorations, their widths, and across every one of their sub-families, convey old, antique, traditional, and serious feelings, even when more modern designs are used; they certainly give a more formal appearance. Sans serifs, on the other hand, speak to a more modern, up-to-date world, conveying technological advancement and rationality, and, usually but not always, less of a human feeling. They're more mechanical and colder than a serif, unless the designer deliberately made them friendlier than the standard ones.

Scripts vs scripts

As mentioned, scripts come in two types, and as the names suggest, the division is straightforward. Vladimir is elegant, refined, and upper-class looking, and expresses feelings such as respect. Arizonia, on the other hand, is not completely informal, but it is still a schizophrenic mess of strokes and an inconclusive expression of feeling; I'm not sure whether to feel amused or offended by its exaggerated familiarity.

Display typefaces

Since they differ so much from each other, and there is no general rule that defines the Display family, they can express the whole range of emotions. They can go from apathy to depression, from complete childish involvement and joy to a suited, scary, serious business feeling (the latter is usually the territory of some monospaced typefaces). As with every other typeface, but especially here, every change in weight and style brings a new sentiment to the table: use it in bold and your content will be strong and fierce; change to a lighter italic and it will look like it's moving, ready to exit the page. As such, display faces take years to master, and we advise against using them on your first web project unless you are completely sure of what you are doing.

Every font communicates differently, on a conscious as well as a subconscious level, and even within the same typeface it all comes down to what we are accustomed to. As with font color, what a script does and how it feels in European culture can change drastically if the same face is used for advertising in the Asian market. Always do your research first.

Combining typefaces

Combining typefaces is a vital aspect of your projects, but it's a tool that is hard to master. Generally, it is said that you should use no more than two fonts in a design. It is a good rule, but let me explain it, or better, expand on it. While working with text for an informational text block, similar to the one you are reading now, stick to it: you will express enough contrast and interest while staying balanced, and the reader will not get distracted. They will follow the flow and understand the hierarchy of what they are reading. However, as a designer you're not always typesetting a pure text block: you could be working with words on packaging or on the web. And if you know enough about typography and your eyes are well trained (usually after years of visual research and attentive design), you can break the rules. You only get energy when mixing contrasting fonts, so why not add a third one to bring a better balance between the two?
As a rule, you can combine fonts when:

They are not in the same classification. You mix fonts to add contrast and energy and to inject interest and readability into your document, which is why the clash between serif and sans has proven timeless. Working with two serifs or two sans together, by contrast, only works after extensive trial and error, and you should choose two fonts that carry enough differences. You can usually combine different sub-families, for example a slab serif with a modern one, or a geometric sans with a grotesque.

If your goal is readability, look for the same structure. A similar height and similar width work easily when mixing two classifications; but if your goal is aesthetics for small portions of text, you can try completely different structures, such as a slab serif with a geometric sans. You will see that sometimes it does the job!

Go extreme! This requires more experience to balance out, but if you're working with display or script typefaces, it's almost impossible to find something similar without being boring or unreadable. Try mixing them with more simplistic typefaces if the starting point has a lot of decoration; you won't regret the attempt!

Typography properties

Now that you know the families, you need to know the general rules that will make your text and its usage flow like a springtime breeze.

Kerning

Kerning is the adjustment of space between two characters to achieve a visually balanced word through a visually equal distribution of white space. The word originates from the Latin word cardo, meaning hinge: when letters were made of metal on wooden blocks, parts of them were built to hang off the base, giving the next character space to sit closer.

Tracking

Also called letter-spacing, tracking is concerned with the entire word, not single characters or the whole text block; it changes the density and texture of a text and affects its readability. The word originates from the metal tracks along which the wooden blocks with the characters were moved horizontally. Tracking requires careful setting: add too much white space and the words no longer read as single coherent blocks; reduce the white space between the letters drastically and the letters themselves won't be readable. As a rule, you want your lines of text to contain 50 to 75 characters, including punctuation and spaces, for better readability. Some will tell you to stop at approximately 39 characters, but I tend to differ.

Ligatures

Because of kerning, especially in serifs, two or three characters can clash together. Ligatures were born to avoid this; they are stylistic characters that combine two or three letters into one:

Standard ligatures are naturally and functionally the most common ones and are made between fi, fl, and other letters placed next to an f. They should be used, as they tend to make the text more legible.

Discretionary ligatures are not functional; they serve a purely decorative purpose. They are commonly found and designed between Th and st; as the name suggests, you should use them at your discretion.

Leading

Leading is the space between the baselines of your text, while line-height adds to that notion the height of ascenders and descenders. The name came about because, in earlier times, strips of lead were used to add white space between two lines of text. There are many rules in typesetting (none of which has emerged as a clear winner), and everything changes according to the typeface you're using.
Mechanical print tends to add 2 points to the current measure being used, while a basic rule for digital is to scale the line spacing to 120 percent of the type size, which is called single spacing. As a rule of thumb, stay between 120 and 180 percent and you are good to go (with the latter reserved for typefaces with a large x-height). Just remember that the descenders should never touch the ascenders of the next line; otherwise the eye will perceive the text as crumpled, and it becomes hard to tell where one line ends and the next starts.

Summary

The preceding text covers the basics of typography, which you should study and know in order to make the text in your projects flow better. You now have a greater understanding of typography: what it is, what it's made of, what its characteristics are, what the brain searches for and processes in a text, the lengths it will go to in order to understand it, and the alignment, spacing, and other issues that revolve around this beautiful subject. The most important rule to remember is that text is used to express something. It may be informative reading, it may express a feeling, such as a poem, or it may be designed to make you feel something specific. Every text has a feeling, an inner tone of voice that can be expressed visually through typography. Usually it's the text itself that dictates this feeling, and helps you decide what to express and how. All the preceding rules, properties, and knowledge are the means for you to express it, and the Web gives you a large range of properties to do so: there is almost as much variety available as in print, with leading, kerning, tracking, and typographical hierarchy all built into your browsers.

Resources for Article:

Further resources on this subject:

Exploring Themes [article]
A look into responsive design frameworks [article]
Joomla! Template System [article]


Working with Forms using REST API

Packt
11 Jul 2016
21 min read
WordPress, an ever-improving content management system, is now moving toward becoming a full-fledged application framework, which brings up the need for new APIs. The WordPress REST API was created to provide those APIs in a reliable way. The plugin exposes an easy-to-use REST API, available over HTTP, that serves your site's data in JSON format. The WordPress REST API is now at its second version, which brought a few core differences compared to the first, including route registration via functions, endpoints that take a single parameter, and built-in endpoints that all use a common controller.

In this article by Sufyan bin Uzayr, author of the book Learning WordPress REST API, you'll learn how to write a functional plugin to create and edit posts using the latest version of the WordPress REST API. The article will also cover how to work efficiently with data to update your page dynamically based on the results. This tutorial serves as an introduction to processing form data using the REST API and AJAX, not as a rewrite of the WordPress post editor or a frontend editing plugin. The REST API's first task is to make your WordPress-powered websites more dynamic, and for this precise reason this tutorial takes you through the process step by step. Once you understand how the framework works, you will be able to implement it on your own sites.

(For more resources related to this topic, see here.)

Fundamentals

In this article, you will be doing something similar, but instead of using the WordPress HTTP API and PHP, you'll use jQuery's AJAX methods. All of the code for this project should go in its plugin file. Another important tip before starting is to have the JavaScript client for the WordPress REST API installed; you will use it to authorize requests via the current user's cookies. Note that you can substitute another authorization method, such as OAuth, if you find it more suitable.

Setup the plugin

During the course of this tutorial, you'll only need one PHP and one JavaScript file; nothing else is necessary for the plugin. We will start by writing a simple PHP file that does three key things for us:

Enqueue the JavaScript file
Localize a dynamically created JavaScript object into the DOM when that file is used
Create the HTML markup for our future form

All that is required is two functions and two hooks. To get this done, we will create a new folder in our plugin directory with one PHP file inside it; this will serve as the foundation for the plugin. We will give the file a conventional name, such as my-rest-post-editor.php. Here is our starting PHP file, with the empty functions that we will expand in the next steps:

<?php
/*
Plugin Name: My REST API Post Editor
*/

add_shortcode( 'my-post-editor', 'my_rest_post_editor_form' );
function my_rest_post_editor_form( ) {
}

add_action( 'wp_enqueue_scripts', 'my_rest_api_scripts' );
function my_rest_api_scripts() {
}

For this demonstration, notice that you're working only with the post title and post content. This means that in the form editor function, you only need the HTML for a simple form with those two fields.
Creating the form with HTML markup

As you can see, we are only working with the post title and post content, so the editor form function only needs the HTML for a simple form with those two fields. The necessary code excerpt is as follows:

function my_rest_post_editor_form( ) {
	$form = '
	<form id="editor">
		<input type="text" name="title" id="title" value="My title">
		<textarea id="content"></textarea>
		<input type="submit" value="Submit" id="submit">
	</form>
	<div id="results">
	</div>';
	return $form;
}

Our aim is to show this only to users who are logged in to the site and have the ability to edit posts. We will wrap the variable containing the form in some conditional checks to fulfill this aim. These tests check whether the user is logged in, and if not, provide a link to the default WordPress login page. The code excerpt with the required function is as follows:

function my_rest_post_editor_form( ) {
	$form = '
	<form id="editor">
		<input type="text" name="title" id="title" value="My title">
		<textarea id="content"></textarea>
		<input type="submit" value="Submit" id="submit">
	</form>
	<div id="results">
	</div>
	';
	if ( is_user_logged_in() ) {
		if ( user_can( get_current_user_id(), 'edit_posts' ) ) {
			return $form;
		} else {
			return __( 'You do not have permissions to do this.', 'my-rest-post-editor' );
		}
	} else {
		return sprintf( '<a href="%1$s" title="Login">%2$s</a>', wp_login_url( get_permalink( get_queried_object_id() ) ), __( 'You must be logged in to do this, please click here to log in.', 'my-rest-post-editor') );
	}
}

To avoid confusion, we do not want our page to be processed automatically or to cause a page reload upon submission, which is why the form has neither a method nor an action set. This is important to notice, because it is how we avoid the unnecessary automatic processing.

Enqueueing your JavaScript file

Another necessary step is to enqueue your JavaScript file. This matters because enqueueing provides a systematic, organized way of loading JavaScript files and styles. Using the wp_enqueue_script function, you tell WordPress when to load a script, where to load it, and what its dependencies are. By doing this, everyone uses the built-in JavaScript libraries that come bundled with WordPress rather than loading the same third-party script several times. Another big advantage is that it helps reduce page load time and avoids potential code conflicts with other plugins. We use this method instead of the wrong approach of loading scripts directly in the head section of the site, because it prevents the same script from being loaded twice if another plugin also needs it.

Once the enqueuing is done, we will localize an array of data into the script, data that the dynamically generated JavaScript needs. This includes the base URL for the REST API (as that can change with a filter, mainly for security purposes), a failure and a success message so that our strings are translation friendly, and the current user's ID. The result we have accomplished so far is owed to the wp_enqueue_script() and wp_localize_script() functions.
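To make the localized data easier to picture, remember that wp_localize_script() prints a small inline script just before our file, defining a global JavaScript object that our editor code can read. The following is only a rough, hypothetical sketch of what ends up in the page (the names mirror the array we localize below; the actual values are generated by WordPress at runtime):

// Hypothetical output of wp_localize_script(); real values are generated by WordPress.
var MY_POST_EDITOR = {
	root: 'http://example.com/wp-json/',   // base URL for the REST API
	nonce: 'a1b2c3d4e5',                   // nonce used for cookie authentication
	successMessage: 'Post Creation Successful.',
	failureMessage: 'An error has occurred.',
	userID: '1'
};

// Our own script can then read these values, for example:
console.log( MY_POST_EDITOR.root + 'wp/v2/posts' );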
It would also be possible to add custom styles to the editor, which would be achieved with the wp_enqueue_style() function. Having assessed the importance and functionality of wp_enqueue_script(), let's take a closer look at the other functions as well.

The wp_localize_script() function lets you attach data to a registered script as a JavaScript variable. This gives us properly localized translations for any string used within our script, a necessary measure since WordPress currently offers its localization API only in PHP. Although localization is the main use of the function, it can be used to make any data available to your script that you would normally only be able to get from the server side of WordPress.

The wp_enqueue_style() function is the best way to add stylesheets within your WordPress plugins, as it handles all the stylesheets that need to be added to the page in one place. If two plugins use the same stylesheet with the same handle, WordPress will only add the stylesheet to the page once. Calling wp_enqueue_style adds your styles to the list of stylesheets to be output when the page loads; if the handle already exists, no new stylesheet is added to the list.

The function is as follows:

function my_rest_api_scripts() {
	wp_enqueue_script( 'my-api-post-editor', plugins_url( 'my-api-post-editor.js', __FILE__ ), array( 'jquery' ), false, true );
	wp_localize_script( 'my-api-post-editor', 'MY_POST_EDITOR', array(
		'root' => esc_url_raw( rest_url() ),
		'nonce' => wp_create_nonce( 'wp_json' ),
		'successMessage' => __( 'Post Creation Successful.', 'my-rest-post-editor' ),
		'failureMessage' => __( 'An error has occurred.', 'my-rest-post-editor' ),
		'userID' => get_current_user_id(),
	) );
}

That is all the PHP you need; everything else is handled via JavaScript. Next, create a new page containing the editor shortcode (my-post-editor) and go to that page. If you've followed the instructions precisely, you should see the post editor form there. It obviously won't be functional just yet, not before we write some JavaScript to add functionality to it.

Issuing requests for creating posts

To create posts from our form, we need to issue a POST request, which we can make using jQuery's AJAX method. This should be a familiar and very simple process for you; if you're not acquainted with it, you may want to look through the documentation offered by the jQuery team themselves (http://api.jquery.com/jquery.ajax/). You will also need to create two things that may be new to you: the JSON payload and the authorization header. We will walk through each of them in detail.

To create the JSON object for your AJAX request, you must first build a JavaScript object from the input and then use JSON.stringify() to convert it into JSON. The JSON.stringify() method converts a JavaScript value into a JSON string, replacing values if a replacer function is specified, or optionally including only the specified properties if a replacer array is specified.
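As a quick, standalone illustration of what JSON.stringify() produces (hypothetical values, separate from the plugin code that follows):

// A plain object built from form-like values...
var example = { title: 'Hello world', content_raw: 'Some text', status: 'publish' };

// ...becomes a JSON string, ready to be sent as the request body:
var body = JSON.stringify( example );
console.log( body ); // {"title":"Hello world","content_raw":"Some text","status":"publish"}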
The following code excerpt is the beginning of the JavaScript file and shows how to build the JSON payload:

(function($){
	$( '#editor' ).on( 'submit', function(e) {
		e.preventDefault();
		var title = $( '#title' ).val();
		var content = $( '#content' ).val();
		var JSONObj = {
			"title" : title,
			"content_raw" : content,
			"status" : 'publish'
		};
		var data = JSON.stringify(JSONObj);
	});
})(jQuery);

Before passing the variable data to the AJAX request, you first have to set the URL for the request. This step is as simple as appending wp/v2/posts to the root URL for the API, which is accessible via MY_POST_EDITOR.root:

var url = MY_POST_EDITOR.root;
url = url + 'wp/v2/posts';

The AJAX request will look a lot like any other AJAX request you would make, with the sole exception of the authorization header. Thanks to the REST API's JavaScript client, the only thing you are required to do is add a header to the request containing the nonce set in the MY_POST_EDITOR object. An alternative that would also work is the OAuth authorization method. A nonce is an authorization mechanism that generates a number for a specific use, such as session authentication; in this context, nonce stands for "number used once" or "number once".

OAuth authorization method

The OAuth authorization method provides users with secure access to server resources on behalf of a resource owner. It specifies a process by which resource owners can authorize third-party access to their server resources without sharing any user credentials. It is important to note that it has been designed to work over HTTP, allowing an authorization server to issue access tokens to third-party clients. The third party then uses the access token to access the protected resources hosted on the server.

Using the nonce method to verify cookie authentication involves setting a request header with the name X-WP-Nonce, which contains the nonce value. You can use the beforeSend function of the request to send the nonce. This is what that looks like in the AJAX request:

$.ajax({
	type: "POST",
	url: url,
	dataType : 'json',
	data: data,
	beforeSend : function( xhr ) {
		xhr.setRequestHeader( 'X-WP-Nonce', MY_POST_EDITOR.nonce );
	},
});

As you might have noticed, the only missing pieces are the functions that display success and failure. These alerts can easily be created using the messages that we localized into the script earlier. For now, we will output the result of the request as raw JSON so that we can see what it looks like.
Following is the complete JavaScript for a post editor that can now create new posts:

(function($){
	$( '#editor' ).on( 'submit', function(e) {
		e.preventDefault();
		var title = $( '#title' ).val();
		var content = $( '#content' ).val();
		var JSONObj = {
			"title" : title,
			"content_raw" : content,
			"status" : 'publish'
		};
		var data = JSON.stringify(JSONObj);
		var url = MY_POST_EDITOR.root;
		url += 'wp/v2/posts';
		$.ajax({
			type: "POST",
			url: url,
			dataType : 'json',
			data: data,
			beforeSend : function( xhr ) {
				xhr.setRequestHeader( 'X-WP-Nonce', MY_POST_EDITOR.nonce );
			},
			success: function(response) {
				alert( MY_POST_EDITOR.successMessage );
				$( "#results").append( JSON.stringify( response ) );
			},
			error: function( response ) {
				alert( MY_POST_EDITOR.failureMessage );
			}
		});
	});
})(jQuery);

This is how we can create a basic editor with the WP REST API. If you are logged in and the API is active, submitting the form should create a new post and then show an alert telling you that the post has been created. The returned JSON object is then placed into the #results container. If you followed each step precisely, you should now have a basic editor ready; give it a try and see how it works for you.

So far, we have created and set up a basic editor that allows you to create posts. In the next steps, we will add functionality to our plugin that enables us to edit existing posts.

Issuing requests for editing posts

In this section, we will walk through the process of adding functionality to our editor so that we can edit existing posts. This part may be a little more detailed, mainly because the first part of the tutorial covered the basics and setup of the editor. To edit posts, we need the following two things:

A list of posts by author, with the post titles and post content
A new form field to hold the ID of the post you're editing

The list of posts by author and the form field lay the foundation for the editing functionality. To add that hidden field to your form, add the following HTML code:

<input type="hidden" name="post-id" id="post-id" value="">

Next, we need to read the value of this field when submitting. A few lines of JavaScript allow us to change the URL automatically, so that we edit the post with the given ID rather than creating a new one every time. This is easily achieved with a simple piece of code like the following:

var postID = $( '#post-id').val();
if ( undefined !== postID ) {
	url += '/';
	url += postID;
}

As we move on, the preceding code will be placed before the AJAX section of the editor form processor. It is important to understand that the url variable used in the AJAX call will carry the ID of the post you are editing only if the field has a value; if no value is present, the request creates a new post, identical to the process you went through previously. To populate that field, along with the post title and post content fields, you will need to add a second form.
This will result in all posts by the current user being retrieved via a GET request. Based on the selection made in that form, you can then set the editor form to update. In the PHP, you will add the second form, which will look similar to the following:

<form id="select-post">
	<select id="posts" name="posts">
	</select>
	<input type="submit" value="Select a Post to edit" id="choose-post">
</form>

The REST API will now be used to populate the options within the #posts select. To achieve that, we have to create a request for posts by the current user and use the results it returns. We will form the URL for requesting posts by the current user, which is possible because we set the current user ID as part of the MY_POST_EDITOR object during the script setup. A function needs to be created that gets posts by the current author and populates the select field. This is very similar to what we did when we made our posts update, yet much simpler: the function does not require any authentication, and since you have already been through the process of creating a similar function, this one shouldn't be a hassle. The success function loops through the results and adds them as options to the post selector's single field; the code looks something like the following:

function getPostsByUser( defaultID ) {
	url += '?filter[author]=';
	url += MY_POST_EDITOR.userID;
	url += '&filter[per_page]=20';
	$.ajax({
		type: "GET",
		url: url,
		dataType : 'json',
		success: function(response) {
			$.each(response, function(i, val) {
				$( "#posts" ).append( new Option( val.title, val.ID ) );
			});
			if ( undefined != defaultID ) {
				$('[name=posts]').val( defaultID );
			}
		}
	});
}

You will notice that the function has a defaultID parameter, but this shouldn't concern you just now. The parameter, if defined, is used to set the default value of the select field; for now, we will ignore it. We will use the very same function, without the default value, and set it to run on document ready. This is achieved with a small piece of code like the following:

$( document ).ready( function() {
	getPostsByUser();
});

Having a list of posts by the current user isn't enough; you also have to get the title and the content of the selected post and push them into the form for further editing. Moving on, we need the other GET request to run on submission of the post selector form. It should look something like this:

$( '#select-post' ).on( 'submit', function(e) {
	e.preventDefault();
	var ID = $( '#posts' ).val();
	var postURL = MY_POST_EDITOR.root;
	postURL += 'wp/v2/posts/';
	postURL += ID;
	$.ajax({
		type: "GET",
		url: postURL,
		dataType : 'json',
		success: function(post) {
			var title = post.title;
			var content = post.content;
			var postID = post.ID;
			$( '#editor #title').val( title );
			$( '#editor #content').val( content );
			$( '#select-post #posts').val( postID );
		}
	});
});

In the form <json-url>wp/v2/posts/<post-id>, we build a new URL that is used to fetch the post data for any selected post.
This makes an actual request whose returned data is then set as the value of each of the three fields in the editor form. Upon refreshing the page, you will see all posts by the current user in the selector. Submitting the selection yields the following: the content and title of the post that you selected become visible in the editor, provided you have followed the preceding steps correctly, and the hidden field for the post ID is now set. Even though the content and title of the post are visible, we are still unable to edit the actual posts, because the editor form's function has not yet been set up for this purpose. To achieve that, we need to make a small modification to the function so that the content becomes editable. Besides, at the moment we only get our content and title displayed as raw JSON data; applying the method described previously will improve the success function for that request so that the title and content of the post display in the proper container, #results. For this, you need a function that updates the container with the appropriate data. The code for this function looks something like the following:

function results( val ) {
	$( "#results").empty();
	$( "#results" ).append( '<div class="post-title">' + val.title + '</div>' );
	$( "#results" ).append( '<div class="post-content">' + val.content + '</div>' );
}

The preceding code uses some very simple jQuery techniques, but that doesn't make it any less of a proper introduction to updating page content with data from the REST API. There are countless ways of getting more detailed or creative with this if you dive into the markup or add additional fields. That will always be an option for you if you're a more savvy developer, but as this is an introductory tutorial, we're trying not to make it overly technical, which is why we'll stick to the provided example for now. As we move forward, you can use it in your modified form processing function, which will be something like the following:

$( '#editor' ).on( 'submit', function(e) {
	e.preventDefault();
	var title = $( '#title' ).val();
	var content = $( '#content' ).val();
	console.log( content );
	var JSONObj = {
		"title" : title,
		"content_raw" : content,
		"status" : 'publish'
	};
	var data = JSON.stringify(JSONObj);
	var postID = $( '#post-id').val();
	if ( undefined !== postID ) {
		url += '/';
		url += postID;
	}
	$.ajax({
		type: "POST",
		url: url,
		dataType : 'json',
		data: data,
		beforeSend : function( xhr ) {
			xhr.setRequestHeader( 'X-WP-Nonce', MY_POST_EDITOR.nonce );
		},
		success: function(response) {
			alert( MY_POST_EDITOR.successMessage );
			getPostsByUser( response.ID );
			results( response );
		},
		error: function( response ) {
			alert( MY_POST_EDITOR.failureMessage );
		}
	});
});

As you have noticed, a few changes have been applied, and we will go through each of them: the first is that the ID of the post being edited is now added conditionally. If no ID is present, the form still creates new posts by POSTing to the endpoint.
When a post ID is present, the same form now updates posts via posts/<post-id>. The second change concerns the success function: the new results() function is used to output the post title and content while editing. We also rerun the getPostsByUser() function, set up so that posts automatically become editable right after you create them.

Summary

With this, we have finished off this article; if you have followed each step precisely, you should now have a simple yet functional plugin that can create and edit posts using the WordPress REST API. The article also covered techniques for working with data in order to update your page dynamically based on the available results. We will now progress toward more complex actions with the REST API.

Resources for Article:

Further resources on this subject:

Implementing a Log-in screen using Ext JS [article]
Cluster Computing Using Scala [article]
Understanding PHP basics [article]


Angular's component architecture

Packt
07 Jul 2016
11 min read
In this article by Gion Kunz, author of the book Mastering Angular 2 Components, we explore Angular's component architecture. The concept of directives from the first version of Angular changed the game in frontend UI frameworks. This was the first time that I felt there was a simple yet powerful concept that allowed the creation of reusable UI components. Directives could communicate with DOM events or messaging services. They allowed you to follow the principle of composition, and you could nest directives and create larger directives that consisted solely of smaller directives arranged together. Actually, directives were a very nice implementation of components for the browser.

(For more resources related to this topic, see here.)

In this section, we'll look into the component-based architecture of Angular 2 and how the previous topic about general UI components fits into Angular.

Everything is a component

As an early adopter of Angular 2, when talking to other people about it I was frequently asked what the biggest difference is compared to the first version. My answer to this question was always the same: everything is a component. For me, this paradigm shift was the most relevant change, one that both simplified and enriched the framework. Of course, there are a lot of other changes in Angular 2. However, as an advocate of component-based user interfaces, I've found this change to be the most interesting one, and it came with a lot of architectural changes as well. Angular 2 supports the idea of looking at the user interface holistically and supporting composition with components. The biggest difference to the first version is that your pages are no longer global views; they are simply components that are assembled from other components. If you've been following this chapter, you'll notice that this is exactly what a holistic approach to user interfaces demands: no more pages, but systems of components. Angular 2 still uses the concept of directives, although directives are now really what the name suggests: orders for the browser to attach a given behavior to an element. Components are a special kind of directive that comes with a view.

Creating a tabbed interface component

Let's introduce a new UI component in the ui folder of our project that provides a tabbed interface we can use for composition. We use what we learned about content projection to make this component reusable. We'll actually create two components: a Tabs component, which itself holds individual Tab components. First, let's create the component class within a new tabs/tab folder, in a file called tab.js:

import {Component, Input, ViewEncapsulation, HostBinding} from '@angular/core';
import template from './tab.html!text';

@Component({
  selector: 'ngc-tab',
  host: {
    class: 'tabs__tab'
  },
  template,
  encapsulation: ViewEncapsulation.None
})
export class Tab {
  @Input() name;
  @HostBinding('class.tabs__tab--active') active = false;
}

The only state we store in our Tab component is whether the tab is active or not. The name displayed on the tab is available through an input property. We use a class property binding to make a tab visible: based on the active flag we set a class, and without it, our tabs are hidden. Let's take a look at the tab.html template file of this component:

<ng-content></ng-content>

This is it already? Actually, yes it is!
The Tab component is only responsible for the storage of its name and active state, as well as the insertion of the host element content in the content projection point. There's no additional templating that is needed. Now, we'll move one level up and create the Tabs component that will be responsible for the grouping all the Tab components. As we won't include Tab components directly when we want to create a tabbed interface but use the Tabs component instead, this needs to forward content that we put into the Tabs host element. Let's look at how we can achieve this. In the tabs folder, we will create a tabs.js file that contains our Tabs component code, as follows: import {Component, ViewEncapsulation, ContentChildren} from '@angular/core'; import template from './tabs.html!text'; // We rely on the Tab component import {Tab} from './tab/tab'; @Component({ selector: 'ngc-tabs', host: { class: 'tabs' }, template, encapsulation: ViewEncapsulation.None, directives: [Tab] }) export class Tabs { // This queries the content inside <ng-content> and stores a // query list that will be updated if the content changes @ContentChildren(Tab) tabs; // The ngAfterContentInit lifecycle hook will be called once the // content inside <ng-content> was initialized ngAfterContentInit() { this.activateTab(this.tabs.first); } activateTab(tab) { // To activate a tab we first convert the live list to an // array and deactivate all tabs before we set the new // tab active this.tabs.toArray().forEach((t) => t.active = false); tab.active = true; } } Let's observe what's happening here. We used a new @ContentChildren annotation, in order to query our inserted content for directives that match the type that we pass to the decorator. The tabs property will contain an object of the QueryList type, which is an observable list type that will be updated if the content projection changes. You need to remember that content projection is a dynamic process as the content in the host element can actually change, for example, using the NgFor or NgIf directives. We use the AfterContentInit lifecycle hook, which we've already briefly discussed in the Custom UI elements section of Chapter 2, Ready, Set, Go! This lifecycle hook is called after Angular has completed content projection on the component. Only then we have the guarantee that our QueryList object will be initialized, and we can start working with child directives that were projected as content. The activateTab function will set the Tab components active flag, deactivating any previous active tab. As the observable QueryList object is not a native array, we first need to convert it using toArray() before we start working with it. Let's now look at the template of the Tabs component that we created in a file called tabs.html in the tabs directory: <ul class="tabs__tab-list"> <li *ngFor="let tab of tabs"> <button class="tabs__tab-button" [class.tabs__tab-button--active]="tab.active" (click)="activateTab(tab)">{{tab.name}}</button> </li> </ul> <div class="tabs__l-container"> <ng-content select="ngc-tab"></ng-content> </div> The structure of our Tabs component is as follows. First we render all the tab buttons in an unordered list. After the unordered list, we have a tabs container that will contain all our Tab components that are inserted using content projection and the <ng-content> element. Note that the selector that we use is actually the selector we use for our Tab component. 
Tabs that are not active will not be visible because we control this using CSS on our Tab component class attribute binding (refer to the Tab component code). This is all that we need to create a flexible and well-encapsulated tabbed interface component. Now, we can go ahead and use this component in our Project component to provide a segregation of our project detail information. We will create three tabs for now where the first one will embed our task list. We will address the content of the other two tabs in a later chapter. Let's modify our Project component template in the project.html file as a first step. Instead of including our TaskList component directly, we now use the Tabs and Tab components to nest the task list into our tabbed interface: <ngc-tabs> <ngc-tab name="Tasks"> <ngc-task-list [tasks]="tasks" (tasksUpdated)="updateTasks($event)"> </ngc-task-list> </ngc-tab> <ngc-tab name="Comments"></ngc-tab> <ngc-tab name="Activities"></ngc-tab> </ngc-tabs> You should have noticed by now that we are actually nesting two components within this template code using content projection, as follows: First, the Tabs component uses content projection to select all the <ngc-tab> elements. As these elements happen to be components too (our Tab component will attach to elements with this name), they will be recognized as such within the Tabs component once they are inserted. In the <ngc-tab> element, we then nest our TaskList component. If we go back to our Task component template, which will be attached to elements with the name ngc-tab, we will have a generic projection point that inserts any content that is present in the host element. Our task list will effectively be passed through the Tabs component into the Tab component. The visual efforts timeline Although the components that we created so far to manage efforts provide a good way to edit and display effort and time durations, we can still improve this with some visual indication. In this section, we will create a visual efforts timeline using SVG. 
This timeline should display the following information: The total estimated duration as a grey background bar The total effective duration as a green bar that overlays on the total estimated duration bar A yellow bar that shows any overtime (if the effective duration is greater than the estimated duration) The following two figures illustrate the different visual states of our efforts timeline component: The visual state if the estimated duration is greater than the effective duration The visual state if the effective duration exceeds the estimated duration (the overtime is displayed as a yellow bar) Let's start fleshing out our component by creating a new EffortsTimeline Component class on the lib/efforts/efforts-timeline/efforts-timeline.js path: … @Component({ selector: 'ngc-efforts-timeline', … }) export class EffortsTimeline { @Input() estimated; @Input() effective; @Input() height; ngOnChanges(changes) { this.done = 0; this.overtime = 0; if (!this.estimated && this.effective || (this.estimated && this.estimated === this.effective)) { // If there's only effective time or if the estimated time // is equal to the effective time we are 100% done this.done = 100; } else if (this.estimated < this.effective) { // If we have more effective time than estimated we need to // calculate overtime and done in percentage this.done = this.estimated / this.effective * 100; this.overtime = 100 - this.done; } else { // The regular case where we have less effective time than // estimated this.done = this.effective / this.estimated * 100; } } } Our component has three input properties: estimated: This is the estimated time duration in milliseconds effective: This is the effective time duration in milliseconds height: This is the desired height of the efforts timeline in pixels In the OnChanges lifecycle hook, we set two component member fields, which are based on the estimated and effective time: done: This contains the width of the green bar in percentage that displays the effective duration without overtime that exceeds the estimated duration overtime: This contains the width of the yellow bar in percentage that displays any overtime, which is any time duration that exceeds the estimated duration Let's look at the template of the EffortsTimeline component and see how we can now use the done and overtime member fields to draw our timeline. We will create a new lib/efforts/efforts-timeline/efforts-timeline.html file: <svg width="100%" [attr.height]="height"> <rect [attr.height]="height" x="0" y="0" width="100%" class="efforts-timeline__remaining"></rect> <rect *ngIf="done" x="0" y="0" [attr.width]="done + '%'" [attr.height]="height" class="efforts-timeline__done"></rect> <rect *ngIf="overtime" [attr.x]="done + '%'" y="0" [attr.width]="overtime + '%'" [attr.height]="height" class="efforts-timeline__overtime"></rect> </svg> Our template is SVG-based, and it contains three rectangles for each of the bars that we want to display. The background bar that will be visible if there is remaining effort will always be displayed. Above the remaining bar, we conditionally display the done and the overtime bar using the calculated widths from our component class. Now, we can go ahead and include the EffortsTimeline class in our Efforts component. This way our users will have visual feedback when they edit the estimated or effective duration, and it provides them a sense of overview. 
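As a quick sanity check of that arithmetic, using hypothetical numbers rather than anything from the project: an estimate of 8 hours against 10 hours of effective work gives done = 8 / 10 * 100 = 80 and overtime = 100 - 80 = 20, so the green bar fills 80 percent of the timeline and the yellow bar the remaining 20 percent. The same calculation in plain JavaScript:

// Hypothetical values: 8h estimated, 10h effective, both in milliseconds.
var estimated = 8 * 60 * 60 * 1000;
var effective = 10 * 60 * 60 * 1000;

var done;
var overtime = 0;
if (estimated < effective) {
  done = estimated / effective * 100;   // 80 -> width of the green bar in percent
  overtime = 100 - done;                // 20 -> width of the yellow bar in percent
} else {
  done = effective / estimated * 100;
}
console.log(done, overtime); // 80 20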
Let's look into the template of the Efforts component to see how we integrate the timeline: … <ngc-efforts-timeline height="10" [estimated]="estimated" [effective]="effective"> </ngc-efforts-timeline> As we have the estimated and effective duration times readily available in our Efforts component, we can simply create a binding to the EffortsTimeline component input properties: The Efforts component displaying our newly-created efforts timeline component (the overtime of six hours is visualized with the yellow bar) Summary In this article, we learned about the architecture of the components in Angular. We also learned how to create a tabbed interface component and how to create a visual efforts timeline using SVG. Resources for Article: Further resources on this subject: Angular 2.0[article] AngularJS Project[article] AngularJS[article]


Animating Elements

Packt
05 Jul 2016
17 min read
In this article by Alex Libby, author of the book Mastering PostCSS for Web Design, you will study animating elements. A question: if you had the choice of three websites, one static, one with badly done animation, and one enhanced with subtle use of animation, which would you choose? My hope is that the answer is number three; animation can really make a website stand out if done well, or fail miserably if done badly! So far, our content has been relatively static, save for the use of media queries. It's time to take a look at how PostCSS can help make animating content a little easier. We'll begin with a quick recap on the basics of animation before exploring the route away from pure jQuery animation, through SASS, and finally across to PostCSS. We will cover a number of topics throughout this article, including:

A recap on the use of jQuery to animate content
Switching to CSS-based animation
Exploring the use of prebuilt libraries, such as Animate.css

(For more resources related to this topic, see here.)

Let's make a start!

Revisiting basic animations

Animation is quickly becoming king in web development; more and more websites are using animations to bring life to their pages and keep content fresh. Done correctly, they add an extra layer of experience for the end user; done badly, and the website will soon lose more custom than water through a sieve! Throughout the course of the article, we'll look at moving from writing standard animation through to using preprocessors, such as SASS, and finally switching to PostCSS. I can't promise that we'll be creating complex JavaScript-based demos, such as the Caaaat animation (http://roxik.com/cat/, try resizing the window!), but we will see that using PostCSS is really easy when creating animations for the browser.

To kick off our journey, we'll start with a quick look at traditional animation. How many times have you had to use .animate() in jQuery over the years? Thankfully, we now have the power of CSS3 to help with simple animations, but there was a time when we had to animate content using jQuery. As a quick reminder, try running animate.html from the T34 - Basic animation using jQuery animate() folder. It's not going to set the world on fire, but it is a nice reminder of times gone by, when we didn't know any better. If we look at a profile of this animation in a DOM inspector in a browser such as Firefox, it would look something like this screenshot: while the exact numbers aren't critical, the key points here are the two dotted green lines and the fact that the results show a high degree of inconsistent activity. This is a good indicator that activity is erratic, with a low frame count, resulting in animations that are jumpy and less than 100% smooth. The great thing, though, is that there are options available to help provide smoother animations; we'll take a brief look at some of them before making the change to PostCSS. For now, let's take that first step away from jQuery, beginning with a look at the options available for reducing our dependency on .animate() or jQuery itself.

Moving away from jQuery

Animating content can be a contentious subject, particularly if jQuery or JavaScript is used. If we were to take a straw poll of 100 people and ask which they used, it is very likely that we would get mixed answers!
A key answer of "it depends" is likely to feature at or near the top of the list of responses; many will argue that animating content should be done using CSS, while others will affirm that JavaScript-based solutions still have value. Leaving this aside, shall we say lively debate? If we're looking to move away from using jQuery and in particular .animate(), then we have some options available to us: Upgrade your version of jQuery! Yes, this might sound at odds with the theme of this article, but the most recent versions of jQuery introduced the use of requestAnimationFrame, which improved performance, particularly on mobile devices. A quick and dirty route is to use the jQuery Animate Enhanced plugin, available from http://playground.benbarnett.net/jquery-animate-enhanced/ - although a little old, it still serves a useful purpose. It will (where possible) convert .animate() calls into CSS3 equivalents; it isn't able to convert all, so any that are not converted will remain as .animate() calls. Using the same principle, we can even take advantage of the JavaScript animation library, GSAP. The Greensock team have made available a plugin (from https://greensock.com/jquery-gsap-plugin) that replaces jQuery.animate() with their own GSAP library. The latter is reputed to be 20 times faster than standard jQuery! With a little effort, we can look to rework our existing code. In place of using .animate(), we can add the equivalent CSS3 style(s) into our stylesheet and replace existing calls to .animate() with either .removeClass() or .addClass(), as appropriate. We can switch to using libraries, such as Transit (http://ricostacruz.com/jquery.transit/). It still requires the use of jQuery, but gives better performance than using the standard .animate() command. Another alternative is Velocity JS by Jonathan Shapiro, available from http://julian.com/research/velocity/; this has the benefit of not having jQuery as a dependency. There is even talk of incorporating all or part of the library into jQuery, as a replacement for .animate(). For more details, check out the issue log at https://github.com/jquery/jquery/issues/2053. Many people automatically assume that CSS animations are faster than JavaScript (or even jQuery). After all, we don't need to call an external library (jQuery); we can use styles that are already baked into the browser, right? The truth is not as straightforward as this. In short, the right use of either will depend on your requirements and the limits of each method. For example, CSS animations are great for simple state changes, but if sequencing is required, then you may have to resort to using the JavaScript route. The key, however, is less in the method used, but more in how many frames per second are displayed on the screen. Most people cannot distinguish above 60fps. This produces a very smooth experience. Anything less than around 25FPS will produce blur and occasionally appear jerky – it's up to us to select the best method available, that produces the most effective solution. To see the difference in frame rate, take a look at https://frames-per-second.appspot.com/ the animations on this page can be controlled; it's easy to see why 60FPS produces a superior experience! So, which route should we take? Well, over the next few pages, we'll take a brief look at each of these options. In a nutshell, they are all methods that either improve how animations run or allow us to remove the dependency on .animate(), which we know is not very efficient! 
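Before we look at each of these options, it helps to have a rough way of measuring frame rates for yourself. The following snippet is purely illustrative (it is not part of the book's code download): it counts how many requestAnimationFrame callbacks fire per second and logs the result, which is enough to compare how smoothly different techniques run under load.

(function () {
  // Count rAF callbacks over roughly one-second windows
  var frames = 0;
  var start = performance.now();

  function tick(now) {
    frames++;
    if (now - start >= 1000) {
      console.log('Approximate FPS: ' + frames);
      frames = 0;
      start = now;
    }
    requestAnimationFrame(tick);
  }

  requestAnimationFrame(tick);
})();

Paste something like this into the browser console while a demo is running and you will see the frame count drop as soon as an animation starts to struggle.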
True, some of these alternatives still use jQuery, but the key here is that your existing code could be using any or a mix of these methods. All of the demos over the next few pages were run at the same time as a YouTube video was being run; this was to help simulate a little load and get a more realistic comparison. Running animations under load means less graphics processing power is available, which results in a lower FPS count. Let's kick off with a look at our first option—the Transit JS library. Animating content with Transit.js In an ideal world, any project we build will have as few dependencies as possible; this applies equally to JavaScript or jQuery-based content as CSS styling. To help with reducing dependencies, we can use libraries such as TransitJS or Velocity to construct our animations. The key here is to make use of the animations that these libraries create as a basis for applying styles that we can then manipulate using .addClass() or .removeClass(). To see what I mean, let's explore this concept with a simple demo: We'll start by opening up a copy of animate.html. To make it easier, we need to change the reference to square-small from a class to a selector: <div id="square-small"></div> Next, go ahead and add in a reference to the Transit library immediately before the closing </head> tag: <script src="js/jquery.transit.min.js"></script> The Transit library uses a slightly different syntax, so go ahead and update the call to .animate() as indicated: smallsquare.transition({x: 280}, 'slow'); Save the file and then try previewing the results in a browser. If all is well, we should see no material change in the demo. But the animation will be significantly smoother—the frame count is higher, at 44.28fps, with less dips. Let's compare this with the same profile screenshot taken for revisiting basic animations earlier in this article. Notice anything? Profiling browser activity can be complex, but there are only two things we need to concern ourselves with here: the fps value and the state of the green line. The fps value, or frames per second, is over three times higher, and for a large part, the green line is more consistent with fewer more short-lived dips. This means that we have a smoother, more consistent performance; at approximately 44fps, the average frame rate is significantly better than using standard jQuery. But we're still using jQuery! There is a difference though. Libraries such as Transit or Velocity convert animations where possible to CSS3 equivalents. If we take a peek under the covers, we can see this in the flesh: We can use this to our advantage by removing the need to use .animate() and simply use .addClass() or .removeClass(). If you would like to compare our simple animation when using Transit or Velocity, there are examples available in the code download, as demos T35A and T35B, respectively. To take it to the next step, we can use the Velocity library to create a version of our demo using plain JavaScript. We'll see how as part of the next demo. Beware though this isn't an excuse to still use JavaScript; as we'll see, there is little difference in the frame count! Animating with plain JavaScript Many developers are used to working with jQuery. After all, it makes it a cinch to reference just about any element on a page! Sometimes though, it is preferable to work in native JavaScript; this could be for speed. 
If we only need to support newer browsers (such as IE11 or Edge, and recent versions of Chrome or Firefox), then adding jQuery as a dependency isn't always necessary. The beauty about libraries, such as Transit (or Velocity), means that we don't always have to use jQuery to still achieve the same effect; as we'll see shortly, removing jQuery can help improve matters! Let's put this to the test and adapt our earlier demo to work without using jQuery: We'll start by extracting a copy of the T35B folder from the code download bundle. Save this to the root of our project area. Next, we need to edit a copy of animate.html within this folder. Go ahead and remove the link to jQuery and then remove the link to velocity.ui.min.js; we should be left with this in the <head> of our file: <link rel="stylesheet" type="text/css" href="css/style.css">   <script src="js/velocity.min.js"></script> </head> A little further down, alter the <script> block as shown:   <script>     var smallsquare = document.getElementById('square-small');     var animbutton = document.getElementById('animation-button');     animbutton.addEventListener("click", function() {       Velocity(document.getElementById('square-small'), {left: 280}, {duration: 'slow'});     }); </script> Save the file and then preview the results in a browser. If we monitor performance of our demo using a DOM Inspector, we can see a similar frame rate being recorded in our demo: With jQuery as a dependency no longer in the picture, we can clearly see that the frame rate is improved—the downside though is that the support is reduced for some browsers, such as IE8 or 9. This may not be an issue for your website; both Microsoft and the jQuery Core Team have announced changes to drop support for IE8-10 and IE8 respectively, which will help encourage users to upgrade to newer browsers. It has to be said though that while using CSS3 is preferable for speed and keeping our pages as lightweight as possible, using Velocity does provide a raft of extra opportunities that may be of use to your projects. The key here though is to carefully consider if you really do need them or whether CSS3 will suffice and allow you to use PostCSS. Switching classes using jQuery At this point, there is one question that comes to mind—what about using class-based animation? By this, I mean dropping any dependency on external animation libraries, and switching to use plain jQuery with either .addClass() or .removeClass() methods. In theory, it sounds like a great idea—we can remove the need to use .animate() and simply swap classes as needed, right? Well, it's an improvement, but it is still lower than using a combination of pure JavaScript and switching classes. It will all boil down to a trade-off between using the ease of jQuery to reference elements against pure JavaScript for speed, as follows: 1.      We'll start by opening a copy of animate.html from the previous exercise. First, go ahead and replace the call to VelocityJS with this line within the <head> of our document: <script src="js/jquery.min.js"></script> 2.      Next, remove the code between the <script> tags and replace it with this: var smallsquare = $('.rectangle').find('.square-small'); $('#animation-button').on("click", function() {       smallsquare.addClass("move");       smallsquare.one('transitionend', function(e) {     $('.rectangle').find('.square-small') .removeClass("move");     });  }); 3.      Save the file. 
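As an aside before we preview the results, the same class swap can be written without jQuery at all, using classList and addEventListener. This is an illustrative sketch only, assuming the same #animation-button and .square-small elements and the same .move class from the demo:

var smallsquare = document.querySelector('.rectangle .square-small');
var button = document.getElementById('animation-button');

button.addEventListener('click', function () {
  smallsquare.classList.add('move');

  // Remove the class once the transition finishes, so it can run again
  smallsquare.addEventListener('transitionend', function handler() {
    smallsquare.classList.remove('move');
    smallsquare.removeEventListener('transitionend', handler);
  });
});

Either version keeps the actual movement in CSS; the script's only job is to toggle the class.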
If we preview the results in a browser, we should see no apparent change in how the demo appears, but the transition is marginally more performant than using a combination of jQuery and Transit. The real change in our code though, will be apparent if we take a peek under the covers using a DOM Inspector. Instead of using .animate(), we are using CSS3 animation styles to move our square-small <div>. Most browsers will accept the use of transition and transform, but it is worth running our code through a process, such as Autocomplete, to ensure we apply the right vendor prefixes to our code. The beauty about using CSS3 here is that while it might not suit large, complex animations, we can at least begin to incorporate the use of external stylesheets, such as Animate.css, or even use a preprocessor, such as SASS to create our styles. It's an easy change to make, so without further ado and as the next step on our journey to using PostCSS, let's take a look at this in more detail. If you would like to create custom keyframe-based animations, then take a look at http://cssanimate.com/, which provides a GUI-based interface for designing them and will pipe out the appropriate code when requested! Making use of prebuilt libraries Up to this point, all of our animations have had one thing in common; they are individually created and stored within the same stylesheet as other styles for each project. This will work perfectly well, but we can do better. After all, it's possible that we may well create animations that others have already built! Over time, we may also build up a series of animations that can form the basis of a library that can be reused for future projects. A number of developers have already done this. One example of note is the Animate.css library created by Dan Eden. In the meantime, let's run through a quick demo of how it works as a precursor to working with it in PostCSS. The images used in this demo are referenced directly from the LoremPixem website as placeholder images. Let's make a start: We'll start by extracting a copy of the T37 folder from the code download bundle. Save the folder to our project area. Next, open a new file and add the following code: body { background: #eee; } #gallery {   width: 745px;   height: 500px;   margin-left: auto;   margin-right: auto; }   #gallery img {   border: 0.25rem solid #fff;   margin: 20px;   box-shadow: 0.25rem 0.25rem 0.3125rem #999;   float: left; } .animated {   animation-duration: 1s; animation-fill-mode: both; } .animated:hover {   animation-duration: 1s;   animation-fill-mode: both; }  Save this as style.css in the css subfolder within the T37 folder. Go ahead and preview the results in a browser. If all is well, then we should see something akin to this screenshot: If we run the demo, we should see images run through different types of animation; there is nothing special or complicated here. The question is though, how does it all fit in with PostCSS? Well, there's a good reason for this; there will be some developers who have used Animate.css in the past and will be familiar with how it works; we will also be using a the postcss-animation plugin later in Updating code to use PostCSS, which is based on the Animate.css stylesheet library. For those of you who are not familiar with the stylesheet library though, let's quickly run through how it works within the context of our demo. Dissecting the code to our demo The effects used in our demo are quite striking. 
Indeed, one might be forgiven for thinking that they required a lot of complex JavaScript! This, however, could not be further from the truth. The Animate.css file contains a number of animations based on @keyframe similar to this: @keyframes bounce {   0%, 20%, 50%, 80%, 100% {transform: translateY(0);}   40% {transform: translateY(-1.875rem);}   60% {transform: translateY(-0.9375rem);} } We pull in the animations using the usual call to the library within the <head> section of our code. We can then call any animation by name from within our code:   <div id="gallery">     <a href="#"><img class="animated bounce" src="http://lorempixum.com/200/200/city/1" alt="" /></a> ...   </div>   </body> You will notice the addition of the .animated class in our code. This controls the duration and timing of the animation, which are set according to which animation name has been added to the code. The downside of not using JavaScript (or jQuery for that matter) means that the animation will only run once when the demo is loaded; we can set it to run continuously by adding the .infinite class to the element being animated (this is part of the Animate library). We can fake a click option in CSS, but it is an experimental hack that is not supported across all the browsers. To affect any form of control, we really need to use JavaScript (or even jQuery)! If you are interested in details of the hack, then take a look at this response on Stack Overflow at http://stackoverflow.com/questions/13630229/can-i-have-an-onclick-effect-in-css/32721572#32721572. Okay! Onward we go. We've covered the basic use of prebuilt libraries, such as Animate. It's time to step up a gear and make the transition to PostCSS. Summary In this article, we studied about recap on the use of jQuery to animate content. We also looked into switching to CSS-based animation. At last, we saw how to make use of prebuilt libraries in short.  Resources for Article:   Further resources on this subject: Responsive Web Design with HTML5 and CSS3 - Second Edition [article] Professional CSS3 [article] Instant LESS CSS Preprocessor How-to [article]

Angular 2 Components: Making app development easier

Mary Gualtieri
30 Jun 2016
5 min read
When Angular 2 was announced in October 2014, the JavaScript community went into a frenzy. What was going to happen to the beloved Angular 1, which so many developers loved? Could change be a good thing? Change only means improvement, so we should welcome Angular 2 with open arms and embrace it. One of the biggest changes from Angular 1 to Angular 2 was the purpose of a component. In Angular 2, components are the main way to build and specify logic on a page; in other words, components are where you define a specific responsibility of your application. In Angular 1, this was achieved through directives, controllers, and scope. With this change, Angular 2 offers better functionality and actually makes it easier to build applications. Angular 2 components also ensure that code from one section of the application will not interfere with other sections.

To build a component, let's first look at what is needed to create it:

An association with DOM/host elements.
Well-defined input and output properties as its public API.
A template that describes how the component is rendered on the HTML page.
Configuration of the dependency injection.

Input and output properties

Input and output properties are considered the public API of a component, allowing you to access the backend data. The input property is where data flows into a component. Data flows out of the component through the output property. The purpose of the input and output properties is to represent a component in your application.

Template

A template is needed in order to render a component on a page. In order to render the template, you must have a list of directives that can be used in the template.

Host elements

In order for a component to be rendered in the DOM, the component must associate itself with a DOM or host element. A component can interact with its host element by listening to its events, updating properties, and invoking methods.

Dependency Injection

Dependency Injection is when a component depends on a service. You request this service through a constructor, and the framework provides you with this service. This is significant because you can depend on interfaces instead of concrete types, which enables testability and gives you more control. Dependency Injection is created and configured in directive and component decorators.

Bootstrap

In Angular, you have to bootstrap in order to initialize your application, either automatically or manually. In Angular 1, to automatically bootstrap your app, you added ng-app to your HTML file. To manually bootstrap your app, you would add angular.bootstrap(document, ['myApp']);. In Angular 2, you can bootstrap by just adding bootstrap();. It's important to remember that bootstrapping in Angular is completely different from Twitter Bootstrap.

Directives

Directives are essentially components without a template. The purpose behind a directive is to allow components to interact with one another. Another way to think of it is that a component is a directive with a template. You still have the option to write a directive with a decorator.

Selectors

Selectors are very easy to understand. Use a selector in order for Angular to identify the component; the selector is used to call the component into the HTML file. For example, if your selector is called app, you can use <app></app> to call the component in the HTML file. Let's build a simple component!
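Before walking through the steps, here is a rough sketch of where we will end up. This is an illustrative example only, not the book's code: it assumes the Angular 2 beta module paths ('angular2/core' and 'angular2/platform/browser'), and the file and class names (todo.ts, main.ts, TodoComponent, AppComponent) are hypothetical.

// todo.ts - a child component
import {Component} from 'angular2/core';

@Component({
  selector: 'todo',                    // usable as <todo></todo>
  template: '<p>A simple child component</p>'
})
export class TodoComponent {}

// main.ts - the root component, which lists the child in its directives array
import {Component} from 'angular2/core';
import {bootstrap} from 'angular2/platform/browser';
import {TodoComponent} from './todo';

@Component({
  selector: 'myapp',                   // called as <myapp></myapp> in the HTML file
  template: '<h1>Hello from the root component</h1><todo></todo>',
  directives: [TodoComponent]
})
class AppComponent {}

bootstrap(AppComponent);

With that picture in mind, the individual steps below should be easier to follow.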
Let’s walk through the steps required to actually build a component using Angular 2: Step 1: add a component decorator: Step 2: Add a selector: In your HTML file, use <myapp></myapp> to call the template. Step 3: Add a template: Step 4: Add a class to represent the component: Step 5: Bootstrap the component class: Step 6: Finally, import both the bootstrap and Component file: This is a root component. In Angular, you have what is called a component tree. Everything comes back to the component tree. The question that you must ask yourself is: what does a component looks like if it is not a root component? Perform the following steps for this: Step 1: Add an import component: For every component that you create, it is important to add “import {Component} from "angular2/core";” Step 2: Add a selector and a template: Step 3: Export the class that represents the component: Step 4: Switch to the root component. Then, import the component file: Add the relative path(./todo) to the file. Step 5: Add an array of directives to the root component in order to be able to use the component: Let’s review. In order to make a component, you must associate host elements, have well-defined input and output properties, have a template, and configure Dependency Injection. This is all achieved through the use of selectors, directives, and a template. Angular 2 is still in the beta stages, but once it arrives, it will be a game changer for the development world. About the author Mary Gualtieri is a full stack web developer and web designer who enjoys all aspects of the web and creating a pleasant user experience. Web development, specifically frontend development, is an interest of hers because it challenges her to think outside of the box and solve problems, all while constantly learning. She can be found on GitHub as MaryGualtieri


Adding Media to Our Site

Packt
21 Jun 2016
19 min read
In this article by Neeraj Kumar et al, authors of the book, Drupal 8 Development Beginner's Guide - Second Edtion, explains a text-only site is not going to hold the interest of visitors; a site needs some pizzazz and some spice! One way to add some pizzazz to your site is by adding some multimedia content, such as images, video, audio, and so on. But, we don't just want to add a few images here and there; in fact, we want an immersive and compelling multimedia experience that is easy to manage, configure, and extend. The File entity (https://drupal.org/project/file_entity) module for Drupal 8 will enable us to manage files very easily. In this article, we will discover how to integrate the File entity module to add images to our d8dev site, and will explore compelling ways to present images to users. This will include taking a look at the integration of a lightbox-type UI element for displaying the File-entity-module-managed images, and learning how we can create custom image styles through UI and code. The following topics will be covered in this article: The File entity module for Drupal 8 Adding a Recipe image field to your content types Code example—image styles for Drupal 8 Displaying recipe images in a lightbox popup Working with Drupal issue queues (For more resources related to this topic, see here.) Introduction to the File entity module As per the module page at https://www.drupal.org/project/file_entity: File entity provides interfaces for managing files. It also extends the core file entity, allowing files to be fieldable, grouped into types, viewed (using display modes) and formatted using field formatters. File entity integrates with a number of modules, exposing files to Views, Entity API, Token and more. In our case, we need this module to easily edit image properties such as Title text and Alt text. So these properties will be used in the colorbox popup to display them as captions. Working with dev versions of modules There are times when you come across a module that introduces some major new features and is fairly stable, but not quite ready for use on a live/production website, and is therefore available only as a dev version. This is a perfect opportunity to provide a valuable contribution to the Drupal community. Just by installing and using a dev version of a module (in your local development environment, of course), you are providing valuable testing for the module maintainers. Of course, you should enter an issue in the project's issue queue if you discover any bugs or would like to request any additional features. Also, using a dev version of a module presents you with the opportunity to take on some custom Drupal development. However, it is important that you remember that a module is released as a dev version for a reason, and it is most likely not stable enough to be deployed on a public-facing site. Our use of the File entity module in this article is a good example of working with the dev version of a module. One thing to note: Drush will download official and dev module releases. But at this point in time, there is no official port for the File entity module in Drupal, so we will use the unofficial one, which lives on GitHub (https://github.com/drupal-media/file_entity). In the next step, we will be downloading the dev release with GitHub. 
Time for action – installing a dev version of the File entity module In Drupal, we use Drush to download and enable any module/theme, but there is no official port yet for the file entity module in Drupal, so we can use the unofficial one, which lives on GitHub at https://github.com/drupal-media/file_entity: Open the Terminal (Mac OS X) or Command Prompt (Windows) application, and go to the root directory of your d8dev site. Go inside the modules folder and download the File entity module from GitHub. We use the git command to download this module: $ git clone https://github.com/drupal-media/file_entity. Another way is to download a .zip file from https://github.com/drupal-media/file_entity and extract it in the modules folder: Next, on the Extend page (admin/modules), enable the File entity module. What just happened? We enabled the File entity module, and learned how to download and install with GitHub. A new recipe for our site In this article, we are going to create a new recipe: Thai Basil Chicken. If you would like to have more real content to use as an example, and feel free to try the recipe out! Name: Thai Basil Chicken Description: A spicy, flavorful version of one of my favorite Thai dishes RecipeYield : Four servings PrepTime: 25 minutes CookTime: 20 minutes Ingredients : One pound boneless chicken breasts Two tablespoons of olive oil Four garlic cloves, minced Three tablespoons of soy sauce Two tablespoons of fish sauce Two large sweet onions, sliced Five cloves of garlic One yellow bell pepper One green bell pepper Four to eight Thai peppers (depending on the level of hotness you want) One-third cup of dark brown sugar dissolved in one cup of hot water One cup of fresh basil leaves Two cups of Jasmin rice Instructions: Prepare the Jasmine rice according to the directions. Heat the olive oil in a large frying pan over medium heat for two minutes. Add the chicken to the pan and then pour on soy sauce. Cook the chicken until there is no visible pinkness—approximately 8 to 10 minutes. Reduce heat to medium low. Add the garlic and fish sauce, and simmer for 3 minutes. Next, add the Thai chilies, onion, and bell pepper and stir to combine. Simmer for 2 minutes. Add the brown sugar and water mixture. Stir to mix, and then cover. Simmer for 5 minutes. Uncover, add basil, and stir to combine. Serve over rice. Time for action – adding a Recipe images field to our Recipe content type We will use the manage fields administrative page to add a Media field to our d8dev Recipe content type: Open up the d8dev site in your favorite browser, click on the Structure link in the Admin toolbar, and then click on the Content types link. Next, on the Content types administrative page, click on the Manage fields link for your Recipe content type: Now, on the Manage fields administrative page, click on the Add field link. On the next screen, select Image from the Add a new field dropdown and Label as Recipe images. Click on the Save field settings button. Next, on the Field settings page, select Unlimited as the allowed number of values. Click on the Save field settings button. On the next screen, leave all settings as they are and click on the Save settings button. Next, on the Manage form display page, select widget Editable file for the Recipe images field and click on the Save button. Now, on the Manage display page, for the Recipe images field, select Hidden as the label. Click on the settings icon. Then select Medium (220*220) as the image style, and click on the Update button. 
At the bottom, click on the Save button: Let's add some Recipe images to a recipe. Click on the Content link in the menu bar, and then click on Add content and Recipe. On the next screen, fill in the title as Thai Basil Chicken and other fields respectively as mentioned in the preceding recipe details. Now, scroll down to the new Recipe images field that you have added. Click on the Add a new file button or drag and drop images that you want to upload. Then click on the Save and Publish button: Reload your Thai Basil Chicken recipe page, and you should see something similar to the following: All the images are stacked on top of each other. So, we will add the following CSS just under the style for field--name-field-recipe-images and field--type-recipe-images in the /modules/d8dev/styles/d8dev.css file, to lay out the Recipe images in more of a grid: .node .field--type-recipe-images { float: none !important; } .field--name-field-recipe-images .field__item { display: inline-flex; padding: 6px; } Now we will load this d8dev.css file to affect this grid style. In Drupal 8, loading a CSS file has a process: Save the CSS to a file. Define a library, which can contain the CSS file. Attach the library to a render array in a hook. So, we have already saved a CSS file called d8dev.css under the styles folder; now we will define a library. To define one or more (asset) libraries, add a *.libraries.yml file to your theme folder. Our module is named d8dev, and then the filename should be d8dev.libraries.yml. Each library in the file is an entry detailing CSS, like this: d8dev: version: 1.x css: theme: styles/d8dev.css: {} Now, we define the hook_page_attachments() function to load the CSS file. Add the following code inside the d8dev.module file. Use this hook when you want to conditionally add attachments to a page: /** * Implements hook_page_attachments(). */ function d8dev_page_attachments(array &$attachments) { $attachments['#attached']['library'][] = 'd8dev/d8dev'; } Now, we will need to clear the cache for our d8dev site by going to Configuration, clicking on the Performance link, and then clicking on the Clear all caches button. Reload your Thai Basil Chicken recipe page, and you should see something similar to the following: What just happened? We added and configured a media-based field for our Recipe content type. We updated the d8dev module with custom CSS code to lay out the Recipe images in more of a grid format. And also we looked at how to attach a CSS file through a module. Creating a custom image style Before we configure a colorbox feature, we are going to create a custom image style to use when we add them in colorbox content preview settings. Image styles for Drupal 8 are part of the core Image module. The core image module provides three default image styles—thumbnail, medium, and large—as seen in the following Image style configuration page: Now, we are going to add a fifth custom image style, an image style that will resize our images somewhere between the 100 x 75 thumbnail style and the 220 x 165 medium style. We will walkthrough the process of creating an image style through the Image style administrative page, and also walkthrough the process of programmatically creating an image style. 
Time for action – adding a custom image style through the image style administrative page First, we will use the Image style administrative page (admin/config/media/image-styles) to create a custom image style: Open the d8dev site in your favorite browser, click on the Configuration link in the Admin toolbar, and click on the Image styles link under the Media section. Once the Image styles administrative page has loaded, click on the Add style link. Next, enter small for the Image style name of your custom image style, and click on the Create new style button: Now, we will add the one and only effect for our custom image style by selecting Scale from the EFFECT options and then clicking on the Add button. On the Add Scale effect page, enter 160 for the width and 120 for the height. Leave the Allow Upscaling checkbox unchecked, and click on the Add effect button: Finally, just click on the Update style button on the Edit small style administrative page, and we are done. We now have a new custom small image style that we will be able to use to resize images for our site: What just happened? We learned how easy it is to add a custom image style with the administrative UI. Now, we are going to see how to add a custom image style by writing some code. The advantage of having code-based custom image styles is that it will allow us to utilize a source code repository, such as Git, to manage and deploy our custom image styles between different environments. For example, it would allow us to use Git to promote image styles from our development environment to a live production website. Otherwise, the manual configuration that we just did would have to be repeated for every environment. Time for action – creating a programmatic custom image style Now, we will see how we can add a custom image style with code: The first thing we need to do is delete the small image style that we just created. So, open your d8dev site in your favorite browser, click on the Configuration link in the Admin toolbar, and then click on the Image styles link under the Media section. Once the Image styles administrative page has loaded, click on the delete link for the small image style that we just added. Next, on the Optionally select a style before deleting small page, leave the default value for the Replacement style select list as No replacement, just delete, and click on the Delete button: In Drupal 8, image styles have been converted from an array to an object that extends ConfigEntity. All image styles provided by modules need to be defined as YAML configuration files in the config/install folder of each module. Suppose our module is located at modules/d8dev. Create a file called modules/d8dev/config/install/image.style.small.yml with the following content: uuid: b97a0bd7-4833-4d4a-ae05-5d4da0503041 langcode: en status: true dependencies: { } name: small label: small effects: c76016aa-3c8b-495a-9e31-4923f1e4be54: uuid: c76016aa-3c8b-495a-9e31-4923f1e4be54 id: image_scale weight: 1 data: width: 160 height: 120 upscale: false We need to use a UUID generator to assign unique IDs to image style effects. Do not copy/paste UUIDs from other pieces of code or from other image styles! The name of our custom style is small, is provided as the name and label as same. For each effect that we want to add to our image style, we will specify the effect we want to use as the name key, and then pass in values as the settings for the effect. 
In the case of the image_scale effect that we are using here, we pass in the width, height, and upscale settings. Finally, the value for the weight key allows us to specify the order the effects should be processed in, and although it is not very useful when there is only one effect, it becomes important when there are multiple effects. Now, we will need to uninstall and install our d8dev module by going to the Extend page. On the next screen click on the Uninstall tab, check the d8dev checkbox and click on the Uninstall button. Now, click on the List tab, check d8dev, and click on the Install button. Then, go back to the Image styles administrative page and you will see our programmatically created small image style. What just happened? We created a custom image style with some custom code. We then configured our Recipe content type to use our custom image style for images added to the Recipe images field. Integrating the Colorbox and File entity modules The File entity module provides interfaces for managing files. For images, we will be able to edit Title text, Alt text, and Filenames easily. However, the images are taking up quite a bit of room. Let's create a pop-up lightbox gallery and show images in a popup. When someone clicks on an image, a lightbox will pop up and allow the user to cycle through larger versions of all associated images. Time for action – installing the Colorbox module Before we can display Recipe images in a Colorbox, we need to download and enable the module: Open the Mac OS X Terminal or Windows Command Prompt, and change to the d8dev directory. Next, use Drush to download and enable the current dev release of the Colorbox module (http://drupal.org/project/colorbox): $ drush dl colorbox-8.x-1.x-dev Project colorbox (8.x-1.x-dev) downloaded to /var/www/html/d8dev/modules/colorbox. [success] $ drushencolorbox The following extensions will be enabled: colorbox Do you really want to continue? (y/n): y colorbox was enabled successfully. [ok] The Colorbox module depends on the Colorbox jQuery plugin available at https://github.com/jackmoore/colorbox. The Colorbox module includes a Drush task that will download the required jQuery plugin at the /libraries directory: $drushcolorbox-plugin Colorbox plugin has been installed in libraries [success] Next, we will look into the Colorbox display formatter. Click on the Structure link in the Admin toolbar, then click on the Content types link, and finally click on the manage display link for your Recipe content type under the Operations dropdown: Next, click on the FORMAT select list for the Recipe images field, and you will see an option for Colorbox, Select as Colorbox then you will see the settings change. Then, click on the settings icon: Now, you will see the settings for Colorbox. Select Content image style as small and Content image style for first image as small in the dropdown, and use the default settings for other options. Click on the Update button and next on the Save button at the bottom: Reload our Thai Basil Chicken recipe page, and you should see something similar to the following (with the new image style, small): Now, click on any image and then you will see the image loaded in the colorbox popup: We have learned more about images for colorbox, but colorbox also supports videos. Another way to add some spice to our site is by adding videos. So there are several modules available to work with colorbox for videos. 
The Video Embed Field module creates a simple field type that allows you to embed videos from YouTube and Vimeo and show their thumbnail previews simply by entering the video's URL. So you can try this module to add some pizzazz to your site! What just happened? We installed the Colorbox module and enabled it for the Recipe images field on our custom Recipe content type. Now, we can easily add images to our d8dev content with the Colorbox pop-up feature. Working with Drupal issue queues Drupal has its own issue queue for working with a team of developers around the world. If you need help for a specific project, core, module, or a theme related, you should go to the issue queue, where the maintainers, users, and followers of the module/theme communicate. The issue page provides a filter option, where you can search for specific issues based on Project, Assigned, Submitted by, Followers, Status, Priority, Category, and so on. We can find issues at https://www.drupal.org/project/issues/colorbox. Here, replace colorbox with the specific module name. For more information, see https://www.drupal.org/issue-queue. In our case, we have one issue with the colorbox module. Captions are working for the Automatic and Content title properties, but are not working for the Alt text and Title text properties. To check this issue, go to Structure | Content types and click on Manage display. On the next screen, click on the settings icon for the Recipe images field. Now select the Caption option as Title text or Alt text and click on the Update button. Finally, click on the Save button. Reload the Thai Basil Chicken recipe page, and click on any image. Then it opens in popup, but we cannot see captions for this. Make sure you have the Title text and Alt text properties updated for Recipe images field for the Thai Basil Chicken recipe. Time for action – creating an issue for the Colorbox module Now, before we go and try to figure out how to fix this functionality for the Colorbox module, let's create an issue: On https://www.drupal.org/project/issues/colorbox, click on the Create a new issue link: On the next screen we will see a form. We will fill in all the required fields: Title, Category as Bug report, Version as 8.x-1.x-dev, Component as Code, and the Issue summary field. Once I submitted my form, an issue was created at https://www.drupal.org/node/2645160. You should see an issue on Drupal (https://www.drupal.org/) like this: Next, the Maintainers of the colorbox module will look into this issue and reply accordingly. Actually, @frjo replied saying "I have never used that module but if someone who does sends in a patch I will take a look at it." He is a contributor to this module, so we will wait for some time and will see if someone can fix this issue by giving a patch or replying with useful comments. In case someone gives the patch, then we have to apply that to the colorbox module. This information is available on Drupal at https://www.drupal.org/patch/apply . What just happened? We understood and created an issue in the Colorbox module's issue queue list. We also looked into what the required fields are and how to fill them to create an issue in the Drupal module queue list form. Summary In this article, we looked at a way to use our d8dev site with multimedia, creating image styles using some custom code, and learned some new ways of interacting with the Drupal developer community. We also worked with the Colorbox module to add images to our d8dev content with the Colorbox pop-up feature. 
Lastly, we looked into the custom module to work with custom CSS files. Resources for Article: Further resources on this subject: Installing Drupal 8 [article] Drupal 7 Social Networking: Managing Users and Profiles [article] Drupal 8 and Configuration Management [article]


Five common questions for .NET/Java developers learning JavaScript and Node.js

Packt
20 Jun 2016
19 min read
In this article by Harry Cummings, author of the book Learning Node.js for .NET Developers For those with web development experience in .NET or Java, perhaps who've written some browser-based JavaScript in the past, it might not be obvious why anyone would want to take JavaScript beyond the browser and treat it as a general-purpose programming language. However, this is exactly what Node.js does. What's more, Node.js has been around for long enough now to have matured as a platform, and has sustained its impressive growth in popularity well beyond any period that could be attributed to initial hype over a new technology. In this introductory article, we'll look at why Node.js is a compelling technology worth learning more about, and address some of the common barriers and sources of confusion that developers encounter when learning Node.js and JavaScript. (For more resources related to this topic, see here.) Why use Node.js? The execution model of Node.js follows that of JavaScript in the browser. This might not be an obvious choice for server-side development. In fact, these two use cases do have something important in common. User interface code is naturally event-driven (for example, binding event handlers to button clicks). Node.js makes this a virtue by applying an event-driven approach to server-side programming. Stated formally, Node.js has a single-threaded, non-blocking, event-driven execution model. We'll define each of these terms. Non-blocking Put simply, Node.js recognizes that many programmes spend most of their time waiting for other things to happen. For example, slow I/O operations such as disk access and network requests. Node.js addresses this by making these operations non-blocking. This means that program execution can continue while they happen. This non-blocking approach is also called asynchronous programming. Of course, other platforms support this too (for example, C#'s async/await keywords and the Task Parallel Library). However, it is baked in to Node.js in a way that makes it simple and natural to use. Asynchronous API methods are all called in the same way: They all take a callback function to be invoked ("called back") when the execution completes. This function is invoked with an optional error parameter and the result of the operation. The consistency of calling non-blocking (asynchronous) API methods in Node.js carries through to its third-party libraries. This consistency makes it easy to build applications that are asynchronous throughout. Other JavaScript libraries, such as bluebird (http://bluebirdjs.com/docs/getting-started.html), allow callback-based APIs to be adapted to other asynchronous patterns. As an alternative to callbacks, you may choose to use Promises (similar to Tasks in .NET or Futures in Java) or coroutines (similar to async methods in C#) within your own codebase. This allows you to streamline your code while retaining the benefits of consistent asynchronous APIs in Node.js and its third-party libraries. Event-driven The event-driven nature of Node.js describes how operations are scheduled. In typical procedural programming environments, a program has some entry point that executes a set of commands until completion or enters a loop to perform work on each iteration. Node.js has a built-in event loop, which isn't directly exposed to the developer. It is the job of the event loop to decide which piece of code to execute next. Typically, this will be some callback function that is ready to run in response to some other event. 
For example, a filesystem operation may have completed, a timeout may have expired, or a new network request may have arrived. This built-in event loop simplifies asynchronous programming by providing a consistent approach and avoiding the need for applications to manage their own scheduling. Single-threaded The single-threaded nature of Node.js simply means that there is only one thread of execution in each process. Also, each piece of code is guaranteed to run to completion without being interrupted by other operations. This greatly simplifies development and makes programmes easier to reason about. It removes the possibility for a range of concurrency issues. For example, it is not necessary to synchronize/lock access to shared in-process state as it is in Java or .NET. A process can't deadlock itself or create race conditions within its own code. Single-threaded programming is only feasible if the thread never gets blocked waiting for long-running work to complete. Thus, this simplified programming model is made possible by the non-blocking nature of Node.js. Writing web applications The flagship use case for Node.js is in building websites and web APIs. These are inherently event-driven as most or all processing takes place in response to HTTP requests. Also, many websites do little computational heavy-lifting of their own. They tend to perform a lot of I/O operations, for example: Streaming requests from the client Talking to a database locally or over the network Pulling in data from remote APIs over the network Reading files from disk to send back to the client These factors make I/O operations a likely bottleneck for web applications. The non-blocking programming model of Node.js allows web applications to make the most of a single thread. As soon as any of these I/O operations starts, the thread is immediately free to pick up and start processing another request. Processing of each request continues via asynchronous callbacks when I/O operations complete. The processing thread is only kicking off and linking together these operations, never waiting for them to complete. This allows Node.js to handle a much higher rate of requests per thread than other runtime environments. How does Node.js scale? So, Node.js can handle many requests per thread, but what happens when we reach the limit of what one thread can handle? The answer is, of course, to use more threads! You can achieve this by starting multiple Node.js processes, typically, one for each web server CPU core. Note that this is still quite different to most Java or .NET web applications. These typically use a pool of threads much larger than the number of cores, because threads are expected to spend much of their time being blocked. The built-in Node.js cluster module makes it straightforward to spawn multiple Node.js processes. Tools such as PM2 (http://pm2.keymetrics.io/) and libraries such as throng (https://github.com/hunterloftis/throng) make it even easier to do so. This approach gives us the best of all worlds: Using multiple threads makes the most of our available CPU power By having a single thread per core, we also save overheads from the operating system context-switching between threads Since the processes are independent and don't share state directly, we retain the benefits of the single-threaded programming model discussed above By using long-running application processes (as with .NET or Java), we avoid the overhead of a process-per-request (as in PHP) Do I really have to use JavaScript? 
A lot of web developers new to Node.js will already have some experience of client-side JavaScript. This experience may not have been positive and might put you off using JavaScript elsewhere. You do not have to use JavaScript to work with Node.js. TypeScript (http://www.typescriptlang.org/) and other compile-to-JavaScript languages exist as alternatives. However, I do recommend learning Node.js with JavaScript first. It will give you a clearer understanding of Node.js and simplify your tool chain. Once you have a project or two under your belt, you'll be better placed to understand the pros and cons of other languages. In the meantime, you might be pleasantly surprised by the JavaScript development experience in Node.js. There are three broad categories of prior JavaScript development experience that can lead to people having a negative impression of it. These are as follows: Experience from the late 90s and early 00s, prior to MV* frameworks like Angular/Knockout/Backbone/Ember, maybe even prior to jQuery. This is the pioneer phase of client-side web development. More recent experience within the much more mature JavaScript ecosystem, perhaps as a full-stack developer writing server-side and client-side code. The complexity of some frameworks (such as the MV* frameworks listed earlier), or the sheer amount of choice in general, can be overwhelming. Limited experience with JavaScript itself, but exposure to some its most unusual characteristics. This may lead to a jarring sensation as a result of encountering the language in surprising or unintuitive ways. We'll address groups of people affected by each type of experience in turn. But note that individuals might identify with more than one of these groups. I'm happy to admit that I've been a member of all three in the past. The web pioneers These developers have been burned by worked with client-side JavaScript in the past. The browser is sometimes described as a hostile environment for code to execute in. A single execution context shared by all code allows for some particularly nasty gotchas. For example, third-party code on the same page can create and modify global objects. Node.js solves some of these issues on a fundamental level, and mitigates others where this isn't possible. It's JavaScript, so it's still the case that everything is mutable. But the Node.js module system narrows the global scope, so libraries are less likely to step on each other's toes. The conventions that Node.js establishes also make third-party libraries much more consistent. This makes the environment less hostile and more predictable. The web pioneers will also have had to cope with the APIs available to JavaScript in the browser. Although these have improved over time as browsers and standards have matured, the earlier days of web development were more like the Wild West. Quirks and inconsistencies in fundamental APIs caused a lot of hard work and frustration. The rise of jQuery is a testament to the difficulty of working with the Document Object Model of old. The continued popularity of jQuery indicates that people still prefer to avoid working with these APIs directly. Node.js addresses these issues quite thoroughly: First of all, by taking JavaScript out of the browser, the DOM and other APIs simply go away as they are no longer relevant. The new APIs that Node.js introduces are small, focused, and consistent. You no longer need to contend with inconsistencies between browsers. Everything you write will execute in the same JavaScript engine (V8). 
The overwhelmed full-stack developers Many of the frontend JavaScript frameworks provide a lot of power, but come with a great deal of complexity. For example, AngularJS has a steep learning curve, is quite opinionated about application structure, and has quite a few gotchas or things you just need to know. JavaScript itself is actually a language with a very small surface area. This provides a blank canvas for Node.js to provide a small number of consistent APIs (as described in the previous section). Although there's still plenty to learn in total, you can focus on just the things you need without getting tripped up by areas you're not yet familiar with. It's still true that there's a lot of choice and that this can be bewildering. For example, there are many competing test frameworks for JavaScript. The trend towards smaller, more composable packages in the Node.js ecosystem—while generally a good thing—can mean more research, more decisions, and fewer batteries-included frameworks that do everything out of the box. On balance though, this makes it easier to move at your own pace and understand everything that you're pulling into your application. The JavaScript dabblers It's easy to have a poor impression of JavaScript if you've only worked with it occasionally and never as the primary (or even secondary) language on a project. JavaScript doesn't do itself any favors here, with a few glaring gotchas that most people will encounter. For example, the fundamentally broken == equality operator and other symptoms of type coercion. Although these make a poor first impression, they aren't really indicative of the experience of working with JavaScript more regularly. As mentioned in the previous section, JavaScript itself is actually a very small language. Its simplicity limits the number of gotchas there can be. While there are a few things you "just need to know", it's a short list. This compares well against the languages that offer a constant stream of nasty surprises (for example, PHP's notoriously inconsistent built-in functions). What's more, successive ECMAScript standards have done a lot to clean up the JavaScript language. With Node.js, you get to take advantage of this, as all your code will run on the V8 engine, which implements the latest ES2015 standard. The other big that reason JavaScript can be jarring is more a matter of context than an inherent flaw. It looks superficially similar to the other languages with a C-like syntax, like Java and C#. The similarity to Java was intentional when JavaScript was created, but it's unfortunate. JavaScript's programming model is quite different to other object-oriented languages like Java or C#. This can be confusing or frustrating, when its syntax suggests that it may work in roughly the same way. This is especially true of object-oriented programming in JavaScript, as we'll discuss shortly. Once you've understood the fundamentals of JavaScript though, it's very easy to work productively with it. Working with JavaScript I'm not going to argue that JavaScript is the perfect language. But I do think many of the factors that lead to people having a bad impression of JavaScript are not down to the language itself. Importantly, many factors simply don't apply when you take JavaScript out of the browser environment. What's more, JavaScript has some really great extrinsic properties. These are things that aren't visible in the code, but have an effect on what it's like to work with the language. 
For example, JavaScript's interpreted nature makes it easy to set up automated tests to run continuously and provide near-instant feedback on your code changes.
How does inheritance work in JavaScript?
When introducing object-oriented programming, we usually talk about classes and inheritance. Java, C#, and numerous other languages take a very similar approach to these concepts. JavaScript is quite unusual in that it supports object-oriented programming without classes. It does this by applying the concept of inheritance directly to objects. Anything that is not one of JavaScript's built-in primitives (strings, numbers, null, and so on) is an object. Functions are just a special type of object that can be invoked with arguments. Arrays are a special type of object with list-like behavior. All objects (including these two special types) can have properties, which are just names with a value. You can think of JavaScript objects as a dictionary with string keys and object values.
Programming without classes
Let's say you have a chart with a very large number of data points. These points may be represented by objects that have some common behavior. In C# or Java, you might create a Point class. In JavaScript, you could implement points like this:
function createPoint(x, y) {
    return {
        x: x,
        y: y,
        isAboveDiagonal: function() {
            return this.y > this.x;
        }
    };
}

var myPoint = createPoint(1, 2);
console.log(myPoint.isAboveDiagonal()); // Prints "true"
The createPoint function returns a new object each time it is called (the object is defined using JavaScript's object-literal notation, which is the basis for JSON). One problem with this approach is that the function assigned to the isAboveDiagonal property is redefined for each point on the graph, thus taking up more space in memory.
You can address this using prototypal inheritance. Although JavaScript doesn't have classes, objects can inherit from other objects. Each object has a prototype. If the interpreter attempts to access a property on an object and that property doesn't exist, it will look for a property with the same name on the object's prototype instead. If the property doesn't exist there, it will check the prototype's prototype, and so on. The prototype chain will end with the built-in Object.prototype. You can implement point objects using a prototype as follows:
var pointPrototype = {
    isAboveDiagonal: function() {
        return this.y > this.x;
    }
};

function createPoint(x, y) {
    var newPoint = Object.create(pointPrototype);
    newPoint.x = x;
    newPoint.y = y;
    return newPoint;
}

var myPoint = createPoint(1, 2);
console.log(myPoint.isAboveDiagonal()); // Prints "true"
The Object.create method creates a new object with a specified prototype. The isAboveDiagonal method now only exists once in memory, on the pointPrototype object. When the code tries to call isAboveDiagonal on an individual point object, it is not present, but it is found on the prototype instead.
Note that the preceding example tells us something important about the behavior of the this keyword in JavaScript. It actually refers to the object that the current function was called on, rather than the object it was defined on.
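To make that last point concrete, here is a small sketch (reusing the pointPrototype object from the example above) showing that this is resolved at call time, so the single function shared via the prototype sees a different this for each point it is called on:
var pointPrototype = {
    isAboveDiagonal: function() {
        return this.y > this.x;
    }
};

var a = Object.create(pointPrototype);
a.x = 1;
a.y = 2;

var b = Object.create(pointPrototype);
b.x = 2;
b.y = 1;

// The same function, found via the prototype chain, is called on two receivers
console.log(a.isAboveDiagonal()); // Prints "true"  - 'this' is a
console.log(b.isAboveDiagonal()); // Prints "false" - 'this' is b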
Creating objects with the 'new' keyword
You can rewrite the previous code example in a more compact form using the new operator:
function Point(x, y) {
    this.x = x;
    this.y = y;
}

Point.prototype.isAboveDiagonal = function() {
    return this.y > this.x;
};

var myPoint = new Point(1, 2);
By convention, functions have a property named prototype, which defaults to an empty object. Using the new operator with the Point function creates an object that inherits from Point.prototype and applies the Point function to the newly created object.
Programming with classes
Although JavaScript doesn't fundamentally have classes, ES2015 introduces a new class keyword. This makes it possible to implement shared behavior and inheritance in a way that may be more familiar compared to other object-oriented languages. The equivalent of the previous code example would look like the following:
class Point {
    constructor(x, y) {
        this.x = x;
        this.y = y;
    }

    isAboveDiagonal() {
        return this.y > this.x;
    }
}

var myPoint = new Point(1, 2);
Note that this really is equivalent to the previous example. The class keyword is just syntactic sugar for setting up the prototype-based inheritance already discussed.
Once you know how to define objects and classes, you can start to structure the rest of your application.
How do I structure Node.js applications?
In C# and Java, the static structure of an application is defined by namespaces or packages (respectively) and static types. An application's run-time structure (that is, the set of objects created in memory) is typically bootstrapped using a dependency injection (DI) container. Examples of DI containers include Ninject, Autofac, and Unity in .NET, or Spring, Guice, and Dagger in Java. These frameworks provide features like declarative configuration and autowiring of dependencies.
Since JavaScript is a dynamic, interpreted language, it has no inherent static application structure. Indeed, in the browser, all the scripts loaded into a page run one after the other in the same global context. The Node.js module system allows you to structure your application into files and directories and provides a mechanism for importing functionality from one file into another.
There are DI containers available for JavaScript, but they are less commonly used. It is more common to pass around dependencies explicitly. The Node.js module system and JavaScript's dynamic typing make this approach more natural. You don't need to add a lot of fields and constructors/properties to set up dependencies. You can just wrap modules in an initialization function that takes dependencies as parameters.
The following very simple example illustrates the Node.js module system, and shows how to inject dependencies via a factory function.
We add the following code under /src/greeter.js:
module.exports = function(writer) {
    return {
        greet: function() { writer.write('Hello World!'); }
    };
};
We add the following code under /src/main.js:
var consoleWriter = {
    write: function(string) { console.log(string); }
};
var greeter = require('./greeter.js')(consoleWriter);
greeter.greet();
In the Node.js module system, each file establishes a new module with its own global scope. Within this scope, Node.js provides the module object for the current module to export its functionality, and the require function for importing other modules.
If you run the previous example (using node main.js), the Node.js runtime will load the greeter module as a result of the main module's call to the require function. The greeter module assigns a value to the exports property of the module object. This becomes the return value of the require call back in the main module. In this case, the greeter module exports a single object, which is a factory function that takes a dependency.
Summary
In this article, we have:
Understood the Node.js programming model and its use in web applications
Described how Node.js web applications can scale
Discussed the suitability of JavaScript as a programming language
Illustrated how object-oriented programming works in JavaScript
Seen how dependency injection works with the Node.js module system
Hopefully this article has given you some insight into why Node.js is a compelling technology, and made you better prepared to learn more about writing server-side applications with JavaScript and Node.js.
Resources for Article:
Further resources on this subject:
Web Components [article]
Implementing a Log-in screen using Ext JS [article]
Arrays [article]

Web Components

Packt
17 Jun 2016
12 min read
In this article by Arshak Khachatryan, the author of Getting Started with Polymer, we will discuss web components. Currently, web technologies are growing rapidly. Though most websites use these technologies nowadays, we come across many with a bad, unresponsive UI design and awful performance. The main reason we should think about responsive websites is that users are now moving to the mobile web. 55% of web users use mobile phones because they are faster and more comfortable. This is why we need to provide mobile content in the simplest way possible. Everything is moving to minimalism, even the Web.
The new web standards are changing rapidly too. In this article, we will cover one of these new technologies, web components, and what they do. We will discuss the following specifications of web components in this article:
Templates
Shadow DOM
(For more resources related to this topic, see here.)
Templates
In this section, we will discuss what we can do with templates. However, let's answer a few questions before this. What are templates, and why should we use them?
Templates are basically fragments of HTML, but let's call these fragments the "zombie" fragments of HTML as they are neither alive nor dead. What is meant by "neither alive nor dead"? Let me explain this with a real-life example.
Once, when I was working on the ucraft.me project (it's a website built with a lot of cool stuff in it), we faced some rather new challenges with templates. We had a lot of form elements, but we didn't know where to save the form elements' content. We didn't want to load the DOM of each form element, but what could we do? As always, we did some magic; we created a lot of div elements with the form elements and hid them with CSS. However, the CSS display: none property only prevented the elements from rendering; the browser still loaded them. This was a problem because there were a lot of form element templates, and it affected the performance of the website. I recommended to my team that they work with templates.
Templates can contain HTML content, but the browser neither loads nor renders it. We call template elements "dead elements" because their content is not loaded until you retrieve it with JavaScript. Let's move ahead, and let me show you some examples of how you can create templates and do some stuff with their content.
Imagine that you are working on a big project where you need to load some dynamic content without AJAX. If I had a task such as this, I would have created a PHP file and fetched its content by calling the jQuery .load() function. However, now you can save your content inside the <template> element and get the content without any jQuery and AJAX, but with a single line of JavaScript code.
Let's create a template. In index.html, we have <template> and some content we want to get in the future, as shown in the following code block:
<template class="superman">
  <div>
    <img src="assets/img/superman.png" class="animated_superman" />
  </div>
</template>
The time has now come for JavaScript! Execute the following code:
<script>
  // selecting the template element with querySelector()
  var tmpl = document.querySelector('.superman');
  // getting the <template> content
  var content = tmpl.content;
  // making some changes in the content
  content.querySelector('.animated_superman').width = 200;
  // appending the template to the body
  document.body.appendChild(content);
</script>
So, that's it! Cool, right? The content will load only after you append it to the document.
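One detail worth noting: appending tmpl.content moves the nodes out of the template, so the example above can only be stamped once. A common pattern, sketched below and not part of the original example, is to clone the fragment with document.importNode (or cloneNode) so the same template can be reused as many times as needed:
<script>
  var tmpl = document.querySelector('.superman');

  // Deep-copy the inert fragment instead of moving it out of the template
  var firstCopy = document.importNode(tmpl.content, true);
  var secondCopy = tmpl.content.cloneNode(true); // an equivalent alternative

  document.body.appendChild(firstCopy);
  document.body.appendChild(secondCopy);

  // The original <template> still holds its content and can be stamped again later
</script>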
So, do you realize that templates are a part of the future web? If you are using Chrome Canary, just turn on the experimental web platform features flag and enable HTML imports and experimental JavaScript.
There are four ways to use templates, which are:
Add templates with hidden elements in the document and just copy and paste the data when you need it, as follows:
<div hidden data-template="superman">
  <div>
    <p>SuperMan Head</p>
    <img src="assets/img/superman.png" class="animated_superman" />
  </div>
</div>
However, the problem is that a browser will load all the content. It means that the browser will load (but not render) images, video, audio, and so on.
Get the content of the template as a string (by requesting it with AJAX or from <script type="x-template">). However, we might have some problems in working with the string. This can be dangerous with regard to XSS attacks; we just need to pay some more attention to this:
<script data-template="batman" type="x-template">
  <div>
    <p>Batman Head this time!</p>
    <img src="assets/img/superman.png" class="animated_superman" />
  </div>
</script>
Compiled templates such as Hogan.js (http://twitter.github.io/hogan.js/) work with strings. So, they have the same flaw as the patterns of the second type.
Templates do not have these disadvantages. We will work with the DOM and not with strings. We will then decide when to run the code.
In conclusion:
The <template> tag is not intended to replace the system of standardization. There are no tricky iteration operators or data bindings.
Its main feature is to be able to insert "live" content along with scripts.
Lastly, it does not require any libraries.
Shadow DOM
The Shadow DOM specification is a separate standard. A part of it is used for standard DOM elements, but it is also used to create web components. In this section, you will learn what Shadow DOM is and how to use it.
Shadow DOM is an internal DOM subtree that is separated from the external document. It can have its own IDs, styles, and so on. Most importantly, Shadow DOM is not visible outside of its scope without the use of special techniques. Hence, there are no conflicts with the external world; it's like an iframe.
Inside the browser
The Shadow DOM concept has been used for a long time inside browsers themselves. When the browser shows complex controls, such as an <input type="range"> slider or an <input type="date"> calendar, it constructs them out of the most ordinary styled <div>, <span>, and other elements. They are invisible at first glance, but they can easily be seen if the checkbox in Chrome DevTools is set to display the Shadow DOM.
In that DevTools view, #shadow-root is the Shadow DOM. Getting items from the Shadow DOM can only be done using special JavaScript calls or selectors. They are not children but a more powerful separation of content from the parent.
In the Shadow DOM of these built-in controls, you can see a useful pseudo attribute. It is nonstandard and is present solely for historical reasons. It can be styled via CSS with the help of subelements; for example, let's change the form input dates to red via the following code:
<style>
  input::-webkit-datetime-edit {
    background: red;
  }
</style>
<input type="date" />
Once again, make a note of the pseudo custom attribute. Speaking chronologically, in the beginning, browsers started to experiment with encapsulated DOM structures inside their scopes, and then Shadow DOM appeared, which allowed developers to do the same.
Now, let's work with the Shadow DOM from JavaScript, using the standard Shadow DOM API.
Creating a Shadow DOM
A Shadow DOM can be created for any element with the elem.createShadowRoot() call, as shown in the following code:
<div id="container">You know why?</div>
<script>
  var root = container.createShadowRoot();
  root.innerHTML = "Because I'm Batman!";
</script>
If you run this example, you will see that the contents of the #container element have disappeared somewhere, and it only shows "Because I'm Batman!". This is because the element has a Shadow DOM and ignores its previous content. Because of the creation of the Shadow DOM, the browser has shown only the Shadow DOM instead of the original content.
If you wish, you can place the element's ordinary content inside this Shadow DOM. To do this, you need to specify where it should go. This is done through an "insertion point", which is declared using the <content> tag; here's an example:
<div id="container">You know why?</div>
<script>
  var root = container.createShadowRoot();
  root.innerHTML = '<h1><content></content></h1><p>Winter is coming!</p>';
</script>
Now, you will see "You know why?" in the title followed by "Winter is coming!". Here's how this Shadow DOM example looks in Chrome DevTools.
The following are some important details about the Shadow DOM:
The <content> tag affects only the display, and it does not move the nodes physically. As you can see in the preceding example, the node "You know why?" remained inside div#container. It can even be obtained using container.firstElementChild.
Inside the <content> tag, we have the content of the element itself. In this example, that is the string "You know why?".
With the select attribute of the <content> element, you can specify a particular selector for the content you want to transfer; for example, <content select="p"></content> will transfer only paragraphs.
Inside the Shadow DOM, you can use the <content> tag multiple times with different values of select, thus indicating where to place which part of the original content. However, it is impossible to duplicate nodes. If a node is shown by one <content> tag, it will be skipped by the next one. For example, if there is a <content select="h3.title"> tag and then <content select="h3">, the first <content> will show the <h3> headers with the class title, while the second will show all the others, except for the ones already shown.
In the preceding DevTools example, the <content></content> tag is empty. If we add some content inside the <content> tag, it acts as a default value and will be shown only when there is no matching node to distribute. Check out the following code:
<div id="container">
  <h3>Once upon a time, in Westeros</h3>
  <strong>Ruled a king by name Joffrey and he's dead!</strong>
</div>
<script>
  var root = container.createShadowRoot();
  root.innerHTML = '<content select="h3"></content> <content select=".writer"> Jon Snow </content> <content></content>';
</script>
When you run the JS code, you will see the following:
The first <content select="h3"> tag will display the title.
The second <content select=".writer"> tag would show the writer's name, but as there is no element matching this selector, it takes the default value: Jon Snow.
The third <content> tag displays the rest of the original content of the element, without the <h3> header, which has already been shown.
Once again, note that <content> does not move nodes in the DOM physically; the change is only visual.
Root shadowRoot
After the creation of a root in the internal DOM, the tree will be available as container.shadowRoot.
It is a special object that supports the basic CSS query methods and is described in detail in ShadowRoot. You need to go through container.shadowRoot if you need to work with content in the Shadow DOM. You can create a new Shadow DOM tree from JavaScript; here's an example:
<div id="container">Polycasts</div>
<script>
  // create a new Shadow DOM tree for the element
  var root = container.createShadowRoot();
  root.innerHTML = '<h1><content></content></h1> <strong>Hey googlers! Let\'s code today.</strong>';
</script>
<script>
  // read data from the Shadow DOM for the element
  var root = container.shadowRoot;
  // Hey googlers! Let's code today.
  document.write('<br/><em>container: ' + root.querySelector('strong').innerHTML);
  // empty, because the <content> element has no physical child nodes
  document.write('<br/><em>content: ' + root.querySelector('content').innerHTML);
</script>
To finish up, Shadow DOM is a tool to create a separate DOM tree inside an element, which is not visible from outside without using special techniques:
A lot of browser components with complex structures have a Shadow DOM already.
You can create a Shadow DOM inside every element by calling elem.createShadowRoot(). Afterwards, it will be available as the elem.shadowRoot root, and you can access it inside the Shadow DOM. It is not available for custom elements.
Once the Shadow DOM appears in the element, the content of it is hidden. You can see just the Shadow DOM.
The <content> element moves the contents of the original item into the Shadow DOM only visually. However, it remains in the same place in the DOM structure.
Detailed specifications are given at http://w3c.github.io/webcomponents/spec/shadow/.
Summary
Using web components, you can easily create your web application by splitting it into parts/components.
Resources for Article:
Further resources on this subject:
Handling the DOM in Dart [article]
Manipulation of DOM Objects using Firebug [article]
jQuery 1.4 DOM Manipulation Methods for Style Properties and Class Attributes [article]

A capability model for microservices

Packt
17 Jun 2016
19 min read
In this article by Rajesh RV, the author of Spring Microservices, you will learn about the concepts of microservices. More than sticking to definitions, it is better to understand microservices by examining some common characteristics that are seen across many successful microservices implementations. Spring Boot is an ideal framework to implement microservices. In this article, we will examine how to implement microservices using Spring Boot with an example use case. Beyond services, we will have to be aware of the challenges around microservices implementation. This article will also talk about some of the common challenges around microservices. A successful microservices implementation has to have some set of common capabilities. In this article, we will establish a microservices capability model that can be used as a technology-neutral framework to implement large-scale microservices.
What are microservices?
Microservices is an architecture style used by many organizations today as a game changer to achieve a high degree of agility, speed of delivery, and scale. Microservices give us a way to develop more physically separated modular applications.
Microservices were not invented. Many organizations, such as Netflix, Amazon, and eBay, successfully used the divide-and-conquer technique to functionally partition their monolithic applications into smaller atomic units, each of which performs a single function. These organizations solved a number of prevailing issues they experienced with their monolithic applications. Following the success of these organizations, many other organizations started adopting this as a common pattern to refactor their monolithic applications. Later, evangelists termed this pattern microservices architecture.
Microservices originated from the idea of Hexagonal Architecture, coined by Alistair Cockburn. Hexagonal Architecture is also known as the Ports and Adapters pattern. Microservices is an architectural style, or an approach to building IT systems as a set of business capabilities that are autonomous, self-contained, and loosely coupled.
The preceding diagram depicts a traditional N-tier application architecture having a presentation layer, business layer, and database layer. Modules A, B, and C represent three different business capabilities. The layers in the diagram represent a separation of architecture concerns. Each layer holds all three business capabilities pertaining to that layer. The presentation layer has the web components of all three modules, the business layer has the business components of all three modules, and the database layer hosts the tables of all three modules. In most cases, layers can be physically distributed, whereas modules within a layer are hardwired.
Let's now examine a microservices-based architecture, as follows:
As we can note in the diagram, the boundaries are inverted in the microservices architecture. Each vertical slice represents a microservice. Each microservice has its own presentation layer, business layer, and database layer. Microservices are aligned toward business capabilities. By doing so, changes to one microservice do not impact others.
There is no standard for communication or transport mechanisms for microservices. In general, microservices communicate with each other using widely adopted lightweight protocols, such as HTTP and REST, or messaging protocols, such as JMS or AMQP. In specific cases, one might choose more optimized communication protocols, such as Thrift, ZeroMQ, Protocol Buffers, or Avro.
As microservices are more aligned to business capabilities and have independently manageable lifecycles, they are the ideal choice for enterprises embarking on DevOps and cloud. DevOps and cloud are two other facets of microservices.
Microservices are self-contained, independently deployable, and autonomous services that take full responsibility for a business capability and its execution. They bundle all dependencies, including library dependencies and execution environments such as web servers and containers or virtual machines that abstract physical resources. These self-contained services assume a single responsibility and are well enclosed within a bounded context.
Microservices – The honeycomb analogy
The honeycomb is an ideal analogy to represent the evolutionary microservices architecture. In the real world, bees build a honeycomb by aligning hexagonal wax cells. They start small, using different materials to build the cells. Construction is based on what is available at the time of building. Repetitive cells form a pattern and result in a strong fabric structure. Each cell in the honeycomb is independent but also integrated with other cells. By adding new cells, the honeycomb grows organically into a big, solid structure. The content inside each cell is abstracted and is not visible outside. Damage to one cell does not damage other cells, and bees can reconstruct these cells without impacting the overall honeycomb.
Characteristics of microservices
The microservices definition discussed at the beginning of this article is arbitrary. Evangelists and practitioners have strong but sometimes different opinions on microservices. There is no single, concrete, and universally accepted definition for microservices. However, all successful microservices implementations exhibit a number of common characteristics. Some of these characteristics are explained as follows:
Since microservices are more or less similar to a flavor of SOA, many of the service characteristics of SOA are applicable to microservices as well. In the microservices world, services are first-class citizens. Microservices expose service endpoints as APIs and abstract all their realization details. The APIs could be synchronous or asynchronous. HTTP/REST is the popular choice for APIs.
As microservices are autonomous and abstract everything behind service APIs, it is possible to have different architectures for different microservices. The internal implementation logic, architecture, and technologies, including programming language, database, quality of service mechanisms, and so on, are completely hidden behind the service API.
Well-designed microservices are aligned to a single business capability, so they perform only one function. As a result, one of the common characteristics we see in most of the implementations is microservices with smaller footprints.
Most microservices implementations are automated to the maximum extent possible, from development to production.
Most large-scale microservices implementations have a supporting ecosystem in place. The ecosystem's capabilities include DevOps processes, centralized log management, service registry, API gateways, extensive monitoring, service routing and flow control mechanisms, and so on.
Successful microservices implementations encapsulate logic and data within the service. This results in two unconventional situations: distributed data and logic, and decentralized governance.
A microservice example
The Customer Profile microservice example explained here demonstrates the implementation of a microservice and the interaction between different microservices. In this example, two microservices, Customer Profile and Customer Notification, will be developed.
As shown in the diagram, the Customer Profile microservice exposes methods to create, read, update, and delete a customer, and a registration service to register a customer. The registration process applies certain business logic, saves the customer profile, and sends a message to the CustomerNotification microservice. The CustomerNotification microservice accepts the message sent by the registration service and sends an e-mail message to the customer using an SMTP server. Asynchronous messaging is used to integrate CustomerProfile with the CustomerNotification service.
The customer microservices class domain model diagram is as shown here:
Implementing this Customer Profile microservice is not a big deal. The Spring framework, together with Spring Boot, provides all the necessary capabilities to implement this microservice without much hassle.
The key is CustomerController in the diagram, which exposes the REST endpoint for our microservice. It is also possible to use HATEOAS to explore the repository's REST services directly using the @RepositoryRestResource annotation.
The following code sample shows the Spring Boot main class called Application and the REST endpoint definition for the registration of a new customer:
@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

@RestController
class CustomerController {
    //other code here
    @RequestMapping(path = "/register", method = RequestMethod.POST)
    Customer register(@RequestBody Customer customer) {
        return customerComponent.register(customer);
    }
}
CustomerController invokes a component class, CustomerComponent. The component class/bean handles all the business logic. CustomerRepository is a Spring Data JPA repository defined to handle the persistence of the Customer entity.
The whole application will then be deployed as a Spring Boot application by building a standalone jar rather than using the conventional war file. Spring Boot encapsulates the server runtime along with the fat jar it produces. By default, it is an instance of the Tomcat server.
CustomerComponent, in addition to calling the CustomerRepository class, sends a message to the RabbitMQ queue, where the CustomerNotification component is listening. This can be easily achieved in Spring using the RabbitMessagingTemplate class, as shown in the following Sender implementation:
@Component
class CustomerComponent {
    //other code here

    Customer register(Customer customer) {
        customerRepository.save(customer);
        sender.send(customer.getEmail());
        return customer;
    }
}

@Component
@Lazy
class Sender {
    RabbitMessagingTemplate template;

    @Autowired
    Sender(RabbitMessagingTemplate template) {
        this.template = template;
    }

    @Bean
    Queue queue() {
        return new Queue("CustomerQ", false);
    }

    public void send(String message) {
        template.convertAndSend("CustomerQ", message);
    }
}
The receiver on the other side consumes the message using RabbitListener and sends out an e-mail using the JavaMailSender component.
Execute the following code:
@Component
class Receiver {
    @Autowired
    private JavaMailSender javaMailService;

    @Bean
    Queue queue() {
        return new Queue("CustomerQ", false);
    }

    @RabbitListener(queues = "CustomerQ")
    public void processMessage(String email) {
        System.out.println(email);
        SimpleMailMessage mailMessage = new SimpleMailMessage();
        mailMessage.setTo(email);
        mailMessage.setSubject("Registration");
        mailMessage.setText("Successfully Registered");
        javaMailService.send(mailMessage);
    }
}
In this case, CustomerNotification is our second Spring Boot microservice. Instead of a REST endpoint, it only exposes a message listener endpoint.
Microservices challenges
In the previous section, you learned about the right design decisions to be made and the trade-offs to be applied. In this section, we will review some of the challenges with microservices. Take a look at the following list:
Data islands: Microservices abstract their own local transactional store, which is used for their own transactional purposes. The type of store and the data structure will be optimized for the services offered by the microservice. This can lead to data islands and, hence, challenges around aggregating data from different transactional stores to derive meaningful information.
Logging and monitoring: Log files are a good piece of information for analysis and debugging. As each microservice is deployed independently, they emit separate logs, maybe to a local disk. This will result in fragmented logs. When we scale services across multiple machines, each service instance would produce separate log files. This makes it extremely difficult to debug and understand the behavior of the services through log mining.
Dependency management: Dependency management is one of the key issues in large microservices deployments. How do we ensure the chattiness between services is manageable? How do we identify and reduce the impact of a change? How do we know whether all the dependent services are up and running? How will the service behave if one of the dependent services is not available?
Organization's culture: One of the biggest challenges in microservices implementation is the organization's culture. An organization following waterfall development or heavyweight release management processes with infrequent release cycles is a challenge for microservices development. Insufficient automation is also a challenge for microservices deployments.
Governance challenges: Microservices impose decentralized governance, and this is quite in contrast to traditional SOA governance. Organizations may find it hard to cope with this change, and this could negatively impact microservices development. How can we know who is consuming a service? How do we ensure service reuse? How do we define which services are available in the organization? How do we ensure that the enterprise policies are enforced?
Operation overheads: Microservices deployments generally increase the number of deployable units and virtual machines (or containers). This adds significant management overheads and cost of operations. With a single application, a dedicated number of containers or virtual machines in an on-premises data center may not make much sense unless the business benefit is high. With many microservices, the number of configuration items (CIs) is too high, and the number of servers in which these CIs are deployed might also be unpredictable.
This makes it extremely difficult to manage data in a traditional Configuration Management Database (CMDB).
Testing microservices: Microservices also pose a challenge for the testability of services. In order to achieve full service functionality, one service may rely on another service, and this, in turn, may rely on another service, either synchronously or asynchronously. The issue is how we test an end-to-end service to evaluate its behavior. Dependent services may or may not be available at the time of testing.
Infrastructure provisioning: As briefly touched upon under operation overheads, manual deployment can severely challenge microservices rollouts. If a deployment has manual elements, the deployer or operational administrators should know the running topology, manually reroute traffic, and then deploy the applications one by one until all the services are upgraded. With many server instances running, this could lead to significant operational overheads. Moreover, the chance of error is high in this manual approach.
Beyond just services – The microservices capability model
Microservices are not as simple as the Customer Profile implementation we discussed earlier. This is specifically true when deploying hundreds or thousands of services. In many cases, an improper microservices implementation may lead to a number of challenges, as mentioned before. Any successful Internet-scale microservices deployment requires a number of additional surrounding capabilities. The following diagram depicts the microservices capability model:
The capability model is broadly classified into four areas, as follows:
Core capabilities, which are part of the microservices themselves
Supporting capabilities, which are software solutions supporting core microservice implementations
Infrastructure capabilities, which are infrastructure-level expectations for a successful microservices implementation
Governance capabilities, which are more of process, people, and reference information
Core capabilities
The core capabilities are explained here:
Service listeners (HTTP/Messaging): If microservices are enabled for HTTP-based service endpoints, then the HTTP listener is embedded within the microservice, thereby eliminating the need for any external application server. The HTTP listener will be started at the time of the application startup. If the microservice is based on asynchronous communication, then instead of an HTTP listener, a message listener will be started. Optionally, other protocols could also be considered. There may not be any listeners if the microservice is a scheduled service. Spring Boot and Spring Cloud Streams provide this capability.
Storage capability: Microservices have storage mechanisms to store state or transactional data pertaining to the business capability. This is optional, depending on the capabilities that are implemented. The storage could be either physical storage (an RDBMS such as MySQL, or NoSQL such as Hadoop, Cassandra, Neo4J, Elasticsearch, and so on), or it could be an in-memory store (a cache such as Ehcache, or data grids such as Hazelcast, Infinispan, and so on).
Business capability definition: This is the core of a microservice, in which the business logic is implemented. This could be implemented in any applicable language, such as Java, Scala, Clojure, Erlang, and so on. All required business logic to fulfil the function is embedded within the microservice itself.
Event sourcing: Microservices send out state changes to the external world without really worrying about the targeted consumers of these events. They could be consumed by other microservices, supporting services such as audit by replication, external applications, and so on. This will allow other microservices and applications to respond to state changes.
Service endpoints and communication protocols: These define the APIs for external consumers to consume. They could be synchronous or asynchronous endpoints. Synchronous endpoints could be based on REST/JSON or other protocols such as Avro, Thrift, Protocol Buffers, and so on. Asynchronous endpoints will be through Spring Cloud Streams backed by RabbitMQ, any other messaging server, or other messaging-style implementations such as ZeroMQ.
The API gateway: The API gateway provides a level of indirection by either proxying service endpoints or composing multiple service endpoints. The API gateway is also useful for policy enforcement. It may also provide real-time load balancing capabilities. There are many API gateways available in the market. Spring Cloud Zuul, Mashery, Apigee, and 3scale are some examples of API gateway providers.
User interfaces: Generally, user interfaces are also part of microservices for users to interact with the business capabilities realized by the microservices. These could be implemented in any technology and are channel and device agnostic.
Infrastructure capabilities
Certain infrastructure capabilities are required for a successful deployment and to manage large-scale microservices. When deploying microservices at scale, not having proper infrastructure capabilities can be challenging and can lead to failures:
Cloud: Microservices implementation is difficult in a traditional data center environment with long lead times to provision infrastructure. Even a large amount of dedicated infrastructure per microservice may not be very cost effective. Managing it internally in a data center may increase the cost of ownership and of operations. A cloud-like infrastructure is better for microservices deployment.
Containers or virtual machines: Managing large numbers of physical machines is not cost effective, and they are also hard to maintain. With physical machines, it is also hard to handle automatic fault tolerance. Virtualization is adopted by many organizations because of its ability to provide optimal use of physical resources, and it provides resource isolation. It also reduces the overheads in managing large physical infrastructure components. Containers are the next generation of virtual machines. VMware, Citrix, and others provide virtual machine technologies. Docker, Drawbridge, Rocket, and LXD are some containerizing technologies.
Cluster control and provisioning: Once we have a large number of containers or virtual machines, it is hard to manage and maintain them automatically. Cluster control tools provide a uniform operating environment on top of the containers and share the available capacity across multiple services. Apache Mesos and Kubernetes are examples of cluster control systems.
Application lifecycle management: Application lifecycle management tools help to invoke applications when a new container is launched, or kill the application when the container shuts down. Application lifecycle management allows you to script application deployments and releases. It automatically detects failure scenarios and responds to them, thereby ensuring the availability of the application.
This works in conjunction with the cluster control software. Marathon partially addresses this capability.
Supporting capabilities
Supporting capabilities are not directly linked to microservices, but they are essential for large-scale microservices development:
Software-defined load balancer: The load balancer should be smart enough to understand changes in the deployment topology and respond accordingly. This moves away from the traditional approach of configuring static IP addresses, domain aliases, or cluster addresses in the load balancer. When new servers are added to the environment, it should automatically detect this and include them in the logical cluster, avoiding any manual interaction. Similarly, if a service instance is unavailable, it should take it out of the load balancer. A combination of Ribbon, Eureka, and Zuul provides this capability in Spring Cloud Netflix.
Central log management: As explored earlier in this article, a capability is required to centralize all the logs emitted by service instances, with correlation IDs. This helps in debugging, identifying performance bottlenecks, and predictive analysis. The results of this could feed back into the lifecycle manager to take corrective actions.
Service registry: A service registry provides a runtime environment for services to automatically publish their availability at runtime. A registry will be a good source of information to understand the service topology at any point. Eureka from Spring Cloud, ZooKeeper, and etcd are some of the service registry tools available.
Security service: The distributed microservices ecosystem requires a central server to manage service security. This includes service authentication and token services. OAuth2-based services are widely used for microservices security. Spring Security and Spring Security OAuth are good candidates to build this capability.
Service configuration: All service configurations should be externalized, as discussed in the Twelve-Factor application principles. A central service for all configurations could be a good choice. The Spring Cloud Config server and Archaius are out-of-the-box configuration servers.
Testing tools (Anti-Fragile, RUM, and so on): Netflix uses Simian Army for antifragile testing. Mature services need consistent challenges to see the reliability of the services and how good the fallback mechanisms are. Simian Army components create various error scenarios to explore the behavior of the system under failure conditions.
Monitoring and dashboards: Microservices also require a strong monitoring mechanism. This monitoring is not just at the infrastructure level but also at the service level. Spring Cloud Netflix Turbine, the Hystrix dashboard, and others provide service-level information. End-to-end monitoring tools such as AppDynamics, New Relic, Dynatrace, and other tools such as StatsD, Sensu, and Spigo could add value in microservices monitoring.
Dependency and CI management: We also need tools to discover runtime topologies, find service dependencies, and manage configuration items (CIs). A graph-based CMDB is a more obvious fit to manage these scenarios.
Data lakes: As discussed earlier in this article, we need a mechanism to combine data stored in different microservices and perform near real-time analytics. Data lakes are a good choice to achieve this. Data ingestion tools such as Spring Cloud Data Flow, Flume, and Kafka are used to consume data. HDFS, Cassandra, and others are used to store data.
Reliable messaging: If the communication is asynchronous, we may need a reliable messaging infrastructure service, such as RabbitMQ or any other reliable messaging service. Cloud messaging, or messaging as a service, is a popular choice for Internet-scale message-based service endpoints.
Process and governance capabilities
The last pieces of the puzzle are the process and governance capabilities required for microservices, which are:
DevOps: The key to a successful implementation is adopting DevOps. DevOps complements microservices development by supporting agile development, high-velocity delivery, automation, and better change management.
DevOps tools: DevOps tools for agile development, continuous integration, continuous delivery, and continuous deployment are essential for a successful delivery of microservices. A lot of emphasis is required on automated, functional, and real user testing, as well as synthetic, integration, release, and performance testing.
Microservices repository: A microservices repository is where the versioned binaries of microservices are placed. This could be a simple Nexus repository or a container repository such as the Docker registry.
Microservice documentation: It is important to have all microservices properly documented. Swagger or API Blueprint are helpful in achieving good microservices documentation.
Reference architecture and libraries: Reference architecture provides a blueprint at the organization level to ensure that services are developed according to certain standards and guidelines in a consistent manner. Many of these could then be translated into a number of reusable libraries that enforce service development philosophies.
Summary
In this article, you learned the concepts and characteristics of microservices. We walked through the Customer Profile example to understand the concept of microservices better. We also examined some of the common challenges in large-scale microservices implementation. Finally, we established a microservices capability model that can be used to deliver successful Internet-scale microservices.