How-To Tutorials - Web Development

Creating Graphs and Charts

Packt
12 Apr 2016
17 min read
In this article by Bhushan Purushottam Joshi, author of the book Canvas Cookbook, we highlight data representation in the form of graphs and charts with the following topics:

- Drawing the axes
- Drawing a simple equation
- Drawing a sinusoidal wave
- Drawing a line graph
- Drawing a bar graph
- Drawing a pie chart

Drawing the axes

In our school days, we all used graph paper and drew a vertical line called the y axis and a horizontal line called the x axis. Here, in our first recipe, we only draw the axes and mark points on them at equal intervals. The output looks like this:

How to do it…

The HTML code is as follows:

```html
<html>
<head>
  <title>Axes</title>
  <script src="graphaxes.js"></script>
</head>
<body onload=init()>
  <canvas width="600" height="600" id="MyCanvasArea" style="border:2px solid blue;" tabindex="0">
    Canvas tag is not supported by your browser
  </canvas>
  <br>
  <form id="myform">
    Select your starting value
    <select name="startvalue" onclick="init()">
      <option value=-10>-10</option>
      <option value=-9>-9</option>
      <option value=-8>-8</option>
      <option value=-7>-7</option>
      <option value=-6>-6</option>
      <option value=-5>-5</option>
      <option value=-4>-4</option>
      <option value=-3>-3</option>
      <option value=-2>-2</option>
    </select>
  </form>
</body>
</html>
```

The JavaScript code is as follows:

```javascript
var xMin = -10; var yMin = -10; var xMax = 10; var yMax = 10;
var can; var ctx; var xaxisx; var xaxisy; var yaxisx; var yaxisy;
var interval; var length;

function init() {
  can = document.getElementById('MyCanvasArea');
  ctx = can.getContext('2d');
  ctx.clearRect(0, 0, can.width, can.height);
  var sel = document.forms['myform'].elements['startvalue'];
  xMin = sel.value;
  yMin = xMin;
  xMax = -xMin;
  yMax = -xMin;
  drawXAxis();
  drawYAxis();
}

function drawXAxis() {
  // x axis drawing and marking in the same function
  xaxisx = 10;
  xaxisy = can.height / 2;
  ctx.beginPath();
  ctx.lineWidth = 2;
  ctx.strokeStyle = "black";
  ctx.moveTo(xaxisx, xaxisy);
  xaxisx = can.width - 10;
  ctx.lineTo(xaxisx, xaxisy);
  ctx.stroke();
  ctx.closePath();
  length = xaxisx - 10;
  noofxfragments = xMax - xMin;
  interval = length / noofxfragments;
  // mark the x-axis
  xaxisx = 10;
  ctx.beginPath();
  ctx.font = "bold 10pt Arial";
  for (var i = xMin; i <= xMax; i++) {
    ctx.lineWidth = 0.15;
    ctx.strokeStyle = "grey";
    ctx.fillText(i, xaxisx - 5, xaxisy - 10);
    ctx.moveTo(xaxisx, xaxisy - (can.width / 2));
    ctx.lineTo(xaxisx, (xaxisy + (can.width / 2)));
    ctx.stroke();
    xaxisx = Math.round(xaxisx + interval);
  }
  ctx.closePath();
}

function drawYAxis() {
  yaxisx = can.width / 2;
  yaxisy = can.height - 10;
  ctx.beginPath();
  ctx.lineWidth = 2;
  ctx.strokeStyle = "black";
  ctx.moveTo(yaxisx, yaxisy);
  yaxisy = 10;
  ctx.lineTo(yaxisx, yaxisy);
  ctx.stroke();
  ctx.closePath();
  yaxisy = can.height - 10;
  length = yaxisy - 10;
  noofxfragments = yMax - yMin;
  interval = length / noofxfragments;
  // mark the y-axis
  ctx.beginPath();
  ctx.font = "bold 10pt Arial";
  for (var i = yMin; i <= yMax; i++) {
    ctx.lineWidth = 0.15;
    ctx.strokeStyle = "grey";
    ctx.fillText(i, yaxisx - 20, yaxisy + 5);
    ctx.moveTo(yaxisx - (can.height / 2), yaxisy);
    ctx.lineTo((yaxisx + (can.height / 2)), yaxisy);
    ctx.stroke();
    yaxisy = Math.round(yaxisy - interval);
  }
  ctx.closePath();
}
```

How it works...

There are two functions in the JavaScript code, viz. drawXAxis and drawYAxis. A canvas is not calibrated the way a graph paper is, so a simple calculation is used to calibrate it. Both functions have two parts: one part draws the axis and the other marks the axis at regular intervals. These parts are delimited by ctx.beginPath() and ctx.closePath(). In the first part, the canvas width and height are used to draw the axis.
In the second part, we do some calculation. The length of the axis is divided by the number of markers to get the interval. If the starting point is -3, then we have -3, -2, -1, 0, 1, 2, and 3 on the axis, which makes 7 marks and 6 parts. The interval is used to generate the x and y coordinate values for the starting point and to plot the markers.

There is more...

Try replacing the following lines:

```javascript
ctx.moveTo(xaxisx, xaxisy - (can.width / 2));    // in drawXAxis()
ctx.lineTo(xaxisx, (xaxisy + (can.width / 2)));  // in drawXAxis()
ctx.moveTo(yaxisx - (can.height / 2), yaxisy);   // in drawYAxis()
ctx.lineTo((yaxisx + (can.height / 2)), yaxisy); // in drawYAxis()
```

with the following:

```javascript
ctx.moveTo(xaxisx, xaxisy - 5);
ctx.lineTo(xaxisx, (xaxisy + 5));
ctx.moveTo(yaxisx - 5, yaxisy);
ctx.lineTo((yaxisx + 5), yaxisy);
```

Also, instead of grey for the markers, you can use red.

Drawing a simple equation

This recipe is a simple line drawing on a graph using an equation. The output looks like this:

How to do it…

The HTML code is as follows:

```html
<html>
<head>
  <title>Equation</title>
  <script src="graphaxes.js"></script>
  <script src="plotequation.js"></script>
</head>
<body onload=init()>
  <canvas width="600" height="600" id="MyCanvasArea" style="border:2px solid blue;" tabindex="0">
    Canvas tag is not supported by your browser
  </canvas>
  <br>
  <form id="myform">
    Select your starting value
    <select name="startvalue" onclick="init()">
      <option value=-10>-10</option>
      <option value=-9>-9</option>
      <option value=-8>-8</option>
      <option value=-7>-7</option>
      <option value=-6>-6</option>
      <option value=-5>-5</option>
      <option value=-4>-4</option>
      <option value=-3>-3</option>
      <option value=-2>-2</option>
    </select>
    <br>
    Enter the coefficient (c) for the equation y=cx
    <input type="text" size=5 name="coef">
    <input type="button" value="Click to plot" onclick="plotEquation()">
    <input type="button" value="Reset" onclick="init()">
  </form>
</body>
</html>
```

The JavaScript code is as follows:

```javascript
function plotEquation() {
  var coef = document.forms['myform'].elements['coef'];
  var s = document.forms['myform'].elements['startvalue'];
  var c = coef.value;
  var x = parseInt(s.value);
  var xPos;
  var yPos;
  while (x <= xMax) {
    y = c * x;
    xZero = can.width / 2;
    yZero = can.height / 2;
    if (x != 0)
      xPos = xZero + x * interval;
    else
      xPos = xZero - x * interval;
    if (y != 0)
      yPos = yZero - y * interval;
    else
      yPos = yZero + y * interval;
    ctx.beginPath();
    ctx.fillStyle = "blue";
    ctx.arc(xPos, yPos, 5, Math.PI / 180, 360 * Math.PI / 180, false);
    ctx.fill();
    ctx.closePath();
    if (x < xMax) {
      ctx.beginPath();
      ctx.lineWidth = 3;
      ctx.strokeStyle = "green";
      ctx.moveTo(xPos, yPos);
      nextX = x + 1;
      nextY = c * nextX;
      if (nextX != 0)
        nextXPos = xZero + nextX * interval;
      else
        nextXPos = xZero - nextX * interval;
      if (nextY != 0)
        nextYPos = yZero - nextY * interval;
      else
        nextYPos = yZero + nextY * interval;
      ctx.lineTo(nextXPos, nextYPos);
      ctx.stroke();
      ctx.closePath();
    }
    x = x + 1;
  }
}
```

How it works...

We use one more script in this recipe. Two scripts are referred to by the HTML file: one is the previous recipe, named graphaxes.js, and the other is the current one, named plotequation.js. JavaScript allows you to use variables created in one file in another, and this is done in this new recipe. You already know how the axes are drawn. This recipe plots the equation y=cx, where c is a coefficient entered by the user. We take the minimum x value from the drop-down list and calculate the values of y in a loop. We plot the current and the next coordinate and draw a line between the two. This continues until we reach the maximum value of x. Remember that the maximum and minimum values of x and y are the same.

There is more...
Try the following: input positive as well as negative values for the coefficient.

Drawing a sinusoidal wave

This recipe also uses the axes drawing from the previous recipe. The output looks like this:

How to do it…

The HTML code is as follows:

```html
<html>
<head>
  <title>Equation</title>
  <script src="graphaxes.js"></script>
  <script src="plotSineEquation.js"></script>
</head>
<body onload=init()>
  <canvas width="600" height="600" id="MyCanvasArea" style="border:2px solid blue;" tabindex="0">
    Canvas tag is not supported by your browser
  </canvas>
  <br>
  <form id="myform">
    Select your starting value
    <select name="startvalue" onclick="init()">
      <option value=-10>-10</option>
      <option value=-9>-9</option>
      <option value=-8>-8</option>
      <option value=-7>-7</option>
      <option value=-6>-6</option>
      <option value=-5>-5</option>
      <option value=-4>-4</option>
      <option value=-3>-3</option>
      <option value=-2>-2</option>
    </select>
    <br>
    <input type="button" value="Click to plot a sine wave" onclick="plotEquation()">
    <input type="button" value="Reset" onclick="init()">
  </form>
</body>
</html>
```

The JavaScript code is as follows:

```javascript
function plotEquation() {
  var s = document.forms['myform'].elements['startvalue'];
  var x = parseInt(s.value);
  var xPos;
  var yPos;
  var noofintervals = Math.round((2 * Math.abs(x) + 1) / 2);
  xPos = 10;
  yPos = can.height / 2;
  xEnd = xPos + (2 * interval);
  yEnd = yPos;
  xCtrl1 = xPos + Math.ceil(interval / 2);
  yCtrl1 = yPos - 200;
  xCtrl2 = xEnd - Math.ceil(interval / 2);
  yCtrl2 = yPos + 200;
  drawBezierCurve(ctx, xPos, yPos, xCtrl1, yCtrl1, xCtrl2, yCtrl2, xEnd, yEnd, "red", 2);
  for (var i = 1; i < noofintervals; i++) {
    xPos = xEnd;
    xEnd = xPos + (2 * interval);
    xCtrl1 = xPos + Math.floor(interval / 2) + 15;
    xCtrl2 = xEnd - Math.floor(interval / 2) - 15;
    drawBezierCurve(ctx, xPos, yPos, xCtrl1, yCtrl1, xCtrl2, yCtrl2, xEnd, yEnd, "red", 2);
  }
}

function drawBezierCurve(ctx, xstart, ystart, xctrl1, yctrl1, xctrl2, yctrl2, xend, yend, color, width) {
  ctx.strokeStyle = color;
  ctx.lineWidth = width;
  ctx.beginPath();
  ctx.moveTo(xstart, ystart);
  ctx.bezierCurveTo(xctrl1, yctrl1, xctrl2, yctrl2, xend, yend);
  ctx.stroke();
}
```

How it works...

We use a Bezier curve to draw the sine wave along the x axis. A bit of calculation, using the interval between two points that encompasses one phase, is done to achieve this. The number of intervals is calculated in the following statement, where x is the value in the drop-down list:

```javascript
var noofintervals = Math.round((2 * Math.abs(x) + 1) / 2);
```

One phase is drawn before the for loop begins; the subsequent phases are drawn inside the loop. The start and end x coordinates change on every iteration: the ending coordinate of one sine wave becomes the first coordinate of the next.

Drawing a line graph

Graphs are always informative.
The basic graphical representation can be a line graph, which is demonstrated here:

How to do it…

The HTML code is as follows:

```html
<html>
<head>
  <title>A simple Line chart</title>
  <script src="linechart.js"></script>
</head>
<body onload=init()>
  <h1>Your WhatsApp Usage</h1>
  <canvas width="600" height="500" id="MyCanvasArea" style="border:2px solid blue;" tabindex="0">
    Canvas tag is not supported by your browser
  </canvas>
</body>
</html>
```

The JavaScript code is as follows:

```javascript
function init() {
  var gCanvas = document.getElementById('MyCanvasArea');
  // Ensure that the element is available within the DOM
  var ctx = gCanvas.getContext('2d');

  // Line chart data
  var data = new Array(7);
  data[0] = "1,130";
  data[1] = "2,140";
  data[2] = "3,150";
  data[3] = "4,140";
  data[4] = "5,180";
  data[5] = "6,240";
  data[6] = "7,340";

  // Draw the line graph
  drawLineGraph(ctx, data, 70, 100, (gCanvas.height - 40), 50);
}

function drawLineGraph(ctx, data, startX, barWidth, chartHeight, markDataIncrementsIn) {
  // Draw the x axis
  ctx.lineWidth = "3.0";
  var max = 0;
  var startY = chartHeight;
  drawLine(ctx, startX, startY, startX, 1);
  drawLine(ctx, startX, startY, 490, startY);
  for (var i = 0, m = 0; i < data.length; i++, m += 60) {
    ctx.lineWidth = 0.3;
    drawLine(ctx, startX, startY - m, 490, startY - m);
    ctx.font = "bold 12pt Arial";
    ctx.fillText(m, startX - 30, startY - m);
  }
  for (var i = 0, m = 0; i < data.length; i++, m += 61) {
    ctx.lineWidth = 0.3;
    drawLine(ctx, startX + m, startY, startX + m, 1);
    var values = data[i].split(",");
    var day;
    switch (values[0]) {
      case "1": day = "MO"; break;
      case "2": day = "TU"; break;
      case "3": day = "WE"; break;
      case "4": day = "TH"; break;
      case "5": day = "FR"; break;
      case "6": day = "SA"; break;
      case "7": day = "SU"; break;
    }
    ctx.fillText(day, startX + m - 10, startY + 20);
  }
  // plot the points and draw lines between them
  var startAngle = 0 * (Math.PI / 180);
  var endAngle = 360 * (Math.PI / 180);
  var newValues;
  for (var i = 0, m = 0; i < data.length; i++, m += 60) {
    ctx.beginPath();
    var values = data[i].split(",");
    var xPos = startX + parseInt(values[0]) + m;
    var yPos = chartHeight - parseInt(values[1]);
    ctx.arc(xPos, yPos, 5, startAngle, endAngle, false);
    ctx.fillStyle = "red";
    ctx.fill();
    ctx.fillStyle = "blue";
    ctx.fillText(values[1], xPos, yPos);
    ctx.stroke();
    ctx.closePath();
    if (i > 0) {
      ctx.strokeStyle = "green";
      ctx.lineWidth = 1.5;
      ctx.moveTo(oldxPos, oldyPos);
      ctx.lineTo(xPos, yPos);
      ctx.stroke();
    }
    oldxPos = xPos;
    oldyPos = yPos;
  }
}

function drawLine(ctx, startx, starty, endx, endy) {
  ctx.beginPath();
  ctx.moveTo(startx, starty);
  ctx.lineTo(endx, endy);
  ctx.closePath();
  ctx.stroke();
}
```

How it works...

All the graphs in the subsequent recipes also work on an array named data. Each array element has two parts: one indicates the day, and the other indicates the usage in minutes. Further down in the code, a split function splits each element into these two independent values. The coordinates are calculated using a parameter named m, which is used for the value of the x coordinate, while the value in minutes and the chart height are used to calculate the y coordinate. Inside the loop, two coordinates are used to draw a line: one in the moveTo() method and the other in the lineTo() method. However, no line is drawn in the first iteration, for the simple reason that we cannot draw a line with a single coordinate. From the next iteration onwards, we have two coordinates, and the line is drawn between the prior coordinates (oldxPos, oldyPos) and the current ones.

There is more...

Use your own data.

Drawing a bar graph

Another typical representation, which is widely used, is the bar graph.
Here is an output of this recipe:

How to do it…

The HTML code is as follows:

```html
<html>
<head>
  <title>A simple Bar chart</title>
  <script src="bargraph.js"></script>
</head>
<body onload=init()>
  <h1>Your WhatsApp Usage</h1>
  <canvas width="600" height="500" id="MyCanvasArea" style="border:2px solid blue;" tabindex="0">
    Canvas tag is not supported by your browser
  </canvas>
</body>
</html>
```

The JavaScript code is as follows:

```javascript
function init() {
  var gCanvas = document.getElementById('MyCanvasArea');
  // Ensure that the element is available within the DOM
  var ctx = gCanvas.getContext('2d');

  // Bar chart data
  var data = new Array(7);
  data[0] = "MON,130";
  data[1] = "TUE,140";
  data[2] = "WED,150";
  data[3] = "THU,140";
  data[4] = "FRI,170";
  data[5] = "SAT,250";
  data[6] = "SUN,340";

  // Draw the bar chart
  drawBarChart(ctx, data, 70, 100, (gCanvas.height - 40), 50);
}

function drawBarChart(ctx, data, startX, barWidth, chartHeight, markDataIncrementsIn) {
  // Draw the x and y axes
  ctx.lineWidth = "3.0";
  var startY = chartHeight;
  //drawLine(ctx, startX, startY, startX, 30);
  drawBarGraph(ctx, startX, startY, startX, 30, data, chartHeight);
  drawLine(ctx, startX, startY, 570, startY);
}

function drawLine(ctx, startx, starty, endx, endy) {
  ctx.beginPath();
  ctx.moveTo(startx, starty);
  ctx.lineTo(endx, endy);
  ctx.closePath();
  ctx.stroke();
}

function drawBarGraph(ctx, startx, starty, endx, endy, data, chartHeight) {
  ctx.beginPath();
  ctx.moveTo(startx, starty);
  ctx.lineTo(endx, endy);
  ctx.closePath();
  ctx.stroke();
  var max = 0;
  // code to label the x-axis
  for (i = 0; i < data.length; i++) {
    var xValues = data[i].split(",");
    var xName = xValues[0];
    ctx.textAlign = "left";
    ctx.fillStyle = "#b90000";
    ctx.font = "bold 15px Arial";
    ctx.fillText(xName, startx + i * 50 + i * 20, chartHeight + 15, 200);
    var height = parseInt(xValues[1]);
    if (parseInt(height) > parseInt(max))
      max = height;
    var color = '#' + Math.floor(Math.random() * 16777215).toString(16);
    drawBar(ctx, startx + i * 50 + i * 20, (chartHeight - height), height, 50, color);
    ctx.fillText(Math.round(height / 60) + " hrs", startx + i * 50 + i * 20, (chartHeight - height - 20), 200);
  }
  // title the x-axis
  ctx.beginPath();
  ctx.fillStyle = "black";
  ctx.font = "bolder 20pt Arial";
  ctx.fillText("<------------Weekdays------------>", startx + 150, chartHeight + 35, 200);
  ctx.closePath();
  // y-axis labelling
  var ylabels = Math.ceil(max / 60);
  var yvalue = 0;
  ctx.font = "bold 15pt Arial";
  for (i = 0; i <= ylabels; i++) {
    ctx.textAlign = "right";
    ctx.fillText(yvalue, startx - 5, (chartHeight - yvalue), 50);
    yvalue += 60;
  }
  // title the y-axis
  ctx.beginPath();
  ctx.font = 'bolder 20pt Arial';
  ctx.save();
  ctx.translate(20, 70);
  ctx.rotate(-0.5 * Math.PI);
  var rText = 'Rotated Text';
  ctx.fillText("<--------Time in minutes--------->", 0, 0);
  ctx.closePath();
  ctx.restore();
}

function drawBar(ctx, xPos, yPos, height, width, color) {
  ctx.beginPath();
  ctx.fillStyle = color;
  ctx.rect(xPos, yPos, width, height);
  ctx.closePath();
  ctx.stroke();
  ctx.fill();
}
```

How it works...

The processing is similar to that of the line graph, except that here rectangles are drawn to represent bars. Also, the numbers 1, 2, 3, and so on are represented as days of the week (for example, 1 means Monday). This line in the code is used to generate random colors for the bars (the number 16777215 is the decimal value of #FFFFFF):

```javascript
var color = '#' + Math.floor(Math.random() * 16777215).toString(16);
```

Note that the value of the control variable i is not directly used for drawing a bar. Rather, i is manipulated to get the correct coordinates on the canvas, and then the bar is drawn using the drawBar() function:
```javascript
drawBar(ctx, startx + i * 50 + i * 20, (chartHeight - height), height, 50, color);
```

There is more...

Use your own data and change the colors.

Drawing a pie chart

A share can be easily represented in the form of a pie chart. This recipe demonstrates a pie chart:

How to do it…

The HTML code is as follows:

```html
<html>
<head>
  <title>A simple Pie chart</title>
  <script src="piechart.js"></script>
</head>
<body onload=init()>
  <h1>Your WhatsApp Usage</h1>
  <canvas width="600" height="500" id="MyCanvasArea" style="border:2px solid blue;" tabindex="0">
    Canvas tag is not supported by your browser
  </canvas>
</body>
</html>
```

The JavaScript code is as follows:

```javascript
function init() {
  var can = document.getElementById('MyCanvasArea');
  var ctx = can.getContext('2d');
  var data = [130, 140, 150, 140, 170, 250, 340];
  var colors = ["crimson", "blue", "yellow", "navy", "aqua", "purple", "red"];
  var names = ["MON", "TUE", "WED", "THU", "FRI", "SAT", "SUN"];
  var centerX = can.width / 2;
  var centerY = can.height / 2;
  //var center = [can.width / 2, can.height / 2];
  var radius = (Math.min(can.width, can.height) / 2) - 50;
  var startAngle = 0, total = 0;
  for (var i in data) {
    total += data[i];
  }
  var incrFactor = -(centerX - centerX / 2);
  var angle = 0;
  for (var i = 0; i < data.length; i++) {
    ctx.fillStyle = colors[i];
    ctx.beginPath();
    ctx.moveTo(centerX, centerY);
    ctx.arc(centerX, centerY, radius, startAngle, startAngle + (Math.PI * 2 * (data[i] / total)), false);
    ctx.lineTo(centerX, centerY);
    ctx.rect(centerX + incrFactor, 20, 20, 10);
    ctx.fill();
    ctx.fillStyle = "black";
    ctx.font = "bold 10pt Arial";
    ctx.fillText(names[i], centerX + incrFactor, 15);
    ctx.save();
    ctx.translate(centerX, centerY);
    ctx.rotate(startAngle);
    var dx = Math.floor(can.width * 0.5) - 100;
    var dy = Math.floor(can.height * 0.20);
    ctx.fillText(names[i], dx, dy);
    ctx.restore();
    startAngle += Math.PI * 2 * (data[i] / total);
    incrFactor += 50;
  }
}
```

How it works...

Again, the data here is the same, but instead of bars, we use arcs. The trick is done by changing the end angle as per the available data. Translation and rotation help in naming the weekdays on the pie chart.

There is more...

Use your own data and change the colors to get acquainted.

Summary

Managers make decisions based on data representations. The data is usually represented in report form or in the form of graphs and charts; the latter plays a major role in providing a quick review of the data. In this article, we represented dummy data in the form of graphs and charts.

Resources for Article:

Further resources on this subject:
- HTML5 Canvas [article]
- HTML5: Developing Rich Media Applications using Canvas [article]
- Building the Untangle Game with Canvas and the Drawing API [article]

Advanced React

Packt
12 Apr 2016
7 min read
In this article by Sven A. Robbestad, author of ReactJS Blueprints, we will cover the following topics:

- Understanding Webpack
- Adding Redux to your ReactJS app
- Understanding Redux reducers, actions, and the store

Introduction

Understanding the tools you use and the libraries you include in your web app is important for making an efficient web application. In this article, we'll look at some of the difficult parts of modern web development with ReactJS, including Webpack and Redux.

Webpack is an important tool for modern web developers. It is a module bundler and works by bundling all modules and files within the context of your base folder. Any file within this context is considered a module, and an attempt will be made to bundle it. The only exceptions are files placed in the designated vendor folders, which by default are node_modules and web_modules; files in these folders must be explicitly required in your code to be bundled.

Redux is an implementation of the Flux pattern. Flux describes how data should flow through your app. Since the birth of the pattern, there's been an explosion in the number of libraries that attempt to execute on the idea. It's safe to say that while many have enjoyed moderate success, none has been as successful as Redux.

Configuring Webpack

You can configure Webpack to do almost anything you want, including replacing the code currently loaded in your browser with updated code, while preserving the state of the app. Webpack is configured by writing a special configuration file, usually called webpack.config.js. In this file, you specify the entry and output parameters, plugins, module loaders, and various other configuration parameters. A very basic config file looks like this:

```javascript
var webpack = require('webpack');
module.exports = {
  entry: [
    './entry'
  ],
  output: {
    path: './',
    filename: 'bundle.js'
  }
};
```

It's executed by issuing this command from the command line:

```
webpack --config webpack.config.js
```

You can even drop the config parameter, as Webpack will automatically look for the presence of webpack.config.js if it's not specified.

In order to convert the source files before bundling, you use module loaders. Adding this section to the Webpack config file will ensure that the babel-loader module converts ECMAScript 2015 code to ECMAScript 5:

```javascript
module: {
  loaders: [{
    test: /\.jsx?$/,
    loader: 'babel-loader',
    exclude: /node_modules/,
    query: {
      presets: ['es2015', 'react']
    }
  }]
}
```

The first option (required), test, is a regex match that tells Webpack which files the loader operates on. The regex tells Webpack to look for files with a period followed by the letters js and then an optional letter (?) before the end of the filename ($). This makes sure that the loader reads both plain JavaScript files and JSX files. The second option (required), loader, is the name of the package that we'll use to convert the code. The third option (optional), exclude, is another regex used to explicitly ignore a set of folders or files. The final option (optional), query, contains special configuration options for Babel. The recommended way to set these is actually in a special file called .babelrc, which is picked up automatically by Babel when transpiling files.

Adding Redux to your ReactJS app

When ReactJS was first introduced to the public in late 2013/early 2014, you would often hear it mentioned together with functional programming.
However, there's no inherent requirement to write functional code when writing ReactJS code, and JavaScript itself, being a multi-paradigm language, is neither strictly functional nor strictly imperative. Redux chose the functional approach, and it's quickly gaining traction as the superior Flux implementation. There are a number of benefits to choosing a functional approach, which are as follows:

- No side effects allowed, that is, the operation is stateless
- Always returns the same output for a given input
- Ideal for creating recursive operations
- Ideal for parallel execution
- Easy to establish the single source of truth
- Easy to debug
- Easy to persist the store state for a faster development cycle
- Easy to create functionality such as undo and redo
- Easy to inject the store state for server rendering

The concept of stateless operations is possibly the number one benefit, as it makes it very easy to reason about the state of your application. This is, however, not the idiomatic Reflux approach, because that library is actually designed to create many stores and have the children listen to changes separately. Application state is the single most difficult part of any application, and every implementation of Flux has attempted to solve this problem. Redux solves it by not actually doing Flux at all; rather, it is an amalgamation of the ideas of Flux and the functional programming language Elm.

There are three parts to Redux: actions, reducers, and the global store.

The store

In Redux, there is only one global store. It is an object that holds the state of your entire application. You create a store by passing your root reducing function (or reducer, for short) to a method called createStore. Rather than creating more stores, you use a concept called reducer composition to split data handling logic; you then use a function called combineReducers to create a single root reducer.

The createStore function is derived from Redux and is usually called once, in the root of your app (or your store file). It is then passed on to your app and propagated to the app's children. The only way to change the state of the store is to dispatch an action on it. This is not the same as a Flux dispatcher, because Redux doesn't have one. You can also subscribe to changes from the store in order to update your components when the store changes state.

Actions

An action is an object that represents an intention to change the state. It must have a type field that indicates what kind of action is being performed. Action types can be defined as constants and imported from other modules. Apart from this requirement, the structure of the object is entirely up to you. A basic action object can look like this:

```javascript
{
  type: 'UPDATE',
  payload: {
    value: "some value"
  }
}
```

The payload property is optional and can be an object, as we saw earlier, or any other valid JavaScript type, such as a function or a primitive.

Reducers

A reducer is a function that accepts an accumulation and a value and returns a new accumulation. In other words, it returns the next state based on the previous state and an action. It must be a pure function, free of side effects, and it must not mutate the existing state. For smaller apps, it's okay to start with a single reducer, and as your app grows, you split off smaller reducers that manage specific parts of your state tree. This is what's called reducer composition and is the fundamental pattern of building apps with Redux; a minimal sketch is shown below.
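The following sketch ties the three parts together (it is an illustration, not code from the book; the slice reducers user and todos are invented for the example):

```javascript
import { createStore, combineReducers } from 'redux';

// Hypothetical slice reducers, each managing one part of the state tree
function user(state = {}, action) {
  return action.type === 'UPDATE' ? Object.assign({}, state, action.payload) : state;
}
function todos(state = [], action) {
  return action.type === 'ADD_TODO' ? state.concat(action.payload) : state;
}

// combineReducers builds the single root reducer out of the slice reducers,
// and createStore is called once with it in the root of the app
const store = createStore(combineReducers({ user, todos }));

// Components can subscribe to state changes...
store.subscribe(() => console.log(store.getState()));

// ...and the only way to change the state is to dispatch an action
store.dispatch({ type: 'UPDATE', payload: { value: 'some value' } });
```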
Because reducers are just functions, you can control the order in which they are called, pass additional data, or even make reusable reducers for common tasks such as pagination. It's okay to have multiple reducers. In fact, it's encouraged.

Summary

In this article, you learned about Webpack and how to configure it. You also learned about adding Redux to your ReactJS app, and about Redux's reducers, actions, and the store.

Resources for Article:

Further resources on this subject:
- Getting Started with React [article]
- Reactive Programming and the Flux Architecture [article]
- Create Your First React Element [article]

Mastering of Fundamentals

Packt
08 Apr 2016
10 min read
In this article by Piotr Sikora, author of the book Professional CSS3, you will master the box model, troubleshooting of floating elements, positioning, and display types. After reading it, you will be more aware of the foundations of HTML and CSS. In this article, we shall cover the following topics:

- The traditional box model
- Basics of floating elements
- The foundation of positioning elements on a webpage
- Display types

Traditional box model

Understanding the box model is the foundation of CSS theory. You have to know the impact of width, height, margin, and borders on the size of a box, and how you can manage them to match an element on a website. Interview questions for coders and frontend developers are often based on box model theory. Let's begin this important lesson, which will be the foundation for every subject that follows.

Padding/margin/border/width/height

The ingredients of the final width and height of a box are:

- Width
- Height
- Margins
- Paddings
- Borders

For a better understanding of the box model, here is the image from the Chrome inspector. Analyzing it, you can see that the box model has four edges:

- Content edge
- Padding edge
- Border edge
- Margin edge

The width and height of the box are based on the width/height of the content, the padding, the border, and the margin. The width and height of the content in a box with the default box-sizing are controlled by these properties:

- min-width
- max-width
- width
- min-height
- max-height
- height

An important thing about the box model is how background properties behave: the background is included in the content section and in the padding section (up to the padding edge). Let's take some code and try to point out all the elements of the box model.

HTML:

```html
<div class="element">
  Lorem ipsum dolor sit amet consecteur
</div>
```

CSS:

```css
.element {
  background: pink;
  padding: 10px;
  margin: 20px;
  width: 100px;
  height: 100px;
  border: solid 10px black;
}
```

You can view the result in the browser and in the inspector of Google Chrome, and check how the areas of the box model are placed in this specific example.

A basic task for an interviewed frontend developer is this: a box/element is described with the following styles:

```css
.box {
  width: 100px;
  height: 200px;
  border: 10px solid #000;
  margin: 20px;
  padding: 30px;
}
```

Please count the final width and height (the real space that is needed for this element). So, as you can see, the problem is to count the width and height of the box.
Ingredients of the width:

- Width
- Border left
- Border right
- Padding left
- Padding right

Additionally, for the width of the space taken by the box:

- Margin left
- Margin right

Ingredients of the height:

- Height
- Border top
- Border bottom
- Padding top
- Padding bottom

Additionally, for the height of the space taken by the box:

- Margin top
- Margin bottom

So, when you sum up the element, you get these equations:

Box width = width + borderLeft + borderRight + paddingLeft + paddingRight
Box width = 100px + 10px + 10px + 30px + 30px = 180px

Space width = width + borderLeft + borderRight + paddingLeft + paddingRight + marginLeft + marginRight
Space width = 100px + 10px + 10px + 30px + 30px + 20px + 20px = 220px

Box height = height + borderTop + borderBottom + paddingTop + paddingBottom
Box height = 200px + 10px + 10px + 30px + 30px = 280px

Space height = height + borderTop + borderBottom + paddingTop + paddingBottom + marginTop + marginBottom
Space height = 200px + 10px + 10px + 30px + 30px + 20px + 20px = 320px

You can verify this in a real browser.

Omitting problems with the traditional box model (box-sizing)

The basic theory of the box model is pretty hard to learn. You need to remember all the components of width/height, even when you set the width and height explicitly. The hardest part for beginners is understanding padding, which shouldn't be counted as a component of width and height: it should be inside the box, without impacting those values. To change this behavior, CSS3 (supported since Internet Explorer 8) brings box-sizing into the picture. You can set the value:

```css
box-sizing: border-box;
```

What does this give you? The counting of box width and height becomes easier because the padding and border are inside the box. So, taking our previous class:

```css
.box {
  width: 100px;
  height: 200px;
  border: 10px solid #000;
  margin: 20px;
  padding: 30px;
}
```

we can count the width and height easily:

Width = 100px
Height = 200px

Additionally, the space taken by the box:

Space width = 140px (because of the 20px margins on both sides: left and right)
Space height = 240px (because of the 20px margins on both sides: top and bottom)

You can check this in Chrome as well. So, if you don't want to repeat all the problems of the traditional box model, you should use border-box globally for all elements:

```css
* {
  box-sizing: border-box;
}
```

Of course, this is not recommended in old projects you inherit, for example, from a new client who needs some small changes: adding the preceding code there can do more harm than good, because every element, previously based on the traditional box model, will change size due to the inheritance of this property. But for all new projects, you should use it.

Floating elements

Floating boxes are the most used feature in modern layouts. The theory of floating boxes is still in use, especially in grid systems and inline lists in CSS frameworks. For example, the class and mixin inline-list (in the Zurb Foundation framework) are based on floats.

Possibilities of floating elements

An element can be floated to the left or to the right, and of course there is a method to reset floats too. The possible values are:

```css
float: left;  /* will float element to left */
float: right; /* will float element to right */
float: none;  /* will reset float */
```

Most known floating problems

When you are using floating elements, you can run into some issues.
The best-known problems with floated elements are:

- Elements that are too big (because of the width, left/right margins, left/right padding, and a badly counted width based on the traditional box model)
- Uncleared floats

Each of these problems produces a specific effect, which you can easily recognize and then fix. Too-big elements can be recognized when elements that should fit in one line don't. The first thing to check is whether box-sizing: border-box is applied; then check the width, padding, and margin. Uncleared floats are easy to recognize: some elements from the next container get pulled up into the floating structure. It means that you have no clearfix in your floating container.

Defining a clearfix class/mixin

When I started developing HTML and CSS code, the common method to clear floats was a class such as .cb or .clear, defined as follows:

```css
.clearboth, .cb {
  clear: both;
}
```

This element was added in the container, right after all the floated elements. It is important to remember to clear the floats, because a container that holds floating elements won't inherit the height of the highest floating element (it will have a height equal to 0). For example:

```html
<div class="container">
  <div class="float">
    … content ...
  </div>
  <div class="float">
    … content ...
  </div>
  <div class="clearboth"></div>
</div>
```

Where the CSS looks like this:

```css
.float {
  width: 100px;
  height: 100px;
  float: left;
}

.clearboth {
  clear: both;
}
```

Nowadays, there is a better and faster way to clear floats. You can do this with a clearfix, which can be defined like this:

```css
.clearfix:after {
  content: " ";
  visibility: hidden;
  display: block;
  height: 0;
  clear: both;
}
```

You can use it in HTML code like this:

```html
<div class="container clearfix">
  <div class="float">
    … content ...
  </div>
  <div class="float">
    … content ...
  </div>
</div>
```

The main reason to switch to a clearfix is that you save one tag (the one with the clearboth class). The recommended usage is based on the clearfix mixin, which you can define like this in SASS:

```sass
=clearfix
  &:after
    content: " "
    visibility: hidden
    display: block
    height: 0
    clear: both
```

Then, every time you need to clear floating in some container, you invoke it. Taking the previous code as an example:

```html
<div class="container">
  <div class="float">
    … content ...
  </div>
  <div class="float">
    … content ...
  </div>
</div>
```

The container can be described as:

```sass
.container
  +clearfix
```

An example of using floating elements

The best-known use of floated elements is grids. A grid is mainly used to structure the data displayed on a webpage. In this article, let's check just a short draft of a grid.
Let's create the HTML code:

```html
<div class="row">
  <div class="column_1of2">
    Lorem
  </div>
  <div class="column_1of2">
    Lorem
  </div>
</div>
<div class="row">
  <div class="column_1of3">
    Lorem
  </div>
  <div class="column_1of3">
    Lorem
  </div>
  <div class="column_1of3">
    Lorem
  </div>
</div>
<div class="row">
  <div class="column_1of4">
    Lorem
  </div>
  <div class="column_1of4">
    Lorem
  </div>
  <div class="column_1of4">
    Lorem
  </div>
  <div class="column_1of4">
    Lorem
  </div>
</div>
```

And the SASS:

```sass
*
  box-sizing: border-box

=clearfix
  &:after
    content: " "
    visibility: hidden
    display: block
    height: 0
    clear: both

.row
  +clearfix

.column_1of2
  background: orange
  width: 50%
  float: left
  &:nth-child(2n)
    background: red

.column_1of3
  background: orange
  width: (100% / 3)
  float: left
  &:nth-child(2n)
    background: red

.column_1of4
  background: orange
  width: 25%
  float: left
  &:nth-child(2n)
    background: red
```

As you can see, we have created the structure of a basic grid. Where the HTML shows Lorem, the full lorem ipsum text was used to illustrate the grid system.

Summary

In this article, we studied the traditional box model and floating elements in detail.

Resources for Article:

Further resources on this subject:
- Flexbox in CSS [article]
- CodeIgniter Email and HTML Table [article]
- Developing Wiki Seek Widget Using Javascript [article]

Using Native SDKs and Libraries in React Native

Emilio Rodriguez
07 Apr 2016
6 min read
When building an app in React Native we may end up needing to use third-party SDKs or libraries. Most of the time, these are only available in their native version and are, therefore, only accessible as Objective-C or Swift libraries in the case of iOS apps, or as Java classes for Android apps. Only in a few cases are these libraries written in JavaScript, and even then, they may need pieces of functionality not available in React Native, such as DOM access or Node.js-specific functionality.

In my experience, this is one of the main reasons driving developers and IT decision makers in general to run away from React Native when considering a mobile development framework for their production apps. The creators of React Native were fully aware of this potential pitfall and left a door open in the framework to make sure integrating third-party software was not only possible but also quick, powerful, and doable by any non-iOS/Android native developer (that is, most React Native developers).

As a JavaScript developer, having to write Objective-C or Java code may not be very appealing in the beginning, but once you realize that the whole process of integrating a native SDK can take as little as eight lines of code split across two files (one header file and one implementation file), the fear quickly fades away and the feeling of being able to perform even the most complex task in a mobile app starts to take over. Suddenly, the whole power of iOS and Android can be at any React developer's disposal.

To better illustrate how to integrate a third-party SDK, we will use one of the easiest payment providers to integrate: Paymill. If we take a look at their site, we notice that only iOS and Android SDKs are available for mobile payments. That would leave out every app written in React Native if it weren't for the ability of this framework to communicate with native modules. For the sake of convenience, this article focuses on the iOS module.

Step 1: Create two native files for our bridge.

We need to create an Objective-C class, which will serve as a bridge between our React code and Paymill's native SDK. Normally, an Objective-C class is made out of two files, a .m and a .h, holding the module implementation and the header for this module, respectively.

To create the .h file we can right-click on our project's main folder in Xcode > New File > Header file. In our case, I will call this file PaymillBridge.h. For React Native to communicate with our bridge, we need to make it implement the RCTBridgeModule protocol included in React Native. To do so, we only have to make sure our .h file looks like this:

```objectivec
// PaymillBridge.h
#import "RCTBridgeModule.h"

@interface PaymillBridge : NSObject <RCTBridgeModule>
@end
```

We can follow a similar process to create the .m file: right-click our project's main folder in Xcode > New File > Objective-C file. The module implementation file should include the RCT_EXPORT_MODULE macro (also provided in any React Native project):

```objectivec
// PaymillBridge.m
@implementation PaymillBridge

RCT_EXPORT_MODULE();

@end
```

A macro is just a predefined piece of functionality that can be imported just by calling it. This one makes sure React is aware of this module and makes it available for importing in your app.

Now we need to expose the method we need in order to use Paymill's services from our JavaScript code. For this example we will be using Paymill's method to generate a token representing a credit card, based on a public key and some credit card details: generateTokenWithPublicKey.
To do so, we need to use another macro provided by React Native: RCT_EXPORT_METHOD.

```objectivec
// PaymillBridge.m
@implementation PaymillBridge

RCT_EXPORT_MODULE();

RCT_EXPORT_METHOD(generateTokenWithPublicKey:(NSString *)publicKey
                  cardDetails:(NSDictionary *)cardDetails
                  callback:(RCTResponseSenderBlock)callback)
{
  //… Implement the call as described in the SDK's documentation …
  callback(@[[NSNull null], token]);
}

@end
```

In this step we have to write some Objective-C, but most likely it will be a very simple piece of code following the examples stated in the SDK's documentation. One interesting point is how to send data from the native SDK back to our React code. To do so, you need to pass a callback, as I did in the last parameter of our exported method. Callbacks in React Native's bridges have to be defined as RCTResponseSenderBlock. Once we do this, we can call this callback passing an array of parameters, which will be sent as parameters for our JavaScript function in React Native (in our case, we decided to pass two parameters back: an error set to null, following the error-handling conventions of Node.js, and the token generated natively by Paymill).

Step 2: Call our bridge from our React Native code.

Once the module is properly set up, React Native makes it available in our app just by importing it from our JavaScript code:

```javascript
// PaymentComponent.js
var Paymill = require('react-native').NativeModules.PaymillBridge;

Paymill.generateTokenWithPublicKey(
  '56s4ad6a5s4sd5a6',
  cardDetails,
  function(error, token){
    console.log(token);
  });
```

NativeModules holds the list of modules we created implementing the RCTBridgeModule protocol. React Native makes them available under the name we chose for our Objective-C class (PaymillBridge in our example). Then, we can call any exported native method as a normal JavaScript method from our React Native component or library.

Going Even Further

That should do it for any basic SDK, but React Native gives developers a lot more control over how to communicate with native modules. For example, we may want to force the module to run on the main thread. For that we just need to add an extra method to our native module implementation:

```objectivec
// PaymillBridge.m
@implementation PaymillBridge
//...

- (dispatch_queue_t)methodQueue
{
  return dispatch_get_main_queue();
}
```

Just by adding this method to our PaymillBridge.m, React Native will force all the functionality related to this module to run on the main thread, which is needed when running main-thread-only iOS APIs.

And there is more: promises, exporting constants, sending events to JavaScript, and so on. More complex functionality can be found in the official documentation of React Native; the topics covered in this article, however, should solve 80 percent of the cases when implementing most third-party SDKs.

About the Author

Emilio Rodriguez started working as a software engineer for Sun Microsystems in 2006. Since then, he has focused his efforts on building a number of mobile apps with React Native while contributing to the React Native project. These contributions helped him understand how deep and powerful this framework is.

Caching in Symfony

Packt
05 Apr 2016
15 min read
In this article by Sohail Salehi, author of the book Mastering Symfony, we are going to discuss performance improvement using cache. Caching is a vast subject that needs its own book to be covered properly. However, in our Symfony project, we are interested in two types of caches only:

- Application cache
- Database cache

We will see what caching facilities are provided in Symfony by default and how we can use them. We are going to apply these caching techniques to some methods in our projects and watch the performance improvement. By the end of this article, you will have a firm understanding of the usage of HTTP cache headers in the application layer and caching libraries.

Definition of cache

A cache is a temporary place that stores contents that can be served faster when they are needed. Considering that we already have a permanent place on disk to store our web contents (templates, code, and database tables), a cache sounds like duplicate storage. That is exactly what it is: a duplicate, and we need it because, in return for consuming extra space to store the same data, it provides a very fast response to some requests. So this is a very good trade-off between storage and performance.

To give you an example of how good this deal can be, consider a usual client/server request/response model, and let's say the response latency is two seconds and there are only 100 users who hit the same content per hour. Now place a cache layer between the client and the server. What it does, basically, is receive the request and pass it to the server. The server sends a response to the cache and, because this response is new to the cache, the cache saves a copy (duplicate) of the response and then passes it back to the client. The latency is 2 + 0.2 seconds.

However, it doesn't add up, does it? The purpose of using a cache was to improve overall performance and reduce latency, yet it has added more delay to the cycle. With this result, how could it possibly be beneficial? The answer lies in the next request: with the response now cached, imagine the same request coming through. (We have about 100 requests per hour for the same content, remember?) This time, the cache layer looks into its space, finds the response, and sends it back to the client without bothering the server. The latency is 0.2 seconds.

Of course, these are only imaginary numbers and situations; however, in its simplest form, this is how a cache works. It might not be very helpful on a low-traffic website, but when we are dealing with thousands of concurrent users on a high-traffic website, we can appreciate the value of caching.

From this scenario, we can define some terminology to use in this article as we continue. On the first request, the page's contents didn't exist in the cache, and the cache layer had to store a copy of them for future reference. This is called a Cache Miss. On the later request, we already had a copy of the contents stored in the cache and we benefited from it. This is called a Cache Hit.

Characteristics of a good cache

If you do a quick search, you will find that a good cache is defined as one that misses only once. In other words, a cache miss happens only if the content has not been requested before. This feature is necessary, but it is not sufficient.
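To make the "misses only once" definition concrete, here is a minimal cache-aside sketch in JavaScript (purely illustrative; Symfony's gateway cache works at the HTTP layer, not like this):

```javascript
var cache = {};

// Illustrative only: a cache layer that never expires. It misses exactly
// once per URL — and that is precisely its weakness, as discussed next.
function cachedFetch(url, fetchFromOrigin) {
  if (cache[url]) {
    return cache[url];                  // cache hit: fast path, origin untouched
  }
  var response = fetchFromOrigin(url);  // cache miss: full round trip to the server
  cache[url] = response;                // store a duplicate for future requests
  return response;                      // may grow stale if the origin changes
}
```

A cache like this satisfies the naive definition, yet it will keep serving old content forever once the origin changes, which is exactly why freshness has to enter the definition.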
To clarify the situation a little bit, let's add two more terms here. A cache can be in one of the following states: fresh (it has the same contents as the original response) or stale (it has the old response's contents, which have since changed on the server).

The important question is: for how long should a cache be kept? We have the power to define the freshness of a cache by setting an expiration period. We will see how to do this in the coming sections. However, just because we have this power doesn't mean that we are always right about the content's freshness. If we cache content for a long time, a cache miss won't happen again (which satisfies the preceding definition), but the content might lose its freshness because of dynamic resources that change on the server. To give you an example, nobody likes to read three-month-old news when they open the BBC website.

Now, we can modify the definition of a good cache as follows: a cache strategy is considered good if a cache miss for the same content happens only once, while the cached contents are still fresh. This means that defining the cache expiry time alone won't be enough; we need another strategy to keep an eye on cache freshness. This happens via a cache validation strategy: when the server sends a response, we can set validation rules on the basis of what really matters on the server side and, this way, keep the contents stored in the cache fresh. We will see how to do this in Symfony soon.

Caches in a Symfony project

In this article, we will focus on two types of caches: the gateway cache (also called a reverse proxy cache) and the Doctrine cache. As you might have guessed, the gateway cache deals with all of the HTTP cache headers. Symfony comes with a very strong gateway cache out of the box. All you need to do is activate it in your front controller and then start defining your cache expiration and validation strategies inside your controllers.

That said, this does not mean that you are forced or restrained to use the Symfony cache only. If you prefer another reverse proxy cache library (such as Varnish or Django), you are welcome to use it. The caching configuration in Symfony is transparent, so you don't need to change a single line inside your controllers when you change caching libraries; just modify your config.yml file and you will be good to go.

However, we all know that caching is not for application layers and views only. Sometimes, we need to cache database-related contents as well. For our Doctrine ORM, this includes the metadata cache, query cache, and result cache. Doctrine comes with its own bundle to handle these types of caches, and it uses a wide range of libraries (APC, Memcached, Redis, and so on) to do the job. Again, we don't need to install anything to use this cache bundle: if we have Doctrine installed already, all we need to do is configure it, and all the Doctrine caching power will be at our disposal.

Putting these two caching types together gives us the big picture for caching our Symfony project. Still, we might have a problem with the final cached page. Imagine that we have a static page that might change once a week, and on this page there are some blocks that might change on a daily or even hourly basis. The User dashboard in our project is a good example.
Thus, if we set the expiration on the gateway cache to one week, we cannot reflect all of those rapid updates in our project and task controllers. To solve this problem, we can leverage Edge Side Includes (ESI) inside Symfony. Basically, any part of the page that is defined inside an ESI tag can tell its own cache story to the gateway cache. Thus, we can have multiple cache strategies living side by side inside a single page.

With this solution, we are going to use the default Symfony and Doctrine caching features for the application and model layers; you can also use some popular third-party bundles for more advanced settings. If you completely understand the caching principles, moving to other caching bundles will be a breeze.

Key players in the HTTP cache header

Before diving into the Symfony application cache, let's familiarize ourselves with the elements that we need to handle in our cache strategies. To do so, open https://www.wikipedia.org/ in your browser, inspect any resource with a 304 response code, and study the request/response headers inside the Network tab.

Among the response elements, there are four cache headers that we are interested in the most: expires and cache-control, which are used for the expiration model, and etag and last-modified, which are used for the validation model. Apart from these cache headers, we can have variations of the same cache (compressed/uncompressed) via the Vary header, and we can define a cache as private (accessible by a specific user) or public (accessible by everyone).

Using the Symfony reverse proxy cache

There is no complicated or lengthy procedure required to activate Symfony's gateway cache. Just open the front controller and uncomment the following lines:

```php
// web/app.php
<?php
//...
require_once __DIR__.'/../app/AppKernel.php';
// uncomment this line
require_once __DIR__.'/../app/AppCache.php';

$kernel = new AppKernel('prod', false);
$kernel->loadClassCache();
// and this line
$kernel = new AppCache($kernel);
// ...
?>
```

Now the kernel is wrapped by the Application Cache layer, which means that any request coming from the client will pass through this layer first.

Set the expiration for the dashboard page

Log in to your project and click on the Request/Response section in the debug toolbar. Then, scroll down to Response Headers and check the contents. Among the cache headers that we are interested in, only cache-control is present, with some default values. When you don't set any value for Cache-Control, Symfony considers the page contents private, to keep them safe.

Now, let's go to the Dashboard controller and add some gateway cache settings to the indexAction() method:

```php
// src/AppBundle/Controller/DashboardController.php
<?php
namespace AppBundle\Controller;

use Symfony\Bundle\FrameworkBundle\Controller\Controller;
use Symfony\Component\HttpFoundation\Response;

class DashboardController extends Controller
{
    public function indexAction()
    {
        $uId = $this->getUser()->getId();
        $util = $this->get('mava_util');
        $userProjects = $util->getUserProjects($uId);
        $currentTasks = $util->getUserTasks($uId, 'in progress');

        $response = new Response();
        $date = new \DateTime('+2 days');
        $response->setExpires($date);

        return $this->render(
            'CoreBundle:Dashboard:index.html.twig',
            array(
                'currentTasks' => $currentTasks,
                'userProjects' => $userProjects
            ),
            $response
        );
    }
}
```

You might have noticed that we didn't change the render() method.
Instead, we added the response settings as the third parameter of this method. This is a good solution because we keep the current template structure, and adding new settings doesn't require any other changes in the code. However, you might wonder what other options we have. We could save the whole $this->render() call in a variable and assign the response settings to it, as follows:

```php
// src/AppBundle/Controller/DashboardController.php
<?php
// ...
$res = $this->render(
    'AppBundle:Dashboard:index.html.twig',
    array(
        'currentTasks' => $currentTasks,
        'userProjects' => $userProjects
    )
);
$res->setExpires($date);
return $res;
?>
```

That still looks like a lot of hard work for a simple response header setting, so let me introduce a better option. We can use the @Cache annotation, as follows:

```php
// src/AppBundle/Controller/DashboardController.php
<?php
namespace AppBundle\Controller;

use Symfony\Bundle\FrameworkBundle\Controller\Controller;
use Sensio\Bundle\FrameworkExtraBundle\Configuration\Cache;

class DashboardController extends Controller
{
    /**
     * @Cache(expires="next Friday")
     */
    public function indexAction()
    {
        $uId = $this->getUser()->getId();
        $util = $this->get('mava_util');
        $userProjects = $util->getUserProjects($uId);
        $currentTasks = $util->getUserTasks($uId, 'in progress');

        return $this->render(
            'AppBundle:Dashboard:index.html.twig',
            array(
                'currentTasks' => $currentTasks,
                'userProjects' => $userProjects
            ));
    }
}
```

Have you noticed that the response object is completely removed from the code? With an annotation, all response headers are sent internally, which helps keep the original code clean. Now that's what I call zero-fee maintenance. Check the response headers in Symfony's debug toolbar to see the result.

The good thing about @Cache annotations is that they can be nested. Imagine you have a controller full of actions and you want all of them to have a shared maximum age of half an hour, except one that is supposed to be private and should expire in five minutes. This sounds like a lot of code if you use the response objects directly, but with annotations it is as simple as this:

```php
<?php
//...

/**
 * @Cache(smaxage="1800", public="true")
 */
class DashboardController extends Controller
{
    public function firstAction()
    {
        //...
    }

    public function secondAction()
    {
        //...
    }

    /**
     * @Cache(expires="300", public="false")
     */
    public function lastAction()
    {
        //...
    }
}
```

The annotation defined before the controller class will apply to every single action, unless we explicitly add a new annotation for an action.

Validation strategy

In the previous example, we set a very long expiry period. This means that if a new task is assigned to the user, it won't show up on his dashboard, because of the wrong caching strategy. To fix this issue, we can validate the cache before using it. There are two ways of validating:

- We can check the content's date via the Last-Modified header: in this technique, we certify the freshness of the content via the time it was modified. In other words, if we keep track of the date and time of each change on a resource, then we can simply compare that date with the cache's date and find out whether it is still fresh.
- We can use the ETag header as a unique content signature: the other solution is to generate a unique string based on the contents and evaluate the cache's freshness based on this signature.

We are going to try both of them in the Dashboard controller and see them in action. Choosing the right validation header depends entirely on the current code.
In some actions, calculating modified dates is way easier than creating a digital footprint, while in others, going through the date and time functions might look costly. Of course, there are situations where generating both headers is critical. So the choice depends entirely on the code base and what you are trying to achieve. As you can see, we have two entities in the indexAction() method and, considering the current code, generating the ETag header looks practical. So the validation header will look as follows:

// src/AppBundle/Controller/DashboardController.php
<?php
//...
class DashboardController extends Controller
{
    /**
     * @Cache(ETag="userProjects ~ finishedTasks")
     */
    public function indexAction()
    {
        //...
    }
}

The next time a request arrives, the cache layer looks at the ETag value in the controller, compares it with its own ETag, and calls the indexAction() method only if there is a difference between the two. How to mix expiration and validation strategies Imagine that we want to keep the cache fresh for 10 minutes and simultaneously keep an eye on any changes to user projects or finished tasks. It is obvious that tasks won't finish every 10 minutes, and it is far beyond reality to expect changes in project status during this period. So, to make our caching strategy efficient, we can combine expiration and validation and apply them to the Dashboard controller as follows:

// src/AppBundle/Controller/DashboardController.php
<?php
//...
/**
 * @Cache(smaxage="600")
 */
class DashboardController extends Controller
{
    /**
     * @Cache(ETag="userProjects ~ finishedTasks")
     */
    public function indexAction()
    {
        //...
    }
}

Keep in mind that expiration has a higher priority than validation. In other words, the cache is fresh for 10 minutes, regardless of the validation status. So when you visit your dashboard for the first time, a new cache entry is generated automatically and you will hit the cache for the next 10 minutes. However, what happens after 10 minutes is a little different. Now the expiration condition is no longer satisfied; thus, the HTTP flow falls into the validation phase, and in case nothing has happened to the finished tasks status or your project status, a 304 (Not Modified) response is returned, a new expiration period starts, and you hit the cache again. However, if there is any change in your tasks or project status, then you will hit the server to get the real response, and a new cache built from the response's contents, a new expiration period, and a new ETag are generated and stored in the cache layer for future reference. Summary In this article, you learned about the basics of gateway and Doctrine caching. We saw how to set expiration and validation strategies using HTTP headers such as Cache-Control, Expires, Last-Modified, and ETag. You learned how to set public and private access levels for a cache and use an annotation to define cache rules in the controller. Resources for Article: Further resources on this subject: User Interaction and Email Automation in Symfony 1.3: Part1 [article] The Symfony Framework – Installation and Configuration [article] User Interaction and Email Automation in Symfony 1.3: Part2 [article]
How To Get Started with Redux in React Native

Emilio Rodriguez
04 Apr 2016
5 min read
In mobile development, there is a need for architectural frameworks, but complex frameworks designed to be used in web environments may end up damaging the development process or even the performance of our app. Because of this, some time ago I decided to introduce into all of my React Native projects the leanest framework I have ever worked with: Redux. Redux is basically a state container for JavaScript apps. It is 100 percent library-agnostic, so you can use it with React, Backbone, or any other view library. Moreover, it is really small and has no dependencies, which makes it an awesome tool for React Native projects. Step 1: Install Redux in your React Native project. Redux can be added as an npm dependency into your project. Just navigate to your project's main folder and type:

npm install --save react-redux

At the time this article was written, React Native still depended on React Redux 3.1.0, since the versions above it depend on React 0.14, which is not 100 percent compatible with React Native. Because of this, you will need to pin version 3.1.0 as the one your project depends on. Step 2: Set up a Redux-friendly folder structure. Of course, setting up the folder structure for your project is totally up to every developer, but you need to take into account that you will need to maintain a number of actions, reducers, and components. Besides, it's also useful to keep a separate folder for your API and utility functions so that these won't mix with your app's core functionality. Having this in mind, this is my preferred folder structure under the src folder in any React Native project: Step 3: Create your first action. In this article, we will be implementing a simple login functionality to illustrate how to integrate Redux inside React Native. A good point to start this implementation is the action, a basic function called from the component whenever we want the whole state of the app to be changed (that is, changing from the logged-out state into the logged-in state). To keep this example as concise as possible, we won't be doing any API calls to a backend; only the pure Redux integration will be explained. Our action creator is a simple function returning an object (the action itself) with a type attribute expressing what happened with the app. No business logic should be placed here; our action creators should be really plain and descriptive. Step 4: Create your first reducer. Reducers are the ones in charge of updating the state of the app. Unlike in Flux, Redux has only one store for the whole app, but it will be conveniently namespaced automatically by Redux once the reducers have been applied. In our example, the user reducer needs to be aware of when the user is logged in. Because of that, it needs to import the LOGIN_SUCCESS constant we defined in our actions before and export a default function, which will be called by Redux every time an action occurs in the app. Redux will automatically pass the current state of the app and the action that occurred. It's up to the reducer to realize whether it needs to modify the state or not based on the action.type. That's why, almost always, our reducer will be a function containing a switch statement that modifies and returns the state based on what action occurred. It's important to state that Redux works with object references to identify when the state is changed. Because of this, the state should be cloned before any modification.
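Since the original code figures are not reproduced here, the following is a minimal sketch of what the action creator and the reducer might look like; the file names and the exact shape of the user state are assumptions based on the text:

// actions/user.js - a hypothetical action creator for the login flow
export const LOGIN_SUCCESS = 'LOGIN_SUCCESS';

export function login() {
  // Plain and descriptive: no business logic, just what happened.
  return { type: LOGIN_SUCCESS };
}

// reducers/user.js - a hypothetical reducer reacting to that action
import { LOGIN_SUCCESS } from '../actions/user';

const initialState = { loggedIn: false };

export default function userReducers(state = initialState, action) {
  switch (action.type) {
    case LOGIN_SUCCESS:
      // Clone the state instead of mutating it, so the reference changes.
      return Object.assign({}, state, { loggedIn: true });
    default:
      return state;
  }
}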
It's also interesting to know that the action passed to the reducers can contain attributes other than type. For example, when doing a more complex login, the user's first name and last name can be added to the action by the action creator and used by the reducer to update the state of the app. Step 5: Create your component. This step is almost pure React Native coding. We need a component to trigger the action and to respond to the change of state in the app. In our case, it will be a simple View containing a button that disappears when the user is logged in. This is a normal React Native component except for some pieces of the Redux boilerplate: The three import lines at the top will require everything we need from Redux. mapStateToProps and mapDispatchToProps are two functions bound to the component with connect: this makes Redux aware that this component needs to be passed a piece of the state (everything under userReducers) and all the actions available in the app. Just by doing this, we will have access to the login action (as it is used in onLoginButtonPress) and to the state of the app (as it is used in the !this.props.user.loggedIn statement). Step 6: Glue it all together from your index.ios.js. For Redux to apply its magic, some initialization should be done in the main file of your React Native project (index.ios.js). This is pure boilerplate and only done once: Redux needs to inject a store holding the app state into the app. To do so, it requires a Provider wrapping the whole app. This store is basically a combination of reducers. For this article we only need one reducer, but a full app will include many others, and each of them should be passed into the combineReducers function to be taken into account by Redux whenever an action is triggered. A rough sketch of this wiring follows.
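Since the original screenshots of steps 5 and 6 are not reproduced here, the sketch below shows one way the wiring could look; the component name, file paths, and the userReducers key are assumptions based on the text, and the exact import paths and Provider syntax vary between React Native and react-redux versions:

// components/login-view.js - hypothetical component wiring (step 5)
import React from 'react';
import { View, Text, TouchableHighlight } from 'react-native';
import { connect } from 'react-redux';
import * as userActions from '../actions/user';

class LoginView extends React.Component {
  onLoginButtonPress() {
    // Dispatches the login action; the reducer flips user.loggedIn.
    this.props.login();
  }

  render() {
    return (
      <View>
        {!this.props.user.loggedIn &&
          <TouchableHighlight onPress={() => this.onLoginButtonPress()}>
            <Text>Log in</Text>
          </TouchableHighlight>}
      </View>
    );
  }
}

// Expose the userReducers slice of the state and all user actions as props.
const mapStateToProps = (state) => ({ user: state.userReducers });
export default connect(mapStateToProps, userActions)(LoginView);

The one-off initialization in the main file could then look like this:

// index.ios.js - gluing it all together (step 6)
import React from 'react';
import { AppRegistry } from 'react-native';
import { createStore, combineReducers } from 'redux';
import { Provider } from 'react-redux';
import userReducers from './src/reducers/user';
import App from './src/components/app';

// One store for the whole app, namespaced by reducer name.
const store = createStore(combineReducers({ userReducers }));

const Root = () => (
  <Provider store={store}>
    <App />
  </Provider>
);

AppRegistry.registerComponent('MyApp', () => Root);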
About the Author Emilio Rodriguez started working as a software engineer for Sun Microsystems in 2006. Since then, he has focused his efforts on building a number of mobile apps with React Native while contributing to the React Native project. These contributions helped him understand how deep and powerful this framework is.
Making an App with React and Material Design

Soham Kamani
21 Mar 2016
7 min read
There has been much progress in the hybrid app development space, and also in React.js. Currently, almost all hybrid apps use Cordova to build and run web applications on their platform of choice. Although React can have a steep learning curve, the benefit you get is that you are forced to make your code more modular, and this leads to huge long-term gains. This is great for developing applications for the browser, but when it comes to developing mobile apps, most web apps fall short because they fail to create the "native" experience that so many users know and love. Implementing these features on your own (by playing around with CSS and JavaScript) may work, but it's a huge pain for even something as simple as a material-design-oriented button. Fortunately, there is a library of React components to help us out with getting the look and feel of material design in our web application, which can then be ported to a mobile device to get a native look and feel. This post will take you through all the steps required to build a mobile app with React and then port it to your phone using Cordova. Prerequisites and dependencies Globally, you will require Cordova, which can be installed by executing this line:

npm install -g cordova

Now that this is done, you should make a new directory for your project and set up a build environment to use ES6 and JSX. Currently, webpack is the most popular build system for React, but if that's not to your taste, there are many more build systems out there. Once you have your project folder set up, install React as well as all the other libraries you will be needing:

npm init
npm install --save react react-dom material-ui react-tap-event-plugin

Making your app Once we're done, the app should look something like this:   If you just want to get your hands dirty, you can find the source files here. Like all web applications, your app will start with an index.html file:

<html>
<head>
  <title>My Mobile App</title>
</head>
<body>
  <div id="app-node">
  </div>
  <script src="bundle.js" ></script>
</body>
</html>

Yup, that's it. If you are using webpack, your CSS will be included in the bundle.js file itself, so there's no need to put "style" tags either. This is the only HTML you will need for your application. Next, let's take a look at index.js, the entry point to the application code:

//index.js
import React from 'react';
import ReactDOM from 'react-dom';
import App from './app.jsx';

const node = document.getElementById('app-node');

ReactDOM.render(
  <App/>,
  node
);

What this does is grab the main App component and attach it to the app-node DOM node. Drilling down further, let's look at the app.jsx file:

//app.jsx
'use strict';

import React from 'react';
import AppBar from 'material-ui/lib/app-bar';
import MyTabs from './my-tabs.jsx';

let App = React.createClass({
  render : function(){
    return (
      <div>
        <AppBar title="My App" />
        <MyTabs />
      </div>
    );
  }
});

module.exports = App;

Following React's philosophy of structuring our code, we can roughly break our app down into two parts: The title bar The tabs below The title bar is the more straightforward of the two and is fetched directly from the material-ui library. All we have to do is supply a "title" property to the AppBar component.
MyTabs is another component that we have made ourselves, put in a different file because of its complexity:

'use strict';

import React from 'react';
import Tabs from 'material-ui/lib/tabs/tabs';
import Tab from 'material-ui/lib/tabs/tab';
import Slider from 'material-ui/lib/slider';
import Checkbox from 'material-ui/lib/checkbox';
import DatePicker from 'material-ui/lib/date-picker/date-picker';
import injectTapEventPlugin from 'react-tap-event-plugin';

injectTapEventPlugin();

const styles = {
  headline: {
    fontSize: 24,
    paddingTop: 16,
    marginBottom: 12,
    fontWeight: 400
  }
};

const TabsSimple = React.createClass({
  render: () => (
    <Tabs>
      <Tab label="Item One">
        <div>
          <h2 style={styles.headline}>Tab One Template Example</h2>
          <p>
            This is the first tab.
          </p>
          <p>
            This is to demonstrate how easy it is to build mobile apps with react
          </p>
          <Slider name="slider0" defaultValue={0.5}/>
        </div>
      </Tab>
      <Tab label="Item 2">
        <div>
          <h2 style={styles.headline}>Tab Two Template Example</h2>
          <p>
            This is the second tab
          </p>
          <Checkbox name="checkboxName1" value="checkboxValue1" label="Installed Cordova"/>
          <Checkbox name="checkboxName2" value="checkboxValue2" label="Installed React"/>
          <Checkbox name="checkboxName3" value="checkboxValue3" label="Built the app"/>
        </div>
      </Tab>
      <Tab label="Item 3">
        <div>
          <h2 style={styles.headline}>Tab Three Template Example</h2>
          <p> Choose a Date:</p>
          <DatePicker hintText="Select date"/>
        </div>
      </Tab>
    </Tabs>
  )
});

module.exports = TabsSimple;

This file has quite a lot going on, so let's break it down step by step: We import all the components that we're going to use in our app. This includes tabs, sliders, checkboxes, and datepickers. injectTapEventPlugin is a plugin that we need in order to get tab switching to work. We decide the style used for our tabs. Next, we make our Tabs React component, which consists of three tabs: The first tab has some text along with a slider. The second tab has a group of checkboxes. The third tab has a pop-up datepicker. Each component has a few properties that are specific to it (such as the initial value of the slider, the value reference of the checkbox, or the placeholder for the datepicker). There are a lot more properties you can assign, which are specific to each component. Building your app For building on Android, you will first need to install the Android SDK. Now that we have all the code in place, all that is left is building the app. For this, make a new directory, start a new Cordova project, and add the Android platform by running the following on your terminal:

mkdir my-cordova-project
cd my-cordova-project
cordova create .
cordova platform add android

Once the installation is complete, build the code we wrote previously. If you are using the same build system as the source code, you will have only two files, that is, index.html and bundle.min.js. Delete all the files that are currently present in the www folder of your Cordova project and copy those two files there instead. You can check whether your app is working on your computer by running cordova serve and going to the appropriate address in your browser. If all is well, you can build and deploy your app:

cordova build android
cordova run android

This will build and install the app on your Android device (provided it is in debug mode and connected to your computer). Similarly, you can build and install the same app for iOS or Windows (you may need additional tools such as Xcode or .NET for iOS or Windows). You can also use any other framework to build your mobile app.
The Angular framework also comes with its own set of material design components. About the Author Soham Kamani is a full-stack web developer and electronics hobbyist. He is especially interested in JavaScript, Python, and IoT.
Microservices – Brave New World

Packt
17 Mar 2016
9 min read
In this article by David Gonzalez, author of the book Developing Microservices with Node.js, we will cover the need for microservices, explain the monolithic approach, and study how to build and deploy microservices. (For more resources related to this topic, see here.) Need for microservices The world of software development has evolved quickly over the past 40 years. One of the key points of this evolution has been the size of these systems. From the days of MS-DOS, we have taken a hundred-fold leap into our present systems. This growth in size creates a need for better ways of organizing code and software components. Usually, when a company grows due to business needs, which is known as organic growth, the software gets organized on a monolithic architecture, as it is the easiest and quickest way of building software. After a few years (or even months), adding new features becomes harder due to the coupled nature of the created software. Monolithic software There are a few companies that have already started building their software using microservices, which is the ideal scenario. The problem is that not all companies can plan their software upfront. Instead of planning, these companies build the software based on the organic growth experienced: a few software components that group business flows by affinity. It is not rare to see companies having two big software components: the user-facing website and the internal administration tools. This is usually known as a monolithic software architecture. Some of these companies face big problems when trying to scale their engineering teams. It is hard to coordinate teams that build, deploy, and maintain a single software component. Clashes on releases and the reintroduction of bugs are common problems that drain a big chunk of energy from the teams. One of the most interesting solutions to this problem (and it has other benefits too) is to split the monolithic software into microservices. This enables the engineering teams to specialize in a few smaller modules: autonomous and isolated software components that can be versioned, updated, and deployed without interfering with the rest of the systems of the company, and that are highly specialized in a given task (such as sending e-mails, processing card payments, and so on). Microservices in the real world Microservices are small software components that specialize in one task and work together to achieve a higher-level task. Forget about software for a second and think about how a company works. When someone applies for a job in a company, he applies for a given position: software engineer, systems administrator, or office manager. The reason for this can be summarized in one word: specialization. If you are used to working as a software engineer, you will get better with experience and add more value to the company. The fact that you don't know how to deal with a customer won't affect your performance, as it is not your area of expertise and will hardly add any value to your day-to-day work. A microservice is an autonomous unit of work that can execute one task without interfering with other parts of the system, similar to what a job position is to a company. This has a number of benefits that can be used in favor of the engineering team in order to help scale the systems of a company.
Nowadays, hundreds of systems are built using microservices-oriented architectures, as follows: Netflix: They are one of the most popular streaming services and have built an entire ecosystem of applications that collaborate in order to provide a reliable and scalable streaming system used across the globe. Spotify: They are one of the leading music streaming services in the world and have built this application using microservices. Every single widget of the application (which is a website exposed as a desktop app using Chromium Embedded Framework (CEF)) is a different microservice that can be updated individually. First, there was the monolith A huge percentage (my estimate is around 90%) of modern enterprise software is built following a monolithic approach: huge software components that run in a single container and have a well-defined development life cycle, which goes completely against the agile principles of deliver early and deliver often (https://en.wikipedia.org/wiki/Release_early,_release_often): Deliver early: The sooner you fail, the easier it is to recover. If you work on a software component for two years and only then release it, there is a huge risk of deviation from the original requirements, which are usually wrong and change every few days. Deliver often: The software is delivered to all the stakeholders often, so that they can give their input and see the changes reflected in the software. Errors can be fixed in a few days and improvements are identified easily. Companies build big software components instead of smaller ones that work together because it is the natural thing to do, as follows: The developer has a new requirement. He builds a new method on an existing class on the service layer. The method is exposed on the API via HTTP, SOAP, or any other protocol. Now, multiply this by the number of developers in your company, and you will obtain something called organic growth. Organic growth is the type of uncontrolled and unplanned growth of software systems under business pressure without adequate long-term planning, and it is bad. How to tackle organic growth? The first thing needed to tackle organic growth is to make sure that business and IT are aligned in the company. Usually, in big companies, IT is not seen as a core part of the business. Organizations outsource their IT systems, keeping the cost in mind but not the quality, so that the partners building these software components are focused on one thing: delivering on time and according to the specification, even if it is incorrect. This produces a less-than-ideal ecosystem for responding to the business needs with a working solution for an existing problem. IT is led by people who barely understand how the systems are built and usually overlook the complexity of software development. Fortunately, this is a changing tendency, as IT systems have become the drivers of 99% of the businesses around the world, but we need to be smarter about how we build them. The first measure to tackle organic growth is to align IT and business stakeholders to work together; educating the non-technical stakeholders is the key to success. If we go back to the example from the previous section (few releases with quite big changes), can we do it better? Of course we can. Divide the work into manageable software artifacts that model a single, well-defined business activity, and give each one an entity. It does not need to be a microservice at this stage, but keeping the logic inside a separate, well-defined, easily testable, and decoupled module will give us a huge advantage towards future changes in the application.
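As a rough illustration of what such a module's boundary could look like (the module and function names here are invented for the example, not taken from the book):

// email/index.js - a hypothetical self-contained unit of work modelling
// one business activity: sending e-mails.
module.exports = {
  send: function (to, subject, body, done) {
    // The delivery mechanism (SMTP, an external API, and so on) is an
    // internal detail; callers depend only on this small interface, so
    // the implementation can be swapped without touching the rest of
    // the system.
    process.nextTick(function () {
      done(null, { status: 'sent', to: to });
    });
  }
};

Any part of the application can now require this module and stay oblivious to how delivery actually happens.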
Building microservices – The fallback strategy When we design a system, we usually think about the replaceability of the existing components. For example, when using a persistence technology in Java, we tend to lean towards the standards (Java Persistence API (JPA)) so that we can replace the underlying implementation without too much effort. Microservices take the same approach, but they isolate the problem instead of working towards easy replaceability. Also, e-mailing is something that, although it seems simple, always ends up giving problems. Consider that we want to replace Mandrill with a plain SMTP server, such as Gmail. We don't need to do anything special; we just change the implementation and roll out the new version of our microservice, as follows:

var nodemailer = require('nodemailer');
var seneca = require("seneca")();

var transporter = nodemailer.createTransport({
  service: 'Gmail',
  auth: {
    user: '[email protected]',
    pass: 'verysecurepassword'
  }
});

/**
 * Sends an email including the content.
 */
seneca.add({area: "email", action: "send"}, function(args, done) {
  var mailOptions = {
    from: 'Micromerce Info ✔ <[email protected]>',
    to: args.to,
    subject: args.subject,
    html: args.body
  };
  transporter.sendMail(mailOptions, function(error, info){
    if (error) {
      return done({code: error}, null);
    }
    done(null, {status: "sent"});
  });
});

To the outside world, our simplest version of the e-mail sender is now, to all appearances, using SMTP through Gmail to deliver our e-mails. We could even roll out one server with this version and send some traffic to it in order to validate our implementation without affecting all the customers (in other words, contain the failure). Deploying microservices Deployment is usually the ugly friend of the software development life cycle party. There is a missing contact point between development and system administration, which DevOps is going to solve in the following few years (or has already done, and no one told me). The following graph shows the cost of fixing software bugs versus the various phases of development: From continuous integration up to continuous delivery, the process should be automated as much as possible, where "as much as possible" means 100%. Remember, humans are imperfect; if we rely on humans carrying out a manual, repetitive process to deliver bug-free software, we are walking the wrong path. Remember that a machine will always be error-free (as long as the algorithm that is executed is error-free), so... why not let a machine control our infrastructure? Summary In this article, we saw why microservices are required in complex software systems, examined the monolithic approach and its shortcomings, and studied how to build and deploy microservices. Resources for Article: Further resources on this subject: Making a Web Server in Node.js [article] Node.js Fundamentals and Asynchronous JavaScript [article] An Introduction to Node.js Design Patterns [article]
Flexbox in CSS

Packt
09 Mar 2016
8 min read
In this article by Ben Frain, the author of Responsive Web Design with HTML5 and CSS3, Second Edition, we will look at Flexbox and its uses. In 2015, we have better means to build responsive websites than ever. There is a new CSS layout module called Flexible Box (or Flexbox, as it is more commonly known) that now has enough browser support to make it viable for everyday use. It can do more than merely provide a fluid layout mechanism. Want to be able to easily center content, change the source order of markup, and generally create amazing layouts with relative ease? Flexbox is the layout mechanism for you. (For more resources related to this topic, see here.) Introducing Flexbox Here's a brief overview of Flexbox's superpowers: It can easily vertically center contents It can change the visual order of elements It can automatically space and align elements within a box, automatically assigning available space between them It can make you look 10 years younger (probably not, but in low numbers of empirical tests (me) it has been proven to reduce stress) The bumpy path to Flexbox Flexbox went through a few major iterations before arriving at the relatively stable version we have today. For example, consider the changes from the 2009 version (http://www.w3.org/TR/2009/WD-css3-flexbox-20090723/), the 2011 version (http://www.w3.org/TR/2011/WD-css3-flexbox-20111129/), and the 2014 version we are basing our examples on (http://www.w3.org/TR/css-flexbox-1/). The syntax differences are marked. These differing specifications mean there are three major implementation versions. How many of these you need to concern yourself with depends on the level of browser support you need. Browser support for Flexbox Let's get this out of the way up front: there is no Flexbox support in Internet Explorer 9, 8, or below. For everything else you'd likely want to support (and virtually all mobile browsers), there is a way to enjoy most (if not all) of Flexbox's features. You can check the support information at http://caniuse.com/. Now, let's look at one of its uses. Changing source order Since the dawn of CSS, there has only been one way to switch the visual ordering of HTML elements in a web page. That was achieved by wrapping elements in something set to display: table and then switching the display property on the items within, between display: table-caption (puts it on top), display: table-footer-group (sends it to the bottom), and display: table-header-group (sends it to just below the item set to display: table-caption). However, as robust as this technique is, it was a happy accident, rather than the true intention of these settings. However, Flexbox has visual source re-ordering built in. Let's have a look at how it works. Consider this markup:

<div class="FlexWrapper">
    <div class="FlexItems FlexHeader">I am content in the Header.</div>
    <div class="FlexItems FlexSideOne">I am content in the SideOne.</div>
    <div class="FlexItems FlexContent">I am content in the Content.</div>
    <div class="FlexItems FlexSideTwo">I am content in the SideTwo.</div>
    <div class="FlexItems FlexFooter">I am content in the Footer.</div>
</div>

You can see here that the third item within the wrapper has an HTML class of FlexContent; imagine that this div is going to hold the main content for the page. OK, let's keep things simple. We will add some simple colors to more easily differentiate the sections and just get these items one under another in the same order they appear in the markup.
.FlexWrapper {
    background-color: indigo;
    display: flex;
    flex-direction: column;
}

.FlexItems {
    display: flex;
    align-items: center;
    min-height: 6.25rem;
    padding: 1rem;
}

.FlexHeader {
    background-color: #105B63;
}

.FlexContent {
    background-color: #FFFAD5;
}

.FlexSideOne {
    background-color: #FFD34E;
}

.FlexSideTwo {
    background-color: #DB9E36;
}

.FlexFooter {
    background-color: #BD4932;
}

That renders in the browser like this:   Now, suppose we want to switch the order of .FlexContent to be the first item, without touching the markup. With Flexbox, it's as simple as adding a single property/value pair:

.FlexContent {
    background-color: #FFFAD5;
    order: -1;
}

The order property lets us revise the order of items within a Flexbox simply and sanely. In this example, a value of -1 means that we want it to be before all the others. If you want to switch items around quite a bit, I'd recommend being a little more declarative and adding an order number for each. This makes things a little easier to understand when you combine them with media queries. Let's combine our new source-order-changing powers with some media queries to produce not just a different layout at different sizes but different ordering. As it's generally considered wise to have your main content at the beginning of a document, let's revise our markup to this:

<div class="FlexWrapper">
    <div class="FlexItems FlexContent">I am content in the Content.</div>
    <div class="FlexItems FlexSideOne">I am content in the SideOne.</div>
    <div class="FlexItems FlexSideTwo">I am content in the SideTwo.</div>
    <div class="FlexItems FlexHeader">I am content in the Header.</div>
    <div class="FlexItems FlexFooter">I am content in the Footer.</div>
</div>

First the page content, then our two sidebar areas, then the header, and finally the footer. As I'll be using Flexbox, we can structure the HTML in the order that makes sense for the document, regardless of how things need to be laid out visually. For the smallest screens (outside of any media query), I'll go with this ordering:

.FlexHeader {
    background-color: #105B63;
    order: 1;
}

.FlexContent {
    background-color: #FFFAD5;
    order: 2;
}

.FlexSideOne {
    background-color: #FFD34E;
    order: 3;
}

.FlexSideTwo {
    background-color: #DB9E36;
    order: 4;
}

.FlexFooter {
    background-color: #BD4932;
    order: 5;
}

Which gives us this in the browser:   And then, at a breakpoint, I'm switching to this:

@media (min-width: 30rem) {
    .FlexWrapper {
        flex-flow: row wrap;
    }
    .FlexHeader {
        width: 100%;
    }
    .FlexContent {
        flex: 1;
        order: 3;
    }
    .FlexSideOne {
        width: 150px;
        order: 2;
    }
    .FlexSideTwo {
        width: 150px;
        order: 4;
    }
    .FlexFooter {
        width: 100%;
    }
}

Which gives us this in the browser: In that example, the shortcut flex-flow: row wrap has been used. That allows the flex items to wrap onto multiple lines. It's one of the more poorly supported properties, so depending upon how far back support is needed, it might be necessary to wrap the content and two sidebars in another element. Summary There are near endless possibilities when using the Flexbox layout system and, due to its inherent "flexiness", it's a perfect match for responsive design.
If you've never built anything with Flexbox before, all the new properties and values can seem a little odd and it's sometimes disconcertingly easy to achieve layouts that have previously taken far more work. To double-check implementation details against the latest version of the specification, make sure you check out http://www.w3.org/TR/css-flexbox-1/. I think you'll love building things with Flexbox. To check out the other amazing things you can do with Flexbox, have a look at Responsive Web Design with HTML5 and CSS3, Second Edition. The book also features a plethora of other awesome tips and tricks related to responsive web design. Resources for Article: Further resources on this subject: CodeIgniter Email and HTML Table [article] ASP.Net Site Performance: Improving JavaScript Loading [article] Adding Interactive Course Material in Moodle 1.9: Part 1 [article]
Magento 2 – the New E-commerce Era

Packt
08 Mar 2016
17 min read
In this article by Ray Bogman and Vladimir Kerkhoff, the authors of the book Magento 2 Cookbook, we will cover the basic tasks related to creating a catalog and products in Magento 2. You will learn the following recipes: Creating a root catalog Creating subcategories Managing an attribute set (For more resources related to this topic, see here.) Introduction This article explains how to set up a vanilla Magento 2 store. If Magento 2 is totally new to you, lots of basic whereabouts are pointed out along the way. If you are currently working with Magento 1, not a lot has changed since; the new backend of Magento 2 is the biggest improvement of them all. The design is built responsively and has a great user experience. Compared to Magento 1, this is a great improvement. The menu is located vertically on the left of the screen and works great in desktop and mobile environments: In this article, we will see how to set up a website with multiple domains using different catalogs. Depending on the website, store, and store view setup, we can create different subcategories, URLs, and products per domain name. There are a number of different ways customers can browse your store, but one of the most effective is layered navigation. Layered navigation is located in your catalog and holds product features to sort or filter. Every website benefits from great Search Engine Optimization (SEO). You will learn how to define catalog URLs per catalog. Throughout this article, we will cover the basics of how to create a multidomain setup. Additional tasks required to complete a production-like setup are out of the scope of this article. Creating a root catalog The first thing that we need to start with when setting up a vanilla Magento 2 website is defining our website, store, and store view structure. So what is the difference between a website, a store, and a store view, and why is this important? A website is the top-level container and the most important of the three. It is the parent level of the entire store and is used, for example, to define domain names, different shipping methods, payment options, customers, orders, and so on. Stores can be used to define, for example, different store views with the same information. A store is always connected to a root catalog that holds all the categories and subcategories. One website can manage multiple stores, and every store has a different root catalog. When using multiple stores, it is not possible to share one basket. The main reason for this has to do with the configuration setup, where shipping, catalog, customer, inventory, tax, and payment settings are not sharable between different sites. The store view is the lowest level and is mostly used to handle different localizations. Every store view can be set with a different language. Besides using store views just for localizations, they can also be used for Business to Business (B2B), hidden private sales pages (with noindex and nofollow), and so on. The option where we use the base link URL, for example (yourdomain.com/myhiddenpage), is easy to set up. The website, store, and store view structure is shown in the following image: Getting ready For this recipe, we will use a Droplet created at DigitalOcean, https://www.digitalocean.com/. We will be using NGINX, PHP-FPM, and a Composer-based setup including Magento 2 preinstalled. No other prerequisites are required. How to do it...
For the purpose of this recipe, let's assume that we need to create a multi-website setup including three domains (yourdomain.com, yourdomain.de, and yourdomain.fr) and separate root catalogs. The following steps will guide you through this: First, we need to update our NGINX configuration. We need to configure the additional domains before we can connect them to Magento. Make sure that all domain names are connected to your server and DNS is configured correctly. Go to /etc/nginx/conf.d, open the default.conf file, and include the following content at the top of your file:

map $http_host $magecode {
    hostnames;
    default base;
    yourdomain.de de;
    yourdomain.fr fr;
}

Your configuration should now look like this:

map $http_host $magecode {
    hostnames;
    default base;
    yourdomain.de de;
    yourdomain.fr fr;
}

upstream fastcgi_backend {
    server 127.0.0.1:9000;
}

server {
    listen 80;
    listen 443 ssl http2;
    server_name yourdomain.com;
    set $MAGE_ROOT /var/www/html;
    set $MAGE_MODE developer;
    ssl_certificate /etc/ssl/yourdomain-com.cert;
    ssl_certificate_key /etc/ssl/yourdomain-com.key;
    include /var/www/html/nginx.conf.sample;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    location ~ /\.ht {
        deny all;
    }
}

Now let's go to the Magento 2 configuration file in /var/www/html/ and open the nginx.conf.sample file. Go to the bottom and look for the following:

location ~ (index|get|static|report|404|503)\.php$

Now we add the following lines to the file under fastcgi_pass fastcgi_backend;:

fastcgi_param MAGE_RUN_TYPE website;
fastcgi_param MAGE_RUN_CODE $magecode;

Your configuration should now look like this (this is only a small section of the bottom part):

location ~ (index|get|static|report|404|503)\.php$ {
    try_files $uri =404;
    fastcgi_pass fastcgi_backend;
    fastcgi_param MAGE_RUN_TYPE website;
    fastcgi_param MAGE_RUN_CODE $magecode;
    fastcgi_param PHP_FLAG "session.auto_start=off \n suhosin.session.cryptua=off";
    fastcgi_param PHP_VALUE "memory_limit=256M \n max_execution_time=600";
    fastcgi_read_timeout 600s;
    fastcgi_connect_timeout 600s;
    fastcgi_param MAGE_MODE $MAGE_MODE;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}

The current setup uses the website value for the MAGE_RUN_TYPE variable. You may change website to store depending on your setup preferences. When changing the variable, you need to update your default.conf mapping codes as well. Now, all you have to do is restart NGINX and PHP-FPM to use your new settings. Run the following command:

service nginx restart && service php-fpm restart

Before we continue, we need to check whether our web server is serving the correct codes. Run the following command in the Magento 2 web directory (/var/www/html/pub):

echo '<?php header("Content-type: text/plain"); print_r($_SERVER); ?>' > magecode.php

Don't forget to update your nginx.conf.sample file with the new magecode location. It's located at the bottom of your file and should look like this:

location ~ (index|get|static|report|404|503|magecode)\.php$ {

Restart NGINX and open the file in your browser. The output should look as follows. As you can see, the created MAGE_RUN variables are available. Congratulations, you just finished configuring NGINX with additional domains. Now let's continue connecting them in Magento 2. Log in to the backend and navigate to Stores | All Stores. By default, Magento 2 has one Website, Store, and Store View set up.
Now click on Create Website and commit the following details: Name: My German Website, Code: de. Next, click on Create Store and commit the following details: Web site: My German Website, Name: My German Website, Root Category: Default Category (we will change this later). Next, click on Create Store View and commit the following details: Store: My German Website, Name: German, Code: de, Status: Enabled. Repeat the same steps for the French domain. Make sure that the Code in Website and Store View is fr. The next important step is connecting the websites with the domain names. Navigate to Stores | Configuration | Web | Base URLs. Change the Store View scope at the top to My German Website. You will be prompted when switching; press OK to continue. Now, clear the Use Default checkbox for Base URL and Base Link URL and commit your domain name. Save, and continue the same procedure for the other website. The output should look like this: Save your entire configuration and clear your cache. Now go to Products | Categories and click on Add Root Category with the following data: Name: Root German, Is Active: Yes, Page Title: My German Website. Repeat the same step for the French domain. You may add additional information here, but it is not needed. Changing the current Root Category called Default Category to Root English is also optional but advised. Save your configuration, go to Stores | All Stores, and change every store to the appropriate Root Category that we just created. Every store should now have a dedicated Root Catalog. Congratulations, you just finished configuring Magento 2 with additional domains and dedicated Root Categories. Now let's open a browser and surf to the domain names you created: yourdomain.com, yourdomain.de, and yourdomain.fr. How it works… Let's recap and find out what we did throughout this recipe. In steps 1 through 11, we created a multistore setup for the .com, .de, and .fr domains using separate Root Catalogs. In steps 1 through 4, we configured the domain mapping in the NGINX default.conf file. Then, we added the fastcgi_param MAGE_RUN code to the nginx.conf.sample file, which will manage which website or store view to request within Magento. In step 6, we used an easy test method to check whether all domains run the correct MAGE_RUN code. In steps 7 through 9, we configured the website, store, and store view name and code for the given domain names. In step 10, we created additional Root Catalogs for the remaining German and French stores. They are then connected to the previously created store configuration. All stores now have their own Root Catalog. There's more… Are you unable to buy additional domain names but would like to try setting up a multistore anyway? Here are some tips to create one. Depending on whether you are using Windows, Mac OS, or Linux, the following options apply: Windows: Go to C:\Windows\System32\drivers\etc, open up the hosts file as an administrator, and add the following (change the IP and domain names accordingly):

123.456.789.0 yourdomain.de
123.456.789.0 yourdomain.fr
123.456.789.0 www.yourdomain.de
123.456.789.0 www.yourdomain.fr

Save the file and click on the Start button; then search for cmd.exe and commit the following:

ipconfig /flushdns

Mac OS: Go to the /etc/ directory, open the hosts file as a super user, and add the following (change the IP and domain names accordingly):
123.456.789.0 yourdomain.de
123.456.789.0 yourdomain.fr
123.456.789.0 www.yourdomain.de
123.456.789.0 www.yourdomain.fr

Save the file and run the following command in the shell:

dscacheutil -flushcache

Depending on your Mac version, check out the different commands here: http://www.hongkiat.com/blog/how-to-clear-flush-dns-cache-in-os-x-yosemite/ Linux: Go to the /etc/ directory, open the hosts file as the root user, and add the following (change the IP and domain names accordingly):

123.456.789.0 yourdomain.de
123.456.789.0 yourdomain.fr
123.456.789.0 www.yourdomain.de
123.456.789.0 www.yourdomain.fr

Save the file and run the following command in the shell:

service nscd restart

Depending on your Linux version, check out the different commands here: http://www.cyberciti.biz/faq/rhel-debian-ubuntu-flush-clear-dns-cache/ Open your browser and surf to the custom-made domains. These domains work only on your PC. You can copy these IP and domain name entries to as many PCs as you prefer. This method also works great when you are developing or testing and your production domain is not available in your development environment. Creating subcategories After creating the foundation of the website, we need to set up a catalog structure. Setting up a catalog structure is not difficult, but it needs to be thought out well. Some websites have an easy setup using two levels, while others sometimes use five or more subcategory levels. Always keep the user experience in mind; your customer needs to browse the pages easily. Keep it simple! Getting ready For this recipe, we will use a Droplet created at DigitalOcean, https://www.digitalocean.com/. We will be using NGINX, PHP-FPM, and a Composer-based setup including Magento 2 preinstalled. No other prerequisites are required. How to do it... For the purpose of this recipe, let's assume that we need to set up a catalog including subcategories. The following steps will guide you through this: First, log in to the backend of Magento 2 and go to Products | Categories. As we have already created the Root Catalogs, we start with the Root English catalog. Click on the Root English catalog on the left and then select the Add Subcategory button above the menu. Now commit the following, and repeat all the steps for the other Root Catalogs: Name: Shoes (Schuhe) (Chaussures), Is Active: Yes, Page Title: Shoes (Schuhe) (Chaussures). Name: Clothes (Kleider) (Vêtements), Is Active: Yes, Page Title: Clothes (Kleider) (Vêtements). As we have created the first level of our catalog, we can continue with the second level. Now click on the first-level category that you need to extend with a subcategory and select the Add Subcategory button. Now commit the following, and repeat all the steps for the other Root Catalogs: Name: Men (Männer) (Hommes), Is Active: Yes, Page Title: Men (Männer) (Hommes). Name: Women (Frau) (Femmes), Is Active: Yes, Page Title: Women (Frau) (Femmes). Congratulations, you just finished configuring subcategories in Magento 2. Now let's open a browser and surf to the domain names you created: yourdomain.com, yourdomain.de, and yourdomain.fr. Your categories should now look as follows: How it works… Let's recap and find out what we did throughout this recipe. In steps 1 through 4, we created subcategories for the English, German, and French stores. In this recipe, we created a dedicated Root Catalog for every website. This way, every store can be configured using its own tax and shipping rules. There's more… In our example, we only submitted Name, Is Active, and Page Title.
You may continue to commit the Description, Image, Meta Keywords, and Meta Description fields. By default, the URL key is the same as the Name field; you can change this depending on your SEO needs. Every category or subcategory has a default page layout defined by the theme. You may need to override this. Go to the Custom Design tab and click the drop-down menu of Page Layout. We can choose from the following options: 1 column, 2 columns with left bar, 2 columns with right bar, 3 columns, or Empty. Managing an attribute set Every product has a unique DNA; some products, such as shoes, could have different colors, brands, and sizes, while a snowboard could have weight, length, torsion, manufacturer, and style. Setting up a website with every possible attribute does not make sense. Depending on the products that you sell, you should create attributes that apply per website. When creating products for your website, attributes are the key elements and need to be thought through. What attributes do I need, and how many? How many values does each one need? These are all questions that can have a great impact on your website and, not to forget, its performance. Creating an attribute such as color and storing 100 K different key values will not improve your overall speed and user experience. Always think things through. After creating the attributes, we combine them in attribute sets that can be picked when starting to create a product. Some attributes can be used more than once, while others are unique to one product or attribute set. Getting ready For this recipe, we will use a Droplet created at DigitalOcean, https://www.digitalocean.com/. We will be using NGINX, PHP-FPM, and a Composer-based setup including Magento 2 preinstalled. No other prerequisites are required. How to do it... For the purpose of this recipe, let's assume that we need to create product attributes and sets. The following steps will guide you through this: First, log in to the backend of Magento 2 and go to Stores | Products. As we are using a vanilla setup, only system attributes and one attribute set are installed. Now click on Add New Attribute and commit the following data in the Properties tab: Attribute Properties: Default label: shoe_size, Catalog Input Type for Store Owners: Dropdown, Values Required: No. Manage Options (values of your attribute), with one row per value in the English, Admin, French, and German columns:

English  Admin  French  German
4        4      35      35
4.5      4.5    35      35
5        5      35-36   35-36
5.5      5.5    36      36
6        6      36-37   36-37
6.5      6.5    37      37
7        7      37-38   37-38
7.5      7.5    38      38
8        8      38-39   38-39
8.5      8.5    39      39

Advanced Attribute Properties: Scope: Global, Unique Value: No, Add to Column Options: Yes, Use in Filter Options: Yes. As we have already set up a multi-website that sells shoes and clothes, we stick with this. The attributes that we need to sell shoes are: shoe_size, shoe_type, width, color, gender, and occasion. Continue with the rest of the chart accordingly (http://www.shoesizingcharts.com). Click on Save and Continue Edit now, and continue on the Manage Labels tab with the following information for Manage Titles (Size, Color, etc.):
English: Size, French: Taille, German: Größe. Click on Save and Continue Edit now, and continue on the Storefront Properties tab with the following information: Use in Search: No, Comparable in Storefront: No, Use in Layered Navigation: Filterable (with result), Use in Search Result Layered Navigation: No, Position: 0, Use for Promo Rule Conditions: No, Allow HTML Tags on Storefront: Yes, Visible on Catalog Pages on Storefront: Yes, Used in Product Listing: No, Used for Sorting in Product Listing: No. Click on Save Attribute now and clear the cache. If you have set up index management to run through the Magento 2 cronjob, it will update the newly created attribute automatically. The configuration for the additional shoe_type, width, color, gender, and occasion attributes can be downloaded at https://github.com/mage2cookbook/chapter4. After creating all of the attributes, we combine them in an attribute set called Shoes. Go to Stores | Attribute Set, click on Add Attribute Set, and commit the following data: Name: Shoes, Based On: Default. Now click on the Add New button in the Groups section and commit the group name Shoes. The newly created group is now located at the bottom of the list. You may need to scroll down before you see it. It is possible to drag and drop the group higher up in the list. Now drag and drop the created attributes shoe_size, shoe_type, width, color, gender, and occasion into the group and save the configuration. Depending on your settings, a notice indicates that the cron job will update the index automatically. Congratulations, you just finished creating attributes and attribute sets in Magento 2. This can be seen in the following screenshot: How it works… Let's recap and find out what we did throughout this recipe. In steps 1 through 10, we created attributes that will be used in an attribute set. The attributes and sets are the fundamentals of every website. In steps 1 through 5, we created multiple attributes to define all the details about the shoes and clothes that we would like to sell. Some attributes are later used as configurable values on the frontend, while others only indicate the gender or occasion. In steps 6 through 9, we connected the attributes to the related attribute set so that, when creating a product, all the correct elements are available. There's more… After creating the attribute set for Shoes, we continue to create an attribute set for Clothes. Use the following attributes to create the set: color, occasion, apparel_type, sleeve_length, fit, size, length, and gender. Follow the same steps as we did before to create a new attribute set. You may reuse the attributes color, occasion, and gender. All detailed attributes can be found at https://github.com/mage2cookbook/chapter4#clothes-set. The following is a screenshot of the Clothes attribute set: Summary In this article, you learned how to create a Root Catalog and subcategories and how to manage attribute sets. For more information on Magento 2, refer to the following books by Packt Publishing: Magento 2 Development Cookbook (https://www.packtpub.com/web-development/magento-2-development-cookbook) Magento 2 Developer's Guide (https://www.packtpub.com/web-development/magento-2-developers-guide) Resources for Article: Further resources on this subject: Social Media in Magento [article] Upgrading from Magento 1 [article] Social Media and Magento [article]
Common Grunt Plugins

Packt
07 Mar 2016
9 min read
In this article by Douglas Reynolds, author of the book Learning Grunt, you will learn about Grunt plugins, as they are the core of Grunt's functionality and an important aspect of Grunt: plugins are what we use in order to design an automated build process. (For more resources related to this topic, see here.) Common Grunt plugins and their purposes At this point, you should be asking yourself what plugins can benefit you the most and why. Once you ask these questions, you may find that a natural response will be to ask further questions, such as "what plugins are available?" This is exactly the intended purpose of this section: to introduce useful Grunt plugins and describe their intended purposes. contrib-watch This is, in the author's opinion, probably the most useful plugin available. The contrib-watch plugin responds to changes in files defined by you and runs additional tasks upon being triggered by the changed-file events. For example, let's say that you make changes to a JavaScript file. When you save, and these changes are persisted, contrib-watch will detect that the file being watched has changed. An example workflow might be to make and save changes in a JavaScript file, then run lint on the file. You might paste the code into a lint tool, such as http://www.jslint.com/, or you might run an editor plugin on the file to ensure that your code is valid and has no defined errors. Using Grunt and contrib-watch, you can configure contrib-watch to automatically run a Grunt linting plugin so that every time you make changes to your JavaScript files, they are automatically linted. Installation of contrib-watch is straightforward and accomplished using the following npm install command:

npm install grunt-contrib-watch --save-dev

The contrib-watch plugin will now be installed into your node_modules directory. This is located in the root of your project; see the Angular-Seed project for an example. Additionally, contrib-watch will be registered in package.json; you will see something similar to the following in package.json when you run this command:

"devDependencies": {
  "grunt": "~0.4.5",
  "grunt-contrib-watch": "~0.4.0"
}

Notice the tilde character (~) in the grunt-contrib-watch line; the tilde specifies that the most recent patch version may be used when updating. Therefore, for instance, with ~0.5.3, an update would use 0.5.4 if available; however, it would not use 0.6.x, as that is a higher minor version. There is also a caret character (^) that you may see. It allows updates within the most recent major version; for example, with ^1.2.3, version 1.3.0 would be allowed, while 2.x versions would not. At this point, contrib-watch is ready to be configured into your project in a gruntfile; we will look more into the gruntfile later. It should be noted that this, and many other tasks, can be run manually. In the case of contrib-watch, once installed and configured, you can run the grunt watch command to start watching the files. It will continue to watch until you end your session. The contrib-watch plugin has some useful options. While we won't cover all of the available options, the following are some notable options that you should be aware of. Make sure to review the documentation for the full listing of options: Options.event: This will allow you to configure contrib-watch to only trigger when certain event types occur. The available types are all, changed, added, and deleted.
options.event: This allows you to configure contrib-watch to trigger only when certain event types occur. The available types are all, changed, added, and deleted, and you may configure more than one type if you wish. The all type triggers on any file change, changed responds to modifications, added responds to new files, and deleted is triggered on removal of a file.
options.reload: This triggers a reload of the watch task when any of the watched files change. A good example of this is when the file in which the watch is configured, gruntfile.js, itself changes. This reloads the gruntfile and restarts contrib-watch, watching the new version of gruntfile.js.
options.livereload: This is different from reload, so don't confuse the two. The livereload option starts a server that enables live reloading. What this means is that when files are changed, your server automatically updates with the changed files. Take, for instance, a web application running in a browser: rather than saving your files and refreshing your browser to get the changed files, livereload automatically reloads your app in the browser for you.
contrib-jshint The contrib-jshint plugin runs automated JavaScript error detection and helps identify potential problems in your code that may surface at runtime. When the plugin runs, it scans your JavaScript code and issues warnings based on the preconfigured options. There is a large number of error messages that jshint might produce, and it can be difficult at times to understand what exactly a particular message refers to. Some examples are shown in the following list:
The array literal notation [] is preferable
'{a}' is already defined
Avoid arguments.{a}
Bad assignment
Confusing minuses
The list goes on, and there are resources such as http://jslinterrors.com/ whose purpose is to help you understand what a particular warning or error message means. Installation of contrib-jshint follows the same pattern as other plugins, using npm to install the plugin in your project, as shown in the following: npm install grunt-contrib-jshint --save-dev This will install the contrib-jshint plugin in your project's node-modules directory and register the plugin in the devDependencies section of the package.json file at the root of your project. It will be similar to the following:
"devDependencies": {
  "grunt": "~0.4.5",
  "grunt-contrib-jshint": "~0.4.5"
}
Similar to the other plugins, you may manually run contrib-jshint using the grunt jshint command. The contrib-jshint plugin wraps jshint, therefore any of the options available in jshint may be passed to it. Take a look at http://jshint.com/docs/options/ for a complete listing of the jshint options. Options are configured in the gruntfile.js file, which we will cover in detail later in this book. Some examples of options are as follows:
curly: This enforces that curly braces are used in code blocks
undef: This ensures that all the variables have been declared
maxparams: This checks to make sure that the number of arguments in a method does not exceed a certain limit
The contrib-jshint plugin allows you to configure the files that will be linted, the order in which the linting will occur, and even control linting before and after concatenation. Additionally, contrib-jshint allows you to suppress warnings in the configuration options using the ignore_warning option.
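To tie these two plugins together, the following is a minimal gruntfile.js sketch; the src/ directory, option values, and task names are assumptions made for this example rather than anything prescribed by the plugins:
module.exports = function (grunt) {
  grunt.initConfig({
    jshint: {
      options: { curly: true, undef: true, maxparams: 4 },
      all: ['src/**/*.js']
    },
    watch: {
      scripts: {
        files: ['src/**/*.js'],
        tasks: ['jshint'], // lint automatically on every save
        options: { event: ['changed', 'added'] }
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-jshint');
  grunt.loadNpmTasks('grunt-contrib-watch');
  grunt.registerTask('default', ['jshint', 'watch']);
};
With this in place, running grunt lints the source once and then keeps watching; saving a file under src/ triggers jshint again automatically.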
contrib-uglify Compression and minification are important for reducing file sizes, which contributes to better loading times and improved performance. The contrib-uglify plugin provides this compression and minification by optimizing JavaScript code: it parses the JavaScript and outputs regenerated, optimized code with unneeded line breaks and whitespace removed and, for example, shortened variable names. The contrib-uglify plugin is installed in your project using the npm install command, just as with all the other Grunt plugins, as follows: npm install grunt-contrib-uglify --save-dev After you run this command, contrib-uglify will be installed in your node-modules directory at the root of your application. The plugin will also be registered in the devDependencies section of package.json. You should see something similar to the following in devDependencies:
"devDependencies": {
  "grunt": "~0.4.5",
  "grunt-contrib-uglify": "~0.4.0"
}
In addition to running as an automated task, the contrib-uglify plugin may be run manually by issuing the grunt uglify command. The contrib-uglify plugin is configured to process specific files as defined in the gruntfile.js configuration file. Additionally, contrib-uglify will have defined destination files that will be created for the processed, minified JavaScript. There is also a beautify option that can be used to produce readable, beautified output instead of fully minified code, should you wish to easily debug your JavaScript. A useful option available in contrib-uglify is banners. Banners allow you to configure banner comments to be added to the minified output files. For example, a banner could be created with the current date and time, author, version number, and any other important information that should be included. You may reference your package.json file in order to get information, such as the package name and version, directly from the package.json configuration file. Another notable option is the ability to configure directory-level compiling of files. You achieve this by configuring the files option to use wildcard path references with a file extension, such as **/*.js. This is useful when you want to minify all the contents of a directory. contrib-less The contrib-less plugin compiles LESS files into CSS files. LESS provides extensibility to standard CSS by allowing variables, mixins (declaring a group of style declarations at once that can be reused anywhere in the stylesheet), and even conditional logic to manage styles throughout the document. Just as with other plugins, contrib-less is installed in your project using the npm install command, with the following command: npm install grunt-contrib-less --save-dev The npm install will add contrib-less to your node-modules directory, located at the root of your application. As we are using --save-dev, the task will be registered in the devDependencies section of package.json. The registration will look something similar to the following:
"devDependencies": {
  "grunt": "~0.4.5",
  "grunt-contrib-less": "~0.4.5"
}
Typical of Grunt tasks, you may also run contrib-less manually using the grunt less command. The contrib-less plugin is configured using path and file options that define the locations of source and destination output files. The contrib-less plugin can also be configured with multiple environment-type targets, for example dev, test, and production, in order to apply the different options that may be needed for each environment. Some typical options used in contrib-less include the following:
paths: These are the directories that should be scanned
compress: This shows whether to compress output to remove the whitespace
plugins: This is the mechanism for including additional plugins in the flow of processing
banner: This shows the banner to use in the compiled destination files
There are several more options that are not listed here; make sure to refer to the documentation for the full listing of contrib-less options and example usage.
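As a concrete illustration, here is a minimal gruntfile.js sketch wiring up both plugins; the source and destination paths, the banner text, and the target names are assumptions for this example, not defaults of either plugin:
module.exports = function (grunt) {
  grunt.initConfig({
    pkg: grunt.file.readJSON('package.json'),
    uglify: {
      options: {
        // banner built from package.json, as described above
        banner: '/*! <%= pkg.name %> v<%= pkg.version %> - ' +
                '<%= grunt.template.today("yyyy-mm-dd") %> */\n'
      },
      build: {
        files: { 'dist/app.min.js': ['src/**/*.js'] }
      }
    },
    less: {
      production: {
        options: { paths: ['less'], compress: true },
        files: { 'dist/styles.css': 'less/styles.less' }
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.loadNpmTasks('grunt-contrib-less');
  grunt.registerTask('build', ['uglify', 'less']);
};
Running grunt build would then emit both the minified script with its banner comment and the compressed stylesheet into dist/.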
Summary In this article, we covered some of the basic Grunt plugins: contrib-watch, contrib-jshint, contrib-uglify, and contrib-less. Resources for Article: Further resources on this subject: Grunt in Action [article] Optimizing JavaScript for iOS Hybrid Apps [article] Welcome to JavaScript in the full stack [article]


App Development Using React Native vs. Android/iOS

Manuel Nakamurakare
03 Mar 2016
6 min read
Until two years ago, I had exclusively done native Android development. I had never developed iOS apps, but that changed last year, when my company decided that I had to learn iOS development. I was super excited at first, but all that excitement started to fade away as I started developing our iOS app and quickly saw my productivity declining. I realized I had to re-learn basically everything I had learned in Android: the framework, the tools, the IDE, and so on. I am a person who likes going to meetups, so suddenly I was going to both Android and iOS meetups, since I needed to keep up to date with the latest features of both platforms. It was very time-consuming and at the same time somewhat frustrating, since I felt my learning pace was not fast enough. Then, React Native for iOS came out. We didn't start using it until mid 2015. We started playing around with it and we really liked it. What is React Native? React Native is a technology created by Facebook. It allows developers to use JavaScript in order to create mobile apps for both Android and iOS that look, feel, and are native. A good way to explain how it works is to think of it as a wrapper of native code: many of the components that have been created are basically wrappers around native iOS or Android functionality. React Native has been gaining a lot of traction since it was released because it has changed the game in many ways. Two Ecosystems One reason why mobile development is so difficult and time-consuming is the fact that two entirely different ecosystems need to be learned. If you want to develop an iOS app, you need to learn Swift or Objective-C and Cocoa Touch. If you want to develop Android apps, you need to learn Java and the Android SDK. I have written code in all three languages: Swift, Objective-C, and Java. I don't really want to get into the argument of comparing which of these is better. However, what I can say is that they are different, and learning each of them takes a considerable amount of time. A similar thing happens with the frameworks: Cocoa Touch and the Android SDK. Of course, with each of these frameworks there is also a big bag of other tools, such as testing tools, libraries, and packages. And we are not even considering that developers need to stay up to date with the latest features each ecosystem offers. On the other hand, if you choose to develop with React Native, you will, most of the time, only need to learn one set of tools. It is true that there are many things you will need to get familiar with: JavaScript, Node, React Native, and so on. However, it is only one set of tools to learn. Reusability Reusability is a big thing in software development. Whenever you are able to reuse code, that is a good thing. React Native is not meant to be a write once, run everywhere platform: whenever you build an app with it, you still have to build a UI that looks and feels native on each platform. For this reason, some of the UI code needs to be written according to each platform's best practices and standards. However, there will always be some common UI code that can be shared, together with all of the logic. Being able to share code has many advantages: better use of human resources, less code to maintain, less chance of bugs, and features on both platforms being more likely to reach parity. Learn Once, Write Everywhere As I mentioned before, React Native is not meant to be a write once, run everywhere platform.
As the Facebook team that created React Native says, the goal is to be a learn once, write everywhere platform. And this totally makes sense: since all of the code for Android and iOS is written using the same set of tools, it is very easy to imagine one team of developers building the app for both platforms. This is not something that usually happens with native Android and iOS development, because there are very few developers who do both. I would even go further and say that a team developing a web app using React.js will not have a very hard time learning React Native development and starting to build mobile apps. Declarative API When you build applications using React Native, your UI is more predictable and easier to understand, since it has a declarative API as opposed to an imperative one. The difference between these approaches is that with an imperative API, when your application has different states, you need to keep track of all the changes in the UI and modify them yourself. This can become a complex and very unpredictable task as your application grows. With React Native's declarative API, you just need to describe what the UI should look like for the current state, without having to keep track of the older ones. Hot Reloading The usual developer routine when coding is to test changes every time some code has been written. For this to happen, the application needs to be compiled and then installed on either a simulator or a real device. With React Native, you don't, most of the time, need to recompile the app every time you make a change. You just refresh the app in the simulator, emulator, or device and that's it. There is even a feature called Live Reload that refreshes the app automatically every time it detects a change in the code. Isn't that cool? Open Source React Native is still a very new technology; it was made open source less than a year ago. It is not perfect yet. It still has some bugs but, overall, I think it is ready to be used in production for most mobile apps. There are still some features available in the native frameworks that have not been exposed to React Native, but that is not really a big deal. I can tell from experience that exposing them yourself is somewhat easy if you are familiar with native development. Also, since React Native is open source, there is a big community of developers helping to implement more features, fix bugs, and help people. Most of the time, if you are trying to build something that is common in mobile apps, it is very likely that it has already been built. As you can see, I am really bullish on React Native. I miss native Android and iOS development, but I really feel excited to be using React Native these days. I really think React Native is a game-changer in mobile development, and I cannot wait for it to become the go-to platform for mobile development!


Making a Web Server in Node.js

Packt
25 Feb 2016
38 min read
In this article, we will cover the following topics:
Setting up a router
Serving static files
Caching content in memory for immediate delivery
Optimizing performance with streaming
Securing against filesystem hacking exploits
(For more resources related to this topic, see here.) One of the great qualities of Node is its simplicity. Unlike PHP or ASP, there is no separation between the web server and code, nor do we have to customize large configuration files to get the behavior we want. With Node, we can create the web server, customize it, and deliver content. All this can be done at the code level. This article demonstrates how to create a web server with Node and feed content through it, while implementing security and performance enhancements to cater for various situations. If we don't have Node installed yet, we can head to http://nodejs.org and hit the INSTALL button appearing on the homepage. This will download the relevant file to install Node on our operating system. Setting up a router In order to deliver web content, we need to make a Uniform Resource Identifier (URI) available. This recipe walks us through the creation of an HTTP server that exposes routes to the user. Getting ready First, let's create our server file. If our main purpose is to expose server functionality, it's general practice to call the file server.js (because the npm start command runs the node server.js command by default). We could put this new server.js file in a new folder. It's also a good idea to install and use supervisor. We use npm (the module downloading and publishing command-line application that ships with Node) to install it. On the command-line utility, we write the following command: sudo npm -g install supervisor Essentially, sudo allows administrative privileges for Linux and Mac OS X systems. If we are using Node on Windows, we can drop the sudo part in any of our commands. The supervisor module will conveniently autorestart our server when we save our changes. To kick things off, we can start our server.js file with the supervisor module by executing the following command: supervisor server.js For more on possible arguments and the configuration of supervisor, check out https://github.com/isaacs/node-supervisor. How to do it... In order to create the server, we need the http module. So let's load it and use the http.createServer method as follows:
var http = require('http');
http.createServer(function (request, response) {
  response.writeHead(200, {'Content-Type': 'text/html'});
  response.end('Woohoo!');
}).listen(8080);
Now, if we save our file and access localhost:8080 in a web browser or using curl, our browser (or curl) will exclaim: Woohoo! But the same will occur at localhost:8080/foo. Indeed, any path will render the same behavior. So let's build in some routing. We can use the path module to extract the basename variable of the path (the final part of the path) and reverse any URI encoding from the client with decodeURI as follows:
var http = require('http');
var path = require('path');
http.createServer(function (request, response) {
  var lookup = path.basename(decodeURI(request.url));
We now need a way to define our routes. One option is to use an array of objects as follows:
var pages = [
  {route: '', output: 'Woohoo!'},
  {route: 'about', output: 'A simple routing with Node example'},
  {route: 'another page', output: function() {return 'Here\'s ' + this.route;}},
];
Our pages array should be placed above the http.createServer call.
Within our server, we need to loop through our array and see if the lookup variable matches any of our routes. If it does, we can supply the output. We'll also implement some 404 error-related handling as follows:
http.createServer(function (request, response) {
  var lookup = path.basename(decodeURI(request.url));
  pages.forEach(function(page) {
    if (page.route === lookup) {
      response.writeHead(200, {'Content-Type': 'text/html'});
      response.end(typeof page.output === 'function'
        ? page.output() : page.output);
    }
  });
  if (!response.finished) {
    response.writeHead(404);
    response.end('Page Not Found!');
  }
}).listen(8080);
How it works... The callback function we provide to http.createServer gives us all the functionality we need to interact with our server through the request and response objects. We use request to obtain the requested URL and then we acquire its basename with path. We also use decodeURI, without which the another page route would fail, as our code would try to match another%20page against our pages array and return false. Once we have our basename, we can match it in any way we want. We could send it in a database query to retrieve content, use regular expressions to effectuate partial matches, or we could match it to a filename and load its contents. We could have used a switch statement to handle routing, but our pages array has several advantages: it's easier to read, easier to extend, and can be seamlessly converted to JSON. We loop through our pages array using forEach. Node is built on Google's V8 engine, which provides us with a number of ECMAScript 5 (ES5) features. These features can't be used in all browsers as they're not yet universally implemented, but using them in Node is no problem! The forEach function is an ES5 implementation; the ES3 way is to use the less convenient for loop. While looping through each object, we check its route property. If we get a match, we write the 200 OK status and content-type headers, and then we end the response with the object's output property. The response.end method allows us to pass a parameter to it, which it writes just before finishing the response. In response.end, we have used a ternary operator (?:) to conditionally call page.output as a function or simply pass it as a string. Notice that the another page route contains a function instead of a string. The function has access to its parent object through the this variable, and allows for greater flexibility in assembling the output we want to provide. In the event that there is no match in our forEach loop, response.end would never be called and therefore the client would continue to wait for a response until it times out. To avoid this, we check the response.finished property and, if it's false, we write a 404 header and end the response. The response.finished flag is affected by the forEach callback, yet it's not nested within the callback. Callback functions are mostly used for asynchronous operations, so on the surface this looks like a potential race condition; however, the forEach loop does not operate asynchronously; it blocks until all loops are complete. There's more... There are many ways to extend and alter this example. There are also some great non-core modules available that do the legwork for us. Simple multilevel routing Our routing so far only deals with a single-level path. A multilevel path (for example, /about/node) will simply return a 404 error message.
We can alter our object to reflect a subdirectory-like structure, remove path, and use request.url for our routes instead of path.basename, as follows:
var http = require('http');
var pages = [
  {route: '/', output: 'Woohoo!'},
  {route: '/about/this', output: 'Multilevel routing with Node'},
  {route: '/about/node', output: 'Evented I/O for V8 JavaScript.'},
  {route: '/another page', output: function () {return 'Here\'s ' + this.route; }}
];
http.createServer(function (request, response) {
  var lookup = decodeURI(request.url);
When serving static files, request.url must be cleaned prior to fetching a given file. Check out the Securing against filesystem hacking exploits recipe in this article. Multilevel routing could be taken further; we could build and then traverse a more complex object as follows:
{route: 'about', childRoutes: [
  {route: 'node', output: 'Evented I/O for V8 JavaScript'},
  {route: 'this', output: 'Complex Multilevel Example'}
]}
After the third or fourth level, this object would become a leviathan to look at. We could alternatively create a helper function to define our routes that essentially pieces our object together for us. Alternatively, we could use one of the excellent non-core routing modules provided by the open source Node community. Excellent solutions already exist that provide helper methods to handle the increasing complexity of scalable multilevel routing. Parsing the querystring module Two other useful core modules are url and querystring. The url.parse method allows two parameters: first the URL string (in our case, this will be request.url) and second a Boolean parameter named parseQueryString. If the latter is set to true, it lazy loads the querystring module (saving us the need to require it) to parse the query into an object. This makes it easy for us to interact with the query portion of a URL, as shown in the following code:
var http = require('http');
var url = require('url');
var pages = [
  {id: '1', route: '', output: 'Woohoo!'},
  {id: '2', route: 'about', output: 'A simple routing with Node example'},
  {id: '3', route: 'another page', output: function () {
    return 'Here\'s ' + this.route; }
  },
];
http.createServer(function (request, response) {
  var id = url.parse(decodeURI(request.url), true).query.id;
  if (id) {
    pages.forEach(function (page) {
      if (page.id === id) {
        response.writeHead(200, {'Content-Type': 'text/html'});
        response.end(typeof page.output === 'function'
          ? page.output() : page.output);
      }
    });
  }
  if (!response.finished) {
    response.writeHead(404);
    response.end('Page Not Found');
  }
}).listen(8080);
With the added id properties, we can access our object data by, for instance, localhost:8080?id=2. The routing modules There's an up-to-date list of various routing modules for Node at https://github.com/joyent/node/wiki/modules#wiki-web-frameworks-routers. These community-made routers cater to various scenarios. It's important to research the activity and maturity of a module before taking it into a production environment. NodeZoo (http://nodezoo.com) is an excellent tool to research the state of a Node module. See also The Serving static files and Securing against filesystem hacking exploits recipes discussed in this article Serving static files If we have information stored on disk that we want to serve as web content, we can use the fs (filesystem) module to load our content and pass it through the http.createServer callback.
This is a basic conceptual starting point to serve static files; as we will learn in the following recipes, there are much more efficient solutions. Getting ready We'll need some files to serve. Let's create a directory named content, containing the following three files: index.html styles.css script.js Add the following code to the HTML file index.html: <html>   <head>     <title>Yay Node!</title>     <link rel=stylesheet href=styles.css type=text/css>     <script src=script.js type=text/javascript></script>   </head>   <body>     <span id=yay>Yay!</span>   </body> </html> Add the following code to the script.js JavaScript file: window.onload = function() { alert('Yay Node!'); }; And finally, add the following code to the CSS file style.css: #yay {font-size:5em;background:blue;color:yellow;padding:0.5em} How to do it... As in the previous recipe, we'll be using the core modules http and path. We'll also need to access the filesystem, so we'll require fs as well. With the help of the following code, let's create the server and use the path module to check if a file exists: var http = require('http'); var path = require('path'); var fs = require('fs'); http.createServer(function (request, response) {   var lookup = path.basename(decodeURI(request.url)) ||     'index.html';   var f = 'content/' + lookup;   fs.exists(f, function (exists) {     console.log(exists ? lookup + " is there"     : lookup + " doesn't exist");   }); }).listen(8080); If we haven't already done it, then we can initialize our server.js file by running the following command: supervisor server.js Try loading localhost:8080/foo. The console will say foo doesn't exist, because it doesn't. The localhost:8080/script.js URL will tell us that script.js is there, because it is. Before we can serve a file, we are supposed to let the client know the content-type header, which we can determine from the file extension. So let's make a quick map using an object as follows: var mimeTypes = {   '.js' : 'text/javascript',   '.html': 'text/html',   '.css' : 'text/css' }; We could extend our mimeTypes map later to support more types. Modern browsers may be able to interpret certain mime types (like text/javascript), without the server sending a content-type header, but older browsers or less common mime types will rely upon the correct content-type header being sent from the server. Remember to place mimeTypes outside of the server callback, since we don't want to initialize the same object on every client request. If the requested file exists, we can convert our file extension into a content-type header by feeding path.extname into mimeTypes and then pass our retrieved content-type to response.writeHead. If the requested file doesn't exist, we'll write out a 404 error and end the response as follows: //requires variables, mimeType object... http.createServer(function (request, response) {     var lookup = path.basename(decodeURI(request.url)) ||     'index.html';   var f = 'content/' + lookup;   fs.exists(f, function (exists) {     if (exists) {       fs.readFile(f, function (err, data) {         if (err) {response.writeHead(500); response.end('Server           Error!'); return; }         var headers = {'Content-type': mimeTypes[path.extname          (lookup)]};         response.writeHead(200, headers);         response.end(data);       });       return;     }     response.writeHead(404); //no such file found!     response.end();   }); }).listen(8080); At the moment, there is still no content sent to the client. 
We have to get this content from our file, so we wrap the response handling in an fs.readFile method callback as follows: //http.createServer, inside fs.exists: if (exists) {   fs.readFile(f, function(err, data) {     var headers={'Content-type': mimeTypes[path.extname(lookup)]};     response.writeHead(200, headers);     response.end(data);   });  return; } Before we finish, let's apply some error handling to our fs.readFile callback as follows: //requires variables, mimeType object... //http.createServer,  path exists, inside if(exists):  fs.readFile(f, function(err, data) {     if (err) {response.writeHead(500); response.end('Server       Error!');  return; }     var headers = {'Content-type': mimeTypes[path.extname      (lookup)]};     response.writeHead(200, headers);     response.end(data);   }); return; } Notice that return stays outside of the fs.readFile callback. We are returning from the fs.exists callback to prevent further code execution (for example, sending the 404 error). Placing a return statement in an if statement is similar to using an else branch. However, the pattern of the return statement inside the if loop is encouraged instead of if else, as it eliminates a level of nesting. Nesting can be particularly prevalent in Node due to performing a lot of asynchronous tasks, which tend to use callback functions. So, now we can navigate to localhost:8080, which will serve our index.html file. The index.html file makes calls to our script.js and styles.css files, which our server also delivers with appropriate mime types. We can see the result in the following screenshot: This recipe serves to illustrate the fundamentals of serving static files. Remember, this is not an efficient solution! In a real world situation, we don't want to make an I/O call every time a request hits the server; this is very costly especially with larger files. In the following recipes, we'll learn better ways of serving static files. How it works... Our script creates a server and declares a variable called lookup. We assign a value to lookup using the double pipe || (OR) operator. This defines a default route if path.basename is empty. Then we pass lookup to a new variable that we named f in order to prepend our content directory to the intended filename. Next, we run f through the fs.exists method and check the exist parameter in our callback to see if the file is there. If the file does exist, we read it asynchronously using fs.readFile. If there is a problem accessing the file, we write a 500 server error, end the response, and return from the fs.readFile callback. We can test the error-handling functionality by removing read permissions from index.html as follows: chmod -r index.html Doing so will cause the server to throw the 500 server error status code. To set things right again, run the following command: chmod +r index.html chmod is a Unix-type system-specific command. If we are using Windows, there's no need to set file permissions in this case. As long as we can access the file, we grab the content-type header using our handy mimeTypes mapping object, write the headers, end the response with data loaded from the file, and finally return from the function. If the requested file does not exist, we bypass all this logic, write a 404 error message, and end the response. There's more... The favicon icon file is something to watch out for. We will explore the file in this section. The favicon gotcha When using a browser to test our server, sometimes an unexpected server hit can be observed. 
This is the browser requesting the default favicon.ico icon file that servers can provide. Apart from the initial confusion of seeing additional hits, this is usually not a problem. If the favicon request does begin to interfere, we can handle it as follows: if (request.url === '/favicon.ico') {   console.log('Not found: ' + f);   response.end();   return; } If we wanted to be more polite to the client, we could also inform it of a 404 error by using response.writeHead(404) before issuing response.end. See also The Caching content in memory for immediate delivery recipe The Optimizing performance with streaming recipe The Securing against filesystem hacking exploits recipe Caching content in memory for immediate delivery Directly accessing storage on each client request is not ideal. For this task, we will explore how to enhance server efficiency by accessing the disk only on the first request, caching the data from file for that first request, and serving all further requests out of the process memory. Getting ready We are going to improve upon the code from the previous task, so we'll be working with server.js and in the content directory, with index.html, styles.css, and script.js. How to do it... Let's begin by looking at our following script from the previous recipe Serving Static Files: var http = require('http'); var path = require('path'); var fs = require('fs');    var mimeTypes = {   '.js' : 'text/javascript',   '.html': 'text/html',   '.css' : 'text/css' };   http.createServer(function (request, response) {   var lookup = path.basename(decodeURI(request.url)) ||     'index.html';   var f = 'content/'+lookup;   fs.exists(f, function (exists) {     if (exists) {       fs.readFile(f, function(err,data) {         if (err) {           response.writeHead(500); response.end('Server Error!');           return;         }         var headers = {'Content-type': mimeTypes[path.extname          (lookup)]};         response.writeHead(200, headers);         response.end(data);       });     return;     }     response.writeHead(404); //no such file found!     response.end('Page Not Found');   }); } We need to modify this code to only read the file once, load its contents into memory, and respond to all requests for that file from memory afterwards. To keep things simple and preserve maintainability, we'll extract our cache handling and content delivery into a separate function. So above http.createServer, and below mimeTypes, we'll add the following: var cache = {}; function cacheAndDeliver(f, cb) {   if (!cache[f]) {     fs.readFile(f, function(err, data) {       if (!err) {         cache[f] = {content: data} ;       }       cb(err, data);     });     return;   }   console.log('loading ' + f + ' from cache');   cb(null, cache[f].content); } //http.createServer A new cache object and a new function called cacheAndDeliver have been added to store our files in memory. Our function takes the same parameters as fs.readFile so we can replace fs.readFile in the http.createServer callback while leaving the rest of the code intact as follows: //...inside http.createServer:   fs.exists(f, function (exists) {   if (exists) {     cacheAndDeliver(f, function(err, data) {       if (err) {         response.writeHead(500);         response.end('Server Error!');         return; }       var headers = {'Content-type': mimeTypes[path.extname(f)]};       response.writeHead(200, headers);       response.end(data);     }); return;   } //rest of path exists code (404 handling)... 
When we execute our server.js file and access localhost:8080 twice, consecutively, the second request causes the console to display the following output:
loading content/index.html from cache
loading content/styles.css from cache
loading content/script.js from cache
How it works... We defined a function called cacheAndDeliver, which, like fs.readFile, takes a filename and callback as parameters. This is great because we can pass the exact same callback of fs.readFile to cacheAndDeliver, padding the server out with caching logic without adding any extra visual complexity to the inside of the http.createServer callback. As it stands, the worth of abstracting our caching logic into an external function is arguable, but the more we build on the server's caching abilities, the more feasible and useful this abstraction becomes. Our cacheAndDeliver function checks to see if the requested content is already cached. If not, we call fs.readFile and load the data from disk. Once we have this data, we may as well hold on to it, so it's placed into the cache object referenced by its file path (the f variable). The next time anyone requests the file, cacheAndDeliver will see that we have the file stored in the cache object and will issue an alternative callback containing the cached data. Notice that we fill the cache[f] property with another new object containing a content property. This makes it easier to extend the caching functionality in the future, as we would just have to place extra properties into our cache[f] object and supply logic that interfaces with these properties accordingly. There's more... If we were to modify the files we are serving, the changes wouldn't be reflected until we restart the server. We can do something about that. Reflecting content changes To detect whether a requested file has changed since we last cached it, we must know when the file was cached and when it was last modified. To record when the file was last cached, let's extend the cache[f] object as follows:
cache[f] = {content: data,
            timestamp: Date.now() // store a Unix time stamp
};
That was easy! Now let's find out when the file was updated last. The fs.stat method returns an object as the second parameter of its callback. This object contains the same useful information as the command-line GNU (GNU's Not Unix!) coreutils stat. The fs.stat function supplies three time-related properties: last accessed (atime), last modified (mtime), and last changed (ctime). The difference between mtime and ctime is that ctime will reflect any alterations to the file, whereas mtime will only reflect alterations to the content of the file. Consequently, if we changed the permissions of a file, ctime would be updated but mtime would stay the same. We want to pay attention to permission changes as they happen, so let's use the ctime property as shown in the following code:
//requires and mimeType object....
var cache = {};
function cacheAndDeliver(f, cb) {
  fs.stat(f, function (err, stats) {
    if (err) { return console.log('Oh no!, Error', err); }
    var lastChanged = Date.parse(stats.ctime),
    isUpdated = (cache[f]) && lastChanged > cache[f].timestamp;
    if (!cache[f] || isUpdated) {
      fs.readFile(f, function (err, data) {
        console.log('loading ' + f + ' from file');
        //rest of cacheAndDeliver
  }); //end of fs.stat
}
If we're using Node on Windows, we may have to substitute ctime with mtime, since reliable ctime behavior on Windows requires at least Node Version 0.10.12.
The contents of cacheAndDeliver have been wrapped in an fs.stat callback, two variables have been added, and the if(!cache[f]) statement has been modified. We parse the ctime property of the second parameter dubbed stats using Date.parse to convert it to milliseconds since midnight, January 1st, 1970 (the Unix epoch) and assign it to our lastChanged variable. Then we check whether the requested file's last changed time is greater than when we cached the file (provided the file is indeed cached) and assign the result to our isUpdated variable. After that, it's merely a case of adding the isUpdated Boolean to the conditional if(!cache[f]) statement via the || (or) operator. If the file is newer than our cached version (or if it isn't yet cached), we load the file from disk into the cache object. See also The Optimizing performance with streaming recipe discussed in this article Optimizing performance with streaming Caching content certainly improves upon reading a file from disk for every request. However, with fs.readFile, we are reading the whole file into memory before sending it out in a response object. For better performance, we can stream a file from disk and pipe it directly to the response object, sending data straight to the network socket a piece at a time. Getting ready We are building on our code from the last example, so let's get server.js, index.html, styles.css, and script.js ready. How to do it... We will be using fs.createReadStream to initialize a stream, which can be piped to the response object. In this case, implementing fs.createReadStream within our cacheAndDeliver function isn't ideal because the event listeners of fs.createReadStream will need to interface with the request and response objects, which for the sake of simplicity would preferably be dealt with in the http.createServer callback. For brevity's sake, we will discard our cacheAndDeliver function and implement basic caching within the server callback as follows: //...snip... requires, mime types, createServer, lookup and f //  vars...   fs.exists(f, function (exists) {   if (exists) {     var headers = {'Content-type': mimeTypes[path.extname(f)]};     if (cache[f]) {       response.writeHead(200, headers);       response.end(cache[f].content);       return;    } //...snip... rest of server code... Later on, we will fill cache[f].content while we are interfacing with the readStream object. The following code shows how we use fs.createReadStream: var s = fs.createReadStream(f); The preceding code will return a readStream object that streams the file, which is pointed at by variable f. The readStream object emits events that we need to listen to. We can listen with addEventListener or use the shorthand on method as follows: var s = fs.createReadStream(f).on('open', function () {   //do stuff when the readStream opens }); Because createReadStream returns the readStream object, we can latch our event listener straight onto it using method chaining with dot notation. Each stream is only going to open once; we don't need to keep listening to it. 
Therefore, we can use the once method instead of on to automatically stop listening after the first event occurrence as follows: var s = fs.createReadStream(f).once('open', function () {   //do stuff when the readStream opens }); Before we fill out the open event callback, let's implement some error handling as follows: var s = fs.createReadStream(f).once('open', function () {   //do stuff when the readStream opens }).once('error', function (e) {   console.log(e);   response.writeHead(500);   response.end('Server Error!'); }); The key to this whole endeavor is the stream.pipe method. This is what enables us to take our file straight from disk and stream it directly to the network socket via our response object as follows: var s = fs.createReadStream(f).once('open', function () {   response.writeHead(200, headers);   this.pipe(response); }).once('error', function (e) {   console.log(e);   response.writeHead(500);   response.end('Server Error!'); }); But what about ending the response? Conveniently, stream.pipe detects when the stream has ended and calls response.end for us. There's one other event we need to listen to, for caching purposes. Within our fs.exists callback, underneath the createReadStream code block, we write the following code: fs.stat(f, function(err, stats) {   var bufferOffset = 0;   cache[f] = {content: new Buffer(stats.size)};   s.on('data', function (chunk) {     chunk.copy(cache[f].content, bufferOffset);     bufferOffset += chunk.length;   }); }); //end of createReadStream We've used the data event to capture the buffer as it's being streamed, and copied it into a buffer that we supplied to cache[f].content, using fs.stat to obtain the file size for the file's cache buffer. For this case, we're using the classic mode data event instead of the readable event coupled with stream.read() (see http://nodejs.org/api/stream.html#stream_readable_read_size_1) because it best suits our aim, which is to grab data from the stream as soon as possible. How it works... Instead of the client waiting for the server to load the entire file from disk prior to sending it to the client, we use a stream to load the file in small ordered pieces and promptly send them to the client. With larger files, this is especially useful as there is minimal delay between the file being requested and the client starting to receive the file. We did this by using fs.createReadStream to start streaming our file from disk. The fs.createReadStream method creates a readStream object, which inherits from the EventEmitter class. The EventEmitter class accomplishes the evented part pretty well. Due to this, we'll be using listeners instead of callbacks to control the flow of stream logic. We then added an open event listener using the once method since we want to stop listening to the open event once it is triggered. We respond to the open event by writing the headers and using the stream.pipe method to shuffle the incoming data straight to the client. If the client becomes overwhelmed with processing, stream.pipe applies backpressure, which means that the incoming stream is paused until the backlog of data is handled. While the response is being piped to the client, the content cache is simultaneously being filled. To achieve this, we had to create an instance of the Buffer class for our cache[f].content property. A Buffer class must be supplied with a size (or array or string), which in our case is the size of the file. 
To get the size, we used the asynchronous fs.stat method and captured the size property in the callback. The data event returns a Buffer variable as its only callback parameter. The default value of bufferSize for a stream is 64 KB; any file whose size is less than the value of the bufferSize property will only trigger one data event because the whole file will fit into the first chunk of data. But for files that are greater than the value of the bufferSize property, we have to fill our cache[f].content property one piece at a time. Changing the default readStream buffer size We can change the buffer size of our readStream object by passing an options object with a bufferSize property as the second parameter of fs.createReadStream. For instance, to double the buffer, you could use fs.createReadStream(f,{bufferSize: 128 * 1024});. We cannot simply concatenate each chunk with cache[f].content because this will coerce binary data into string format, which, though no longer in binary format, will later be interpreted as binary. Instead, we have to copy all the little binary buffer chunks into our binary cache[f].content buffer. We created a bufferOffset variable to assist us with this. Each time we add another chunk to our cache[f].content buffer, we update our new bufferOffset property by adding the length of the chunk buffer to it. When we call the Buffer.copy method on the chunk buffer, we pass bufferOffset as the second parameter, so our cache[f].content buffer is filled correctly. Moreover, operating with the Buffer class renders performance enhancements with larger files because it bypasses the V8 garbage-collection methods, which tend to fragment a large amount of data, thus slowing down Node's ability to process them. There's more... While streaming has solved the problem of waiting for files to be loaded into memory before delivering them, we are nevertheless still loading files into memory via our cache object. With larger files or a large number of files, this could have potential ramifications. Protecting against process memory overruns Streaming allows for intelligent and minimal use of memory for processing large memory items. But even with well-written code, some apps may require significant memory. There is a limited amount of heap memory. By default, V8's memory is set to 1400 MB on 64-bit systems and 700 MB on 32-bit systems. This can be altered by running node with --max-old-space-size=N, where N is the amount of megabytes (the actual maximum amount that it can be set to depends upon the OS, whether we're running on a 32-bit or 64-bit architecture—a 32-bit may peak out around 2 GB and of course the amount of physical RAM available). The --max-old-space-size method doesn't apply to buffers, since it applies to the v8 heap (memory allocated for JavaScript objects and primitives) and buffers are allocated outside of the v8 heap. If we absolutely had to be memory intensive, we could run our server on a large cloud platform, divide up the logic, and start new instances of node using the child_process class, or better still the higher level cluster module. There are other more advanced ways to increase the usable memory, including editing and recompiling the v8 code base. The http://blog.caustik.com/2012/04/11/escape-the-1-4gb-v8-heap-limit-in-node-js link has some tips along these lines. In this case, high memory usage isn't necessarily required and we can optimize our code to significantly reduce the potential for memory overruns. 
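As a rough illustration of the cluster approach mentioned in the preceding paragraph, the following minimal sketch forks one worker per CPU, each with its own heap; the port and response body are placeholders for this example, not part of our recipe code:
// A minimal sketch of dividing work across processes with the
// core cluster module.
var cluster = require('cluster');
var http = require('http');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  // Fork one worker per CPU; each worker gets its own heap.
  for (var i = 0; i < numCPUs; i++) { cluster.fork(); }
  cluster.on('exit', function (worker) {
    console.log('worker ' + worker.process.pid + ' died');
    cluster.fork(); // replace crashed workers
  });
} else {
  http.createServer(function (request, response) {
    response.end('Served by worker ' + process.pid);
  }).listen(8080);
}
Each worker is a full Node process, so memory-hungry workloads are spread across several heaps instead of concentrated in one.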
There is less benefit to caching larger files because the slight speed improvement relative to the total download time is negligible, while the cost of caching them is quite significant in proportion to our available process memory. We can also improve cache efficiency by implementing an expiration time on cache objects, which can then be used to clean the cache, consequently removing files in low demand and prioritizing high-demand files for faster delivery. Let's rearrange our cache object slightly as follows:
var cache = {
  store: {},
  maxSize: 26214400 //(bytes) 25mb
};
For a clearer mental model, we're making a distinction between the cache object as a functioning entity and the cache object as a store (which is a part of the broader cache entity). Our first goal is to only cache files under a certain size; we've defined cache.maxSize for this purpose. All we have to do now is insert an if condition within the fs.stat callback as follows:
fs.stat(f, function (err, stats) {
  if (stats.size < cache.maxSize) {
    var bufferOffset = 0;
    cache.store[f] = {content: new Buffer(stats.size),
      timestamp: Date.now() };
    s.on('data', function (data) {
      data.copy(cache.store[f].content, bufferOffset);
      bufferOffset += data.length;
    });
  }
});
Notice that we also slipped a new timestamp property into our cache.store[f] object. This is for our second goal: cleaning the cache. Let's extend cache as follows:
var cache = {
  store: {},
  maxSize: 26214400, //(bytes) 25mb
  maxAge: 5400 * 1000, //(ms) 1 and a half hours
  clean: function (now) {
    var that = this;
    Object.keys(this.store).forEach(function (file) {
      if (now > that.store[file].timestamp + that.maxAge) {
        delete that.store[file];
      }
    });
  }
};
So, in addition to maxSize, we've created a maxAge property and added a clean method. We call cache.clean at the bottom of the server with the help of the following code:
//all of our code prior
  cache.clean(Date.now());
}).listen(8080); //end of the http.createServer
The cache.clean method loops through the cache.store object and checks to see whether each entry has exceeded its specified lifetime. If it has, we remove it from the store. One further improvement and then we're done. The cache.clean method is called on each request. This means the cache.store object is going to be looped through on every server hit, which is neither necessary nor efficient. It would be better if we cleaned the cache, say, every two hours or so. We'll add two more properties to cache: cleanAfter to specify the time between cache cleans, and cleanedAt to determine how long it has been since the cache was last cleaned, as follows:
var cache = {
  store: {},
  maxSize: 26214400, //(bytes) 25mb
  maxAge: 5400 * 1000, //(ms) 1 and a half hours
  cleanAfter: 7200 * 1000, //(ms) two hours
  cleanedAt: 0, //to be set dynamically
  clean: function (now) {
    if (now - this.cleanAfter > this.cleanedAt) {
      this.cleanedAt = now;
      var that = this;
      Object.keys(this.store).forEach(function (file) {
        if (now > that.store[file].timestamp + that.maxAge) {
          delete that.store[file];
        }
      });
    }
  }
};
So we wrap our cache.clean method in an if statement, which will allow a loop through cache.store only if it has been longer than two hours (or whatever cleanAfter is set to) since the last clean.
See also The Securing against filesystem hacking exploits recipe discussed in this article Securing against filesystem hacking exploits For a Node app to be insecure, there must be something an attacker can interact with for exploitation purposes. Due to Node's minimalist approach, the onus is on the programmer to ensure that their implementation doesn't expose security flaws. This recipe will help identify some security risk anti-patterns that could occur when working with the filesystem. Getting ready We'll be working with the same content directory as we did in the previous recipes. But we'll start a new insecure_server.js file (there's a clue in the name!) from scratch to demonstrate mistaken techniques. How to do it... Our previous static file recipes tend to use path.basename to acquire a route, but this ignores intermediate paths. If we accessed localhost:8080/foo/bar/styles.css, our code would take styles.css as the basename property and deliver content/styles.css to us. How about we make a subdirectory in our content folder? Call it subcontent and move our script.js and styles.css files into it. We'd have to alter our script and link tags in index.html as follows: <link rel=stylesheet type=text/css href=subcontent/styles.css> <script src=subcontent/script.js type=text/javascript></script> We can use the url module to grab the entire pathname property. So let's include the url module in our new insecure_server.js file, create our HTTP server, and use pathname to get the whole requested path as follows: var http = require('http'); var url = require('url'); var fs = require('fs');   http.createServer(function (request, response) {   var lookup = url.parse(decodeURI(request.url)).pathname;   lookup = (lookup === "/") ? '/index.html' : lookup;   var f = 'content' + lookup;   console.log(f);   fs.readFile(f, function (err, data) {     response.end(data);   }); }).listen(8080); If we navigate to localhost:8080, everything works great! We've gone multilevel, hooray! For demonstration purposes, a few things have been stripped out from the previous recipes (such as fs.exists); but even with them, this code presents the same security hazards if we type the following: curl localhost:8080/../insecure_server.js Now we have our server's code. An attacker could also access /etc/passwd with a few attempts at guessing its relative path as follows: curl localhost:8080/../../../../../../../etc/passwd If we're using Windows, we can download and install curl from http://curl.haxx.se/download.html. In order to test these attacks, we have to use curl or another equivalent because modern browsers will filter these sort of requests. As a solution, what if we added a unique suffix to each file we wanted to serve and made it mandatory for the suffix to exist before the server coughs it up? That way, an attacker could request /etc/passwd or our insecure_server.js file because they wouldn't have the unique suffix. To try this, let's copy the content folder and call it content-pseudosafe, and rename our files to index.html-serve, script.js-serve, and styles.css-serve. Let's create a new server file and name it pseudosafe_server.js. Now all we have to do is make the -serve suffix mandatory as follows: //requires section ...snip... http.createServer(function (request, response) {   var lookup = url.parse(decodeURI(request.url)).pathname;   lookup = (lookup === "/") ? '/index.html-serve'     : lookup + '-serve';   var f = 'content-pseudosafe' + lookup; //...snip... rest of the server code... 
For feedback purposes, we'll also include some 404 handling with the help of fs.exists, as follows:
//requires, create server etc
fs.exists(f, function (exists) {
  if (!exists) {
    response.writeHead(404);
    response.end('Page Not Found!');
    return;
  }
//read file etc
So, let's start our pseudosafe_server.js file and try out the same exploit by executing the following command: curl -i localhost:8080/../insecure_server.js We've used the -i argument so that curl will output the headers. The result? A 404, because the file it's actually looking for is ../insecure_server.js-serve, which doesn't exist. So what's wrong with this method? Well, it's inconvenient and prone to error. But more importantly, an attacker can still work around it! Try this by typing the following: curl localhost:8080/../insecure_server.js%00/index.html And voilà! There's our server code again. The solution to our problem is path.normalize, which cleans up our pathname before it gets to fs.readFile, as shown in the following code:
http.createServer(function (request, response) {
  var lookup = url.parse(decodeURI(request.url)).pathname;
  lookup = path.normalize(lookup);
  lookup = (lookup === "/") ? '/index.html' : lookup;
  var f = 'content' + lookup;
  //...snip... rest of the server code...
Prior recipes haven't used path.normalize and yet they're still relatively safe. The path.basename method gives us the last part of the path, thus removing any preceding double dot paths (../) that would take an attacker higher up the directory hierarchy than should be allowed. How it works... Here we have two filesystem exploitation techniques: the relative directory traversal and poison null byte attacks. These attacks can take different forms, such as in a POST request or from an external file. They can have different effects; if we were writing to files instead of reading them, an attacker could potentially start making changes to our server. The key to security in all cases is to validate and clean any data that comes from the user. In insecure_server.js, we pass whatever the user requests to our fs.readFile method. This is foolish because it allows an attacker to take advantage of the relative path functionality in our operating system by using ../, thus gaining access to areas that should be off limits. By adding the -serve suffix, we didn't solve the problem; we put a plaster on it, which can be circumvented by the poison null byte. The key to this attack is the %00 value, which is a URL hex code for the null byte. In this case, the null byte blinds Node to the ../insecure_server.js portion, but when the same null byte is sent through to our fs.readFile method, it has to interface with the kernel, and the kernel gets blinded to the index.html part. So our code sees index.html but the read operation sees ../insecure_server.js. This is known as null byte poisoning. To protect ourselves, we could use a regex statement to remove the ../ parts of the path. We could also check for the null byte and spit out a 400 Bad Request statement. But we don't have to, because path.normalize filters out the null byte and relative parts for us. There's more... Let's delve further into how we can protect our servers when it comes to serving static files. Whitelisting If security was an extreme priority, we could adopt a strict whitelisting approach. In this approach, we would create a manual route for each file we are willing to deliver. Anything not on our whitelist would return a 404 error.
There's more...

Let's delve further into how we can protect our servers when it comes to serving static files.

Whitelisting

If security is an extreme priority, we can adopt a strict whitelisting approach. In this approach, we create a manual route for each file we are willing to deliver. Anything not on our whitelist returns a 404 error.

We can place a whitelist array above http.createServer as follows:

var whitelist = [
  '/index.html',
  '/subcontent/styles.css',
  '/subcontent/script.js'
];

Inside our http.createServer callback, we'll put an if statement to check whether the requested path is in the whitelist array, as follows:

if (whitelist.indexOf(lookup) === -1) {
  response.writeHead(404);
  response.end('Page Not Found!');
  return;
}

And that's it! We can test this by placing a file called non-whitelisted.html in our content directory and then executing the following command:

curl -i localhost:8080/non-whitelisted.html

This will return a 404 error because non-whitelisted.html isn't on the whitelist.

Node static

The Node modules wiki page (https://github.com/joyent/node/wiki/modules#wiki-web-frameworks-static) has a list of static file server modules available for different purposes. It's a good idea to ensure that a project is mature and active before relying upon it to serve your content. The node-static module is a well-developed module with built-in caching. It's also compliant with the RFC2616 HTTP standards specification, which defines how files should be delivered over HTTP. The node-static module implements all the essentials discussed in this article and more.

For the next example, we'll need the node-static module. We can install it by executing the following command:

npm install node-static

The following piece of code is slightly adapted from the node-static module's GitHub page at https://github.com/cloudhead/node-static:

var static = require('node-static');
var fileServer = new static.Server('./content');
require('http').createServer(function (request, response) {
  request.addListener('end', function () {
    fileServer.serve(request, response);
  });
}).listen(8080);

The preceding code interfaces with the node-static module to handle server-side and client-side caching, uses streams to deliver content, and filters out relative requests and null bytes, among other things.

Summary

To learn more about Node.js and creating web servers, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended:

Node Cookbook Second Edition (https://www.packtpub.com/web-development/node-cookbook-second-edition)
Node.js Design Patterns (https://www.packtpub.com/web-development/nodejs-design-patterns)
Node Web Development Second Edition (https://www.packtpub.com/web-development/node-web-development-second-edition)

Resources for Article:

Further resources on this subject:

Working with Commands and Plugins [article]
Node.js Fundamentals and Asynchronous JavaScript [article]
Building a Movie API with Express [article]
Magento Theme Development

Packt
24 Feb 2016
7 min read
In this article by Fernando J. Miguel, author of the book Magento 2 Development Essentials, we will learn the basics of theme development. Magento can be customized as per your needs because it is based on the Zend framework, adopting the Model-View-Controller (MVC) architecture as a software design pattern. When you plan to create your own theme, the Magento theme process flow becomes a subject that needs to be studied carefully. Let's focus on the concepts that help you create your own theme.

(For more resources related to this topic, see here.)

The Magento base theme

The Magento Community Edition (CE) version 1.9 comes with a new theme named rwd that implements Responsive Web Design (RWD) practices. Magento CE's responsive theme uses a number of new technologies as follows:

Sass/Compass: This is a CSS precompiler that provides reusable CSS that can even be organized well.
jQuery: This is used for customization of JavaScript in the responsive theme. jQuery operates in the noConflict() mode, so it doesn't conflict with Magento's existing JavaScript library.

Basically, the folders that contain this theme are as follows:

app/design/frontend/rwd
skin/frontend/rwd

All the files of the rwd theme are included in the app/design/frontend and skin/frontend folders:

app/design/frontend: This folder stores all the .phtml visual files and .xml configuration files of all the themes.
skin/frontend: This folder stores all the JavaScript, CSS, and image files of their respective app/design/frontend theme folders.

Inside these folders, you can see another important folder called base. The rwd theme uses some base theme features to be functional. How is this possible? Logically, Magento has distinct folders for every theme, but Magento is very smart about reusing code: it takes advantage of a fall-back system. Let's check how it works.

The fall-back system

The frontend of Magento allows designers to create new themes based on the base theme, reusing the main code without changing its structure. The fall-back system allows us to create only the files that are necessary for the customization. To create the customization files, we have the following options:

Create a new theme directory and write the entire new code
Copy the files from the base theme and edit them as you wish

The second option could be more productive for study purposes: you will learn the basic structure while exercising code edits.

For example, let's say you want to change the header.phtml file. You can copy the header.phtml file from the app/design/frontend/base/default/template/page/html path to the app/design/frontend/custom_package/custom_theme/template/page/html path, as shown below. In this example, if you activate custom_theme on the Magento admin panel, custom_theme inherits the entire structure from the base theme and applies your custom header.phtml to the theme.
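A sketch of that copy from the command line might look like this (the custom_package and custom_theme names are placeholders from the example above):

# create the destination path in the custom theme, then copy the base template
mkdir -p app/design/frontend/custom_package/custom_theme/template/page/html
cp app/design/frontend/base/default/template/page/html/header.phtml \
   app/design/frontend/custom_package/custom_theme/template/page/html/header.phtml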
Magento packages and design themes

Magento has the option to create design packages and themes, as you saw in the previous custom_theme example. This is smart functionality, because the same package can hold more than one theme. Now, let's take a deep look at the main folders that manage the theme structure in Magento.

The app/design structure

In the app/design structure, the folder details are as follows:

adminhtml: In this folder, Magento keeps all the layout configuration files and the .phtml structure of the admin area.
frontend: In this folder, Magento keeps all the themes' folders and their respective .phtml structure for the site frontend.
install: This folder stores all the files of the Magento installation screen.

The layout folder

The rwd theme folder has a template folder called default. In Magento, you can create as many template folders as you wish.

The layout folders allow you to define the structure of the Magento pages through XML files. The layout XML files have the power to manage the behavior of your .phtml files: you can incorporate CSS or JavaScript to be loaded on specific pages. Every page in Magento is defined by a handle. A handle is a reference name that Magento uses to refer to a particular page. For example, the <cms_page> handle is used to control the layout of the CMS pages in your Magento installation. In Magento, we have two main types of handles:

Default handles: These manage the whole site
Non-default handles: These manage specific parts of the site

In the rwd theme, the .xml files are located in app/design/frontend/rwd/default/layout. Let's take a look at an .xml layout file example.
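The original article shows the snippet as a screenshot; a minimal sketch of what a <default> handle in Magento 1.x layout XML looks like is given below (the css/styles.css and js/app.js file names are illustrative):

<default>
  <reference name="head">
    <!-- load a stylesheet from the theme's skin folder -->
    <action method="addItem">
      <type>skin_css</type>
      <name>css/styles.css</name>
    </action>
    <!-- load a JavaScript file from the theme's skin folder -->
    <action method="addItem">
      <type>skin_js</type>
      <name>js/app.js</name>
    </action>
  </reference>
</default>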
A snippet like this belongs to the page.xml layout file: the <default> handle defines the .css and .js files that will be loaded on the page. The page.xml file has the same name as its respective folder in app/design/frontend/rwd/default/template/page. This is an internal Magento control. Please keep this in mind: Magento works with a predefined file naming pattern, and remembering this can avoid unnecessary errors.

The template folder

The template folder, taking rwd as a reference, is located at app/design/frontend/rwd/default/template. Every subdirectory of template controls a specific page of Magento. The template files are the .phtml files, a mix of HTML and PHP, and they are the layout structure files. The page/1column.phtml template is a good example to study (the original article shows its source as a screenshot).

The locale folder

The locale folder has all the specific translations of the theme. Let's imagine that you want to create a specific translation file for the rwd theme. You can create a locale file at app/design/frontend/rwd/locale/en_US/translate.csv. The locale folder structure basically has a folder for the language (en_US) and always uses the translate.csv filename.

The app/locale folder in Magento is the main translation folder; you can take a look at it to understand this better. However, the locale folder inside the theme folder has priority when Magento loads. For example, if you want to create a Brazilian version of the theme, you have to duplicate the translate.csv file from app/design/frontend/rwd/locale/en_US/ to app/design/frontend/rwd/locale/pt_BR/. This will be very useful to those who use the theme and will have to translate it in the future.

Creating new entries in translate

If you want to create a new entry in your translate.csv, first of all put this code in your PHTML file:

<?php echo $this->__('Translate test'); ?>

In the CSV file, you can put the translation in this format:

"Translate test","Translate test"

The SKIN structure

The skin folder basically has the CSS and JavaScript files and images of the theme, and it is located in skin/frontend/rwd/default. Remember that Magento has a filename/folder naming pattern: the skin folder named rwd works with the rwd theme folder. If Magento has rwd as the main theme and is looking for an image that is not in the skin folder, Magento will search for this image in the skin/base folder. Remember also that Magento has a fall-back system: it keeps searching through the main theme folders until it finds the correct file. Take advantage of this!

CMS blocks and pages

Magento has a flexible theme system. Beyond code customization, the admin can create blocks and content on the Magento admin panel. CMS (Content Management System) pages and blocks in Magento give you the power to embed HTML code in your pages.

Summary

In this article, we covered the basic concepts of Magento themes. These may be used to change the display of the website or its functionality. These themes are interchangeable with Magento installations.

Resources for Article:

Further resources on this subject:

Preparing and Configuring Your Magento Website [article]
Introducing Magento Extension Development [article]
Installing Magento [article]
Getting Started with React

Packt
24 Feb 2016
7 min read
In this article by Vipul Amler and Prathamesh Sonpatki, authors of the book ReactJS by Example - Building Modern Web Applications with React, we will see how web development has seen a huge advent of Single Page Applications (SPAs) in the past couple of years. Early development was simple: reload a complete page to perform a change in the display or to perform a user action. The problem with this was the huge round-trip time for the complete request to reach the web server and come back to the client. Then came AJAX, which sent a request to the server and could update parts of the page without reloading the current page. Moving in the same direction, we saw the emergence of SPAs: the heavy frontend content is wrapped up and delivered to the client browser just once, while a small channel for communication with the server remains open, driven by events; this is usually complemented by a thin API on the web server. The growth in such apps has been complemented by JavaScript libraries and frameworks such as Ext JS, KnockoutJS, BackboneJS, AngularJS, EmberJS, and more recently, React and Polymer.

(For more resources related to this topic, see here.)

Let's take a look at how React fits in this ecosystem and get introduced to it in this article.

What is React?

ReactJS tries to solve the problem from the View layer. It can very well be defined and used as the V in any of the MVC frameworks. It's not opinionated about how it should be used. It creates abstract representations of views. It breaks down parts of the view into Components. These components encompass both the logic to handle the display of the view and the view itself. They can contain the data used to render the state of the app.

To avoid the complexity of interactions and the subsequent render processing required, React does a full render of the application. It maintains a simple flow of work. React is founded on the idea that DOM manipulation is an expensive operation and should be minimized. It also recognizes that optimizing DOM manipulation by hand will result in a lot of boilerplate code, which is error-prone, boring, and repetitive. React solves this by giving the developer a virtual DOM to render to instead of the actual DOM. It finds the difference between the real DOM and the virtual DOM and conducts the minimum number of DOM operations required to achieve the new state. React is also declarative. When the data changes, React conceptually hits the refresh button and knows to update only the changed parts. This simple flow of data, coupled with dead simple display logic, makes development with ReactJS straightforward and simple to understand.

Who uses React?

If you've used any of the services such as Facebook, Instagram, Netflix, Alibaba, Yahoo, E-Bay, Khan Academy, AirBnB, Sony, and Atlassian, you've already come across and used React on the Web. In just under a year, React has seen adoption from major Internet companies in their core products. In its first-ever conference, React also announced the development of React Native. React Native allows the development of mobile applications using React. It transpiles React code to native application code, such as Objective-C for iOS applications. At the time of writing this, Facebook already uses React Native in its Groups iOS app.

In this article, we will be following a conversation between two developers, Mike and Shawn. Mike is a senior developer at Adequate Consulting and Shawn has just joined the company. Mike will be mentoring Shawn and conducting pair programming with him.
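Before the story begins, here is a taste of what a component looks like: a minimal sketch of our own (not part of the original walkthrough), using the React 0.13 API that this article works with:

var Greeting = React.createClass({
  render: function () {
    // Declare what the UI should look like for the current props;
    // React diffs this against its virtual DOM and patches the real DOM.
    return React.createElement('h1', null, 'Hello, ' + this.props.name + '!');
  }
});

// Mount the component; in React 0.13, rendering is done via React.render
React.render(React.createElement(Greeting, { name: 'Shawn' }), document.body);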
When Shawn meets Mike and ReactJS

It's a bright day at Adequate Consulting. It's also Shawn's first day at the company. Shawn joined Adequate to work on its amazing products, and also because it uses and develops exciting new technologies. After Shawn's onboarding, Shelly, the CTO, introduced him to Mike.

"So Shawn, here's Mike", said Shelly. "He'll be mentoring you as well as pairing with you on development. We follow pair programming, so expect a lot of it with him. He's an excellent help." With that, Shelly took leave.

"Hey Shawn!" Mike began, "are you all set to begin?"

"Yeah, all set! So what are we working on?"

"Well, we are about to start working on an app using https://openlibrary.org/. Open Library is a collection of the world's classic literature. It's an open, editable library catalog for all the books. It's an initiative under https://archive.org/ and lists free book titles. We need to build an app to display the most recent changes in the records of Open Library. You can call this the Activities page. Many people contribute to Open Library. We want to display the changes made by these users to the books: additions of new books, edits, and so on."

"Oh nice! What are we using to build it?"

"Open Library provides us with a neat REST API that we can consume to fetch the data. We are just going to build a simple page that displays the fetched data and formats it for display. I've been experimenting with and using ReactJS for this. Have you used it before?"

"Nope. However, I have heard about it. Isn't it the one from Facebook and Instagram?"

"That's right. It's an amazing way to define our UI. As the app isn't going to have much logic on the server or perform any complex display, it's an easy option to use."

"As you've not used it before, let me give you a quick introduction. Have you tried services such as JSBin and JSFiddle before?"

"No, but I have seen them."

"Cool. We'll be using one of these, so we don't need anything set up on our machines to start with."

"Let's try on your machine", Mike instructed. "Fire up http://jsbin.com/?html,output. You should see tabs and panes to code in, with their output in an adjacent pane. Go ahead and make sure that the HTML, JavaScript, and Output tabs are selected and you can see three frames for them, so that we are able to edit HTML and JS and see the corresponding output."

"That's nice."

"Yeah, the good thing about this is that you don't need to perform any setup. Did you notice the Auto-run JS option? Make sure it's selected. This option causes JSBin to reload our code and refresh its output, so that we don't need to keep clicking Run with JS to execute the code and see the result."

"Ok."

Requiring the React library

"Alright then! Let's begin. Go ahead and change the title of the page to, say, React JS Example. Next, we need to require the React library in our file."

"React's homepage is located at http://facebook.github.io/react/. Here, we'll also find the downloads available to us, so that we can include them in our project. There are different ways to include and use the library. We can make use of Bower or install via npm. We can also just include it as an individual download, directly available from the fb.me domain. There is a development version, which is the full version of the library, and a production version, which is minified. There is also an add-ons version."
"We'll take a look at this later though. Let's start by using the development version, which is the unminified version of the React source. Add the following to the file header:"

<script src="http://fb.me/react-0.13.0.js"></script>

"Done."

"Awesome, let's see how this looks."

<!DOCTYPE html>
<html>
<head>
  <script src="http://fb.me/react-0.13.0.js"></script>
  <meta charset="utf-8">
  <title>React JS Example</title>
</head>
<body>
</body>
</html>

Summary

In this article, we started with React and built our first component. In the process, we studied React's top-level API for constructing components and elements.

Resources for Article:

Further resources on this subject:

Create Your First React Element [article]
An Introduction to ReactJS [article]
An Introduction to Reactive Programming [article]