How-To Tutorials - Web Development

1797 Articles

Optimizing Magento Performance — Using HHVM

Packt
16 May 2014
5 min read
(For more resources related to this topic, see here.)

HipHop Virtual Machine

As we could write a whole book (or two) about HHVM, we will just give the key ideas here. HHVM is a virtual machine that translates any called PHP file into HHVM bytecode, in the same spirit as the Java or .NET virtual machines. HHVM transforms your PHP code into a lower-level language that is much faster to execute. Of course, the transformation time (compiling) does cost a lot of resources; therefore, HHVM ships with a cache mechanism similar to APC. This way, the compiled PHP files are stored and reused when the original file is requested. With HHVM, you keep the flexibility and ease of writing PHP, but you now have performance like that of C++. Hear the words of the HHVM team at Facebook:

"HHVM (aka the HipHop Virtual Machine) is a new open-source virtual machine designed for executing programs written in PHP. HHVM uses a just-in-time compilation approach to achieve superior performance while maintaining the flexibility that PHP developers are accustomed to. To date, HHVM (and its predecessor HPHPc) has realized over a 9x increase in web request throughput and over a 5x reduction in memory consumption for Facebook compared with the Zend PHP 5.2 engine + APC. HHVM can be run as a standalone webserver (in other words, without the Apache webserver and the "modphp" extension). HHVM can also be used together with a FastCGI-based webserver, and work is in progress to make HHVM work smoothly with Apache."

If you think this is too good to be true, you're right! Indeed, HHVM has a major drawback: it was, and still is, focused on the needs of Facebook. Therefore, you might have a bad time trying to use your custom-made PHP applications inside it. Nevertheless, this opportunity to speed up large PHP applications has been seized by talented developers who improve it, day after day, in order to support more and more frameworks. As our interest is in Magento, I will introduce you to Daniel Sloof, a developer from the Netherlands. More interestingly, Daniel has done (and still does) amazing work at adapting HHVM for Magento.

Here are the commands to install Daniel Sloof's version of HHVM for Magento:

```
$ sudo apt-get install git
$ git clone https://github.com/danslo/hhvm.git
$ sudo chmod +x configure_ubuntu_12.04.sh
$ sudo ./configure_ubuntu_12.04.sh
$ sudo CMAKE_PREFIX_PATH=`pwd`/.. make
```

If you thought that the first step was long, you will be astonished by the time required to actually build HHVM. Nevertheless, the wait is definitely worth it. The following screenshot shows how your terminal will look for the next hour or so:

Create a file named hhvm.hdf under /etc/hhvm and write the following code inside:

```
Server {
  Port = 80
  SourceRoot = /var/www/_MAGENTO_HOME_
}
Eval {
  Jit = true
}
Log {
  Level = Error
  UseLogFile = true
  File = /var/log/hhvm/error.log
  Access {
    * {
      File = /var/log/hhvm/access.log
      Format = %h %l %u %t \"%r\" %>s %b
    }
  }
}
VirtualHost {
  * {
    Pattern = .*
    RewriteRules {
      dirindex {
        pattern = ^/(.*)/$
        to = $1/index.php
        qsa = true
      }
    }
  }
}
StaticFile {
  FilesMatch {
    * {
      pattern = .*\.(dll|exe)
      headers {
        * = Content-Disposition: attachment
      }
    }
  }
  Extensions {
    css = text/css
    gif = image/gif
    html = text/html
    jpe = image/jpeg
    jpeg = image/jpeg
    jpg = image/jpeg
    png = image/png
    tif = image/tiff
    tiff = image/tiff
    txt = text/plain
  }
}
```

Now, run the following command (the hhvm executable is under hhvm/hphp/hhvm):

```
$ sudo ./hhvm --mode daemon --config /etc/hhvm.hdf
```

Is all of this worth it?
Here's the response:

```
ab -n 100 -c 5 http://192.168.0.105/index.php/furniture/living-room.html

Server Software:
Server Hostname:        192.168.0.105
Server Port:            80
Document Path:          /index.php/furniture/living-room.html
Document Length:        35552 bytes
Concurrency Level:      5
Time taken for tests:   4.970 seconds
Requests per second:    20.12 [#/sec] (mean)
Time per request:       248.498 [ms] (mean)
Time per request:       49.700 [ms] (mean, across all concurrent requests)
Transfer rate:          707.26 [Kbytes/sec] received

Connection Times (ms)
            min  mean[+/-sd] median  max
Connect:      0    2   12.1      0   89
Processing: 107  243   55.9    243  428
Waiting:    107  242   55.9    242  427
Total:      110  245   56.7    243  428
```

We literally reach a whole new world here. Indeed, our Magento instance is six times faster than after all our previous optimizations and about 20 times faster than the default Magento served by Apache. The following graph shows the performances:

Our Magento instance is now flying at lightning speed, but what are the drawbacks? Is it still as stable as before? Are all the optimizations we did so far still effective? Can we go even further? In what follows, we present a non-exhaustive list of answers:

- Fancy extensions and modules may (and will) trigger HHVM incompatibilities. Magento is a relatively old piece of software, and combining it with a cutting-edge technology such as HHVM can have some unpredictable (and undesirable) effects.
- HHVM is so complex that fixing a Magento-related bug requires a lot of skill and dedication.
- HHVM takes care of PHP, not of the cache mechanisms or the accelerators we installed before. Therefore, APC, memcached, and Varnish are still running and helping to improve our performance.
- If you become addicted to performance, HHVM now supports FastCGI through Nginx and Apache. You can find out more about that at http://www.hhvm.com/blog/1817/fastercgi-with-hhvm.

Summary

In this article, we successfully used the HipHop Virtual Machine (HHVM) from Facebook to serve Magento. This improvement optimizes our Magento performance incredibly (20 times faster); that is, the time required initially was 110 seconds, while now it is less than 5 seconds.

Resources for Article:

Further resources on this subject:
- Magento: Exploring Themes [article]
- Getting Started with Magento Development [article]
- Enabling your new theme in Magento [article]


Building a Simple Blog

Packt
15 May 2014
8 min read
(For more resources related to this topic, see here.)

Setting up the application

Every application has to be set up, so we'll begin with that. Create a folder for your project—I'll call mine simpleBlog—and inside that, create a file named package.json. If you've used Node.js before, you know that the package.json file describes the project; lists the project home page, repository, and other links; and (most importantly for us) outlines the dependencies for the application.

Here's what the package.json file looks like:

```
{
  "name": "simple-blog",
  "description": "This is a simple blog.",
  "version": "0.1.0",
  "scripts": {
    "start": "nodemon server.js"
  },
  "dependencies": {
    "express": "3.x.x",
    "ejs": "~0.8.4",
    "bourne": "0.3"
  },
  "devDependencies": {
    "nodemon": "latest"
  }
}
```

This is a pretty bare-bones package.json file, but it has all the important bits. The name, description, and version properties should be self-explanatory. The dependencies object lists all the npm packages that this project needs to run: the key is the name of the package and the value is the version. Since we're building an ExpressJS backend, we'll need the express package. The ejs package is for our server-side templates and bourne is our database (more on this one later). The devDependencies property is similar to the dependencies property, except that these packages are only required for someone working on the project. They aren't required to just use the project. For example, a build tool and its components, such as Grunt, would be development dependencies.

We want to use a package called nodemon. This package is really handy when building a Node.js backend: we can have a command line that runs the nodemon server.js command in the background while we edit server.js in our editor. The nodemon package will restart the server whenever we save changes to the file. The only problem with this is that we can't actually run the nodemon server.js command on the command line, because we're going to install nodemon as a local package and not a global process. This is where the scripts property in our package.json file comes in: we can write a simple script, almost like a command-line alias, to start nodemon for us. As you can see, we're creating a script called start, and it runs nodemon server.js. On the command line, we can run npm start; npm knows where to find the nodemon binary and can start it for us.

So, now that we have a package.json file, we can install the dependencies we've just listed. On the command line, change the current directory to the project directory, and run the following command:

```
npm install
```

You'll see that all the necessary packages will be installed. Now we're ready to begin writing the code.

Starting with the server

I know you're probably eager to get started with the actual Backbone code, but it makes more sense for us to start with the server code. Remember, good Backbone apps will have strong server-side components, so we can't ignore the backend completely. We'll begin by creating a server.js file in our project directory. Here's how that begins:

```
var express = require('express');
var path = require('path');
var Bourne = require("bourne");
```

If you've used Node.js, you know that the require function can be used to load Node.js components (path) or npm packages (express and bourne).
Now that we have these packages in our application, we can begin using them as follows:

```
var app = express();
var posts = new Bourne("simpleBlogPosts.json");
var comments = new Bourne("simpleBlogComments.json");
```

The first variable here is app. This is our basic Express application object, which we get when we call the express function. We'll be using it a lot in this file. Next, we'll create two Bourne objects. As I said earlier, Bourne is the database we'll use in our projects in this article. This is a simple database that I wrote specifically for this article. To keep the server side as simple as possible, I wanted to use a document-oriented database system, but I wanted something serverless (for example, SQLite), so you didn't have to run both an application server and a database server. What I came up with, Bourne, is a small package that reads from and writes to a JSON file; the path to that JSON file is the parameter we pass to the constructor function. It's definitely not good for anything bigger than a small learning project, but it should be perfect for this article. In the real world, you can use one of the excellent document-oriented databases. I recommend MongoDB: it's really easy to get started with, and has a very natural API. Bourne isn't a drop-in replacement for MongoDB, but it's very similar. You can check out the simple documentation for Bourne at https://github.com/andrew8088/bourne. So, as you can see here, we need two databases: one for our blog posts and one for comments (unlike most databases, Bourne has only one table or collection per database, hence the need for two).

The next step is to write a little configuration for our application:

```
app.configure(function () {
  app.use(express.json());
  app.use(express.static(path.join(__dirname, 'public')));
});
```

This is a very minimal configuration for an Express app, but it's enough for our usage here. We're adding two layers of middleware to our application; they are "mini-programs" that the HTTP requests that come to our application will run through before getting to our custom functions (which we have yet to write). We add two layers here: the first is express.json(), which parses the JSON request bodies that Backbone will send to the server; the second is express.static(), which will statically serve files from the path given as a parameter. This allows us to serve the client-side JavaScript files, CSS files, and images from the public folder. You'll notice that both these middleware pieces are passed to app.use(), which is the method we call to choose to use these pieces.

You'll notice that we're using the path.join() method to create the path to our public assets folder, instead of just joining __dirname and 'public' ourselves. This is because Microsoft Windows requires the separating slashes to be backslashes. The path.join() method will get it right for whatever operating system the code is running on. Oh, and __dirname (two underscores at the beginning) is just a variable for the path to the directory this script is in.
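To make the idea of a middleware layer concrete, here is a minimal sketch of a custom layer of our own (a hypothetical request logger, not part of the blog's code), written in the same (req, res, next) style that express.json() and express.static() use internally:

```
// A hypothetical logging layer: runs for every request, then passes control on.
app.use(function (req, res, next) {
  console.log(req.method + ' ' + req.url);
  next(); // hand the request to the next layer (or to our route functions)
});
```

Each layer can inspect or modify the request, and either respond itself or call next() to keep the request moving down the stack.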
The next step is to create a route method:

```
app.get('/*', function (req, res) {
  res.render("index.ejs");
});
```

In Express, we can create a route by calling a method on the app that corresponds to the desired HTTP verb (get, post, put, and delete). Here, we're calling app.get() and we pass two parameters to it. The first is the route; it's the portion of the URL that will come after your domain name. In our case, we're using an asterisk, which is a catchall; it will match any route that begins with a forward slash (which will be all routes). This will match every GET request made to our application. If an HTTP request matches the route, then a function, which is the second parameter, will be called. This function takes two parameters; the first is the request object from the client and the second is the response object that we'll use to send our response back. These are often abbreviated to req and res, but that's just a convention; you could call them whatever you want.

So, we're going to use the res.render method, which will render a server-side template. Right now, we're passing a single parameter: the path to the template file. Actually, it's only part of the path, because Express assumes by default that templates are kept in a directory named views, a convention we'll be using. Express can guess the template package to use based on the file extension; that's why we don't have to select EJS as the template engine anywhere. If we had values that we want to interpolate into our template, we would pass a JavaScript object as the second parameter. We'll come back and do this a little later (see the sketch after this section).

Finally, we can start up our application; I'll choose to use port 3000:

```
app.listen(3000);
```

We'll be adding a lot more to our server.js file later, but this is what we'll start with. Actually, at this point, you can run npm start on the command line and open up http://localhost:3000 in a browser. You'll get an error because we haven't made the view template file yet, but you can see that our server is working.
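As a preview of that, here is a sketch (our own illustration, not the book's code) of a route that reads every post out of the Bourne database and hands the results to a template; treat the exact find signature as an assumption based on Bourne's MongoDB-like documentation:

```
// Hypothetical route: fetch all posts and interpolate them into a template.
app.get('/posts', function (req, res) {
  posts.find({}, function (err, allPosts) {
    // The second parameter to res.render is the object of template values.
    res.render('posts.ejs', { posts: allPosts });
  });
});
```

Inside posts.ejs, a value passed this way would be available as a local variable, for example <%= posts.length %>.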


Using the WebRTC Data API

Packt
09 May 2014
10 min read
(For more resources related to this topic, see here.)

What is WebRTC?

Web Real-Time Communication (WebRTC) is a new (still under active development) open framework for the Web to enable browser-to-browser applications for audio/video calling, video chat, and peer-to-peer file sharing without any additional third-party software/plugins. It was open sourced by Google in 2011 and includes the fundamental building components for high-quality communications on the Web. These components, when implemented in a browser, can be accessed through a JavaScript API, enabling developers to build their own rich media web applications. Google, Mozilla, and Opera support WebRTC and are involved in the development process.

The major components of the WebRTC API are as follows:

- getUserMedia: This allows a web browser to access the camera and microphone
- PeerConnection: This sets up audio/video calls
- DataChannels: This allows browsers to share data via a peer-to-peer connection (a minimal sketch follows this list)
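Since this article focuses on the Data API, here is that sketch: both peers live in the same page, so signaling reduces to direct assignment. Note that this uses the modern promise-based API; browsers at the time of writing required vendor prefixes (for example, webkitRTCPeerConnection) and callback-style calls instead.

```
// A minimal loopback sketch: two peer connections in one page exchange a
// message over a data channel. A real app relays offers/answers via a server.
var pc1 = new RTCPeerConnection();
var pc2 = new RTCPeerConnection();

// Trickle ICE candidates straight across, since both peers are local.
pc1.onicecandidate = function (e) { if (e.candidate) pc2.addIceCandidate(e.candidate); };
pc2.onicecandidate = function (e) { if (e.candidate) pc1.addIceCandidate(e.candidate); };

var channel = pc1.createDataChannel('demo');
channel.onopen = function () { channel.send('hello over WebRTC'); };

pc2.ondatachannel = function (e) {
  e.channel.onmessage = function (msg) { console.log(msg.data); };
};

pc1.createOffer()
  .then(function (offer) { return pc1.setLocalDescription(offer); })
  .then(function () { return pc2.setRemoteDescription(pc1.localDescription); })
  .then(function () { return pc2.createAnswer(); })
  .then(function (answer) { return pc2.setLocalDescription(answer); })
  .then(function () { return pc1.setRemoteDescription(pc2.localDescription); });
```

In a real application, the offer and answer would travel through a signaling server, which is exactly what we build later in this article.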
Benefits of using WebRTC in your business

- Reducing costs: It is a free and open source technology. You don't need to pay for complex proprietary solutions ever. IT deployment and support costs can be lowered because now you don't need to deploy special client software for your customers.
- Plugins?: You don't need them ever. Until now, you had to use Flash, Java applets, or other tricky solutions to build interactive rich media web applications. Customers had to download and install third-party plugins to be able to use your media content. You also had to keep in mind different solutions/plugins for a variety of operating systems and platforms. Now you don't need to care about it.
- Peer-to-peer communication: In most cases, communication will be established directly between your customers and you don't need to have a middle point.
- Easy to use: You don't need to be a professional programmer or to have a team of certified developers with some kind of specific knowledge. In a basic case, you can easily integrate WebRTC functionality into your web services/sites by using the open JavaScript API or even a ready-to-go framework.
- Single solution for all platforms: You don't need to develop a special native version of your web service for different platforms (iOS, Android, Windows, or any other). WebRTC is developed to be a cross-platform and universal tool.
- WebRTC is open source and free: The community can discover new bugs and solve them effectively and quickly. Moreover, it is developed and standardized by Mozilla, Google, and Opera—world software companies.

Topics

The article covers the following topics:

- Developing a WebRTC application: You will learn the basics of the technology and build a complete audio/video conference real-life web application. We will also talk about SDP (Session Description Protocol), signaling, client-server interoperation, and configuring STUN and TURN servers.
- In Data API, you will learn how to build a peer-to-peer, cross-platform file sharing web service using the WebRTC Data API.
- Media streaming and screen casting introduces you to streaming prerecorded media content peer-to-peer and to desktop sharing. In this article, you will build a simple application that provides such functionality.
- Nowadays, security and authentication is a very important topic, and you definitely don't want to forget about it while developing your applications. So, in this article, you will learn how to make your WebRTC solutions secure, why authentication might be very important, and how you can implement this functionality in your products.
- Nowadays, mobile platforms are literally part of our life, so it's important to make your interactive application work great on mobile devices also. This article will introduce you to aspects that will help you in developing great WebRTC products keeping mobile devices in mind.

Session Description Protocol

SDP is an important part of the WebRTC stack. It is used to negotiate session/media options while establishing a peer connection. It is a protocol intended for describing multimedia communication sessions for the purposes of session announcement, session invitation, and parameter negotiation. It does not deliver media data itself, but is used for negotiation between peers of media type, format, and all associated properties/options (resolution, encryption, codecs, and so on). The set of properties and parameters is usually called a session profile. Peers have to exchange SDP data using a signaling channel before they can establish a direct connection.

The following is an example of an SDP offer:

```
v=0
o=alice 2890844526 2890844526 IN IP4 host.atlanta.example.com
s=
c=IN IP4 host.atlanta.example.com
t=0 0
m=audio 49170 RTP/AVP 0 8 97
a=rtpmap:0 PCMU/8000
a=rtpmap:8 PCMA/8000
a=rtpmap:97 iLBC/8000
m=video 51372 RTP/AVP 31 32
a=rtpmap:31 H261/90000
a=rtpmap:32 MPV/90000
```

Here we can see that this is a video and audio session, and multiple codecs are offered.

The following is an example of an SDP answer:

```
v=0
o=bob 2808844564 2808844564 IN IP4 host.biloxi.example.com
s=
c=IN IP4 host.biloxi.example.com
t=0 0
m=audio 49174 RTP/AVP 0
a=rtpmap:0 PCMU/8000
m=video 49170 RTP/AVP 32
a=rtpmap:32 MPV/90000
```

Here we can see that only one codec is accepted in reply to the preceding offer. You can find more SDP session examples at https://www.rfc-editor.org/rfc/rfc4317.txt. You can also find in-depth details on SDP in the appropriate RFC at http://tools.ietf.org/html/rfc4566.

Configuring and installing your own STUN server

As you already know, it is important to have access to a STUN/TURN server to work with peers located behind NAT or a firewall. In this article, while developing our application, we used public STUN servers (actually, they are public Google servers accessible from other networks). Nevertheless, if you plan to build your own service, you should install your own STUN/TURN server. This way your application will not depend on a server you can't control. Today we have public STUN servers from Google; tomorrow they could be switched off. So, the right way is to have your own STUN/TURN server. In this section, you will be introduced to installing a STUN server, as the simpler case.

There are several implementations of STUN servers that can be found on the Internet. You can take one from http://www.stunprotocol.org. It is cross-platform and can be used under Windows, Mac OS X, or Linux. To start the STUN server, you should use the following command line:

```
stunserver --mode full --primaryinterface x1.x1.x1.x1 --altinterface x2.x2.x2.x2
```

Please pay attention that you need two IP addresses on your machine to run a STUN server. It is mandatory to make the STUN protocol work correctly. The machine can have only one physical network interface, but it should then have a network alias with an IP address different from that used on the main network interface.
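Once your own server is running, the browser has to be told about it when a peer connection is created. The following is a minimal sketch with hypothetical host names and credentials; note that very old browser builds used a singular url property instead of urls:

```
// Hypothetical addresses: replace with your own STUN/TURN deployment.
var configuration = {
  iceServers: [
    { urls: 'stun:stun.example.com:3478' },
    { urls: 'turn:turn.example.com:3478', username: 'user', credential: 'secret' }
  ]
};
var pc = new RTCPeerConnection(configuration);
```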
WebSocket

WebSocket is a protocol that provides full-duplex communication channels over a single TCP connection. This is a relatively young protocol, but today all major web browsers, including Chrome, Internet Explorer, Opera, Firefox, and Safari, support it. WebSocket is a replacement for long-polling to get two-way communication between browser and server. In this article, we will use WebSocket as a transport channel to develop a signaling server for our videoconference service. Using it, our peers will communicate with the signaling server. The two important benefits of WebSocket are that it does support HTTPS (secure channel) and can be used via a web proxy (nevertheless, some proxies can block the WebSocket protocol).

NAT traversal

WebRTC has an in-built mechanism to use NAT traversal options like STUN and TURN servers. In this article, we used public STUN (Session Traversal Utilities for NAT) servers, but in real life you should install and configure your own STUN or TURN (Traversal Using Relay NAT) server. In most cases, you will use a STUN server. It helps to do NAT/firewall traversal and establish a direct connection between peers. In other words, a STUN server is utilized during the connection-establishing stage only. After the connection has been established, peers will transfer media data directly between them. In some cases (unfortunately, they are not so rare), a STUN server won't help you get through a firewall or NAT, and establishing a direct connection between peers will be impossible; for example, if both peers are behind symmetric NAT. In this case, a TURN server can help you. A TURN server works as a retransmitter between peers. Using a TURN server, all media data between peers will be transmitted through the TURN server. If your application gives a list of several STUN/TURN servers to the WebRTC API, the web browser will try to use STUN servers first, and if the connection fails, it will try to use TURN servers automatically.

Preparing the environment

We can prepare the environment by performing the following steps:

1. Create a folder for the whole application somewhere on your disk. Let's call it my_rtc_project.
2. Make a directory named my_rtc_project/www; here, we will put all the client-side code (JavaScript files or HTML pages).
3. The signaling server's code will be placed under its own separate folder, so create the directory my_rtc_project/apps/rtcserver/src.

Kindly note that we will use Git, which is a free and open source distributed version control system. For Linux boxes, it can be installed using the default package manager. For a Windows system, I recommend that you install and use this implementation: https://github.com/msysgit/msysgit. If you're using a Windows box, install msysgit and add the path to its bin folder to your PATH environment variable.

Installing Erlang

The signaling server is developed in the Erlang language. Erlang is a great choice to develop server-side applications due to the following reasons:

- It is very comfortable and easy for prototyping
- Its processes (actors) are very lightweight and cheap
- It supports network operations with no need for any external libraries
- The code is compiled to byte code that runs on a very powerful Erlang Virtual Machine

Some great projects

The following projects are developed using Erlang:

- Yaws and Cowboy: These are web servers
- Riak and CouchDB: These are distributed databases
- Cloudant: This is a database service based on a fork of CouchDB
- Ejabberd: This is an XMPP instant messaging service
- Zotonic: This is a content management system
- RabbitMQ: This is a message bus
- Wings 3D: This is a 3D modeler
- GitHub: This is a web-based hosting service for software development projects that use Git; GitHub uses Erlang for RPC proxies to Ruby processes
- WhatsApp: This is a famous mobile messenger, sold to Facebook
- Call of Duty: This computer game uses Erlang on the server side
- Goldman Sachs: This is high-frequency trading computer programs

A very brief history of Erlang

- 1982 to 1985: During this period, Ericsson starts experimenting with the programming of telecom. Existing languages do not suit the task.
- 1985 to 1986: During this period, Ericsson decides they must develop their own language with desirable features from Lisp, Prolog, and Parlog. The language should have built-in concurrency and error recovery.
- 1987: In this year, the first experiments with the new language Erlang were conducted.
- 1988: In this year, Erlang was first used by external users out of the lab.
- 1989: In this year, Ericsson works on a fast implementation of Erlang.
- 1990: In this year, Erlang is presented at ISS'90 and gets new users.
- 1991: In this year, the fast implementation of Erlang is released to users. Erlang is presented at Telecom'91, and has a compiler and graphic interface.
- 1992: In this year, Erlang gets a lot of new users. Ericsson ported Erlang to new platforms including VxWorks and Macintosh.
- 1993: In this year, Erlang gets distribution. It makes it possible to run a homogeneous Erlang system on heterogeneous hardware. Ericsson starts selling Erlang implementations and Erlang Tools. A separate organization in Ericsson provides support.

Erlang is supported by many platforms. You can download and install it using the main website: http://www.erlang.org.

Summary

In this article, we have discussed in detail the WebRTC technology, and also the WebRTC API.

Resources for Article:

Further resources on this subject:
- Applying WebRTC for Education and E-learning [Article]
- Spring Roo 1.1: Working with Roo-generated Web Applications [Article]
- WebSphere MQ Sample Programs [Article]


Creating a real-time widget

Packt
22 Apr 2014
11 min read
(For more resources related to this topic, see here.)

The configuration options and well-thought-out methods of socket.io make for a highly versatile library. Let's explore the dexterity of socket.io by creating a real-time widget that can be placed on any website and instantly interfacing it with a remote Socket.IO server. We're doing this to begin providing a constantly updated total of all users currently on the site. We'll name it the live online counter (loc for short).

Our widget is for public consumption and should require only basic knowledge, so we want a very simple interface. Loading our widget through a script tag and then initializing the widget with a prefabricated init method would be ideal (this allows us to predefine properties before initialization if necessary).

Getting ready

We'll need to create a new folder with some new files: widget_server.js, widget_client.js, server.js, and index.html.

How to do it...

Let's create the index.html file to define the kind of interface we want as follows:

```
<html>
<head>
  <style>
    #_loc {color:blue;} /* widget customization */
  </style>
</head>
<body>
  <h1> My Web Page </h1>
  <script src="http://localhost:8081"></script>
  <script>
    locWidget.init();
  </script>
</body>
</html>
```

The localhost:8081 domain is where we'll be serving a concatenated script of both the client-side socket.io code and our own widget code. By default, Socket.IO hosts its client-side library over HTTP while simultaneously providing a WebSocket server at the same address, in this case localhost:8081. See the There's more… section for tips on how to configure this behavior.

Let's create our widget code, saving it as widget_client.js:

```
;(function () {
  window.locWidget = {
    style: 'position:absolute;bottom:0;right:0;font-size:3em',
    init: function () {
      var socket = io.connect('http://localhost:8081'),
        style = this.style;
      socket.on('connect', function () {
        var head = document.head,
          body = document.body,
          loc = document.getElementById('_lo_count');
        if (!loc) {
          head.innerHTML += '<style>#_loc{' + style + '}</style>';
          loc = document.createElement('div');
          loc.id = '_loc';
          loc.innerHTML = '<span id=_lo_count></span>';
          body.appendChild(loc);
        }
        socket.on('total', function (total) {
          loc.innerHTML = total;
        });
      });
    }
  }
}());
```

We need to test our widget from multiple domains.
We'll just implement a quick HTTP server (server.js) to serve index.html so we can access it via http://127.0.0.1:8080 and http://localhost:8080, as shown in the following code:

```
var http = require('http');
var fs = require('fs');
var clientHtml = fs.readFileSync('index.html');
http.createServer(function (request, response) {
  response.writeHead(200, {'Content-type': 'text/html'});
  response.end(clientHtml);
}).listen(8080);
```

Finally, for the server for our widget, we write the following code in the widget_server.js file:

```
var io = require('socket.io')(),
  totals = {},
  clientScript = Buffer.concat([
    require('socket.io/node_modules/socket.io-client').source,
    require('fs').readFileSync('widget_client.js')
  ]);

io.static(false);
io.attach(require('http').createServer(function (req, res) {
  res.setHeader('Content-Type', 'text/javascript; charset=utf-8');
  res.end(clientScript);
}).listen(8081));

io.on('connection', function (socket) {
  var origin = socket.request.socket.domain || 'local';
  totals[origin] = totals[origin] || 0;
  totals[origin] += 1;
  socket.join(origin);
  io.sockets.to(origin).emit('total', totals[origin]);
  socket.on('disconnect', function () {
    totals[origin] -= 1;
    io.sockets.to(origin).emit('total', totals[origin]);
  });
});
```

To test it, we need two terminals. In the first one, we execute the following command:

```
node widget_server.js
```

In the other terminal, we execute the following command:

```
node server.js
```

We point our browser to http://localhost:8080. By opening a new tab or window and navigating to http://localhost:8080 again, we will see the counter rise by one. If we close either window, it will drop by one. We can also navigate to http://127.0.0.1:8080 to emulate a separate origin. The counter at this address is independent from the counter at http://localhost:8080.

How it works...

The widget_server.js file is the powerhouse of this recipe. We start by using require with socket.io and calling it (note the empty parentheses following require); this becomes our io instance. Under this is our totals object; we'll be using this later to store the total number of connected clients for each domain. Next, we create our clientScript variable; it contains both the socket.io client code and our widget_client.js code. We'll be serving this to all HTTP requests. Both scripts are stored as buffers, not strings. We could simply concatenate them with the plus (+) operator; however, this would force a string conversion first, so we use Buffer.concat instead. Anything that is passed to res.write or res.end is converted to a Buffer before being sent across the wire. Using the Buffer.concat method means our data stays in buffer format the whole way through, instead of being a buffer, then a string, and then a buffer again.

When we require socket.io at the top of widget_server.js, we call it to create an io instance. Usually, at this point, we would pass in an HTTP server instance or else a port number, and optionally pass in an options object. To keep our top variables tidy, however, we use some configuration methods available on the io instance after all our requires. The io.static(false) call prevents socket.io from providing its client-side code (because we're providing our own concatenated script file that contains both the socket.io client-side code and our widget code). Then we use the io.attach call to hook up our socket.io server with an HTTP server.
All requests that use the http:// protocol will be handled by the server we pass to io.attach, and all ws:// protocols will be handled by socket.io (whether or not the browser supports the ws:// protocol). We're only using the http module once, so we require it within the io.attach call; we use its createServer method to serve all requests with our clientScript variable.

Now, the stage is set for the actual socket action. We wait for a connection by listening for the connection event on io.sockets. Inside the event handler, we use a few as yet undiscussed socket.io qualities. A WebSocket is formed when a client initiates a handshake request over HTTP and the server responds affirmatively. We can access the original request object with socket.request. The request object itself has a socket (this is the underlying HTTP socket, not our socket.io socket); we can access this via socket.request.socket. That socket contains the domain a client request came from. We load socket.request.socket.domain into our origin variable unless it's null or undefined, in which case we say the origin is 'local'. We extract (and simplify) the origin because it allows us to distinguish between websites that use the widget, enabling site-specific counts.

To keep count, we use our totals object and add a property for every new origin with an initial value of 0. On each connection, we add 1 to totals[origin] while listening to our socket; for the disconnect event, we subtract 1 from totals[origin]. If these values were exclusively for server use, our solution would be complete. However, we need a way to communicate the total connections to the client, but on a site-by-site basis. Socket.IO has had a handy feature since version 0.7 that allows us to group sockets into rooms by using the socket.join method. We cause each socket to join a room named after its origin, then we use the io.sockets.to(origin).emit method to instruct socket.io to only emit to sockets that belong to the originating site's room. In both the io.sockets connection and socket disconnect events, we emit our specific totals to the corresponding sockets, to update each client with the total number of connections to the site the user is on.

The widget_client.js file simply creates a div element called #_loc and updates it with any new totals it receives from widget_server.js.

There's more...

Let's look at how our app could be made more scalable, as well as looking at another use for WebSockets.

Preparing for scalability

If we were to serve thousands of websites, we would need scalable memory storage, and Redis would be a perfect fit. It operates in memory but also allows us to scale across multiple servers. We'll need Redis installed along with the redis module. We'll alter our totals variable so it contains a Redis client instead of a JavaScript object:

```
var io = require('socket.io')(),
  totals = require('redis').createClient(),
  //other variables
```

Now, we modify our connection event handler as shown in the following code:

```
io.sockets.on('connection', function (socket) {
  var origin = (socket.handshake.xdomain)
    ? url.parse(socket.handshake.headers.origin).hostname
    : 'local';
  socket.join(origin);
  totals.incr(origin, function (err, total) {
    io.sockets.to(origin).emit('total', total);
  });
  socket.on('disconnect', function () {
    totals.decr(origin, function (err, total) {
      io.sockets.to(origin).emit('total', total);
    });
  });
});
```

Instead of adding 1 to totals[origin], we use the Redis INCR command to increment a Redis key named after origin.
Redis automatically creates the key if it doesn't exist. When a client disconnects, we do the reverse and readjust totals using DECR.

WebSockets as a development tool

When developing a website, we often change something small in our editor, upload our file (if necessary), refresh the browser, and wait to see the results. What if the browser would refresh automatically whenever we saved any file relevant to our site? We can achieve this with the fs.watch method and WebSockets. The fs.watch method monitors a directory, executing a callback whenever a change to any files in the folder occurs (but it doesn't monitor subfolders). The fs.watch method is dependent on the operating system. To date, fs.watch has also been historically buggy (mostly under Mac OS X). Therefore, until further advancements, fs.watch is suited purely to development environments rather than production (you can monitor how fs.watch is doing by viewing the open and closed issues at https://github.com/joyent/node/search?q=fs.watch&ref=cmdform&state=open&type=Issues).

Our development tool could be used alongside any framework, from PHP to static files. For the server counterpart of our tool, we'll configure watcher.js:

```
var io = require('socket.io')(),
  fs = require('fs'),
  totals = {},
  watcher = function () {
    var socket = io.connect('ws://localhost:8081');
    socket.on('update', function () {
      location.reload();
    });
  },
  clientScript = Buffer.concat([
    require('socket.io/node_modules/socket.io-client').source,
    Buffer(';(' + watcher + '());')
  ]);

io.static(false);
io.attach(require('http').createServer(function (req, res) {
  res.setHeader('Content-Type', 'text/javascript; charset=utf-8');
  res.end(clientScript);
}).listen(8081));

fs.watch('content', function (e, f) {
  if (f[0] !== '.') {
    io.sockets.emit('update');
  }
});
```

Most of this code is familiar. We make a socket.io server (on a different port to avoid clashing), generate a concatenated socket.io.js plus client-side watcher code file, and deliver it via our attached server. Since this is a quick tool for our own development uses, our client-side code is written as a normal JavaScript function (our watcher variable), converted to a string while wrapping it in self-calling function code, and then changed to a Buffer so it's compatible with Buffer.concat.

The last piece of code calls the fs.watch method, where the callback receives the event name (e) and the filename (f). We check that the filename isn't a hidden dotfile. During a save event, some filesystems or editors will change the hidden files in the directory, thus triggering multiple callbacks and sending several messages at high speed, which can cause issues for the browser.

To use it, we simply place it as a script within every page that is served (probably using server-side templating). However, for demonstration purposes, we simply place the following code into content/index.html:

```
<script src="http://localhost:8081/socket.io/watcher.js"></script>
```

Once we fire up server.js and watcher.js, we can point our browser to http://localhost:8080 and see the familiar excited Yay!. Any changes we make and save (either to index.html, styles.css, script.js, or the addition of new files) will be almost instantly reflected in the browser. The first change we can make is to get rid of the alert box in the script.js file so that the changes can be seen fluidly.
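As noted earlier, some filesystems and editors fire several callbacks in quick succession for a single save. If that becomes a problem, a simple guard is to debounce the emit so a burst of events produces one refresh. A minimal sketch (the 100 ms delay is our own arbitrary choice, not from the recipe):

```
var timer;
fs.watch('content', function (e, f) {
  if (f[0] === '.') { return; } // still ignore hidden dotfiles
  clearTimeout(timer);          // reset the timer on every event in the burst
  timer = setTimeout(function () {
    io.sockets.emit('update');  // fire once, after the burst has settled
  }, 100);
});
```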
Summary

We saw how we could create a real-time widget in this article. We also used some third-party modules to explore some of the potential of the powerful combination of Node and WebSockets.

Resources for Article:

Further resources on this subject:
- Understanding and Developing Node Modules [Article]
- So, what is Node.js? [Article]
- Setting up Node [Article]


Best Practices for Modern Web Applications

Packt
22 Apr 2014
9 min read
(For more resources related to this topic, see here.)

The importance of search engine optimization

Every day, web crawlers scrape the Internet for updates on new content to update their associated search engines. People's immediate reaction to finding web pages is to load a query on a search engine and select the first few results. Search engine optimization is a set of practices used to maintain and improve search result ranks over time.

Item 1 – using keywords effectively

In order to provide information to web crawlers, websites provide keywords in their HTML meta tags and content. The optimal procedure to attain effective keyword usage is to:

- Come up with a set of keywords that are pertinent to your topic
- Research common search keywords related to your website
- Take an intersection of these two sets of keywords and preemptively use them across the website

Once this final set of keywords is determined, it is important to spread them across your website's content whenever possible. For instance, a ski resort in California should ensure that their website includes terms such as California, skiing, snowboarding, and rentals. These are all terms that individuals would look up via a search engine when they are interested in a weekend at a ski resort.

Contrary to popular belief, the keywords meta tag does not create any value for site owners, as many search engines consider it a deprecated index for search relevance. The reasoning behind this goes back many years, to when many websites would clutter their keywords meta tag with irrelevant filler words to bait users into visiting their sites. Today, many of the top search engines have decided that content is a much more powerful indicator for search relevance and have concentrated on this instead. However, other meta tags, such as description, are still being used for displaying website content on search rankings. These should be brief but powerful passages to pull in users from the search page to your website.

Item 2 – header tags are powerful

Header tags (also known as h-tags) are often used by web crawlers to determine the main topic of a given web page or section. It is often recommended to use only one set of h1 tags to identify the primary purpose of the web page, and any number of the other header tags (h2, h3, and so on) to identify section headings.

Item 3 – make sure to have alternative attributes for images

Despite the recent advances in image recognition technology, web crawlers do not possess the resources necessary for parsing images for content throughout the Internet today. As a result, it is advisable to leave an alt attribute for search engines to parse while they scrape your web page. For instance, let us suppose you were the webmaster of the Seattle Water Sanitation Plant and wished to upload the following image to your website:

Since web crawlers make use of the alt tag while sifting through images, you would ideally upload the preceding image using the following code:

```
<img src="flow_chart.png" alt="Seattle Water Sanitation Process Flow Chart" />
```

This will leave the content in the form of a keyword or phrase that can help contribute to the relevancy of your web page on search results.

Item 4 – enforcing clean URLs

While creating web pages, you'll often find the need to identify them with a URL ID. The simplest way often is to use a number or symbol that maps to your data for simple information retrieval. The problem with this is that a number or symbol does not help to identify the content for web crawlers or your end users.
The solution to this is to use clean URLs. By adding a topic name or phrase into the URL, you give web crawlers more keywords to index off. Additionally, end users who receive the link will be given the opportunity to evaluate the content with more information, since they know the topic discussed in the web page. A simple way to integrate clean URLs while retaining the number or symbol identifier is to append a readable slug, which describes the topic, to the end of the clean URL and after the identifier. Then, apply a regular expression to parse out the identifier for your own use; for instance, take a look at the following sample URL:

http://www.example.com/post/24/golden-dragon-review

The number 24, when parsed out, helps your server easily identify the blog post in question. The slug, golden-dragon-review, communicates the topic at hand to both web crawlers and users. While creating the slug, the best practice is often to remove all non-alphanumeric characters and replace all spaces with dashes. Contractions such as can't, don't, or won't can be replaced by cant, dont, or wont, because search engines can easily infer their intended meaning. It is important to also realize that spaces should not be replaced by underscores, as they are not interpreted appropriately by web crawlers.
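As an illustration of both halves of that scheme (the route pattern and helper name here are our own, not from the text), the identifier can be parsed out and a slug generated as follows:

```
// Parse the identifier (and ignore the slug) from a clean URL.
var match = '/post/24/golden-dragon-review'.match(/^\/post\/(\d+)(?:\/[\w-]*)?$/);
var id = match && match[1]; // '24': enough to look up the record

// Build a slug: drop apostrophes, turn runs of other
// non-alphanumerics into dashes, and trim stray dashes.
function slugify(title) {
  return title
    .toLowerCase()
    .replace(/'/g, '')
    .replace(/[^a-z0-9]+/g, '-')
    .replace(/^-+|-+$/g, '');
}
slugify("Golden Dragon Review"); // 'golden-dragon-review'
```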
Item 5 – backlink whenever safe and possible

Search rankings are influenced by your website's clout throughout websites that search engines deem trustworthy. For instance, due to the restrictive access of .edu or .gov domains, websites that use these domains are deemed trustworthy and given a higher level of authority when it comes down to search rankings. This means that any websites that are backlinked on trustworthy websites are seen at a higher value as a result. Thus, it is important to often consider backlinking on relevant websites where users would actively be interested in the content. If you choose to backlink irrelevantly, there are often consequences that you'll face, as this practice can often be caught automatically by web crawlers while comparing the keywords between your link and the backlink host.

Item 6 – handling HTTP status codes properly

Status codes help the client and server communicate the state of page requests in a clean and consistent manner. The following chart reviews the most important status codes and what they do:

| Status Code | Alias | Effect on SEO |
| --- | --- | --- |
| 200 | Success | This loads the page and the content is contributed to SEO |
| 301 | Permanent redirect | This redirects the page and the redirected content is contributed to SEO |
| 302 | Temporary redirect | This redirects the page and the redirected content doesn't contribute to SEO |
| 404 | Client error (not found) | This loads the page and the content does not contribute to SEO |
| 500 | Server error | This will not load the page and there is no content to contribute to SEO |

In an ideal world, all pages would return the 200 status code. Unfortunately, URLs get misspelled, servers throw exceptions, and old pages get moved, which leads to the need for other status codes. Thus, it is important that each situation be handled to maximize communication to both web crawlers and users and minimize damage to one's search ranking.

When a URL gets misspelled, it is important to provide a 301 redirect to a close match or another popular web page. This can be accomplished by using a clean URL and parsing out an identifier, regardless of the slug that follows it. This way, there exists content that contributes directly to the search ranking instead of just leaving a 404 page. Server errors should be handled as soon as possible. When a page does not load, it harms the experience for both users and web crawlers, and over an extended period of time, can expire that page's rank. Lastly, 404 pages should be developed with your users in mind. When you choose not to redirect them to the most relevant link, it is important to either pass in suggested web pages or a search menu to keep them engaged with your content.

The connect-rest-test Grunt plugin can be a healthy addition to any software project to test the status codes and responses from a RESTful API. You can find it at https://www.npmjs.org/package/connect-rest-test. Alternatively, while testing pages outside of your RESTful API, you may be interested in considering grunt-http-verify to ensure that status codes are returned properly. You can find it at https://www.npmjs.org/package/grunt-http-verify.

Item 7 – making use of your robots.txt and site map files

Often, there exist directories in a website that are available to the public but should not be indexed by a search engine. The robots.txt file, when placed in your website's root, helps to define exclusion rules for web crawling and prevent a user-defined set of search engines from entering certain directories. For instance, the following example disallows all search engines that choose to parse your robots.txt file from visiting the music directory on a website:

```
User-agent: *
Disallow: /music/
```

While writing navigation tools with dynamic content such as JavaScript libraries or Adobe Flash widgets, it's important to understand that web crawlers have limited capability in scraping these. Site maps help to define the relational mapping between web pages when crawlers cannot heuristically infer it themselves. Whereas the robots.txt file defines a set of search engine exclusion rules, the sitemap.xml file, also located in a website's root, helps to define a set of search engine inclusion rules. The following XML snippet is a brief example of a site map that defines the attributes:

```
<?xml version="1.0" encoding="utf-8"?>
<urlset>
  <url>
    <loc>http://example.com/</loc>
    <lastmod>2014-11-24</lastmod>
    <changefreq>always</changefreq>
    <priority>0.8</priority>
  </url>
  <url>
    <loc>http://example.com/post/24/golden-dragon-review</loc>
    <lastmod>2014-07-13</lastmod>
    <changefreq>never</changefreq>
    <priority>0.5</priority>
  </url>
</urlset>
```

The attributes mentioned in the preceding code are explained in the following table:

| Attribute | Meaning |
| --- | --- |
| loc | This stands for the URL location to be crawled |
| lastmod | This indicates the date on which the web page was last modified |
| changefreq | This indicates how frequently the page is modified, and how often the crawler should visit as a result |
| priority | This indicates the web page's priority in comparison to the other web pages |

Using Grunt to reinforce SEO practices

With the rising popularity of client-side web applications, SEO practices are often not met when page links do not exist without JavaScript. Certain Grunt plugins provide a workaround for this by loading the web pages, waiting for an amount of time to allow the dynamic content to load, and taking an HTML snapshot. These snapshots are then provided to web crawlers for search engine purposes, and the user-facing dynamic web applications are excluded from scraping completely.
Some examples of Grunt plugins that accomplish this need are:

- grunt-html-snapshots (https://www.npmjs.org/package/grunt-html-snapshots)
- grunt-ajax-seo (https://www.npmjs.org/package/grunt-ajax-seo)


Bootstrap 3 and other applications

Packt
21 Apr 2014
10 min read
(For more resources related to this topic, see here.)

Bootstrap 3

Bootstrap 3, formerly known as Twitter's Bootstrap, is a CSS and JavaScript framework for building application frontends. The third version of Bootstrap has important changes over the earlier versions of the framework, and is not compatible with them. Bootstrap 3 can be used to build great frontends. You can download the complete framework, including CSS and JavaScript, and start using it right away.

Bootstrap also has a grid. The grid of Bootstrap is mobile-first by default and has 12 columns. In fact, Bootstrap defines four grids: the extra-small grid up to 768 pixels (mobile phones), the small grid between 768 and 992 pixels (tablets), the medium grid between 992 and 1200 pixels (desktops), and finally, the large grid of 1200 pixels and above for large desktops. The grid, all other CSS components, and the JavaScript plugins are described and well documented at http://getbootstrap.com/.

Bootstrap's default theme looks like the following screenshot:

Example of a layout built with Bootstrap 3

The time when all Bootstrap websites looked quite similar is far behind us now. Bootstrap will give you all the freedom you need to create innovative designs. There is much more to tell about Bootstrap, but for now, let's get back to Less.

Working with Bootstrap's Less files

All the CSS code of Bootstrap is written in Less. You can download Bootstrap's Less files and recompile your own version of the CSS. The Less files can be used to customize, extend, and reuse Bootstrap's code. In the following sections, you will learn how to do this. To download the Less files, follow the links at http://getbootstrap.com/ to Bootstrap's GitHub pages at https://github.com/twbs/bootstrap. On this page, choose Download Zip on the right-hand side column.

Building a Bootstrap project with Grunt

After downloading the files mentioned earlier, you can build a Bootstrap project with Grunt. Grunt is a JavaScript task runner; it can be used for the automation of your processes. Grunt helps you when performing repetitive tasks such as minifying, compiling, unit testing, and linting your code. Grunt runs on Node.js and uses npm, which you saw while installing the Less compiler. Node.js is a standalone JavaScript interpreter built on Google's V8 JavaScript runtime, as used in Chrome. Node.js can be used for easily building fast, scalable network applications.

When you unzip the files from the downloaded file, you will find Gruntfile.js and package.json among others. The package.json file contains the metadata for projects published as npm modules. The Gruntfile.js file is used to configure or define tasks and load Grunt plugins. The Bootstrap Grunt configuration is a great example to show you how to set up automation testing for projects containing HTML, Less (CSS), and JavaScript. The parts that are interesting for you as a Less developer are mentioned in the following sections.

In the package.json file, you will find that Bootstrap compiles its Less files with grunt-contrib-less. At the time of writing this article, the grunt-contrib-less plugin compiles Less with less.js version 1.7. In contrast to Recess (another JavaScript build tool previously used by Bootstrap), grunt-contrib-less also supports source maps. Apart from grunt-contrib-less, Bootstrap also uses grunt-contrib-csslint to check the compiled CSS for syntax errors.
The grunt-contrib-csslint plugin also helps improve browser compatibility, performance, maintainability, and accessibility. The plugin's rules are based on the principles of object-oriented CSS (http://www.slideshare.net/stubbornella/object-oriented-css). You can find more information by visiting https://github.com/stubbornella/csslint/wiki/Rules.

Bootstrap makes heavy use of Less variables, which can be set by the customizer. Whoever has studied the source of Gruntfile.js may very well also find a reference to the BsLessdocParser Grunt task. This Grunt task is used to build Bootstrap's customizer dynamically, based on the Less variables used by Bootstrap. Though the process of parsing Less variables to build, for instance, documentation will be very interesting, this task is not discussed here further.

This section ends with the part of Gruntfile.js that does the Less compiling. The following code from Gruntfile.js should give you an impression of how this code will look:

```
less: {
  compileCore: {
    options: {
      strictMath: true,
      sourceMap: true,
      outputSourceFiles: true,
      sourceMapURL: '<%= pkg.name %>.css.map',
      sourceMapFilename: 'dist/css/<%= pkg.name %>.css.map'
    },
    files: {
      'dist/css/<%= pkg.name %>.css': 'less/bootstrap.less'
    }
  }
}
```

Last but not least, let's have a look at the basic steps to run Grunt from the command line and build Bootstrap. Grunt will be installed with npm. Npm checks Bootstrap's package.json file and automatically installs the necessary local dependencies listed there. To build Bootstrap with Grunt, you will have to enter the following commands on the command line:

```
> npm install -g grunt-cli
> cd /path/to/extracted/files/bootstrap
```

After this, you can compile the CSS and JavaScript by running the following command:

```
> grunt dist
```

This will compile your files into the /dist directory. The > grunt test command will also run the built-in tests.

Compiling your Less files

Although you can build Bootstrap with Grunt, you don't have to use Grunt. You will find the Less files in a separate directory called /less inside the root /bootstrap directory. The main project file is bootstrap.less; other files will be explained in the next section. You can include bootstrap.less together with less.js into your HTML for the purpose of testing, as follows:

```
<link rel="stylesheet/less" type="text/css" href="less/bootstrap.less" />
<script type="text/javascript">less = { env: 'development' };</script>
<script src="less.js" type="text/javascript"></script>
```

Of course, you can compile this file server side too, as follows:

```
lessc bootstrap.less > bootstrap.css
```

Dive into Bootstrap's Less files

Now it's time to look at Bootstrap's Less files in more detail. The /less directory contains a long list of files. You will recognize some files by their names. You have seen files such as variables.less, mixins.less, and normalize.less earlier. Open bootstrap.less to see how the other files are organized. The comments inside bootstrap.less tell you that the Less files are organized by functionality, as shown in the following code snippet:

```
// Core variables and mixins
// Reset
// Core CSS
// Components
```

Although Bootstrap is strongly CSS-based, some of the components don't work without the related JavaScript plugins. The navbar component is an example of this. Bootstrap's plugins require jQuery. You can't use the newest 2.x version of jQuery because this version doesn't have support for Internet Explorer 8. To compile your own version of Bootstrap, you have to change the variables defined in variables.less.
Thanks to Less's last-declaration-wins and lazy-loading rules, redeclaring variables like this is easy.

Creating a custom button with Less

By default, Bootstrap defines seven different buttons, as shown in the following screenshot:

The seven different button styles of Bootstrap 3

Please take a look at the following HTML structure of Bootstrap's buttons before you start writing your Less code:

<!-- Standard button -->
<button type="button" class="btn btn-default">Default</button>

A button has two classes. The first class, .btn, only provides layout styles, and the second class, .btn-default, adds the colors. In this example, you will only change the colors; the button's layout will be kept intact.

Open buttons.less in your text editor. In this file, you will find the following Less code for the different buttons:

// Alternate buttons
// --------------------------------------------------

.btn-default {
  .button-variant(@btn-default-color; @btn-default-bg; @btn-default-border);
}

The preceding code makes it clear that you can use the .button-variant() mixin to create your customized buttons. For instance, to define a custom button, you can use the following Less code:

// Customized colored button
// --------------------------------------------------

.btn-colored {
  .button-variant(blue;red;green);
}

If you want to extend Bootstrap with this customized button, add your code to a new file and call this file custom.less. Appending @import "custom.less"; to the list of components inside bootstrap.less will work well. The disadvantage of doing this is that you will have to change bootstrap.less again when updating Bootstrap; so, alternatively, you could create a file such as custombootstrap.less, which contains the following code:

@import "bootstrap.less";
@import "custom.less";

The previous step extends Bootstrap with a custom button; alternatively, you could also change the colors of the default button by redeclaring its variables. To do this, create a new file, custombootstrap.less again, and add the following code into it:

@import "bootstrap.less";

//== Buttons
//
//## For each of Bootstrap's buttons, define text, background, and border color.

@btn-default-color: blue;
@btn-default-bg: red;
@btn-default-border: green;

In some situations, you will need to use, for instance, only the button styles without everything else of Bootstrap. In these situations, you can use the reference keyword with the @import directive. You can use the following Less code to create a Bootstrap button for your project:

@import (reference) "bootstrap.less";

.btn:extend(.btn){};

.btn-colored {
  .button-variant(blue;red;green);
}

You can see the result of the preceding code by visiting http://localhost/index.html in your browser. Notice that, depending on the version of less.js you use, you may find some unexpected classes in the compiled output. Media queries or extended classes sometimes break the referencing in older versions of less.js.
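Whichever variant you choose, compiling it server side works the same way as before; a quick sketch, assuming custombootstrap.less sits next to Bootstrap's Less files:

lessc custombootstrap.less > custombootstrap.css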
Use CSS source maps for debugging

When working with a large Less code base, finding the original source of a style can become complex when inspecting your results in the browser. Since version 1.5, Less offers support for CSS source maps. CSS source maps enable developer tools to map calls back to their location in the original source files. This also works for compressed files. The latest versions of Google's Chrome Developer Tools offer support for these source files.

Currently, CSS source map debugging won't work for client-side compiling as used for the examples in this book. The server-side lessc compiler can generate useful CSS source maps. After installing the lessc compiler, you can run:

> lessc --source-map=styles.css.map styles.less > styles.css

The preceding command will generate two files: styles.css.map and styles.css. The last line of styles.css now contains an extra comment which refers to the source map:

/*# sourceMappingURL=styles.css.map */

In your HTML, you only have to include styles.css as you are used to:

<link href="styles.css" rel="stylesheet">

When using CSS source maps as described earlier and inspecting your HTML with Google's Chrome Developer Tools, you will see something like the following screenshot:

Inspect source with Google's Chrome Developer Tools and source maps

As you can see, styles now have a reference to their original Less file, such as grid.less, including the line number, which helps you in the process of debugging. The styles.css.map file should be in the same directory as the styles.css file. You don't have to include your Less files in this directory.

Summary

This article has covered the concept of Bootstrap, how to use Bootstrap's Less files, and how the files can be modified to suit your needs.

Resources for Article:

Further resources on this subject:

Getting Started with Bootstrap [Article]
Bootstrap 3.0 is Mobile First [Article]
Downloading and setting up Bootstrap [Article]
Creating a Responsive Magento Theme with Bootstrap 3

Packt
21 Apr 2014
13 min read
In this article, by Andrea Saccà, the author of Mastering Magento Theme Design, we will learn how to integrate the Bootstrap 3 framework and how to develop the main theme blocks. The following topics will be covered in this article:

An introduction to Bootstrap
Downloading Bootstrap (the current Version 3.1.1)
Downloading and including jQuery
Integrating the files into the theme
Defining the main layout design template

(For more resources related to this topic, see here.)

An introduction to Bootstrap 3

Bootstrap is a sleek, intuitive, and powerful mobile-first frontend framework that enables faster and easier web development, as shown in the following screenshot:

Bootstrap 3 is the most popular frontend framework used to create mobile-first websites. It includes a free collection of buttons, CSS components, and JavaScript to create websites or web applications; it was created by the Twitter team.

Downloading Bootstrap (the current Version 3.1.1)

First, you need to download the latest version of Bootstrap; at the time of writing, this is Version 3.1.1. You can download the framework from http://getbootstrap.com/. The fastest way to download Bootstrap 3 is to download the precompiled and minified versions of the CSS, JavaScript, and fonts. So, click on the Download Bootstrap button and unzip the file you downloaded. Once the archive is unzipped, you will see the following files:

We need to take only the minified version of the files, that is, bootstrap.min.css from css, bootstrap.min.js from js, and all the files from fonts. For development, you can use bootstrap.css so that you can inspect the code and learn, and then switch to bootstrap.min.css when you go live.

Copy all the selected files (the CSS files inside the css folder, the .js files inside the js folder, and the font files inside the fonts folder) into the theme skin folder at skin/frontend/bookstore/default.

Downloading and including jQuery

Bootstrap is dependent on jQuery, so we have to download and include it before including bootstrap.min.js. So, download jQuery from http://jquery.com/download/. The preceding URL takes us to the following screenshot:

We will use the compressed production Version 1.10.2. Once you download jQuery, rename the file as jquery.min.js and copy it into the js skin folder at skin/frontend/bookstore/default/js/.

In the same folder, also create the jquery.scripts.js file, where we will insert our custom scripts. Magento uses Prototype as the main JavaScript library. To make jQuery work correctly without conflicts, you need to insert the noConflict code in the jquery.scripts.js file, as shown in the following code:

// This is important!
jQuery.noConflict();

jQuery(document).ready(function() {
  // Insert your scripts here
});

The following is a quick recap of the CSS and JS files:

Integrating the files into the theme

Now that we have all the files, we will see how to integrate them into the theme. To declare the new JavaScript and CSS files, we have to insert the actions in the local.xml file located at app/design/frontend/bookstore/default/layout. In particular, the file declaration needs to be done in the default handle to make it accessible to the whole theme.

The default handle is defined by the following tags:

<default>
. . .
</default>

The action to insert the JavaScript and CSS files must be placed inside the reference head block.
So, open the local.xml file and first create the following block that will define the reference:

<reference name="head">
…
</reference>

Declaring the .js files in local.xml

The action tag used to declare a new .js file located in the skin folder is as follows:

<action method="addItem">
  <type>skin_js</type>
  <name>js/myjavascript.js</name>
</action>

In our skin folder, we copied the following three .js files:

jquery.min.js
jquery.scripts.js
bootstrap.min.js

Let's declare them as follows:

<action method="addItem">
  <type>skin_js</type>
  <name>js/jquery.min.js</name>
</action>
<action method="addItem">
  <type>skin_js</type>
  <name>js/bootstrap.min.js</name>
</action>
<action method="addItem">
  <type>skin_js</type>
  <name>js/jquery.scripts.js</name>
</action>

Declaring the CSS files in local.xml

The action tag used to declare a new CSS file located in the skin folder is as follows:

<action method="addItem">
  <type>skin_css</type>
  <name>css/mycss.css</name>
</action>

In our skin folder, we have copied the following three .css files:

bootstrap.min.css
styles.css
print.css

So let's declare these files as follows:

<action method="addItem">
  <type>skin_css</type>
  <name>css/bootstrap.min.css</name>
</action>
<action method="addItem">
  <type>skin_css</type>
  <name>css/styles.css</name>
</action>
<action method="addItem">
  <type>skin_css</type>
  <name>css/print.css</name>
</action>

Repeat this action for all the additional CSS files. All the JavaScript and CSS files that you insert into the local.xml file will go after the files declared in the base theme.

Removing and adding the styles.css file

By default, the base theme includes a CSS file called styles.css, which is hierarchically placed before bootstrap.min.css. One of the best practices to overwrite the Bootstrap CSS classes in Magento is to remove the default CSS file declared by the base theme and declare it again after Bootstrap's CSS files. Thus, the styles.css file loads after Bootstrap, and all the classes defined in it will overwrite the ones in bootstrap.min.css.
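To picture what we are aiming for, the following is a rough sketch of the relevant part of the compiled page head; the order of the link tags is what matters here, and the real URLs will contain the full skin path (this sketch is our illustration, not the exact Magento output):

<!-- simplified; actual URLs include the full skin base URL -->
<link rel="stylesheet" type="text/css" href=".../skin/frontend/bookstore/default/css/bootstrap.min.css" />
<link rel="stylesheet" type="text/css" href=".../skin/frontend/bookstore/default/css/styles.css" />

Because styles.css comes last, its rules win whenever both files target the same selector with equal specificity.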
To achieve this order, we need to remove the styles.css file by adding the following action tag in the XML, just before all the CSS declarations we have already made:

<action method="removeItem">
  <type>skin_css</type>
  <name>css/styles.css</name>
</action>

Hence, we removed the styles.css file and added it again just after adding Bootstrap's CSS file (bootstrap.min.css):

<action method="addItem">
  <type>skin_css</type>
  <stylesheet>css/styles.css</stylesheet>
</action>

If it seems a little confusing, the following is a quick view of the CSS declarations:

<!-- Removing the styles.css declared in the base theme -->
<action method="removeItem">
  <type>skin_css</type>
  <name>css/styles.css</name>
</action>

<!-- Adding Bootstrap CSS -->
<action method="addItem">
  <type>skin_css</type>
  <stylesheet>css/bootstrap.min.css</stylesheet>
</action>

<!-- Adding the styles.css again -->
<action method="addItem">
  <type>skin_css</type>
  <stylesheet>css/styles.css</stylesheet>
</action>

Adding conditional JavaScript code

If you check the Bootstrap documentation, you can see that in the HTML5 boilerplate template, the following conditional JavaScript code is added to make Internet Explorer (IE) HTML5 compliant:

<!--[if lt IE 9]>
<script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
<script src="https://oss.maxcdn.com/libs/respond.js/1.3.0/respond.min.js"></script>
<![endif]-->

To integrate them into the theme, we can declare them in the same way as the other script tags, but with conditional parameters. To do this, we need to perform the following steps:

Download the files at https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js and https://oss.maxcdn.com/libs/respond.js/1.3.0/respond.min.js.
Move the downloaded files into the js folder of the theme.
As always, integrate the JavaScript through the .xml file, but this time with the conditional parameters as follows:

<action method="addItem">
  <type>skin_js</type>
  <name>js/html5shiv.js</name>
  <params/>
  <if>lt IE 9</if>
</action>
<action method="addItem">
  <type>skin_js</type>
  <name>js/respond.min.js</name>
  <params/>
  <if>lt IE 9</if>
</action>

A quick recap of our local.xml file

Now, after we have inserted all the JavaScript and CSS files in the .xml file, the final local.xml file should look as follows:

<?xml version="1.0" encoding="UTF-8"?>
<layout version="0.1.0">
  <default translate="label" module="page">
    <reference name="head">

      <!-- Adding JavaScripts -->
      <action method="addItem">
        <type>skin_js</type>
        <name>js/jquery.min.js</name>
      </action>
      <action method="addItem">
        <type>skin_js</type>
        <name>js/bootstrap.min.js</name>
      </action>
      <action method="addItem">
        <type>skin_js</type>
        <name>js/jquery.scripts.js</name>
      </action>
      <action method="addItem">
        <type>skin_js</type>
        <name>js/html5shiv.js</name>
        <params/>
        <if>lt IE 9</if>
      </action>
      <action method="addItem">
        <type>skin_js</type>
        <name>js/respond.min.js</name>
        <params/>
        <if>lt IE 9</if>
      </action>

      <!-- Removing the styles.css -->
      <action method="removeItem">
        <type>skin_css</type>
        <name>css/styles.css</name>
      </action>

      <!-- Adding Bootstrap CSS -->
      <action method="addItem">
        <type>skin_css</type>
        <stylesheet>css/bootstrap.min.css</stylesheet>
      </action>

      <!-- Adding the styles.css -->
      <action method="addItem">
        <type>skin_css</type>
        <stylesheet>css/styles.css</stylesheet>
      </action>

    </reference>
  </default>
</layout>

Defining the main layout design template

A quick tip for our theme is to define the main template for the site in the default handle. To do this, we have to set the template in the most important reference, root.
In a few words, the root reference is the block that defines the structure of a page. Let's suppose that we want to use a main structure with two columns and the left sidebar for the theme. To change the structure, we add the setTemplate action in the root reference as follows:

<reference name="root">
  <action method="setTemplate">
    <template>page/2columns-left.phtml</template>
  </action>
</reference>

You have to insert the reference name "root" tag with the action inside the default handle, usually before every other reference.

Defining the HTML5 boilerplate for main templates

After integrating Bootstrap and jQuery, we have to create our HTML5 page structure for the entire base template. The following are the structure files, which are located at app/design/frontend/bookstore/template/page/:

1column.phtml
2columns-left.phtml
2columns-right.phtml
3columns.phtml

Twitter Bootstrap uses scaffolding with containers, rows, and 12 columns. So, its page layout would be as follows:

<div class="container">
  <div class="row">
    <div class="col-md-3"></div>
    <div class="col-md-9"></div>
  </div>
</div>

This structure is very important for creating responsive sections of the store. Now we need to edit the templates to change them to HTML5 and add the Bootstrap scaffolding. Let's look at the following 2columns-left.phtml main template file:

<!DOCTYPE HTML>
<html>
<head>
  <?php echo $this->getChildHtml('head') ?>
</head>
<body <?php echo $this->getBodyClass()?' class="'.$this->getBodyClass().'"':'' ?>>
  <?php echo $this->getChildHtml('after_body_start') ?>
  <?php echo $this->getChildHtml('global_notices') ?>
  <header>
    <?php echo $this->getChildHtml('header') ?>
  </header>
  <section id="after-header">
    <div class="container">
      <?php echo $this->getChildHtml('slider') ?>
    </div>
  </section>
  <section id="maincontent">
    <div class="container">
      <div class="row">
        <?php echo $this->getChildHtml('breadcrumbs') ?>
        <aside class="col-left sidebar col-md-3">
          <?php echo $this->getChildHtml('left') ?>
        </aside>
        <div class="col-main col-md-9">
          <?php echo $this->getChildHtml('global_messages') ?>
          <?php echo $this->getChildHtml('content') ?>
        </div>
      </div>
    </div>
  </section>
  <footer id="footer">
    <div class="container">
      <?php echo $this->getChildHtml('footer') ?>
    </div>
  </footer>
  <?php echo $this->getChildHtml('before_body_end') ?>
  <?php echo $this->getAbsoluteFooter() ?>
</body>
</html>

You will notice that I removed the Magento layout classes col-main, col-left, main, and so on, as these are being replaced by the Bootstrap classes. I also added a new section, after-header, because we will need it after we develop the home page slider. Don't forget to replicate this structure in the other template files (1column.phtml, 2columns-right.phtml, and 3columns.phtml), changing the columns as you need.

Summary

We've seen how to integrate Bootstrap and start the development of a Magento theme with the most famous framework in the world. Bootstrap is very neat, flexible, and modular, and you can use it as you prefer to create your custom theme. However, please keep in mind that it can be a big drawback on the loading time of the page. By following these techniques and adding the JavaScript and CSS classes via XML, you allow Magento to minify them to speed up the loading time of the site.

Resources for Article:

Further resources on this subject:

Integrating Twitter with Magento [article]
Magento : Payment and shipping method [article]
Magento: Exploring Themes [article]
Skeuomorphic versus flat

Packt
18 Apr 2014
8 min read
(For more resources related to this topic, see here.)

Skeuomorphism is defined as an element of design or structure that serves little or no purpose in the artifact fashioned from the new material but was essential to the object made from the original material (courtesy: Wikipedia — http://en.wikipedia.org/wiki/Skeuomorph).

Apple created several skeuomorphic interfaces for their desktop and mobile apps, such as iCal, iBooks, Find My Friends, Podcasts, and several others. This kind of interface was both loved and hated within the design community and among users. It was a style that focused heavily on detail and texture, making the interface heavier and often more complex, but interesting because of the clear connection to the real objects depicted. The high level of detail and interaction made for an enjoyable, rich experience, drawing the eye to the care put into these designs; for example, the page flip in iBooks visually represents the swipe of a page as in a traditional book.

But this style also had its downsides. Besides being a harsh transition from traditional interfaces (in the case of Apple, it meant coming from its famous glassy and clean-looking Aqua interface), several skeuomorphic applications on the desktop didn't seem to fit the overall OS look. Apart from stylistic preferences and incoherent looks, skeuomorphic design is also a poor design choice because the style in itself limits innovation. By replicating traditional and analog designs, the designer doesn't have the option or the freedom to imagine, create, and design new interfaces and interactions with the user.

Flat design, being the extremely simple and clear style that it is, gives all the freedom to the designer by removing those limitations and effects. But both styles have a place and time to be used, and skeuomorphism is great for applications such as Propellerhead's, which directly replace hardware such as audio mixers. Using these kinds of interfaces makes it easier for new users to learn how to use the real hardware counterpart, while at the same time previous users of the hardware will already know how to use the interface with ease.

Regardless of the style, a good designer must be ready to create an interface that is adapted to the needs of the user and the market. To exemplify this, and to better learn the basic differences between flat and skeuomorphic, let's do a quick exercise.

Exercise – the skeuomorphic and flat buttons

In this exercise, we'll create a simple call-to-action button with the copy Buy Now. We'll create this element twice: first we'll take the skeuomorphic approach by creating a realistic-looking button with texture, shadow, and depth. Next, we will convert it to its flat counterpart by removing all those extra elements and adapting it to a minimalistic style.

You should have all the materials you'll need for this exercise. We will use the typeface Lato, available for free on Google Fonts, and the image wood.jpg for the texture on the skeuomorphic button. We'll just need Photoshop for this exercise, so let's open it up and use the following steps:

Create a new Photoshop document measuring 800 x 600 px. This is where we will create our buttons.
Let's start by creating the skeuomorphic button. We start by creating a rectangle with the rounded rectangle tool, with a radius of 20 px. This will be the face of our button.
To make it easier to visualize the element while we create it, let's make it gray (#a2a2a2).

Now that we have our button face created, let's give the button some depth. Duplicate the layer (command + J on Mac or Ctrl + J on Windows) and pull it down by 10 or 15 px, whichever you prefer. Make this new rectangle a darker shade of gray (#393939) and make sure that this layer sits below the face layer. You should now have a simple gray button with some depth. The side layer simulates the depth of the button by being pulled down just a couple of pixels, and since we made it darker, it resembles a shadow.

Now for the call to action. Create a textbox on top of the button face, set its width to that of the button, and center the text. In there, write Buy Now, and set the text to Lato, the weight to Black, and the size to 50 pt. Center it vertically by eye until you find that it sits correctly in the center of the button.

Now, to make this button really skeuomorphic, let's take our wood.jpg image and use it as our texture. Create a new layer named wood-face and make sure it's above our face layer. To define the layer as a texture and use our button as a mask, right-click on the layer and click on Create clipping mask. This will mask our texture so it overlays the button face.

For the side texture, duplicate the wood-face layer, rename it to wood-side, and repeat the preceding instructions for the side layer. After that, and to get a different look, move the wood-side layer around and look for a good area of the texture to use on the side, ideally an area with some vertical grain to make it look more realistic.

To finish the side, add a Gradient Overlay layer style to the side layer, make a gradient from black to transparent, and change the settings as shown in the following screenshot. This will create a shadow effect on top of the wood, making it look a lot better.

To finish our skeuomorphic button, let's go back to the text and define its color as #7b3201 (or another shade of brown; try picking from the button and making it slightly darker until it looks good), so that the text looks carved into the wood. The last touch is to add an Inner Shadow layer style to the text with the settings shown. Group all the layers, name the group Skeuomorphic, and we're done.

And now we have our skeuomorphic button. It's a really simple way of doing it, but we recreated the look of a button made out of wood just by using shapes, a texture, and some layer styles.

Now for our flat version:

Duplicate the group we just created and name it flat. Move it to the other half of the workspace.
Delete the following layers: wood-face, wood-side, and side. This button will not have any depth, so we do not need the side layer or the textures.
To keep the button in the same color scheme as our previous one, we'll use the color #7b3201 for our text and face. Your document should look like what is shown in the following screenshot:
Create a new layer style and choose Stroke with the following settings. This will create the border of our button.
To make the button transparent, reduce the Layer Fill option to 0 percent, which will leave only the layer styles applied.
Remove the layer styles from the text to make it flat, reduce the weight of the font to Bold to make it thinner and roughly the same weight as the border, align it visually, and our flat button is done!
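On the web, this flat button can be reproduced in just a few lines of CSS. The following is a minimal sketch using the colors and sizes from this exercise (the class name and exact pixel values are our own assumptions):

.btn-flat {
  display: inline-block;
  padding: 10px 40px;             /* roughly matches the button face */
  border: 3px solid #7b3201;      /* the Stroke layer style */
  border-radius: 20px;            /* the 20 px corner radius */
  background: transparent;        /* the 0 percent layer fill */
  color: #7b3201;
  font-family: 'Lato', sans-serif;
  font-weight: bold;              /* Bold, matching the border weight */
  font-size: 28px;
  text-align: center;
}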
This type of transparent button is great for flat interfaces, especially when used over a blurred color background, because it creates an impactful control with very few elements and makes great use of the white space in the design. In design, and especially when designing flat, remember that less is more.

With this exercise, you were able to build a skeuomorphic element and deconstruct it down to its flat version, which is as simple as a rounded rectangle with a border and text. The font we chose is frequently used for flat design layouts; it's simple but rounded, and it works great with rounded-corner shapes such as the ones we just created.

Summary

Flat design is a digital style of design that has been one of the biggest trends in recent years in web and user interface design. It is famous for its extremely minimalistic style. It appeared at a time when skeuomorphism, a style of creating realistic interfaces, was considered the biggest and most famous trend, making for a rough and extreme transition for both users and designers. We covered how to design in both skeuomorphic and flat styles, and what their main differences are.

Resources for Article:

Further resources on this subject:

Top Features You Need to Know About – Responsive Web Design [Article]
Web Design Principles in Inkscape [Article]
Calendars in jQuery 1.3 with PHP using jQuery Week Calendar Plugin: Part 2 [Article]
The Software Task Management Tool - Rake

Packt
16 Apr 2014
5 min read
(For more resources related to this topic, see here.)

Installing Rake

As Rake is a Ruby library, you should first install Ruby on the system if you don't have it installed already. The installation process is different for each operating system. However, we will see the installation example only for the Debian operating system family. Just open the terminal and write the following installation command:

$ sudo apt-get install ruby

If you have an operating system that doesn't contain the apt-get utility and if you have problems with the Ruby installation, please refer to the official instructions at https://www.ruby-lang.org/en/installation. There are a lot of ways to install Ruby, so please choose your operating system from the list on this page and select your desired installation method.

Rake has been included in the Ruby core since Ruby 1.9, so you don't have to install it as a separate gem. However, if you still use Ruby 1.8 or an older version, you will have to install Rake as a gem. Use the following command to install the gem:

$ gem install rake

The Ruby release cycle is slower than that of Rake, and sometimes you need to install Rake as a gem to work around some special issues. So you can still install Rake as a gem, and in some cases this is a requirement even for Ruby Version 1.9 and higher.

To check that you have installed it correctly, open your terminal and type the following command:

$ rake --version

This should return the installed Rake version. The next sign that Rake is installed and working correctly is an error that you see after typing the rake command in a folder without a Rakefile:

$ mkdir ~/test-rake
$ cd ~/test-rake
$ rake
rake aborted!
No Rakefile found (looking for: rakefile, Rakefile, rakefile.rb, Rakefile.rb)
(See full trace by running task with --trace)

Downloading the example code

You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

Introducing rake tasks

From the previous error message, it's clear that you first need to have a Rakefile. As you can see, there are four variants of its name: rakefile, Rakefile, rakefile.rb, and Rakefile.rb. The most popularly used variant is Rakefile. Rails also uses it. However, you can choose any variant for your project. There is no convention that prohibits you from using any of the four suggested variants.

A Rakefile is required for any Rake-based project. Although its content usually consists of the Rake DSL, it is also a general Ruby file, so you can write any Ruby code in it. Perform the following steps to get started:

Let's create a Rakefile in the current folder, which will just say Hello Rake, using the following commands:

$ echo "puts 'Hello Rake'" > Rakefile
$ cat Rakefile
puts 'Hello Rake'

Here, the first line creates a Rakefile with the content puts 'Hello Rake', and the second line just shows us its content to make sure that we've done everything correctly.

Now, run rake as we tried it before, using the following command:

$ rake
Hello Rake
rake aborted!
Don't know how to build task 'default'
(See full trace by running task with --trace)

The message has changed: it now says Hello Rake before aborting with another error message. At this moment, we have made the first step in learning Rake.
Now, we have to define a default rake task that will be executed when you start Rake without any arguments. To do so, open your editor and change the created Rakefile to the following content:

task :default do
  puts 'Hello, Rake'
end

Now, run rake again:

$ rake
Hello, Rake

The output that says Hello, Rake demonstrates that the task works correctly.

The command-line arguments

The most commonly used rake command-line argument is -T. It shows us a list of the available rake tasks that you have already defined. We have defined the default rake task, and if we try to show the list of all rake tasks, it should be there. However, take a look at what happens in real life when using the following command:

$ rake -T

The list is empty. Why? The answer lies within Rake. Run the rake command with the -h option to get the whole list of arguments. Pay attention to the description of the -T option, as shown in the following command-line output:

-T, --tasks [PATTERN] Display the tasks (matching optional PATTERN) with descriptions, then exit.

You can get more information on Rake in the repository at the following GitHub link: https://github.com/jimweirich/rake.

The word descriptions is the cornerstone here. A description is an optional piece of metadata for a rake task; however, it's strongly recommended that you define one, because tasks without descriptions won't show up in the rake -T listing, and it would be inconvenient to read through your Rakefile every time you want to find some rake task. Just accept it as a rule: always leave a description for the defined rake tasks.

Now, add a description to your rake task with the desc method call, as shown in the following lines of code:

desc "Says 'Hello, Rake'"
task :default do
  puts 'Hello, Rake.'
end

As you see, it's rather easy. Run the rake -T command again and you will see an output as shown:

$ rake -T
rake default # Says 'Hello, Rake'

If you want to list all the tasks even if they don't have descriptions, you can pass the -A option along with the -T option to the rake command. The resulting command will look like this: rake -T -A.
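Descriptions become even more useful once a Rakefile grows beyond a single task. The following sketch (the task names are our own invention, not taken from the text above) shows how described tasks can also depend on each other:

desc "Removes generated files"
task :clean do
  puts 'Cleaning up...'
end

desc "Builds the project"
task :build => [:clean] do
  puts 'Building...'
end

# Running `rake` with no arguments now runs :clean and then :build.
task :default => [:build]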
Moodle for Online Communities

Packt
14 Apr 2014
9 min read
(For more resources related to this topic, see here.)

Now that you're familiar with the ways to use Moodle for different types of courses, it is time to take a look at how groups of people can come together as an online community and use Moodle to achieve their goals. For example, individuals who share the same interests and want to discuss and share information in order to transfer knowledge can do so very easily in a Moodle course that has been set up for that purpose.

There are many practical uses of Moodle for online communities. For example, members of an association or employees of a company can come together to achieve a goal and finish a task. In this case, Moodle provides a perfect place to interact, collaborate, and create a final project or achieve a task. Online communities can also be focused on learning and achievement, and Moodle can be a perfect vehicle for encouraging online communities to support each other to learn, take assessments, and display their certificates and badges. Moodle is also a good platform for a Massive Open Online Course (MOOC).

In this article, we'll create flexible Moodle courses that are ideal for online communities and that can be modified easily, harnessing the power of individuals in many different locations to teach and learn new knowledge and skills. We'll show you the benefits of Moodle and how to use it for the following online communities and purposes:

Knowledge-transfer-focused communities
Task-focused communities
Communities focused on learning and achievement

Moodle and online communities

It is often easy to think of Moodle as a learning management system that is used primarily by organizations for their students or employees. The community tends to be well defined, as it usually consists of students pursuing a common end, employees of a company, or members of an association or society. However, there are many informal groups and communities that come together because they share interests, the desire to gain knowledge and skills, the need to work together to accomplish tasks, and the wish to let people know that they've reached milestones and acquired marketable abilities.

For example, an online community may form around the topic of climate change. The group, which may use social media to communicate, would like to share information and get in touch with like-minded individuals. While it's true that they can connect via Facebook, Twitter, and other social media formats, they may lack a platform that gives a "one-stop shopping" solution. Moodle makes it easy to share documents, videos, maps, graphics, audio files, and presentations. It also allows the users to interact with each other via discussion forums.

Because we can use but not control social networks, it's important to be mindful of security issues. For that reason, Moodle administrators may wish to consider ways to back up or duplicate key posts or insights within the Moodle installation so that they can be preserved and stored.

In another example, individuals may come together to accomplish a specific task. For example, a group of volunteers may come together to organize a 5K run fundraiser for epilepsy awareness. For such a case, Moodle has an array of activities and resources that can make it possible to collaborate in the planning and publicity of the event, and even in the creation of post-event summary reports and press releases.
Finally, let's consider a person who may wish to ensure that potential employers know the kinds of skills they possess. They can display the certificates they've earned by completing online courses as well as their badges, digital certificates, mentions in high-achievers lists, and other gamified evidence of achievement. There are also the MOOCs, which bring together instructional materials, guided group discussions, and automated assessments. With its features and flexibility, Moodle is a perfect platform for MOOCs.

Building a knowledge-based online community

For our knowledge-based online community, let's consider a group of individuals who would like to know more about climate change and its impact. To build a knowledge-based online community, the following are the steps we need to perform:

Choose a mobile-friendly theme.
Customize the appearance of your site.
Select resources and activities.

Moodle makes it possible for people from all locations and affiliations to come together and share information in order to achieve a common objective. We will see how to do this in the following sections.

Choosing the best theme for your knowledge-based Moodle online communities

As many of the users in the community access Moodle using smartphones, tablets, laptops, and desktops, it is a good idea to select a theme that is responsive, which means that it will be automatically formatted to display properly on all devices. You can learn more about themes for Moodle, review them, find out about the developers, read comments, and then download them at https://moodle.org/plugins/browse.php?list=category&id=3.

There are many good responsive themes, such as the popular Buckle theme and the Clean theme, that also allow you to customize them. These are core and contributed themes, which is to say that they were created by developers and are either part of the Moodle installation or available for free download. If you have Moodle 2.5 or a later version installed, your installation of Moodle includes many responsive themes. If it does not, you will need to download and install a theme.

In order to select an installed theme, perform the following steps:

In the Site administration menu, click on the Appearance menu.
Click on Themes.
Click on Theme selector.
Click on the Change theme button.
Review all the themes.
Click on the Use theme button next to the theme you want to choose and then click on Continue.

Using the best settings for knowledge-based Moodle online communities

There are a number of things you can do to customize the appearance of your site so that it is very functional for knowledge-transfer-based Moodle online communities. The following is a brief checklist of items:

Select Topics format under the Course format section in the Course default settings window. By selecting topics, you'll be able to organize your content around subjects.
Use the General section, which is included as the first topic in all courses. It has the News forum link. You can use this for announcements highlighting resources shared by the community.
Include the name of the main contact along with his/her photograph and a brief biographical sketch in News forum. You'll create the sense that there is a real "go-to" person who is helping guide the endeavor.
Incorporate social media to encourage sharing and dissemination of new information. Brief updates are very effective, so you may consider including a Twitter feed by adding your Twitter account as one of your social media sites; a sample embed snippet follows this list.
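For instance, Twitter's embeddable timeline widget produces a snippet along the following lines, which can be pasted into an HTML block in Moodle (the account name and widget ID below are placeholders, not real values):

<a class="twitter-timeline" href="https://twitter.com/YourCommunity" data-widget-id="000000000000000000">Tweets by @YourCommunity</a>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>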
Even though your main topic of discussion may contain hundreds of subtopics that are of great interest, when you create your Moodle course, it's best to limit the number of subtopics to four or five. If you have too many choices, your users will be too scattered and will not have a chance to connect with each other. Think of your Moodle site as a meeting point. Do you want to have too many breakout sessions and rooms, or do you want to have a main networking site? Think of how you would like to encourage users to mingle and interact.

Selecting resources and activities for a knowledge-based Moodle online community

The following are the items to include if you want to configure Moodle such that it is ideal for individuals who have come together to gain knowledge on a specific topic or problem:

Resources: Be sure to include multiple types of files: documents, videos, audio files, and presentations.
Activities: Include Quiz and other such activities that allow individuals to test their knowledge.
Communication-focused activities: Set up a discussion forum to enable community members to post their thoughts and respond to each other.

The key to creating an effective Moodle course for knowledge-transfer-based communities is to give the individual members a chance to post critical and useful information, no matter what the format or the size, and to accommodate social networks.

Building a task-based online community

Let's consider a group of individuals who are getting together to plan a fundraising event. They need to plan activities, develop materials, and prepare a final report. Moodle can make it fairly easy for people to work together to plan events, collaborate on the development of materials, and share information for a final report.

Choosing the best theme for your task-based Moodle online communities

If you're using volunteers or people who are using Moodle just for the completion of tasks, you may have quite a few Moodle "newbies". Since these people will be unfamiliar with navigating Moodle and finding the places they need to go, you'll need a theme that is clear, attention-grabbing, and that includes easy-to-follow directions.

There are a few themes that are ideal for collaborations and multiple functional groups. We highly recommend the Formal white theme because it is highly customizable from the Theme settings page. You can easily customize the background, text colors, logos, font size, font weight, block size, and more, enabling you to create a clear, friendly, and brand-recognizable site. Formal white is a standard theme, kept up to date, and can be used on many versions of Moodle. You can learn more about the Formal white theme and download it by visiting http://hub.packtpub.com/wp-content/uploads/2014/04/Filetheme_formalwhite.png.

In order to customize the appearance of your entire site, perform the following steps:

In the Site administration menu, click on Appearance.
Click on Themes.
Click on Theme settings.
Review all the theme settings.
Enter the custom information in each box.
Building a Customizable Content Management System

Packt
07 Apr 2014
15 min read
(For more resources related to this topic, see here.)

Mission briefing

This article deals with the creation of a Content Management System. This system will consist of two parts:

A backend that helps to manage content, page parts, and page structure
A frontend that displays the settings and content we just entered

We will start by creating an admin area and then create page parts with types. Page parts, which are like widgets, are fragments of content that can be moved around the page. Page parts also have types; for example, we can display videos in our left column or display news. So, the same content can be represented in multiple ways. For example, news can be a separate page as well as a page part if it needs to be displayed on the front page. These parts need to be enabled for the frontend. If enabled, the frontend makes a call on the page part ID and renders it in the section where it is supposed to be displayed. We will write the frontend markup in Haml and Sass.

The following screenshot shows what we aim to do in this article:

Why is it awesome?

Everyone loves a CMS built from scratch that suits their needs really closely. We will try to build a system that is extremely simple and yet covers several different types of content. This system is also meant to be extensible, and we will lay the foundation stone for a highly configurable CMS. We will also spice up our proceedings in this article by using MongoDB instead of a relational database such as MySQL. At the end of this article, we will be able to build a skeleton for a very dynamic CMS.

Your Hotshot objectives

While building this application, we will have to go through the following tasks:

Creating a separate admin area
Creating a CMS with the ability of handling different types of content pages
Managing page parts
Creating a Haml- and Sass-based template
Generating the content and pages
Implementing asset caching

Mission checklist

We need to install the following software on the system before we start with our mission:

Ruby 1.9.3 / Ruby 2.0.0
Rails 4.0.0
MongoDB
Bootstrap 3.0
Haml
Sass
Devise
Git
A tool for mockups
jQuery
ImageMagick and RMagick
Memcached

Creating a separate admin area

We have used devise for all our projects, and we will be using the same strategy in this article. The only difference is that we will use it to log in to the admin account and manage the site's data. This needs to happen when we navigate to the URL /admin. We will do this by creating a namespace and routing our controller through the namespace. We will use our default application layout and assets for the admin area, whereas we will create a different set of layout and assets altogether for our frontend. Also, before starting with this first step, create an admin role using CanCan and rolify and associate it with the user model.

We are going to use memcached for caching, hence we need to add it to our development stack. We will do this by installing it through our favorite package manager, for example, apt on Ubuntu:

sudo apt-get install memcached
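For Rails to actually use memcached, the cache store needs to be pointed at it as well. The following is a minimal sketch of that wiring, assuming the dalli gem; this configuration step is our assumption, not part of the original text:

# Gemfile
gem 'dalli'

# config/environments/production.rb (or development.rb while testing)
config.cache_store = :mem_cache_store, 'localhost:11211'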
We will do this by installing it through our favorite package manager, for example, apt on Ubuntu: sudo apt-get install memcached Prepare for lift off In order to start working on this article, we will have to first add the mongoid gem to Gemfile: Gemfile gem 'mongoid'4', github: 'mongoid/mongoid' Bundle the application and run the mongoid generator: rails g mongoid:config You can edit config/mongoid.yml to suit your local system's settings as shown in the following code: config/mongoid.yml development: database: helioscms_development hosts: - localhost:27017 options: test: sessions: default: database: helioscms_test hosts: - localhost:27017 options: read: primary max_retries: 1 retry_interval: 0 We did this because ActiveRecord is the default Object Relationship Mapper (ORM). We will override it with the mongoid Object Document Mapper (ODM) in our application. Mongoid's configuration file is slightly different from the database.yml file for ActiveRecord. The session's rule in mongoid.yml opens a session from the Rails application to MongoDB. It will keep the session open as long as the server is up. It will also open the connection automatically if the server is down and it restarts after some time. Also, as a part of the installation, we need to add Haml to Gemfile and bundle it: Gemfile gem 'haml' gem "haml-rails" Engage thrusters Let's get cracking to create our admin area now: We will first generate our dashboard controller: rails g controller dashboard indexcreate app/controllers/dashboard_controller.rbroute get "dashboard/index"invoke erbcreate app/views/dashboardcreate app/views/dashboard/index.html.erbinvoke test_unitcreate test/controllers/dashboard_controller_test.rbinvoke helpercreate app/helpers/dashboard_helper.rbinvoke test_unitcreate test/helpers/dashboard_helper_test.rbinvoke assetsinvoke coffeecreate app/assets/javascripts/dashboard.js.coffeeinvoke scsscreate app/assets/stylesheets/dashboard.css.scss We will then create a namespace called admin in our routes.rb file: config/routes.rbnamespace :admin doget '', to: 'dashboard#index', as: '/'end We have also modified our dashboard route such that it is set as the root page in the admin namespace. Our dashboard controller will not work anymore now. In order for it to work, we will have to create a folder called admin inside our controllers and modify our DashboardController to Admin::DashboardController. This is to match the admin namespace we created in the routes.rb file: app/controllers/admin/dashboard_controller.rbclass Admin::DashboardController < ApplicationControllerbefore_filter :authenticate_user!def indexendend In order to make the login specific to the admin dashboard, we will copy our devise/sessions_controller.rb file to the controllers/admin path and edit it. We will add the admin namespace and allow only the admin role to log in: app/controllers/admin/sessions_controller.rbclass Admin::SessionsController < ::Devise::SessionsControllerdef createuser = User.find_by_email(params[:email])if user && user.authenticate(params[:password]) &&user.has_role? "admin"session[:user_id] = user.idredirect_to admin_url, notice: "Logged in!"elseflash.now.alert = "Email or password is invalid /Only Admin is allowed "endendend redirect_to admin_url, notice: "Logged in!" else flash.now.alert = "Email or password is invalid / Only Admin is allowed " end end end Objective complete – mini debriefing In the preceding task, after setting up devise and CanCan in our application, we went ahead and created a namespace for the admin. 
In Rails, a namespace is a concept used to separate a set of controllers into a completely different piece of functionality. In our case, we used it to separate out the login for the admin dashboard and to show a dashboard page as soon as the login happens. We did this by first creating an admin folder in our controllers directory. We then copied our devise sessions controller into the admin folder. For Rails to identify the namespace, we need to add it before the controller name as follows:

class Admin::SessionsController < ::Devise::SessionsController

In our routes file, we defined a namespace to read the controllers under the admin folder:

namespace :admin do
end

We then created a controller to handle dashboards and placed it within the admin namespace:

namespace :admin do
  get '', to: 'dashboard#index', as: '/'
end

We made the dashboard the root page after login. The route generated from the preceding definition is localhost:3000/admin.

We ensured that if someone tries to log in via the admin dashboard URL, our application checks whether the user has the admin role or not. In order to do so, we used has_role? from rolify along with user.authenticate from devise:

if user && user.authenticate(params[:password]) && user.has_role?("admin")

This makes devise function as part of the admin dashboard. If a user tries to log in, they will be presented with the devise login page, as shown in the following screenshot:

After logging in successfully, the user is redirected to the admin dashboard:

Creating a CMS with the ability to create different types of pages

A website has a variety of types of pages, and each page serves a different purpose. Some are limited to contact details, while some contain detailed information about the team. Each of these pages has a title and body. Also, there will be subpages within each navigation item; for example, the About page can have Team, Company, and Careers as subpages. Hence, we need to create a parent-child self-referential association. This way, pages will be associated with themselves and be treated as parent and child.

Engage thrusters

In the following steps, we will create page management for our application. This will be the backbone of our application.

Create a model, view, and controller for page. We will have a very simple page structure for now: a page with title, body, and page type:

app/models/page.rb
class Page
  include Mongoid::Document
  field :title, type: String
  field :body, type: String
  field :page_type, type: String

  validates :title, :presence => true
  validates :body, :presence => true

  PAGE_TYPE = %w(Home News Video Contact Team Careers)
end

We need a home page for our main site. In order to set a home page, we will assign it the page type Home. However, we need two things from the home page: it should be the root of our main site, and its layout should be different from the admin layout. We will start by creating a scope and an action called home_page in the pages controller:

app/models/page.rb
scope :home, -> { where(page_type: "Home") }

app/controllers/pages_controller.rb
def home_page
  @page = Page.home.first rescue nil
  render :layout => 'page_layout'
end

We will find a page with the Home type and render a custom layout called page_layout, which is different from our application layout. We will do the same for the show action as well, as we are only going to use show to display the pages in the frontend:

app/controllers/pages_controller.rb
def show
  render :layout => 'page_layout'
end

Now, in order to effectively manage the content, we need an editor.
This will make things easier, as the user will be able to style the content easily using it. We will use ckeditor in order to style the content in our application:

Gemfile
gem "ckeditor", :github => "galetahub/ckeditor"
gem 'carrierwave', :github => "jnicklas/carrierwave"
gem 'carrierwave-mongoid', :require => 'carrierwave/mongoid'
gem 'mongoid-grid_fs', github: 'ahoward/mongoid-grid_fs'

Add the ckeditor gem to the Gemfile and run bundle install, and then the installer:

helioscms$ rails generate ckeditor:install --orm=mongoid --backend=carrierwave
create config/initializers/ckeditor.rb
route mount Ckeditor::Engine => '/ckeditor'
create app/models/ckeditor/asset.rb
create app/models/ckeditor/picture.rb
create app/models/ckeditor/attachment_file.rb
create app/uploaders/ckeditor_attachment_file_uploader.rb

This will generate a carrierwave uploader for CKEditor, which is compatible with mongoid.

In order to finish the configuration, we need to add a line to application.js to load the ckeditor JavaScript:

app/assets/application.js
//= require ckeditor/init

We will display the editor for the body, as that's what we need to style:

views/pages/_form.html.haml
.field
  = f.label :body
  %br/
  = f.cktext_area :body, :rows => 20, :ckeditor => {:uiColor => "#AADC6E", :toolbar => "mini"}

We also need to mount the ckeditor in our routes.rb file:

config/routes.rb
mount Ckeditor::Engine => '/ckeditor'

The editor toolbar and text area will be generated as seen in the following screenshot:

In order to display the content on the index page in a formatted manner, we will call the html_safe method on our body:

views/pages/index.html.haml
%td= page.body.html_safe

The following screenshot shows the index page after the preceding step:

At this point, we can manage the content using pages. However, in order to add nesting, we will have to create a parent-child structure for our pages. To do so, we will first generate a model to define this relationship:

helioscms$ rails g model page_relationship

Inside the page_relationship model, we will define a two-way association with the page model:

app/models/page_relationship.rb
class PageRelationship
  include Mongoid::Document
  field :parent_id, type: Integer
  field :child_id, type: Integer

  belongs_to :parent, :class_name => "Page"
  belongs_to :child, :class_name => "Page"
end

In our page model, we will add the inverse association. This is to check for both parent and child and span the tree both ways:

has_many :child_page, :class_name => 'Page', :inverse_of => :parent_page
belongs_to :parent_page, :class_name => 'Page', :inverse_of => :child_page

We can now add a page to the form as a parent.
Also, this method will create a tree structure and a parent-child relationship between the two pages:

app/views/pages/_form.html.haml
.field
  = f.label "Parent"
  %br/
  = f.collection_select(:parent_page_id, Page.all, :id, :title, :class => "form-control")
.field
  = f.label :body
  %br/
  = f.cktext_area :body, :rows => 20, :ckeditor => {:uiColor => "#AADC6E", :toolbar => "mini"}
%br/
.actions
  = f.submit :class => "btn btn-default"
  = link_to 'Cancel', pages_path, :class => "btn btn-danger"

We can see the drop-down list with the names of the existing pages, as shown in the following screenshot:

Finally, in order to display the parent page on the index page, we will call it using the association we created:

app/views/pages/index.html.haml
- @pages.each do |page|
  %tr
    %td= page.title
    %td= page.body.html_safe
    %td= page.parent_page.title if page.parent_page
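To verify the association from both directions, a quick check in the Rails console can help; a sketch with hypothetical page titles:

page = Page.find_by(title: "About Us")
page.child_page.map(&:title)    # => ["Team", "Careers"]

child = Page.find_by(title: "Careers")
child.parent_page.title         # => "About Us"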
The inverse_of option allows us to trace back in case we need to span our tree according to the parent or child:

has_many :child_page, :class_name => 'Page', :inverse_of => :parent_page
belongs_to :parent_page, :class_name => 'Page', :inverse_of => :child_page

We created a page relationship to handle this relationship and to map the parent ID and child ID. Again, we mapped it to the Page class:

belongs_to :parent, :class_name => "Page"
belongs_to :child, :class_name => "Page"

This allowed us to directly find parent and child pages using associations. In order to manage the content of the page, we added CKEditor, which provides a feature-rich toolbar to format the content of the page. We used the CKEditor gem and generated the configuration, including carrierwave. For carrierwave to work with mongoid, we need to add these dependencies to the Gemfile:

gem 'carrierwave', :github => "jnicklas/carrierwave"
gem 'carrierwave-mongoid', :require => 'carrierwave/mongoid'
gem 'mongoid-grid_fs', github: 'ahoward/mongoid-grid_fs'

MongoDB comes with its own filesystem called GridFS. When we extend carrierwave, we have the option of using either the regular filesystem or GridFS, but the gem is required either way. carrierwave and CKEditor are used to insert and manage pictures in the content wherever required. We then added a route to mount CKEditor as an engine in our routes file. Finally, we called it in a form:

= f.cktext_area :body, :rows => 20, :ckeditor => {:uiColor => "#AADC6E", :toolbar => "mini"}

CKEditor generates and saves the content as HTML. Rails escapes HTML in views by default, which is why we explicitly render the stored markup with html_safe. The admin page to manage the content of pages looks like the following screenshot:
Organizing Jade Projects

(For more resources related to this topic, see here.) Now that you know how to use all the things that Jade can do, here's when you should use them. Jade is pretty flexible when it comes to organizing projects; the language itself doesn't impose much structure on your project. However, there are some conventions you should follow, as they will typically make your code easier to manage. This article will cover those conventions and best practices.

General best practices

Most of the good practices that are used when writing HTML carry over to Jade. Some of these include the following:

- Using a consistent naming convention for IDs, class names, and (in this case) mixin names and variables
- Adding alt text to images
- Choosing appropriate tags to describe content and page structure

The list goes on, but these are all things you should already be familiar with. So now we're going to discuss some practices that are more Jade-specific.

Keeping logic out of templates

When working with a templating language like Jade that allows you to use advanced logical operations, separation of concerns (SoC) becomes an important practice. In this context, SoC is the separation of business and presentational logic, allowing each part to be developed and updated independently. An easy point to draw the border between business and presentation is where data is passed to the template. Business logic is kept in the main code of your application and passes the data to be presented (as well-formed JSON objects) to your template engine. From there, the presentation layer takes the data and performs whatever logic is needed to make that data into a readable web page.

An additional advantage of this separation is that the JSON data can be passed to a template over stdio (to the server-side Jade compiler), or it can be passed over TCP/IP (to be evaluated client side). Since the template only formats the given data, it doesn't matter where it is rendered, and it can be used on both server and client.

For documenting the format of the JSON data, try JSON Schema (http://json-schema.org/). In addition to describing the interface that your presentation layer uses, it can be used in tests to validate the structure of the JSON that your business layer produces.

Inlining

When writing HTML, it is commonly advised that you don't use inline styles or scripts, because they are harder to maintain. This advice still applies to the way you write your Jade. For everything but the smallest one-page projects, tests, and mockups, you should separate your styles and scripts into different files. These files may then be compiled separately and linked to your HTML with style or link tags, or you could include them directly into the Jade. Either way, the important part is that you keep them separated from your markup in your source code.

However, in your compiled HTML you don't need to worry about keeping inlined styles out. The advice about avoiding inline styles applies only to your source code and is purely for making your codebase easier to manage. In fact, according to Best Practices for Speeding Up Your Web Site (http://developer.yahoo.com/performance/rules.html), it is much better to combine your files to minimize HTTP requests, so inlining at compile time is a really good idea. It's also worth noting that, even though Jade can help you inline scripts and styles during compilation, there are better ways to perform these compile-time optimizations.
For example, build tools like AssetGraph (https://github.com/assetgraph/assetgraph) can do all the inlining, minifying, and combining you need, without you needing to put code to do so in your templates.

Minification

We can pass arguments through filters to compilers for things like minifying. This feature is useful for small projects for which you might not want to set up a full build tool. Also, minification does reduce the size of your assets, making it a very easy way to speed up your site. However, your markup shouldn't really concern itself with details like how the site is minified, so filter arguments aren't the best solution for minifying. Just like inlining, it is much better to do this with a tool like AssetGraph. That way your markup is free of "build instructions".

Removing style-induced redundancy

A lot of redundant markup is added just to make styling easier: we have wrappers for every conceivable part of the page, empty divs and spans, and plenty of other forms of useless markup. The best way to deal with this stuff is to improve your CSS so it isn't reliant on wrappers and the like. Failing that, we can still use mixins to take that redundancy out of the main part of our code and hide it away until we have better CSS to deal with it. For example, consider the following repetitive navigation bar:

input#home_nav(type='radio', name='nav', value='home', checked)
label(for='home_nav')
  a(href='#home') home
input#blog_nav(type='radio', name='nav', value='blog')
label(for='blog_nav')
  a(href='#blog') blog
input#portfolio_nav(type='radio', name='nav', value='portfolio')
label(for='portfolio_nav')
  a(href='#portfolio') portfolio
//- ...and so on

Instead of using the preceding code, it can be refactored into a reusable mixin as shown in the following code snippet:

mixin navbar(pages)
  - checked = true
  for page in pages
    input(
      type='radio',
      name='nav',
      value=page,
      id="#{page}_nav",
      checked=checked)
    label(for="#{page}_nav")
      a(href="##{page}") #{page}
    - checked = false

The preceding mixin can then be called later in your markup using the following code:

+navbar(['home', 'blog', 'portfolio'])

Semantic divisions

Sometimes, even though there is no redundancy present, dividing templates into separate mixins and blocks can be a good idea. Not only does it provide encapsulation (which makes debugging easier), but the division represents a logical separation of the different parts of a page. A common example of this would be dividing a page between the header, footer, sidebar, and main content. These could be combined into one monolithic file, but putting each in a separate block represents their separation, can make the project easier to navigate, and allows each to be extended individually.

Server-side versus client-side rendering

Since Jade can be used on both the client side and server side, we can choose to do the rendering of the templates off the server. However, there are costs and benefits associated with each approach, so the decision must be made depending on the project.

Client-side rendering

Using the Single Page Application (SPA) design, we can do everything but the compilation of the basic HTML structure on the client side. This allows for a static page that loads content from a dynamic backend and passes that content to Jade templates compiled for client-side usage.
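The server side of such a design can stay tiny, because it only ships data, never markup. The following is a rough sketch of such a JSON backend in PHP; the file name, the hard-coded posts, and the response shape are all placeholder assumptions standing in for whatever CMS or database actually backs the site:

<?php
// posts.php - a hypothetical endpoint that a client-side compiled Jade
// template could consume. Only terse JSON crosses the wire; the browser
// turns it into HTML by running the template function with this data.
header('Content-Type: application/json');

// Stand-in data; a real endpoint would query a CMS such as WordPress.
$posts = array(
    array('title' => 'First post',  'body' => 'Hello world'),
    array('title' => 'Second post', 'body' => 'More content'),
);

echo json_encode(array('posts' => $posts));

On the client, the compiled template function is then simply called with the decoded response to produce the page's markup.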
For example, we could have a simple web app that, once loaded, fires off an AJAX request to a server running WordPress with a simple JSON API, and displays the posts it gets by passing the JSON to templates. The benefits of this design are that the page itself is static (and therefore easily cacheable), navigation is much faster with the SPA design (especially if content is preloaded), and significantly less data is transferred thanks to the terse JSON format the content arrives in (rather than it being already wrapped in HTML). Also, we get a very clean separation of content and presentation by actually forcing content to be moved into a CMS and out of the codebase. Finally, we avoid the risk of coupling the rendering too tightly with the CMS by forcing all content to be passed over HTTP in JSON; in fact, they are so separated that they don't even need to be on the same server.

But there are some issues too: the reliance on JavaScript for loading content means that users who don't have JS enabled will not be able to load content normally, and search engines will not be able to see your content without implementing _escaped_fragment_ URLs. Thus, some fallback is needed. Whether it is a full site that is able to function without JS or just simple HTML snapshots rendered using a headless browser, it is a source of additional work.

Server-side rendering

We can, of course, render everything on the server side and just send regular HTML to the browser. This is the most backward-compatible option, since the site will behave just as any static HTML site would, but we don't get any of the benefits of client-side rendering either. We could still use some client-side Jade for enhancements, but the idea is the same: the majority gets rendered on the server side, and full HTML pages need to be sent when the user navigates to a new page.

Build systems

Although the Jade compiler is fully capable of compiling projects on its own, in practice it is often better to use a build system, because build systems can make interfacing with the compiler easier. In addition, they often help automate other tasks such as minification, compiling other languages, and even deployment. Some examples of these build systems are Roots (http://roots.cx/), Grunt (http://gruntjs.com/), and even GNU Make (http://www.gnu.org/software/make/). For example, Roots can recompile Jade automatically each time you save it and even refresh an in-browser preview of that page. Continuous recompilation helps you notice errors sooner, and Roots helps you avoid the hassle of manually running a command to recompile.

Summary

In this article, we took a look at some of the best practices to follow when organizing Jade projects. We also looked at the use of third-party tools to automate tasks.

Resources for Article:

Further resources on this subject:
- So, what is Node.js? [Article]
- RSS Web Widget [Article]
- Cross-browser-distributed testing [Article]
Make phone calls and send SMS messages from your website using Twilio

(For more resources related to this topic, see here.)

Sending a message from a website

Sending messages from a website has many uses; sending notifications to users is one good example. In this example, we're going to present you with a form where you can enter a phone number and message and send it to your user. This can be quickly adapted for other uses.

Getting ready

The complete source code for this recipe can be found in the Chapter6/Recipe1/ folder.

How to do it...

Ok, let's learn how to send an SMS message from a website. The user will be prompted to fill out a form that will send the SMS message to the phone number entered in the form.

1. Download the Twilio Helper Library from https://github.com/twilio/twilio-php/zipball/master and unzip it.
2. Upload the Services/ folder to your website.
3. Upload config.php to your website and make sure the following variables are set:

<?php
$accountsid = ''; // YOUR TWILIO ACCOUNT SID
$authtoken  = ''; // YOUR TWILIO AUTH TOKEN
$fromNumber = ''; // PHONE NUMBER CALLS WILL COME FROM
?>

4. Upload a file called sms.php and add the following code to it:

<!DOCTYPE html>
<html>
<head>
    <title>Recipe 1 – Chapter 6</title>
</head>
<body>
<?php
include('Services/Twilio.php');
include("config.php");
include("functions.php");
$client = new Services_Twilio($accountsid, $authtoken);
if( isset($_POST['number']) && isset($_POST['message']) ){
    $sid = send_sms($_POST['number'], $_POST['message']);
    echo "Message sent to {$_POST['number']}";
}
?>
<form method="post">
    <input type="text" name="number" placeholder="Phone Number...." /><br />
    <input type="text" name="message" placeholder="Message...." /><br />
    <button type="submit">Send Message</button>
</form>
</body>
</html>

5. Create a file called functions.php and add the following code to it:

<?php
function send_sms($number, $message){
    global $client, $fromNumber;
    $sms = $client->account->sms_messages->create(
        $fromNumber,
        $number,
        $message
    );
    return $sms->sid;
}

How it works...

In steps 1 and 2, we downloaded and installed the Twilio Helper Library for PHP. This library is the heart of your Twilio-powered apps. In step 3, we uploaded config.php, which contains the authentication information needed to talk to Twilio's API. In steps 4 and 5, we created sms.php and functions.php, which send a message to the phone number we enter. The send_sms function is handy for initiating SMS conversations; we'll be building on this function heavily in the rest of the article.

Allowing users to make calls from their call logs

We're going to give your user a place to view their call log. We will display a list of incoming calls and give them the option to call back on these numbers.

Getting ready

The complete source code for this recipe can be found in the Chapter9/Recipe4 folder.

How to do it...

Now, let's build a section for our users to log in to, using the following steps:

1. Update a file called index.php with the following content:

<?php
session_start();
include 'Services/Twilio.php';
require("system/jolt.php");
require("system/pdo.class.php");
require("system/functions.php");

$_GET['route'] = isset($_GET['route']) ? '/'.$_GET['route'] : '/';
$app = new Jolt('site', false);
$app->option('source', 'config.ini');
#$pdo = Db::singleton();
$mysiteURL = $app->option('site.url');

$app->condition('signed_in', function () use ($app) {
    $app->redirect($app->getBaseUri().'/login', !$app->store('user'));
});

$app->get('/login', function() use ($app){
    $app->render('login', array(), 'layout');
});

$app->post('/login', function() use ($app){
    $sql = "SELECT * FROM `user` WHERE `email`='{$_POST['user']}' AND `password`='{$_POST['pass']}'";
    $pdo = Db::singleton();
    $res = $pdo->query($sql);
    $user = $res->fetch();
    if( isset($user['ID']) ){
        $_SESSION['uid'] = $user['ID'];
        $app->store('user', $user['ID']);
        $app->redirect($app->getBaseUri().'/home');
    }else{
        $app->redirect($app->getBaseUri().'/login');
    }
});

$app->get('/signup', function() use ($app){
    $app->render('register', array(), 'layout');
});

$app->post('/signup', function() use ($app){
    $client = new Services_Twilio($app->store('twilio.accountsid'), $app->store('twilio.authtoken'));
    extract($_POST);
    $timestamp = strtotime($timestamp);
    $subaccount = $client->accounts->create(array(
        "FriendlyName" => $email
    ));
    $sid = $subaccount->sid;
    $token = $subaccount->auth_token;
    $sql = "INSERT INTO `user` SET `name`='{$name}',`email`='{$email}',`password`='{$password}',`phone_number`='{$phone_number}',`sid`='{$sid}',`token`='{$token}',`status`=1";
    $pdo = Db::singleton();
    $pdo->exec($sql);
    $uid = $pdo->lastInsertId();
    $app->store('user', $uid); // log user in
    $app->redirect($app->getBaseUri().'/phone-number');
});

$app->get('/phone-number', function() use ($app){
    $app->condition('signed_in');
    $user = get_user($app->store('user'));
    $client = new Services_Twilio($user['sid'], $user['token']);
    $app->render('phone-number');
});

$app->post("search", function() use ($app){
    $app->condition('signed_in');
    $user = get_user($app->store('user'));
    $client = new Services_Twilio($user['sid'], $user['token']);
    $SearchParams = array();
    $SearchParams['InPostalCode'] = !empty($_POST['postal_code']) ? trim($_POST['postal_code']) : '';
    $SearchParams['NearNumber']   = !empty($_POST['near_number']) ? trim($_POST['near_number']) : '';
    $SearchParams['Contains']     = !empty($_POST['contains']) ? trim($_POST['contains']) : '';
    try {
        $numbers = $client->account->available_phone_numbers->getList('US', 'Local', $SearchParams);
        if(empty($numbers)) {
            $err = urlencode("We didn't find any phone numbers by that search");
            $app->redirect($app->getBaseUri().'/phone-number?msg='.$err);
            exit(0);
        }
    } catch (Exception $e) {
        $err = urlencode("Error processing search: {$e->getMessage()}");
        $app->redirect($app->getBaseUri().'/phone-number?msg='.$err);
        exit(0);
    }
    $app->render('search', array('numbers' => $numbers));
});

$app->post("buy", function() use ($app, $mysiteURL){
    $app->condition('signed_in');
    $user = get_user($app->store('user'));
    $client = new Services_Twilio($user['sid'], $user['token']);
    $PhoneNumber = $_POST['PhoneNumber'];
    try {
        $number = $client->account->incoming_phone_numbers->create(array(
            'PhoneNumber' => $PhoneNumber
        ));
        $phsid = $number->sid;
        if ( !empty($phsid) ){
            $sql = "INSERT INTO numbers (user_id,number,sid) VALUES('{$user['ID']}','{$PhoneNumber}','{$phsid}');";
            $pdo = Db::singleton();
            $pdo->exec($sql);
            $fid = $pdo->lastInsertId();
            $ret = editNumber($phsid, array(
                "FriendlyName" => $PhoneNumber,
                "VoiceUrl" => $mysiteURL."/voice?id=".$fid,
                "VoiceMethod" => "POST",
            ), $user['sid'], $user['token']);
        }
    } catch (Exception $e) {
        $err = urlencode("Error purchasing number: {$e->getMessage()}");
        $app->redirect($app->getBaseUri().'/phone-number?msg='.$err);
        exit(0);
    }
    $msg = urlencode("Thank you for purchasing $PhoneNumber");
    header("Location: index.php?msg=$msg");
    $app->redirect($app->getBaseUri().'/home?msg='.$msg);
    exit(0);
});

$app->route('/voice', function() use ($app){
});

$app->get('/transcribe', function() use ($app){
});

$app->get('/logout', function() use ($app){
    $app->store('user', 0);
    $app->redirect($app->getBaseUri().'/login');
});

$app->get('/home', function() use ($app){
    $app->condition('signed_in');
    $uid = $app->store('user');
    $user = get_user($uid);
    $client = new Services_Twilio($user['sid'], $user['token']);
    $app->render('dashboard', array(
        'user' => $user,
        'client' => $client
    ));
});

$app->get('/delete', function() use ($app){
    $app->condition('signed_in');
});

$app->get('/', function() use ($app){
    $app->render('home');
});

$app->listen();

2. Upload a file called dashboard.php with the following content to your views folder:

<h2>My Number</h2>
<?php
$pdo = Db::singleton();
$sql = "SELECT * FROM `numbers` WHERE `user_id`='{$user['ID']}'";
$res = $pdo->query($sql);
while( $row = $res->fetch() ){
    echo preg_replace("/[^0-9]/", "", $row['number']);
}
try {
?>
<h2>My Call History</h2>
<p>Here is a list of recent calls. You can click any number to call it back; we will call your registered phone number and then the caller.</p>
<table width=100% class="table table-hover table-striped">
    <thead>
        <tr>
            <th>From</th>
            <th>To</th>
            <th>Start Date</th>
            <th>End Date</th>
            <th>Duration</th>
        </tr>
    </thead>
    <tbody>
        <?php
        foreach ($client->account->calls as $call) {
            # echo "<p>Call from $call->from to $call->to at $call->start_time of length $call->duration</p>";
            if( !stristr($call->direction, 'inbound') ) continue;
            $type = find_in_list($call->from);
        ?>
        <tr>
            <td><a href="<?=$uri?>/call?number=<?=urlencode($call->from)?>"><?=$call->from?></a></td>
            <td><?=$call->to?></td>
            <td><?=$call->start_time?></td>
            <td><?=$call->end_time?></td>
            <td><?=$call->duration?></td>
        </tr>
        <?php } ?>
    </tbody>
</table>
<?php
} catch (Exception $e) {
    echo 'Error: ' . $e->getMessage();
}
?>
<hr />
<a href="<?=$uri?>/delete" onclick="return confirm('Are you sure you wish to close your account?');">Delete My Account</a>

How it works...

In step 1, we updated the index.php file. In step 2, we uploaded dashboard.php to the views folder. This file checks whether we're logged in using the $app->condition('signed_in') method, which we discussed earlier, and if we are, it displays all the incoming calls our account has received. We can then push a button to call one of those numbers and whitelist or blacklist them.

Summary

Thus, in this article, we have learned how to send messages and make phone calls from your website using Twilio.

Resources for Article:

Further resources on this subject:
- Make phone calls, send SMS from your website using Twilio [article]
- Trunks in FreePBX 2.5 [article]
- Trunks using 3CX: Part 1 [article]
Getting Started with CMIS

(For more resources related to this topic, see here.)

What is CMIS?

The goal of CMIS is to provide a standard method for accessing content from different content repositories. Using CMIS service calls, it is possible to navigate through and create content in a repository. CMIS also includes a query language for searching both the metadata and the full-text content stored in a repository.

The CMIS standard defines the protocols and formats for the requests and responses of API service calls made to a repository. CMIS acts as a standard interface and protocol for accessing content repositories, somewhat similar to how ANSI SQL acts as a common-denominator language for interacting with different databases.

The use of the CMIS API for accessing repositories brings with it a number of benefits. Perhaps chief among these is the fact that access to CMIS is language-neutral: any language that supports HTTP services can be used to access a CMIS-enabled repository. Client software can be written to use a single API and be deployed to run against multiple CMIS-compliant repositories.

Alfresco and CMIS

The original draft for CMIS 0.5 was written by EMC, IBM, and Microsoft. Shortly after that draft, Alfresco and other vendors joined the CMIS standards group. Alfresco was an early CMIS adopter and offered an implementation of CMIS version 0.5 in 2008. In 2009, Alfresco began hosting an online preview of the CMIS standard. The server, accessible via the http://cmis.alfresco.com URL, still exists and implements the latest CMIS standard. As of this writing, that URL hosts a preview of CMIS 1.1 features.

In mid-2010, just after the CMIS 1.0 standard was approved, Alfresco released CMIS in both the Alfresco Community and Enterprise editions. In 2012, with Alfresco version 4.0, Alfresco moved from a home-grown CMIS runtime implementation to one that uses the Apache Chemistry OpenCMIS Server Framework. Since that release, developers have been able to customize Alfresco using the OpenCMIS Java API.

Overview of the CMIS standard

Next, we discuss the details of the CMIS specification, particularly the domain model, the different services that it provides, and the supported protocol bindings.

Domain model (Object model)

Every content repository vendor has its own definition of a content or object model. Alfresco, for example, has rich content-modeling capabilities, such as types and aspects that can inherit from other types and aspects, and properties that can be assigned attributes like datatype, multi-valued, and required. But there are wide differences in the ways in which different vendors have implemented content modeling. In the Documentum ECM system, for example, the generic content type is called dm_document, while in Alfresco it is called cm:content. Another example is the concept of an aspect as used in Alfresco; many repositories do not support that idea.

The CMIS domain model is an attempt by the CMIS standardization group to define a framework generic enough to describe content models and map to the concepts used by many different repository vendors. The CMIS domain model defines a repository as a container of, and an entry point to, all content items, from now on called objects. All objects are classified by an object type, which describes a common set of properties (like Type ID, Parent, and Display Name). There are five base types of objects: Document, Folder, Relationship, Policy, and Item (available from CMIS 1.1), and these all inherit from Object Type.
In addition to the five base object types, there are also a number of property types that can be used when defining new properties for an object type. These are shown in the figure: String, Boolean, Decimal, Integer, and DateTime. Besides these property types, there are also the URI, Id, and HTML property types, not shown in the figure.

Taking a closer look at each one of the base types, we can see that:

- Document almost always corresponds to a file, although it need not have any content (when you upload a file via, for example, the AtomPub binding, the metadata is created with the first request and the content for the file is posted with the second request).
- Folder is a container for file-able objects such as folders and documents. Immediately after filing a folder or document into a folder, an implicit parent-child relationship is automatically created. The fileable property of the object type definition specifies whether an object is file-able or not.
- Relationship defines a relationship between a target and a source object. Objects can have multiple relationships with other objects. The support for relationship objects is optional.
- Policy is a way of defining administrative policies to manage objects. An object to which a policy may be applied is called a controllable object (its controllablePolicy property has to be set to true). For example, a CMIS policy could be used to define a retention policy. A policy is opaque and has no meaning to the repository; it must be implemented and enforced in a repository-specific way. For example, rules might be used in Alfresco to enforce a policy. The support for policy objects is optional.
- Item (CMIS 1.1) represents a generic type of a CMIS information asset. For example, this could be a user or group object. Item objects are not versionable and do not have content streams like documents, but they do have properties like all other CMIS objects. The support for item objects is optional.

Additional object types can be defined in a repository as custom subtypes of the base types. The Legal Case type shown in the figure above is an example. CMIS services are provided for the discovery of object types that are defined in a repository. However, object type management services, such as the creation, modification, and deletion of an object type, are not covered by the CMIS standard.

An object has one primary base object type, such as Document or Folder, which cannot be changed. An object can also have secondary object types applied to it (CMIS 1.1). A secondary type is a named class that may add extra properties to an object in addition to the properties defined by the object's primary base object type (this is similar to the concept of aspects in Alfresco).

Every CMIS object has an opaque and immutable Object Identity (ID), which is assigned by the repository when the object is created. In the case of Alfresco, a Node Reference is created, which becomes the Object ID. The ID uniquely identifies an object within a repository regardless of the type of the object. All CMIS objects have a set of named, but not explicitly ordered, properties. Within an object, each property is uniquely identified by its Property ID. In addition, a document object can have a Content Stream, which is then used to hold the actual byte content from a file. A document can also have one or more Renditions, like a thumbnail, a different-sized image, or an alternate representation of the content stream.
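Because CMIS is language-neutral, any HTTP-capable client can read these objects directly. The following sketch lists the children of the root folder over the CMIS 1.1 browser binding, which speaks JSON; it is written in PHP purely for illustration, and the Alfresco-style endpoint URL, the credentials, and the exact response layout are assumptions you would verify against your own repository:

<?php
// Hypothetical CMIS 1.1 browser binding endpoint (Alfresco-style layout).
$root = 'http://localhost:8080/alfresco/api/-default-/public/cmis/versions/1.1/browser/root';

// cmisselector=children corresponds to the getChildren operation of the
// Navigation services described below; succinct=true asks for a compact
// property map instead of the full property definitions.
$ch = curl_init($root . '?cmisselector=children&succinct=true');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_USERPWD, 'admin:admin'); // assumed basic auth
$response = json_decode(curl_exec($ch), true);
curl_close($ch);

// Each entry is a cmis:document, cmis:folder, or other object type;
// property IDs such as cmis:name come from the cmis namespace.
foreach ($response['objects'] as $entry) {
    $props = $entry['object']['succinctProperties'];
    echo $props['cmis:objectTypeId'] . ': ' . $props['cmis:name'] . "\n";
}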
Document or folder objects can have one Access Control List (ACL), which controls access to the document or folder. An ACL is made up of a list of Access Control Entries (ACEs). An ACE, in turn, represents one or more permissions being granted to a principal, such as a user, group, role, or something similar.

All objects and properties are defined in the cmis namespace. From now on, we will refer to the different objects and properties via their fully qualified names, for example, cmis:document or cmis:name.

Services

The following CMIS services can access and manage CMIS objects in the repository:

- Repository Services: These are used to discover information about the repository, including repository IDs (more than one repository could be managed by the endpoint). Since many features are optional, this provides a way to find out which are supported. CMIS 1.1-compliant repositories also support the creation of new types dynamically. Methods: getRepositories, getRepositoryInfo, getTypeChildren, getTypeDescendants, getTypeDefinition, createType (CMIS 1.1), updateType (CMIS 1.1), deleteType (CMIS 1.1).
- Navigation Services: These are used to navigate the folder hierarchy. Methods: getChildren, getDescendants, getFolderTree, getFolderParent, getObjectParents, getCheckedOutDocs.
- Object Services: These provide ID-based CRUD (Create, Read, Update, Delete) operations. Methods: createDocument, createDocumentFromSource, createFolder, createRelationship, createPolicy, createItem (CMIS 1.1), getAllowableActions, getObject, getProperties, getObjectByPath, getContentStream, getRenditions, updateProperties, bulkUpdateProperties (CMIS 1.1), moveObject, deleteObject, deleteTree, setContentStream, appendContentStream (CMIS 1.1), deleteContentStream.
- Multi-filing Services: These (optional) make it possible to file an object into several folders (multi-filing) or outside the folder hierarchy (un-filing). This service is not used to create or delete objects. Methods: addObjectToFolder, removeObjectFromFolder.
- Discovery Services: These are used to search for queryable objects within the repository (objects with the queryable property set to true). Methods: query, getContentChanges.
- Versioning Services: These are used to manage the versioning of document objects; other objects are not versionable. Whether or not a document can be versioned is controlled by the versionable property in the object type. Methods: checkOut, cancelCheckOut, checkIn, getObjectOfLatestVersion, getPropertiesOfLatestVersion, getAllVersions.
- Relationship Services: These (optional) are used to retrieve the relationships in which an object is participating. Methods: getObjectRelationships.
- Policy Services: These (optional) are used to apply a policy object to, or remove one from, an object that has the controllablePolicy property set to true. Methods: applyPolicy, removePolicy, getAppliedPolicies.
- ACL Services: These are used to discover and manage the Access Control List (ACL) for an object, if the object has one. Methods: applyACL, getACL.

Summary

In this article, we introduced the CMIS standard and how it came about. Then we covered the CMIS domain model with its five base object types: Document, Folder, Relationship, Policy, and Item (CMIS 1.1). We also learned that the CMIS standard defines a number of services, such as navigation and discovery, which make it possible to manipulate objects in a content management system repository.
Resources for Article:

Further resources on this subject:
- Content Delivery in Alfresco 3 [Article]
- Getting Started with the Alfresco Records Management Module [Article]
- Managing Content in Alfresco [Article]
Services

(For more resources related to this topic, see here.) A service is just a specific instance of a given class. For example, whenever you access Doctrine with $this->get('doctrine'); in a controller, it implies that you are accessing a service. This service is an instance of the Doctrine EntityManager class, but you never have to create this instance yourself. The code needed to create this entity manager is actually not that simple, since it requires a connection to the database, some other configuration, and so on. Without this service already being defined, you would have to create this instance in your own code, and maybe repeat this initialization in each controller, thus making your application messier and harder to maintain.

Some of the default services present in Symfony2 are as follows:

- The annotation reader
- Assetic, the asset management library
- The event dispatcher
- The form widgets and form factory
- The Symfony2 Kernel and HttpKernel
- Monolog, the logging library
- The router
- Twig, the templating engine

It is very easy to create new services with the Symfony2 framework. If we have a controller that has started to become quite messy with long code, a good way to refactor it and make it simpler is to move some of the code to services. We have described all these services starting with "the" and a singular noun. This is because, most of the time, services are singleton objects where a single instance is needed.

A geolocation service

In this example, we imagine an application for listing events, which we will call "meetups". The controller first retrieves the current user's IP address, uses it as basic information to determine the user's location, and only displays meetups within roughly 50 km of that location. Currently, the code all sits in the controller. As it is, the controller is not actually that long yet; it has a single method and the whole class is around 50 lines of code. However, it will grow as soon as we add more code, for example, to list only the types of meetups that are the user's favorites, or the ones they attended the most. When you want to mix that information and have complex calculations as to which meetups might be the most relevant to this specific user, the code could easily grow out of control!

There are many ways to refactor this simple example. The geocoding logic could just be put in a separate method for now, and that would be a good step, but let's plan for the future and move some of the logic to the services where it belongs.
Our current code is as follows:

use Geocoder\HttpAdapter\CurlHttpAdapter;
use Geocoder\Geocoder;
use Geocoder\Provider\FreeGeoIpProvider;

public function indexAction()
{

Initialize our geocoding tools (based on the excellent geocoding library at http://geocoder-php.org/) using the following code:

    $adapter = new CurlHttpAdapter();
    $geocoder = new Geocoder();
    $geocoder->registerProviders(array(
        new FreeGeoIpProvider($adapter),
    ));

Retrieve our user's IP address using the following code:

    $ip = $this->get('request')->getClientIp();
    // Or use a default one
    if ($ip == '127.0.0.1') {
        $ip = '114.247.144.250';
    }

Get the coordinates and adapt them using the following code so that they are roughly a square of 50 km on each side:

    $result = $geocoder->geocode($ip);
    $lat = $result->getLatitude();
    $long = $result->getLongitude();
    $lat_max = $lat + 0.25; // (Roughly 25km)
    $lat_min = $lat - 0.25;
    $long_max = $long + 0.3; // (Roughly 25km)
    $long_min = $long - 0.3;

Create a query based on all this information using the following code:

    $em = $this->getDoctrine()->getManager();
    $qb = $em->createQueryBuilder();
    $qb->select('e')
        ->from('KhepinBookBundle:Meetup', 'e')
        ->where('e.latitude < :lat_max')
        ->andWhere('e.latitude > :lat_min')
        ->andWhere('e.longitude < :long_max')
        ->andWhere('e.longitude > :long_min')
        ->setParameters([
            'lat_max' => $lat_max,
            'lat_min' => $lat_min,
            'long_max' => $long_max,
            'long_min' => $long_min
        ]);

Retrieve the results and pass them to the template using the following code:

    $meetups = $qb->getQuery()->execute();
    return ['ip' => $ip, 'result' => $result, 'meetups' => $meetups];
}

The first thing we want to do is get rid of the geocoding initialization. It would be great to have all of this taken care of automatically, so that we would just access the geocoder with $this->get('geocoder');.

You can define your services directly in the config.yml file of Symfony under the services key, as follows:

services:
    geocoder:
        class: Geocoder\Geocoder

That is it! We defined a service that can now be accessed in any of our controllers. Our code now looks as follows:

// Create the geocoding class
$adapter = new Geocoder\HttpAdapter\CurlHttpAdapter();
$geocoder = $this->get('geocoder');
$geocoder->registerProviders(array(
    new Geocoder\Provider\FreeGeoIpProvider($adapter),
));

Well, I can see you rolling your eyes, thinking that it is not really helping so far. That's because initializing the geocoder is a bit more complex than just calling new Geocoder\Geocoder(). It needs another class to be instantiated and then passed as a parameter to a method. The good news is that we can do all of this in our service definition, by modifying it as follows:

services:
    # Defines the adapter class
    geocoder_adapter:
        class: Geocoder\HttpAdapter\CurlHttpAdapter
        public: false
    # Defines the provider class
    geocoder_provider:
        class: Geocoder\Provider\FreeGeoIpProvider
        public: false
        # The provider class is passed the adapter as an argument
        arguments: [@geocoder_adapter]
    geocoder:
        class: Geocoder\Geocoder
        # We call a method on the geocoder after initialization to set up the
        # right parameters
        calls:
            - [registerProviders, [[@geocoder_provider]]]

It's a bit longer than before, but it is code that we never have to write anywhere else again.
A few things to notice are as follows:

- We actually defined three services, as our geocoder requires two other classes to be instantiated.
- We used @ followed by a service name to pass a reference to one service as an argument to another service.
- We can do more than just defining new Class($argument); we can also call a method on the class after it is instantiated. It is even possible to set properties directly when they are declared as public.
- We marked the first two services as private. This means that they won't be accessible in our controllers. They can, however, be used by the Dependency Injection Container (DIC) to be injected into other services.

Our code now looks as follows:

// Retrieve current user's IP address
$ip = $this->get('request')->getClientIp();
// Or use a default one
if ($ip == '127.0.0.1') {
    $ip = '114.247.144.250';
}
// Find the user's coordinates
$result = $this->get('geocoder')->geocode($ip);
$lat = $result->getLatitude();
// ... Remaining code is unchanged

Here, our controllers extend the BaseController class, which has access to the DIC since it implements the ContainerAware interface. All calls to $this->get('service_name') are proxied to the container, which constructs (if needed) and returns the service.

Let's go one step further and define our own class that will directly get the user's IP address and return an array of maximum and minimum longitudes and latitudes. We will create the following class:

namespace Khepin\BookBundle\Geo;

use Geocoder\Geocoder;
use Symfony\Component\HttpFoundation\Request;

class UserLocator
{
    protected $geocoder;
    protected $user_ip;

    public function __construct(Geocoder $geocoder, Request $request)
    {
        $this->geocoder = $geocoder;
        $this->user_ip = $request->getClientIp();
        if ($this->user_ip == '127.0.0.1') {
            $this->user_ip = '114.247.144.250';
        }
    }

    public function getUserGeoBoundaries($precision = 0.3)
    {
        // Find the user's coordinates
        $result = $this->geocoder->geocode($this->user_ip);
        $lat = $result->getLatitude();
        $long = $result->getLongitude();
        $lat_max = $lat + 0.25; // (Roughly 25km)
        $lat_min = $lat - 0.25;
        $long_max = $long + 0.3; // (Roughly 25km)
        $long_min = $long - 0.3;
        return ['lat_max' => $lat_max, 'lat_min' => $lat_min,
            'long_max' => $long_max, 'long_min' => $long_min];
    }
}

It takes our geocoder and request variables as arguments, and then does all the heavy lifting we were doing in the controller at the beginning of the article. Just as before, we define this class as a service, as follows, so that it becomes very easy to access from within the controllers:

# config.yml
services:
    #...
    user_locator:
        class: Khepin\BookBundle\Geo\UserLocator
        scope: request
        arguments: [@geocoder, @request]

Notice that we have defined the scope here. The DIC has two scopes by default, container and prototype, to which the framework also adds a third one named request. They differ as follows:

- container: All calls to $this->get('service_name') return the same instance of the service.
- prototype: Each call to $this->get('service_name') returns a new instance of the service.
- request: Each call to $this->get('service_name') returns the same instance of the service within a request. Symfony can have subrequests (such as including a controller in Twig).
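To see what these scopes mean in practice, here is a quick throwaway sketch, assuming a controller with container access as in the examples above:

// Under the default 'container' scope, the DIC memoizes the instance,
// so both lookups below return the exact same object.
$first  = $this->get('geocoder');
$second = $this->get('geocoder');
var_dump($first === $second); // bool(true)

// A 'prototype'-scoped service would yield two distinct instances here,
// and a 'request'-scoped one is shared only within the current
// (sub)request.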
Now, the advantage is that the service knows everything it needs by itself, but it also becomes unusable in contexts where there are no requests. If we wanted to create a command that gets all users' last-connected IP addresses and sends them a newsletter of the meetups around them on the weekend, this design would prevent us from using the Khepin\BookBundle\Geo\UserLocator class to do so.

As we saw, by default, the services are in the container scope, which means they will only be instantiated once and then reused, therefore implementing the singleton pattern. It is also important to note that the DIC does not create all the services immediately, but only on demand. If your code in a different controller never tries to access the user_locator service, then that service and all the other ones it depends on (geocoder, geocoder_provider, and geocoder_adapter) will never be created. Also, remember that the configuration from config.yml is cached in the production environment, so there is little to no overhead in defining these services.

Our controller looks a lot simpler now and is as follows:

$boundaries = $this->get('user_locator')->getUserGeoBoundaries();
// Create our database query
$em = $this->getDoctrine()->getManager();
$qb = $em->createQueryBuilder();
$qb->select('e')
    ->from('KhepinBookBundle:Meetup', 'e')
    ->where('e.latitude < :lat_max')
    ->andWhere('e.latitude > :lat_min')
    ->andWhere('e.longitude < :long_max')
    ->andWhere('e.longitude > :long_min')
    ->setParameters($boundaries);
// Retrieve interesting meetups
$meetups = $qb->getQuery()->execute();
return ['meetups' => $meetups];

The longest part here is the Doctrine query, which we could easily move to the repository class to further simplify the controller.

As we just saw, defining and creating services in Symfony2 is fairly easy and inexpensive. We created our own UserLocator class, made it a service, and saw that it can depend on our other services, such as the @geocoder service. We are not finished with services or the DIC, as they are the underlying part of almost everything related to extending Symfony2.

Summary

In this article, we saw the importance of services and also had a look at the geolocation service. We created a class, made it a service, and saw how it can depend on our other services.

Resources for Article:

Further resources on this subject:
- Developing an Application in Symfony 1.3 (Part 1) [Article]
- Developing an Application in Symfony 1.3 (Part 2) [Article]
- User Interaction and Email Automation in Symfony 1.3: Part1 [Article]